Now that the Chromium version of Edge has shipped and Microsoft’s new browser has built up a user base, it is time to finally say goodbye to Internet Explorer. IE has been around for more than 25 years, but this week, Microsoft announced plans to retire it.
Internet Explorer 11 will be the final version of the browser, but it will continue to be supported for a while longer, with Microsoft setting a June 15th 2022 date for end-of-life.
“Over the last year, you may have noticed our movement away from Internet Explorer (“IE”) support, such as an announcement of the end of IE support by Microsoft 365 online services”, writes Microsoft’s Sean Lyndersay in a blog post. “Today, we are at the next stage of that journey: we are announcing that the future of Internet Explorer on Windows 10 is in Microsoft Edge.”
Microsoft has already thought through legacy support concerns. In Microsoft Edge, you will be able to use “IE Mode”, which will enable access to legacy IE-based websites and applications. Since Microsoft Edge is capable of doing this, Internet Explorer can officially retire in June next year. The retirement applies to most versions of Windows 10, though not all – IE 11 will continue to be supported on Windows 10 LTSC, and the server application will live on too.
KitGuru Says: What will we all use to download Chrome now?
Outriders may have had a questionable launch, but the game still went on to be a significant success. This week, Square Enix revealed that the game garnered 3.5 million players during its launch month, and that it could become the publisher’s next major franchise.
This week, Square Enix finally revealed how Outriders performed at launch. During April, the game had 3.5 million unique players and on average, those players ended up putting in 30 hours of playtime each, leading to “extremely high engagement for co-operative play”.
Unfortunately, Square Enix didn’t provide a breakdown of players between platforms, but we do know that the game launched day-one on Xbox Game Pass, which likely helped a lot with reaching 3.5 million unique players at launch.
Of course, Outriders did not have the best launch, with server issues and inventory bugs. Still, Square Enix wants People Can Fly’s co-op IP to become its “next major franchise”, indicating that expansions will be on the way. People Can Fly did not plan for Outriders to be a ‘live service’ game; instead, it looks like the developers will be taking inspiration from Diablo, with a highly replayable game that brings players back over time through big expansions.
KitGuru Says: Did many of you play Outriders last month? Would you like to see some expansions down the line?
Solid-state drives have a number of advantages over hard drives, including performance, size, and reliability. Yet, for quite a while, HDDs offered a better balance of capacity, performance, and cost, which is why they outsold SSDs in unit terms. Things have certainly changed for client PCs: 60% of new computers sold in Q1 2021 used SSDs instead of HDDs. Given that SSDs already outsold hard drives by 28 percent in 2020 (by units, not GBs), it’s not surprising that they outsold HDDs almost 3:2 in unit sales in the first quarter.
Unit Sales: SSDs Win 3:2
Three makers of hard drives shipped 64.17 million HDDs in Q1 2021, according to Trendfocus. Meanwhile, fewer than a dozen SSD suppliers, including those featured in our list of best SSDs, shipped 99.438 million solid-state drives in the same quarter, the same company claims (via StorageNewsletter). That works out to roughly 1.55 SSDs for every HDD shipped — hence the almost-3:2 ratio.
Keeping in mind that many modern notebooks cannot accommodate a hard drive (and many desktops ship with an SSD by default), it is not particularly surprising that SSD sales are high. Furthermore, users nowadays want their PCs to be very responsive, which more or less requires an SSD. All in all, the majority of new PCs use SSDs as boot drives, some are also equipped with hard drives, and far fewer use HDDs as boot drives.
Exabyte Sales: HDDs Win 4.5:1
But while many modern PCs do not host a lot of data, NAS, on-prem servers, and cloud datacenters do — and this is where high-capacity NAS and nearline HDDs come into play. These hard drives can store up to 18TB of data, and the average capacity of a 3.5-inch enterprise/nearline HDD is about 12TB these days. Thus, HDD sales in terms of exabytes vastly exceed those of SSDs.
Meanwhile, it should be noted that the vast majority of datacenters use SSDs for caching and HDDs for bulk storage, so building a datacenter purely on solid-state storage (3D NAND) or on hard drives alone is rarely practical.
Anyhow, as far as exabyte shipments are concerned, HDDs win. The total capacity of hard drives shipped in the first quarter of 2021 was 288.28 EB, whereas the SSDs sold in Q1 could store ‘only’ 66 EB of data — a ratio of roughly 4.5:1 in HDDs’ favor.
Since SSD adoption is increasing in both client and server systems, dollar sales of solid-state drives are strong too. Research and Markets values the SSD market at $34.86 billion in 2020 and forecasts that it will total $80.34 billion by 2026. To put those numbers into context, Gartner estimated HDD sales at $20.7 billion in 2020 and expected them to grow to $22.6 billion in 2022.
Samsung Leads the Pack
When it comes to SSD market frontrunners, Samsung is the indisputable champion in terms of both unit and exabyte shipments. Samsung sold its HDD division to Seagate in 2011, a rather surprising move at the time. Yet the rationale was always there for the No. 1 supplier of NAND flash memory, and today the move looks obvious.
Right now, Samsung leads other SSD makers in both unit (a 25.3% market share) and exabyte (a 34.3% chunk of the market) shipments. Such results are to be expected, as the company sells loads of drives to PC OEMs, and high-capacity drives to server makers and cloud giants.
Still, not everything is rosy for the SSD market in general and Samsung in particular, due to a shortage of SSD controllers. The company had to shut down its chip manufacturing facility in Austin, Texas, which produces its SSD and NAND controllers, earlier this year, forcing it to consider outsourcing such components. This shortage could potentially affect SSD sales by Samsung and other companies.
“Shortages of controllers and other NAND sub-components are causing supply chain uncertainty, putting upwards pressure on ASPs,” said Walt Coon, VP of NAND and Memory Research at Yole Développement. “The recent shutdown of Samsung’s manufacturing facility in Austin, Texas, USA, which manufactures NAND controllers for its SSDs, further amplifies this situation and will likely accelerate the NAND pricing recovery, particularly in the PC SSD and mobile markets, where impacts from the controller shortages are most pronounced.”
Storage Bosses Still Lead the Game
Western Digital follows Samsung in SSD unit (18.2%) and capacity (15.8%) share, to a large degree because it sells loads of drives for applications previously served by HDDs — including (and here we are speculating) the mission-critical hard drives once supplied by Western Digital and HGST (and by Hitachi and IBM before that).
The number three SSD supplier in unit terms is Kioxia (formerly Toshiba Memory), with a 13.3% unit market share and a 9.4% exabyte market share, according to Trendfocus. Kioxia has inherited many shipment contracts (particularly in the business/mission-critical space) from Toshiba. Its unit shipments are well below those of its partner Western Digital, partly because the company is more focused on the spot 3D NAND and retail SSD markets.
Aimed primarily at high-capacity server and workstation applications, Intel is the number three SSD supplier in terms of capacity, with an 11.5% market share, but in unit sales it controls only 5% of the market. This is not particularly unexpected, as Intel has always positioned its storage business as part of its datacenter platform division, which is why the company has focused on high-capacity NAND ICs (unlike its former partner Micron) for advanced server-grade SSDs.
Speaking of Micron, its SSD unit market share is 8.4%, whereas its exabyte share is 7.9%, which indicates that the company balances between the client and enterprise segments. SK Hynix also ships quite a lot of consumer drives (an 11.8% unit share) as well as some higher-end enterprise-grade SSDs (its exabyte share is 9.1%).
Seagate is perhaps the one exception among the historical storage bosses: it controls just 0.7% of the exabyte SSD market and only 0.3% of unit shipments. The company serves its loyal clientele and has yet to gain significant share in the SSD market.
Branded Client SSDs
One interesting thing about the SSD market is that while there are loads of consumer-oriented brands selling flash-powered drives, they do not control a significant part of the market in terms of either units or exabytes, according to Trendfocus.
Companies like Kingston, Lite-On, and a number of others make headlines, yet in terms of volume they control about 18% of the market — a significant, but not a definitive, chunk. In terms of exabytes, their share is about 11.3%, which is quite high considering that most of their drives are aimed at client PCs.
Summary
Client storage is going solid state in terms of unit shipments due to performance, dimensions, and power reasons. Datacenters continue to adopt SSDs for caching as well as business and mission-critical applications.
Being the largest supplier of 3D NAND (V-NAND in Samsung’s nomenclature), Samsung continues to be the leading supplier of SSDs in terms of both volume and capacity shipments. Meanwhile, a shortage of SSD controllers may have an impact on the company’s SSD sales.
Based on current trends, SSDs are set to continue taking unit market share from HDDs. Yet hard drives are not set to give up bulk storage.
Seagate has finally listed its dual-actuator hard disk drive — the Mach.2 Exos 2X14 — on its website and disclosed the official specs. With a 524MB/s sustained transfer rate, the Mach.2 is the fastest HDD ever; its sequential read and write performance is twice that of a normal drive. In fact, it can even challenge some inexpensive SATA SSDs.
The HDD remains available only to select customers and will not hit the open market, at least for the time being. Meanwhile, Seagate’s spec disclosure shows us what kind of performance to expect from multi-actuator high-end hard drives.
Seagate Describes First Mach.2 HDD: the Exos 2X14
Seagate’s Exos 2X14 14TB hard drive is essentially two 7TB HDDs in one standard hermetically sealed helium-filled 3.5-inch chassis. The drive features a 7200 RPM spindle speed, is equipped with a 256MB multisegmented cache, and uses a single-port SAS 12Gb/s interface. The host system considers an Exos 2X14 as two logical drives that are independently addressable.
Seagate’s Exos 2X14 boasts a 524MB/s sustained transfer rate (outer diameter), 304/384 random read/write IOPS, and a 4.16 ms average latency. The Exos 2X14 is even faster than Seagate’s 15K RPM Exos 15E900, so it is indeed the fastest HDD ever.
Furthermore, its sequential read/write speeds can challenge inexpensive SATA/SAS SSDs (at a far lower cost-per-TB). Obviously, any SSD will still be faster than any HDD in random read/write operations. However, hard drives and solid-state drives are used for different storage tiers in data centers, so the comparison is not exactly viable.
But the performance increase comes at the cost of higher power consumption. An Exos 2X14 drive consumes 7.2W at idle and up to 13.5W under heavy load, which is higher than modern high-capacity helium-filled drives. Furthermore, that’s also higher than the 12W usually recommended for 3.5-inch HDDs.
Seagate says the power consumption is not an issue as some air-filled HDDs are power hungry too, so there are plenty of backplanes and servers that can deliver enough power and ensure proper cooling. Furthermore, the drive delivers quite a good balance of performance-per-Watt and IOPS-per-Watt. Also, data centers can use Seagate’s PowerBalance capability to reduce power consumption, but at the cost of 50% lower sequential read/write speeds and 5%~10% lower random reads/writes.
“3.5-inch air-filled HDDs have operated in a power envelope that is very similar to Exos 2X14 for many years now,” a spokesman for Seagate explained. “It is also worth noting that Exos 2X14 does support PowerBalance which is a setting that allows the customer to reduce power below 12W, [but] this does come with a performance reduction of 50% for sequential reads and 5%-10% for random reads.”
Since the Exos 2X14 is aimed primarily at cloud data centers, all of its peculiarities are set to be mitigated in one way or another, so slightly higher power consumption is hardly a problem for the intended customers. Nonetheless, the drive will not be available on the open market, at least for now.
Seagate has been publicly experimenting with dual-actuator HDDs (dubbed Mach.2) with Microsoft since late 2017; it has since expanded availability to other partners, and earlier this year it said it would further increase shipments of such drives.
Broader availability of dual-actuator HDDs requires Seagate to better communicate the technology’s capabilities to customers, which is why it recently published the Exos 2X14’s specs.
“We began shipping [Mach.2 HDDs] in volume in 2019 and we are now expanding our customer base,” said Jeff Fochtman, Senior Vice President, Business and Marketing, Seagate Technology. “Well over a dozen major customers have active dual-actuator programs underway. As we increase capacities to meet customer needs, Mach.2 ensures the performance they require by essentially keeping the drive performance inside the storage to your expectations for hyperscale deployments.”
Keeping HDDs Competitive
Historically, HDD makers focused on capacity and performance: every new generation brought higher capacity and slightly increased performance. When the nearline HDD category emerged a little more than a decade ago, hard drive makers added power consumption to their focus as tens of thousands of HDDs per data center consumed loads of power, and it became an important factor for companies like AWS, Google, and Facebook.
As hard drive capacity grew further, it turned out that while normal performance increments brought by each new generation were still there, random read/write IOPS-per-TB performance dropped beyond comfortable levels for data centers and their quality-of-service (QoS) requirements. That’s when data centers started mitigating HDD random IOPS-per-TB performance with various caching mechanisms and even limiting HDD capacities.
In a bid to keep hard drives competitive, their manufacturers have to continuously increase capacity, increase or maintain sequential read/write performance, increase or maintain random read/write IOPS-per-TB, and keep power consumption in check. A relatively straightforward way to improve the performance of an HDD is to use more than one actuator with read/write heads, as this can instantly double both the sequential and random read/write speeds of a drive.
Not for Everyone. Yet
Seagate is the first to commercialize a dual-actuator HDD, but rivals Toshiba and Western Digital are also working on similar hard drives.
“Although Mach.2 is ramped and being used now, it’s also really still in a technology-staging mode,” said Fochtman. “When we reach capacity points above 30TB, it will become a standard feature in many large data center environments.”
For now, most of Seagate’s data center and server customers can get a high-capacity single-actuator HDD with the right balance between capacity and IOPS-per-TB performance, so the manufacturer doesn’t need to sell its Exos 2X14 through the channel. Meanwhile, when capacities of Seagate’s HAMR-based HDDs increase to over 50TB sometime in 2026, there will be customers that will need dual-actuator drives.
Eufy has put out a statement apologizing for a glitch that occurred two days ago, allowing some Eufy home security camera users to see video from other users’ homes. The statement explains that it happened during a software update, but the company claims it only affected a small number of users: just 712 people across the US, Canada, Mexico, Cuba, New Zealand, Australia, and Argentina. Eufy says that the issue was fixed with an emergency update less than two hours after it was identified.
In a statement to The Verge, Eufy confirmed that “users were able to access video feeds from other users’ cameras.” However, in its official statement posted to Twitter (which can be viewed in full below), Eufy doesn’t explain what the bug actually was. It does say it’s working to keep this from happening again in the future, by upgrading its network and the authentication mechanisms between the cameras, servers, and app.
The initial reports of the bug came from Reddit, with users reporting that they were able to see and control the live feeds from all the Eufy cameras in someone else’s home, as well as see any previously recorded footage and the other user’s email address.
Eufy suggests that users in the affected countries (listed above) unplug and then replug their security home base, then log out of the Eufy security app before logging back in.
The full statement is below:
During a software update performed on our server in the United States on May 17th at 4:50 AM EDT, a bug occurred affecting a limited number of users in the United States, Canada, Mexico, Cuba, New Zealand, Australia, and Argentina. Users in Europe and other regions remain unaffected. Our engineering team identified the issue at 5:30 AM EDT and immediately rolled back the server version and deployed an emergency update. The incident was fixed at 6:30 AM EDT. We have confirmed that a total of 712 users were affected in this case.
Although the issue has been resolved, we recommend users in the affected countries (US, Canada, Mexico, Argentina, New Zealand, Australia, and Cuba) to:
– Please unplug and then reconnect the eufy security home base.
– Log out of the eufy security app and log in again.
All of our user video data is stored locally on the users’ devices. As a service provider, eufy provides account management, device management, and remote P2P access for users through AWS servers. All stored data and account information is encrypted.
In order to avoid this happening in the future, we are taking the following steps:
– We are upgrading our network architecture and strengthening our two-way authentication mechanism between the servers, devices, and the eufy Security app.
– We are upgrading our servers to improve their processing capacity in order to eliminate potential risks.
– We are also in the process of obtaining the TUV and BSI Privacy Information Management System (PIMS) certifications which will further improve our product security.
We understand that we need to build trust again with our customers. Thank you for trusting us with your security and our team is available 24/7 at support@eufylife.com and Mon-Fri 9AM-5PM (PT) through our online chat on eufylife.com.
There are new features, but it’s the biggest design update in years
Google is announcing the latest beta for Android 12 today at Google I/O. It has an entirely new design based on a system called “Material You,” featuring big, bubbly buttons, shifting colors, and smoother animations. It is “the biggest design change in Android’s history,” according to Sameer Samat, VP of product management, Android and Google Play.
That might be a bit of hyperbole, especially considering how many design iterations Android has seen over the past decade, but it’s justified. Android 12 exudes confidence in its design, unafraid to make everything much larger and a little more playful. Every big design change can be polarizing, and I expect Android users who prefer information density in their UI may find it a little off-putting. But in just a few days, it has already grown on me.
There are a few other functional features being tossed in beyond what’s already been announced for the developer betas, but they’re fairly minor. The new design is what matters. It looks new, but Android by and large works the same — though, of course, Google can’t help itself and again shuffled around a few system-level features.
I’ve spent a couple of hours demoing all of the new features and the subsequent few days previewing some of the new designs in the beta that’s being released today. Here’s what to expect in Android 12 when it is officially released later this year.
Material You design and better widgets
Android 12 is one implementation of a new design system Google is debuting called Material You. Cue the jokes about UX versus UI versus… You, I suppose. Unlike the first version of Material Design, this new system is meant to mainly be a set of principles for creating interfaces — one that goes well beyond the original paper metaphor. Google says it will be applied across all of its products, from the web to apps to hardware to Android. Though as before, it’s likely going to take a long time for that to happen.
In any case, the point is that the new elements in Android 12 are Google’s specific implementations of those principles on Pixel phones. Which is to say: other phones might implement those principles differently or maybe even not at all. I can tell you what Google’s version of Android 12 is going to look and act like, but only Samsung can tell you what Samsung’s version will do (and, of course, when it will arrive).
The feature Google will be crowing the most about is that when you change your wallpaper, you’ll have the option to automatically change your system colors as well. Android 12 will pull out both dominant and complementary colors from your wallpaper automatically and apply those colors to buttons and sliders and the like. It’s neat, but I’m not personally a fan of changing button colors that much.
The lock screen is also set for some changes: the clock is huge and centered if you have no notifications and slightly smaller but still more prominent if you do. It also picks up an accent color based on the theming system. I especially love the giant clock on the always-on display.
Android’s widget system has developed a well-deserved bad reputation. Many apps don’t bother with them, and many more haven’t updated their widget’s look since they first made one in days of yore. The result is a huge swath of ugly, broken, and inconsistent widgets for the home screen.
Google is hoping to fix all of that with its new widget system. As with everything else in Android 12, the widgets Google has designed for its own apps are big and bubbly, with a playful design that’s not in keeping with how most people might think of Android. One clever feature is that when you move a widget around on your wallpaper, it subtly changes its background color to be closer to the part of the image it’s set upon.
I don’t have especially high hopes that Android developers will rush to adopt this new widget system, so I hope Google has a plan to encourage the most-used apps to get on it. Apple came very late to the home screen widget game on the iPhone, but it’s already surpassed most of the crufty widget abandonware you’ll find from most Android apps.
Bigger buttons and more animation
As you’ve no doubt gathered already from the photos, the most noticeable change in Android 12 is that all of the design elements are big, bubbly, and much more liberal in their use of animation. It certainly makes the entire system more legible and perhaps more accessible, but it also means you’re just going to get fewer buttons and menu items visible on a single screen.
That tradeoff is worth it, I think. Simple things like brightness and volume sliders are just easier to adjust now, for example. As for the animations, so far, I like them. But they definitely involve more visual flourish than before. When you unlock or plug in your phone, waves of shadow and light play across the screen. Apps expand out clearly from their icon’s position, and drawers and other elements slide in and out with fade effects.
More animations mean more resources and potentially more jitter, but Samat says the Android team has optimized how Android displays core elements. The window and package managers use 22 percent less CPU time, the system server uses 15 percent less of the big (read: more powerful and battery-intensive) core on the processor, and interrupts have been reduced, too.
Android has another reputation: solving for jitter and jank by just throwing ever-more-powerful hardware at the problem: faster chips, higher refresh rate screens, and the like. Hopefully none of that will be necessary to keep these animations smooth on lower-end devices. On my Pixel 5, they’ve been quite good.
One last bit: there’s a new “overscroll” animation — the thing the screen does when you scroll to the end of a page. Now, everything on the screen will sort of stretch a bit when you can’t scroll any further. Maybe an Apple patent expired.
Shuffling system spaces around
It wouldn’t be a new version of Android without Google mucking about with notifications, Google Assistant, or what happens when you press the power button. With Android 12, we’ve hit the trifecta. Luckily, the changes Google has made mostly represent walking back some of the changes it made in Android 11.
The combined Quick Settings / notifications shade remains mostly the same — though the huge buttons mean you’re going to see fewer of them in either collapsed or expanded views. The main difference in notifications is mostly aesthetic. Like everything else, they’re big and bubbly. There’s a big, easy-to-hit down arrow for expanding them, and groups of notifications are put together into one bigger bubble. There’s even a nice little visual flourish when you begin to swipe a notification away: it forms its own roundrect, indicating that it has become a discrete object.
The thing that will please a lot of Android users is that after just a year, Google has bailed on its idea of creating a whole new power button menu with Google Wallet and smart home controls. Instead, both of those things are just buttons inside the quick settings shade, similar to Samsung’s solution.
Holding down the power button now just brings up Google Assistant. Samat says it was a necessary change because Google Assistant is going to begin to offer more contextually aware features based on whatever screen you’re looking at. I say the diagonal swipe-in from the corner to launch Assistant was terrible, and I wouldn’t be surprised if it seriously reduced how much people used it.
I also have to point out that it’s a case of Google adopting gestures already popular on other phones: the iPhone’s power button brings up Siri, and a Galaxy’s brings up Bixby.
New privacy features for camera, mic, and location
Google is doing a few things with privacy in Android 12, mostly focused on three key sensors it sees as trigger points for people: location, camera, and microphone.
The camera and mic will now flip on a little green dot in the upper-right of the screen, indicating that they’re on. There are also now two optional toggles in Quick Settings for turning them off entirely at a system level.
When an app tries to use one of them, Android will pop up a box asking if you want to turn it back on. If you choose not to, the app thinks it has access to the camera or mic, but all Android gives it is a black nothingness and silence. It’s a mood.
For location, Google is adding another option for what kind of access you can grant an app. Alongside the options to limit access to one time or just when the app is open, there are settings for granting either “approximate” or “precise” locations. Approximate will let the app know your location with less precision, so it theoretically can’t guess your exact address. Google suggests it could be useful for things like weather apps. (Note that any permissions you’ve already granted will be grandfathered in, so you’ll need to dig into settings to switch them to approximate.)
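Under the hood, “approximate” and “precise” map onto Android’s long-standing coarse and fine location permissions (ACCESS_COARSE_LOCATION and ACCESS_FINE_LOCATION). As a rough sketch — the package name here is hypothetical — you can see the distinction from a development machine by granting one and revoking the other with adb:

$ adb shell pm grant com.example.weather android.permission.ACCESS_COARSE_LOCATION

$ adb shell pm revoke com.example.weather android.permission.ACCESS_FINE_LOCATION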
Google is also creating a new “Privacy Dashboard” specifically focused on location, mic, and camera. It presents a pie chart of how many times each has been accessed in the last 24 hours along with a timeline of each time it was used. You can tap in and get to the settings for any app from there.
The Android Private Compute Core
Another new privacy feature is the unfortunately named “Android Private Compute Core.” Unfortunately, because when most people think of a “core,” they assume there’s an actual physical chip involved. Instead, think of the APCC as a sandboxed part of Android 12 for doing AI stuff.
Essentially, a bunch of Android machine learning functions are going to be run inside the APCC. It is walled-off from the rest of the OS, and the functions inside it are specifically not allowed any kind of network access. It literally cannot send or receive data from the cloud, Google says. The only way to communicate with the functions inside it is via specific APIs, which Google emphasizes are “open source” as some kind of talisman of security.
Talisman or no, it’s a good idea. The operations that run inside the APCC include Android’s feature for ambiently identifying playing music. That needs to have the microphone listening on a very regular basis, so it’s the sort of thing you’d want to keep local. The APCC also handles the “smart chips” for auto-reply buttons based on your own language usage.
An easier way to think of it is if there’s an AI function you might think is creepy, Google is running it inside the APCC so its powers are limited. And it’s also a sure sign that Google intends to introduce more AI features into Android in the future.
No news on app tracking — yet
Location, camera, mic, and machine learning are all privacy vectors to lock down, but they’re not the kind of privacy that’s on everybody’s mind right now. The more urgent concern in the last few months is app tracking for ad purposes. Apple has just locked all of that down with its App Tracking Transparency feature. Google itself is still planning on blocking third-party cookies in Chrome and replacing them with anonymizing technology.
What about Android? There have been rumors that Google is considering some kind of system similar to Apple’s, but there won’t be any announcements about it at Google I/O. However, Samat confirmed to me that his team is working on something:
There’s obviously a lot changing in the ecosystem. One thing about Google is it is a platform company. It’s also a company that is deep in the advertising space. So we’re thinking very deeply about how we should evolve the advertising system. You see what we’re doing on Chrome. From our standpoint on Android, we don’t have anything to announce at the moment, but we are taking a position that privacy and advertising don’t need to be directly opposed to each other. That, we don’t believe, is healthy for the overall ecosystem as a company. So we’re thinking about that working with our developer partners and we’ll be sharing more later this year.
A few other features
Google has already announced a bunch of features in earlier developer betas, most of which are under-the-hood kind of features. There are “improved accessibility features for people with impaired vision, scrolling screenshots, conversation widgets that bring your favorite people to the home screen” and the already-announced improved support for third-party app stores. On top of those, there are a few neat little additions to mention today.
First, Android 12 will (finally) have a built-in remote that will work with Android TV systems like the Chromecast with Google TV or Sony TVs. Google is also promising to work with partners to get car unlocking working via NFC and (if a phone supports it) UWB. It will be available on “select Pixel and Samsung Galaxy phones” later this year, and BMW is on board to support it in future vehicles.
For people with Chromebooks, Google is continuing the trend of making them work better with Android phones. Later this year, Chrome OS devices will be able to immediately access new photos in an Android phone’s photo library over Wi-Fi Direct instead of waiting for them to sync up to the Google Photos cloud. Google still doesn’t have anything as good as AirDrop for quickly sending files across multiple kinds of devices, but it’s a good step.
Android already has fast pairing for quickly setting up Bluetooth devices, but it’s not built into the Bluetooth spec. Instead, Google has to work with individual manufacturers to enable it. A new one is coming on board today: Beats, which is owned by Apple. (Huh!) Ford and BMW cars will also support one-tap pairing.
Android Updates
As always, no story about a new version of Android would be complete without pointing out that the only phones guaranteed to get it in a timely manner are Google’s own Pixel phones. However, Google has made some strides in the past few years. Samat says that there has been a year-over-year improvement in the “speed of updates” to the tune of 30 percent.
A few years ago, Google changed the architecture of Android with something called Project Treble. It made the system a little more modular, which, in turn, made it easier for Android manufacturers to apply their custom versions of Android without mucking about in the core of it. That should mean faster updates.
Some companies have improved slightly, including the most important one, Samsung. However, it’s still slow going, especially for older devices. As JR Raphael has pointed out, most companies are not getting updates out in what should be a perfectly reasonable timeframe.
Beyond Treble, there may be some behind-the-scenes pressure happening. More and more companies are committing to providing updates for longer. Google also is working directly with Qualcomm to speed up updates. Since Qualcomm is, for all intents and purposes, the monopoly chip provider for Android phones in the US, that should make a big difference, too.
That’s all heartening, but it’s important to set expectations appropriately. Android will never match iOS in terms of providing timely near-universal updates as soon as a new version of the OS is available. There will always be a gap between the Android release and its availability for non-Pixel phones. That’s just the way the Android ecosystem works.
That’s Android 12. It may not be the biggest feature drop in years, but it is easily the biggest visual overhaul in some time. And Android needed it. Over time and over multiple iterations, lots of corners of the OS were getting a little crufty as new ideas piled on top of each other. Android 12 doesn’t completely wipe the slate clean and start over, but it’s a significant and ambitious attempt to make the whole system feel more coherent and consistent.
The beta that’s available this week won’t get there — the version I’m using lacks the theming features, widgets, and plenty more. Those features should get layered in as we approach the official release later this year. Assuming that Google can get this fresh paint into all of the corners, it will make Google’s version of Android a much more enjoyable thing to use.
Last night, a number of Eufy home security camera owners discovered they were able to access smart camera feeds and saved videos from users they had never met, due to an apparent security glitch. First reported by 9to5Mac, the issue came to light in an extended Reddit thread, in which users from around the world detailed their experiences.
“Basically I could see every camera, their front door and backdoor bells, master bedroom, living room, garage, kitchen, their motion recordings, everything,” one Eufy owner noted. “I was wondering what was going on as it still had my email and name as signed in and noticed that some unknown email, I’m guessing of the Hawaii owner, was in my shared guest account.”
Some reported that signing out of their account and signing back in resolved the behavior; by now, whatever problem caused the behavior appears to have been fixed. Still, many users are left concerned that their own cameras and feeds might have been exposed without their knowledge.
“For a security product to become completely unsecure, it’s pretty worrying,” the user continued.
Eufy did not respond to a request for comment. However, the thread contains a message sent to customers attributing the issue to a server error:
Dear user, The issue was due to a bug in one of our servers. This was quickly resolved by our engineering team and our customer service team will continue to assist those affected. We recommend all users to: 1.Please unplug and then reconnect the home base. 2.Log out of the eufy security app and log in again. Contact support@eufylife.com for enquiries.
There’s no indication that specific individuals were targeted as part of the bug, but it’s still a troubling behavior for a service that often monitors private homes. Eufy also makes an Echo Dot-style voice assistant called the Genie, although Genie products appear to have been unaffected by the bug.
AMD disclosed two exploits targeting the Secure Encrypted Virtualization (SEV) feature used by its first-, second-, and third-gen EPYC processors ahead of their presentation at the 15th IEEE Workshop on Offensive Technologies (WOOT’21).
The first exploit, CVE-2020-12967, is set to be presented in a paper from researchers at Fraunhofer AISEC and the Technical University of Munich titled “SEVerity: Code Injection Attacks against Encrypted Virtual Machines.”
AMD said the researchers who discovered that flaw “make use of previously discussed research around the lack of nested page table protection in the SEV/SEV-ES feature which could potentially lead to arbitrary code execution within the guest.”
The second exploit, CVE-2021-26311, will be detailed in a paper with the interestingly capitalized title of “undeSErVed trust: Exploiting Permutation-Agnostic Remote Attestation” from researchers at the University of Lübeck.
AMD said the research showed “memory can be rearranged in the guest address space that is not detected by the attestation mechanism which could be used by a malicious hypervisor to potentially lead to arbitrary code execution within the guest.”
Even though both exploits affect three generations of EPYC processors, only third-generation models will receive a mitigation directly from AMD courtesy of the SEV-Secure Nested Paging feature described in a white paper in January 2020.
As for first- and second-gen EPYC processors: AMD said it “recommends following security best practices” to mitigate exposure to these exploits. That isn’t particularly actionable advice, but fortunately, it shouldn’t prove too hard to follow. We’re following up to see if these issues will receive their own mitigations.
AMD said the “exploits mentioned in both papers require a malicious administrator to have access in order to compromise the server hypervisor.” Requiring that level of administrator access should limit the exploits’ reach—especially during a global pandemic.
More information about both exploits is supposed to arrive during WOOT’21 on May 27. (The papers are listed as “Trololo (Title under embargo)” on the workshop’s website; it seems AMD posted their titles earlier than it was supposed to.)
After you’ve gone through the process of building Chia Coin plots on a PC (see how to farm Chia Coin), there’s no need to waste electricity and tie up expensive computer hardware keeping those plots connected to the Internet. Instead, it’s best to take an external drive or drives with the plots on them and hook them up to a Raspberry Pi, where they can stay online without gulping down too much juice.
In this tutorial, we will create a custom Raspberry Pi Chia farming device powered by the latest Ubuntu 64-bit release for the Raspberry Pi. The unit is designed to be hidden away, farming Chia Coin silently while we go about our lives. As such we chose to house the Raspberry Pi 4 inside of a passively cooled case. Our choice this time was the Akasa Gem Pro which has great cooling for the SoC, PMIC and PCIe chip and a rather tasteful, if unusual design.
For This Project You Will Need
Raspberry Pi 4 4GB
Raspberry Pi case, perhaps one of the best Raspberry Pi cases, with cooling
An external USB storage drive or SSD / HDD with USB 3.0 caddy.
16GB Micro SD card or larger
Raspberry Pi Imager tool
Accessories to use your Raspberry Pi 4
Installing Chia On Raspberry Pi 4
1. Install Ubuntu 21.04 to a 16GB micro SD card using the official Raspberry Pi Imager tool. You can also try a headless installation.
2. Connect your keyboard, mouse, screen and Ethernet cable. If you did a headless install, you can skip the keyboard / mouse / screen. Boot your Raspberry Pi 4 and complete the Ubuntu 21.04 installation process. Reboot the Raspberry Pi for all the changes to take effect.
3. Open a terminal and update the software repositories, then upgrade the system software.
$ sudo apt update
$ sudo apt upgrade -y
4. Install the OpenSSH server to enable remote monitoring via an SSH session. Installation will automatically start the SSH server as a background service.
$ sudo apt install openssh-server
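To confirm the SSH service is up before you rely on it, you can check its status (the unit is named ssh on Ubuntu; press Q to exit the pager):

$ systemctl status ssh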
5. Install and start Byobu, a terminal multiplexer that will enable us to log out of the Pi and leave our Chia farm running.
$ sudo apt install byobu
$ byobu
6. Make a note of your Raspberry Pi’s IP address and hostname.
$ hostname -I
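The hostname itself can be printed with:

$ hostname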
7. Install the Git version control software and a number of essential dependencies to build the Chia application.
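$ sudo apt install git -y

(The remaining build dependencies are pulled in automatically by Chia’s install script in step 9.)

8. Clone the Chia blockchain repository from GitHub. The branch and flags below follow Chia’s standard install instructions at the time of writing; check the project’s README if they have changed.

$ git clone https://github.com/Chia-Network/chia-blockchain.git -b latest --recurse-submodules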
9. Change directory to chia-blockchain and run the installer.
$ cd chia-blockchain
$ sh install.sh
10. Activate the Chia virtual environment and create a new Chia config.
$ . ./activate
$ chia init
11. Connect your USB 3.0 hard drive containing your Chia plot to the blue USB 3.0 ports on the Raspberry Pi 4. The drive will mount to a directory inside of /media/.
12. In the terminal change directory to your USB drive. Our test drive is at /media/les/Chia/
$ cd /media/YOUR USERNAME/DRIVE NAME
13. Add the plot from your USB drive to the Chia config using the 24-word key created when the plot was made. Enter the command and then type the 24 words with a space between each word.
$ chia keys add
14. Start farming the plot; this will also start services for your wallet. This command will only show that the process has started.
$ chia start farmer
15. Use this command to watch the progress of syncing our machine to the network and to confirm that farming has begun. The output updates every two seconds; stop it by pressing CTRL + C.
$ watch 'chia show -s -c'
16. Press F6 to detach from the current Byobu session. This releases us from the running session while keeping the farming-progress command running in the background. Should you wish to go back to that Byobu session, type this command.
$ byobu
It will take some time for the Pi to sync with the Chia network, but it will continue to farm as it syncs. At this point, if you wish, you can unplug the monitor, keyboard, and mouse, leaving just the power, network, and USB 3.0 drive connected. Your Pi will happily farm Chia quietly in the corner of the room. But to access the Pi from now on we need SSH, a secure shell connection, and for that we need to install a client on our computer.
Should you ever need to manually start the Chia farmer, for example after a reboot, start byobu and repeat steps 14 to 16.
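Remember that the chia command lives inside the project’s virtual environment, so the environment must be re-activated first. Assuming chia-blockchain was cloned into your home directory, the full restart sequence looks like this:

$ cd ~/chia-blockchain
$ . ./activate
$ chia start farmer
$ watch 'chia show -s -c'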
How To Remotely Access Your Raspberry Pi Chia Coin Farm
1. Install PuTTY on your PC. PuTTY is used to make remote SSH connections to our Raspberry Pi 4.
2. Open PuTTY and in the Host Name or IP Address field enter the hostname or IP address of your Raspberry Pi 4. Click Open.
3. Enter your username and password to remotely login to the Raspberry Pi 4.
4. Open the Byobu session to see the current progress.
$ byobu
Auto Mount USB Drive on Boot
Should we need to power off our Pi, or should it lose power, we need the drive to mount automatically so it is ready to farm Chia again with little interaction. It is best to connect your keyboard, mouse and screen for this part of the project, but it can also be done remotely over an SSH connection.
1. With the USB drive connected, open a terminal and list all the attached devices. Our device is sda1, which is mounted at /media/les/Chia. Make a note of your mountpoint; we will need it later.
$ lsblk
2. Run this command to identify the UUID of your drive. Copy the text which starts UUID=. For example, our UUID was UUID=”b0984018-3d5e-4e53-89d9-6c1f371dbdee”.
$ sudo blkid /dev/YOUR DRIVE
3. Open the filesystem configuration file, fstab, with sudo.
$ sudo nano /etc/fstab
4. Add this line to the end of the file. Add your UUID after the =, and leave a space before entering the mountpoint. Here is how our drive is configured.
UUID=b0984018-3d5e-4e53-89d9-6c1f371dbdee /media/les/Chia/ auto nosuid,nodev,nofail,x-gvfs-show 0 0
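5. To test the new entry without rebooting, mount everything listed in fstab and then confirm the mountpoint (the path below matches our example drive; substitute your own). If mount -a completes without errors, the drive will mount automatically on every boot.

$ sudo mount -a
$ findmnt /media/les/Chia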
Icy Dock has developed the industry’s only U.2 to USB 3.2 Gen 2 adapter, which lets you connect an enterprise-grade U.2 SSD to any desktop or laptop with a USB Type-A or Type-C port. The EZ-Adapter Ex MB931U-1VB targets people who need to transfer data from an enterprise-grade SSD to a PC or those who use U.2 drives as recording medium and need to transfer videos to a computer. But PC builders may be attracted to the adapter too.
The U.2 form-factor (SFF-8639) was developed primarily for business and mission-critical server and workstation applications that have very strict requirements for connectivity, thermals, reliability and hot-plug capability. Today, U.2 drives are used in servers, workstations, and more. For example, select Blackmagic cameras with the Ursa Mini Recorder attached can use U.2 SSDs as a storage medium.
A big market for the EZ-Adapter Ex MB931U-1VB is content creators who have to transfer loads of data from one PC to another (or from a camera to a PC). The 10GbE networks used in studios are fast, yet the PCIe 3.0 x4 interface of U.2 SSDs is a lot faster, so it makes sense to use U.2 SSDs as flash drives. There are also people who might prefer enterprise-grade SSDs as their direct-attached storage (DAS), due to their higher endurance and reliability.
But another potential market comes from PC DIYers. U.2 SSDs tend to be very expensive when bought through the IT channel, but they can be found considerably cheaper on sites like eBay. Depending on the model, U.2 drives are designed for read-intensive, write-intensive or mixed workloads. Even after some time in service, most U.2 SSDs will have plenty of endurance left. Furthermore, such drives are tailored for sustained, rather than burst, performance. As a result, even used U.2 SSDs may be faster and more durable than cheap consumer-grade drives rated for 0.2 DWPD over a three-year period. Hence, it makes sense to consider U.2 SSDs for DIY DAS applications.
Yet, connecting such drives to PCs is complicated, as only select desktop workstations have U.2 ports (or ship with M.2-to-U.2 adapters), and not all desktops have bays that can house a U.2 drive. Furthermore, there are no laptops with U.2 slots.
Icy Dock’s EZ-Adapter Ex MB931U-1VB is based on the ASMedia ASM2362 controller. It can house any U.2 SSD and connect it to a PC with a USB 3.2 Gen 2 Type-A or Type-C connector.
The EZ-Adapter Ex MB931U-1VB adapter is available now for $150 from Amazon. A power adapter and USB-A and USB-C cables are included.
After a devastating and deeply embarrassing cyberattack on one of the United States’ largest oil pipelines, one that forced many gas stations to shut down and reportedly saw average national gas prices rise above $3 for the first time since 2014, the oil is flowing again — but Bloomberg is reporting that Colonial Pipeline had to pay a nearly $5 million ransom to get there, and paid that ransom within mere hours.
That’s striking, because it’s the opposite of what Reuters, CNN, and others reported in the wake of the attack. “Sources familiar with the company’s response,” a phrase often used when a company doesn’t want to be named, suggested the company had no plans to pay the hackers. CNN’s sources insisted Colonial Pipeline had not yet paid the ransom and would probably not need to pay, suggesting it had already “managed to retrieve the most important data that was stolen” with help from the US government.
It’s also a little worrying because of how a successful ransom might encourage hackers in the future. Over the years, we’ve heard reports of smaller companies and local government entities paying ransoms to regain access to their computers, but this is perhaps one of the most high-profile examples of ransomware yet, and the news might inspire copycats.
On the plus side, a digital forensics expert who spoke to Bloomberg suggested that $5 million isn’t a particularly large sum for something like this: “Ransom is usually around $25 million to $35 million for such a company. I think the threat actor realized they stepped on the wrong company and triggered a massive government response,” LIFARS CEO Ondrej Krehel told the publication. On Monday, the Colonial Pipeline hackers apologized for the “social consequences” and promised to ransom less controversial targets in the future.
It’s not clear which parts of the Colonial Pipeline were at risk: a spokesperson suggested there was no evidence the company’s operational systems were compromised; CNN had three sources yesterday say that the pipeline shut down because its billing system was affected, and the company wasn’t sure it’d be able to charge properly for fuel. Reporting by cybersecurity journalist Kim Zetter suggests the decision was likely more complicated than that, as other entities in the oil distribution system were also worried the ransomware could spread to their computers as well.
Yesterday, President Biden signed an executive order aimed at improving national cybersecurity, with the White House specifically naming the Colonial Pipeline, the SolarWinds hack, and the Microsoft Exchange server vulnerabilities as the kinds of infrastructure failures the government hopes to address.
The Colonial Pipeline began resuming operations on Wednesday evening, with President Biden saying it should be “reaching full operational capacity as we speak” in a briefing early Thursday afternoon. Oil supplies should be “seeing a region-by-region return to normalcy beginning this weekend,” he says.
Still, he warns, “this is not like flicking on a light switch — this pipeline is 5,500 miles long, it had never been shut down in its history… it’s going to take some time, and there may be some hiccups along the way here.”
Biden says the US isn’t blaming Russia directly: “We do not believe the Russian government was involved in this attack, but we do have strong reason to believe that the criminals who did the attack are living in Russia,” he says.
He also announced a specific measure against ransomware: “Our Justice Department has launched a new task force dedicated to prosecuting ransomware hackers to the full extent of the law.”
President Biden declined to comment on whether Colonial Pipeline paid the ransom.
President Joe Biden signed an executive order on Wednesday implementing new policies aimed at improving national cybersecurity. The executive order comes in the wake of a number of recent cybersecurity catastrophes, such as last week’s ransomware attack that took down the Colonial Pipeline, the Microsoft Exchange server vulnerabilities that may have affected north of 60,000 organizations, and the SolarWinds hack that compromised nine federal agencies late last year — each of which was specifically namedropped by the White House in a fact sheet accompanying the order.
The executive order outlines a number of initiatives, including reducing barriers to information sharing between the government and the private sector, mandating the deployment of multi-factor authentication in the federal government, establishing a Cybersecurity Safety Review Board modeled after the National Transportation Safety Board, and creating a standardized playbook for responding to “cyber incidents.” You can read more about all of the initiatives in the White House’s fact sheet here.
In the past few months, we’ve seen example after example of major IT systems breaking down, whether they allowed for a huge effort like the email server hack from the state-sponsored Chinese hacking group Hafnium (the White House promised a “whole of government response” to that one), a ransomware attack that forced public schools to cancel classes, or even a pair of breakdowns that appear to have allowed workers to remote into their local water supply and mess things up. The policies outlined in Wednesday’s executive order could create critical infrastructure to help prevent future cybersecurity disasters — or, at the very least, better limit any potential fallout.
The SteelSeries Arctis 9 Wireless is what I’d call a logical addition to the famous and mostly fantastic Arctis gaming headset lineup. Looking at their nomenclature, the Arctis 9 Wireless is a natural upgrade of the also wireless Arctis 7, whose first edition I reviewed back in 2017 and still happily use to this day simply because it continues to impress me with its build quality, wearing comfort, and overall performance. But let’s not waste any more time on the Arctis 7—we’re here to talk about the $200/€200 Arctis 9 Wireless.
Depending on where you buy it, the Arctis 9 Wireless costs $30–$50 more than the Arctis 7. For that price hike, you’re getting everything great about the Arctis 7 with the addition of expanded wireless connectivity. Aside from the standard 2.4 GHz RF wireless connection, the Arctis 9 Wireless also offers Bluetooth connectivity. It can be used on its own to connect to mobile devices (smartphones, tablets, laptops), TVs, and other devices which act as Bluetooth sound sources, or simultaneously with the 2.4 GHz connection. Thanks to that, you can have it connected to your PC and phone at the same time, and use it to answer phone calls without stopping what you’re doing or taking off the headset. In case you own a PlayStation, a Bluetooth connection to your phone will allow you to connect to a Discord server through Discord’s mobile app and easily communicate with your friends. Another interesting option is to connect the headset to your Nintendo Switch through a built-in analog 3.5-mm interface to hear the game while utilizing a Bluetooth connection to your phone for voice chat in Switch games that don’t support it natively. You can also use it wirelessly with the Switch because the supplied wireless dongle works perfectly fine as long as the Switch is in docked mode.
Xbox users, fear not—SteelSeries is also making an Xbox-specific version of this headset, simply called the Arctis 9X.
Specifications
40-mm dynamic drivers (neodymium magnet)
32 Ω impedance
20-20,000 Hz frequency response (specified by the manufacturer)
Closed-back, over-ear design
2.4 GHz and Bluetooth wireless connectivity
3.5-mm wired connectivity (output only)
Retractable bidirectional microphone
Over 20 hours of battery life
Supplied 1.5-meter Micro-USB charging cable
Platform support: PC, macOS, PS4, PS5, Nintendo Switch, and mobile devices
The new Atom’s headline ability is headphone playback, but don’t underestimate its value as a preamplifier. It’s a classy and versatile addition to Naim’s Uniti range.
For
Top-notch streaming
Great headphone stage
Also a great smart preamp
Against
No HDMI ARC input
Sound+Image mag review
This review originally appeared in Sound+Image magazine, one of What Hi-Fi?’s Australian sister publications. Click here for more information on Sound+Image, including digital editions and details on how you can subscribe.
UK-based Naim Audio first became renowned for its amplification, proving the importance of power quality from the early 1970s. Three decades later, Naim was also quick to recognise the future of file-based and streaming music, and today enjoys great success with its Mu-so wireless speakers, while the Uniti range of all-in-one streaming systems delivers simple but definitely hi-fi ‘just-add-speakers’ solutions.
In a way the Uniti players brought together everything Naim has learned – the wireless, multiroom and control elements of the Mu-sos, with the solid hi-fi amplification developed over decades, including more recent trickle-down tech from the developmental fillip of investment made in the company’s no-holds-barred Statement amplifier project.
Now here comes the Uniti Atom Headphone Edition (£2399/$3290/AU$4299), which takes the smallest of the existing Uniti all-in-ones and does something rather unexpected for Naim – it throws out the part on which the company built its reputation, the amplification.
Features
Well, that’s not entirely true. There are no amplifiers for loudspeakers, as provided on the other Uniti units (excepting only the Uniti Core, which adds networked hard-drive storage to the range).
But as the ‘HE’ of the new name suggests, it caters instead to headphones. On the front there are headphone outputs on a full-size quarter-inch (6.35mm in new money) jack and a 4.4mm Pentaconn balanced connection, while round the back there’s a second balanced connection on 4-pin XLR.
We’re told that for this product Naim has used an all-new amplifier implementation designed specifically to deliver the best headphone experience, including a new transformer design that provides power tailored to the needs of headphone amplification.
But this is not only a headphone amp. It’s also a preamplifier, and Naim has optimised its preamplifier performance too, “including elements originally used in our flagship Statement Amplifier”, it says.
As a preamplifier it offers one analogue input pair on RCA sockets, and then digital inputs: two optical and one coaxial, plus USB-A slots both front and rear. There’s also Bluetooth available, which includes the aptX codec.
What doesn’t it have? It loses the original Atom’s HDMI ARC connection, which was handy for playing audio from your TV, and there’s still no USB-B connection for playing directly from a computer.
But its outputs are expanded: the variable preamplifier output is available on both unbalanced RCA and balanced XLR outputs to feed your downstream amplification. It could even feed power amps directly, since the Atom HE has full volume control – from the remote, from its app, or from the heavenly Naim knob which sits on top. The only disadvantage of that positioning is that the knob is hidden when the unit goes on a good rack shelf, though the unit’s minimal 9cm height means you should still be able to squeeze your hand in for a knob spin when the urge presents itself.
Streaming
And in addition to physical inputs, this Naim has all the streaming prowess of other Uniti members, and that’s to say as complete a set of protocols as you’ll find anywhere – so many, indeed, that when the range originally launched, it was significantly delayed by the paper trail for all the licensing involved.
So this includes being easily addressed from any Spotify app, free or paid, using Apple’s AirPlay 2 to stream the output of a Mac or any app on an iOS device, and Chromecast too, for point-to-point streaming from Android devices. Those with music libraries on a PC can serve them to the Atom via UPnP. It’s also Roon Ready, and although the Roon-direct licensing was still going through when our unit arrived for review, it was nevertheless available in Roon via its Chromecast and AirPlay abilities.
Then there are the services available within the Naim app itself. These include internet radio and podcasts, Tidal, and Qobuz (the latter newly available to Australia). You may note these are services which offer higher-quality subscriptions; Naim emphasises this quality also in its internet radio app, with a section devoted to higher-rate streams than the often grungy desk-compressed pop stations.
And one last batch of capabilities – the Atom HE is multiroom-capable with other Naim equipment including the Mu-so wireless speakers, so you can have music playing in unison (and Uniti) throughout your home. Chromecast and AirPlay 2 offer other paths to multiroom and multi-device playback.
Setting up
Having previously reviewed the standard Uniti Atom, we found set-up here to be a breeze. You pair the remote control by holding it to the full-colour five-inch front-panel display while pushing ‘Home’ for three seconds. Our Naim app, already installed on an iPad Pro, needed a reinstall before it saw the Atom HE on the network and delivered it a firmware update, losing contact until the update was complete.
Beyond that, we had absolutely zero operational issues, and indeed throughout our testing we were able to generate no criticisms at all – not one – because Naim has honed its highly versatile and potentially complex operation to something near perfection.
The Naim app presents all its streaming services on one screen, the inputs on a scrolling second screen; if that doesn’t appeal you can use the settings to reorder the inputs to your preference, banishing unused ones to the second screen.
We had connected a Thorens turntable via a phono stage into the analogue input. We connected our computer to an optical input, using a DAC between them as a USB-to-SPDIF converter.
To kick off, we ran the Atom HE’s unbalanced pre-outputs to our resident power amps, always a slightly nervous connection to make when the preamp is digitally controlled and might flick the output to max accidentally. (Once we had Roon connected, we specified a safety level beyond which the volume slider then can’t go.)
We addressed it first from Tidal on the Naim app, then from the Tidal app itself, then from Roon.
Indeed, during the Atom’s visit it may have been physically located in one room, but it seemed omnipresent. Wherever we accessed music – on the music room computer, on our Chromebook, the iPhone, a tablet – there was the Naim Atom as a playback device waving at us as if to say ‘Play to me! Play to me!’ There are so many ways to play that surely any current preferred path to playback will fit right in.
Listening
We can fully believe Naim’s claim that the preamp of the Atom HE is actually superior to those of the current Uniti range. Even in our initial set-up without the benefit of the balanced connections, all the cues from our favourite tunes poured from power amp and speaker references, dynamically delivered, cleanly resolving the good and the bad.
The effect on Alex the Astronaut’s main vocal for Split the Sky can sound curiously excessive on systems lacking resolution, degenerating into a mush. Here it could be discerned separately, part delay, part reverb. More to the point, the music and the emotion were entirely unchecked. The quite awful subject matter of her remarkable I Like To Dance is chilling; her Triple J cover of Mr Blue Sky – The Go-Betweens’ Lindy Morrison on drums – is sheer joy.
The Tidal stream through the Atom HE easily outperformed Spotify’s relatively softened sound. Naim’s Uniti platform does not support the MQA encoding which Tidal uses to ‘unfold’ its high-res Masters to their full resolution – Naim has said it could change this with a firmware update, but is being led by demand.
Whatever you might think of MQA, it may be that uncompressed FLAC high-res streaming as offered by Qobuz and Deezer represents a purer future – after all, with today’s bandwidths defined by streaming 4K video, what need for data compression of high-res music any more?
So with Qobuz newly launched in Australia, we took the opportunity to connect our Roon to Qobuz, and our Roon to the Uniti. Roon’s excellent quality-check pop-up box reminded us that Roon via Chromecast was dropping the high-res to 48kHz, so we switched to Qobuz direct inside the Naim app. And what a joy that was. Fleetwood Mac’s Go Your Own Way was almost alarmingly crisp; details on Toto’s Africa (the left-channel chuckle on the intro) astoundingly apparent, especially as our usual playback preference for this slice of soft rock is the vinyl 45.
On Kate Bush’s Running Up That Hill, the continuous rolling drums’n’bass were entirely segregated from the other parts, and the emotional lift of multitracked Kates as we reached the first ‘Come on baby, come on darlin’ was thrilling at an almost tactile level. We began regretting our agreement to return the Atom HE to distributor BusiSoft AV within an unusually brief two weeks; we were barely getting started and we were missing it already.
Headphone playback
Naim Uniti Atom Headphone Edition specs
Inputs: 1 x analogue RCA, 2 x optical digital, 1 x coaxial digital, 2 x USB-A
Streaming: Apple AirPlay 2, Chromecast, UPnP, Spotify Connect, Tidal, Qobuz, Roon Ready, Bluetooth, internet radio
Also visiting from Naim’s Australian distributor were the Final Audio D8000 Pro headphones, themselves a mere AU$4999 (£3995, US$4299) with their silver-coated cables trailing away to the Atom HE’s full-size headphone jack like weighty twisted tinsel.
The Naim had not the slightest trouble driving these esoteric 60-ohm planar magnetic headphones to their maximum ability, whether delivering a tight and punchy kick drum under the guitar and synthscape of The Triffids’ Wide Open Road, or highlighting the curiously lo-fi elements opening Gotye’s Somebody That I Used To Know.
The Naim and Finals delivered a mind-meltingly zingy portrayal of The Go-Betweens’ Streets of Your Town, currently resurrected for advertising purposes by Ampol. Here it was crisply separated to the point where our attention was constantly darting around the soundstage to small sonic elements – the cunning combination of panned rhythm guitars, the tight block hits in the left – each element easily individually selectable by the mind’s ear, yet held together in a finely musical whole.
We also ran more affordable headphone references – open AKGs, closed Sennheisers – and every pair displayed its full abilities, with more than enough power on tap from the Atom HE; enough, indeed, to reach quite worrying levels without any hint of congestion or distortion.
The relevant figures are 1.5 watts per channel into 16 ohms (from all headphone outputs) and an output impedance of 4.7 ohms. The headphone amp remains in pure Class A except with lower-impedance headphones pushed to the extremes of volume, when a Class AB circuit is “seamlessly” invoked.
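For context, some back-of-the-envelope arithmetic (ours, not Naim’s – the company publishes only the figures above) shows what that 16-ohm rating implies for other loads, assuming the output stage is voltage-limited into higher impedances:

```python
import math

# Rough arithmetic on the quoted 1.5 W per channel into 16 ohms
# (our illustration; only the 16-ohm spec comes from Naim).
# P = V^2 / R, so the 16-ohm rating implies this RMS voltage swing:
v_rms = math.sqrt(1.5 * 16)   # ~4.9 V RMS

# Assuming roughly constant voltage capability into higher impedances,
# available power then falls as V^2 / R:
for load_ohms in (16, 60, 300):   # 60 ohms matches the Final D8000 Pro
    p_mw = v_rms ** 2 / load_ohms * 1000
    print(f"{load_ohms:>3} ohm load: about {p_mw:.0f} mW per channel")
```

On that simple model, even the 60-ohm D8000 Pro would see around 400 mW per channel – far more than typical listening demands, which squares with the effortless drive we heard.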
If you like it loud (bearing in mind the dangers of so listening), the Atom HE will at least ensure you get your music with a minimum of damaging distortion.
After a head-pumping serve of Wolfmother’s The Joker and the Thief we wondered if we should take a rest, but Qobuz continued serving such delights that we didn’t, instead diverting to some high-res classical. This confirmed the dynamic reserve of the headphone output and a remarkable ability to stay tonally accurate across different impedance headphones. All this was from the standard unbalanced quarter-inch headphone socket; the balanced outputs could potentially lift the Atom HE’s game still higher.
With an assortment of active stereo speakers in residence for our group test this issue, it occurred to us that the Atom HE’s abilities as a preamp perfectly complement just such devices. The ELAC Navis, for example, has balanced XLR inputs, to which we connected the Atom HE’s balanced outputs.
The result was wildly successful – a brilliant pair of speakers provided with a perfectly-pitched preamp output backed by physical inputs, streams galore, an app, a physical remote control and Naim’s big knob. Adding good active speakers to the Atom HE makes for a wonderfully compact yet versatile system, boosted by its particular powers to make your headphones sing when privacy is required.
Verdict
The Atom HE is an excellent addition to the Naim Uniti range – something genuinely different in offering a streaming preamplifier with a top-quality headphone amplifier. Use it alone with headphones, with power amps, or with active speakers, and you have a system just as versatile in its streaming abilities as the Mu-so, more versatile in its connections, and far higher in its hi-fi quality. And it comes with the best knob in hi-fi. It’s a big thumbs up from us.
Intel introduced its long-awaited eight-core Tiger Lake-H chips for laptops today, vying for a spot on our best gaming laptop list and marking Intel’s first shipping eight-core 10nm chips for the consumer market. These new 11th-generation chips, which Intel touts as the ‘World’s best gaming laptop processors,’ come as the company faces unprecedented challenges in the laptop market — not only is it contending with AMD’s increasingly popular 7nm Ryzen “Renoir” chips, but perhaps more importantly, Intel is also now playing defense against Apple’s innovative new Arm-based M1 that powers its new MacBooks.
The halo eight-core, 16-thread Core i9-11980HK peaks at 5.0 GHz on two cores, fully supports overclocking, and can consume up to 110W under heavy load despite its 65W upper TDP rating. Intel has also added limited overclocking support, in the form of a speed optimizer and unlocked memory settings, to three of the ‘standard’ eight-core models.
As with Intel’s lower-power Tiger Lake chips, the eight-core models come fabbed on the company’s 10nm SuperFin process and feature Willow Cove execution cores paired with the UHD Graphics 750 engine with the Xe Architecture. These chips will most often be paired with a discrete graphics solution, from Nvidia or AMD. We have coverage of a broad selection of new systems, including from Alienware, Lenovo, MSI, Dell, Acer, HP, and Razer.
All told, Intel claims that the combination of the new CPU microarchitecture and process node offers up to 19% higher IPC, which naturally results in higher performance potential in both gaming and applications. That comes with a bit of a caveat, though — while Intel’s previous-gen eight-core 14nm laptop chips topped out at 5.3 GHz, Tiger Lake-H maxes out at 5.0 GHz. Intel says the higher IPC tips the balance towards higher overall performance despite 10nm’s lower clock speeds.
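That claim is easy to sanity-check with rough arithmetic (ours, using Intel’s own numbers): treating single-core performance as roughly proportional to IPC times clock speed, the math still favors Tiger Lake-H at peak boost.

```python
# Back-of-the-envelope check of Intel's claim, treating single-core
# performance as roughly IPC x clock. Figures are Intel's own.
ipc_gain = 1.19            # Intel's "up to 19% higher IPC" figure
clock_ratio = 5.0 / 5.3    # Tiger Lake-H peak boost vs the 14nm chips' 5.3 GHz

net_gain = ipc_gain * clock_ratio - 1
print(f"Net single-core gain at peak boost: ~{net_gain * 100:.0f}%")  # ~12%
```

So even after surrendering 300 MHz of peak clock, the IPC uplift leaves roughly a 12% single-core advantage on paper, before any real-world variables intervene.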
The new Tiger Lake-H models arrive in the wake of Intel’s quad-core H35 models that operate at 35W for a new ‘Ultraportable’ laptop segment that caters to gamers on the go. However, Intel isn’t using H45 branding for its eight-core Tiger Lake chips, largely because it isn’t marking down 45W on the spec sheet. We’ll cover what that confusing bit of information means below. The key takeaway is that these chips can operate anywhere from 35W to 65W. As usual, Intel’s partners aren’t required to (and don’t) specify the actual power consumption on the laptop or packaging.
Aside from the addition of more cores, a new system agent (more on that shortly), and more confusing branding, the eight-core Tiger Lake-H chips come with a well-known feature set that includes the same amenities as their quad-core Tiger Lake predecessors, like PCIe 4.0, Thunderbolt 4, and support for Resizable BAR. These chips are also the first eight-core laptop lineup to support PCIe 4.0, as AMD’s competing platforms remain on PCIe 3.0. Intel also announced five new vPro H-series models with the same specifications as the consumer models but with features designed for the professional market.
Intel says the new Tiger Lake-H chips will come to market in 80 new designs (15 of these are for the vPro equivalents), with the leading devices available for preorder on May 11 and shipping on May 17. Surprisingly, Intel says that it has shipped over 1 million eight-core Tiger Lake chips to its partners before the first devices have even shipped to customers, showing that the company fully intends to leverage its production heft while its competitors, like AMD, continue to grapple with shortages. Intel also plans to keep its current fleet of 10th-Gen Comet Lake processors on the market for the foreseeable future to address the lower rungs of the market, so its 14nm chips will still ship in volume.
Intel Tiger Lake-H Specifications
Processor | Base / Boost (GHz) | Cores / Threads | L3 Cache | Memory Support
Core i9-11980HK | 2.6 / 5.0 | 8 / 16 | 24 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
AMD Ryzen 9 5900HX | 3.3 / 4.6 | 8 / 16 | 16 MB | DDR4-3200 / LPDDR4x-4266
Core i9-10980HK | 2.4 / 5.3 | 8 / 16 | 16 MB | DDR4-2933
Core i7-11375H Special Edition (H35) | 3.3 / 5.0 | 4 / 8 | 12 MB | DDR4-3200 / LPDDR4x-4266
Core i9-11900H | 2.5 / 4.9 | 8 / 16 | 24 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
Core i7-10875H | 2.3 / 5.1 | 8 / 16 | 16 MB | DDR4-2933
Core i7-11800H | 2.3 / 4.6 | 8 / 16 | 24 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
Core i5-11400H | 2.7 / 4.5 | 6 / 12 | 12 MB | DDR4-2933 (Gear 1) / DDR4-3200 (Gear 2)
AMD Ryzen 9 5900HS | 3.0 / 4.6 | 8 / 16 | 16 MB | DDR4-3200 / LPDDR4x-4266
Core i5-10400H | 2.6 / 4.6 | 4 / 8 | 8 MB | DDR4-2933
Intel’s eight-core Tiger Lake-H takes plenty of steps forward — it’s the only eight-core laptop platform with PCIe 4.0 connectivity and hardware support for AVX-512, but it also takes steps back in a few areas.
Although Intel just released 40-core 10nm Ice Lake server chips, we’ve never seen the 10nm process ship with more than four cores for the consumer market, largely due to poor yields and 10nm’s inability to match the high clock rates of Intel’s mature 14nm chips. We expected the 10nm SuperFin process to change that paradigm, but as we see in the table above, the flagship Core i9-11980HK tops out at 5.0 GHz on two cores, just like the quad-core Tiger Lake Core i7-11375H Special Edition. Intel uses its Turbo Boost Max 3.0, which targets threads at the fastest cores, to hit the 5.0 GHz threshold.
However, both chips pale in comparison to the previous-gen 14nm Core i9-10980HK that delivers a beastly 5.3 GHz on two cores courtesy of the Thermal Velocity Boost (TVB) tech that allows the chip to boost higher if it is under a certain temperature threshold. Curiously, Intel doesn’t offer TVB on the new Tiger Lake processors.
Intel says that it tuned 10nm Tiger Lake’s frequency for the best spot on the voltage/frequency curve to maximize both performance and battery life, but it’s obvious that process maturity also weighs in here. Intel offsets Tiger Lake’s incrementally lower clock speeds with the higher IPC borne of the Willow Cove microarchitecture, which delivers up to 12% higher IPC in single-threaded and 19% higher IPC in multi-threaded applications. After those advances, Intel says the Tiger Lake chips end up faster than their prior-gen counterparts, not to mention AMD’s competing Renoir processors.
Intel’s Core i9-11980HK peaks at 110W (PL2) and is a fully overclockable chip — you can adjust the core, graphics, and memory frequency at will. We’ll cover the power consumption, base clock, and TDP confusion in the following section.
Intel has also added support for limited overclocking on the Core i7-11800H, Core i9-11900H, and Core i9-11950H. The memory settings on these three chips are fully unlocked (with a few caveats we’ll list below), so you can overclock the memory at will. Intel also added support for its auto-tuning Speed Optimizer software; when enabled, it boosts performance in multi-threaded work, though single-core frequencies are unaffected.
Intel also made some compromises on the memory front. First, the memory controllers no longer support LPDDR4X. Instead, they top out at DDR4-3200, and even that speed isn’t available in the chips’ fastest memory configuration, as we’ll explain.
The eight-core Tiger Lake die comes with the Geyserville System Agent, just like the Rocket Lake desktop chips, which means the company has brought Gear 1 and Gear 2 memory modes to laptops. The optimal setting, ‘Gear 1’, signifies that the memory controller and memory operate at the same frequency (1:1), providing the lowest latency and the best performance in lightly-threaded work, like gaming. All of the Tiger Lake chips reach up to DDR4-2933 in this mode.
Tiger Lake-H does officially support DDR4-3200, but only with the ‘Gear 2’ setting that allows the memory to operate at twice the frequency of the memory controller (2:1), resulting in higher data transfer rates. This can benefit some threaded workloads but also results in higher latency that can lead to reduced performance in some applications — particularly gaming. We have yet to see a situation where Gear 2 makes much sense for enthusiasts/gamers.
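The ratio arithmetic is simple enough to sketch. Here’s our illustration of what the Gear settings mean for clock speeds (the helper name is ours, not an Intel utility):

```python
# Sketch of the Gear 1 / Gear 2 ratio scheme described above (our
# illustration). DDR4 transfers data twice per memory clock, so DDR4-3200
# runs a 1600 MHz memory clock; Gear 1 runs the memory controller (IMC)
# at that same clock, while Gear 2 runs it at half speed.
def gear_clocks(ddr_rate_mt_s: int) -> tuple[float, float, float]:
    mem_clock = ddr_rate_mt_s / 2                  # MHz
    return mem_clock, mem_clock, mem_clock / 2     # (memory, Gear 1 IMC, Gear 2 IMC)

for rate in (2933, 3200):
    mem, g1, g2 = gear_clocks(rate)
    print(f"DDR4-{rate}: memory clock {mem:.0f} MHz | "
          f"IMC at {g1:.0f} MHz in Gear 1, {g2:.0f} MHz in Gear 2")
```

Note the upshot: DDR4-3200 in Gear 2 runs the controller at only 800 MHz, versus roughly 1467 MHz for DDR4-2933 in Gear 1, which is exactly why Gear 1 usually wins for latency-sensitive workloads like gaming despite the lower transfer rate.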
Intel also dialed back the UHD Graphics engine with Xe Architecture to 32 execution units (EUs) for the eight-core H-series models, which makes sense given that this class of chip will usually be paired with discrete graphics from either AMD or Nvidia (and possibly Intel’s own fledgling DG1, though we have yet to see any such configurations). For comparison, the quad-core H35 Core i9 and i7 models come equipped with 96 EUs, while the Core i5 variant comes with 80 EUs.
This is Not The Tiger Lake H45 You’re Looking for – More TDP Confusion
As per usual with Intel’s recent laptop chip launches, there’s a bit of branding confusion. The company’s highest-end eight-core laptop chips previously came with an “H45” moniker to denote a recommended 45W TDP, but you won’t find that designation on Intel’s new H-series chips, even though the quad-core 35W laptop chips Intel introduced at CES this year carry the H35 designation. In fact, Intel won’t list a specific TDP on the spec sheet for the eight-core Tiger Lake-H chips at all; instead, it labels the H-series models as ’35W to 65W’.
That’s problematic because Intel measures its TDP at the base frequency, so a lack of a clear TDP rating means there’s no concrete base frequency specification. We know that the PL2, or power consumed during boost, tops out at 110W, but due to the TDP wonkiness, there’s no official PL1 rating (base clock).
That’s because Intel, like AMD, gives OEMs the flexibility to configure the TDP (cTDP) to higher or lower ranges to accommodate the specific power delivery, thermal dissipation, and battery life accommodations of their respective designs. For instance, Intel’s previous-gen 45W parts have a cTDP range that spans from 35W to 65W.
This practice provides OEMs with wide latitude for customization, which is a positive. After all, we all want thinner and faster devices. However, Intel doesn’t compel manufacturers to clearly label their products with the actual TDP they use for the processor, or even list it in the product specifications. That can be very misleading — there’s a 30W delta between the lowest- and highest-performance configurations of the same chip with no clear method of telling what you’re purchasing at the checkout lane. There really is no way to know which Intel is inside.
Intel measures its TDP rating at the chip’s base clock (PL1), so the Tiger Lake-H chips will have varying base clocks that reflect their individual TDP… that isn’t defined. Intel’s spec table shows base clocks at both 45W and 35W, but be aware that this can be a sliding scale. For instance, you might purchase a 40W laptop that lands in the middle range.
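To make the PL1/PL2 relationship concrete, here’s a toy simulation (our sketch with assumed values, not Intel’s firmware, which uses an exponentially weighted running average with a vendor-tuned time constant, Tau):

```python
# Toy model of PL1/PL2 power limiting (illustrative; all numbers assumed).
# The chip may draw up to PL2 while a running average of package power
# stays under PL1, then settles back at the sustained PL1 limit.
PL1 = 45.0    # sustained limit in watts -- the "TDP" an OEM might configure
PL2 = 110.0   # short-term boost limit in watts
TAU = 28.0    # averaging time constant in seconds (assumed)
DT = 0.1      # simulation step in seconds

avg_power = 0.0
for step in range(1200):   # two minutes of a heavy all-core load
    draw = PL2 if avg_power < PL1 else PL1   # boost until the budget runs out
    avg_power += (DT / TAU) * (draw - avg_power)
    if step % 200 == 0:
        print(f"t={step * DT:5.1f}s  draw={draw:5.1f} W  avg={avg_power:5.1f} W")
```

In this model a 45W configuration sustains the 110W boost for roughly 15 seconds before dropping to 45W; dial PL1 down to 35W and the budget runs out sooner and sustains lower. That is precisely why an undisclosed TDP translates directly into undisclosed performance.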
As per usual, Intel’s branding practice leaves a lot to be desired. Eliminating the H45 branding and going with merely ‘H-series’ for the 35W-to-65W eight-core chips simply adds more confusion, because the quad-core H35 chips are also H-series chips, and there’s no clear way to delineate the two families other than specifying the core count.
Intel is arguably taking the correct path here: It is better to specify that the chips can come in any range of TDPs rather than publish blatantly misleading numbers. However, the only true fix for the misleading mess created by configurable TDPs is to require OEMs to list the power rating directly on the device, or at least on the spec sheet.
Intel Tiger Lake-H Die
[Images: the eight-core Tiger Lake-H die, the quad-core Tiger Lake die, the eight-core Comet Lake-H die, and a PCH feature slide]
The eight-core H-series chip package pairs a 10nm compute die with a 14nm PCH. Intel says the Tiger Lake-H die measures 190mm², much larger than the estimated 146.1mm² die found on the quad-core models; we’ve also included a die shot of the eight-core Comet Lake-H chip for comparison.
We’ll have to wait for a proper die annotation of the Tiger Lake-H chip, but we do know that it features a vastly cut-down UHD Graphics 750 engine compared to the quad-core Tiger Lake models (32 vs 96 EUs) and a much larger L3 cache (24MB vs 12MB).
The Tiger Lake die supports 20 lanes of PCIe 4.0 connectivity, with 16 lanes exposed for graphics, though those can also be carved into 2x8 or 1x8+2x4 connections to accommodate more PCIe 4.0 devices, like additional M.2 SSDs. Speaking of which, the chip also supports a direct x4 PCIe 4.0 connection for a single M.2 SSD.
Intel touts that you can RAID several M.2 SSDs together through its Intel Rapid Storage Technology (IRST) and use them to boot the machine. This feature has been present on prior-gen laptop platforms, but Tiger Lake-H marks the debut for this feature with a PCIe 4.0 connection on a laptop.
The PCH provides all of the basic connectivity features. The Tiger Lake die and PCH communicate over a DMI x8 bus, and the chipset supports an additional 24 PCIe 3.0 lanes that can be carved up for additional features. For more fine-grained details of the Tiger Lake architecture, head to our ‘Intel’s Tiger Lake Roars to Life: Willow Cove Cores, Xe Graphics, Support for LPDDR5’ and ‘Intel’s Path Forward: 10nm SuperFin Technology, Advanced Packaging Roadmap’ articles.
Intel Tiger Lake-H Gaming Benchmarks
[Slides: Intel-provided gaming benchmark results]
Intel provided the benchmarks above to show the gen-on-gen performance improvements in gaming, and the performance improvement relative to competing AMD processors. As always, approach vendor-provided benchmarks with caution, as they typically paint the vendors’ devices in the best light possible. We’ve included detailed test notes at the end of the article, and Intel says it will provide comparative data against Apple M1 systems soon.
As expected, Intel shows that the Core i9-11980HK provides solid generational leads over the prior-gen Core i9-10980HK, with the deltas spanning from 15% to 21% in favor of the newer chip.
Then there are the comparisons to the AMD Ryzen 9 5900HX, with Intel claiming leads in titles like War Thunder, Total War: Three Kingdoms, and Hitman 3, along with every other hand-picked title in the chart.
Intel tested the 11980HK in an undisclosed OEM pre-production system with an RTX 3080 set at a 155W threshold, while the AMD Ryzen 9 5900HX resided in a Lenovo Legion R9000K with an RTX 3080 dialed in at 165W. Given that we don’t know anything about the OEM system used for Intel’s benchmarks, like cooling capabilities, and that the company didn’t list the TDP for either chip, take these benchmarks with a shovelful of salt.
Intel also provided benchmarks pitting the Core i5-11400H against the Ryzen 9 5900HS, again claiming the best performance for thin-and-light gaming systems. Here, though, the Intel chip loses in three of the four benchmarks; Intel counters that its “Intel Sample System” is a mere 16.5mm thick, while the 5900HS rides in an ASUS ROG Zephyrus G14 that measures 18mm thick at the front and 20mm at the rear.
Intel’s message is that it can provide comparable gaming performance in thinner systems, but there’s not enough information, like battery life or other considerations, to draw any real conclusions from this data.
Intel Tiger Lake-H Application Benchmarks
[Slides: Intel-provided application benchmark results]
Here we can see Intel’s benchmarks for applications, too, but the same rules apply — we’ll need to see these benchmarks in our own test suite before we’re ready to claim any victors. Also, be sure to read the test configs in the slides below for more details.
Intel’s 11th-Gen Tiger Lake brings support for AVX-512 and the DL Boost deep learning suite, so Intel hand-picks benchmarks that leverage those features. As such, the previous-gen Comet Lake-H comparable is hopelessly hamstrung in the Video Creation Workflow and Photo Processing benchmarks.
We can say much the same about the comparison benchmarks with the Ryzen 9 5900HX. As a result of Intel’s insistence on using AI-enhanced benchmarks, these benchmarks are largely useless for real-world comparisons: The overwhelming majority of software doesn’t leverage either AI or AVX-512, and it will be several years before we see broad uptake.
As noted, the leading Tiger Lake-H devices are available for preorder on May 11 and ship on May 17, with 80 new designs on the way. As you can imagine, we’ll also have reviews coming soon. Stay tuned.
[Slides: Intel’s test configurations and benchmark endnotes]