Over the last couple of years, Valve has been working on Steam Cloud Gaming, allowing users to connect their Steam libraries to services like GeForce Now. Valve’s cloud ambitions may not end there though, as evidence is pointing to a new handheld streaming device currently known as ‘SteamPal’.
SteamDB creator Pavel Djundik brought attention to this today. Datamining turned up code strings referring to “SteamPal”, “SteamPal Games” and “Neptune”, the codename for a controller Valve is also working on. There were also references to a “NeptuneGamesCollection” and a separate string for “Device Optimized Games”.
What this appears to be alluding to is a handheld streaming device with its own dedicated controller, which would be used for streaming Steam games. Not all games on Steam support gamepads/controllers, so that is where the ‘device optimised’ list comes into play.
Valve dropped the Steam Controller and its Steam Link streaming device in favour of an app-focused approach for smartphones. It seems that Valve isn’t done with ideas to deliver Steam games in a more streamlined, console-style format.
KitGuru Says: Valve works on new projects all the time, a lot of which don’t reach the public announcement phase. Still, with cloud gaming finally beginning to gain some ground, this is a concept that has a lot of potential, especially as a replacement for the Steam Link concept. Would you like to see Valve bring out its own console-style streaming device for Steam games?
More than 500 Amazon employees have signed an internal letter to Jeff Bezos and Andy Jassy calling for the company to acknowledge the plight of the Palestinian people. The move comes after Israeli airstrikes devastated Palestinians in Gaza, leaving 248 people dead. Hamas and Israel have since agreed to a ceasefire.
“We ask Amazon leadership to acknowledge the continued assault upon Palestinians’ basic human rights under an illegal occupation… without using language that implies a power symmetry or situational equivalency, which minimizes and misrepresents the disruption, destruction, and death that has disproportionately been inflicted upon the Palestinians in recent days and over several decades,” employees wrote. “Amazon employs Palestinians in Tel Aviv and Haifa offices and around the world. Ignoring the suffering faced by Palestinians and their families at home erases our Palestinian coworkers.”
Employees want the company to terminate business contracts with organizations that are complicit in human rights violations, like the Israeli Defense Forces. In April, Amazon and Google signed a $1.2 billion cloud computing contract with the Israeli government.
The note echoes similar petitions from workers at Apple and Google. On May 18th, Jewish employees at Google penned a letter to Sundar Pichai calling for the company to “reject any definition of antisemitism that holds that criticism of Israel or Zionism is antisemitic.” Two days later, The Verge published a note from Muslim employees at Apple.
Muslim tech workers say executives have been slow to voice support for Palestinians, or condemn the violence in Gaza. Many feel their CEOs are choosing to ignore Israeli human rights abuses because the situation is fraught. The result, according to multiple sources, is that Muslims in tech feel undervalued and ignored.

Amazon did not immediately respond to a request for comment from The Verge.
Compared to competitors like YouTube TV and Hulu, Sling TV has never had the flashiest app, and the overall user experience leaves something to be desired. It’s been easy enough to overlook these faults since Sling TV undercuts those rivals on price, but today the company announced a completely redesigned app that focuses on more polish and personalized recommendations.
The new Sling TV app is rolling out first to “select customers” using Amazon’s Fire TV devices, and other platforms will be added as the year stretches on. (It’ll arrive on Roku sometime this summer, for example.) The Dish-owned company describes this as the “most comprehensive update in Sling TV’s history.” And based on screenshots and the GIF above, it does look like a significant makeover.
“After a year of talking to customers and working with our design and advanced engineering teams, we’re happy to roll out the new Sling TV app to deliver the best in live sports, news and entertainment, at the same unbeatable low price point,” Michael Schwimmer, group president of Sling TV, said in a press release.
The redesign comes with a lot of changes, including a left-side navigation column and a new homescreen that focuses on content recommendations. The channel guide has been “reimagined” to make favoriting channels and filtering easier, though it will still feel familiar to customers who want that traditional cable-like grid.
Sling TV’s cloud DVR now gets its own tab, which should make it easier to sort through your recordings. By default, the service comes with 50 hours of DVR space, but you can expand this to 200 hours for an extra $5 per month.
“If a streaming app is done right, it should be practically invisible, allowing the user to get to the most relevant content quickly and easily — the new Sling TV experience does just that,” said Jon Lin, Sling TV’s VP of product.
If you’re a Nintendo Switch owner, you’re probably already familiar with The Legend of Zelda: Breath of the Wild. It remains one of the most ambitious and charming Zelda titles to date. But like most first-party Nintendo games, it’s one that rarely receives a sizeable discount. Right now, though, you can purchase a physical edition of the beloved title at GameStop for $40, with free shipping. The popular retailer is also offering a host of other first- and third-party games at a discount as part of its ongoing Memorial Day sale, including standouts such as Persona 5 Strikers, Splatoon 2, and Donkey Kong Country: Tropical Freeze. Now, if only Animal Crossing: New Horizons and Super Smash Bros. Ultimate would make the cut.
The Legend of Zelda: Breath of the Wild — $40 at GameStop ($60 list, 34% off; prices taken at time of publishing).

Four years after launch, The Legend of Zelda: Breath of the Wild remains a masterpiece. The first-party title offers all of the hallmarks of a traditional Zelda title, including challenging combat and puzzles, but within a gorgeous, open-ended design.
If you’re more of an Xbox gamer looking for something to let you play games while on the go, Microsoft just updated its Cloud Gaming app on the Surface Duo to let you use one of the screens as a virtual controller. Conveniently, Amazon has both the 128GB and 256GB versions of the Surface Duo on sale for $619 and $656, respectively, once you clip the 25 percent coupon. These models are locked to AT&T, however, so you’ll need to have an AT&T SIM card and service to use them when outside of a Wi-Fi network.
If gaming isn’t your thing, but you’re still in the market for a professional monitor, the Dell 27-inch U2719DX is worth consideration. Currently on sale at Best Buy for $250 — an all-time low — the QHD 1440p peripheral offers color-accurate visuals and a thin profile, one that looks as sturdy as it is ergonomic. It tops out at 60Hz and lacks the USB-C connectivity found on pricier displays such as the like-minded U2719DC. But given it’s currently more than $100 off, the lack of futureproofing is a bit more understandable.
Dell 27-inch U2719DX monitor — $250 at Best Buy ($350 list, 29% off; prices taken at time of publishing).

Dell’s 27-inch U2719DX Monitor is built for simplicity. It sports a sturdy, swivel-ready design and a 60Hz refresh rate, along with a three-year warranty, accurate colors, and a healthy port selection that, sadly, doesn’t include USB-C.
If you have no intention of making the jump to the iPad Pro with the M1 processor, picking up a keyboard is a great way to make the most of the last-gen iPad Pro. Luckily, Amazon is offering the biggest discount we’ve seen in recent months on Apple’s Smart Keyboard Folio Case for the 12.9-inch iPad Pro. The protective fabric-lined case magnetically attaches to the back of either the 2018 or 2020 iPad Pro, provides two viewing angles, and is a joy to type on, though we still wouldn’t recommend it as your primary writing device. For a limited time, you can pick it up for more than $100 off the initial list price at Amazon.
Microsoft Build, the company’s annual developer conference, is kicking off today, May 25th. Just like last year, the conference is happening in an online-only format rather than in-person in Seattle. The developer conference is typically where Microsoft details upcoming changes to its Windows, Office, and cloud platforms.
There’ll be around 48 hours of Build content in total taking place over the next two days, kicking off with a keynote today on May 25th starting at 11am ET. A full agenda is available over on the Microsoft Build website, featuring a mix of keynotes, technical deep dives, and breakout sessions.
How do I watch Microsoft Build?
You’ll need to register to attend the virtual Build conference, but doing so is free and gives you access to over 300 sessions as of this writing. You can use Microsoft’s scheduler to plan out which sessions to attend right here.
Microsoft is also streaming Build over on its developer YouTube channel. We’ve embedded a link to the first day’s livestream below.
What time does Microsoft Build start?
If you’re watching along, Microsoft’s stream begins at 8AM PT / 11AM ET.
Microsoft teased the potential for an Xbox handheld-like experience with the Surface Duo during its unveiling nearly two years ago, and it’s finally appearing today. Microsoft is updating its Xbox Cloud Gaming (xCloud) app for Android, and it includes dual-screen support for the Surface Duo.
The app update allows Surface Duo owners to use a virtual gamepad on one screen of their device and games on the other. It makes the Surface Duo look more like a Nintendo 3DS than a mobile phone, with touch controls for a variety of games.
Microsoft has been steadily adding Xbox Touch Controls to more than 50 games in recent months, including titles like Sea of Thieves, Gears 5, and Minecraft Dungeons. The full list of touch-compatible games is available here, and you can of course just use a regular Bluetooth or Xbox controller to stream games to the Surface Duo.
The benefits of a dual-screen device for this type of mobile experience are obvious. You no longer have touch controls over the top of the game, and your thumbs don’t get in the way of seeing important action on-screen. If dual-screen or foldable devices ever catch on, this is a far superior way to play Xbox games without a dedicated controller.
Microsoft has also tweaked the rest of the Xbox Cloud Gaming app to work better on the Surface Duo. Improvements include easier content viewing, smoother menu navigation, and new columned layouts. The updated app is available now in the Google Play Store.
Solid-state drives have a number of advantages over hard drives, including performance, dimensions, and reliability. Yet for quite a while, HDDs offered a better balance between capacity, performance, and cost, which is why they outsold SSDs in terms of unit sales. Things have certainly changed for client PCs: 60% of new computers sold in Q1 2021 used SSDs instead of HDDs. Given that, it’s not surprising that SSDs outsold HDDs almost 3:2 in the first quarter in terms of unit sales; in 2020, SSDs had already outsold hard drives (by units, not GBs) by 28 percent.
Unit Sales: SSDs Win 3:2
The three makers of hard drives shipped as many as 64.17 million HDDs in Q1 2021, according to Trendfocus. Meanwhile, fewer than a dozen SSD suppliers, including those featured in our list of best SSDs, shipped 99.438 million solid-state drives in the first quarter, the same company claims (via StorageNewsletter).
Keeping in mind that many modern notebooks cannot accommodate a hard drive (and many desktops are shipped with an SSD by default), it is not particularly surprising that sales of SSDs are high. Furthermore, nowadays users want their PCs to be very responsive, and that more or less requires an SSD. All in all, the majority of new PCs use SSDs as boot drives; some are also equipped with hard drives, and far fewer use HDDs as boot drives.
Exabyte Sales: HDDs Win 4.5:1
But while many modern PCs do not host a lot of data, NAS, on-prem servers, and cloud datacenters do, and this is where high-capacity NAS and nearline HDDs come into play. These hard drives can store up to 18TB of data, and the average capacity of a 3.5-inch enterprise/nearline HDD is about 12TB these days. Thus, HDD sales in terms of exabytes vastly exceed those of SSDs (288.3EB vs 61.5EB).
Meanwhile, it should be noted that the vast majority of datacenters use SSDs for caching and HDDs for bulk storage; it is not really feasible to build a datacenter purely on solid-state storage (3D NAND) or purely on hard drives.
Anyhow, as far as exabyte shipments are concerned, HDDs win. The total capacity of hard drives shipped in the first quarter of 2021 was 288.28 EB, whereas the SSDs sold in Q1 could store ‘only’ 66 EB of data.
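The headline ratios above follow directly from the shipment figures; a quick sanity check:

```python
# Q1 2021 shipment figures cited in this article (Trendfocus)
ssd_units_m, hdd_units_m = 99.438, 64.17   # millions of drives shipped
hdd_eb, ssd_eb = 288.28, 66.0              # exabytes shipped

unit_ratio = ssd_units_m / hdd_units_m     # roughly 3:2 in favor of SSDs
eb_ratio = hdd_eb / ssd_eb                 # roughly 4.5:1 in favor of HDDs

print(f"SSD:HDD unit ratio     = {unit_ratio:.2f}:1")
print(f"HDD:SSD exabyte ratio  = {eb_ratio:.2f}:1")
```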
Since adoption of SSDs by both clients and servers is increasing, dollar sales of solid-state drives are strong too. Research and Markets values the SSD market at $34.86 billion in 2020 and forecasts that it will total $80.34 billion by 2026. To put the numbers into context, Gartner estimated sales of HDDs to reach $20.7 billion in 2020 and expected them to grow to $22.6 billion in 2022.
Samsung Leads the Pack
When it comes to SSD market frontrunners, Samsung is the indisputable champion in terms of both unit and exabyte shipments. Samsung sold its HDD division to Seagate in 2011, a rather surprising move at the time. Yet for the No. 1 supplier of NAND flash memory, the rationale was always there, and today the move looks obvious.
Right now, Samsung leads other SSD makers in terms of both unit (a 25.3% market share) and exabyte (a 34.3% chunk of the market) shipments. Such results are to be expected, as the company sells loads of drives to PC OEMs and high-capacity drives to server makers and cloud giants.
Still, not everything is rosy for the SSD market in general and Samsung in particular due to a shortage of SSD controllers. The company had to shut down its chip manufacturing facility in Austin, Texas, which produces its SSD and NAND controllers, earlier this year, forcing it to consider outsourcing such components. Potentially, the shortage may affect sales of SSDs by Samsung and other companies.
“Shortages of controllers and other NAND sub-components are causing supply chain uncertainty, putting upwards pressure on ASPs,” said Walt Coon, VP of NAND and Memory Research at Yole Développement. “The recent shutdown of Samsung’s manufacturing facility in Austin, Texas, USA, which manufactures NAND controllers for its SSDs, further amplifies this situation and will likely accelerate the NAND pricing recovery, particularly in the PC SSD and mobile markets, where impacts from the controller shortages are most pronounced.”
Storage Bosses Still Lead the Game
Western Digital follows Samsung in terms of SSD unit (18.2%) and capacity (15.8%) share, to a large degree because it sells loads of drives for applications previously served by HDDs, including (we are speculating here) mission-critical hard drives supplied by Western Digital and HGST (as well as Hitachi and IBM before that).
The number three SSD supplier is Kioxia (formerly Toshiba Memory), with a 13.3% unit market share and a 9.4% exabyte market share, according to Trendfocus. Kioxia inherited many shipment contracts (particularly in the business/mission-critical space) from Toshiba. Still, its unit shipments are well below those of its partner Western Digital, in part because the company is more focused on the spot 3D NAND and retail SSD markets.
With its drives aimed primarily at high-capacity server and workstation applications, Intel is the number three SSD supplier in terms of capacity with an 11.5% market share, but when it comes to unit sales, Intel controls only 5% of the market. This is not particularly unexpected, as Intel has always positioned its storage business as part of its datacenter platform division, which is why the company has focused on high-capacity NAND ICs (unlike its former partner Micron) for advanced server-grade SSDs.
Speaking of Micron, its SSD unit market share is 8.4%, whereas its exabyte share is 7.9%, an indication that the company balances between the client and enterprise markets. SK Hynix also ships quite a lot of consumer drives (an 11.8% unit share) along with some higher-end enterprise-grade SSDs (its exabyte share is 9.1%).
Seagate is perhaps the one exception among the historical storage bosses: it controls 0.7% of the exabyte SSD market and only 0.3% of unit shipments. The company serves its loyal clientele and has yet to gain significant share in the SSD market.
Branded Client SSDs
One interesting thing about the SSD market is that while there are loads of consumer-oriented brands selling flash-powered drives, they do not control a significant part of the market in terms of either units or exabytes, according to Trendfocus.
Companies like Kingston, Lite-On, and a number of others make headlines, yet in terms of volume they control about 18% of the market: a significant, but not a definitive, chunk. In terms of exabytes, their share is about 11.3%, which is quite high considering that most of their drives are aimed at client PCs.
Summary
Client storage is going solid state in terms of unit shipments due to performance, dimensions, and power reasons. Datacenters continue to adopt SSDs for caching as well as business and mission-critical applications.
Being the largest supplier of 3D NAND (V-NAND in Samsung’s nomenclature), Samsung continues to be the leading supplier of SSDs both in terms of volumes and in terms of capacity shipments. Meanwhile, shortage of SSD controllers may have an impact on the company’s SSD sales.
Based on current trends, SSDs are set to continue taking unit market share from HDDs. Yet hard drives are not set to give up bulk storage.
Seagate has finally listed its dual-actuator hard disk drive — the Mach.2 Exos 2X14 — on its website and disclosed the official specs. With a 524MB/s sustained transfer rate, the Mach.2 is the fastest HDD ever; its sequential read and write performance is twice that of a normal drive. In fact, it can even challenge some inexpensive SATA SSDs.
The HDD is so far only available to select customers and will not be sold on the open market, at least for the time being. Meanwhile, Seagate’s spec disclosure shows us what kind of performance to expect from multi-actuator high-end hard drives.
Seagate Describes First Mach.2 HDD: the Exos 2X14
Seagate’s Exos 2X14 14TB hard drive is essentially two 7TB HDDs in one standard hermetically sealed helium-filled 3.5-inch chassis. The drive features a 7200 RPM spindle speed, is equipped with a 256MB multisegmented cache, and uses a single-port SAS 12Gb/s interface. The host system considers an Exos 2X14 as two logical drives that are independently addressable.
Seagate’s Exos 2X14 boasts a 524MB/s sustained transfer rate (outer diameter), 304/384 random read/write IOPS, and a 4.16 ms average latency. The Exos 2X14 is even faster than Seagate’s 15K RPM Exos 15E900, so it is indeed the fastest HDD ever.
Furthermore, its sequential read/write speeds can challenge inexpensive SATA/SAS SSDs (at a far lower cost-per-TB). Obviously, any SSD will still be faster than any HDD in random read/write operations. However, hard drives and solid-state drives are used for different storage tiers in data centers, so the comparison is not exactly viable.
But performance increase comes at the cost of higher power consumption. An Exos 2X14 drive consumes 7.2W in idle mode and up to 13.5W under heavy load, which is higher than modern high-capacity helium-filled drives. Furthermore, that’s also higher than the 12W usually recommended for 3.5-inch HDDs.
Seagate says the power consumption is not an issue as some air-filled HDDs are power hungry too, so there are plenty of backplanes and servers that can deliver enough power and ensure proper cooling. Furthermore, the drive delivers quite a good balance of performance-per-Watt and IOPS-per-Watt. Also, data centers can use Seagate’s PowerBalance capability to reduce power consumption, but at the cost of 50% lower sequential read/write speeds and 5%~10% lower random reads/writes.
“3.5-inch air-filled HDDs have operated in a power envelope that is very similar to Exos 2X14 for many years now,” a spokesman for Seagate explained. “It is also worth noting that Exos 2X14 does support PowerBalance which is a setting that allows the customer to reduce power below 12W, [but] this does come with a performance reduction of 50% for sequential reads and 5%-10% for random reads.”
Since the Exos 2X14 is aimed primarily at cloud data centers, all of its peculiarities are set to be mitigated in one way or another, so slightly higher power consumption is hardly a problem for the intended customers. Nonetheless, the drive will not be available on the open market, at least for now.
Seagate has been publicly experimenting with dual-actuator HDDs (dubbed Mach.2) with Microsoft since late 2017. It then expanded availability to other partners, and earlier this year it said it would further increase shipments of such drives.
Broader availability of dual-actuator HDDs requires Seagate to better communicate its capabilities to customers, which is why it recently published the Exos 2X14’s specs.
“We began shipping [Mach.2 HDDs] in volume in 2019 and we are now expanding our customer base,” said Jeff Fochtman, Senior Vice President, Business and Marketing, Seagate Technology. “Well over a dozen major customers have active dual-actuator programs underway. As we increase capacities to meet customer needs, Mach.2 ensures the performance they require by essentially keeping the drive performance inside the storage to your expectations for hyperscale deployments.”
Keeping HDDs Competitive
Historically, HDD makers focused on capacity and performance: every new generation brought higher capacity and slightly increased performance. When the nearline HDD category emerged a little more than a decade ago, hard drive makers added power consumption to their focus as tens of thousands of HDDs per data center consumed loads of power, and it became an important factor for companies like AWS, Google, and Facebook.
As hard drive capacity grew further, it turned out that while normal performance increments brought by each new generation were still there, random read/write IOPS-per-TB performance dropped beyond comfortable levels for data centers and their quality-of-service (QoS) requirements. That’s when data centers started mitigating HDD random IOPS-per-TB performance with various caching mechanisms and even limiting HDD capacities.
In a bid to keep hard drives competitive, their manufacturers have to continuously increase capacity, increase or maintain sequential read/write performance, increase or maintain random read/write IOPS-per-TB performance, and keep power consumption in check. A relatively straightforward way to improve the performance of an HDD is to use more than one actuator with read/write heads, as this can instantly double both the sequential and random read/write speeds of a drive.
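The IOPS-per-TB squeeze described above is easy to see with toy numbers. The sketch below is illustrative only: the 170 IOPS baseline is a generic assumption for a 7200 RPM-class drive, not a vendor spec.

```python
BASE_IOPS = 170  # assumed random read IOPS per actuator; illustrative, not a Seagate figure

def iops_per_tb(capacity_tb: float, actuators: int = 1) -> float:
    # Each actuator serves requests independently, so adding one
    # roughly multiplies the drive's random IOPS.
    return BASE_IOPS * actuators / capacity_tb

# IOPS-per-TB falls as capacity grows; a second actuator claws it back.
for cap in (8, 14, 20):
    print(f"{cap}TB: {iops_per_tb(cap):5.1f} IOPS/TB single-actuator, "
          f"{iops_per_tb(cap, actuators=2):5.1f} dual-actuator")
```

Under these assumptions, a 20TB dual-actuator drive lands back at roughly the IOPS-per-TB of a 10TB single-actuator drive, which is the QoS argument for Mach.2 at high capacities.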
Not for Everyone. Yet
Seagate is the first to commercialize its dual-actuator HDD, but its rivals from Toshiba and Western Digital are also working on similar hard drives.
“Although Mach.2 is ramped and being used now, it’s also really still in a technology-staging mode,” said Fochtman. “When we reach capacity points above 30TB, it will become a standard feature in many large data center environments.”
For now, most of Seagate’s data center and server customers can get a high-capacity single-actuator HDD with the right balance between capacity and IOPS-per-TB performance, so the manufacturer doesn’t need to sell its Exos 2X14 through the channel. Meanwhile, when capacities of Seagate’s HAMR-based HDDs increase to over 50TB sometime in 2026, there will be customers that will need dual-actuator drives.
Microsoft is outlining its vision for the future of meetings today. After a year that’s seen more people dialing into the office remotely, the company is once again banging the drum for hybrid work: a model that combines remote access with in-person work.
While the company has been teasing new concepts for Microsoft Teams in recent months, it’s now starting to bring to life an updated interface for the communications software that will help blend remote colleagues into physical meeting rooms.
A new video details Microsoft’s plans, which include larger screens that help facilitate face-to-face meetings with life-sized remote colleagues. Microsoft imagines meeting rooms where cameras are placed at eye level to improve eye contact, and spatial audio that will help you hear colleagues’ voices when they’re dialed in. This spatial audio will also supposedly make it feel like a remote colleague is more present in a room.
This meeting room of the future looks like it’ll require a lot of hardware, though. Customers will need new intelligent video cameras that can detect who’s talking and bring them into view, speakers capable of spatial audio, and even microphones embedded into the ceilings. Microsoft itself might deliver some of this hardware: the company started selling Intelligent Speakers for Teams recently, which will help bring this future meeting room scenario to life.
This meeting room of the future is all part of a broader push by Microsoft to get ready for what it sees as a hybrid approach to work, where more employees will be working remotely or dipping in and out of the office.
“Hybrid work represents the biggest shift to how we work in our generation,” says Microsoft CEO Satya Nadella, in a LinkedIn post outlining the company’s approach. “And it will require a new operating model, spanning people, places, and processes.” Microsoft is releasing a playbook for businesses looking to adopt a hybrid model, with data and research it has conducted during the pandemic.
Microsoft has been gradually opening up its campus in Redmond, Washington in recent months, and remote meetings have become a big focus point. “In fact, at Microsoft, meeting recordings are the fastest-growing content type,” reveals Nadella. “Employees now expect all meeting information — whether that’s recordings, transcripts, or highlights — to be available on demand, and on double speed, at a time that works for them.”
The push toward hybrid work also opens up security challenges for organizations. Microsoft is embracing this new era by removing its own employees from corporate networks and taking an internet-first approach instead. That means ditching the old era of corporate domains and intranets you need VPNs for and having all data in the cloud. Of course, Microsoft also happens to own Azure, so that makes it both easier for the company to switch and an incentive to promote its cloud business. For other businesses, it’s not always an easy task to embrace the cloud fully.
Microsoft is also asking its own employees who work from home to “run a test of their home networks to ensure they are secure,” and requiring that every mobile device that accesses corporate information is managed. We’ve seen a variety of ransomware attacks and increases in phishing attempts during the pandemic, and Microsoft says the threats keep on increasing.
“The threat landscape has never been more complex or challenging, and security has never been more critical,” explains Nadella. “We intercepted and thwarted a record 30 billion email threats last year and are currently tracking 40-plus active nation-state actors and over 140 threat groups.”
Apple’s software engineering head Craig Federighi had a tricky task in the Epic v. Apple trial: explaining why the Mac’s security wasn’t good enough for the iPhone.
Mac computers have an official Apple App Store, but they also allow downloading software from the internet or a third-party store. Apple has never opened up iOS this way, but it’s long touted the privacy and security of both platforms. Then Epic Games sued Apple to force its hand, saying that if an open model is good enough for macOS, Apple’s claims about iOS ring hollow. On the stand yesterday, Federighi tried to resolve this problem by portraying iPhones and Macs as dramatically different devices — and in the process, threw macOS under the bus.
Federighi outlined three main differences between iOS and macOS. The first is scale. Far more people use iPhones than Macs, and the more users a platform gets, the more enticing that audience becomes to malware developers. Federighi argued iOS users are also much more casual about downloading software, giving attackers better odds of luring them into a download. “iOS users are just accustomed to getting apps all the time,” he said, citing Apple’s old catchphrase: “There’s an app for that.”
The second difference is data sensitivity. “iPhones are very attractive targets. They are very personal devices that are with you all the time. They have some of your most personal information — of course your contacts, your photos, but also other things,” he said. Mobile devices put a camera, microphone, and GPS tracker in your pocket. “All of these things make access or control of these devices potentially incredibly valuable to an attacker.”
That may undersell private interactions with Macs; Epic’s counsel Yonatan Even noted that many telemedicine calls and other virtual interactions happen on desktop. Still, it’s fair to say phones have become many people’s all-purpose digital lockboxes.
The third difference is more conceptual. Federighi basically says iOS users need to be more protected because the Mac is a specialist tool for people who know how to navigate the complexities of a powerful system, while the iPhone and iPad are — literally — for babies.
As Federighi put it:
The Mac from the beginning has been part of a generation of systems where the expectation is you can get software from wherever — you can hand it to your friend on a floppy disk and run it, that’s part of the expectation. But Mac users also expect a degree of flexibility that is useful to what they do. Some of them are software developers, some of them are pros running their unique tools, and having that power is part of it.
I think of it as if the Mac is a car — that you can take it off-road if you want, you can drive wherever you want. And that comes with as a driver, you gotta be trained, there’s a certain level of responsibility in doing that, but that’s what you wanted to buy. You wanted to buy a car. With iOS, we were able to create something where children — heck, even infants — can operate an iOS device, and be safe in doing so. So it’s a really different product.
Federighi expanded on the metaphor a little later, when Apple’s counsel asked if macOS was “safe.”
Safe if operated correctly, much like that car. If you know how to operate a car, and you obey the rules of the road and are very cautious, yes. If you’re not — I’ve had a couple of family members who’ve gotten some malware on their Mac. But ultimately, I think the Mac can be operated safely.
I find the mental image of slowly, cautiously “driving” a Mac around the internet hilarious, because cars are deadly two-ton metal boxes that crush obstacles at superhuman speeds, while my MacBook starts losing keys if I type on it too hard.
If you pair these comments with some earlier statements about macOS, though, it’s a bit less funny. Federighi was bluntly critical of macOS security, saying Apple saw “a level of malware on the Mac that we don’t find acceptable.” If you used the Mac’s security model on the iPhone, “with all those devices, all that value, it would get run over to a degree dramatically worse than is already happening on the Mac,” Federighi said. “iOS has established a dramatically higher bar for customer protection. The Mac is not meeting that bar today.” It’s a distinctly negative evaluation of open computing systems, implying only a relatively small platform could afford that openness without disaster.
Federighi took a far broader view of security than Epic’s own expert witness James Mickens. Mickens testified earlier that iOS wasn’t meaningfully more secure than Android, but he analyzed mostly technical threats to the platforms. Federighi focused on scams, phishing, and other apps that target human psychology instead of computer code — many of which pose serious dangers.
Sometimes, though, the protectiveness felt patronizing. When Federighi explained Apple’s restrictions on cloud gaming, he focused partly on tangible security issues, like how to grant device permissions for different titles on a single gaming app. But he slipped seamlessly into discussing how the concept would be simply too confusing — that iPhone and iPad owners would be befuddled by the notion of launching a separate game catalog. Apple wants iOS devices to feel trustworthy, but at times like that, it seems more like Apple just doesn’t trust its own users.
Google is aiming to build a “useful, error-corrected quantum computer” by the end of the decade, the company explained in a blog post. The search giant hopes the technology will help solve a range of big problems, from feeding the world and tackling climate change to developing better medicines. To develop the technology, Google has unveiled a new Quantum AI campus in Santa Barbara containing a quantum data center, hardware research labs, and quantum processor chip fabrication facilities. It will spend billions developing the technology over the next decade, The Wall Street Journal reports.
The target announced at Google I/O on Tuesday comes a year and a half after Google said it had achieved quantum supremacy, a milestone where a quantum computer has performed a calculation that would be impossible on a traditional classical computer. Google says its quantum computer was able to perform a calculation in 200 seconds that would have taken 10,000 years or more on a traditional supercomputer. But competitors racing to build quantum computers of their own cast doubt on Google’s claimed progress. Rather than taking 10,000 years, IBM argued at the time that a traditional supercomputer could actually perform the task in 2.5 days or less.
This extra processing power could be useful to simulate molecules, and hence nature, accurately, Google says. This might help us design better batteries, create more carbon-efficient fertilizer, or develop more targeted medicines, because a quantum computer could run simulations before a company invests in building real-world prototypes. Google also expects quantum computing to have big benefits for AI development.
Despite claiming to have hit the quantum supremacy milestone, Google says it has a long way to go before such computers are useful. While current quantum computers are made up of fewer than 100 qubits, Google is targeting a machine built with 1,000,000. Getting there is a multistage process. Google says it first needs to cut down on the errors qubits make before it can think about combining 1,000 physical qubits into a single logical qubit. This will lay the groundwork for the “quantum transistor,” a building block of future quantum computers.
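The scale of that roadmap can be sanity-checked with some simple arithmetic. A rough sketch, using only the figures quoted above (about 100 physical qubits today, roughly 1,000 physical qubits per error-corrected logical qubit, and a 1,000,000-physical-qubit target):

```python
# Back-of-the-envelope arithmetic on Google's stated quantum roadmap.
current_physical_qubits = 100
physical_per_logical = 1_000       # physical qubits combined into one logical qubit
target_physical_qubits = 1_000_000

# Logical (error-corrected) qubits the target machine would support.
target_logical_qubits = target_physical_qubits // physical_per_logical
print(target_logical_qubits)  # 1000

# Hardware scale-up factor needed from today's machines.
scale_up = target_physical_qubits // current_physical_qubits
print(scale_up)  # 10000
```

In other words, the target machine would be four orders of magnitude larger than anything built so far, yet still yield only about a thousand usable logical qubits, which gives a sense of why Google frames this as a decade-long project.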
Despite the challenges ahead, Google is optimistic about its chances. “We are at this inflection point,” Hartmut Neven, the scientist in charge of Google’s Quantum AI program, told The Wall Street Journal. “We now have the important components in hand that make us confident. We know how to execute the road map.” Google eventually plans to offer quantum computing services over the cloud.
The foldable computer is almost here, and there will be a version of Windows 10 for it. But maybe not exactly the one you know. It’s called Windows 10X, and it is the operating system that will power dual-screen laptops and folding PCs.
Update, May 18, 2021: Microsoft has officially shelved Windows 10X, with plans to integrate some of its features into Windows 10. Microsoft mentioned the change in a blog post following reports of the change in early May.
The operating system, which was codenamed Santorini internally, is based on the little-spoken of Windows Core OS. The brief version of Core OS is that it’s a stripped-down, simplified version of Windows that can be expanded or shrunk down to meet the needs of different devices.
Is Windows 10X like Windows 10 S?
No. Additions can be made to that Core, and Windows 10X offers “newly implemented support for running Win32 applications in a container,” wrote Windows and education corporate vice president Eran Megiddo in a blog post.
That means that with Windows 10X, you won’t be limited to Universal Windows Platform (UWP) apps. What we don’t know yet is whether there are further limitations in this stripped-down version of Windows 10.
When will Windows 10X be available? What devices will it be on?
Windows 10X was put on the back burner in May 2021 to integrate some of its features into Windows 10.
Windows 10X was at one point scheduled to launch in fall 2020. It was to power Microsoft’s own Surface Neo, as well as computers from partners including Lenovo, Dell, Asus and HP. Each of the devices was to be powered by Intel (the Surface Neo, specifically, was to use one of Intel’s Lakefield chips).
Lenovo confirmed to Tom’s Hardware that its foldable ThinkPad X1 device would use Windows 10X, though it launched with Windows 10 Pro prior to Microsoft changing its plans. Asus would neither confirm nor deny if it planned to use Windows 10X for Project Precog. We have also seen Dell’s Concept Ori and Concept Duet — one with a foldable OLED panel and one with a hinge.
What can Windows 10X do?
Editors’ note: It’s unclear which Windows 10X features will be brought to Windows 10. The below summarizes what we knew about Windows 10X as its own operating system.
Frankly, we’re still in the dark on many of the specifics, though at its October event, Microsoft showed off some neat features that should make using a dual-screen device easier.
One of them was easy access to search. Another was that programs that are opened will show up on the side of the device in which it was invoked. And if you want it on two screens, you can pinch it and drag it to the center, which Microsoft referred to as “spanning.”
With a Bluetooth keyboard attached (the Surface Neo has a magnetic one that covers part of one display), the “WonderBar” is invoked, with room for a touchbar, emojis, smaller screens or other menus.
Additionally, the extra space can be put to good use, like having Outlook in one window and opening new calendar invites or emails in the other without having to switch back and forth.
Microsoft has said that updates to Windows 10X will download and install in 90 seconds, which would be far faster than regular Windows 10.
There may be a little more we know. At Computex, Microsoft corporate vice president of operating systems Roanne Sones detailed a vision for a more modern Windows. That included seamless updates, security, 5G and LTE and sustained performance. She also discussed cloud connectivity, the ability to fit on several form factors, and inputs from pens, touch and even gaze.
Per leaks, the Start Menu will be referred to as the “Launcher,” which sounds more like a phone. Additionally, facial recognition with Windows Hello may be faster, with users skipping the step to dismiss the lock screen before going to the desktop.
Other leaked features include a modernized File Explorer, a quicker Action Center and a focus on Win32 apps and Progressive Web App (PWA) version of Office rather than UWP from its own store.
When will developers get their hands on Windows 10X?
Windows 10X is currently available through emulation with the Microsoft Emulator. You can see our hands-on with it here. You can get the emulator and image from the Microsoft Store. It requires Windows 10 Pro and the latest version of the Windows Insider build.
Microsoft chief product officer Panos Panay said that part of the reason for debuting the Surface Neo early was to empower developers to build experiences for its two screens. Perhaps we’ll hear more about it at the next Microsoft Build, which was scheduled for May 19 to May 21, 2020, in Seattle.
Photo Credits: Microsoft
This article is part of the Tom’s Hardware Glossary.
Amazon has extended its moratorium on law enforcement use of its facial recognition software “until further notice,” according to Reuters. The ban was set to expire in June.
As early as 2018, Amazon employees had pushed Amazon to scale back the project, arguing that documented racial bias in facial recognition could exacerbate police violence against minorities. Amazon defended the project until June 2020, when increased pressure from widespread protests led to the company announcing a yearlong moratorium on police clients for the service.
Rekognition is offered as an AWS service, and many of Amazon’s cloud computing competitors have similar technology. Microsoft announced that it would also not be selling its facial recognition services to police the day after Amazon’s pledge, and IBM said that it would stop developing or researching facial recognition tech altogether the same week. Google doesn’t commercially offer its facial recognition technology to anyone.
Amazon didn’t immediately respond to a request for comment about why the ban was being extended. In a statement provided when the ban on law enforcement use was first issued, Amazon said it hoped Congress would use the year provided by the moratorium to implement rules surrounding the ethical use of facial recognition technology. Part of its statement read:
We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.
So far, no federal legislation has addressed police use of facial recognition, but a number of state and local measures have passed paring back use of the technology. San Francisco was the first US city to ban government use of facial recognition in May 2019, with Oakland following soon after. Portland, Oregon, and Portland, Maine, also passed legislation around the tech in late 2020. The state of Massachusetts failed to pass a proposed ban in December 2020 but has recently passed a modified bill that adds some restrictions on police use of facial recognition.
There are new features, but it’s the biggest design update in years
Google is announcing the latest beta for Android 12 today at Google I/O. It has an entirely new design based on a system called “Material You,” featuring big, bubbly buttons, shifting colors, and smoother animations. It is “the biggest design change in Android’s history,” according to Sameer Samat, VP of product management, Android and Google Play.
That might be a bit of hyperbole, especially considering how many design iterations Android has seen over the past decade, but it’s justified. Android 12 exudes confidence in its design, unafraid to make everything much larger and a little more playful. Every big design change can be polarizing, and I expect Android users who prefer information density in their UI may find it a little off-putting. But in just a few days, it has already grown on me.
There are a few other functional features being tossed in beyond what’s already been announced for the developer betas, but they’re fairly minor. The new design is what matters. It looks new, but Android by and large works the same — though, of course, Google can’t help itself and again shuffled around a few system-level features.
I’ve spent a couple of hours demoing all of the new features and the subsequent few days previewing some of the new designs in the beta that’s being released today. Here’s what to expect in Android 12 when it is officially released later this year.
Material You design and better widgets
Android 12 is one implementation of a new design system Google is debuting called Material You. Cue the jokes about UX versus UI versus… You, I suppose. Unlike the first version of Material Design, this new system is meant to mainly be a set of principles for creating interfaces — one that goes well beyond the original paper metaphor. Google says it will be applied across all of its products, from the web to apps to hardware to Android. Though as before, it’s likely going to take a long time for that to happen.
In any case, the point is that the new elements in Android 12 are Google’s specific implementations of those principles on Pixel phones. Which is to say: other phones might implement those principles differently or maybe even not at all. I can tell you what Google’s version of Android 12 is going to look and act like, but only Samsung can tell you what Samsung’s version will do (and, of course, when it will arrive).
The feature Google will be crowing the most about is that when you change your wallpaper, you’ll have the option to automatically change your system colors as well. Android 12 will pull out both dominant and complementary colors from your wallpaper automatically and apply those colors to buttons and sliders and the like. It’s neat, but I’m not personally a fan of changing button colors that much.
The lock screen is also set for some changes: the clock is huge and centered if you have no notifications and slightly smaller but still more prominent if you do. It also picks up an accent color based on the theming system. I especially love the giant clock on the always-on display.
Android’s widget system has developed a well-deserved bad reputation. Many apps don’t bother with them, and many more haven’t updated their widget’s look since they first made one in days of yore. The result is a huge swath of ugly, broken, and inconsistent widgets for the home screen.
Google is hoping to fix all of that with its new widget system. As with everything else in Android 12, the widgets Google has designed for its own apps are big and bubbly, with a playful design that’s not in keeping with how most people might think of Android. One clever feature is that when you move a widget around on your wallpaper, it subtly changes its background color to be closer to the part of the image it’s set upon.
I don’t have especially high hopes that Android developers will rush to adopt this new widget system, so I hope Google has a plan to encourage the most-used apps to get on it. Apple came very late to the home screen widget game on the iPhone, but it’s already surpassed most of the crufty widget abandonware you’ll find from most Android apps.
Bigger buttons and more animation
As you’ve no doubt gathered already from the photos, the most noticeable change in Android 12 is that all of the design elements are big, bubbly, and much more liberal in their use of animation. It certainly makes the entire system more legible and perhaps more accessible, but it also means you’re just going to get fewer buttons and menu items visible on a single screen.
That tradeoff is worth it, I think. Simple things like brightness and volume sliders are just easier to adjust now, for example. As for the animations, so far, I like them. But they definitely involve more visual flourish than before. When you unlock or plug in your phone, waves of shadow and light play across the screen. Apps expand out clearly from their icon’s position, and drawers and other elements slide in and out with fade effects.
More animations mean more resources and potentially more jitter, but Samat says the Android team has optimized how Android displays core elements. The windows and package manager use 22 percent less CPU time, the system server uses 15 percent less of the big (read: more powerful and battery-intensive) core on the processor, and interrupts have been reduced, too.
Android has another reputation: solving for jitter and jank by just throwing ever-more-powerful hardware at the problem: faster chips, higher refresh rate screens, and the like. Hopefully none of that will be necessary to keep these animations smooth on lower-end devices. On my Pixel 5, they’ve been quite good.
One last bit: there’s a new “overscroll” animation — the thing the screen does when you scroll to the end of a page. Now, everything on the screen will sort of stretch a bit when you can’t scroll any further. Maybe an Apple patent expired.
Shuffling system spaces around
It wouldn’t be a new version of Android without Google mucking about with notifications, Google Assistant, or what happens when you press the power button. With Android 12, we’ve hit the trifecta. Luckily, the changes Google has made mostly represent walking back some of the changes it made in Android 11.
The combined Quick Settings / notifications shade remains mostly the same — though the huge buttons mean you’re going to see fewer of them in either collapsed or expanded views. The main difference in notifications is mostly aesthetic. Like everything else, they’re big and bubbly. There’s a big, easy-to-hit down arrow for expanding them, and groups of notifications are put together into one bigger bubble. There’s even a nice little visual flourish when you begin to swipe a notification away: it forms its own roundrect, indicating that it has become a discrete object.
The thing that will please a lot of Android users is that after just a year, Google has bailed on its idea of creating a whole new power button menu with Google Wallet and smart home controls. Instead, both of those things are just buttons inside the quick settings shade, similar to Samsung’s solution.
Holding down the power button now just brings up Google Assistant. Samat says it was a necessary change because Google Assistant is going to begin to offer more contextually aware features based on whatever screen you’re looking at. I say the diagonal swipe-in from the corner to launch Assistant was terrible, and I wouldn’t be surprised if it seriously reduced how much people used it.
I also have to point out that it’s a case of Google adopting gestures already popular on other phones: the iPhone’s power button brings up Siri, and a Galaxy’s button brings up Bixby.
New privacy features for camera, mic, and location
Google is doing a few things with privacy in Android 12, mostly focused on three key sensors it sees as trigger points for people: location, camera, and microphone.
The camera and mic will now flip on a little green dot in the upper-right of the screen, indicating that they’re on. There are also now two optional toggles in Quick Settings for turning them off entirely at a system level.
When an app tries to use one of them, Android will pop up a box asking if you want to turn it back on. If you choose not to, the app thinks it has access to the camera or mic, but all Android gives it is a black nothingness and silence. It’s a mood.
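The behavior described above, where the app believes it still has camera or mic access but receives only blank data, resembles a classic null-object pattern: the system swaps in a stand-in with the same interface. A minimal sketch of the idea (all names here are hypothetical; this is not the actual Android API):

```python
class RealMicrophone:
    def read_frames(self, n):
        # On a real device this would return live audio samples.
        return [0.1] * n

class SilentMicrophone:
    """Stand-in handed to apps when the system-level mic toggle is off.

    The API surface is identical to the real microphone, so the app
    cannot tell it has been cut off; it simply records silence.
    """
    def read_frames(self, n):
        return [0.0] * n  # silence, same shape as real audio

def get_microphone(mic_toggle_enabled):
    # The system, not the app, decides which implementation is returned.
    return RealMicrophone() if mic_toggle_enabled else SilentMicrophone()

mic = get_microphone(mic_toggle_enabled=False)
print(mic.read_frames(3))  # [0.0, 0.0, 0.0]
```

The design choice matters: because the app sees a valid-looking stream rather than an error, it can’t detect the denial and nag the user about it.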
For location, Google is adding another option for what kind of access you can grant an app. Alongside the options to limit access to one time or just when the app is open, there are settings for granting either “approximate” or “precise” locations. Approximate will let the app know your location with less precision, so it theoretically can’t guess your exact address. Google suggests it could be useful for things like weather apps. (Note that any permissions you’ve already granted will be grandfathered in, so you’ll need to dig into settings to switch them to approximate.)
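One common way to implement an “approximate” location is to round coordinates to a coarser grid before handing them to the app. An illustrative sketch (Android’s actual implementation may differ):

```python
def approximate_location(lat, lon, decimals=2):
    """Round coordinates to roughly kilometer-scale precision.

    Two decimal places of latitude correspond to roughly 1.1 km,
    which is plenty for a weather forecast but usually not enough
    to pin down a street address.
    """
    return round(lat, decimals), round(lon, decimals)

precise = (37.422740, -122.084961)  # example coordinates
coarse = approximate_location(*precise)
print(coarse)  # (37.42, -122.08)
```

A weather app given the coarse pair still gets the right forecast region, which is exactly the use case Google cites.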
Google is also creating a new “Privacy Dashboard” specifically focused on location, mic, and camera. It presents a pie chart of how many times each has been accessed in the last 24 hours along with a timeline of each time it was used. You can tap in and get to the settings for any app from there.
The Android Private Compute Core
Another new privacy feature is the unfortunately named “Android Private Compute Core.” Unfortunately, because when most people think of a “core,” they assume there’s an actual physical chip involved. Instead, think of the APCC as a sandboxed part of Android 12 for doing AI stuff.
Essentially, a bunch of Android machine learning functions are going to be run inside the APCC. It is walled-off from the rest of the OS, and the functions inside it are specifically not allowed any kind of network access. It literally cannot send or receive data from the cloud, Google says. The only way to communicate with the functions inside it is via specific APIs, which Google emphasizes are “open source” as some kind of talisman of security.
Talisman or no, it’s a good idea. The operations that run inside the APCC include Android’s feature for ambiently identifying playing music. That needs to have the microphone listening on a very regular basis, so it’s the sort of thing you’d want to keep local. The APCC also handles the “smart chips” for auto-reply buttons based on your own language usage.
An easier way to think of it is if there’s an AI function you might think is creepy, Google is running it inside the APCC so its powers are limited. And it’s also a sure sign that Google intends to introduce more AI features into Android in the future.
No news on app tracking — yet
Location, camera, mic, and machine learning are all privacy vectors to lock down, but they’re not the kind of privacy that’s on everybody’s mind right now. The more urgent concern in the last few months is app tracking for ad purposes. Apple has just locked all of that down with its App Tracking Transparency feature. Google itself is still planning on blocking third-party cookies in Chrome and replacing them with anonymizing technology.
What about Android? There have been rumors that Google is considering some kind of system similar to Apple’s, but there won’t be any announcements about it at Google I/O. However, Samat confirmed to me that his team is working on something:
There’s obviously a lot changing in the ecosystem. One thing about Google is it is a platform company. It’s also a company that is deep in the advertising space. So we’re thinking very deeply about how we should evolve the advertising system. You see what we’re doing on Chrome. From our standpoint on Android, we don’t have anything to announce at the moment, but we are taking a position that privacy and advertising don’t need to be directly opposed to each other. That, we don’t believe, is healthy for the overall ecosystem as a company. So we’re thinking about that working with our developer partners and we’ll be sharing more later this year.
A few other features
Google has already announced a bunch of features in earlier developer betas, most of which are under-the-hood kind of features. There are “improved accessibility features for people with impaired vision, scrolling screenshots, conversation widgets that bring your favorite people to the home screen” and the already-announced improved support for third-party app stores. On top of those, there are a few neat little additions to mention today.
First, Android 12 will (finally) have a built-in remote that will work with Android TV systems like the Chromecast with Google TV or Sony TVs. Google is also promising to work with partners to get car unlocking working via NFC and (if a phone supports it) UWB. It will be available on “select Pixel and Samsung Galaxy phones” later this year, and BMW is on board to support it in future vehicles.
For people with Chromebooks, Google is continuing the trend of making them work better with Android phones. Later this year, Chrome OS devices will be able to immediately access new photos in an Android phone’s photo library over Wi-Fi Direct instead of waiting for them to sync up to the Google Photos cloud. Google still doesn’t have anything as good as AirDrop for quickly sending files across multiple kinds of devices, but it’s a good step.
Android already has fast pairing for quickly setting up Bluetooth devices, but it’s not built into the Bluetooth spec. Instead, Google has to work with individual manufacturers to enable it. A new one is coming on board today: Beats, which is owned by Apple. (Huh!) Ford and BMW cars will also support one-tap pairing.
Android Updates
As always, no story about a new version of Android would be complete without pointing out that the only phones guaranteed to get it in a timely manner are Google’s own Pixel phones. However, Google has made some strides in the past few years. Samat says that there has been a year-over-year improvement in the “speed of updates” to the tune of 30 percent.
A few years ago, Google changed the architecture of Android with something called Project Treble. It made the system a little more modular, which, in turn, made it easier for Android manufacturers to apply their custom versions of Android without mucking about in the core of it. That should mean faster updates.
Some companies have improved slightly, including the most important one, Samsung. However, it’s still slow going, especially for older devices. As JR Raphael has pointed out, most companies are not getting updates out in what should be a perfectly reasonable timeframe.
Beyond Treble, there may be some behind-the-scenes pressure happening. More and more companies are committing to providing updates for longer. Google also is working directly with Qualcomm to speed up updates. Since Qualcomm is, for all intents and purposes, the monopoly chip provider for Android phones in the US, that should make a big difference, too.
That’s all heartening, but it’s important to set expectations appropriately. Android will never match iOS in terms of providing timely near-universal updates as soon as a new version of the OS is available. There will always be a gap between the Android release and its availability for non-Pixel phones. That’s just the way the Android ecosystem works.
That’s Android 12. It may not be the biggest feature drop in years, but it is easily the biggest visual overhaul in some time. And Android needed it. Over time and over multiple iterations, lots of corners of the OS were getting a little crufty as new ideas piled on top of each other. Android 12 doesn’t completely wipe the slate clean and start over, but it’s a significant and ambitious attempt to make the whole system feel more coherent and consistent.
The beta that’s available this week won’t get there — the version I’m using lacks the theming features, widgets, and plenty more. Those features should get layered in as we approach the official release later this year. Assuming that Google can get this fresh paint into all of the corners, it will make Google’s version of Android a much more enjoyable thing to use.
Financial analysts believe that while hard disk drive pricing has spiked in recent weeks due to Chia coin mining and will continue to be higher than usual for a while, average HDD prices will not get considerably higher than they are today as there is extraordinary demand for specific models rather than for all kinds of drives. But there’s a catch.
Demand for nearline hard drives for data centers has consistently grown for years. In contrast, demand for high-capacity HDDs for consumers has increased in recent weeks because of Chia coin cryptocurrency mining. As a result, prices of higher-capacity hard drives increased in recent weeks, whereas range-topping models have sold out.
The market is experiencing a tight supply of HDDs, which is comparable to the situation in 2012 when flooding in Thailand stopped the production of hard drives in the country. Back then, average prices of HDDs increased by roughly 22%, according to Sidney Ho, an analyst with Deutsche Bank. This time, price hikes will not be that high.
“While the use of storage for Chia is relatively small compared to the total industry output, demand for large consumer hard drives has increased significantly due to Chia mania, with drives sold out on many websites and pricing on secondary markets meaningfully higher than usual,” Ho wrote in a note to clients, reports Barron’s.
Our HDD price analysis from earlier this week demonstrated that prices of midrange HDDs featuring a 6TB or 8TB capacity did not change significantly in recent weeks. 10TB hard drives also did not get substantially more expensive. Meanwhile, 12TB, 14TB, 16TB, and 18TB HDDs got dramatically more expensive in just a few weeks (some SKUs gained $100, others doubled).
The vast majority of 14TB – 18TB HDDs are nearline drives, such as Seagate’s Exos and Western Digital’s WD Gold and Ultrastar. Most of those drives are sold directly to companies like Amazon Web Services, Google, and Microsoft at pre-arranged prices and therefore never reach retail.
Joseph Moore, an analyst with Morgan Stanley, says that Seagate sells about 30% of its HDDs via distributors and retailers, whereas Western Digital ships 40% of its products using these channels. As a result, the vast majority of hard drives from Seagate and Western Digital are not sold through retail and therefore cannot get meaningfully more expensive because of the ongoing Chia mania. Still, the manufacturers can naturally increase their prices because of higher demand and the necessity to procure more components.
In general, while high-capacity HDD retail pricing could increase by well over 22%, average HDD prices will not increase tangibly, as most HDDs are sold at pre-arranged prices. Meanwhile, midrange models are not getting more expensive, owing to modest demand.
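The analysts’ reasoning can be made concrete with back-of-the-envelope numbers. Using the retail shares quoted above (about 30% for Seagate, 40% for Western Digital) and assuming, purely for illustration, a 22% retail price spike with pre-arranged direct-channel prices flat:

```python
def avg_price_change(retail_share, retail_increase, channel_increase=0.0):
    """Blended average price change across retail and direct channels."""
    return retail_share * retail_increase + (1 - retail_share) * channel_increase

# Illustrative figures from the article: 22% retail spike, flat
# pre-arranged prices for drives sold directly to cloud providers.
seagate = avg_price_change(retail_share=0.30, retail_increase=0.22)
wd = avg_price_change(retail_share=0.40, retail_increase=0.22)
print(f"Seagate: {seagate:.1%}, WD: {wd:.1%}")  # Seagate: 6.6%, WD: 8.8%
```

Even a dramatic retail spike thus translates into single-digit average price growth, which is why the analysts expect Chia mania to lift margins without repeating the 2012 flood-era price shock.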
“Our view right now is that Seagate and Western Digital will benefit from the incremental volume, pricing and thus gross margin tailwinds in the short-term, but that cloud demand […] remains the primary driver of results,” Ho wrote. “Longer-term, there just remains too much uncertainty […] on the future general acceptance of Chia or most cryptocurrencies to fundamentally adjust our outlooks for the industry. Whatever Chia becomes, though, it is a positive for the industry to see additional potential consumer growth opportunities.”