Chia is heading to the cloud. Backblaze today announced that Chia Coin farmers can upload their plots to its B2 cloud storage platform for $5 per TB per month.
A quick primer on Chia: It’s supposed to be more environmentally friendly than Bitcoin, Ethereum, and other cryptocurrencies because it uses a less energy-intensive Proof of Space and Time model that relies on “plots” of storage, rather than the Proof of Work model that requires high-end equipment to do complex math.
The amount of storage devoted to Chia is staggering. Chia Explorer’s figures put its netspace at roughly 24 EiB at the time of writing; it was at 16.55 EiB on June 3. For reference, a single EiB is 2^60 bytes, or 1,152,921,504,606,846,976 bytes. That’s an inconceivable amount of space, and yet Chia’s growth hasn’t slowed.
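To make that figure concrete, here’s a quick back-of-the-envelope conversion; the netspace number is the article’s snapshot, not a live figure:

```python
# Sketch: putting Chia's netspace figure into perspective.
# 1 EiB = 2**60 bytes; consumer drives are marketed in decimal TB (10**12 bytes).

EIB = 2**60          # bytes in one exbibyte
TB = 10**12          # bytes in one (decimal) terabyte

netspace_eib = 24    # approximate netspace cited above
netspace_tb = netspace_eib * EIB / TB

print(f"1 EiB = {EIB:,} bytes")
print(f"{netspace_eib} EiB ≈ {netspace_tb:,.0f} TB")  # ≈ 27,670,116 TB
```

In other words, the network already spans the equivalent of tens of millions of consumer-sized hard drives.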
All that storage has to come from somewhere, of course, and manufacturers like WD and Seagate have reportedly started to increase HDD production in response to Chia’s popularity. Others have modified the warranties of their consumer products to reflect the fact that farming the cryptocurrency is tough on mainstream drives.
This brings us to Backblaze. The company said it experimented with Chia farming on its cloud storage platform “for the many Chia farmers out there who’ve already delved into farming and are now facing empty shelves as they seek out storage solutions for their plots.” (Read: It saw a hole in the market and wanted to fill it.)
It took some tinkering, but the company said it was eventually able to make its platform compatible with Chia’s model, which effectively selects a winning “plot” at random from its entire network. The more storage a farmer devotes to the network, the better their chances of winning, which is why the netspace has grown so much.
There are two important caveats. The first is that, while Backblaze said Chia farmers can store their plots on its B2 platform, the initial creation of those plots still has to happen locally. That means anyone hoping to get into this cryptocurrency will still need a PC with solid performance, perhaps one of the best Chia plotting PC builds, to make the plots.
The second caveat is that Backblaze’s claim that Chia plot storage priced at $5 per TB per month can still be profitable comes with a lot of asterisks. Chia Calculator’s simple estimate puts the current monthly earnings at $6.43 for the 10 plots that will fit in that amount of storage. Those are already pretty slim earnings.
But things look even worse using Chia Calculator’s advanced tool, which takes the netspace’s growth into account. Using the calculator’s default values returns an estimated total earnings figure of $7.52 over the course of six months with 10 plots. We’re pretty sure that spending $30 to make $7.52 doesn’t count as making a profit.
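As a sanity check, the arithmetic behind that conclusion can be sketched in a few lines; the plot size is the standard k=32 figure of roughly 101.4 GiB, and the earnings number is Chia Calculator’s estimate quoted above:

```python
# Sketch of the profitability math described above. All inputs are the
# article's figures, not live Chia network data.

plot_size_tb = 0.1014        # a standard k=32 plot is roughly 101.4 GiB
storage_cost_per_tb = 5.0    # Backblaze B2, USD per TB per month
months = 6
plots = 10

storage_tb = plots * plot_size_tb            # ~1 TB for 10 plots
total_cost = storage_tb * storage_cost_per_tb * months
total_earnings = 7.52                        # Chia Calculator's 6-month estimate

print(f"Cost over {months} months: ${total_cost:.2f}")
print(f"Estimated earnings:        ${total_earnings:.2f}")
print(f"Net: ${total_earnings - total_cost:+.2f}")
```

With the netspace still growing, the gap between storage costs and expected rewards only widens over time.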
Backblaze doesn’t seem to expect Chia farmers to be fazed by those figures, though: it warned that they’ll be limited to 100TB of storage unless they reach out to its sales team. Instructions for uploading a Chia plot to the company’s B2 platform are available via the “b2fs4chia” repository on GitHub.
Google researchers published a new paper in Nature on Wednesday describing “an edge-based graph convolutional neural network architecture” that learned how to design the physical layout of a semiconductor in a way that allows “chip design to be performed by artificial agents with more experience than any human designer.” In effect, Google used AI to design faster chips for running AI.
This is a significant advancement in chip design that could have serious implications for the field. Here’s how the researchers described their achievement in the abstract of the paper (the full text of which is unavailable to the public) as printed by Nature:
“Despite five decades of research, chip floorplanning has defied automation, requiring months of intense effort by physical design engineers to produce manufacturable layouts. Here we present a deep reinforcement learning approach to chip floorplanning. In under six hours, our method automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area.”
The capabilities of this method weren’t just conjecture. Google’s researchers said it was used to design the next generation of tensor processing units (TPUs) the company uses for machine learning. So they essentially taught an artificial intelligence to design chips that improve the performance of artificial intelligence.
That loop appears to be intentional. The researchers said they “believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.” Those advancements could have other benefits, too, especially if the AI-designed chips truly are better “in all key metrics.”
It would be interesting to know how this might affect Google’s reported plans to develop its own system-on-chips (SoC) for use in phones and Chromebooks. The company’s already switching to custom processors for some tasks—it reportedly replaced millions of Intel CPUs with its own video transcoding units—as well.
The method described in this paper likely wouldn’t be limited to TPUs; the company would probably be able to use it to improve other application specific integrated circuits (ASICs) meant to serve particular functions. This advancement could make it far easier to develop those ASICs so Google can ditch more off-the-shelf solutions.
Other developers should be able to benefit from the research, too, because Google has made TPUs available via a dedicated board as well as Google Cloud. Assuming the company doesn’t keep these next-generation TPUs to itself, developers ought to be able to take advantage of this artificial intelligence ouroboros before too long.
Ahead of E3, Microsoft and Xbox are putting a heavy emphasis on cloud gaming and the Game Pass subscription program alongside the existing console ecosystem. This includes new, dedicated streaming hardware for any TV or monitor. The company is also updating its cloud datacenters to use the Xbox Series X, so that gamers who stream get its most powerful hardware.
Xbox’s announcement comes ahead of its joint E3 games showcase this Sunday with its recent acquisition, Bethesda, and also comes with a slew of new attempts to push Xbox onto just about any device you might already have. The Xbox division is moving to get its software embedded into internet-connected TVs, which would require no additional hardware other than a controller to play cloud games.
Additionally, the company is looking into new subscription offerings for Game Pass (though it didn’t get into specifics) and into new purchase options for Xbox All Access, which lets people buy the console and Game Pass for a monthly fee rather than paying up front. (This is similar to how many people pay for smartphones in the U.S.)
Building its own streaming devices, however, is a bigger push to make Xbox an ecosystem outside of consoles, and even moves Xbox into competition, to a degree, with Chromecast, Roku and Apple TV for the living room. (Chromecast is scheduled to get Google Stadia support later this month.)
Still, the company sees its consoles, the Xbox Series X and Series S, as its top-notch offering, even while it expands in mobile, on PC and in streaming. In fact, that’s the other major piece of hardware Xbox is working on: the next console.
“Cloud is key to our hardware and Game Pass roadmaps, but no one should think we’re slowing down on our core console engineering. In fact, we’re accelerating it,” said Liz Hamren, corporate vice president of gaming experiences and platforms.
“We’re already hard at work on new hardware and platforms, some of which won’t come to light for years. But even as we build for the future, we’re focused on extending the Xbox experience to more devices today so we can reach more people.”
This isn’t exactly surprising. Consoles start getting designed years in advance, and these days, a mid-cycle refresh is common. Microsoft has also positioned the latest consoles as a “series” of devices, so it’s possible there will be new entries in the line that remain compatible with the current options.
Cloud gaming in Xbox Game Pass Ultimate is set to launch in Brazil, Japan and Australia later this year. Meanwhile, cloud gaming in a web browser, including support for Chrome, Edge and Safari, will go live to Game Pass Ultimate subscribers “in the next few weeks.” The Xbox app on PC will also get cloud gaming integrated this year.
Hamren said that Game Pass has more than 18 million subscribers, though that figure wasn’t broken down between the console, PC and Ultimate plans (the last of which includes game streaming).
The Series X and S haven’t seen a ton of new titles from Microsoft Studios yet, but it sounds like that will change.
“In terms of the overall lineup, we want to get to a point of releasing a new game every quarter…” said Matt Booty, the head of Xbox Game Studios. “We know that a thriving entertainment service needs a consistent and exciting flow of new content. So our portfolio will continue to grow as our service grows.”
Xbox has more than 23 studios and also recently acquired ZeniMax Media, the parent company of Bethesda Game Studios, as well as id Software, ZeniMax Online Studios, Arkane, MachineGames, Tango Gameworks, Alpha Dog and Roundhouse Studios.
Game Pass games are released simultaneously on PC and Xbox, which Xbox head Phil Spencer used to poke at competitors, namely Sony and its PlayStation 5.
“So right now, we’re the only platform shipping games on console, PC and cloud simultaneously,” Spencer said. “Others bring console games to PC years later, not only making people buy their hardware up front, but then charging them a second time to play on PC. And, of course, all of our games are in our subscription service day one, full cross-platform included.” (PlayStation brought Horizon Zero Dawn and Days Gone to PC but long after their PlayStation 4 releases.)
Tim Stuart, the chief financial officer for Xbox, said “we’ll do a lot more in PC for sure.” There have been rumors of big changes to the Microsoft Store on Windows, including making it easier for developers to sell games. That’s another avenue we may see explored soon, as Microsoft reveals what’s next for Windows later this month, after E3.
The Xbox and Bethesda Games Showcase will take place on Sunday, June 13 at 10 a.m. PT / 1 p.m. ET and will stream on YouTube, Twitch, Facebook and Twitter.
Google is using machine learning to help design its next generation of machine learning chips. The algorithm’s designs are “comparable or superior” to those created by humans, say Google’s engineers, but can be generated much, much faster. According to the tech giant, work that takes months for humans can be accomplished by AI in under six hours.
Google has been working on how to use machine learning to create chips for years, but this recent effort — described this week in a paper in the journal Nature — seems to be the first time its research has been applied to a commercial product: an upcoming version of Google’s own TPU (tensor processing unit) chips, which are optimized for AI computation.
“Our method has been used in production to design the next generation of Google TPU,” write the authors of the paper, led by Google’s head of ML for Systems, Azalia Mirhoseini.
AI, in other words, is helping accelerate the future of AI development.
In the paper, Google’s engineers note that this work has “major implications” for the chip industry. It should allow companies to more quickly explore the possible architecture space for upcoming designs and more easily customize chips for specific workloads.
An editorial in Nature calls the research an “important achievement,” and notes that such work could help offset the forecast end of Moore’s Law, the decades-old axiom of chip design stating that the number of transistors on a chip doubles roughly every two years. AI won’t necessarily solve the physical challenges of squeezing more and more transistors onto chips, but it could help find other paths to increasing performance at the same rate.
The specific task that Google’s algorithms tackled is known as “floorplanning.” This usually requires human designers who work with the aid of computer tools to find the optimal layout on a silicon die for a chip’s sub-systems. These components include things like CPUs, GPUs, and memory cores, which are connected together using tens of kilometers of minuscule wiring. Deciding where to place each component on a die affects the eventual speed and efficiency of the chip. And, given both the scale of chip manufacturing and the number of computational cycles involved, nanometer-scale changes in placement can end up having huge effects.
Google’s engineers note that designing floor plans takes “months of intense effort” for humans, but, from a machine learning perspective, there is a familiar way to tackle this problem: as a game.
AI has proven time and time again it can outperform humans at board games like chess and Go, and Google’s engineers note that floorplanning is analogous to such challenges. Instead of a game board, you have a silicon die. Instead of pieces like knights and rooks, you have components like CPUs and GPUs. The task, then, is to simply find each board’s “win conditions.” In chess that might be checkmate, in chip design it’s computational efficiency.
Google’s engineers trained a reinforcement learning algorithm on a dataset of 10,000 chip floor plans of varying quality, some of which had been randomly generated. Each design was tagged with a specific “reward” function based on its success across different metrics like the length of wire required and power usage. The algorithm then used this data to distinguish between good and bad floor plans and generate its own designs in turn.
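As a rough illustration of that setup, here is a minimal sketch of a multi-metric reward function of the kind described; the metrics, weights, and values are placeholders for the example, not Google’s actual formulation:

```python
# Illustrative sketch of a floorplanning reward: a weighted combination of
# layout metrics that the reinforcement learning agent tries to maximize.
# The metric names, weights, and numbers here are invented placeholders.

def reward(wirelength_m, power_w, area_mm2,
           w_wire=1.0, w_power=0.5, w_area=0.25):
    # Lower wirelength, power, and area are all better, so the reward is
    # the negative weighted sum of the three costs.
    return -(w_wire * wirelength_m + w_power * power_w + w_area * area_mm2)

# A layout that shortens its wiring scores higher than one that doesn't.
baseline = reward(wirelength_m=20.0, power_w=5.0, area_mm2=400.0)
improved = reward(wirelength_m=15.0, power_w=5.0, area_mm2=400.0)
print(improved > baseline)  # True
```

The agent never sees “good” or “bad” labels directly; it only sees that some placements earn higher rewards than others, and adjusts its policy accordingly.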
As we’ve seen when AI systems take on humans at board games, machines don’t necessarily think like humans and often arrive at unexpected solutions to familiar problems. When DeepMind’s AlphaGo played human champion Lee Sedol at Go, this dynamic led to the infamous “move 37” — a seemingly illogical piece placement by the AI that nevertheless led to victory.
Nothing quite so dramatic happened with Google’s chip-designing algorithm, but its floor plans nevertheless look quite different to those created by a human. Instead of neat rows of components laid out on the die, sub-systems look like they’ve been scattered across the silicon almost at random. An illustration in Nature shows the difference, with the human design’s orderly rows beside the machine learning design’s jumble, though the layouts in Google’s paper have been blurred because they’re confidential.
This paper is noteworthy, particularly because its research is now being used commercially by Google. But it’s far from the only aspect of AI-assisted chip design. Google itself has explored using AI in other parts of the process like “architecture exploration,” and rivals like Nvidia are looking into other methods to speed up the workflow. The virtuous cycle of AI designing chips for AI looks like it’s only just getting started.
Microsoft is making some significant upgrades to its Xbox Cloud Gaming (xCloud) service in the next few weeks. The Xbox cloud streaming service will be moving to Xbox Series X hardware on the server side, bringing dramatic improvements to load times and frame rates. Microsoft is also moving xCloud on the web out of beta, which is good news for owners of Apple devices.
“We’re now in the final stages of internal testing, and we’ll be upgrading the experience for Ultimate members in the next few weeks,” says Kareem Choudhry, head of cloud gaming at Microsoft. “The world’s most powerful console is coming to Azure.”
The upgrade will include major improvements to xCloud, with players able to benefit from the same faster load times and improved frame rates that are available on Xbox Series X consoles. Microsoft’s xCloud service launched in September, powered by Xbox One S server blades. Load times have been one of the more frustrating aspects of Xbox game streaming, and this upgrade will dramatically reduce the wait when launching games. Players will also be able to access Xbox Series X / S optimized games.
Alongside the server upgrades, Microsoft is launching Xbox Cloud Gaming through the browser for all Xbox Game Pass Ultimate members in the next few weeks. The service is currently in an invite-only beta mode, but the expansion will make it available for all Xbox Game Pass Ultimate members to access xCloud streaming on iPhones, iPads, and on any device with a compatible browser (Chrome, Edge, and Safari).
Microsoft is also expanding cloud gaming to Australia, Brazil, Mexico, and Japan later this year, and hinting at plans for new Xbox Game Pass subscriptions. “We need to innovate to bring our games and services to more people around the world, and we’re investigating how to introduce new subscription offerings for Xbox Game Pass,” says Liz Hamren, head of gaming experiences and platforms at Microsoft.
These new Xbox Game Pass subscriptions will likely include some form of access to xCloud game streaming. Microsoft currently only offers Xbox game streaming to those who subscribe to the Xbox Game Pass Ultimate tier, which is priced at $14.99 per month. It’s easy to imagine a future where Microsoft offers a separate Game Pass tier that only provides access to Xbox Cloud Gaming (xCloud).
Microsoft is also announcing plans for an Xbox TV app and its own streaming stick today, alongside the ability to access and use xCloud on Xbox consoles later this year.
Later this year, Microsoft plans to let Xbox console owners try games before they download them. The new Xbox dashboard feature will allow console players to instantly stream games through Microsoft’s Xbox Cloud Gaming (xCloud) service. It’s part of a push to integrate xCloud more deeply into Xbox consoles and into the Xbox app on Windows PCs.
“Later this year, we’ll add cloud gaming directly to the Xbox app on PCs, and integrated into our console experience, to light up all kinds of scenarios, like ‘try before you download,’” says Kareem Choudhry, head of cloud gaming at Microsoft.
Microsoft isn’t detailing all of the ways that xCloud will appear on Xbox consoles, but trying games before you download them certainly opens up possibilities for Xbox owners who want to know what a game is like before buying it.
Either way, utilizing xCloud to let Xbox players quickly jump into games before they’re downloaded will be particularly useful on day one game launches. With games regularly exceeding 100GB, it often takes hours to download titles if you didn’t plan ahead and preload a game before its launch.
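A quick back-of-the-envelope calculation shows why that matters; the broadband speeds below are illustrative, not drawn from the article:

```python
# How long does a 100GB game download take at common home broadband speeds?
# 1 GB = 8,000 megabits (decimal), so time = size_in_megabits / speed_in_Mbps.

game_gb = 100
for mbps in (50, 100, 500):
    seconds = game_gb * 8_000 / mbps
    print(f"{mbps:>4} Mbps: {seconds / 3600:.1f} hours")
```

Even on a fast connection, that’s a long wait on launch day, which is exactly the gap a “try before you download” stream fills.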
In a briefing with members of the press ahead of Microsoft’s Xbox E3 event on Sunday, the company’s head of Xbox, Phil Spencer, was keen to stress Microsoft’s commitment to Xbox Game Pass and cloud gaming.
“So right now we’re the only platform shipping games on console, PC, and cloud simultaneously,” says Spencer. “Others bring console games to PC years later, not only making people buy their hardware up front, but then charging them a second time to play on PC.”
Spencer is of course referring to Sony and its ongoing efforts to bring more PlayStation games to PC years after their launch. Microsoft obviously prefers its own approach to launching simultaneously across multiple platforms and being available on Xbox Game Pass on day one.
Speaking of Xbox Game Pass, Microsoft is also committing to some form of a timeline for exclusive first-party content for the service. “In terms of the overall lineup, we want to get to a point of releasing a new game every quarter … we know that a thriving entertainment service needs a consistent and exciting flow of new content,” explains Matt Booty, head of Xbox Game Studios. “So our portfolio will continue to grow as our service grows.”
Microsoft isn’t providing an update on its Xbox Game Pass subscription growth yet. The service jumped to 18 million subscribers earlier this year, after growing steadily throughout 2020. Today’s announcements are part of some broader Xbox and xCloud news, including server upgrades to xCloud and Microsoft’s plans for an Xbox TV app and streaming sticks.
Apple has spent considerable time championing itself as a protector of user privacy. Its CEO Tim Cook has repeatedly stated that privacy is “a fundamental human right,” the company has based multiple ad campaigns around its privacy promises, and it’s had high profile battles with authorities to keep its users’ devices private and secure.
The pitch is simple: our products protect your privacy. But this promise has shifted very subtly in the wake of this week’s iCloud Plus announcement, which for the first time bundled new security protections into a paid subscription service. The pitch is still “our products keep you safe,” but now one of those “products” is a monthly subscription that doesn’t come with the device in your box — even if those devices are getting more built-in protections as well.
iCloud has always been one of Apple’s simplest services. You get 5GB of free storage to back up everything from images to messages and app data, and you pay a monthly subscription if you want more (or just want to silence Apple’s ransom note when you inevitably run out of storage). Apple isn’t changing anything about the pricing or storage options as part of the shift to iCloud Plus. Prices will still range from $0.99 a month for 50GB of storage up to $9.99 for 2TB. But what is changing is the list of features you’re getting, which is expanding by three.
The first change sits more within iCloud’s traditional cloud storage remit, and is an expansion of Apple’s existing HomeKit Secure Video offering. iCloud Plus now lets you securely stream and record from an unlimited number of cameras, up from a previous maximum of five.
With the new Private Relay and Hide My Email features, however, iCloud Plus is expanding its remit from a storage-based service into a storage and privacy service. The privacy-focused additions are minor in the grand scheme of the protections Apple offers across its ecosystem, and Apple isn’t using them as justification for increasing the cost of iCloud. But they nevertheless open the door to so-called “premium” privacy features becoming a part of Apple’s large and growing services empire.
The features amount to an admission from Apple about the limits of what privacy protections can do on-device. “What happens on your iPhone stays on your iPhone” was how the company put its promise in a 2019 ad, but when your iPhone needs to connect to the internet to browse the web, receive email, and generally earn the “i” in “iPhone,” some of its privacy inevitably rests on the infrastructure serving it.
The most interesting of these new features is Apple’s Private Relay, which aims to shield your web traffic from prying eyes in iOS 15 and macOS Monterey. It hides your data from both internet service providers and advertisers that might build a detailed profile on you based on your browsing history. While it sounds a bit like a VPN, Apple claims Private Relay’s dual-hop design means even Apple itself doesn’t have a complete picture of your browsing data. Regular VPNs, meanwhile, require a level of trust that means you need to be careful about which VPN you use.
As Craig Federighi, Apple’s senior vice president of software engineering explains, VPNs can protect your data from outsiders, but they “involve putting a lot of trust in a single centralized entity: the VPN provider. And that’s a lot of responsibility for that intermediary, and involves the user making a really difficult trust decision about exposing all of that information to a single entity.”
“We wanted to take that completely out of the equation by having a dual-hop architecture,” Federighi told Fast Company.
Here’s how it works. When you use Private Relay, your internet traffic is sent via two proxy servers on its way to its destination. First, your traffic is encrypted before it leaves your device. Then, once it hits the initial, Apple-operated server, it’s assigned an anonymous IP that hides your specific location. Finally, the second server, which is controlled by a third party, decrypts the web address and forwards the traffic to its destination.
Apple can’t see which website you’re requesting, only the IP address you’re requesting it from, and the third party can’t see that IP address, only the website you’re requesting. (Apple says it also uses Oblivious DNS over HTTPS.) That’s different from most “double VPN” and “multi-hop” VPN services you can subscribe to today, where a single provider may control both servers. You could perhaps combine a VPN and a proxy server to do something similar, though. Apple says Private Relay won’t impact performance, which can be a concern with these other services.
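To make the division of knowledge concrete, here’s a toy model of the dual-hop flow. It only illustrates what each hop can see; the names and IP handling are invented for the example and are not Apple’s implementation:

```python
# Conceptual sketch of the dual-hop split described above: neither relay
# sees both the client's real IP and the destination. This models the
# information flow only; the real system uses layered encryption.

def private_relay(client_ip, destination):
    # Hop 1 (Apple-operated): sees the client's real IP and assigns an
    # anonymous one, but the destination is still encrypted at this point.
    hop1_view = {"client_ip": client_ip, "destination": "<encrypted>"}
    anonymous_ip = "anon-" + client_ip.rsplit(".", 1)[0] + ".0"  # coarse region only

    # Hop 2 (third-party): decrypts the destination, but only ever sees
    # the anonymous IP, never the client's real one.
    hop2_view = {"client_ip": anonymous_ip, "destination": destination}
    return hop1_view, hop2_view

hop1, hop2 = private_relay("203.0.113.7", "example.com")
print(hop1)  # knows who, not where
print(hop2)  # knows where, not who
```

Because the two views never live on the same server, deanonymizing a user would require both operators to collude, which is the trust problem Federighi says the design is meant to remove.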
While Private Relay is theoretically more private than a regular VPN, Apple’s offering is also more limited. You can’t use it to trick websites into thinking you’re accessing them from a different location, so you’re not going to be able to use Private Relay to get around geographical limitations on content blocked by a government or a service like Netflix. And it only seems to cover web browsing data through Safari, not third-party browsers or native apps. In a WWDC developer session about the feature, Apple says that Private Relay will also include DNS queries and a “small subset of traffic from apps,” specifically insecure HTTP traffic. But there was no mention of other browsers, and Apple clarified to The Verge that it’s only handling app traffic when your app technically happens to be loading the web inside a browser window.
In addition to Private Relay, iCloud Plus also includes Hide My Email, a feature designed to protect the privacy of your email address. Instead of needing to use your real email address for every site that requests it (increasing the risk of an important part of your login credentials becoming public, not to mention getting inundated with spam), Hide My Email lets you generate and share unique random addresses which will then forward any messages they receive back to your true email address. It’s another privacy-focused feature that sits outside of iCloud’s traditional area of focus, and could be useful even if similar options have been available for years.
Gmail, for example, lets you use a simple “+” symbol to append extra characters to your email address. Even Apple’s own “Sign In with Apple” service pulls a similar trick, handing out random email addresses for each service you use it with. But the advantage of Apple’s new service is that it gives you an easily accessible shortcut to generate them right in its Mail app and Safari, putting the feature front and center in a way that seems likely to boost its mainstream appeal.
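For illustration, both aliasing styles can be sketched in a few lines; the relay domain below is hypothetical, not a real Apple endpoint:

```python
# Sketch of the two aliasing approaches mentioned above: Gmail-style
# plus-addressing versus fully random relay addresses.

import secrets

def plus_alias(address, tag):
    # Gmail delivers user+anything@gmail.com to user@gmail.com, so the
    # tag identifies which site you gave the address to.
    local, domain = address.split("@")
    return f"{local}+{tag}@{domain}"

def random_alias(domain="relay.example.com"):
    # A random address reveals nothing about the real one; a service like
    # Hide My Email keeps the mapping private and forwards the mail.
    return f"{secrets.token_hex(6)}@{domain}"

print(plus_alias("user@gmail.com", "shopping"))  # user+shopping@gmail.com
print(random_alias())
```

The weakness of plus-addressing is that stripping everything after the “+” recovers your real address; a random relay alias has no such shortcut, which is what makes Apple’s approach stronger.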
Apple might be charging for Private Relay and Hide My Email by bundling them into iCloud subscriptions, but these iCloud Plus additions are still dwarfed by the array of privacy protections already built into Apple’s hardware and software. There’s no sign that any of these existing privacy features will be locked behind a monthly subscription fee anytime soon. Indeed, the list of built-in protections Apple offers continues to grow.
This includes a new Mail Privacy Protection feature in the Mail app in iOS 15, which sends your emails through a relay service to confuse any tracking pixels that might be hiding in them (read more about tracking pixels here). There’s also a new App Privacy Report feature coming to iOS 15 that will show how often apps are accessing your location, camera, microphone, and other data.
But with iCloud Plus, Apple now offers two privacy protections that are distinct from those that are included for free with the purchase of a device, and the division between the two seems arbitrary to some extent. Apple justifies charging for features like Private Relay and Hide My Email because of the incremental costs of running those services, but Mail Privacy Protection also relies on a relay server, which presumably isn’t free to run.
Regardless of its rationale, choosing to charge for these services means that Apple has opened the door to premium privacy features becoming part of its increasingly important services business, beyond just its hardware business. Adherence to privacy was already part of the company’s attempt to lock you into its devices; now it could become part of the attempt to lock you into its services. All the while, those walls around Apple’s garden creep higher and higher.
Ragnar Locker has claimed another victim. BleepingComputer reported yesterday that the ransomware group forced Adata to take its systems offline in May. Even though Adata says it has since resumed normal operations, the group claims that it was able to steal 1.5TB of data before the company detected its attack.
It’s not clear how the ransomware attack affected Adata’s ability to manufacture its storage, memory, and power solutions. The company told BleepingComputer that “things are being moved toward the normal track, and business operations are not disrupted for corresponding contingency practices are effective.”
Ragnar Locker has reportedly claimed that it was able to “collect and exfiltrate proprietary business information, confidential files, schematics, financial data, Gitlab and SVN source code, legal documents, employee info, NDAs, and work folders” as part of this attack. But those files have not yet been shared with the public.
The ransomware group has been operating since at least November 2019. Sophos offered some insight into how the ransomware itself operated in May 2020, and the FBI said in November 2020 that it has targeted “cloud service providers, communication, construction, travel, and enterprise software companies.”
It seems Ragnar Locker isn’t bashful, either, with Threatpost reporting in November 2020 that it took out Facebook ads threatening to leak the 2TB of data it stole from Campari Group unless it was paid $15 million in Bitcoin. Other high-profile attacks have targeted Energias de Portugal (a Portuguese electric company) and Capcom.
Ransomware doesn’t necessarily get as much attention as it used to, but attacks are still common, and they’re still able to affect large companies like Adata or Quanta Computer. The attacks often follow the pattern set by Ragnar Locker by attempting to block access to data while simultaneously threatening to leak it to the public.
Attacks continue to target consumers, too; a recent example is Android ransomware that masqueraded as a mobile version of Cyberpunk 2077 to find its victims. Companies have even started selling “self-defending” SSDs to consumers to ease concerns about being targeted by these kinds of attacks.
Adata told BleepingComputer that it is “determined to devote ourselves making the system protected than ever, and yes, this will be our endless practice while the company is moving forward to its future growth and achievements.” Somebody’s gotta make sure those efforts to capitalize on Chia aren’t disrupted again.
Adobe is now shipping new versions of more Creative Cloud apps that run natively on Apple silicon Macs. Lightroom Classic, Illustrator, and InDesign have all been updated for the M1 processor, and Adobe says that you can expect average performance boosts of up to 80 percent across the suite when compared to an equivalent Intel-based Mac.
Adobe released an M1-native version of Photoshop back in March, while an update for Lightroom came in December. Many photographers (like me) still prefer to use Lightroom Classic, however, which Adobe maintains as a separate app within the Creative Cloud suite, so it’s good to see it get a performance boost to match the newer version.
Based on the results of a third-party benchmarking report commissioned by the company, Adobe says “most operations in Lightroom Classic on an M1 Mac,” including launching, importing, and exporting, will be “about twice as fast” as they were on an equivalent Intel Mac. A new Super Resolution image-enhancing feature added in this update is “more than three times as fast,” meanwhile. The benchmarks were run on 13-inch MacBook Pro laptops, one with an M1 processor and the other with an Intel Core i5. Both laptops had 16GB of RAM and were hooked up to an Apple Pro Display XDR.
Other Lightroom updates include the ability to specify custom aspect ratios when cropping (as opposed to using the freehand tool) and a set of new premium presets created by pro photographers. The collection includes options for styles like “cinematic” and “futuristic,” as well as portrait presets for various skin tones. They’ll be available in both Lightroom and Lightroom Classic on all platforms.
The best moment of this year’s WWDC keynote was a straightforward demo of a macOS feature, Universal Control. The idea is simple enough: it allows you to use the keyboard and trackpad on a Mac to directly control an iPad, and even makes it simple to drag and drop content between those devices.
What made the demo so impressive is how easy and seamless it all seemed. In a classic Apple move, there was no setup required at all. The segment happened so fast that it even seemed (incorrectly, as it turns out) like the Mac was able to physically locate the iPad in space so it knew where to put the mouse pointer.
After Zaprudering the clip and asking Apple a few questions, I now have a better understanding of what’s going on here. It turns out that the entire system is actually simpler than it first appears. It’s essentially a new way to use a bunch of technologies Apple had already developed. That’s not a knock on Universal Control — sometimes the best software features are a result of clever thinking instead of brute force technological improvements.
So here’s what’s happening in that demo.
First, you need to get the iPad and Mac relatively close to each other. Universal Control is built off the same Continuity and Handoff features that have long been a part of iOS and macOS. When the devices are close enough, their Bluetooth modules let each other know. Of course, all the devices here need to be on the same iCloud account for this to work.
Then, you start up Universal Control by dragging your mouse pointer all the way to the left or right edge of your Mac’s screen, then a little bit beyond that edge. When you do, the Mac will assume that you’re trying to drag the mouse over to another device, in this case the iPad.
So there’s no UWB location detection, just good old assumption. One note is that if you have lots of compatible devices, Monterey assumes that you’re dragging towards the last iPad or Mac you interacted with.
At this point, a Wi-Fi Direct connection is made and the iPad will show a small bar on the side with a little bump. It’s a sort of indicator that the iPad is aware you’re trying to drag a mouse into it. Keep dragging and pow, the bump breaks free into a circular mouse pointer. When the mouse is on the iPad screen, both it and the keyboard on your Mac control the iPad. Move it back to the Mac, and you control the Mac.
But there’s a clever little affordance built into that strange bar. There are a couple of arrows inside it, a hint that you can slide that bump up or down before it breaks free into the iPad itself. Doing that is how you line up the iPad’s screen with your Mac’s, so that dragging the mouse between the screens doesn’t result in a weird jump.
You go through the same process to set up a second device with Universal Control — it maxes out at three. If all this automatic setup sounds like a hassle, you can just go into system preferences and set a device as your preferred Universal Control buddy gadget.
However you set it up, you can drag and drop content between devices and it’ll use either Wi-Fi Direct or USB to transfer the files. Of course, if you’re dragging files into the iPad, make sure you have an app open (like Files) that can accept them.
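The flow described above can be boiled down to a tiny state machine: nearby devices announce themselves over Bluetooth, a drag past the screen edge picks a target (the most recently used device, not a UWB position fix), and only then does a Wi-Fi Direct link come up. Here’s an illustrative sketch of that decision logic; this is not Apple’s code, and every name and field in it is invented for illustration:

```python
# Illustrative sketch of Universal Control's target-picking logic.
# All class and field names here are hypothetical, not Apple APIs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Device:
    name: str
    nearby: bool            # advertising over Bluetooth, same iCloud account
    last_interacted: float  # timestamp of the user's last interaction

@dataclass
class UniversalControlSketch:
    devices: list[Device]
    active: Optional[Device] = None

    def pointer_pushed_past_edge(self) -> Optional[Device]:
        """No UWB location lookup: just assume the most recently
        used nearby device is the one you're dragging toward."""
        candidates = [d for d in self.devices if d.nearby]
        if not candidates:
            return None
        target = max(candidates, key=lambda d: d.last_interacted)
        # The real system would now open a Wi-Fi Direct link and the
        # target would show the "bump" indicator at its screen edge.
        self.active = target
        return target

mac = UniversalControlSketch([
    Device("iPad Pro", nearby=True, last_interacted=100.0),
    Device("MacBook Air", nearby=True, last_interacted=42.0),
])
print(mac.pointer_pushed_past_edge().name)  # "iPad Pro" — most recently used
```

The point of the sketch is that the only “smart” part is a heuristic over recency, which matches what Apple told me about there being no spatial detection involved.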
That’s pretty much the long and the short of it. There are still some details to hash out, Apple tells me, and it isn’t available in the first developer preview. If you put your dock on the left or right edge of the screen, for example, it’s unclear if this whole setup will work.
What’s fascinating to me about this system — as I discuss in the video above — is that it’s only really possible because of a long series of software enhancements that have been built into the iPad over the years, including:
Continuity, Handoff, and AirDrop. Universal Control isn’t technically AirDrop, but it’s the same basic idea. All of these are the basic ways that Apple devices communicate directly with each other instead of through the cloud.
Multitasking. I’m not referring to split-screen, but the support for drag-and-drop that came along with the improved windowing options on the iPad.
Keyboard and mouse support. That’s an obvious prerequisite, but it wasn’t always obvious that Apple would put mouse support into the iPad.
Sidecar. Sidecar’s the tool that lets you use an iPad as a second Mac monitor. I don’t think that Universal Control uses the same bits of software as Sidecar, but I do think that there were probably lessons there about latency that would prove useful here.
I had a hunch that there would be a similar story of evolution on the Mac side. I figured that all the iPad and iOS technologies finding their way into the Mac over the last few releases played a part. Catalyst apps turned into native iPad apps for M1 Macs. Control Center, Shortcuts, and Focus mode are all iOS things that are also on the Mac.
Nice idea, but wrong. Apple tells me that the foundation on the Mac side is as simple as it seems, based on Continuity and Handoff.
I hope that Universal Control works as well in the real world as it did in this staged demo — and I know that’s no sure thing. But what I like about the feature is how it’s just a clever recombination of existing technologies that Apple had already built for other purposes.
Inside the Apple ecosystem, you expect that the trade-off for using only Apple devices is getting synergistic integrations like this. They’ve actually been rarer than I would have guessed over the past few years. But as the Mac and the iPad start trading more and more features with each other, I expect we’ll see more of them going forward.
Apple just wrapped up its WWDC 2021 keynote, and it was jam-packed with news and announcements, including our first looks at iOS 15, the new macOS Monterey, big improvements to FaceTime, and more.
Our live blog has moment-by-moment commentary on what Apple announced from Nilay Patel and Dieter Bohn. But if you just want to know the big-ticket items from the show, we’ve got you covered right here.
iOS 15 brings big improvements to FaceTime, updates to notifications, and more
Apple announced iOS 15, which brings improvements to FaceTime such as spatial audio, a new “SharePlay” feature that lets you share media with people on FaceTime virtually, updates to Messages, a new look for notifications, the ability to set different “Focus” statuses, updates to Memories in Photos, a redesign to the weather app, and much more.
Apple is building video and music sharing into FaceTime
Apple’s new SharePlay feature will let you watch or listen to content with others virtually. Apple is also introducing a SharePlay API so that other developers can build apps that support the feature.
Apple is going to use AI to read all the text in your photos
Apple’s new Live Text will digitize text in your photos, which can let you copy and paste text from a photo, for example, or call a phone number that’s in a photo. Apple says it uses “on-device intelligence” for the feature.
You’ll soon be able to use your iPhone as your ID at the airport
Apple’s Wallet will soon let you store your ID in a digital form (in participating US states), which you’ll then use as identification in US airports.
iPadOS 15 lets you drop widgets on the homescreen and brings changes to multitasking
With iPadOS 15, Apple will let you add widgets to the homescreen and access the App Library, which debuted last year on iPhone with iOS 14. Apple is also introducing improvements to multitasking, with new controls that make it easier to manage your apps, and you’ll be able to build apps with Swift Playgrounds.
Apple adds welcome privacy features to Mail, Safari
Apple announced new privacy-focused features at WWDC, including that Apple Mail will block tracking pixels with Mail Privacy Protection and that Safari will hide IPs. Apple is also introducing a new section in settings called the “App Privacy Report.”
Apple’s Siri will finally work without an internet connection with on-device speech recognition
Apple will let Siri process voice requests on device, meaning audio won’t be sent over the web, and Siri can accept many requests while offline.
Apple lets users see family members’ Health data
Apple is introducing a number of new health-focused features, such as the ability to share health data with your family and with healthcare providers.
Apple is making AirPods easier to hear with and find
Apple is making some new changes to AirPods, such as making them easier to find on the Find My network and adding the ability to have Siri announce your notifications.
Apple’s iCloud Plus bundles a VPN, private email, and HomeKit camera storage
Apple’s iCloud is getting a new private relay service and the ability to create burner emails called “Hide My Email.” These will be part of a new iCloud Plus subscription, which will be offered at no additional cost to current paid iCloud users.
Apple announces watchOS 8 with new health features
Apple’s upcoming watchOS 8 has new health features, including a new Mindfulness app, improvements to the Photos watchface, and more.
Siri is coming to third-party accessories
Apple will let third-party accessory makers add Siri to their devices, Apple announced during WWDC. The company showed it on an Ecobee thermostat in its presentation.
macOS Monterey lets you use the same cursor and keyboard across Macs and iPads
Apple’s next big macOS release is called Monterey. One big new feature is the ability to use the same mouse and keyboard across your Mac and your iPad. Apple’s Shortcuts app is also coming to the Mac. And Monterey adds improvements to FaceTime, SharePlay, and Apple’s new “Focus” statuses that are coming to Apple’s other software platforms.
Apple redesigns Safari on the Mac with a new tab design and tab groups
Apple is redesigning Safari with a new look for tabs and tab groups. And on iOS, the tab bar will be at the bottom of the screen to be in easier reach of your thumb. Web extensions are also coming to iOS and iPadOS.
Apple is bringing TestFlight to the Mac to help developers test their apps
Apple announced that it will let developers use TestFlight to test their apps on the Mac. The company also announced Xcode Cloud, which lets you test your apps across all Apple devices in the cloud.
Apple is amping up iCloud with a new set of features called iCloud Plus. The cloud storage service will now come with access to a VPN, burner email addresses, and unlimited storage for HomeKit-enabled home security cameras.
The VPN, called Private Relay, will route your internet traffic through two relays in order to mask who’s browsing and where that data is coming from. The burner email feature, called Hide My Email, lets you create single-use email addresses that will forward to your actual account; that way, you can provide a junk email to a service you don’t trust in case it starts spamming you. Apple already offers a similar feature through Sign In With Apple.
Apple will also include unlimited storage for video from HomeKit-enabled home security cameras. You currently need to pay for at least 200GB of iCloud storage to record video from one camera, and you need to pay for a higher tier to support more streams.
The features are all supposed to be included with existing iCloud plans at no additional cost. Apple didn’t say if the feature would be available through its cheapest plans, though, which don’t currently support HomeKit video storage.
Apple is also introducing new features to help manage your iCloud account. There’s a new recovery feature that allows Apple to message security codes to your friends or family if your own device is lost. There’s also a “Digital Legacy” program that lets you choose who can access your files after you die.
Unfortunately, Apple didn’t announce the iCloud update we were all hoping for: a free storage tier that starts above a paltry 5GB. Maybe next year.
Apple’s digital assistant Siri will now process audio on-device by default, meaning you can use the feature without an internet connection. Apple says the upgrade will also make Siri more responsive.
Processing audio on-device will also make using Siri more private, says Apple. This follows the company’s well-established preference for implementing machine learning features on-device, rather than sending data away to the cloud.
“This addresses the biggest privacy concern we hear from voice assistants, which is unwanted audio recording,” said an Apple presenter during WWDC.
Developing… we’re adding more to this post, but you can follow along with our WWDC 2021 live blog to get the news even faster.
Apple has announced a new feature called Live Text, which will digitize the text in all your photos. This unlocks a slew of handy functions, from turning handwritten notes into emails and messages to searching your camera roll for receipts or recipes you’ve photographed.
This is certainly not a new feature for smartphones, and we’ve seen companies like Samsung and Google offer similar tools in the past. But Apple’s implementation does look typically smooth. With Live Text, for example, you can tap on the text in any photo in your camera roll or viewfinder and immediately take action from it. You can copy and paste that text, search for it on the web, or — if it’s a phone number — call that number.
Apple says the feature is enabled using “deep neural networks” and “on-device intelligence,” with the latter being the company’s preferred phrasing for machine learning. (It stresses Apple’s privacy-heavy approach to AI, which focuses on processing data on-device rather than sending it to the cloud.)
Live Text works across iPhones, iPads, and Mac computers and supports seven languages.
In addition to extracting text from photos, iOS 15 will also allow users to search visually — a feature that sounds exactly the same as Google Lens.
Apple didn’t go into much detail about this feature during its presentation at WWDC, but said the new tool would recognize “art, books, nature, pets, and landmarks” in photos. We’ll have to test it out in person to see exactly how well it performs, but it sounds like Apple is doing much more to apply AI to users’ photos and make that information useful.
When you’ve worked with the Raspberry Pi, or just microelectronics in general, for long enough, you inevitably end up with a box of spare parts and sensors. Maker Andrew Healey decided to put his box of parts to good use with this satellite detection project.
The inspiration struck after Healey received a GPS receiver module as a gift. The end result is a custom dashboard that outputs data in real time with a Windows 98-themed interface. Healey created this platform with modularity in mind, so components can be easily added or removed over time.
The dashboard currently relies on three major accessories: a GT-U7 GPS receiver module, an AM2302 temperature/humidity sensor as well as a POS58 receipt printer. The best Raspberry Pi projects use a slick interface and this one uses CSS to resemble the default Windows 98 theme.
On the first 24-hour test run, the GPS module managed to detect 31 individual satellites! According to Healey, about 8 to 10 satellites are usually visible at a given time. The satellite data is output to a dedicated window on the dashboard. There is also a window used just for displaying the temperature and humidity information from the AM2302 module.
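GPS modules like the GT-U7 report visible satellites as standard NMEA 0183 “GSV” (satellites in view) sentences over serial, so tallying distinct satellites over a session comes down to collecting the PRN numbers from those sentences. Here’s a minimal sketch of that parsing, using hardcoded sample sentences rather than a live serial port (with real hardware you’d read lines via something like pyserial); it isn’t Healey’s code:

```python
# Sketch: count distinct GPS satellites by parsing NMEA GSV sentences.
# GSV layout: $GPGSV,<total msgs>,<msg #>,<sats in view>,
#             then up to four groups of <PRN,elevation,azimuth,SNR>,*checksum
def parse_gsv_prns(sentence: str) -> set[str]:
    """Return the satellite PRN numbers listed in one GSV sentence."""
    if not sentence.startswith("$GPGSV"):
        return set()
    body = sentence.split("*")[0]  # drop the trailing checksum
    fields = body.split(",")
    prns = set()
    # Satellite data starts at field 4, in groups of 4 (PRN, elev, az, SNR).
    for i in range(4, len(fields) - 3, 4):
        if fields[i]:
            prns.add(fields[i])
    return prns

# With real hardware, each line would come from the module's serial stream;
# these two sample sentences stand in for it here.
sample = [
    "$GPGSV,3,1,11,01,40,083,46,02,17,308,41,12,07,344,39,14,22,228,45*75",
    "$GPGSV,3,2,11,19,32,101,44,24,64,260,47,25,12,045,36,29,70,150,48*7A",
]
seen: set[str] = set()
for line in sample:
    seen |= parse_gsv_prns(line)
print(len(seen))  # 8 distinct satellites across the two sample sentences
```

Keeping a running set like this over a full day is how a receiver that only sees 8 to 10 satellites at any moment can log 31 distinct ones in 24 hours.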
The printer has a notably unique function: Healey uses it to receive printed messages from a friend who also has a receipt printer, and to send replies back.
This project is totally open source and available to anyone who has a box of components that need to be put to use. Check out the project page on Healey’s website for more details.
There’s no need to have a GPS module, either. If you want to do this a bit more easily using cloud-based data, check out our tutorial on how to track satellite fly-bys with Raspberry Pi.