A group of Jewish Google employees is calling on the company to increase its support of Palestinians amid Israel’s deadly bombing campaign in Gaza. The conflict started with Israel’s attempt to evict Palestinians from their homes in East Jerusalem, and escalated when militants fired rockets toward Jerusalem and Israel responded with airstrikes.
In an internal letter, Google workers ask CEO Sundar Pichai to put out a statement condemning the attacks, including “direct recognition of the harm done to Palestinians by Israeli military and gang violence.” The letter currently has 250 signatures. (An external version of the note can be found here).
The request is coming from a new employee resource group that formed last year in response to pro-Zionist sentiment within “Jewglers” — Google’s official Jewish ERG. While Jewglers has tried to be apolitical, two current workers say it has supported pro-Israel discussions and is not a safe space to express anti-Zionist beliefs.
This rift led to the formation of the Jewish Diaspora in Tech — a group of Jewish anti-nationalists within Google. “We were compelled to form our own space because of the fact that we were quite literally not allowed to express our viewpoints in the ERG,” says a product marketing manager in the group.
Now, members of the new organization are calling on Google to support freedom of expression internally — particularly around anti-Zionist viewpoints. “Google is the world’s largest search engine and any repression of freedom of expression occurring within the company is a danger not only to Googlers internally but to all people around the world,” they wrote in an FAQ.
They also want Google to terminate any business contracts which support “Israeli violations of Palestinian human rights, such as the Israeli Defense Forces.”
Members of the group say they were inspired to write the letter after Jewglers failed to put out a statement condemning the violence against Palestinians. One worker told The Verge that people in the group were promoting pro-Israel funding opportunities.
Google did not immediately respond to a request for comment from The Verge.
Amazon has extended its moratorium on law enforcement use of its facial recognition software “until further notice,” according to Reuters. The ban was set to expire in June.
As early as 2018, Amazon employees had pushed Amazon to scale back the project, arguing that documented racial bias in facial recognition could exacerbate police violence against minorities. Amazon defended the project until June 2020, when increased pressure from widespread protests led to the company announcing a yearlong moratorium on police clients for the service.
Rekognition is offered as an AWS service, and many of Amazon’s cloud computing competitors have similar technology. Microsoft announced that it would also not be selling its facial recognition services to police the day after Amazon’s pledge, and IBM said that it would stop developing or researching facial recognition tech altogether the same week. Google doesn’t commercially offer its facial recognition technology to anyone.
Amazon didn’t immediately respond to a request for comment about why the ban was being extended. In a statement provided when the ban on law enforcement use was first issued, Amazon said it hoped Congress would use the year provided by the moratorium to implement rules surrounding the ethical use of facial recognition technology. Part of its statement read:
We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.
So far, no federal legislation has addressed police use of facial recognition, but a number of state and local measures have passed paring back use of the technology. San Francisco was the first US city to ban government use of facial recognition in May 2019, with Oakland following soon after. Portland, Oregon, and Portland, Maine, both passed legislation around the tech in late 2020. The state of Massachusetts failed to pass a proposed ban in December 2020 but has recently passed a modified bill that adds some restrictions on police use of facial recognition.
Google just finished its live Google I/O 2021 keynote, where the company unveiled a huge number of announcements, including a new look coming to Android, a bunch of features coming to its Google Workspace productivity suite, and even a new AI that talked as if it were Pluto.
Nilay Patel and Dieter Bohn followed the whole thing in real time right here on our live blog. But if you just want to get caught up on the biggest news from the show, read on for our recap.
Android 12 has a radical and bubbly new look
Google revealed that Android 12 will have a brand-new “Material You” design with a whole lot of new changes. It offers a lot of color and customization, and the new mobile OS will even be able to change system colors to be able to better match your wallpaper. It also offers some new privacy features, including a new privacy dashboard. A beta will be available today, and it works with phones from 11 device makers.
Google is making Workspace more interconnected
Google announced “smart canvas,” a new initiative for its Workspace office software that will make it easier to work between products. Features include “smart chips” that let you link to other Workspace apps and the ability to start a Meet video call right from a Google Doc, Sheet, or Slide.
Google showed off its new LaMDA AI language model and demoed conversations with Pluto and a paper airplane
Google CEO Sundar Pichai showed off some impressive (but pre-recorded) demos of someone having a conversation with an AI powered by its new LaMDA conversation technology. In the demos, the AI “talked” as Pluto and a paper airplane.
Google and Samsung are merging Wear OS and Tizen
Google announced Wear OS (now called just Wear) and Samsung’s Tizen will be combined into a unified platform. That should lead to apps launching faster and longer battery life.
Project Starline creates a 3D model of a person sitting across from you
Google demonstrated Project Starline, which uses high-resolution cameras and depth sensors to create a real-time 3D model of a person who is “sitting” across from you to re-create the feeling of having a face-to-face meeting.
Google is building a camera that’s more inclusive of skin tone
Google is working on updates to its camera and imaging products to make them better at capturing and reproducing skin tones accurately in images.
Google Photos will be able to make animated photos from still shots
Google Photos is getting impressive new “cinematic moments” that use two photos to create a moving image. You can see what they look like in the GIF above. Google is also adding new types of Memories, including ones based on visual patterns in your photos.
Google Photos will let you store photos in a locked folder
Google is adding a feature in Google Photos to let you store photos in a password-protected space on your phone. These photos won’t appear when you’re scrolling in the app. It’s launching first on Google Pixel and coming to more Android devices “throughout the year.”
Google will let you change a site’s password right from inside its password manager
Google announced that it’s adding a way to change a stored password for a website right from inside Google’s password manager. It’s rolling out gradually to Chrome on Android in the US and will be widely available in the coming months.
Google Maps’ AR Live View tool will show more information
Google is making some changes to Google Maps, including rolling out new features for its Live View augmented reality tool. Google is also adding features to make maps more informative, such as showing different restaurants at different times of day, pointing out local landmarks if you’re visiting a new city, and showing how busy a certain area is.
Google is adding a privacy-friendly sandbox for machine learning data in Android
Google is adding a number of privacy-focused features to Android, including a sandbox in Android, the Private Compute Core, that will securely store data used for machine learning.
There are now 3 billion active Android devices globally
Google announced that there are now more than 3 billion active Android devices. That’s a lot!
Android will support digital car keys so you can unlock your car with your phone
Google will let “select Pixel and Samsung Galaxy phones” work as digital car keys starting with Android 12. The feature supports UWB and NFC, though BMW is the only automaker confirmed to support Android’s digital car key so far.
Google debuts new health tool to identify skin conditions using your camera
Google revealed a new health tool that lets you take a photo of a problem skin area and answer questions about your skin type and symptoms to help you better identify skin conditions. Google aims to launch a pilot of the tool this year.
It’s a long-standing problem that dates back to the days of film: image processing tends to be tuned for lighter skin tones rather than those of Black and brown subjects. Google announced an effort to address that today in its own camera and imaging products, with a focus on making images of people of color “more beautiful and more accurate.” These changes will come to Google’s own Pixel cameras this fall, and the company says it will share what it learns across the broader Android ecosystem.
Specifically, Google is making changes to its auto white balance and exposure algorithms to improve accuracy for dark skin tones based on a broader data set of images featuring Black and brown faces. With these tweaks, Google aims to avoid over-brightening and de-saturating people of color in photos for more accurate representation.
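Google hasn’t published the details of its updated algorithms, but auto white balance in general works by estimating and correcting a color cast. Here’s a minimal sketch of the textbook “gray-world” baseline to illustrate the concept — this is not Google’s implementation, which is tuned on a diverse face dataset:

```python
# Illustrative gray-world auto white balance: scale each channel so the
# image's average color becomes a neutral gray. A textbook baseline only;
# Google's actual algorithm and training data are not public.

def gray_world_balance(pixels):
    """pixels: list of (r, g, b) tuples with values in 0-255."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    # Use the mean luminance as the neutral target for all three channels.
    target = (avg_r + avg_g + avg_b) / 3
    gains = (target / avg_r, target / avg_g, target / avg_b)
    return [
        tuple(min(255, round(c * g)) for c, g in zip(p, gains))
        for p in pixels
    ]

# A warm, red-tinted image gets pulled back toward neutral:
balanced = gray_world_balance([(200, 150, 100), (180, 130, 80)])
```

The weakness of this naive approach is exactly the problem Google describes: a global assumption about the “average” scene can over-brighten or de-saturate darker skin, which is why the fix involves retraining on a broader set of faces rather than a simple formula.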
Google has also made improvements for portrait mode selfies, creating a more accurate depth map for curly and wavy hair types — rather than simply cutting around the subject’s hair.
The company says it still has much to do — and it has certainly stumbled in the past on the image recognition and inclusion front — but it’s a welcome step in the right direction.
Google unveiled Android 12 at its opening I/O 2021 keynote, and now you can try the new update yourself as part of the first public beta. The Android 12 public beta is currently available for Google’s Pixel phones (Pixel 3 and up) and will also come to devices from OnePlus, Lenovo, Asus, Oppo, Realme, Sharp, Tecno, TCL, Vivo, Xiaomi, and ZTE. You can enroll your Pixel phone on Google’s Android beta site or find specific instructions for other supported phones on the Android Developers page.
Android 12 brings a cornucopia of new features, but the most exciting are the new visuals. Along with new animations, widgets, and a modified lock screen, Android 12 also offers theming, with the ability to change the colors used across the OS just by changing your wallpaper. Not all of the dramatic visual changes will be available in this first beta, but they should roll out over time as we get closer to release.
Google is tweaking other aspects of the Android experience as well, like notifications and quick settings. It’s also making some improvements to privacy and security, with indicators for when your phone’s camera or microphones are in use, and easier access to all of your apps’ various permissions in one dashboard. For a more in-depth look at Android 12, you can check out our full preview.
Google is working on a next-gen video chat booth that makes the person you’re chatting with appear in front of you in 3D. You can see them from different angles by moving around and even make eye contact, Google said during a preview of the project at its I/O conference today.
The system is called “Project Starline,” and it’s basically a really, really fancy video chat setup. The platform uses multiple cameras and sensors to capture a person’s appearance and shape from different perspectives. It then stitches those together into a 3D model that’s broadcast in real time to whomever they’re chatting with. In Google’s preview, Starline was used for person-to-person calls (not group chats), and both sides seemed to be using specialized tech so it could all work.
In a demo video, people using the tech describe seeing people like they were in the same room together. It’s “as if she was right in front of me,” one person says.
Right now the system is big. It appears to be an entire booth, complete with lights, cameras, and a bench to sit on. Google says it relies on “custom-built hardware and highly specialized equipment.”
The system is currently only available in “a few” of Google’s offices, but it plans on testing the tech with business partners later in the year. Google mentioned health care and media as two industries from which it was seeking feedback.
Google Photos will soon have a cool new trick: if you take two similar images with your phone’s camera, the app will be able to create an animated, moving shot that combines them. It does this by using machine learning to synthesize movement between the two shots, creating new frames between them and resulting in a “vivid moving picture.” Google’s Shimrit Ben-Yair pitched this as something parents will love: multiple attempts at the same shot now come with an added benefit.
The new feature is called “cinematic moments,” and it will work on both Android and iOS, Ben-Yair said.
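Google hasn’t described the model behind “cinematic moments,” but the core idea of synthesizing in-between frames can be sketched with simple linear blending. The real feature uses machine learning to estimate motion; this is only a conceptual illustration:

```python
def interpolate_frames(frame_a, frame_b, steps=3):
    """Linearly blend two frames (flat lists of 0-255 pixel values) into
    `steps` in-between frames. Real frame synthesis, as in "cinematic
    moments," uses learned motion estimation rather than a plain blend."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # blend weight moves from frame_a toward frame_b
        frames.append(
            [round(a * (1 - t) + b * t) for a, b in zip(frame_a, frame_b)]
        )
    return frames

# Two tiny two-pixel "frames" produce a midpoint frame:
mid = interpolate_frames([0, 100], [100, 0], steps=1)  # → [[50, 50]]
```

A plain blend like this produces ghosting when objects move, which is why motion-aware interpolation is needed for convincing results.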
Google also announced a new locked folder feature that will let you hide your most sensitive images behind a folder protected by a password or thumbprint. The app will also soon let you hide specific people or periods of your life from its “memories” to avoid unwanted reminders of the past.
Google announced a bunch of new features for Google Maps at its 2021 I/O developer conference today, including upgrades to its handy Live View tool, which helps you navigate the world through augmented reality.
Live View launched in beta in 2019, projecting walking directions through your camera’s viewfinder, and was rolled out to airports, transit stations, and malls earlier this year. Now, Live View will be accessible directly from Google Maps and will collate a lot of handy information, including how busy shops and restaurants are, recent reviews, and any uploaded photos.
It sounds particularly handy for exploring new destinations remotely. You can browse a street full of interesting restaurants before a trip, checking out which places are busiest and even looking at pictures of the dishes.
We’re making it easier to explore with Live View.
Soon you’ll be able to access it right from the map to see helpful details about places nearby — like their busyness, reviews, photos, and more. #GoogleIO pic.twitter.com/V2g5Q8s7rR
— Google Maps (@googlemaps) May 18, 2021
Live View also now has better labeling for streets on complex intersections, says Google, and it will automatically orient you in relation to known locations (like your home or work).
That’s not all for Maps, though, and as Google’s Liz Reid said onstage, the company is on track to launch “more than 100 AI-driven improvements” for the app this year.
Other upgrades include a wider launch for the detailed street map view, which will be available in 50 new cities by the end of the year, including Berlin, São Paulo, Seattle, and Singapore; new “busyness” indicators for whole areas as opposed to specific shops and businesses; and selective highlighting of nearby businesses based on the time of day that you’re looking at Google Maps. So if you open up the app first thing in the morning, it’ll show you more places to grab a coffee than a candle-lit dinner for two.
A more ambitious upgrade for Maps is using AI to identify and forecast “hard-braking events” when you’re in your car. Think of it like traffic warnings that navigation apps issue based on data collated from multiple users. But instead of just traffic jams, Google Maps will try to identify harder-to-define “situations that cause you to slam on the brakes, such as confusing lane changes or freeway exits.” The company says its tech has the ability to “eliminate over 100 million hard-braking events in routes driven with Google Maps each year” by giving users a heads-up when it knows such an event is on the horizon.
In yet another sign of the growing alliance between Google and Samsung, today both companies announced that they are essentially combining Wear OS — Google’s operating system — and the Tizen-based software platform that has been foundational to Samsung’s wearables for many years. The resulting platform is currently being referred to simply as “Wear,” though that might not be the final name.
Benefits of the joint effort include significant improvements to battery life, 30 percent faster loading times for apps, and smoother animations. It also simplifies life for developers and will create one central smartwatch OS for the Android platform. Google is also promising a greater selection of apps and watch faces than ever before.
“All device makers will be able to add a customized user experience on top of the platform, and developers will be able to use the Android tools they already know and love to build for one platform and ecosystem,” Google’s Bjorn Kilburn wrote in a blog post.
Wired has more details on what’s to come, including the tidbit that Samsung will stick with its popular rotating bezel on future devices — but it’s finished making Tizen-only smartwatches. There will also be a version of Google Maps that works standalone (meaning without your phone nearby) and a YouTube Music app that supports offline downloads.
Samsung confirmed that its next Galaxy Watch will run on this unified platform. And future Fitbit devices will also run the software. Aside from merging the technologies of both platforms, the new Wear OS will include improvements that make it easier to multitask between wrist apps. And some of Fitbit’s “most popular” fitness tracking features will also be included.
Nearly a year after Apple announced the iPhone would become your digital car key, Google is doing the same. Android 12 will officially let “select Pixel and Samsung Galaxy phones” natively double as a car key later this year, the company just announced at its Google I/O 2021 developer conference today.
It’s not like Google is exactly late to the party, though, because automakers are taking their sweet time rolling out the technology, too. In fact, Google’s announcement only names a single brand — BMW — which already announced it would work with Samsung earlier this year. And last we checked, BMW has only committed a single car to support the seemingly “best” version of the digital car key technology.
The end goal here is to replace your bulky key fob — which already lets you enter a car without removing it from your pocket — with your phone instead, using new ultra-wideband (UWB) radios to securely tell your car you’re actually standing right in front of it. Apple’s quietly slipped those radios into (almost) all of its new iPhones and its latest Apple Watch Series 6, and presumably today’s announcement means the next Google Pixel will have them as well. (They could also let you locate your car in a crowded parking lot, something Samsung plans to take advantage of.)
But… whether for backwards compatibility’s sake or because they’re pinching pennies, both Google’s and Apple’s technology also supports near-field communication (NFC), which requires you to physically pull your phone out of your pocket and tap it to a car like the 2021 BMW 5 Series. In some ways, that’s actually a step backward from the humble radio key fob.
Now, BMW likely isn’t the only automaker interested in the potential of UWB: Hyundai, at least, is a member of the FiRa Consortium that’s pushing for UWB specifically, and both Hyundai and Kia are members of the UWB Alliance as well.
(Apple, Google, Samsung, LG, BMW, GM, Honda, Hyundai, and Volkswagen are all on the board of the Car Connectivity Consortium as well, though that group’s promoting both NFC and UWB in its digital car key standard.)
No matter what automakers pick, it seems likely that participation in standards bodies and global Android support will let you use (and share) your digital car key across smartphone brands. Samsung announced as much earlier this year at its Galaxy S21 event. Google says it’ll share more on standards when it launches later this year.
In other Android / automotive news, Google says that cars from BMW and Ford will soon support Bluetooth Fast Pair to pair your Android phone with a single tap. Android Auto itself — the smartphone-based infotainment system that’s been around for six years and directly competes with Apple’s CarPlay — has now made it into 100 million cars, the company says.
Google also promises that going forward, the “vast majority of new vehicles” from GM, Ford, Honda, and other automakers will support Android Auto wirelessly, no need to pull your phone out of your pocket and plug it in. That’s a feature that’s been rolling out rather slowly since its introduction in 2016, partially because it wasn’t native to Android phones until last August, partially because automakers had been holding out on Google-powered infotainment systems, and perhaps because Android Auto has been methodically rolling out across the world instead of launching everywhere simultaneously.
There are new features, but it’s the biggest design update in years
Google is announcing the latest beta for Android 12 today at Google I/O. It has an entirely new design based on a system called “Material You,” featuring big, bubbly buttons, shifting colors, and smoother animations. It is “the biggest design change in Android’s history,” according to Sameer Samat, VP of product management, Android and Google Play.
That might be a bit of hyperbole, especially considering how many design iterations Android has seen over the past decade, but it’s justified. Android 12 exudes confidence in its design, unafraid to make everything much larger and a little more playful. Every big design change can be polarizing, and I expect Android users who prefer information density in their UI may find it a little off-putting. But in just a few days, it has already grown on me.
There are a few other functional features being tossed in beyond what’s already been announced for the developer betas, but they’re fairly minor. The new design is what matters. It looks new, but Android by and large works the same — though, of course, Google can’t help itself and again shuffled around a few system-level features.
I’ve spent a couple of hours demoing all of the new features and the subsequent few days previewing some of the new designs in the beta that’s being released today. Here’s what to expect in Android 12 when it is officially released later this year.
Material You design and better widgets
Android 12 is one implementation of a new design system Google is debuting called Material You. Cue the jokes about UX versus UI versus… You, I suppose. Unlike the first version of Material Design, this new system is meant to mainly be a set of principles for creating interfaces — one that goes well beyond the original paper metaphor. Google says it will be applied across all of its products, from the web to apps to hardware to Android. Though as before, it’s likely going to take a long time for that to happen.
In any case, the point is that the new elements in Android 12 are Google’s specific implementations of those principles on Pixel phones. Which is to say: other phones might implement those principles differently or maybe even not at all. I can tell you what Google’s version of Android 12 is going to look and act like, but only Samsung can tell you what Samsung’s version will do (and, of course, when it will arrive).
The feature Google will be crowing the most about is that when you change your wallpaper, you’ll have the option to automatically change your system colors as well. Android 12 will pull out both dominant and complementary colors from your wallpaper automatically and apply those colors to buttons and sliders and the like. It’s neat, but I’m not personally a fan of changing button colors that much.
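Google hasn’t detailed how Material You extracts its palette from a wallpaper, but the dominant-color step can be sketched roughly like this — the bucket size and the single-color output here are assumptions for illustration, not the actual theming engine:

```python
from collections import Counter

# Illustrative dominant-color extraction: quantize each pixel into coarse
# buckets so near-identical shades count together, then pick the most
# common bucket. Material You's real palette logic (dominant plus
# complementary tones) is more sophisticated than this.

def dominant_color(pixels, bucket=32):
    """pixels: iterable of (r, g, b) tuples; returns one representative color."""
    counts = Counter(
        tuple((c // bucket) * bucket + bucket // 2 for c in p) for p in pixels
    )
    return counts.most_common(1)[0][0]

# A mostly blue wallpaper yields a blue-ish theme color:
theme = dominant_color([(10, 20, 200)] * 3 + [(250, 0, 0)])
```

The quantization step matters: without it, a photo where every pixel differs slightly would have no single “most common” color to build a theme from.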
The lock screen is also set for some changes: the clock is huge and centered if you have no notifications and slightly smaller but still more prominent if you do. It also picks up an accent color based on the theming system. I especially love the giant clock on the always-on display.
Android’s widget system has developed a well-deserved bad reputation. Many apps don’t bother with them, and many more haven’t updated their widget’s look since they first made one in days of yore. The result is a huge swath of ugly, broken, and inconsistent widgets for the home screen.
Google is hoping to fix all of that with its new widget system. As with everything else in Android 12, the widgets Google has designed for its own apps are big and bubbly, with a playful design that’s not in keeping with how most people might think of Android. One clever feature is that when you move a widget around on your wallpaper, it subtly changes its background color to be closer to the part of the image it’s set upon.
I don’t have especially high hopes that Android developers will rush to adopt this new widget system, so I hope Google has a plan to encourage the most-used apps to get on it. Apple came very late to the home screen widget game on the iPhone, but it’s already surpassed most of the crufty widget abandonware you’ll find from most Android apps.
Bigger buttons and more animation
As you’ve no doubt gathered already from the photos, the most noticeable change in Android 12 is that all of the design elements are big, bubbly, and much more liberal in their use of animation. It certainly makes the entire system more legible and perhaps more accessible, but it also means you’re just going to get fewer buttons and menu items visible on a single screen.
That tradeoff is worth it, I think. Simple things like brightness and volume sliders are just easier to adjust now, for example. As for the animations, so far, I like them. But they definitely involve more visual flourish than before. When you unlock or plug in your phone, waves of shadow and light play across the screen. Apps expand out clearly from their icon’s position, and drawers and other elements slide in and out with fade effects.
More animations mean more resources and potentially more jitter, but Samat says the Android team has optimized how Android displays core elements. The windows and package manager use 22 percent less CPU time, the system server uses 15 percent less of the big (read: more powerful and battery-intensive) core on the processor, and interrupts have been reduced, too.
Android has another reputation: solving for jitter and jank by just throwing ever-more-powerful hardware at the problem: faster chips, higher refresh rate screens, and the like. Hopefully none of that will be necessary to keep these animations smooth on lower-end devices. On my Pixel 5, they’ve been quite good.
One last bit: there’s a new “overscroll” animation — the thing the screen does when you scroll to the end of a page. Now, everything on the screen will sort of stretch a bit when you can’t scroll any further. Maybe an Apple patent expired.
Shuffling system spaces around
It wouldn’t be a new version of Android without Google mucking about with notifications, Google Assistant, or what happens when you press the power button. With Android 12, we’ve hit the trifecta. Luckily, the changes Google has made mostly represent walking back some of the changes it made in Android 11.
The combined Quick Settings / notifications shade remains mostly the same — though the huge buttons mean you’re going to see fewer of them in either collapsed or expanded views. The main difference in notifications is mostly aesthetic. Like everything else, they’re big and bubbly. There’s a big, easy-to-hit down arrow for expanding them, and groups of notifications are put together into one bigger bubble. There’s even a nice little visual flourish when you begin to swipe a notification away: it forms its own roundrect, indicating that it has become a discrete object.
The thing that will please a lot of Android users is that after just a year, Google has bailed on its idea of creating a whole new power button menu with Google Wallet and smart home controls. Instead, both of those things are just buttons inside the quick settings shade, similar to Samsung’s solution.
Holding down the power button now just brings up Google Assistant. Samat says it was a necessary change because Google Assistant is going to begin to offer more contextually aware features based on whatever screen you’re looking at. I say the diagonal swipe-in from the corner to launch Assistant was terrible, and I wouldn’t be surprised if it seriously reduced how much people used it.
I also have to point out that it’s a case of Google adopting gestures already popular on other phones: the iPhone’s power button brings up Siri, and a Galaxy’s button brings up Bixby.
New privacy features for camera, mic, and location
Google is doing a few things with privacy in Android 12, mostly focused on three key sensors it sees as trigger points for people: location, camera, and microphone.
The camera and mic will now flip on a little green dot in the upper-right of the screen, indicating that they’re on. There are also now two optional toggles in Quick Settings for turning them off entirely at a system level.
When an app tries to use one of them, Android will pop up a box asking if you want to turn it back on. If you choose not to, the app thinks it has access to the camera or mic, but all Android gives it is a black nothingness and silence. It’s a mood.
For location, Google is adding another option for what kind of access you can grant an app. Alongside the options to limit access to one time or just when the app is open, there are settings for granting either “approximate” or “precise” locations. Approximate will let the app know your location with less precision, so it theoretically can’t guess your exact address. Google suggests it could be useful for things like weather apps. (Note that any permissions you’ve already granted will be grandfathered in, so you’ll need to dig into settings to switch them to approximate.)
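Google hasn’t specified exactly how “approximate” coarsens a fix, but conceptually it reduces coordinate precision before handing the location to the app. A hedged sketch of that idea — the rounding granularity here is chosen for illustration, not taken from Android:

```python
# Illustrative coordinate coarsening for an "approximate" location grant.
# Rounding latitude/longitude to 2 decimal places keeps a fix within
# roughly a kilometer: enough for a weather app, not enough to identify
# an address. Android's actual snapping behavior isn't public in this
# level of detail.

def approximate_fix(lat, lon, decimals=2):
    return (round(lat, decimals), round(lon, decimals))

precise = (37.42206, -122.08410)   # hypothetical precise fix
coarse = approximate_fix(*precise)  # → (37.42, -122.08)
```

The design point is that the app never sees the precise coordinates at all, rather than being trusted to discard them.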
Google is also creating a new “Privacy Dashboard” specifically focused on location, mic, and camera. It presents a pie chart of how many times each has been accessed in the last 24 hours along with a timeline of each time it was used. You can tap in and get to the settings for any app from there.
The Android Private Compute Core
Another new privacy feature is the unfortunately named “Android Private Compute Core.” Unfortunately, because when most people think of a “core,” they assume there’s an actual physical chip involved. Instead, think of the APCC as a sandboxed part of Android 12 for doing AI stuff.
Essentially, a bunch of Android machine learning functions are going to be run inside the APCC. It is walled off from the rest of the OS, and the functions inside it are specifically not allowed any kind of network access. It literally cannot send or receive data from the cloud, Google says. The only way to communicate with the functions inside it is via specific APIs, which Google emphasizes are “open source” as some kind of talisman of security.
Talisman or no, it’s a good idea. The operations that run inside the APCC include Android’s feature for ambiently identifying playing music. That needs to have the microphone listening on a very regular basis, so it’s the sort of thing you’d want to keep local. The APCC also handles the “smart chips” for auto-reply buttons based on your own language usage.
An easier way to think of it: if there’s an AI function you might find creepy, Google is running it inside the APCC so its powers are limited. It’s also a sure sign that Google intends to introduce more AI features into Android in the future.
No news on app tracking — yet
Location, camera, mic, and machine learning are all privacy vectors to lock down, but they’re not the kind of privacy that’s on everybody’s mind right now. The more urgent concern in the last few months is app tracking for ad purposes. Apple has just locked all of that down with its App Tracking Transparency feature. Google itself is still planning on blocking third-party cookies in Chrome and replacing them with anonymizing technology.
What about Android? There have been rumors that Google is considering some kind of system similar to Apple’s, but there won’t be any announcements about it at Google I/O. However, Samat confirmed to me that his team is working on something:
There’s obviously a lot changing in the ecosystem. One thing about Google is it is a platform company. It’s also a company that is deep in the advertising space. So we’re thinking very deeply about how we should evolve the advertising system. You see what we’re doing on Chrome. From our standpoint on Android, we don’t have anything to announce at the moment, but we are taking a position that privacy and advertising don’t need to be directly opposed to each other. That, we don’t believe, is healthy for the overall ecosystem as a company. So we’re thinking about that working with our developer partners and we’ll be sharing more later this year.
A few other features
Google has already announced a bunch of features in earlier developer betas, most of which are under-the-hood kind of features. There are “improved accessibility features for people with impaired vision, scrolling screenshots, conversation widgets that bring your favorite people to the home screen” and the already-announced improved support for third-party app stores. On top of those, there are a few neat little additions to mention today.
First, Android 12 will (finally) have a built-in remote that will work with Android TV systems like the Chromecast with Google TV or Sony TVs. Google is also promising to work with partners to get car unlocking working via NFC and (if a phone supports it) UWB. It will be available on “select Pixel and Samsung Galaxy phones” later this year, and BMW is on board to support it in future vehicles.
For people with Chromebooks, Google is continuing the trend of making them work better with Android phones. Later this year, Chrome OS devices will be able to immediately access new photos in an Android phone’s photo library over Wi-Fi Direct instead of waiting for them to sync up to the Google Photos cloud. Google still doesn’t have anything as good as AirDrop for quickly sending files across multiple kinds of devices, but it’s a good step.
Android already has fast pairing for quickly setting up Bluetooth devices, but it’s not built into the Bluetooth spec. Instead, Google has to work with individual manufacturers to enable it. A new one is coming on board today: Beats, which is owned by Apple. (Huh!) Ford and BMW cars will also support one-tap pairing.
Android Updates
As always, no story about a new version of Android would be complete without pointing out that the only phones guaranteed to get it in a timely manner are Google’s own Pixel phones. However, Google has made some strides in the past few years. Samat says that there has been a year-over-year improvement in the “speed of updates” to the tune of 30 percent.
A few years ago, Google changed the architecture of Android with something called Project Treble. It made the system a little more modular, which, in turn, made it easier for Android manufacturers to apply their custom versions of Android without mucking about in the core of it. That should mean faster updates.
Some companies have improved slightly, including the most important one, Samsung. However, it’s still slow going, especially for older devices. As JR Raphael has pointed out, most companies are not getting updates out in what should be a perfectly reasonable timeframe.
Beyond Treble, there may be some behind-the-scenes pressure happening. More and more companies are committing to providing updates for longer. Google also is working directly with Qualcomm to speed up updates. Since Qualcomm is, for all intents and purposes, the monopoly chip provider for Android phones in the US, that should make a big difference, too.
That’s all heartening, but it’s important to set expectations appropriately. Android will never match iOS in terms of providing timely near-universal updates as soon as a new version of the OS is available. There will always be a gap between the Android release and its availability for non-Pixel phones. That’s just the way the Android ecosystem works.
That’s Android 12. It may not be the biggest feature drop in years, but it is easily the biggest visual overhaul in some time. And Android needed it. Over time and over multiple iterations, lots of corners of the OS were getting a little crufty as new ideas piled on top of each other. Android 12 doesn’t completely wipe the slate clean and start over, but it’s a significant and ambitious attempt to make the whole system feel more coherent and consistent.
The beta that’s available this week won’t get there — the version I’m using lacks the theming features, widgets, and plenty more. Those features should get layered in as we approach the official release later this year. Assuming that Google can get this fresh paint into all of the corners, it will make Google’s version of Android a much more enjoyable thing to use.
There are over 3 billion active Android devices in the wild now. Sameer Samat, VP of product management at Google, announced the news at Google I/O 2021, which is live, but totally online, this year.
Google added over 500 million active Android devices since its last developer conference in 2019 and 1 billion devices since 2017. (That was when it hit the 2 billion mark.) The number is taken from the Google Play Store, which doesn’t take into account devices based on Android that use alternative stores, including Amazon Fire devices and the myriad Chinese Android-based devices that avoid using Google’s apps altogether. That means the number of active Android devices is likely much higher than what Samat announced on the live stream.
The news feels like a flex against Apple, too. Apple announced over 1 billion active iPhones in the wild earlier this year — a mere third of the number of Android devices. It’s a bold reminder that Apple’s smartphone and tablet dominance is largely limited to the United States and a few other regions. For everywhere else, it’s Android.
Developing… we’re adding more to this post, but you can follow along with our Google I/O 2021 live blog to get the news even faster.
Google is introducing a new feature to Google Photos that lets you hide specific pictures so they won’t show up in your photo feed or in other apps. The feature, called Locked Folder, will put whatever sensitive pictures you’d rather not share behind a password.
In its I/O presentation, Google used the example of parents trying to keep a puppy purchase secret from their kids — though the feature should be useful for any sensitive images that you don’t want to share with others. It’s easy to see how this feature could be useful: who hasn’t handed their phone to someone to show off one or two pictures, then suddenly realized, “Wow, I hope they don’t scroll too far to the left or right”? Locked Folder will help Photos users avoid that fear by keeping whatever sensitive pictures you’ve got on the service out of your main photos feed.
With Locked Folder in @googlephotos, you can add photos to a passcode protected space and they won’t show up as you scroll through Photos or other apps on your phone. Locked Folder is launching first on Google Pixel, and more Android devices throughout the year. #GoogleIO pic.twitter.com/yGNoQ8vLdq
— Google (@Google) May 18, 2021
Google says the feature will launch first on Pixels but will be coming to more Android phones “throughout the year.” Similar features have been available through third-party apps already, and many Samsung users already have access to a Secure Folder feature included on some Galaxy phones.
At the I/O developer conference on Tuesday, Google announced a range of new privacy measures, including a new partition within Android to manage machine learning data more securely.
Android’s new Private Compute Core will be a privileged space within the operating system, similar to the partitions used for passwords or sensitive biometric data. But instead of holding credentials, the computing core will hold data for use in machine learning, like the data used for the Smart Reply text message feature or the Now Playing feature for identifying songs.
While neither feature is sensitive in itself, they both draw on sensitive data like personal texts and real-time audio. The partition will make it easier for the operating system to protect that data, while still keeping it available for system-level functions.
“This means that all sensitive audio and language processing happens exclusively on your device and isolated from the network to preserve your privacy,” Google explained in a post announcing the feature.
Despite the name, the Android Private Compute Core is not a separate hardware chip; the partition exists entirely in software. While that lowers the absolute level of data protection, it should also make the system easier to deploy across a range of devices.
The new system was announced alongside a range of other privacy features for Android, including a new privacy dashboard and a new setting for approximate location sharing.
Artificial intelligence is a huge part of Google’s business, and at this year’s I/O conference, the company highlighted its work with AI language understanding. The star of the show was an experimental model called LaMDA, which Google says could one day supercharge the ability of its conversational AI assistants and allow for more natural conversations.
“It’s really impressive to see how LaMDA can carry on a conversation about any topic,” said Google CEO Sundar Pichai during the presentation. “It’s amazing how sensible and interesting the conversation is. But it’s still early research, so it doesn’t get everything right.”
To demonstrate LaMDA’s abilities, the company showed videos of two short conversations conducted with the model. In the first, LaMDA answered questions while pretending to be Pluto, and in the second, it was standing in for a paper airplane. As Pichai noted, the model was able to refer to concrete facts and events throughout the conversation, like the New Horizons probe that visited Pluto in 2015. (For some strange reason, both Pichai and LaMDA-Pluto erroneously referred to Pluto as a planet! It is, of course, a dwarf planet.)
You can see the conversations above and below:
But what’s the point of holding a conversation with a machine? Well, as Pichai noted onstage, so much of Google’s AI work is about retrieving information, whether that’s through translating other languages or understanding what users mean when they search the web. If Google can get AI to understand language better, it can improve its core products.
But it’s definitely hard work. As Pichai noted: “Language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. […] The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges.”