The open source RISC-V instruction set architecture is gaining more mainstream attention in the wake of Intel’s rumored $2 billion bid for SiFive, the industry’s leading RISC-V design house. Unfortunately, RISC-V has long been relegated to smaller chips and microcontrollers, limiting its appeal. However, that should change soon as RISC-V International, the organization that oversees the development of the RISC-V instruction set architecture (ISA), has announced plans to extend the architecture to high performance computing, AI, and supercomputing applications.
The RISC-V open-source ISA was first introduced in 2016, but the first cores were only suitable for microcontrollers and some basic system-on-chip designs. However, after several years of development, numerous chip developers (e.g., Alibaba) have created designs aimed at cloud data centers, AI workloads (like the Jim Keller-led Tenstorrent), and advanced storage applications (e.g., Seagate, Western Digital).
That means there’s plenty of interest from developers for high-performance RISC-V chips. But to foster adoption of the RISC-V ISA by edge, HPC, and supercomputing applications, the industry needs a more robust hardware and software ecosystem (along with compatibility with legacy applications and benchmarks). That’s where the RISC-V SIG for HPC comes into play.
At this point, the RISC-V SIG-HPC has 141 members on its mailing list and 10 active members in research, academia, and the chip industry. The key task for the growing SIG is to propose various new HPC-specific instructions and extensions and work with other technical groups to ensure that HPC requirements are considered for the evolving ISA. As a part of this task, the SIG needs to define AI/HPC/edge requirements and plot a feature and capability path to a point when RISC-V is competitive against Arm, x86, and other architectures.
There are short-term goals for the RISC-V SIG-HPC, too. In 2021, the group will focus on the HPC software ecosystem. First up, the group plans to find open source software (benchmarks, libraries, and actual programs) that works with the RISC-V ISA right out of the box, a process that is set to be automated. The first investigations will target applications like GROMACS, Quantum ESPRESSO, and CP2K; libraries like FFT and BLAS; compilers like GCC and LLVM; and benchmarks like HPL and HPCG.
The RISC-V SIG-HPC will develop a more detailed roadmap after the ecosystem is solidified. The long-term goal of the RISC-V SIG is to build an open-source ecosystem of hardware and software that can address emerging performance-demanding applications while also accommodating legacy needs.
How many years will that take? Only time will tell, but industry buy-in from big players, like Intel, would certainly help speed that timeline.
It may look like the unlikely outcome of a teleportation experiment involving a Sega Bass Fishing controller and a Game Boy Micro, but Playdate is a tiny handheld games console with a novel form of input.
In case this is your first contact with the boxy yellow machine, it’s an extremely low-powered attempt to bring bite-sized games to a dedicated system instead of a cellphone. The crank on the side is a gameplay tool, and doesn’t charge the system or act as a Van de Graaff generator. The only hair-raising will, hopefully, come from the games.
The specs fall below a Raspberry Pi Zero W but well above a Raspberry Pi Pico. Playdate is powered by an Arm Cortex-M7 CPU running at just 180MHz, with 16MB of RAM, 4GB of flash storage (up from an initial 2GB), and a 2.7-inch, 400 × 240, 1-bit Sharp Memory LCD. The display produces pure black and white with no shades of gray, which means dithering is required to add texture and tone to a game. The screen lacks a backlight, relying instead on its reflective panel to illuminate your games. Anyone who had a Game Boy will be familiar with these principles, as the reflective screen and dithered graphics were part of Nintendo’s classic handheld. There’s Wi-Fi and Bluetooth on board, along with a headphone jack and a USB-C port for charging.
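Dithering on a 1-bit display works by rounding each pixel to pure black or white and spreading the rounding error onto neighboring pixels, so that mid-tones emerge from the pattern. Here’s a minimal sketch of the classic Floyd-Steinberg approach for illustration; it is not taken from the Playdate SDK, which has its own tooling:

```python
# Minimal sketch of error-diffusion (Floyd-Steinberg) dithering, the kind
# of technique a 1-bit display like the Playdate's forces developers to
# use to fake shades of gray.

def dither_1bit(pixels):
    """Convert a 2D grid of 0-255 grayscale values to pure black (0)
    and white (255), diffusing the quantization error to neighbors."""
    h, w = len(pixels), len(pixels[0])
    # Work on a float copy so accumulated error is not truncated.
    buf = [[float(v) for v in row] for row in pixels]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = old - new
            # Push the error onto unprocessed neighbors
            # using the standard Floyd-Steinberg weights.
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

Feed it a patch of mid-gray (128) and you get an alternating checkerboard-like pattern of black and white pixels that reads as gray from a distance, which is exactly the trick old Game Boy titles leaned on.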
US software publisher Panic Inc. (which recently moved into games with titles like Firewatch and Untitled Goose Game) and Swedish industrial designer Teenage Engineering are the brains behind this quirky and interesting device.
Games, which are being made by the likes of Bennett Foddy, Zach Gage and Katamari Damacy creator Keita Takahashi, will arrive as a ‘season’, with 24 (recently doubled from 12) of them delivered wirelessly to the handheld, two a week, for no extra charge. The platform is open source and will allow games that aren’t part of an official ‘season’ to be side-loaded. An SDK will be available for Windows, Linux and macOS; it will include a simulator and debugger, and will support the C and Lua programming languages.
In an in-depth interview with Edge magazine, reproduced by Gamesradar+, Panic Inc. co-founder Cabel Sasser describes the device’s inception: “The first question from the CEO was, ‘Do you really think anyone’s going to buy this?’ I was like, ‘I’m not sure. But it’s something we really want to do, if you can help?’ And then the consultants were like, ‘It’s going to cost you, bare minimum, a couple million bucks to even remotely get this thing off the ground.’”
The pre-order price has recently been raised (hence the increase in specs and number of games) and currently sits at $179. Pre-orders begin in July from play.date.
According to Bloomberg’s sources, Intel has offered $2 billion for startup chip designer SiFive, though neither company has officially acknowledged the offer. SiFive is the leading designer of chips based on the open source RISC-V architecture that has coincidentally attracted much more interest in the wake of Nvidia’s ongoing acquisition of Arm for $40 billion. The reports of the possible SiFive acquisition come on the heels of SiFive’s announcement that it will collaborate with Intel’s newly-christened foundry services.
SiFive, most recently valued at $500 million, is reportedly considering takeover offers from multiple firms and it may still choose to remain an independent outfit. Much of the new interest in SiFive and RISC-V stems from firms looking to avoid any potential pitfalls due to Nvidia’s potential control of Arm.
RISC-V is an open source instruction set architecture (ISA) for RISC chips that discards the traditional notion of licensing fees associated with designing chips around a certain ISA, as we see with Arm. The ISA is maintained by the non-profit RISC-V International organization comprised of more than 1,000 members in 50 countries.
RISC-V is most commonly used in microcontrollers and small, simple chips, and it has seen considerable industry uptake; WD, for example, ships over two billion RISC-V controllers a year in its products. The RISC-V organization plans to evolve the standard to accommodate faster chips for high-performance applications in the future.
Chinese chip firms have shown a keen interest in RISC-V chip designs in the wake of US restrictions on their use of Arm designs due to US national security interests. Naturally, RISC-V’s open source licensing, which eschews fees, and the fact that the organization is incorporated in Switzerland and doesn’t “take a political position on behalf of any geography” are enticing to Chinese firms.
Intel CEO Pat Gelsinger’s recent announcement that the company would begin licensing its own x86 processor designs to other firms as part of its new IDM 2.0 initiative was surprising, and the company even revealed that it would be open to fabbing third-party Arm designs in its new custom foundry outfit, Intel Foundry Services (IFS).
If the reports are true, it’s natural to expect that Intel will look to add RISC-V designs to its own arsenal and also offer custom designs to customers of its new foundry services business, all of which ties in nicely with the company’s pledge to help “re-shore” semiconductor manufacturing in the US.
iOS 4 originally appeared just over a decade ago as Apple’s first mobile operating system to drop the iPhone OS naming convention. An 18-year-old developer has now lovingly recreated iOS 4 as an iPhone app, and it’s a beautiful blast from the past. If you never got the chance to use iOS 4, or you’re a fan of the iPhone 3G, OldOS almost flawlessly pulls off the experience of using an iPhone from a decade ago.
OldOS is “designed to be as close to pixel-perfect as possible,” says Zane, the developer behind the app. It’s all built using Apple’s SwiftUI, so it includes buttery smooth animations and even the old iPhone home button that vibrates with haptic feedback to make it feel like a real button.
Apple’s built-in iOS 4 apps have also been recreated here, and it’s a real flashback to the skeuomorphic days of the iPhone whenever they launch. Photos lets you view your existing camera roll as you would have 10 years ago, while Notes transports you back to the yellow post-it notes of yesteryear.
Today is Launch Day
Introducing OldOS — iOS 4 beautifully rebuilt in SwiftUI.
* Designed to be as close to pixel-perfect as possible.
* Fully functional, perhaps even usable as a second OS.
* Fully open source for all to learn, modify, and build on. pic.twitter.com/K0JOE2fEKM
— Zane (@zzanehip) June 9, 2021
The only apps that don’t work as you might expect are Messages and YouTube. Apple used to bundle YouTube directly into its operating system, and the developer behind OldOS says there are “still some major issues with YouTube” and Messages that they’re working to fix.
Everything else is mostly flawless, and you can even browse the web in the old UI of Safari. The App Store also lists apps that will redirect you to the modern store to download and install. There are some things that simply don’t work, including folders and the jiggle mode for rearranging home screen apps.
We’ve seen this type of nostalgic app appear on the iPhone before. Rewound launched in the App Store back in December 2019, turning an iPhone into an iPod. Apple quickly pulled the app a few days later, citing store violations.
This latest OldOS app is available on Apple’s TestFlight service, which is typically used to distribute beta versions of apps. That means it probably won’t last long before Apple takes exception, so grab it while you can. Zane has also published the source code for the entire project on GitHub, so if you’re willing to compile it in Xcode then it will live forever.
When you’ve worked with the Raspberry Pi, or just microelectronics in general, for long enough, you inevitably end up with a box of spare parts and sensors. Maker Andrew Healey decided to put his box of parts to good use with this satellite detection project.
The inspiration began after receiving a GPS receiver module as a gift. The end result is a custom dashboard that outputs data in real-time with a Windows 98 themed interface. Healey created this platform with modularity in mind so components can be easily added or removed over time.
The dashboard currently relies on three major accessories: a GT-U7 GPS receiver module, an AM2302 temperature/humidity sensor, and a POS58 receipt printer. The best Raspberry Pi projects have a slick interface, and this one uses CSS to resemble the default Windows 98 theme.
On the first 24-hour test run, the GPS module managed to detect 31 individual satellites! According to Healey, about 8 to 10 satellites are usually visible at a given time. The satellite data is output to a dedicated window on the dashboard. There is also a window used just for displaying the temperature and humidity information from the AM2302 module.
The printer has a notably unique function: Healey uses it to print messages from a friend who also owns a receipt printer, so the two can exchange replies.
This project is totally open source and available to anyone who has a box of components that need to be put to use. Check out the project page on Healey’s website for more details.
There’s no need to have a GPS receiver, though. If you want to do this a bit more easily using cloud-based data, check out our tutorial on how to track satellite fly-bys with Raspberry Pi.
Twitter Blue — the social network’s first subscription product that adds an undo button to tweets among other minor additions like changing the color of icons and adding folders for bookmarks — launched on Thursday. It’s limited to Canada and Australia for now but has already garnered attention for lacking the features people would be willing to pay Twitter for, like no ads, or better tools to handle harassment.
Which makes Twitter CEO Jack Dorsey’s thread today, the day after the product launched, somewhat humorous and frustrating. What can we say: the guy loves to talk about bitcoin, even when other more pressing matters are at hand!
Square is considering making a hardware wallet for #bitcoin. If we do it, we would build it entirely in the open, from software to hardware design, and in collaboration with the community. We want to kick off this thinking the right way: by sharing some of our guiding principles.
— jack (@jack) June 4, 2021
I’m not a Bitcoin expert, but sure, making a product in the open, with the goal of being inclusive and open source sounds fine by me, especially since Square is already heavily invested in the currency. As with most things people tweet, best to take this as off-the-cuff musing rather than an official product announcement. Dorsey’s made similar pronouncements via Twitter thread — like funding a decentralized version of Twitter — that have only made small amounts of public progress since they were tweeted into the ether.
What this might highlight, though, is how Dorsey’s attention is split acting as the CEO of both Twitter and payment company Square. The issue has been raised before by one of the company’s investors, Elliott Management. Running Twitter is a job he’s been increasingly checked out of, with the Wall Street Journal reporting in October that Dorsey is “hands-off to the extreme, delegating most major decisions to subordinates in part so he can pursue his personal passions.” He came out and said today at the Bitcoin 2021 conference in Miami that if he wasn’t running Square and Twitter, he’d be working on bitcoin. He seems pretty good at finding a way to work on bitcoin anyway.
Twitter’s recent sprint of new product announcements suggests someone wants to change things at Twitter. Social audio features like Spaces and creator subscription systems like Super Follows are legitimately interesting — just maybe not to Dorsey. But as Platformer’s Casey Newton notes, Twitter Blue, as an example of the company’s new focus on power users, doesn’t really offer many features that power users want. And if his silence on the subject is any indication, perhaps Dorsey and users are aligned in their disinterest towards the paid service.
Jack, if you’re listening, there’s absolutely nothing stopping you from becoming a wandering ascetic, living off fake money you minted from an overclocked GPU. Just please, if you hate it so much, let someone else run your website.
AMD has announced that FidelityFX Super Resolution (FSR), its super sampling technique that should boost performance and image quality in supported games, will launch on June 22nd. The company gave a presentation at Computex Taipei today with more information on the feature, though it’s still not clear just how effective it’ll be.
Supersampling is a major point of differentiation between AMD’s GPUs and those from its competitor Nvidia. DLSS (Deep Learning Super Sampling), Nvidia’s version of the technique, uses neural networks to reconstruct images at higher quality from lower resolutions in real time, enabling games to run at smoother frame rates without compromising image quality. Nvidia launched DLSS back in 2018 with the RTX 20-series, and it has been improving performance and support ever since. More than 50 games now work with DLSS, and Nvidia itself just announced today that Red Dead Redemption 2 and Rainbow Six Siege are getting the feature.
AMD first said it was working on super sampling last year when it announced the RX 6000-series GPUs. The company isn’t providing too many technical details on the feature just yet, but says it will be open source and that more than ten studios and engines will support it this year.
FSR will support four levels of scaling. In AMD’s own testing, running Godfall on a Radeon RX 6800 XT with epic graphic settings and ray tracing, the performance mode ran at 150fps — a huge increase over the native rendering result of 49fps. The balanced, quality, and ultra quality modes turned in results of 124fps, 99fps, and 78fps respectively.
Because FSR is open-source, it’ll also run on Nvidia GPUs, including 10-series models that don’t support DLSS. AMD is claiming a 41-percent performance increase in quality mode for Godfall on a GTX 1060, for example, boosting the frame rate from 27fps to 38fps.
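AMD’s quoted figure checks out as a simple percentage-change calculation, which is worth doing whenever a vendor mixes raw frame rates and percentage claims in the same slide:

```python
# Sanity-check AMD's quoted uplift: 27 fps native -> 38 fps with FSR
# quality mode on a GTX 1060 should be roughly a 41% gain.
def pct_gain(before_fps, after_fps):
    """Percentage increase from before_fps to after_fps."""
    return (after_fps - before_fps) / before_fps * 100

print(round(pct_gain(27, 38)))  # prints 41
```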
Companies’ own benchmarks should never be taken at face value, of course, and the results aren’t all that meaningful without being able to see the effects on image quality with our own eyes. AMD has not shown off much evidence of how FSR actually works in practice — but we won’t have too much longer to find out, as it’ll be available in three weeks.
AMD has finally introduced FidelityFX Super Resolution (FSR), the company’s upscaling technology to rival Nvidia’s machine learning-powered DLSS. The technology debuted during AMD chief executive Dr. Lisa Su’s virtual keynote address at Computex, which is being held online this year. The new feature will launch on June 22.
AMD promises that FSR will deliver up to 2.5 times higher performance while using the dedicated performance mode in “select titles.” At least ten game studios will integrate FSR into their games and engines this year. The first titles should show up this month, and the company also detailed FSR’s roots in open source: the feature is based on AMD’s GPUOpen suite.
FSR has four presets: ultra quality, quality, balanced and performance. The first two focus on higher quality by rendering at closer to native resolution, while the latter two push you to get as many frames as possible. FSR works on both desktops and laptops, as well as both integrated and discrete graphics.
In its own tests using Gearbox Software’s Godfall (AMD used the Radeon RX 6900 XT, RX 6800 XT and RX 6700 XT on the game’s epic preset at 4K with ray tracing on), the company claimed 49 frames per second at native rendering, but 78 fps using ultra quality FSR, 99 fps using quality, 124 fps on balanced and 150 fps on performance.
But FSR works on other hardware, including Nvidia’s graphics cards. AMD tested one of Nvidia’s older (but still very popular) mainstream GPUs, the GTX 1060, with Godfall at 1440p on the epic preset. It ran natively at 27 fps, but at 38 fps with quality mode on — a 41% boost. In fact, AMD says that FSR, which needs to be implemented by game developers to suit their titles, will work with over 100 CPUs and GPUs, including its own and competitors.
We’ll be able to test FidelityFX Super Resolution when it launches, starting with Godfall on June 22, so keep an eye out for our thoughts. While the performance gains sound impressive, we’re also keen to check out image quality. We’ve been fairly impressed by Nvidia’s DLSS 2.0, but the original DLSS implementation was far less compelling. It seems as though AMD aims to provide similar upscaling but without all the fancy machine learning.
Su’s keynote included other graphics announcements, such as the launch of the Radeon RX 6800M, RX 6700M and RX 6600M mobile GPUs based on RDNA 2, as well as a handful of new APUs.
With many Raspberry Pi projects, it’s common to wonder why makers would choose to use a Raspberry Pi, but today we’re asking Alfredo Sequeida why he has done this project at all. In a hilarious yet dangerous burst of creativity, Sequeida has created a Raspberry Pi-powered flamethrowing Roomba.
This isn’t the first creation from Sequeida we’ve covered; he previously had us on the edge of our seats with his awesome Nerf gun controller, but this flamethrowing Roomba project is in an entirely different tier of engineering. The best Raspberry Pi projects are interactive, and this one is piloted with an Xbox One controller.
It doesn’t take much to power this vacuum of terror — Sequeida controlled all of the operations with a Raspberry Pi Zero W. Thanks to the help of 3D-printed supports, he mounted the Pi to the top of the Roomba along with a bottle of butane.
The servo motors activate the butane via a custom Python script. The Roomba is also controlled by a Python script using serial communication. With this setup, users can steer and operate the flamethrowing Roomba using the wireless Xbox One controller.
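To give a sense of what the serial side of such a script looks like, here’s a hypothetical sketch based on the publicly documented iRobot Open Interface, where opcode 137 is the Drive command taking signed 16-bit big-endian velocity and radius values. The port name, baud rate, and use of pyserial are assumptions for illustration; Sequeida’s actual scripts may differ:

```python
# Hypothetical sketch of driving a Roomba over serial, based on the
# iRobot Open Interface spec (opcode 137 = Drive). Not taken from
# Sequeida's project code.
import struct

def drive_packet(velocity_mm_s, radius_mm):
    """Build the 5-byte Drive command: opcode 137 followed by velocity
    and turn radius, each packed as a signed 16-bit big-endian value."""
    return bytes([137]) + struct.pack(">hh", velocity_mm_s, radius_mm)

# Sending it would look something like this (needs pyserial + hardware):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200) as port:
#     port.write(bytes([128, 131]))         # Start, then Safe mode
#     port.write(drive_packet(200, 32767))  # 200 mm/s, straight ahead
```

The Xbox controller input would then just map stick positions to velocity and radius values before packing them into this frame.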
Good news for aspiring pyrotechnic engineers—this project is entirely open source. Users can find the code and STL files for 3D printing components on the project’s GitHub page. You can also follow Alfredo Sequeida on YouTube for more cool projects and any updates to this one.
In a new blog post, iFixit heavily criticizes Samsung’s recently announced Galaxy Upcycling program (via ArsTechnica), an initiative which the repair specialists helped launch in 2017. It’s a damning look at how the initiative morphed from its ambitious origins to a “nearly unrecognizable” final form, and completely sidelined iFixit in the process.
Here’s how iFixit describes the original plan:
The original Upcycling announcement had huge potential. The purpose was twofold: unlock phones’ bootloaders—which would have incidentally assisted other reuse projects like LineageOS—and foster an open source marketplace of applications for makers. You could run any operating system you wanted. It could have made a real dent in the huge and ever-growing e-waste problem by giving older Samsung devices some value (no small feat, that). It was a heck of a lot more interesting than the usual high-level pledges from device makers about carbon offsets and energy numbers.
You can see this original vision on display in a Samsung trailer from 2017 (embedded below). Samsung outlined how an old smartphone could be turned into a sensor for a fish tank, simultaneously re-using an old phone while at the same time helping to stop people from needing to buy a dedicated single-use device. Other potential ideas included turning old phones into smart home controllers, weather stations and nanny cams.
It sounds like a cool initiative, and iFixit was initially heavily involved. It lent its branding to the launch, and its CEO Kyle Wiens helped announce the project onstage at Samsung’s developer conference. iFixit had even planned to expand its support pages and spare parts program for Samsung phones had the project shipped, but…
Instead, we heard crickets. The actual software was never posted. The Samsung team eventually stopped returning our emails. Friends inside the company told us that leadership wasn’t excited about a project that didn’t have a clear product tie-in or revenue plan.
So what’s the problem with the program in its 2021 form? Two things: it only goes back three years to the Galaxy S9, and it only gives those phones basic smart home functionality. Less, in other words, than what’s possible with a cheap $40 Raspberry Pi.
So instead of an actually-old Galaxy becoming an automatic pet feeder, full-fledged Linux computer, retro game console, a wooden-owl Alexa alternative, or anything else that you or a community of hackers can dream of, the new program will take a phone you can still sell for $160 and turn it into something like a $30 sensor.
Most will have probably just shrugged and moved on when they saw Samsung’s upcycling announcement in January. But it’s disappointing to realize that the project could have been so much more. iFixit’s post is well worth reading in its entirety.
Android has been around for over a decade at this point and has grown tremendously during that time. Google’s mobile operating system has now set a new record, with Android being used on over 3 billion active devices.
Since Android is open source, smartphone makers have been free to adopt it and even make changes to help differentiate their devices. This has been a successful approach, with the vast majority of major smartphone makers using Android instead of their own custom operating system.
Back in 2014, Google reached 1 billion active Android devices for the first time and by 2019, that number had grown to 2.5 billion. Now, the number of active Android devices has surpassed the 3 billion milestone.
Google I/O returned this week after a break in 2020. During the event, Google’s Vice President of Product Management, Sameer Samat, announced the new milestone. With three billion devices actively used, Android’s user base now dwarfs Apple’s iOS platform, which has an active device base of 1 billion as of this year.
Breaking down the numbers, this means that an additional 500 million Android devices have been activated since 2019 and 1 billion since 2017.
KitGuru Says: Android has come a long way over the years. What was the first Android device that you owned?
There are new features, but it’s the biggest design update in years
Google is announcing the latest beta for Android 12 today at Google I/O. It has an entirely new design based on a system called “Material You,” featuring big, bubbly buttons, shifting colors, and smoother animations. It is “the biggest design change in Android’s history,” according to Sameer Samat, VP of product management, Android and Google Play.
That might be a bit of hyperbole, especially considering how many design iterations Android has seen over the past decade, but it’s justified. Android 12 exudes confidence in its design, unafraid to make everything much larger and a little more playful. Every big design change can be polarizing, and I expect Android users who prefer information density in their UI may find it a little off-putting. But in just a few days, it has already grown on me.
There are a few other functional features being tossed in beyond what’s already been announced for the developer betas, but they’re fairly minor. The new design is what matters. It looks new, but Android by and large works the same — though, of course, Google can’t help itself and again shuffled around a few system-level features.
I’ve spent a couple of hours demoing all of the new features and the subsequent few days previewing some of the new designs in the beta that’s being released today. Here’s what to expect in Android 12 when it is officially released later this year.
Material You design and better widgets
Android 12 is one implementation of a new design system Google is debuting called Material You. Cue the jokes about UX versus UI versus… You, I suppose. Unlike the first version of Material Design, this new system is meant to mainly be a set of principles for creating interfaces — one that goes well beyond the original paper metaphor. Google says it will be applied across all of its products, from the web to apps to hardware to Android. Though as before, it’s likely going to take a long time for that to happen.
In any case, the point is that the new elements in Android 12 are Google’s specific implementations of those principles on Pixel phones. Which is to say: other phones might implement those principles differently or maybe even not at all. I can tell you what Google’s version of Android 12 is going to look and act like, but only Samsung can tell you what Samsung’s version will do (and, of course, when it will arrive).
The feature Google will be crowing the most about is that when you change your wallpaper, you’ll have the option to automatically change your system colors as well. Android 12 will pull out both dominant and complementary colors from your wallpaper automatically and apply those colors to buttons and sliders and the like. It’s neat, but I’m not personally a fan of changing button colors that much.
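To illustrate the basic idea (and only the idea; Google’s actual Material You extraction uses a far more sophisticated color science pipeline), pulling a dominant color from an image can be as simple as quantizing pixels into buckets and counting them:

```python
# Toy sketch of wallpaper-based theming: quantize each pixel's color
# and pick the most common bucket as the "dominant" color. This is an
# illustration only, not Google's Material You algorithm.
from collections import Counter

def dominant_color(pixels, bucket=32):
    """pixels: iterable of (r, g, b) tuples with 0-255 channels.
    Returns the center of the most frequently occurring color bucket."""
    counts = Counter(
        (r // bucket, g // bucket, b // bucket) for r, g, b in pixels
    )
    (qr, qg, qb), _ = counts.most_common(1)[0]
    # Report the midpoint of the winning bucket as a representative color.
    return (qr * bucket + bucket // 2,
            qg * bucket + bucket // 2,
            qb * bucket + bucket // 2)
```

A real theming system would go further, picking complementary accents and checking contrast, but the counting step above is the intuition behind “pull colors from the wallpaper.”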
The lock screen is also set for some changes: the clock is huge and centered if you have no notifications and slightly smaller but still more prominent if you do. It also picks up an accent color based on the theming system. I especially love the giant clock on the always-on display.
Android’s widget system has developed a well-deserved bad reputation. Many apps don’t bother with them, and many more haven’t updated their widget’s look since they first made one in days of yore. The result is a huge swath of ugly, broken, and inconsistent widgets for the home screen.
Google is hoping to fix all of that with its new widget system. As with everything else in Android 12, the widgets Google has designed for its own apps are big and bubbly, with a playful design that’s not in keeping with how most people might think of Android. One clever feature is that when you move a widget around on your wallpaper, it subtly changes its background color to be closer to the part of the image it’s set upon.
I don’t have especially high hopes that Android developers will rush to adopt this new widget system, so I hope Google has a plan to encourage the most-used apps to get on it. Apple came very late to the home screen widget game on the iPhone, but it’s already surpassed most of the crufty widget abandonware you’ll find from most Android apps.
Bigger buttons and more animation
As you’ve no doubt gathered already from the photos, the most noticeable change in Android 12 is that all of the design elements are big, bubbly, and much more liberal in their use of animation. It certainly makes the entire system more legible and perhaps more accessible, but it also means you’re just going to get fewer buttons and menu items visible on a single screen.
That tradeoff is worth it, I think. Simple things like brightness and volume sliders are just easier to adjust now, for example. As for the animations, so far, I like them. But they definitely involve more visual flourish than before. When you unlock or plug in your phone, waves of shadow and light play across the screen. Apps expand out clearly from their icon’s position, and drawers and other elements slide in and out with fade effects.
More animations mean more resources and potentially more jitter, but Samat says the Android team has optimized how Android displays core elements. The windows and package manager use 22 percent less CPU time, the system server uses 15 percent less of the big (read: more powerful and battery-intensive) core on the processor, and interrupts have been reduced, too.
Android has another reputation: solving for jitter and jank by just throwing ever-more-powerful hardware at the problem: faster chips, higher refresh rate screens, and the like. Hopefully none of that will be necessary to keep these animations smooth on lower-end devices. On my Pixel 5, they’ve been quite good.
One last bit: there’s a new “overscroll” animation — the thing the screen does when you scroll to the end of a page. Now, everything on the screen will sort of stretch a bit when you can’t scroll any further. Maybe an Apple patent expired.
Shuffling system spaces around
It wouldn’t be a new version of Android without Google mucking about with notifications, Google Assistant, or what happens when you press the power button. With Android 12, we’ve hit the trifecta. Luckily, the changes Google has made mostly represent walking back some of the changes it made in Android 11.
The combined Quick Settings / notifications shade remains mostly the same — though the huge buttons mean you’re going to see fewer of them in either collapsed or expanded views. The main difference in notifications is mostly aesthetic. Like everything else, they’re big and bubbly. There’s a big, easy-to-hit down arrow for expanding them, and groups of notifications are put together into one bigger bubble. There’s even a nice little visual flourish when you begin to swipe a notification away: it forms its own roundrect, indicating that it has become a discrete object.
The thing that will please a lot of Android users is that after just a year, Google has bailed on its idea of creating a whole new power button menu with Google Wallet and smart home controls. Instead, both of those things are just buttons inside the quick settings shade, similar to Samsung’s solution.
Holding down the power button now just brings up Google Assistant. Samat says it was a necessary change because Google Assistant is going to begin to offer more contextually aware features based on whatever screen you’re looking at. I say the diagonal swipe-in from the corner to launch Assistant was terrible, and I wouldn’t be surprised if it seriously reduced how much people used it.
I also have to point out that it’s a case of Google adopting gestures already popular on other phones: the iPhone’s power button brings up Siri, and a Galaxy’s power button brings up Bixby.
New privacy features for camera, mic, and location
Google is doing a few things with privacy in Android 12, mostly focused on three key sensors it sees as trigger points for people: location, camera, and microphone.
The camera and mic will now flip on a little green dot in the upper-right of the screen, indicating that they’re on. There are also now two optional toggles in Quick Settings for turning them off entirely at a system level.
When an app tries to use one of them, Android will pop up a box asking if you want to turn it back on. If you choose not to, the app thinks it has access to the camera or mic, but all Android gives it is a black nothingness and silence. It’s a mood.
For location, Google is adding another option for what kind of access you can grant an app. Alongside the options to limit access to one time or just when the app is open, there are settings for granting either “approximate” or “precise” locations. Approximate will let the app know your location with less precision, so it theoretically can’t guess your exact address. Google suggests it could be useful for things like weather apps. (Note that any permissions you’ve already granted will be grandfathered in, so you’ll need to dig into settings to switch them to approximate.)
Google is also creating a new “Privacy Dashboard” specifically focused on location, mic, and camera. It presents a pie chart of how many times each has been accessed in the last 24 hours along with a timeline of each time it was used. You can tap in and get to the settings for any app from there.
The Android Private Compute Core
Another new privacy feature is the unfortunately named “Android Private Compute Core.” Unfortunately, because when most people think of a “core,” they assume there’s an actual physical chip involved. Instead, think of the APCC as a sandboxed part of Android 12 for doing AI stuff.
Essentially, a bunch of Android machine learning functions are going to be run inside the APCC. It is walled-off from the rest of the OS, and the functions inside it are specifically not allowed any kind of network access. It literally cannot send or receive data from the cloud, Google says. The only way to communicate with the functions inside it is via specific APIs, which Google emphasizes are “open source” as some kind of talisman of security.
Talisman or no, it’s a good idea. The operations that run inside the APCC include Android’s feature for ambiently identifying playing music. That needs to have the microphone listening on a very regular basis, so it’s the sort of thing you’d want to keep local. The APCC also handles the “smart chips” for auto-reply buttons based on your own language usage.
An easier way to think of it is if there’s an AI function you might think is creepy, Google is running it inside the APCC so its powers are limited. And it’s also a sure sign that Google intends to introduce more AI features into Android in the future.
No news on app tracking — yet
Location, camera, mic, and machine learning are all privacy vectors to lock down, but they’re not the kind of privacy that’s on everybody’s mind right now. The more urgent concern in the last few months is app tracking for ad purposes. Apple has just locked all of that down with its App Tracking Transparency feature. Google itself is still planning on blocking third-party cookies in Chrome and replacing them with anonymizing technology.
What about Android? There have been rumors that Google is considering some kind of system similar to Apple’s, but there won’t be any announcements about it at Google I/O. However, Samat confirmed to me that his team is working on something:
There’s obviously a lot changing in the ecosystem. One thing about Google is it is a platform company. It’s also a company that is deep in the advertising space. So we’re thinking very deeply about how we should evolve the advertising system. You see what we’re doing on Chrome. From our standpoint on Android, we don’t have anything to announce at the moment, but we are taking a position that privacy and advertising don’t need to be directly opposed to each other. That, we don’t believe, is healthy for the overall ecosystem as a company. So we’re thinking about that working with our developer partners and we’ll be sharing more later this year.
A few other features
Google has already announced a bunch of features in earlier developer betas, most of which are under-the-hood kind of features. There are “improved accessibility features for people with impaired vision, scrolling screenshots, conversation widgets that bring your favorite people to the home screen” and the already-announced improved support for third-party app stores. On top of those, there are a few neat little additions to mention today.
First, Android 12 will (finally) have a built-in remote that will work with Android TV systems like the Chromecast with Google TV or Sony TVs. Google is also promising to work with partners to get car unlocking working via NFC and (if a phone supports it) UWB. It will be available on “select Pixel and Samsung Galaxy phones” later this year, and BMW is on board to support it in future vehicles.
For people with Chromebooks, Google is continuing the trend of making them work better with Android phones. Later this year, Chrome OS devices will be able to immediately access new photos in an Android phone’s photo library over Wi-Fi Direct instead of waiting for them to sync up to the Google Photos cloud. Google still doesn’t have anything as good as AirDrop for quickly sending files across multiple kinds of devices, but it’s a good step.
Android already has fast pairing for quickly setting up Bluetooth devices, but it’s not built into the Bluetooth spec. Instead, Google has to work with individual manufacturers to enable it. A new one is coming on board today: Beats, which is owned by Apple. (Huh!) Ford and BMW cars will also support one-tap pairing.
Android Updates
As always, no story about a new version of Android would be complete without pointing out that the only phones guaranteed to get it in a timely manner are Google’s own Pixel phones. However, Google has made some strides in the past few years. Samat says that there has been a year-over-year improvement in the “speed of updates” to the tune of 30 percent.
A few years ago, Google changed the architecture of Android with something called Project Treble. It made the system a little more modular, which, in turn, made it easier for Android manufacturers to apply their custom versions of Android without mucking about in the core of it. That should mean faster updates.
Some companies have improved slightly, including the most important one, Samsung. However, it’s still slow going, especially for older devices. As JR Raphael has pointed out, most companies are not getting updates out in what should be a perfectly reasonable timeframe.
Beyond Treble, there may be some behind-the-scenes pressure happening. More and more companies are committing to providing updates for longer. Google also is working directly with Qualcomm to speed up updates. Since Qualcomm is, for all intents and purposes, the monopoly chip provider for Android phones in the US, that should make a big difference, too.
That’s all heartening, but it’s important to set expectations appropriately. Android will never match iOS in terms of providing timely near-universal updates as soon as a new version of the OS is available. There will always be a gap between the Android release and its availability for non-Pixel phones. That’s just the way the Android ecosystem works.
That’s Android 12. It may not be the biggest feature drop in years, but it is easily the biggest visual overhaul in some time. And Android needed it. Over time and over multiple iterations, lots of corners of the OS were getting a little crufty as new ideas piled on top of each other. Android 12 doesn’t completely wipe the slate clean and start over, but it’s a significant and ambitious attempt to make the whole system feel more coherent and consistent.
The beta that’s available this week won’t get there — the version I’m using lacks the theming features, widgets, and plenty more. Those features should get layered in as we approach the official release later this year. Assuming that Google can get this fresh paint into all of the corners, it will make Google’s version of Android a much more enjoyable thing to use.
The University of Minnesota’s path to banishment was long, turbulent, and full of emotion
On the evening of April 6th, a student emailed a patch to a list of developers. Fifteen days later, the University of Minnesota was banned from contributing to the Linux kernel.
“I suggest you find a different community to do experiments on,” wrote Linux Foundation fellow Greg Kroah-Hartman in a livid email. “You are not welcome here.”
How did one email lead to a university-wide ban? I’ve spent the past week digging into this world — the players, the jargon, the university’s turbulent history with open-source software, the devoted and principled Linux kernel community. None of the University of Minnesota researchers would talk to me for this story. But among the other major characters — the Linux developers — there was no such hesitancy. This was a community eager to speak; it was a community betrayed.
The story begins in 2017, when a systems-security researcher named Kangjie Lu became an assistant professor at the University of Minnesota.
Lu’s research, per his website, concerns “the intersection of security, operating systems, program analysis, and compilers.” But Lu had his eye on Linux — most of his papers involve the Linux kernel in some way.
The Linux kernel is, at a basic level, the core of any Linux operating system. It’s the liaison between the OS and the device on which it’s running. A Linux user doesn’t interact with the kernel, but it’s essential to getting things done — it manages memory usage, writes things to the hard drive, and decides what tasks can use the CPU when. The kernel is open-source, meaning its millions of lines of code are publicly available for anyone to view and contribute to.
Well, “anyone.” Getting a patch onto people’s computers is no easy task. A submission needs to pass through a large web of developers and “maintainers” (thousands of volunteers, who are each responsible for the upkeep of different parts of the kernel) before it ultimately ends up in the mainline repository. Once there, it goes through a long testing period before eventually being incorporated into the “stable release,” which will go out to mainstream operating systems. It’s a rigorous system designed to weed out both malicious and incompetent actors. But — as is always the case with crowdsourced operations — there’s room for human error.
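Mechanically, that submission web runs on git’s own email tooling: a contributor turns a commit into an email-formatted patch and mails it to the relevant maintainers. The repository, file, and addresses in this sketch are invented for illustration, not real kernel infrastructure:

```shell
# Illustrative sketch of preparing a kernel-style patch for a mailing list.
# The repo, file, and email addresses below are made up for demonstration.
set -e
rm -rf /tmp/kernel-demo && mkdir -p /tmp/kernel-demo && cd /tmp/kernel-demo
git init -q
git config user.email "contributor@example.com"
git config user.name "Example Contributor"
echo 'int main(void) { return 0; }' > driver.c
git add driver.c
git commit -q -m "driver: fix example null check"
# Turn the latest commit into an email-ready patch file...
git format-patch -1 --stdout > 0001-driver-fix.patch
# ...which would then be sent with something like:
#   git send-email --to=maintainer@example.org 0001-driver-fix.patch
grep "Subject:" 0001-driver-fix.patch
```

From there, the patch is reviewed on the list, picked up by a maintainer’s tree, and only eventually lands in mainline — which is why a bad-faith patch has to fool several humans, not one.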
Some of Lu’s recent work has revolved around studying that potential for human error and reducing its influence. He’s proposed systems to automatically detect various types of bugs in open source, using the Linux kernel as a test case. These experiments tend to involve reporting bugs, submitting patches to Linux kernel maintainers, and reporting their acceptance rates. In a 2019 paper, for example, Lu and two of his PhD students, Aditya Pakki and Qiushi Wu, presented a system (“Crix”) for detecting a certain class of bugs in OS kernels. The trio found 278 of these bugs with Crix and submitted patches for all of them — the fact that maintainers accepted 151 meant the tool was promising.
On the whole, it was a useful body of work. Then, late last year, Lu took aim not at the kernel itself, but at its community.
In “On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits,” Lu and Wu explained that they’d been able to introduce vulnerabilities into the Linux kernel by submitting patches that appeared to fix real bugs but also introduced serious problems. The group called these submissions “hypocrite commits.” (Wu didn’t respond to a request for comment for this story; Lu referred me to Mats Heimdahl, the head of the university’s department of computer science and engineering, who referred me to the department’s website.)
The explicit goal of this experiment, as the researchers have since emphasized, was to improve the security of the Linux kernel by demonstrating to developers how a malicious actor might slip through their net. One could argue that their process was similar, in principle, to that of white-hat hacking: play around with software, find bugs, let the developers know.
But the loudest reaction the paper received, on Twitter and across the Linux community, wasn’t gratitude — it was outcry.
“That paper, it’s just a lot of crap,” says Greg Scott, an IT professional who has worked with open-source software for over 20 years.
“In my personal view, it was completely unethical,” says security researcher Kenneth White, who is co-director of the Open Crypto Audit Project.
The frustration had little to do with the hypocrite commits themselves. In their paper, Lu and Wu claimed that none of their bugs had actually made it to the Linux kernel — in all of their test cases, they’d eventually pulled their bad patches and provided real ones. Kroah-Hartman, of the Linux Foundation, contests this — he told The Verge that one patch from the study did make it into repositories, though he notes it didn’t end up causing any harm.
Still, the paper hit a number of nerves among a very passionate (and very online) community when Lu first shared its abstract on Twitter. Some developers were angry that the university had intentionally wasted the maintainers’ time — which is a key difference between Minnesota’s work and a white-hat hacker poking around the Starbucks app for a bug bounty. “The researchers crossed a line they shouldn’t have crossed,” Scott says. “Nobody hired this group. They just chose to do it. And a whole lot of people spent a whole lot of time evaluating their patches.”
“If I were a volunteer putting my personal time into commits and testing, and then I found out someone’s experimenting, I would be unhappy,” Scott adds.
Then, there’s the dicier issue of whether an experiment like this amounts to human experimentation. It doesn’t, according to the University of Minnesota’s Institutional Review Board. Lu and Wu applied for approval in response to the outcry, and they were granted a formal letter of exemption.
The community members I spoke to didn’t buy it. “The researchers attempted to get retroactive Institutional Review Board approval on their actions that were, at best, wildly ignorant of the tenets of basic human subjects’ protections, which are typically taught by senior year of undergraduate institutions,” says White.
“It is generally not considered a nice thing to try to do ‘research’ on people who do not know you are doing research,” says Kroah-Hartman. “No one asked us if it was acceptable.”
That thread ran through many of the responses I got from developers — that regardless of the harms or benefits that resulted from its research, the university was messing around not just with community members but with the community’s underlying philosophy. Anyone who uses an operating system places some degree of trust in the people who contribute to and maintain that system. That’s especially true for people who use open-source software, and it’s a principle that some Linux users take very seriously.
“By definition, open source depends on a lively community,” Scott says. “There have to be people in that community to submit stuff, people in the community to document stuff, and people to use it and to set up this whole feedback loop to constantly make it stronger. That loop depends on lots of people, and you have to have a level of trust in that system … If somebody violates that trust, that messes things up.”
After the paper’s release, it was clear to many Linux kernel developers that something needed to be done about the University of Minnesota — previous submissions from the university needed to be reviewed. “Many of us put an item on our to-do list that said, ‘Go and audit all umn.edu submissions,’” said Kroah-Hartman, who was, above all else, annoyed that the experiment had put another task on his plate. But many kernel maintainers are volunteers with day jobs, and a large-scale review process didn’t materialize. At least, not in 2020.
On April 6th, 2021, Aditya Pakki, using his own email address, submitted a patch.
There was some brief discussion from other developers on the email chain, which fizzled out within a few days. Then Kroah-Hartman took a look. He was already on high alert for bad code from the University of Minnesota, and Pakki’s email address set off alarm bells. What’s more, the patch Pakki submitted didn’t appear helpful. “It takes a lot of effort to create a change that looks correct, yet does something wrong,” Kroah-Hartman told me. “These submissions all fit that pattern.”
So on April 20th, Kroah-Hartman put his foot down.
“Please stop submitting known-invalid patches,” he wrote to Pakki. “Your professor is playing around with the review process in order to achieve a paper in some strange and bizarre way.”
Maintainer Leon Romanovsky then chimed in: he’d taken a look at four previously accepted patches from Pakki and found that three of them added “various severity” security vulnerabilities.
Kroah-Hartman hoped that his request would be the end of the affair. But then Pakki lashed back. “I respectfully ask you to cease and desist from making wild accusations that are bordering on slander,” he wrote to Kroah-Hartman in what appears to be a private message.
Kroah-Hartman responded. “You and your group have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work. Now you submit a series of obviously-incorrect patches again, so what am I supposed to think of such a thing?” he wrote back on the morning of April 21st.
Later that day, Kroah-Hartman made it official. “Future submissions from anyone with a umn.edu address should be default-rejected unless otherwise determined to actually be a valid fix,” he wrote in an email to a number of maintainers, as well as Lu, Pakki, and Wu. Kroah-Hartman reverted 190 submissions from Minnesota affiliates — 68 couldn’t be reverted but still needed manual review.
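Mechanically, a mass revert like that amounts to walking the history for commits from the flagged domain and reverting each one. The toy repository below is a hedged illustration of the idea only (the actual kernel reverts were posted as an explicit patch series for review, not run as a one-liner):

```shell
# Toy illustration of reverting every commit from a given author domain.
# The repository and email addresses are invented for this sketch.
set -e
rm -rf /tmp/revert-demo && mkdir -p /tmp/revert-demo && cd /tmp/revert-demo
git init -q
git config user.name "Maintainer"
git config user.email "maintainer@example.org"
echo "baseline" > file.txt && git add file.txt && git commit -q -m "baseline"
# Simulate a commit from the flagged domain.
git config user.email "student@umn.edu"
echo "suspect change" >> file.txt && git commit -q -am "suspect change"
git config user.email "maintainer@example.org"
# Find every commit whose author matches the domain, then revert each one.
git log --author="@umn.edu" --format=%H | xargs -r -n1 git revert --no-edit
cat file.txt
```

The 68 commits that “couldn’t be reverted” are the harder case this sketch glosses over: later changes built on top of them, so backing them out cleanly required manual review instead.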
It’s not clear what experiment the new patch was part of, and Pakki declined to comment for this story. Lu’s website includes a brief reference to “superfluous patches from Aditya Pakki for a new bug-finding project.”
What is clear is that Pakki’s antics finally set the delayed review process in motion; Linux developers began digging through all patches that university affiliates had submitted in the past. Jonathan Corbet, the founder and editor in chief of LWN.net, recently provided an update on that review process. Per his assessment, “Most of the suspect patches have turned out to be acceptable, if not great.” Of over 200 patches that were flagged, 42 are still set to be removed from the kernel.
Regardless of whether their reaction was justified, the Linux community gets to decide if the University of Minnesota affiliates can contribute to the kernel again. And that community has made its demands clear: the school needs to convince them its future patches won’t be a waste of anyone’s time.
What will it take to do that? In a statement released the same day as the ban, the university’s computer science department suspended its research into Linux-kernel security and announced that it would investigate Lu’s and Wu’s research method.
But that wasn’t enough for the Linux Foundation. Mike Dolan, Linux Foundation SVP and GM of projects, wrote a letter to the university on April 23rd, which The Verge has viewed. Dolan made four demands. He asked that the school release “all information necessary to identify all proposals of known-vulnerable code from any U of MN experiment” to help with the audit process. He asked that the paper on hypocrite commits be withdrawn from publication. He asked that the school ensure future experiments undergo IRB review before they begin, and that future IRB reviews ensure the subjects of experiments provide consent, “per usual research norms and laws.”
Two of those demands have since been met. Wu and Lu have retracted the paper and have released all the details of their study.
The university’s status on the third and fourth counts is unclear. In a letter sent to the Linux Foundation on April 27th, Heimdahl and Loren Terveen (the computer science and engineering department’s associate department head) maintain that the university’s IRB “acted properly,” and argue that human-subjects research “has a precise technical definition according to US federal regulations … and this technical definition may not accord with intuitive understanding of concepts like ‘experiments’ or even ‘experiments on people.’” They do, however, commit to providing more ethics training for department faculty. Reached for comment, university spokesperson Dan Gilchrist referred me to the computer science and engineering department’s website.
Meanwhile, Lu, Wu, and Pakki apologized to the Linux community this past Saturday in an open letter to the kernel mailing list, which contained some apology and some defense. “We made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for hypocrite patches,” the researchers wrote, before going on to reiterate that they hadn’t put any vulnerabilities into the Linux kernel, and that their other patches weren’t related to the hypocrite commits research.
Kroah-Hartman wasn’t having it. “The Linux Foundation and the Linux Foundation’s Technical Advisory Board submitted a letter on Friday to your university,” he responded. “Until those actions are taken, we do not have anything further to discuss.”
From the University of Minnesota researchers’ perspective, they didn’t set out to troll anyone — they were trying to point out a problem with the kernel maintainers’ review process. Now the Linux community has to reckon with the fallout of their experiment and what it means about the security of open-source software.
Some developers rejected University of Minnesota researchers’ perspective outright, claiming the fact that it’s possible to fool maintainers should be obvious to anyone familiar with open-source software. “If a sufficiently motivated, unscrupulous person can put themselves into a trusted position of updating critical software, there’s honestly little that can be done to stop them,” says White, the security researcher.
On the other hand, it’s clearly important to be vigilant about potential vulnerabilities in any operating system. And for others in the Linux community, as much ire as the experiment drew, its point about hypocrite commits appears to have been somewhat well taken. The incident has ignited conversations about patch-acceptance policies and how maintainers should handle submissions from new contributors, across Twitter, email lists, and forums. “Demonstrating this kind of ‘attack’ has been long overdue, and kicked off a very important discussion,” wrote maintainer Christoph Hellwig in an email thread with other maintainers. “I think they deserve a medal of honor.”
“This research was clearly unethical, but it did make it plain that the OSS development model is vulnerable to bad-faith commits,” one user wrote in a discussion post. “It now seems likely that Linux has some devastating back doors.”
Corbet also called for more scrutiny around new changes in his post about the incident. “If we cannot institutionalize a more careful process, we will continue to see a lot of bugs, and it will not really matter whether they were inserted intentionally or not,” he wrote.
And even for some of the paper’s most ardent critics, the process did prove a point — albeit, perhaps, the opposite of the one Wu, Lu, and Pakki were trying to make. It demonstrated that the system worked.
Eric Mintz, who manages 25 Linux servers, says this ban has made him much more confident in the operating system’s security. “I have more trust in the process because this was caught,” he says. “There may be compromises we don’t know about. But because we caught this one, it’s less likely we don’t know about the other ones. Because we have something in place to catch it.”
To Scott, the fact that the researchers were caught and banned is an example of Linux’s system functioning exactly the way it’s supposed to. “This method worked,” he insists. “The SolarWinds method, where there’s a big corporation behind it, that system didn’t work. This system did work.”
“Kernel developers are happy to see new tools created and — if the tools give good results — use them. They will also help with the testing of these tools, but they are less pleased to be recipients of tool-inspired patches that lack proper review,” Corbet writes. The community seems to be open to the University of Minnesota’s feedback — but as the Foundation has made clear, it’s on the school to make amends.
“The university could repair that trust by sincerely apologizing, and not fake apologizing, and by maybe sending a lot of beer to the right people,” Scott says. “It’s gonna take some work to restore their trust. So hopefully they’re up to it.”
Finding the best way to power your Raspberry Pi project is always a challenge in itself, but ensuring redundancy is an entirely different challenge. Thankfully, this Raspberry Pi UPS project from Sourav Gupta at Circuit Digest is ready to tackle the issue.
There’s nothing more frustrating than corrupting an SD card or worse, causing damage to hardware on your Pi because of power failure. Some of the best Raspberry Pi projects we’ve seen rely on continuous power and this PCB is designed to protect your hard work.
Gupta was kind enough to make the design open source and share all of the juicy details on how everything works. The UPS relies on an 18650 lithium-ion battery which can output up to 1.5A of continuous current with a peak of 2.5A. It can be recharged using a TP4056 lithium battery charging module, which charges the battery over a Micro USB port; on the other side of the PCB is a USB-A port for output to the Raspberry Pi.
The board is installed between the Pi and power source to ensure power is supplied when drops occur. It plugs directly into the Pi’s power port rather than the GPIO pins.
The PCB was designed from scratch, with samples fabricated by PCBWay. If you want to check out the design up close or maybe even recreate it, you can download the Gerber files on the project page at Circuit Digest. There you’ll also find detailed instructions on what components are needed to assemble the final HAT. Be sure to follow Gupta for more cool Raspberry Pi projects and updates on this one.
The University of Minnesota Department of Computer Science and Engineering announced that it’s investigating the research that led to a ban on contributing to the Linux kernel, issued after that work attracted the ire of the stable release channel’s steward.
That ban was issued on Wednesday by Greg Kroah-Hartman, the Linux kernel developer responsible for stable channel releases, in response to a project that intentionally added bugs to the Linux kernel in the name of security research.
“We take this situation extremely seriously,” UMN computer science and engineering head Mats Heimdahl and associate department head Loren Terveen said in a statement, adding that they “immediately suspended this line of research” after the ban was announced.
Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel. (Tweet, April 21, 2021)
The project was supposed to show how bad actors can introduce vulnerabilities to open source projects—of which Linux is the most prominent example — by using “hypocrite commits” that hide malevolent intent behind seemingly benign code.
Heimdahl and Terveen also said the CS&E department will “investigate the research method and the process by which this research method was approved, determine appropriate remedial action, and safeguard against future issues, if needed.”
Their plan is to “report our findings back to the community as soon as practical.” The question, then, is whether or not any remedial action will be enough for the University of Minnesota to be welcomed back into the Linux community.
When asked about the situation yesterday, Kroah-Hartman suggested we speak to the university instead. The University of Minnesota didn’t respond to a request for comment, but tagged Tom’s Hardware on Twitter to make it aware of its response.