HWiNFO, which is increasingly becoming a source of early hardware information, has added support for ‘some Asus Z690 and Maximus XIV’ motherboards, according to blogger @Komachi_Ensaka. Interestingly, the mention of the Z690 has been removed since he wrote his post.
Since Intel has confirmed that it is developing an Alder Lake-based desktop platform, it’s no secret that its motherboard partners are working on appropriate mainboards. But Asus appears to be the first company to confirm this work, albeit unofficially.
Much is still up in the air about what exactly the Intel Z690 chipset will bring with it. Yet we do know that it is set to support Intel’s Alder Lake-S platform, which means DDR5 memory and PCIe 5.0 interconnections. We also know that the platform (not the CPU or chipset specifically) is set to support Thunderbolt 4.
While there’s still much to learn about Intel’s Z690 chipset, what is important in this story is timing. We know from Intel that Alder Lake-S is due in the second half of the year — which technically starts in July. And apparently Asus is pretty far along with at least one Z690 motherboard. Does this mean that the CPU will be out sooner rather than later? Who knows?
Being a top-of-the-range product, the ASRock Z590 Phantom Gaming-ITX/TB4 naturally has support for addressable RGB lighting (using the ASRock Polychrome Sync/Polychrome RGB software) and a very sophisticated input/output department with a number of unique features, such as three display outputs and multi-gig networking.
A production keyboard that feels like a custom build, the Ducky Mecha SF Radiant offers an excellent experience for both typing and gaming. It features sterling build quality and a gorgeous aesthetic you won’t find anywhere else, but the lack of software and hot-swappable switches is disappointing.
For
+ Unique aesthetic
+ Sturdy aluminum case
+ High-quality PBT keycaps
+ Compact design for portability and desk space
Against
– Lack of software can be limiting
– No hot-swappable key switches
With its beautifully iridescent aluminum case, custom-themed PBT keycaps and excellent typing experience, the Ducky Mecha SF Radiant combines the worlds of custom and production keyboards in a unique marriage of style and substance. At $159, it’s on the expensive side but is one of the top compact keyboards available today, competing nicely with the best gaming keyboards and offering a strong productivity experience. Assuming, that is, that you jibe with the two available colorways and lack of software.
Ducky has been popular among enthusiasts for a number of years, but it wasn’t until the launch of the One 2 Mini back in 2018 that it really made it to the mainstream. Since then, it’s released a number of revisions and collaborations with major gaming brands like Razer and HyperX, but the 60% form factor can be difficult to adjust to with its lack of arrows, function keys and navigation buttons.
The One 2 SF, released in 2019, answered these challenges, adding back the arrow keys and a miniaturized nav-cluster, but was quickly overshadowed by the Mecha Mini, Ducky’s widely acclaimed aluminum-chassis take on the One 2 Mini. I was lucky enough to review each of those boards, but the Mecha Mini reigned supreme with its heavy aluminum build that made typing feel so much more satisfying.
The wait for an SF version is finally over with the Mecha SF Radiant. This new keyboard one-ups the Mini version with a brand new finish and themed keycap set for a package that is unlike anything else in the mainstream market today. It isn’t without its limitations compared to the competition, but may just be the best compact keyboard Ducky has produced yet. You’ll have to move fast if you want one for yourself, though, as only 2,021 units of this limited edition will be made.
The Ducky Mecha SF Radiant is a compact keyboard that aims to achieve the size benefits of a 60% keyboard while adding functionality closer to a tenkeyless. In fact, the “SF” in the name stands for “sixty-five,” alluding to its 65% layout. The percentage is less a literal measurement than shorthand for the overall design.
The Mecha SF Radiant follows the Mini by doing away with the numpad and function row, but instead of removing the nav cluster entirely, it shrinks it to a column of three keys on the right side of the board. The buttons to the right of the spacebar and the right shift have also been shrunk to make room for dedicated arrow keys, which is a boon to gamers.
The result is a keyboard that is only slightly wider than a 60% but feels much more usable. It measures 12.8 x 4.1 x 1.6 inches and looks downright small on a full-size desk. The design is ergonomically sound and allows your hands to be spaced at a much more natural distance than when spread out across a full-size keyboard. It’s also helpful in first-person shooters where big mouse movements can leave you craving more space. For my part, I most enjoy the clean, minimalist aesthetic of a compact keyboard on my desk.
The layout here is slightly different than many 65% keyboards, however. Rather than featuring four buttons on the right side like the much more traditional Drop ALT, Ducky only includes three and replaces the fourth with an embossed case badge below the bottom key. The number of keys follows the Ducky One 2 SF, but the addition of a case badge is a direct nod to the custom keyboard community, where such badges have become a staple. The included keys are Delete, Page Up, and Page Down. Like most other keys on the keyboard, Page Up and Page Down also carry secondary functions, in this case Home and End.
Through clever use of function commands, the Mecha SF Radiant manages to pack most of the functionality of a tenkeyless keyboard into its tiny frame. By holding the Fn button, you can access a second layer that provides access to most of the absent keys. With Fn held, the number row will send function commands. Likewise, Print Screen, Insert, and Scroll Lock all have their own dedicated combo buttons, in addition to volume control and even buttons to control the mouse pointer. Holding Fn+Alt opens up the third layer to choose lighting presets, set custom color schemes, and program macros. It’s an impressive array that adds more software-free customization than even programmable gaming keyboards from Razer and Logitech can provide, but since there are no side legends on the keycaps, it may take a while to memorize every keymap.
All of this was true of the original One 2 SF, so what really sets the Mecha apart is its metal case and PBT keycaps. The case is solid aluminum and lends the keyboard weight and density, which both enhance the typing experience. It’s not the heaviest keyboard I’ve used, even compared to some sixty-percents I’ve tried, but at 1.9 pounds, it’s heavier than it looks. The most striking aspect of it is the iridescent finish which shifts from teal to blue to purple depending on the angle. Ducky has dubbed this version “Ocean” but the Radiant is also available in a green “Emerald” colorway.
The shifting, transforming quality of the keyboard is striking but a bit of a double-edged sword. The keycaps have been carefully curated with three shades of blue and white, but they don’t always match the case perfectly depending on the angle you’re viewing it from. From a normal seated or standing position, it looks great. Viewed from another angle where it appears purple, it can look mismatched against the blue.
The keycaps themselves are excellent. Ducky used its usual PBT plastic, which is more durable and resistant to shine than ABS plastic. The legends are double-shot, which means they’re made of a second piece of plastic that’s bonded to the outer shell, preventing fading or chipping over time.
The walls of the caps are also delightfully thick which lends typing a solidity that’s often lacking from the thin-walled keycaps we see on most gaming keyboards. The legends aren’t shine-through, so the RGB backlight is relegated to an underglow effect that’s more for style than helping you type in the dark. As usual, Ducky includes a selection of alternate keycaps, this time all white. I was surprised to find that the alternate arrow keys actually are backlit, if only slightly. The shape of the arrow has been carved out of the second shot of plastic, allowing them to glow a dull blue in the dark.
Despite the occasional angle-based mismatch, the keycaps ultimately work to tie the look of the keyboard together. The mix of shades of blue definitely brings sea waves to mind. When installed, the injection of white along the right-hand side is reminiscent of a rolling wave. This focus on aesthetics is another quality borrowed from the enthusiast community where look and sound often rival the feel of typing itself.
Pulling back a touch, the Mecha SF Radiant features per-key RGB backlighting. As you might expect from an RGB-enabled board, it features the usual suspects in terms of lighting presets: rainbow wave mode, color cycling, breathing, reactive typing and more, for ten preset modes in total. Five can be color customized using a built-in RGB mixer on the Z, X and C keys or by activating a built-in color palette and tapping your color of choice. This is already fairly impressive, but you can also use the lighting to play a pair of games based on Minesweeper and roulette.
The lighting looks great. The Mecha SF Radiant uses a floating key design that exposes the switch housings. This creates a common but still appealing aesthetic that highlights the illumination from the sides. The LEDs are bright and the switches are mounted on a white plate which allows the colors to blend together into a seamless pool of light. Given the highly-themed keycaps and case, matching lighting is inherently more limited but I found white, tinged blue thanks to the reflection from the keycaps, to look best.
Despite the keyboard clearly targeting the middle ground between custom and production keyboards, it doesn’t offer hot-swap support to quickly change switches. This isn’t unusual for Ducky (it just released its first hot-swappable keyboard last year) but is still disappointing. One of the most fun parts of the hobby is trying out new switches and being able to quickly change the whole feel of your keyboard but that won’t be possible here.
Typing Experience on Ducky Mecha SF Radiant
The Ducky Mecha SF Radiant is available with a wide selection of Cherry MX RGB key switches. Clicky MX Blue, tactile MX Brown, and linear MX Red are all present and accounted for, but you’ll also have the choice of Cherry MX Black, MX Silver, or MX Silent Red. Each of these switches is the updated model from Cherry, rated for 100 million actuations instead of the previous 50 million.
My unit was sent with Cherry MX Silent Red switches. Silent Reds are similar to standard MX Red switches in their linear travel but feature internal dampers to reduce typing noise and cushion bottom-outs. They also have a slightly reduced actuation distance of 1.9 mm and a total travel of 3.7 mm but this isn’t really perceptible in normal use. The actuation force is the same at 45 grams. That dampened feel means Silent Reds are not my first choice of switch, but they are audibly quieter and a much better fit for typing or gaming at work or with a roommate nearby.
Typing on the Mecha SF Radiant is satisfying on multiple levels. The keycaps are lightly textured and felt nice against my fingers, and their thick walls lent the experience a more solid, substantial feel. The pillowy bottom-outs were also very nice and allowed me to use the keyboard at work without disturbing my co-workers.
The aluminum case itself plays an important role in the typing experience. Typing on it feels solid and dense, without much empty space inside the shell. Spring ping, which can sometimes be an issue in reverberant alloy cases, was barely audible and disappeared entirely after I lubed the switches (see our article on how to lube switches, but note that I could not remove these so only lubed them through the top). The density of the case enhanced the switch’s silencing effects.
I only wish I could have tried other kinds of switches. If experience is any indicator, the case should lend typing a higher pitch with other switch types, but the lack of hot-swap support made that impossible to verify.
Another high point was the stabilizers. Like most production keyboards, Ducky used plate-mount stabilizers, but they came factory lubed with absolutely minimal rattle. Gaming companies are catching up in this regard (Corsair and Razer now factory lube their stabs), so it’s good to see Ducky keeping its game strong. Stabilizers can make or break the sound of a mechanical keyboard and the Mecha SF Radiant was very good without the need for additional mods.
Transitioning to the Radiant was easy, and I didn’t lose typing speed making the jump. I went through several rounds of tests at 10fastfingers and averaged 103 words per minute. With my Drop Carina keyboard outfitted with tactile Holy Panda switches, ostensibly better for typing due to their pronounced tactile feedback, I averaged 104 words per minute.
Gaming Experience on Ducky Mecha SF Radiant
While the Ducky Mecha SF Radiant isn’t marketed as a gaming keyboard, it offered a solid gaming experience nonetheless. The form factor seems best suited to high-sensitivity shooters like CS:GO but even playing more relaxed games like Valheim, it was just as responsive and reliable as the Corsair K100 RGB Optical Gaming Keyboard I had on hand to test against.
If you prefer to have the entire keyset available to press at once, the keyboard supports n-key rollover or can be limited to only six simultaneous inputs using a DIP switch on the back. You can also permanently disable the Windows key using a second DIP switch or just while in-game using an Fn+Alt combination. The keyboard also supports customizable debounce delay from 5 – 25 ms to balance key chattering with responsiveness.
My go-to genre is first-person shooters, where responsiveness reigns supreme. Even though the Mecha SF Radiant doesn’t boast a 4,000 Hz polling rate like the Corsair K100 (it’s a more standard 1,000 Hz), I was hard-pressed to feel any difference in responsiveness when comparing the two keyboards. Playing Doom Eternal, I was able to double-dash through the air, glory kill, and generally rip and tear just as if I were using a keyboard marketed explicitly for gaming.
Competitive gamers may really appreciate the condensed nature of the Mecha SF Radiant. I’m used to gaming on a compact keyboard, so I spent some time “resetting” with the Corsair K100 before this review. Swapping back to the Mecha SF Radiant made playing Battlefield 5 more comfortable. Having my arms closer together felt immediately more natural. The smaller size also made it easier to manage repositioning the keyboard at a comfortable angle. The Mecha SF Radiant is small enough to move with one hand and doing the same with the K100 was cumbersome at best.
The biggest limitation I found came with World of Warcraft. MMO players and macro fans may find the compact size doesn’t lend itself well to storing lots of macros. The lack of dedicated macro keys is expected on a keyboard designed to save space, but their absence is mitigated by the column of additional keys along the right side. For gaming, these can easily be set to macro commands and thanks to built-in memory support for up to six profiles, it’s possible to maintain different key sets for different games and productivity tasks.
Programming Ducky Mecha SF Radiant
One of the greatest strengths of the Ducky Mecha SF Radiant is also its Achilles’ heel: the lack of dedicated software. It’s an asset to the keyboard because it can be programmed on any machine, regardless of security limitations, and function the same between devices. That means you won’t be missing features because you can’t install the software. At the same time, it means programming requires multiple steps, more time, and is more limited than competing keyboards with full software suites.
With a few different key combinations, you’re able to record macros and remap keys and even set custom lighting schemes. The keyboard supports five programmable profiles in addition to another that’s locked to default settings, so there’s plenty of latitude to create unique layouts and color schemes to match your different use cases.
In the case of macros, holding Fn+Alt+Tab for three seconds puts the keyboard into recording mode. You press the key you want to remap, enter your string, and press Fn+Alt+Tab a second time to end recording. This can also be used to change the position of different keys, though the keyboard also supports swapping the location of popular remaps like Fn, Ctrl, and Alt using another Fn+Alt+K combination.
For lighting, presets can be selected using Fn+Alt+T. The first five are color locked but the second half all allow you to customize the hue using the built-in palette or RGB mixer. The mixer allows for greater control by tapping Red, Green, and Blue values up to 10 times but takes much longer to dial in. Alternatively, Fn+Alt+Spacebar illuminates all of the keys in a rainbow and you can simply tap the color you want. Creating a custom color scheme is also possible following this same process after holding Fn+Alt+Caps Lock and tapping each key you want to illuminate a given color.
If that sounds like a lot, it is. Compared to opening a simple app and hitting a “record” button for macros or “painting” the keys your color of choice, it’s just not as simple or intuitive. I love that it’s possible to completely customize the board without installing anything, but it demands a level of memorization that is initially frustrating.
Bottom Line
The Ducky Mecha SF Radiant isn’t the perfect compact keyboard but it is a very good one. The combination of unique looks, excellent build quality, and sterling typing experience make this an excellent choice for users not ready to take the plunge into custom mechanical keyboards. At the same time, the lack of hot-swap support or optional software really is disappointing for flexibility and ease of use. Still, the pros far outweigh the cons here and this is an incredibly solid buy if you enjoy the look.
At $159, the Radiant doesn’t come cheap. If you’re looking for an aluminum keyboard and don’t mind it coming in a larger size, the HyperX Alloy Origins might be a good fit. Alternatively, if you want something compact but that still has all the bells and whistles of a high-end gaming keyboard, the Corsair K70 RGB TKL is definitely worth a look.
If you want the best of both worlds and don’t mind sticking with the switches you start with, the Ducky Mecha SF Radiant is definitely worth considering.
Playing games on your Raspberry Pi is far easier with a good game controller. Many different game controllers can be connected to your Raspberry Pi using USB. Furthermore, some well-known console controllers can also be linked up using Bluetooth.
In theory, all controllers should work with any Raspberry Pi projects. This covers everything from generic USB joypads to the latest Bluetooth devices. So, you can expect to be able to connect an Xbox One controller and a PS4 controller to your Raspberry Pi. Controllers designed for the PlayStation 3 and Xbox 360 will also work, as will Nintendo gamepads.
Own a PlayStation 5? The new Sony console features a major revision of the much-loved game controller. But despite being fresh out of the box in 2020, the PS5 controller will easily connect to a Raspberry Pi over Bluetooth, just like its predecessor. Meanwhile, Xbox Series S and X controllers are backward compatible, and can be used on an Xbox One console. The new controller design should also work with the Raspberry Pi.
In this tutorial, we’ll look at what you need to do to connect the most widely used game controllers to a Raspberry Pi: those intended for the Xbox One, PS4, Xbox 360 and PS3 consoles.
Connecting the Xbox One Controller Via USB to Raspberry Pi
The Xbox One boasts one of the most popular game controllers available. Also compatible with PC games, this is a well-designed, multi-purpose controller that can be easily connected to a Raspberry Pi, either using USB or Bluetooth.
1. Update and upgrade the software on your Raspberry Pi.
sudo apt update
sudo apt upgrade
2. Connect the controller and launch a game such as Minecraft Pi Edition, which comes preloaded when you install Raspberry Pi OS with all the recommended software. If you can move your character with the controller then everything is ready to go. If not, go to the next step.
3. Install the Xbox One driver and then reboot your Raspberry Pi.
sudo apt install xboxdrv
4. Open your game and test that you can move around.
Connecting the Xbox One / PlayStation 4 and 5 Controller Via Bluetooth
Using a wireless Xbox One controller with the Raspberry Pi is a little more complicated. Two types of wireless Xbox One controller have been released: one uses Microsoft’s proprietary wireless protocol, while the other also supports Bluetooth. How can you tell which is which?
If you have the 1697 wireless model, you’ll need to connect the official Microsoft Xbox Wireless Adapter to your Raspberry Pi. This is a standard USB dongle that should work out of the box. Simply hold the pairing buttons on the adapter and the Xbox One controller to sync, then start playing.
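Not sure whether the Pi has detected the adapter? A quick sanity check (our suggestion, not an official step) is to list the connected USB devices and look for a Microsoft entry; the exact wording varies by adapter revision.
lsusb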
To Connect the Xbox One Bluetooth Controller
1. Update and upgrade the software on your Raspberry Pi.
sudo apt update
sudo apt upgrade
2. Install the Xbox One driver.
sudo apt install xboxdrv
3. Disable ERTM (Enhanced Re-Transmission Mode). While enabled, this Bluetooth feature blocks syncing between the Xbox One controller and your Raspberry Pi.
echo 'options bluetooth disable_ertm=Y' | sudo tee -a /etc/modprobe.d/bluetooth.conf
4. Reboot your Raspberry Pi.
5. Open a terminal and start the bluetooth control tool.
sudo bluetoothctl
6. At the [Bluetooth]# prompt, enable the agent and set it as default.
agent on
default-agent
7. Power up the Xbox One controller and hold the sync button. At the [Bluetooth]# prompt, scan for devices.
scan on
The MAC address should appear, comprising six pairs of letters and numbers followed by “Xbox Wireless Controller.”
8. Use the MAC address to connect the Xbox controller.
connect [YOUR MAC ADDRESS]
9. To save time for future connections, use the trust command to automatically connect.
trust [YOUR MAC ADDRESS]
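If the controller refuses to pair, it’s worth confirming that the ERTM change from step 3 actually took effect after the reboot. This check is our addition rather than part of the official steps: the kernel exposes the current setting, and the command below should print Y.
cat /sys/module/bluetooth/parameters/disable_ertm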
Connecting an Xbox 360 Controller to Raspberry Pi
If you don’t have more recent controllers (or the budget to buy them), it might be easier for you to grab a controller from an older generation of consoles, such as the Xbox 360 or PS3.
1. Update and upgrade the software on your Raspberry Pi.
sudo apt update
sudo apt upgrade
2. Install the Xbox One driver.
sudo apt install xboxdrv
3. Connect your controller via USB and it should just work. Wireless controllers will require a dedicated wireless receiver (the type that is developed for PC use).
Connecting a PlayStation 3 Controller to Raspberry Pi
Connecting a PlayStation 3 controller via USB is straightforward, but Bluetooth access requires some compiling.
1. Update and upgrade the software on your Raspberry Pi.
sudo apt update
sudo apt upgrade
2. Install the libusb-dev software. This library is needed to compile the sixpair tool, which enables the PS3 controller to communicate with the Raspberry Pi over Bluetooth.
sudo apt install libusb-dev
3. Create a folder for the sixpair software, switch to that folder, and download the sixpair.c software.
mkdir ~/sixpair
cd ~/sixpair
wget http://www.pabr.org/sixlinux/sixpair.c
4. Compile the code with gcc.
gcc -o sixpair sixpair.c -lusb
5. Connect the controller to the Pi using its USB cable and run sixpair to configure the Bluetooth connection.
sudo ~/sixpair/sixpair
6. Take note of the MAC address, then disconnect the PS3 controller.
7. Open a terminal and start the bluetooth control tool.
sudo bluetoothctl
8. At the [Bluetooth]# prompt, enable the agent and set it as default.
agent on
default-agent
9. Power up the PlayStation 3 controller and hold the sync button. At the [Bluetooth]# prompt, scan for devices.
scan on
10. The MAC address should appear, comprising six pairs of letters and numbers. Look for your PlayStation 3 controller’s MAC address and use it to connect the controller.
connect [YOUR MAC ADDRESS]
11. To save time for future connections, use the trust command to automatically connect.
trust [YOUR MAC ADDRESS]
For other Bluetooth controllers, meanwhile, generic connections should work. This means that anything – smartphone game controllers, for example – can conceivably be connected using bluetoothctl, but some calibration may be required.
Whatever device you’re using, you may need to test it. To do this, simply use the testing tool in the Linux joystick utility.
sudo apt install joystick
To test your gamepad, ensure that it is connected and run the jstest command to check that each button is registered.
sudo jstest /dev/input/js0
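If jstest complains that /dev/input/js0 doesn’t exist, or you have more than one controller attached, list the available joystick devices and substitute the right index. The joystick package also includes jscal, which handles the calibration mentioned above; the device path here is just an example.
ls /dev/input/js*
jscal -c /dev/input/js0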
This article originally appeared in an issue of Linux Format magazine.
Just two days before Apple gets dragged into a California court to justify its 30 percent App Store fee — and two days after Microsoft axed its 30 percent cut on PC — we’re learning that gaming giant Valve is now facing down lawsuits against its own 30 percent cut and alleged anticompetitive practices with its PC gaming platform Steam.
“Valve abuses its market power to ensure game publishers have no choice but to sell most of their games through the Steam Store, where they are subject to Valve’s 30% toll,” argues indie game developer and Humble Bundle creator Wolfire Games, in a lawsuit filed Tuesday (via Ars Technica).
Much like Epic v. Apple, the new suit argues that a platform owner is using an effective monopoly over the place where people run their software (there, iOS; here, Steam) to dominate and tax an entire separate industry (alternative app / game stores), an industry that could theoretically flourish and produce lower prices for consumers if not for (Apple’s / Valve’s) iron grip.
Wolfire claims that Valve now controls “approximately 75 percent” of the entire PC gaming market, reaping an estimated $6 billion in annual revenue from that 30 percent fee alone — over $15 million per year per Valve employee, assuming the company still has somewhere in the vicinity of the 360 employees it confirmed having five years ago.
As to how Valve might be abusing its power, there’s a laundry list of complaints that you might want to read in full (which is why I’ve embedded the complaint below), but the arguments seem to boil down to:
– Every other company’s attempt to compete with Steam has failed to make a dent, even though many of them offered developers a bigger cut of the profits, such as the Epic Games Store’s 88-percent revenue share
– Steam doesn’t allow publishers to sell PC games and game keys for less money elsewhere
– That in turn means rival game platforms can’t compete on price, which keeps them from getting a foothold
– Most of those rival game stores have largely given up, like how EA and Microsoft have each brought their games back to Steam
– That ensures Steam stays the dominant platform, because companies that could have become competitors are reduced to simply feeding the Steam engine with their games or selling Steam keys
Wolfire says that the Humble Bundle in particular has been a victim of Valve’s practices — the lawsuit claims that “publishers became more and more reluctant to participate in Humble Bundle events, decreasing the quantity and quality of products available to Humble Bundle customers,” because they feared retaliation if Humble Bundle buyers resold their Steam keys on the grey market for cheap — and though Valve once worked with Humble Bundle on a keyless direct integration, the lawsuit claims that Valve abruptly pulled the plug on that partnership with no explanation.
As you’d expect, the lawsuit doesn’t waste much ink considering why gamers might prefer Steam to the likes of EA’s Origin or Microsoft’s Windows Store beyond the simple matter of price; I’d argue most Steam competitors have been somewhat deficient when it comes to addressing PC gamers’ many wants and needs. But that doesn’t excuse Valve’s anticompetitive practices, assuming these claims are true.
Valve didn’t respond to a request for comment.
This isn’t the first lawsuit brought against Valve; a group of individual game buyers filed a fairly similar complaint in January, and I’ve embedded the new amended version of that complaint below as well. But that earlier complaint also accused game companies alongside Valve — this new lawsuit was filed by a game company itself.
Each suit is hoping to win class-action status.
Whether or not these plaintiffs succeed against Valve, the pressure is clearly mounting to reduce these app store fees across the industry, and Valve may have a harder time justifying them than most — it’s seemingly more dominant in the PC gaming space than either Apple or Google are in the smartphone one, even if there are far fewer PC gamers than phone users.
Valve also hasn’t necessarily made a huge concession to game developers so far. In 2018, Valve did adjust its revenue split to give bigger companies more money, reducing its 30 percent cut to 25 percent after a developer racks up $10 million in sales, and down to 20 percent after they hit $50 million. (Apple and Google drop their cuts to 15 percent for developers with under $1 million in sales, theoretically helping smaller developers instead of bigger ones.) But the Epic Games Store only takes 12 percent, and Microsoft’s Windows Store just followed that lead by dropping its 30 percent cut to 12 percent as well.
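To put those tiers in concrete terms, consider a hypothetical game that earns $60 million on Steam: it would pay 30 percent on the first $10 million ($3 million), 25 percent on the next $40 million ($10 million), and 20 percent on the final $10 million ($2 million). That works out to an effective cut of 25 percent, still more than double what Epic or Microsoft now take.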
The EU may also add additional pressure in the future; yesterday, European Commission executive vice president Margrethe Vestager revealed it would also “take an interest in the gaming app market” following its conclusion that Apple has broken EU antitrust laws around music streaming apps. The European Commission already has Valve on its radar, too; it fined the company earlier this year for geo-blocking game sales.
Researchers from two universities have discovered several new variants of Spectre exploits that affect all modern processors from AMD and Intel with micro-op caches. Existing Spectre mitigations do not protect the CPUs against potential attacks that use these vulnerabilities. Meanwhile, researchers believe that mitigating these vulnerabilities will cause more significant performance penalties than the fixes for previous types of Spectre exploits. However, it remains unknown how easy these vulnerabilities are to exploit in the real world, so the danger may be limited to directed attacks.
Three New Types of Potential Spectre Attacks
Scholars from the University of Virginia and University of California San Diego have published a paper describing three new types of potential Spectre attacks using vulnerabilities of micro-op caches (thanks Phoronix for the tip). The team of researchers led by Ashish Venkat discovered that hackers can potentially steal data when a CPU fetches commands from the micro-op cache. Since all modern processors from AMD (since 2017) and Intel (since 2011) use micro-op caches, all of them are prone to a hypothetical attack.
The document lists three new types of potential attacks:
A same-thread cross-domain attack that leaks secrets across the user-kernel boundary;
A cross-SMT thread attack that transmits secrets across two SMT threads running on the same physical core, but different logical cores, via the micro-op cache;
Transient execution attacks that have the ability to leak an unauthorized secret accessed along a misspeculated path, even before the transient instruction is dispatched to execution.
Fixes Going to Hurt
Both AMD and Intel were informed about the vulnerabilities in advance, but so far, no microcode updates or OS patches have been released. In fact, the researchers believe that because the vulnerabilities reside in extremely low-level caches, it will be impossible to fix the weaknesses without severe performance impacts.
The document describes several ways to mitigate the vulnerabilities.
One of the ways is to flush the micro-op cache at domain crossings, but since modern CPUs need to flush the Instruction Translation Lookaside Buffer (iTLB) to flush the micro-op cache, frequent flushing of both will ‘lead to heavy performance consequences, as the processor can make no forward progress until the iTLB refills.’
The second way is to partition micro-op caches based on privileges. However, as the number of protection domains increases, such partitioning would translate into heavy underutilization of the micro-op cache, removing much of its performance advantage.
Yet another way is to implement performance counter-based monitoring that detects anomalies, but the technique is prone to misclassification errors, and frequent probing leads to significant performance degradation.
Low Risk?
One thing to keep in mind is that exploiting micro-op cache vulnerabilities is extremely tricky: such malware would have to bypass all the other software and hardware security measures that modern systems have and then execute a very specific type of attack that is unconventional, to say the least. To that end, the chances that the new Spectre vulnerabilities will lead to widespread wrongdoing are rather low. Instead, they could be used for specific targeted attacks by sophisticated players, like nation-states.
After a controversial blog post in which CEO Jason Fried outlined Basecamp’s new philosophy that prohibited, among other things, “societal and political discussions” on internal forums, company co-founder David Heinemeier Hansson said the company would offer generous severance packages to anyone who disagreed with the new stance. On Friday, it appeared a large number of Basecamp employees were taking Hansson up on his offer: according to The Verge contributing editor Casey Newton’s sources, roughly a third of the company’s 57 employees accepted buyouts today. As of Friday afternoon, 18 people had tweeted they were planning to leave.
Not long after Fried’s Monday blog post went public — and was revised several times amid public backlash online — Hansson outlined the terms of the new severance offer in a separate Wednesday blog post.
Yesterday, we offered everyone at Basecamp an option of a severance package worth up to six months salary for those who’ve been with the company over three years, and three months salary for those at the company less than that. No hard feelings, no questions asked. For those who cannot see a future at Basecamp under this new direction, we’ll help them in every which way we can to land somewhere else.
Among those who announced on Twitter they’re leaving the company are reportedly head of marketing Andy Didorosi, head of design Jonas Downey, and head of customer support Kristin Aardsma. Most cited “recent changes” at the company as their reason for leaving.
“Given the recent changes at Basecamp, I’ve decided to leave my job as Head of Design,” Downey tweeted. “I’ve helped design & build all of our products since 2011, and recently I’ve been leading our design team too.”
I resigned today from my role as Head of Marketing at Basecamp due to recent changes and new policies.
I’ll be returning to entrepreneurship. My DMs are open if you’d like to talk or you can reach me at andy@detroitindie.com
— Andy Didorosi (@ThatDetroitAndy) April 30, 2021
I’ve resigned as Head of Customer Support at Basecamp. I’m four months pregnant, so I’m going to take some time off to build this baby and hang out with my brilliant spouse and child.
— Kristin Aardsma (@kikiaards) April 30, 2021
I have left Basecamp due to the recent changes & policies.
I’ve been doing product design there for 7yrs. The last 3yrs I led the iOS team working alongside @dylanginsburg and @zachwaugh — they are the best.
If you need a product designer, please DM or email me: hi@conor.cc
— Conor Muirhead (@conormuirhead) April 30, 2021
After nearly 8 years, given the recent changes at Basecamp, I’ve decided to leave my job as an Android programmer there. Will eventually be looking for something new, so please feel free to reach out / RT. DMs open.
Thank you all for your support and kindness. It means a lot. ❤️
— Dan Kim (@dankim) April 30, 2021
Software developer John Breen is tracking additional Basecamp departures in this Twitter thread.
The original blog post that started the brouhaha at the tiny company with an outsized voice also detailed how Basecamp would do away with “paternalistic benefits,” committees, and would prohibit “lingering or dwelling on past decisions.” But it was the “societal and political discussions” item that stirred up the most reaction:
Today’s social and political waters are especially choppy. Sensitivities are at 11, and every discussion remotely related to politics, advocacy, or society at large quickly spins away from pleasant. You shouldn’t have to wonder if staying out of it means you’re complicit, or wading into it means you’re a target. These are difficult enough waters to navigate in life, but significantly more so at work. It’s become too much. It’s a major distraction. It saps our energy, and redirects our dialog towards dark places. It’s not healthy, it hasn’t served us well. And we’re done with it on our company Basecamp account where the work happens. People can take the conversations with willing co-workers to Signal, Whatsapp, or even a personal Basecamp account, but it can’t happen where the work happens anymore.
While the company argued that it was just trying to get its own employees focused on work, company founders don’t tend to shy away from “societal and political discussions” online, with Hansson in particular having become a vocal critic of Apple’s App Store policies, to the point that he has testified in favor of antitrust regulation.
As The Verge later reported, the initial motivation for the letter stemmed from internal disagreement over a controversial list of “funny names” of Basecamp customers. Several of the names on the list, which resurfaced several times over the years and of which management was well aware, were of Asian or African origin. Employees considered their inclusion inappropriate at best, and racist at worst.
Hansson acknowledged the list and tried to move on (you can read his internal communications here), but employees pressed the issue.
Hansson did not reply to a request for comment from The Verge on Friday.
The University of Minnesota’s path to banishment was long, turbulent, and full of emotion
On the evening of April 6th, a student emailed a patch to a list of developers. Fifteen days later, the University of Minnesota was banned from contributing to the Linux kernel.
“I suggest you find a different community to do experiments on,” wrote Linux Foundation fellow Greg Kroah-Hartman in a livid email. “You are not welcome here.”
How did one email lead to a university-wide ban? I’ve spent the past week digging into this world — the players, the jargon, the university’s turbulent history with open-source software, the devoted and principled Linux kernel community. None of the University of Minnesota researchers would talk to me for this story. But among the other major characters — the Linux developers — there was no such hesitancy. This was a community eager to speak; it was a community betrayed.
The story begins in 2017, when a systems-security researcher named Kangjie Lu became an assistant professor at the University of Minnesota.
Lu’s research, per his website, concerns “the intersection of security, operating systems, program analysis, and compilers.” But Lu had his eye on Linux — most of his papers involve the Linux kernel in some way.
The Linux kernel is, at a basic level, the core of any Linux operating system. It’s the liaison between the OS and the device on which it’s running. A Linux user doesn’t interact with the kernel, but it’s essential to getting things done — it manages memory usage, writes things to the hard drive, and decides what tasks can use the CPU when. The kernel is open-source, meaning its millions of lines of code are publicly available for anyone to view and contribute to.
Well, “anyone.” Getting a patch onto people’s computers is no easy task. A submission needs to pass through a large web of developers and “maintainers” (thousands of volunteers, who are each responsible for the upkeep of different parts of the kernel) before it ultimately ends up in the mainline repository. Once there, it goes through a long testing period before eventually being incorporated into the “stable release,” which will go out to mainstream operating systems. It’s a rigorous system designed to weed out both malicious and incompetent actors. But — as is always the case with crowdsourced operations — there’s room for human error.
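For a sense of what entering that web looks like in practice, here’s a rough sketch of a typical submission (the patch filename and maintainer address are illustrative): a contributor formats their latest commit as an emailed patch, asks the kernel tree’s get_maintainer.pl script who should receive it, and mails it to those maintainers and lists for review.
git format-patch -1
./scripts/get_maintainer.pl 0001-example-fix.patch
git send-email --to=maintainer@example.org --cc=linux-kernel@vger.kernel.org 0001-example-fix.patch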
Some of Lu’s recent work has revolved around studying that potential for human error and reducing its influence. He’s proposed systems to automatically detect various types of bugs in open source, using the Linux kernel as a test case. These experiments tend to involve reporting bugs, submitting patches to Linux kernel maintainers, and reporting their acceptance rates. In a 2019 paper, for example, Lu and two of his PhD students, Aditya Pakki and Qiushi Wu, presented a system (“Crix”) for detecting a certain class of bugs in OS kernels. The trio found 278 of these bugs with Crix and submitted patches for all of them — the fact that maintainers accepted 151 meant the tool was promising.
On the whole, it was a useful body of work. Then, late last year, Lu took aim not at the kernel itself, but at its community.
In “On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits,” Lu and Wu explained that they’d been able to introduce vulnerabilities into the Linux kernel by submitting patches that appeared to fix real bugs but also introduced serious problems. The group called these submissions “hypocrite commits.” (Wu didn’t respond to a request for comment for this story; Lu referred me to Mats Heimdahl, the head of the university’s department of computer science and engineering, who referred me to the department’s website.)
The explicit goal of this experiment, as the researchers have since emphasized, was to improve the security of the Linux kernel by demonstrating to developers how a malicious actor might slip through their net. One could argue that their process was similar, in principle, to that of white-hat hacking: play around with software, find bugs, let the developers know.
But the loudest reaction the paper received, on Twitter and across the Linux community, wasn’t gratitude — it was outcry.
“That paper, it’s just a lot of crap,” says Greg Scott, an IT professional who has worked with open-source software for over 20 years.
“In my personal view, it was completely unethical,” says security researcher Kenneth White, who is co-director of the Open Crypto Audit Project.
The frustration had little to do with the hypocrite commits themselves. In their paper, Lu and Wu claimed that none of their bugs had actually made it to the Linux kernel — in all of their test cases, they’d eventually pulled their bad patches and provided real ones. Kroah-Hartman, of the Linux Foundation, contests this — he told The Verge that one patch from the study did make it into repositories, though he notes it didn’t end up causing any harm.
Still, the paper hit a number of nerves among a very passionate (and very online) community when Lu first shared its abstract on Twitter. Some developers were angry that the university had intentionally wasted the maintainers’ time — which is a key difference between Minnesota’s work and a white-hat hacker poking around the Starbucks app for a bug bounty. “The researchers crossed a line they shouldn’t have crossed,” Scott says. “Nobody hired this group. They just chose to do it. And a whole lot of people spent a whole lot of time evaluating their patches.”
“If I were a volunteer putting my personal time into commits and testing, and then I found out someone’s experimenting, I would be unhappy,” Scott adds.
Then, there’s the dicier issue of whether an experiment like this amounts to human experimentation. It doesn’t, according to the University of Minnesota’s Institutional Review Board. Lu and Wu applied for approval in response to the outcry, and they were granted a formal letter of exemption.
The community members I spoke to didn’t buy it. “The researchers attempted to get retroactive Institutional Review Board approval on their actions that were, at best, wildly ignorant of the tenets of basic human subjects’ protections, which are typically taught by senior year of undergraduate institutions,” says White.
“It is generally not considered a nice thing to try to do ‘research’ on people who do not know you are doing research,” says Kroah-Hartman. “No one asked us if it was acceptable.”
That thread ran through many of the responses I got from developers — that regardless of the harms or benefits that resulted from its research, the university was messing around not just with community members but with the community’s underlying philosophy. Anyone who uses an operating system places some degree of trust in the people who contribute to and maintain that system. That’s especially true for people who use open-source software, and it’s a principle that some Linux users take very seriously.
“By definition, open source depends on a lively community,” Scott says. “There have to be people in that community to submit stuff, people in the community to document stuff, and people to use it and to set up this whole feedback loop to constantly make it stronger. That loop depends on lots of people, and you have to have a level of trust in that system … If somebody violates that trust, that messes things up.”
After the paper’s release, it was clear to many Linux kernel developers that something needed to be done about the University of Minnesota — previous submissions from the university needed to be reviewed. “Many of us put an item on our to-do list that said, ‘Go and audit all umn.edu submissions,’” said Kroah-Hartman, who was, above all else, annoyed that the experiment had put another task on his plate. But many kernel maintainers are volunteers with day jobs, and a large-scale review process didn’t materialize. At least, not in 2020.
On April 6th, 2021, Aditya Pakki, using his own email address, submitted a patch.
There was some brief discussion from other developers on the email chain, which fizzled out within a few days. Then Kroah-Hartman took a look. He was already on high alert for bad code from the University of Minnesota, and Pakki’s email address set off alarm bells. What’s more, the patch Pakki submitted didn’t appear helpful. “It takes a lot of effort to create a change that looks correct, yet does something wrong,” Kroah-Hartman told me. “These submissions all fit that pattern.”
So on April 20th, Kroah-Hartman put his foot down.
“Please stop submitting known-invalid patches,” he wrote to Pakki. “Your professor is playing around with the review process in order to achieve a paper in some strange and bizarre way.”
Maintainer Leon Romanovsky then chimed in: he’d taken a look at four previously accepted patches from Pakki and found that three of them added “various severity” security vulnerabilities.
Kroah-Hartman hoped that his request would be the end of the affair. But then Pakki lashed back. “I respectfully ask you to cease and desist from making wild accusations that are bordering on slander,” he wrote to Kroah-Hartman in what appears to be a private message.
Kroah-Hartman responded. “You and your group have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work. Now you submit a series of obviously-incorrect patches again, so what am I supposed to think of such a thing?” he wrote back on the morning of April 21st.
Later that day, Kroah-Hartman made it official. “Future submissions from anyone with a umn.edu address should be default-rejected unless otherwise determined to actually be a valid fix,” he wrote in an email to a number of maintainers, as well as Lu, Pakki, and Wu. Kroah-Hartman reverted 190 submissions from Minnesota affiliates — 68 couldn’t be reverted but still needed manual review.
It’s not clear what experiment the new patch was part of, and Pakki declined to comment for this story. Lu’s website includes a brief reference to “superfluous patches from Aditya Pakki for a new bug-finding project.”
What is clear is that Pakki’s antics have finally set the delayed review process in motion; Linux developers began digging through all patches that university affiliates had submitted in the past. Jonathan Corbet, the founder and editor in chief of LWN.net, recently provided an update on that review process. Per his assessment, “Most of the suspect patches have turned out to be acceptable, if not great.” Of over 200 patches that were flagged, 42 are still set to be removed from the kernel.
Regardless of whether their reaction was justified, the Linux community gets to decide if the University of Minnesota affiliates can contribute to the kernel again. And that community has made its demands clear: the school needs to convince them its future patches won’t be a waste of anyone’s time.
What will it take to do that? In a statement released the same day as the ban, the university’s computer science department suspended its research into Linux-kernel security and announced that it would investigate Lu’s and Wu’s research method.
But that wasn’t enough for the Linux Foundation. Mike Dolan, Linux Foundation SVP and GM of projects, wrote a letter to the university on April 23rd, which The Verge has viewed. Dolan made four demands. He asked that the school release “all information necessary to identify all proposals of known-vulnerable code from any U of MN experiment” to help with the audit process. He asked that the paper on hypocrite commits be withdrawn from publication. He asked that the school ensure future experiments undergo IRB review before they begin, and that future IRB reviews ensure the subjects of experiments provide consent, “per usual research norms and laws.”
Two of those demands have since been met. Wu and Lu have retracted the paper and have released all the details of their study.
The university’s status on the third and fourth counts is unclear. In a letter sent to the Linux Foundation on April 27th, Heimdahl and Loren Terveen (the computer science and engineering department’s associate department head) maintain that the university’s IRB “acted properly,” and argue that human-subjects research “has a precise technical definition according to US federal regulations … and this technical definition may not accord with intuitive understanding of concepts like ‘experiments’ or even ‘experiments on people.’” They do, however, commit to providing more ethics training for department faculty. Reached for comment, university spokesperson Dan Gilchrist referred me to the computer science and engineering department’s website.
Meanwhile, Lu, Wu, and Pakki apologized to the Linux community this past Saturday in an open letter to the kernel mailing list, which contained some apology and some defense. “We made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for hypocrite patches,” the researchers wrote, before going on to reiterate that they hadn’t put any vulnerabilities into the Linux kernel, and that their other patches weren’t related to the hypocrite commits research.
Kroah-Hartman wasn’t having it. “The Linux Foundation and the Linux Foundation’s Technical Advisory Board submitted a letter on Friday to your university,” he responded. “Until those actions are taken, we do not have anything further to discuss.”
From the University of Minnesota researchers’ perspective, they didn’t set out to troll anyone — they were trying to point out a problem with the kernel maintainers’ review process. Now the Linux community has to reckon with the fallout of their experiment and what it means about the security of open-source software.
Some developers rejected the University of Minnesota researchers’ perspective outright, claiming the fact that it’s possible to fool maintainers should be obvious to anyone familiar with open-source software. “If a sufficiently motivated, unscrupulous person can put themselves into a trusted position of updating critical software, there’s honestly little that can be done to stop them,” says White, the security researcher.
On the other hand, it’s clearly important to be vigilant about potential vulnerabilities in any operating system. And for others in the Linux community, as much ire as the experiment drew, its point about hypocrite commits appears to have been somewhat well taken. The incident has ignited conversations about patch-acceptance policies and how maintainers should handle submissions from new contributors, across Twitter, email lists, and forums. “Demonstrating this kind of ‘attack’ has been long overdue, and kicked off a very important discussion,” wrote maintainer Christoph Hellwig in an email thread with other maintainers. “I think they deserve a medal of honor.”
“This research was clearly unethical, but it did make it plain that the OSS development model is vulnerable to bad-faith commits,” one user wrote in a discussion post. “It now seems likely that Linux has some devastating back doors.”
Corbet also called for more scrutiny around new changes in his post about the incident. “If we cannot institutionalize a more careful process, we will continue to see a lot of bugs, and it will not really matter whether they were inserted intentionally or not,” he wrote.
And even for some of the paper’s most ardent critics, the process did prove a point — albeit, perhaps, the opposite of the one Wu, Lu, and Pakki were trying to make. It demonstrated that the system worked.
Eric Mintz, who manages 25 Linux servers, says this ban has made him much more confident in the operating system’s security. “I have more trust in the process because this was caught,” he says. “There may be compromises we don’t know about. But because we caught this one, it’s less likely we don’t know about the other ones. Because we have something in place to catch it.”
To Scott, the fact that the researchers were caught and banned is an example of Linux’s system functioning exactly the way it’s supposed to. “This method worked,” he insists. “The SolarWinds method, where there’s a big corporation behind it, that system didn’t work. This system did work.”
“Kernel developers are happy to see new tools created and — if the tools give good results — use them. They will also help with the testing of these tools, but they are less pleased to be recipients of tool-inspired patches that lack proper review,” Corbet writes. The community seems to be open to the University of Minnesota’s feedback — but as the Foundation has made clear, it’s on the school to make amends.
“The university could repair that trust by sincerely apologizing, and not fake apologizing, and by maybe sending a lot of beer to the right people,” Scott says. “It’s gonna take some work to restore their trust. So hopefully they’re up to it.”
(Pocket-lint) – The Amazfit T-Rex Pro is a sportswatch built for outdoor lovers. Its maker, Zepp Health, has sought to make it a better companion for trail runs, hikes and open water swims than the original 2020 T-Rex model – by making the Pro better suited to surviving in extreme conditions and adding new sensors to offer richer metrics too.
A core part of the T-Rex Pro is its affordable price point – it’s significantly cheaper than most outdoor watches, so could save you some money if you wanted something to take out on adventures. But while the price and feature set might read as appealing, does this T-Rex bring future goodness or is it a bit of a dinosaur at launch?
Design & Display
Measures: 47.7mm (diameter) x 13.5mm (thickness)
1.3-inch touchscreen display, 360 x 360 resolution
10ATM waterproofing (to 100m depth)
Weighs: 59.4g
The T-Rex Pro largely sticks to the same design formula as the T-Rex. There’s a similar-sized 47mm polycarbonate case, matched with a 22mm silicone rubber strap, all weighing in at 59.4g. To put that into perspective: the 47mm Garmin Fenix 6 weighs 80g, and the Polar Grit X weighs 66g. So the T-Rex Pro is a lighter watch thanks to that plastic case. We’d almost like a bit more weight to it, if anything.
There's also a chunky bezel with exposed machined screws to emphasise its rugged credentials, and it's passed more military-grade tests than the original T-Rex to make it better suited to the outdoors: the Pro passes 15 such tests, up from the original's 12, and is built to handle extreme humidity and freezing temperatures.
Along with that improved toughness, the water-resistance rating has been ramped up too, offering protection down to 100 metres (10ATM), where the non-Pro T-Rex can only be submerged to 50 metres.
At the heart of that light, rugged, chunky exterior is a 1.3-inch AMOLED touchscreen display, which can be set to always-on. Tempered glass and an anti-fingerprint coating have been used to make it a more durable and smudge-free display, and we can confirm it doesn't pick up the unattractive smudgy look its predecessor suffered from.
It’s a bright and colourful screen, with good viewing angles. In bright outdoor light, that vibrancy isn’t quite as punchy as in more favourable conditions, but it’s on the whole a good quality display to find on a watch at this price.
Around the back is where you’ll find the optical sensors and the charging pins for when you need to power things back up again. It uses the same slim charging setup as the T-Rex, which magnetically clips itself in place and securely stays put when it’s time to charge.
Fitness & Features
GPS, GLONASS, Beidou, Galileo satellite system support
Firstbeat training analysis
Heart rate monitor
SPO2 sensor
In true Amazfit fashion, the T-Rex Pro goes big on sports modes – and includes the kinds of sensors that should make it a good workout companion.
There are 100 sports modes, up from just the 14 included in the standard T-Rex. It still covers running, cycling and swimming (pool and open water), and adds profiles for surfing, dance, and indoor activities like Pilates.
The majority of these new modes offer just the basics in terms of metrics, though some, like surfing and hiking, add real-time extras such as speed and ascent/descent data. The addition of an altimeter means you can capture richer elevation data, which is useful if you're a fan of heading up mountains and hilly terrain.
For outdoor tracking, there's support for four satellite systems, with GPS, GLONASS, Beidou and Galileo all on board to improve positioning accuracy. There are no navigation features to point you in the right direction, though, nor can you upload routes to follow on the watch.
For road and off-road runs, we found core metrics were reliable during our testing, although GPS-based distance tracking came up a little short compared to a Garmin Enduro sportswatch, and we also had issues generating maps of our routes in the app.
Swim tracking metrics were generally reliable, and it was a similar story for indoor bike and rowing sessions. In the pool, it was a couple of lengths short of the Enduro's swim tracking, though stroke counts for indoor rowing largely matched what we got from a Hydrow rowing machine.
But when you dig a little deeper beyond core metrics, some of the T-Rex Pro’s data seems a little questionable. If you’re happy to stick to the basics, though, then the Pro does a good enough job.
Along with manual tracking, there's support for automatic exercise recognition for eight of those sports modes, something we've seen crop up on Fitbit, Garmin and Samsung smartwatches with varying success. On the T-Rex Pro, you select which activities, such as running, swimming and indoor rowing, should be detected automatically. Zepp Health notes that accidental recognition can happen with some activities, for example when you jump on a bus or into a car; fortunately, that wasn't the case for us.
Zepp Health's newest BioTracker 2 optical sensor powers a host of heart-rate features beyond continuous monitoring and measuring effort levels during exercise. It drives the PAI scores, which seek to shift attention away from counting steps and towards regularly raising your heart rate through exercise. The same sensor takes heart rate variability measurements to track stress levels, and feeds training insights, like those found on Garmin watches, that generate VO2 Max scores, training effect, training load and recovery times.
As for the reliability of that heart rate monitoring, the Amazfit is better suited to resting and continuous heart rate data than to exercise tracking and those additional training and fitness insights. In our testing it generally posted higher maximum and lower average heart rate readings than a Garmin HRM-Pro chest strap. Those discrepancies were enough to put us in different heart rate zones, which undermines the usefulness of the training insights and PAI scores.
That sensor also unlocks blood oxygen measurements, with a dedicated SpO2 app on board for on-the-spot readings, and it can alert you when you hit major altitude changes. We didn't get high enough to trigger those alerts, but the on-the-spot measurements largely matched up against a pulse oximeter.
You'll get the staple activity tracking features here too, such as daily step counts and sleep monitoring, including naps, sleep stages and breathing quality; the last of these is tagged as a beta feature and makes use of the new onboard SpO2 sensor.
We found step counts were at times close to those of a Fitbit smartwatch, but on days where we registered longer step totals the gap grew much bigger.
When you’re not tracking your fitness, the Pro does do its duty as a smartwatch too. It runs on Zepp Health’s own RTOS software – and while it might not be the most feature-rich smartwatch experience, it will give you a little more than the basics.
Google Android and Apple iPhone users can view notifications, control music playing on their phone, set up alarms and reminders, and change watch faces. You don't get payments, downloadable apps, a music player or a smart assistant, features which have appeared on some other Amazfit watches.
Notification support is of the basic kind, letting you view notifications from native and third-party apps but not respond to them. They're easy to read, but what you can read varies based on the type of notification. If you have multiple notifications from the same app, it struggles to display them all and merely tells you that you have multiple messages. Music controls work well, as they do on other Amazfit watches, and features like weather forecasts and watch faces are well optimised for the touchscreen display.
Performance & Battery Life
Up to 18 days in typical usage
Up to 9 days in heavy usage
40 hours of GPS battery life
The T-Rex Pro features a 390mAh capacity battery – matching what’s packed into the T-Rex. That should give you 18 days in typical usage, 9 days in heavy usage, with an impressive 40 hours of GPS battery life.
Like other Amazfit watches, those battery numbers tend to be based on some very specific lab testing scenarios, and in our experience they've always felt a little on the generous side. In our time with the T-Rex Pro, we got to around the 10-day mark on a single charge. That was with regular GPS tracking, continuous heart rate monitoring, stress monitoring and the richer sleep tracking enabled, plus the screen on max brightness, though not in always-on mode.
The standard T-Rex felt good for a solid week in similar conditions, against its claimed 20 days of typical usage; the Pro, by comparison, gets you beyond a week even with some of the more demanding features in use.
Things have improved on the GPS battery front as well. An hour of GPS use typically knocked just under 10 per cent off the Pro's battery, which works out at a little over 10 hours of tracking, while the T-Rex usually lost 10 per cent in 30 minutes, or roughly five hours in total. That's well short of the promised 40 hours, but the Pro clearly holds up better than its predecessor when it comes to tracking.
Verdict
The T-Rex Pro is a solid outdoor watch that's missing the one key ingredient that would make it a great one: there are no maps to point you in the right direction when you think you're lost.
Otherwise, if you want something that offers a durable design and can track your outdoor activities, then the T-Rex Pro’s chunky-but-light design will no doubt appeal to adventurers on a budget. Its fitness and sports tracking features by and large do a good enough job too.
So if you're hoping for an experience that rivals what the Garmin Fenix, Instinct and the likes of the Polar Grit X can offer, this T-Rex isn't quite the full package. But that's reflected in the price, which is so much lower that the compromises are easier to accept.
Also consider
Garmin Instinct Solar
Garmin's outdoor watch sits beneath the pricier Fenix and still costs considerably more than the T-Rex Pro, but it gives you those navigation features and great, long battery life too.
Polar Grit X
The Grit X will give you navigation features, a light design, and help you fuel for long runs and hikes to make sure you’re not running on empty.
VMware updated us on its progress on making Fusion compatible with Apple’s M1 chip this week. The company said it’s committed to “delivering a Tech Preview of VMware Fusion for macOS on Apple silicon this year,” but it’s not clear if that version of the tool will support Windows 10 on Arm, because of Microsoft’s licensing terms.
This isn't the first time VMware has warned against M1-equipped Mac owners running Windows 10 on Arm. VMware product line manager Michael Roy said earlier this month that "It's uncharted waters, so everyone is treading lightly… Like, you can't even BUY Windows for ARM, and folks using it who aren't OEMs could be violating EULA… we're not into doing that for the sake of a press release…"
So don’t expect VMware to follow Parallels in enabling Windows 10 on Arm support for M1-equipped Macs until Microsoft gives it the go-ahead. Roy said in the official announcement that VMware has “reached out to Microsoft for comment and clarification on the matter,” and that the company is “confident that if Microsoft offers Windows on Arm licenses more broadly, we’ll be ready to officially support it.”
For its part, Microsoft seems content not to commit to bringing Windows to the latest Macs. Apple said in November 2020 that its silicon is ready for Windows; it’s simply up to Microsoft to update the operating system to natively support the M1 chip. Now we have two leading virtualization software makers either moving forward without Microsoft (Parallels) or publicly calling for a verdict on the issue (VMware).
But this week’s announcement wasn’t all about Windows. The next major update to VMware Fusion is set to support Linux-based operating systems, and that progress appears to be going well. Roy said that he could boot seven Arm-based VMs—two command-line interfaces and five full desktops “configured with 4CPU and 8GB of RAM”—on a battery-powered MacBook Air that doesn’t even include a fan.
“Of course, just booting a bunch of VMs that are mostly idle isn’t quite a ‘real world experience’, nor is it the same as doing some of the stress testing that we perform in the leadup to a release,” Roy said. “Even with that said, and note that I’m using ‘debug’ builds which perform slower, in my 12 years at VMware I’ve never seen VMs boot and run like this. So we’re very encouraged by our early results, and seriously can’t wait to get it on every Apple silicon equipped Mac out there.” (Emphasis his.)
But there are some caveats. VMware Fusion doesn’t “currently have things like 3D hardware accelerated graphics,” Roy said, “and other features that require Tools which Fusion users on Intel Macs have come to expect.” The company also doesn’t plan to offer x86 emulation via Fusion—which means M1-equipped Mac owners won’t be able to install Windows or Linux .ISOs meant for the architecture.
Roy said VMware plans to release a preview of an M1-compatible version of Fusion "before the end of this year." The company should offer more information about its progress toward supporting Apple silicon via the VMware Technology Network and Twitter "in the coming months." Maybe that will give Microsoft enough time to publicly decide whether or not it wants to make it easier to run Windows on the latest Macs.
The Raspberry Pi Foundation released its Sense HAT add-on back in 2015, yet this board remains one of the best Raspberry Pi HATs because it still packs a full scientific platform and an 8×8 RGB LED matrix for a little fun in your Raspberry Pi projects. The Sense HAT carries sensors for temperature, humidity, acceleration, orientation and air pressure, plus that RGB LED matrix and a five-way joystick that can be used in our projects.
To control the Sense HAT we used Python, but we can also use Scratch and Node-RED. The Sense HAT was developed alongside a project called Astro Pi, which saw two Raspberry Pi B+ boards sent to the International Space Station. These two Pis had their own Sense HAT boards, official Pi cameras and custom aluminium cases designed to protect and cool them in space. Projects written by children across the world were run on those two Raspberry Pis; the project still exists today, and you can take part via https://astro-pi.org/
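Before the tutorial proper, here's a minimal taste of how simple the Python module is: a short sketch of ours (not part of the tutorial's projects) that reads the environmental sensors using the sense_hat module's get_temperature(), get_humidity() and get_pressure() calls. Note that the temperature reading is taken close to the Pi's CPU, so it tends to read a little warm.
from sense_hat import SenseHat
sense = SenseHat()
# Each sensor call returns a float
temperature = sense.get_temperature()  # degrees Celsius
humidity = sense.get_humidity()        # percent relative humidity
pressure = sense.get_pressure()        # millibars
print("Temperature: %.1f C" % temperature)
print("Humidity: %.1f %%" % humidity)
print("Pressure: %.1f mbar" % pressure)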
In this tutorial we shall introduce the board, show text on the LED matrix and learn how to read accelerometer data for a classic game of chance.
For this project you will need
Any Raspberry Pi that has 40 GPIO pins
A Sense HAT
Raspberry Pi OS on a microSD card
If you have never set up a Raspberry Pi before, see our articles on how to set up a Raspberry Pi for the first time or how to do a headless Raspberry Pi install, which doesn’t require a keyboard, mouse or screen.
Connecting the Sense HAT to a Raspberry Pi
Connecting a Sense HAT to the Raspberry Pi is simple. With the Raspberry Pi powered off, connect the Sense HAT to all of the GPIO pins, ensuring that the Sense HAT perfectly overlaps the Raspberry Pi.
Use the included standoffs to securely mount the Sense HAT. Now attach your keyboard, mouse, HDMI, microSD card and finally power to boot the Raspberry Pi to the desktop. As we are using the latest version of Raspberry Pi OS, we do not need to install any software or Python packages; on older images, the module can be installed with sudo apt install sense-hat.
Writing a Hello World Program with Sense HAT
The first project with any new piece of tech or software is "Hello World". It proves that our kit is working and that everything is ready for us to go further. Here that means a simple scrolling message, written in your preferred Python editor: Thonny, IDLE, Mu or even a plain text editor. Create a new file and call it text_scroll.py. Remember to save often!
1. Import the SenseHat class from the sense_hat module, then create an object, “sense” for easy use of the module.
from sense_hat import SenseHat
sense = SenseHat()
2. Create two objects to store the RGB colour values for red and white. These objects are tuples: immutable data structures which, once created, cannot be updated, only replaced. Each tuple stores the values for one colour in the (red, green, blue) format that the SenseHat object expects.
red = (255, 0, 0)
white = (255, 255, 255)
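If you want to sanity-check a colour tuple at this point, sense.clear() optionally accepts a colour and floods all 64 LEDs with it. This quick test is our own aside, not part of the project code:
sense.clear(red)  # fill the whole 8x8 matrix with red
sense.clear()     # with no argument, every LED is turned off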
3. Create a variable to store a message that will scroll across the screen.
message = "Hello World"
4. Create an exception handler and loop. The handler will try to run the code indented within, and the loop will continuously run project code.
try:
    while True:
5. Inside the loop, use the show_message function to print the message to the screen. In this case it will scroll "Hello World", with the text colour set to red and the background to white, both via the tuples. The scroll speed is set to 0.1, slow enough to be read. Note that the argument names use the UK English spelling of colour.
sense.show_message(message, text_colour=red, back_colour=white, scroll_speed=0.1)
6. Outside of the loop, create an exception handler, in this case for KeyboardInterrupt. When the user presses CTRL + C, it will stop the code and clear the Sense HAT's LED matrix.
except KeyboardInterrupt:
    sense.clear()
The code for this project should look like this.
from sense_hat import SenseHat
sense = SenseHat()
red = (255, 0, 0)
white = (255, 255, 255)
message = "Hello World"
try:
    while True:
        sense.show_message(message, text_colour=red, back_colour=white, scroll_speed=0.1)
except KeyboardInterrupt:
    sense.clear()
7. Save the code and run it via your Python editor. Thonny and Mu have a run/play button; IDLE uses F5 or the Run menu. "Hello World" should now scroll across the LED matrix. When finished, press CTRL + C to clear the matrix.
Magic 8 Ball on Raspberry Pi
The Magic 8 Ball is a classic game. We ask a question out loud, then shake the 8 ball. In a few seconds a message floats to a viewing portal, ready to read. With the Sense HAT we can make a modern-day version which uses raw data from the accelerometer to determine that it has been shaken.
Create a new Python project in your favourite editor, call the project 8ball.py and remember to save often.
1. Import the SenseHat class from the sense_hat module, then import the choice function from the random module. Next create an object, “sense” for easy use of the module.
from sense_hat import SenseHat
from random import choice
sense = SenseHat()
2. As before, create two tuples to store the RGB colour values for red and white, in the format that the SenseHat object expects.
red = (255, 0, 0)
white = (255, 255, 255)
3. Create a list, "answers", that stores five text strings: the answers to our eager player's questions. Lists are similar to what other programming languages call arrays. A list stores data using an index which starts at zero, so the first item is at position zero and each subsequent item has its own numerical position. Unlike tuples, lists can be created, destroyed and updated.
answers = ["Not likely", "Chances are slim", "Maybe", "Quite possibly", "Certainly"]
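To get a feel for how list indexing and random selection behave, here's a small illustrative example of ours you could try in a Python shell; it isn't part of the project code.
from random import choice
answers = ["Not likely", "Chances are slim", "Maybe", "Quite possibly", "Certainly"]
print(answers[0])       # index zero holds the first item: "Not likely"
print(answers[4])       # the fifth item sits at index four: "Certainly"
print(choice(answers))  # choice() returns a randomly selected item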
4. Create an exception handler and loop. The handler will try to run the code indented within, and the loop will continuously run project code.
try:
    while True:
5. Create an object, “acceleration” which is used to get raw accelerometer data from the Sense HAT’s onboard accelerometer.
acceleration = sense.get_accelerometer_raw()
6. Create three variables, x,y,z to store the raw accelerometer data for each axis.
x = acceleration['x']
y = acceleration['y']
z = acceleration['z']
7. Using abs(), update x, y and z to store absolute values, ignoring whether a reading is positive or negative. The raw values can be either, depending on the orientation of the Sense HAT; our code doesn't need to understand negative numbers, just whether a value crosses the threshold that triggers an answer.
x = abs(x)
y = abs(y)
z = abs(z)
8. Create a simple conditional test that checks the values stored in x, y and z; if any of them is greater than 1, the indented line of code runs.
if x > 1 or y > 1 or z > 1:
9. Using the show_message function, employ the choice function to randomly select a string from the "answers" list. This is scrolled across the LED matrix in red text on a white background, set using the red and white tuples we created earlier.
sense.show_message(choice(answers), text_colour=red, back_colour=white, scroll_speed=0.05)
10. Use an else condition to clear the LED matrix when the Sense HAT has not been shaken.
else:
    sense.clear()
11. Create an exception handler, in this case for KeyboardInterrupt. When the user presses CTRL + C, it will stop the code and clear the Sense HAT's LED matrix.
except KeyboardInterrupt:
    sense.clear()
The code for this project should look like this.
from sense_hat import SenseHat
from random import choice
sense = SenseHat()
red = (255, 0, 0)
white = (255, 255, 255)
answers = ["Not likely", "Chances are slim", "Maybe", "Quite possibly", "Certainly"]
try:
    while True:
        acceleration = sense.get_accelerometer_raw()
        x = acceleration['x']
        y = acceleration['y']
        z = acceleration['z']
        x = abs(x)
        y = abs(y)
        z = abs(z)
        if x > 1 or y > 1 or z > 1:
            sense.show_message(choice(answers), text_colour=red, back_colour=white, scroll_speed=0.05)
        else:
            sense.clear()
except KeyboardInterrupt:
    sense.clear()
12. Save and run the code. Pick up the Sense HAT and give it a little shake. The answer to your question is now but a shake away.
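As a possible extension, entirely our own suggestion rather than part of the original tutorial, you could trigger an answer with the Sense HAT's joystick instead of a shake. The sketch below uses the module's sense.stick.wait_for_event() call, which blocks until the joystick is used and returns an event with direction and action attributes.
from sense_hat import SenseHat
from random import choice
sense = SenseHat()
red = (255, 0, 0)
white = (255, 255, 255)
answers = ["Not likely", "Chances are slim", "Maybe", "Quite possibly", "Certainly"]
try:
    while True:
        event = sense.stick.wait_for_event()
        # Respond only when the stick is pressed in; ignore "released" and "held"
        if event.direction == "middle" and event.action == "pressed":
            sense.show_message(choice(answers), text_colour=red, back_colour=white, scroll_speed=0.05)
except KeyboardInterrupt:
    sense.clear()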
This article originally appeared in an issue of Linux Format magazine.
For most of its history, the Monster Hunter franchise saw support from a select and dedicated group of fans, with the series existing within its own moderately-sized niche. With Monster Hunter World exploding in popularity and becoming the best-selling Capcom game of all time, all eyes were on the series' next entry. Monster Hunter Rise has now sold over 6 million copies in just one month.
Just one month after its March 26th release, Monster Hunter Rise has sold 6 million copies, a massive achievement on many fronts: it is now the 8th best-selling Capcom game in the company's 40-year history.
This is doubly impressive considering that Rise is currently only available on the Nintendo Switch, which makes it Capcom's second best-selling single-platform title, just 300,000 copies behind Street Fighter II on the SNES.
With its current trajectory it wouldn’t be surprising to see Monster Hunter Rise take the number 2 slot on the publisher’s best selling games of all time. Currently, Monster Hunter World sits at the top with 16.8 million copies sold. Second place meanwhile goes to Resident Evil 7 Biohazard at 8.5 million.
With Rise only having been out for a month, and with it set to come to PC some time next year, it is clear that despite being over 15 years old, the Monster Hunter franchise is just getting started.
KitGuru says: Have you picked up Monster Hunter Rise? Are you surprised by its sales success? Do you think it will outsell World? Let us know down below.
Starfield is the next big game from Bethesda Game Studios. Teased all the way back in 2018, nothing else has been shown of the game, leading many fans to assume it is still a long way away. According to one leaker, however, Microsoft is hoping to get the game out of the gate by the end of this year.
According to Jez Corden (of WindowsCentral and the Xbox Two podcast), as well as Rand Al Thor 19, Microsoft is “trying their hardest to get the game out for this holiday. They really want Starfield out this holiday,” with the game currently said to be “basically sort of finished – it’s in bug squashing mode right now, very much like Halo Infinite, and it would be a big boon for Game Pass and Xbox if both Halo and Starfield could launch this fall.”
That isn't the duo's only claim: "I've been told, by very reliable people, that Starfield was 100% an Xbox exclusive. I've even made bets about it, and I don't bet unless I know I'm willing to bet. So I'm really, really confident that Starfield is only releasing on Xbox when it does."
Ever since the announcement that Microsoft had acquired ZeniMax Media (and by proxy Bethesda), the future of Bethesda titles has been somewhat uncertain, especially regarding those which had already been announced prior to the acquisition (such as Starfield and The Elder Scrolls VI).
If what has been reported is true, then it will be interesting to see how the perception of both PlayStation and Xbox is impacted by this decision – and whether Starfield will in fact manage to release by the end of this year.
KitGuru says: Do you think Starfield will be exclusive to Xbox (and PC)? Will future Bethesda games impact your console purchasing decision? Do you think Starfield will come out this year? Let us know down below.
It has been a full year since Predator: Hunting Grounds first released on PS4 and PC. The Sony-published title has been exclusive to the Epic Games Store on PC since launch, but that changed today, with the game finally landing on Steam.
Predator: Hunting Grounds is an asymmetrical multiplayer shooter, with four players playing as a marine fireteam and the fifth player taking on the role of the Predator. Each round, the marine squad works their way through the map completing objectives and facing smaller PvE enemies all while trying to avoid the Predator and get to the chopper. The Predator is constantly on the hunt, trying to get to the other players before they escape.
As of today, the game is now available on Steam for £34.99 with full crossplay support with other versions. Unfortunately, while cross-platform multiplayer is in place, there is no cross-save functionality, so your progress is tied to one platform.
The game is easy enough to run, with the minimum system requirements calling for an Intel Core i5-6400 or AMD FX-8320 CPU, 8GB of RAM and a GTX 960 or AMD R9 280x graphics card.
KitGuru Says: This game has been on PC for a while already, but we know a lot of people avoid the Epic Games Store and prefer to buy on Steam. Will you be picking this one up now that it has launched on Steam?
There are several big games coming to PC in May, including the likes of Metro Exodus Enhanced Edition, Resident Evil Village and the Mass Effect Legendary Edition remastered trilogy. Nvidia is getting GeForce GPU users prepared ahead of time, with a new driver release today.
The GeForce driver version 466.27 is now rolling out, bringing day-one optimisations for Metro Exodus Enhanced Edition, Resident Evil Village and Mass Effect Legendary Edition. This driver is particularly important for those jumping into the new version of Metro Exodus, as the driver will enable greater performance for all the new ray-tracing effects and DLSS 2.0.
Aside from day-one optimisations for some of next month's most anticipated PC games, this driver also expands the list of G-Sync Compatible gaming monitors. Five more displays have received certification:
AOC 24G2W1G8
AOC AG274US4R6B
ASUS XG349C
Philips 279M1RV
Samsung LC27G50A
The GeForce Game Ready 466.27 WHQL driver is now available to download through the GeForce Experience application.
KitGuru Says: May is shaping up to be a rather big month for PC games. Will you be picking up any of the big releases?
When the soulsborne genre was first introduced in 2009 with Demon's Souls, it was nothing more than a niche interpretation of the action-RPG. Since then, not only has that gameplay style expanded into a genre of its own, but souls-like games are now competing on a commercial front too. Such is the case with Nioh, which has now sold over 5 million copies across the duology.
Making the announcement, Team Ninja revealed that Nioh 1 and 2 have sold a combined 5 million copies worldwide. Nioh 1 first launched in February 2017 for the PlayStation 4, before coming to PC later that same year. This was then followed up by its sequel just over 3 years later in March 2020. Once again, the PC release came later, launching in February of this year.
From the sales breakdown provided, which puts Nioh 1 at 3 million copies sold and its sequel at 2 million, the franchise is on the up and up, with the sequel selling faster than the first entry and cementing the series as a viable franchise despite being exclusive to PlayStation consoles (and PC).
Despite the sales success, Team Ninja previously announced that they have no plans for Nioh 3 at this time. It will be interesting to see whether this newly achieved milestone affects their decision.
Both Nioh and its sequel were consistently well received by critics and fans alike, and the duology currently stands as the only real souls-like series that fans place on a similar footing to FromSoftware's own output. It will be interesting to see what Team Ninja has planned next.
KitGuru says: What do you think of Nioh? Do you play souls-like games? Which is your favourite? Let us know down below.