The University of Minnesota isn’t making any friends in the Linux community. Phoronix reported that Greg Kroah-Hartman, the Fellow at the Linux Foundation responsible for stable releases of the Linux kernel, has banned the University from contributing to that kernel after two students purposely added faulty code to it.
The students in question published a research paper titled “On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits” on February 10. Those so-called “hypocrite commits” were defined as “seemingly beneficial commits that in fact introduce other critical issues.”
Although the paper was ostensibly focused on open source software generally, the students devoted much of their attention to the Linux kernel specifically because it’s so popular. The kernel is practically ubiquitous—it’s found in everything from single-board computers like the Raspberry Pi to the most powerful supercomputers.
All of which is to say that Linux is vital. It would make sense, then, for the person responsible for the Linux kernel’s stable release branch to be upset about those students’ efforts to undermine that project. Kroah-Hartman made his stance on the issue clear in a message posted to the Linux kernel mailing list earlier today:
“Our community does not appreciate being experimented on, and being ‘tested’ by submitting known patches that are either do nothing [sic] on purpose, or introduce bugs on purpose. If you wish to do work like this, I suggest you find a different community to run your experiments on, you are not welcome here.”
Kroah-Hartman also said that he “will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems.” It seems that research conducted by two students will now affect the entire University of Minnesota.
That actually includes five schools spread across Crookston, Duluth, Morris, Rochester, and Twin Cities. We’ve reached out to the overarching University of Minnesota as well as Kroah-Hartman to learn more about the full extent of the ban and will update this post if either responds to our request for more information.
The FBI says it used facial recognition technology to track down and arrest an individual suspected of taking part in the US Capitol riots earlier this year. The case, which was first reported by The Huffington Post, is notable for the FBI’s acknowledgement that it used facial recognition not just to confirm a suspect’s identity, but to discover it in the first place.
According to an affidavit shared online by the Huffington Post, federal agents tracked down an individual named Stephen Chase Randolph using crowdsourced images from the riots (including those shared on Twitter by a group known as SeditionHunters). They searched these pictures on the web using “an open source facial recognition tool, known to provide reliable results,” and this led to a public Instagram page apparently belonging to Randolph’s girlfriend which contained “numerous images” of the suspect.
Pictures on the account showed Randolph wearing the same items of clothing as in stills captured at the Capitol. These included a grey knitted hat with the Carhartt logo embroidered in white on the front. This hat was key for tracking Randolph’s activities across multiple videos and images, leading to SeditionHunters dubbing him #GrayCarharttHat.
The FBI affidavit says Randolph was seen assaulting multiple US Capitol Police officers. “In the process of pushing the barricades to the ground, the SUBJECT and others knocked over a USCP Officer […] causing [her] head to hit the stairs behind her, resulting in loss of consciousness,” says the report. “The SUBJECT then continued to assault two other USCP officers by physically pushing, shoving, grabbing, and generally resisting the officers and interfering with their official duties of protecting the closed and restricted US Capitol grounds.”
After finding Randolph’s girlfriend’s Instagram account, federal agents found Facebook accounts apparently belonging to his family members, revealing Randolph’s full name. They then cross-referenced his identity with state driving license records and surveilled him at his home and workplace, where he was spotted still wearing the same Carhartt hat.
To confirm Randolph’s participation in the riots, two undercover FBI agents approached him at his work on April 13. They struck up a conversation with Randolph in which he admitted to attending the riots, saying “I was in it” and “It was fucking fun.” Randolph also said he’d seen a female police officer being pushed over by the barricades, and suggested that the officer had suffered a concussion because she’d curled up into the fetal position. Randolph was arrested in Kentucky a week after this interview took place, says The Huffington Post.
The case shows how the FBI is using facial recognition and crowdsourced images to track down those who participated in the January 6 riots. Such tools are not always necessary to find suspects, but their use appears to be becoming increasingly common. A report from BuzzFeed News earlier this month revealed that some 1,803 publicly funded agencies, including local and state police, tested controversial facial recognition service Clearview AI prior to February 2020. The technology has only become more widely known since.
In the 1980s the home computer market was dominated by the likes of Atari, IBM, Tandy and Commodore, and one of the most popular machines was the Amiga. The most popular model in the range was the Amiga 500, and with Claude Schwarz’s Raspberry Pi-powered Amiga PCB, known as PiStorm, you can seriously boost the power of this humble home computer.
PiStorm is an adaptor board that plugs into the socket for the Amiga’s 7.16 MHz Motorola 68000 CPU, upgrading the A500+ to a 68EC030 running at 70 – 80 MHz. At the time that would’ve been a dream setup, requiring a much more expensive machine, the A4000, and a huge injection of cash for third-party accessories.
PiStorm is designed to use a Raspberry Pi 3 A+, which functions in place of the CPU, and according to Schwarz there are plans in the works for a CM4 edition, which could bring an even greater increase in performance. Other features provided by PiStorm include support for large amounts of RAM (up to 128MB), virtual SCSI devices, swappable Kickstart ROM images (the BIOS of an Amiga) and accelerated high-resolution graphics via the Raspberry Pi’s HDMI port.
The board was confirmed to work with an Amiga 500+. A USB hard drive can be used for additional storage, such as hard disk image files. An 8GB microSD card is needed for PiStorm and its dependencies.
The current plan is to offer pre-made boards for sale, but the project is open source (the best Raspberry Pi projects usually are), meaning you can download the PCB files for free to use and modify on your own. PiStorm is a direct competitor to Apollo Accelerators, which makes its own hardware acceleration solutions.
Visit the project page on GitHub for more details and installation instructions. You can also find more pictures in the original thread on Twitter.
There’s no denying that Dogecoin is a meme. It’s also proven to be quite valuable to those who decided to buy in, with Coindesk today reporting that returns on the coin have risen 6,000% this year — and over 450% just in the past week — despite the fact that it was specifically created as a joke.
Dogecoin was created in 2013 as “an open source peer-to-peer digital currency, favored by Shiba Inus worldwide,” as its official website proclaims. It also offers a helpful conversion tool that explains 1 Dogecoin = 1 Dogecoin. Mind, blown.
But if there are three safe-for-work things people love on the internet, they’re dogs, memes and cryptocurrencies. Redditors are particularly fond of Dogecoin, and they often gift the financially viable meme to people whose posts they’ve enjoyed.
All of which makes Dogecoin a low-priced cryptocurrency (more on that in a moment) that’s also popular on one of the world’s most-visited social platforms. No wonder CoinGecko puts it as the fifth most-traded coin on popular exchanges.
Let’s be clear: Nobody’s getting rich by owning a few Dogecoin. Coindesk’s data puts the coin’s price at $0.005405 on January 1; it peaked at $0.434727 this morning. That means $1 is worth the same as roughly 2.3 Dogecoins at its highest price to date.
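The conversion is simple arithmetic. A quick sanity check of the figures above (using the Coindesk prices quoted, not live data):

```python
jan_price = 0.005405   # Coindesk's January 1 price, USD per DOGE
peak_price = 0.434727  # this morning's peak, USD per DOGE

# How many Dogecoins one dollar buys at the peak price
doge_per_dollar = 1 / peak_price
print(round(doge_per_dollar, 1))  # roughly 2.3

# How many times over the price has multiplied since January 1, at the peak
multiple = peak_price / jan_price
print(round(multiple))  # roughly 80x
```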
Even pennies add up over time, however, and at time of writing CoinGecko puts Dogecoin’s market cap at nearly $45 billion. Newsweek also reported today that Dogecoin made a man from Los Angeles a millionaire.
So is Dogecoin even close to Bitcoin in terms of market cap or value? No. Bitcoin’s market cap is over $1 trillion, and it’s currently priced at around $61,500 per coin, according to CoinMarketCap. No other cryptocurrency even comes close on either metric.
But there’s another key difference: Dogecoin is a meme; Bitcoin is supposed to be the future of the global economy. (At least according to those who stand most to profit from it becoming as such.) The fact that anyone’s even talking about Dogecoin eight years after its introduction is both a miracle and a bit of a meme unto itself.
Dogecoin’s ascendance could also have a similar—but obviously much smaller—effect on the cryptocurrency market as Bitcoin’s. Rapid increases to one coin’s value often result in, or are at least accompanied by, price bumps for other coins as well.
That could hold especially true for other “Memecoins” that were created more as performance art than actual currency. Or maybe Dogecoin is the only one that will ever be worth anything. We’re talking about the economics of an eight-year-old meme coin with a dog’s face on it; does any of this seem predictable?
Google is going it alone with its proposed advertising technology to replace third-party cookies. Every major browser that uses the open source Chromium project has declined to use it, and it’s unclear what that will mean for the future of advertising on the web.
A couple of weeks ago, Google announced it was beginning to test a new ad technology inside Google Chrome called the Federated Learning of Cohorts, or FLoC. It uses an algorithm to look at your browser history and place you in a group of people with similar browsing histories so that advertisers can target you. It’s more private than cookies, but it’s also complicated and has some potential privacy implications of its own if it’s not implemented right.
Google Chrome is built on an open source project, and so FLoC was implemented as part of that project, meaning other browsers could include it. I am not aware of any Chromium-based browser outside of Google’s own that will implement it, and very aware of many that will refuse.
One note I’ll drop here is that I am relieved that nobody else is implementing FLoC right away, because the way FLoC is constructed puts a very big responsibility on a browser maker. If implemented badly, FLoC could leak out sensitive information. It’s a complicated technology that does appear to keep you semi-anonymous, but there are enough details to hide dozens of devils.
Anyway, here’s Brave: “The worst aspect of FLoC is that it materially harms user privacy, under the guise of being privacy-friendly.” And here’s Vivaldi: “We will not support the FLoC API and plan to disable it, no matter how it is implemented. It does not protect privacy and it certainly is not beneficial to users, to unwittingly give away their privacy for the financial gain of Google.”
We’ve reached out to Opera for comment as well. DuckDuckGo isn’t a browser, but it’s already made a browser extension to block it. And the Electronic Frontier Foundation, which is very much against FLoC, has even made a website to let you know if you’re one of the few Chrome users who have been included in Google’s early tests.
But maybe the most important Chromium-based browser not made by Google is Microsoft Edge. It is a big test for Google’s proposed FLoC technology: if Microsoft isn’t going to support it, that would pretty much mean Chrome really will be going it alone with this technology.
In the grand tradition of Congressional tech hearings, I asked Microsoft a yes or no question: does it intend to implement FLoC in Edge? And in the same grand tradition, Microsoft answered:
We believe in a future where the web can provide people with privacy, transparency and control while also supporting responsible business models to create a vibrant, open and diverse ecosystem. Like Google, we support solutions that give users clear consent, and do not bypass consumer choice. That’s also why we do not support solutions that leverage non-consented user identity signals, such as fingerprinting. The industry is on a journey and there will be browser-based proposals that do not need individual user ids and ID-based proposals that are based on consent and first party relationships. We will continue to explore these approaches with the community. Recently, for example, we were pleased to introduce one possible approach, as described in our PARAKEET proposal. This proposal is not the final iteration but is an evolving document.
That is a LOT to unpack, but it sounds very much like a “no” to me. However, it’s a “no” with some important context. But before I get too deep into it, let’s talk about a couple of non-Chromium browsers — because one important piece of all of this is that Google’s FLoC technology is still a proposal. Google is saying it would like to make it a fundamental part of the web, not simply a new feature in its browser.
Here’s a statement that a Mozilla spokesperson provided to us on the plans for Firefox:
We are currently evaluating many of the privacy preserving advertising proposals, including those put forward by Google, but have no current plans to implement any of them at this time.
We don’t buy into the assumption that the industry needs billions of data points about people, that are collected and shared without their understanding, to serve relevant advertising. That is why we’ve implemented Enhanced Tracking Protection by default to block more than ten billion trackers a day, and continue to innovate on new ways to protect people who use Firefox.
Advertising and privacy can co-exist. And the advertising industry can operate differently than it has in past years. We look forward to playing a role in finding solutions that build a better web.
As for Apple’s Safari, I will admit I didn’t reach out for comment because at this point it’s not difficult to guess what the answer will be. Apple, after all, deserves some credit for changing everybody’s default views on privacy. However, the story here is actually much more interesting than you might guess at first. John Wilander is a WebKit engineer at Apple who works on Safari’s privacy-enhancing Intelligent Tracking Prevention features. He was asked on Twitter whether or not Safari would implement FLoC and here’s his reply:
We have not said we will implement and we have our tracking prevention policy. That’s it for the time being. Serious standards proposals deserve thinking and I appreciate Brave sharing theirs.
— John Wilander (@johnwilander) April 12, 2021
Wilander’s reply jibes with Microsoft’s statement that “the industry is on a journey” when it comes to balancing new advertising technologies and privacy. But it speaks to something really important: web standards people take their jobs seriously and are seriously committed to the web standards process that creates the open web.
I often make light of that process as being slow, contentious, and frustrating. It is all those things. But it’s also the last line of defense against the complete and total fracturing of the web into pages that are only compatible with specific web browsers. That isn’t the web at all.
And so what you’d expect to be a hard “no” from Apple (and what will almost surely be a hard “no” in the end) instead becomes a commitment to the web standards process and taking Google’s proposals seriously. Ditto from Microsoft.
All of this is happening because every major browser already has or will soon block third-party cookies, the default way of identifying you and tracking you across the web. And every major browser has committed to ensuring that you can’t be personally identifiable to third-party advertisers. Even Google’s own ad team has said as much.
The end of those cookies is called the Cookiepocalypse, and it’s apocalyptic because nobody really knows what advertisers will do once those tracking methods are raptured. And so right now, major browser vendors are proposing different, new solutions.
Apple, Google, and Microsoft all have ideas for how advertising on the web should work. We’ve discussed Google’s FLoC at length, but you might be surprised to hear that Apple isn’t just trying to stop all ads; it has privacy-enhancing ad proposals of its own. And that random reference to PARAKEET in Microsoft’s statement? Another ad proposal.
The problem here is that the Cookiepocalypse is already nigh. Many browsers are already blocking third-party cookies. Google Chrome is the big holdout on blocking third-party cookies, but it’s also the browser with the biggest market share.
Google has committed to cutting off third-party cookies in 2022, but it seems very unlikely that the web standards process will get to an answer by then. In fact, one of Google’s other proposals isn’t going to begin testing until late this year — far too late to be implemented by the ad industry if Google sticks to its original promise. Who knows what advertisers will do then?
The technology here is complicated, the process is slow, and the outcome is unclear. That’s par for the course for the web. Normally I’d tell you not to worry about it and just let the W3C run its course. But the stakes are very high: your privacy, vast pools of money, and the interoperable nature of the web itself could all go up in a puff of smoke if these browser makers don’t figure out a way to thread all these needles. Cookiepocalypse, indeed.
The Neotron dev team, consisting of makers Jonathan Pallant and Kaspar Emanuel, have created a custom PCB to carry our new favorite microcontroller—the Raspberry Pi Pico. The system is designed to resemble a retro-style computer you might find in the ’80s, albeit with a micro-ATX form factor.
The best Raspberry Pi projects don’t just draw inspiration from others; they add value and use the board to its fullest potential. The Neotron Pico is based on the team’s existing project, the Neotron 32, another Arm-based retro-style system running the same OS, but the Pico adds a new dimension with room for expansion and a cheaper price point.
The PCB was designed using KiCad, a free and open source electronics design application. In the render we can see the Raspberry Pi Pico at the rear of the board, along with ports for PS/2 peripherals, sound and video, and a DC barrel jack for power. An unpopulated SD card reader is also present to the right of the VGA connector; if the tracks exist on the board, adding a card reader should be relatively simple.
According to the project documentation, the board is able to output 12-bit Super VGA video using PIO state machines on the Pico. An SPI-to-GPIO expander is used to offer a total of eight IRQs and SPI chip-selects, so users can connect up to eight peripherals via the expansion slots.
Software-wise the board runs Neotron OS. This OS was written in Rust and is very similar to MS-DOS. You can read more about the PCB in detail and explore the code used in this project on the official Neotron Pico GitHub page.
Smile and say cheese! This Raspberry Pi project, known as RuhaCam, is ready to capture all of life’s finest moments. It was created by Ruha Cheng and Penk Chen who decided to develop an open-source approach to handheld digital cameras.
The camera is housed inside of a 3D-printed shell designed with a retro, handheld shape. Inside is a Raspberry Pi Zero and a Raspberry Pi HQ camera module for high quality results when capturing photos.
The best Raspberry Pi projects are the ones you can recreate yourself and this one is totally open source. The 3D printer files and code used are available to anyone on GitHub. There you’ll find a complete list of parts which includes additional components like a 10MP 16mm Telephoto Lens and a 2.2-inch TFT display to serve as a viewfinder.
The unit is portable, as well, featuring a 2000mAh Li-Po battery that can be recharged with the help of a TP4056 micro USB battery charging module. It also includes a power switch for safely shutting down the Pi inside. To read more about this project in detail, visit the official RuhaCam project website.
It’s been two years since Sony launched 360 Reality Audio, a format that uses Sony’s object-based spatial audio technology to deliver 360-degree sound. In that time, 360 Reality Audio has become available on Deezer, Tidal, Amazon Music HD and nugs.net, with subscribers to those services able to listen to tracks in all their immersive glory.
Compatible products include Sony’s dedicated SRS-RA5000 and SRS-RA3000 wireless speakers, Amazon’s Echo Studio smart speaker and certain Sony headphones (such as the WH-1000XM4) via the firm’s dedicated Headphone Connect app.
Now, however, it seems as though Sony has plans to widen compatibility for the format to more Android devices. In the Android Open Source Project, code reviews and comments from both Sony and Google have directly mentioned Sony 360 Reality Audio, as spotted by XDA Developers. The report highlights a comment by Sony software engineer Kei Murayama: “This is one of the patches mentioned in the meeting ‘Android OS 360RA support’ between Google and Sony.”
That suggests a collaboration is in the works to bring Sony’s custom decoder for the format (which is built on the open MPEG-H 3D Audio standard) to the wider Android world, presumably allowing app providers to easily offer 3D audio playback.
While 360 Reality Audio can support up to 64 channels of audio, the code mentions support for a 13-channel audio layout “which uses surround 5 channels, top 5 channels and bottom 3 channels”. It also states that a “Virtualizer can place individual sounds in a 360 spherical sound field from these channels on any headphones” – so it looks like the technology may be able to work in a virtualized capacity with any set of headphones or, indeed, speakers.
With that Sony and Google joint venture in mind, plus the fact Sony recently announced video streaming capabilities for 360 Reality Audio and 360 Reality Audio Creative Suite content creation software, we could well be seeing – and hearing – more of Sony’s immersive audio efforts in the near future. Look out, Dolby Atmos Music.
MORE:
Here’s everything you need to know about Sony 360 Reality Audio
And its rival, Dolby Atmos Music
Best free music apps: free music on Android and iPhone
Crowbits’ progressive STEM kits teach future engineers (ages 6-10 and up) the basics of electronics and programming, but nondurable paper elements and poorly translated documentation could lead to frustration and incomplete projects.
For
+ 80+ Lego-compatible electronic modules and sensors
+ Helpful programming software
+ Progressive learning kits
+ Examples are very helpful
+ Engaging projects for pre-teen and teen engineers
Against
– Inadequate and inaccurate project tutorials
– Cable modules are stiff and pop off easily
– Cardboard projects are flimsy and cumbersome
– Labels are hard to read
They say that the best method of teaching is to start with the basics. This is true for most subjects, but even more so for getting kids involved and interested in learning about electronics and programming. This is exactly Elecrow Crowbits’ approach to launching young inventors and creators into the world of technology.
Available via Kickstarter, the STEM kit series starts with building simple projects that make use of basic electronic concepts, then steps up kids’ skills by introducing projects that require some coding, and graduates to more advanced application development. The Crowbits lineup consists of five interactive STEM-based packages, each appropriately themed with projects that cater to kids from ages 6-10 and up. These are the Hello Kit, Explorer Kit, Inventor Kit, Creator Kit and Master Kit.
With the variety of engineering kits on the market today, Crowbits’ pricing falls in the mid-range category. Ranging from $26 to $90, depending on which kit you prefer, it is money well spent. One of the key values that Crowbits brings is its focus on teaching kids the basics of electronics through these programmable blocks and sensors, and it ties that learning to current practical uses, like turning the lights on or off. This simple circuit logic is used to program small home appliances like coffee machines, automatic dispensers or even smart home security systems.
Much like the company’s previous Kickstarter project, the CrowPi2, a Raspberry Pi-powered laptop which we reviewed last year, Crowbits also presented issues with documentation. Makers and creators know that clear and concise directions are very important for any project build. Unclear and inadequate instructions cause users, especially beginners, to feel that they may have done something wrong. They may be able to troubleshoot some issues themselves, but if problems are left unresolved, an air of defeat and frustration ensues.
Crowbits Setup
Setup for Crowbits starts with choosing which components to use depending on the project the child wants to try. The modules are designed to be plug-and-play so young makers can use them to build structures and experiment right away. Modules are also compatible with the entire series of learning kits, so if you purchased more than one, you can use them interchangeably.
If you want to try building from the suggested projects, of which there are plenty to choose from, note that they become more challenging as you move up in the series and may include some coding and firmware downloads.
How Crowbits Work
Every kit consists of a number of modules. Each module has magnetic pogo-pins on all sides that help connect them easily. Another way of connecting modules is with the magnetic cables. The back of each module has Lego holes for seamless integration of Lego bricks into any structure.
There are four different types of modules, easily identified by color: blue for power/logic, yellow for input, green for output and orange for special modules. It’s important to keep a few rules in mind when creating a circuit sequence. A circuit needs at least one power, one input and one output module, with the proper sequence placing the input block before the output.
There can be multiple input and output blocks in a sequence, where each output is controlled by the nearest input block. Lastly, the names on the modules must face up to ensure the correct pins are being used.
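Those sequencing rules are simple enough to express in a few lines of code. As an illustration, here is my own simplified model of the rules described above (not anything from Elecrow’s firmware):

```python
def valid_chain(modules):
    """Check a chain of Crowbits module types against the basic rules:
    at least one power, one input and one output block, with the first
    input appearing before the first output."""
    if "power" not in modules:
        return False
    if "input" not in modules or "output" not in modules:
        return False
    return modules.index("input") < modules.index("output")

print(valid_chain(["power", "input", "output"]))  # True
print(valid_chain(["power", "output", "input"]))  # False: output comes first
```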
Crowbits Module and Sensor Breakdown
There are five types of modules and sensors for Crowbits, spread across four colors, and each function is distinguished by its color:
Power Modules (Blue) – the power source, a core module required for every project build. A green light indicates when the power is on. Use the included micro-USB cable to recharge the power supply when needed.
Logic Modules (Blue) – for basic operations. Includes: 315 MHz Controller, Expansion, etc.
Input Modules (Yellow) – accepts input data like touch, vibration or object detection and passes it to the output modules. Includes: Touch module, IR reflective sensor, light sensor, etc.
Output Modules (Green) – receive commands from an input module and execute the resulting action. Examples include the buzzer module (makes a sound), the LED module (lights up) and the vibration module.
Special Modules (Orange) – used for advanced programming tasks. Examples are: I2C or UART
Crowbits Software and Hardware
Programming Languages Supported: Letscode (Elecrow’s visual programming software based on Scratch 3.0), which supports Python and Arduino IDE.
Open Source Hardware Compatibility: ESP32 TFT, Micro:bit board, Arduino UNO and Raspberry Pi (TBA).
OS Supported: Windows and Mac
Crowbits Learning Kits Use Cases
Hello Kit and Explorer Kit
The Hello Kit and Explorer Kit are learning tools for beginners and target children ages 6-8 and up. They introduce the concept of modules and their functionality. No coding is required for any of the suggested experiments and projects here. Building the projects with cardboard elements proved to be difficult for my seven-year-old, and she got easily frustrated trying to use the thin double-sided tape that came with the kit.
Once the structures were built (with my help) she did enjoy putting the modules together and making things happen like sounding the buzzer on the anti-touching device or making the lights turn on her window display project. Another annoyance to note was when using the cable module that serves to connect modules together. The cable is quite thick and not flexible so it had the tendency to pop off and break the connection for multiple projects.
I would have to say that my daughter was most engaged with the Explorer Kit, perhaps because the projects had more integration with Lego blocks, and some projects were also very interactive like the Quadruped Robot and the Lift, which were her favorites. She enjoyed building the structures and seeing the creations come to life, especially when there was movement, sounds and lights.
Inventor Kit and Creator Kit
The Inventor Kit and Creator Kit are the intermediate learning tools of the Crowbits series and target children ages 10 and up. The Inventor Kit includes more advanced projects that incorporate the Micro:bit board in the builds. This requires some coding and the use of Letscode, Elecrow’s Scratch-based drag-and-drop visual programming software.
The software seemed a bit buggy (mainly in steps like downloading custom code) and there were inaccuracies in the project documentation that led to a lot of troubleshooting on our part. Hopefully, by the time Crowbits is ready for release in June, these kinks will have been resolved.
It is worth noting, though, that the projects suggested for the Inventor Kit seem to be age-appropriate. My tween worked on the Horizontal Bar and the Ultrasonic Guitar projects. She thoroughly enjoyed the experience and had no issues following the diagrams when building the Lego structures. There was a little hiccup in using the software, as I mentioned earlier, which left us wanting troubleshooting tips and clearer documentation.
Unfortunately, we were not able to try out the Creator Kit as it was not available when we received our evaluation samples. We may update this review when we receive the Kit after its June release.
Master Kit
The Master Kit definitely is the most challenging of the engineering kits in the Crowbits lineup, with the task of programming hardware and software to build real-life products like a mobile phone, a game console and a radar. I’ll set aside my comments for this kit as I was unsuccessful in trying to make the phone and console work due to a corrupted SD card.
Additionally, we had intermittent issues while uploading firmware. It is unfortunate because I was looking forward to this kit the most, but perhaps I can re-visit the Master Kit and post an update at a later time.
The one successful project build out of this kit, the radar, honestly left us scratching our heads. The expected results were not seen as we tried placing a variety of objects in the vicinity of the rotating radar dish, and none of them seemed to be detected.
Crowbits Learning Kits Specs and Pricing
Hello Kit: 7 modules, 5 cardboard projects, ages 6+, $26
Explorer Kit: 13 modules, 12 projects, ages 8+, $70
Inventor Kit: 10 modules, 12 Lego and graphic programming projects plus a Letscode introduction, ages 10+, $80
Creator Kit: modules and projects TBD, ages 10+, $90
Master Kit: modules and projects TBD, ages 10+, $90
Crowbits Available Bundles and Special Pricing
| Bundle | Kits Included | Price |
| --- | --- | --- |
| Bundle #1 | Explorer Kit, Creator Kit, Master Kit | $239 |
| Bundle #2 | Explorer Kit, Inventor Kit, Master Kit | $249 |
| Bundle #3 | Hello Kit, Explorer Kit, Inventor Kit, Creator Kit, Master Kit | $354 |
Bottom Line
Despite its kinks, the Crowbits STEM series appears to be another great educational tool from Elecrow, with an emphasis on teaching kids electrical engineering. Whether it’s building simple circuit projects or coding more complex applications for everyday use, the Crowbits series provides a complete learning platform for kids ages 6 and up.
With its average pricing and the flexibility to pick and choose which kit to purchase, it is an attractive choice for someone looking to buy an educational STEM kit for their child or loved one. Of course, you can also buy the entire set as a bundle and enjoy helping your child build models and program as you work through the different stages of electronic learning, from basic to advanced concepts. It’s also worth noting that the Letscode software that comes with the packages is free and supports Python and Arduino programming, which is a welcome bonus.
According to Moore’s Law Is Dead, Intel’s successor to the DG1, the DG2, could be arriving sometime later this year with significantly more firepower than Intel’s current DG1 graphics card. Of course it will be faster — that much is a given — but the latest rumors have it that the DG2 could perform similarly to an RTX 3070 from Nvidia. Could it end up as one of the best graphics cards? Never say never, but yeah, big scoops of salt are in order. Let’s get to the details.
Supposedly, this new Xe graphics card will be built using TSMC’s N6 6nm node, and will be manufactured purely on TSMC silicon. This isn’t surprising as Intel is planning to use TSMC silicon in some of its Meteor Lake CPUs in the future. But we do wonder if a DG2 successor based on Intel silicon could arrive later down the road.
According to MLID and previous leaks, Intel’s DG2 is specced out to have up to 512 execution units (EUs), each with the equivalent of eight shader cores. The latest rumor is that it will clock at up to 2.2GHz, a significant upgrade over current Xe LP, likely helped by the use of TSMC’s N6 process. It will also have a proper VRAM configuration with 16GB of GDDR6 over a 256-bit bus. (DG1 uses LPDDR4 for comparison.)
Earlier rumors suggested power use of 225W–250W, but now the estimated power consumption is around 275W. That puts the GPU somewhere between the RTX 3080 (320W) and RTX 3070 (250W), but with RTX 3070 levels of performance. But again, lots of grains of salt should be applied, as none of this information has been confirmed by Intel. TSMC N6 uses the same design rules as the N7 node, but with some EUV layers, which should reduce power requirements. Then again, we’re looking at a completely different chip architecture.
Regardless, Moore’s Law Is Dead quotes one of its ‘sources’ as saying the DG2 will perform like an RTX 3070 Ti. This is quite strange since the RTX 3070 Ti isn’t even an official SKU from Nvidia (at least not right now). Put more simply, this means the DG2 should be slightly faster than an RTX 3070. Maybe.
That’s not entirely out of the question, either. Assuming the 512 EUs and 2.2GHz figures end up being correct, that would yield a theoretical 18 TFLOPS of FP32 performance. That’s a bit less than the 3070, but the Ampere GPUs share resources between the FP32 and INT32 pipelines, meaning the actual throughput of an RTX 3070 tends to be lower than the pure TFLOPS figure would suggest. Alternatively, 18 TFLOPS lands halfway between AMD’s RX 6800 and RX 6800 XT, which again would match up quite reasonably with a hypothetical RTX 3070 Ti.
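As a sanity check, the 18 TFLOPS figure follows directly from the rumored specs. A minimal sketch of the arithmetic, assuming 8 FP32 ALUs per EU and 2 operations per clock (one fused multiply-add per ALU) — standard conventions for this kind of estimate, but unconfirmed for DG2:

```python
def theoretical_tflops(eus, alus_per_eu, clock_ghz, ops_per_clock=2):
    """Peak FP32 throughput in TFLOPS: ALUs * clock * ops per clock."""
    return eus * alus_per_eu * clock_ghz * ops_per_clock / 1000

# Rumored DG2 configuration: 512 EUs at 2.2GHz.
print(round(theoretical_tflops(512, 8, 2.2), 1))  # ~18.0
```

The same formula applied to the RTX 3070 (5888 shaders at ~1.73GHz boost) gives roughly 20 TFLOPS, which is why the rumored DG2 lands "a bit less" on paper.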
There are plenty of other rumors and ‘leaks’ in the video as well. For example, at one point MLID discusses a potential DLSS alternative called, not-so-creatively, XeSS — and the Internet echo chamber has already begun to propagate that name around. Our take: Intel doesn’t need a DLSS alternative. Assuming AMD can get FidelityFX Super Resolution (FSR) to work well, it’s open source and GPU vendor agnostic, meaning it should work just fine with Intel and Nvidia GPUs as well as AMD’s offerings. We’d go so far as to say Intel should put its support behind FSR, just because an open standard that developers can support and that works on all GPUs is ultimately better than a proprietary standard. Plus, there’s not a snowball’s chance in hell that Intel can do XeSS as a proprietary feature and then get widespread developer support for it.
Other rumors are more believable. The encoding performance of DG1 is already impressive, building off Intel’s existing QuickSync technology, and DG2 could up the ante significantly. That’s less of a requirement for gaming use, but it would certainly enable live streaming of content without significantly impacting frame rates. Dedicated AV1 encoding would also prove useful.
The DG2 should hopefully be available to consumers by Q4 of 2021, but with the current shortages plaguing chip fabs, it’s anyone’s guess as to when these cards will actually launch. Prosumer and professional variants of the DG2 are rumored to ship in 2022.
We don’t know the pricing of this 512EU SKU, but there is a 128EU model planned down the road, with an estimated price of around $200. More importantly, we don’t know how the DG2 or its variants will actually perform. Theoretical TFLOPS doesn’t always match up to real-world performance, and architecture, cache, and above all drivers play a critical role for gaming performance. We’ve encountered issues testing Intel’s Xe LP equipped Tiger Lake CPUs with some recent games, for example, and Xe HPG would presumably build off the same driver set.
Again, this info is very much unconfirmed rumors, and things are bound to change by the time DG2 actually launches. But if this data is even close to true, Intel’s first proper dip into the dedicated GPU market (DG1 doesn’t really count) in over 10 years could make them decently competitive with Ampere’s mid-range and high-end offerings, and by that token they’d also compete with AMD’s RDNA2 GPUs.
Doom 3 has a long history with VR, so it’s a little surprising that this week’s PlayStation VR release is the first version ever to go on sale. Almost nine years ago, we visited Id Software luminary John Carmack to see his duct-taped prototype “Oculus Rift” headset for the first time, and Doom 3 was the game he used to show it off.
Carmack later moved to Oculus, of course, and Id never put out an official Doom 3 VR release until now. That said, it’s not like you haven’t been able to play Doom 3 in VR before — the game is open source, and there are various third-party mods available, including a recent port to the Oculus Quest. So why is Doom 3: VR Edition only landing now as a PS4 exclusive?
After playing it for a while, I think I have the answer: the PSVR Aim Controller makes for a pretty good shotgun.
For better or worse, Doom 3: VR Edition is a straightforward port of Doom 3, including its expansions. That means there’s a lot more content here than most VR shooters, but it also means you spend a fair bit of time watching 2D cutscenes rendered in the kind of quality you’d expect from a game that came out in 2004. PSVR games aren’t known for their visual pyrotechnics, though, and Doom 3’s limited technology at least means it can run at a reasonable frame rate and resolution on a PS4 Pro.
Doom 3 itself is a somewhat divisive game that Id struggled to follow up for a long time before reviving the series with an excellent reboot and last year’s sequel Doom Eternal. Id decided to focus on atmosphere and horror with Doom 3, leaning into the series’s moodier moments rather than rampant monster blasting. You spend much of your time traipsing through corridors, anticipating the next jump-scare where a demon bursts out from nowhere.
In theory, this makes Doom 3 a much better fit for VR than the more recent Doom games, which are based around frenetic combat that’s difficult enough to keep track of on a regular monitor. (Id did adapt the reboot into a separate game called Doom VFR in 2017, but it suffered from awkward controls and didn’t play anything like the original.) VR is often all about atmosphere, and Doom 3 has plenty, even 17 years on.
That said, this is still an old-school PC first-person shooter at its heart. The sprawling levels are designed to be navigated quickly, and they wouldn’t really work with typical solutions for VR locomotion like teleportation. Doom 3: VR Edition forces you to be comfortable with analog stick movement and its affordances designed to ward off nausea, like a vignette that limits field of view and the use of snap turning instead of free motion on the right stick.
I didn’t have issues with nausea myself, though Doom 3: VR Edition is definitely on the intense side of VR games. It does help that the PSVR is generally geared around seated experiences, given that it’s likely to be used in living rooms. You can play with a regular DualShock controller, which works reasonably well, but there’s no option for the Move motion controllers — probably for the better.
Really, though, the only way to play Doom 3: VR Edition is the PlayStation VR Aim Controller. Sony’s abstract gun-styled peripheral doesn’t have a lot of support, but it might as well have been made for this version of Doom 3. It gives you all of the controls you need, including independent movement and aiming ability, and — crucially — it makes you feel like you’re holding a Super Shotgun. The game immediately becomes more immersive and easier to play.
Not all of the weapons work perfectly with the controller. The starting pistol feels a little off with the Aim Controller’s two-handed setup, for example, and the machine gun’s sights don’t quite track with the controller itself. Overall, though, the Aim Controller is great to use, and it’s the only time PSVR ever beats competing VR systems on the control front. If you have one, that’s reason alone to check out this version of Doom 3.
Still, the original Doom 3’s combat wasn’t designed for VR, and it does show. You’ll find yourself needing to use the 180-degree-turn button constantly as enemies move behind you, which really detracts from the otherwise smooth controls. There’s just no escaping the fact that the game was always intended to be played with a mouse and keyboard, for all that this version makes the most of its best control option.
Doom 3: VR Edition is a good port, but it’s also a 2004 PC shooter that’s been adapted to VR, and there’s really only so much you can do with that. If you’re okay with an experience that ignores most of what we’ve learned about modern VR game design, and you have an Aim Controller, it’s a reasonable way to spend a few demon-destroying hours. But there are better VR shooters — and there are better ways to play Doom 3.
Doom 3: VR Edition is out now on the PlayStation 4. A PlayStation VR headset is required.
Motorola has added support for two new indigenous languages spoken in the Amazon as part of a larger effort to make technology more accessible. Beginning today, Kaingang and Nheengatu will be among the language options available on Motorola Android devices. Any Motorola phone updated to Android 11 will be able to access the new language options, not just its most expensive models.
“We believe that this initiative will raise awareness towards language revitalization, not only will impact the communities that we’re working directly with, but right now we’re in the process of open sourcing all that language data from Android into Unicode,” Janine Oliveira, Motorola’s executive director for globalization software, said in an interview with The Verge. “And by doing that we believe that we’re going to pave the way for more endangered indigenous languages to be added, not only on Android, but also on other smartphones.”
The Kaingang language comes from an agricultural community of people in southeastern Brazil, and only about half of the community still speaks it, Motorola found. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has designated Kaingang “definitely endangered.” This means that children no longer learn it as their first language at home.
The Nheengatu community of about 20,000 people lives mostly in the Amazon, but only about 6,000 people in the region still speak that language, so UNESCO considers Nheengatu “severely endangered.” That’s the second-most serious category before a language is considered “extinct.” UNESCO classifies a language as severely endangered if it’s spoken by grandparents and older generations, who may not speak it among themselves or to children.
Both of the indigenous communities rely heavily on mobile technology, even though they may not always have reliable internet access, said Juliana Rebelatto, globalization manager and head linguist at Motorola’s mobile business group. “Teachers use their mobile phones in their classroom to teach their curriculum, so now that the phones will be in Kaingang and Nheengatu this will really help with the learning process,” she said.
It makes sense that Motorola has a focus on Brazil: as of February, it had 21 percent of the market share in the country among smartphone manufacturers, ahead of Apple and second only to Samsung. Rebelatto acknowledged there isn’t necessarily a big return on investment for Motorola by incorporating the indigenous languages into its system; the move isn’t likely to add a huge number of new users for its products.
“We know that for most people it will be just another language in a drop down menu but for the people who speak that language, it’s a big innovation. It is part of the bigger mindset we have about digital inclusion,” she said.
Rebelatto said it was their colleague Robert Melo, Motorola’s internationalization lead, who first realized that there were no Latin American indigenous languages represented in any form of digitalized technology. “We started researching ways that Motorola could change that story,” she said.
The company partnered with the University of Campinas in São Paulo, Brazil, and worked with Professor Wilmar D’Angelis, a researcher in cultural anthropology and indigenous languages. “He has dedicated his life, 40 plus years, into researching languages,” Rebelatto noted, and he proved vital in helping the company narrow down which indigenous languages it would choose.
Motorola’s linguistics team worked with native language speakers of both languages throughout the project, which meant training them on the company’s tools and practices while on a multinational schedule. “We had to ship Lenovo PCs to communities where the mail barely got into,” Oliveira said, all during a pandemic.
But the native speakers were eager to help, Rebelatto added. One of the women who was a translator on the project told them she couldn’t wait for the languages to be available on phones: “She now has all the argument she needs to convince her child to learn their ancestral language, because it will be on the phones they use every day.”
Nheengatu speaker Ozias Yaguarê Yamã Glória de Oliveira Aripunãguá worked with Motorola on the project and emphasized the cultural importance of the language. “You must understand that over time, Nheengatu has been weakening more and more, and today, many times, due to discrimination against the language, people are ashamed to use it,” he said in an email to The Verge.
“But you can’t talk about the Amazon without talking about Nheengatu because the two are linked … it’s part of the essence, it’s the core. The soul of the Amazon is Nheengatu,” he said. Seventy percent of fish names are Nheengatu names, and 50 to 60 percent of the city and river names are Nheengatu names as well, Yaguarê added. “There is no way to talk about one without talking about the other.”
The team plans to open source all of the data it collected as part of the project — hundreds of thousands of UI strings — for anyone to use or to research the Amazon languages, not only on Android, but on other platforms as well. They had to customize a keyboard and are working with Google on the process of including the languages in Gboard.
“We don’t intend to stop here,” said Renata Altenfelder, Motorola executive director for brand. “We are putting this as an open source because we truly believe this should be something for everyone to join.” More endangered languages will be added to the project, she added, they just haven’t decided which ones yet.
Rebelatto added that by digitizing endangered languages, the company hoped it would draw more attention to them and motivate other tech companies to consider similar initiatives. The Motorola project, she added, “will allow technology to have its rightful place in the preservation of not only the language but in their traditions, in their culture and their story.”
(Pocket-lint) – 2020 was the year of the ultra-premium super phone – among other things – with more than one manufacturer offering a big spec monster. These flagships also became far more expensive than previous generations.
For Samsung, that beast was the S20 Ultra. For Huawei, the P40 Pro+ led the lineup. Unfortunate naming perhaps, but one that makes sure we know it’s not just Pro, it’s extra Pro.
With a spec sheet that reads like a tech nerd’s wish list, does Huawei’s all-singing, all-dancing smartphone compete with the best?
Design
S20 Ultra: 166.9 x 76 x 8.8 mm
P40 Pro+: 158.2 x 72.6 x 9 mm
Both IP68 dust/water resistant
S20 Ultra comes in grey and black glass finishes
P40 Pro+ available with white/black ceramic options
The design of a smartphone can often make or break an experience using it, and when building big, spec-heavy behemoths it’s important to make ergonomics a focus. Both Huawei and Samsung take similar approaches in this regard, with both featuring slim metal edges, and glass that curves around the sides. Styling is a little different, but the ethos is the same.
Interestingly, Samsung opted to release only two colours (or non-colours) of the Ultra edition: black and grey. Huawei has a few different coloured glass finishes, including white, black, blue, ‘blush gold’ and ‘silver frost’, as well as ceramic options. The ceramic finish is designed to be shiny but ultra durable. The other finishes are either glossy or matte/frosted glass. So there’s no shortage of colours or textures.
Both have quite large rectangular protrusions on the back where the camera systems are housed, both are also water and dust resistant up to IP68 certification.
With Samsung having the larger display, the phone is noticeably larger than Huawei’s.
Display
S20 Ultra: 6.9-inch AMOLED, QHD+
P40 Pro+: 6.58-inch, QHD+
S20 Ultra: 120Hz refresh
P40 Pro+: 90Hz refresh
If what you want is the biggest display possible, the Samsung is going to be the best option here. The S20 Ultra features a 6.9-inch QuadHD+ resolution panel built using one of the company’s own Dynamic AMOLED panels.
Similarly, Huawei’s phone also has a QuadHD+ resolution screen, but measuring 6.58 inches diagonally, which technically means it will appear slightly sharper because it packs a similar number of pixels into a smaller space.
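That sharpness claim comes down to pixel density (PPI), which you can estimate from the resolution and diagonal. A quick sketch, assuming both panels render the same 3200 x 1440 (QHD+) grid — the S20 Ultra’s native resolution; the exact Huawei figure may differ slightly:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(3200, 1440, 6.9)))   # S20 Ultra: ~509 PPI
print(round(ppi(3200, 1440, 6.58)))  # P40 Pro+: ~533 PPI
```

The smaller panel comes out denser, which is why the same resolution "appears slightly sharper" on the Huawei.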
Both have quite high refresh rates too, with Samsung offering up to 120Hz (as long as you use it in a lower resolution mode) and Huawei offering 90Hz. It should mean they both feel fluid and fast, with no lag in the interface or gaming animations.
Both feature hole-punch cutouts in the display to make space for the front-facing camera, but Samsung’s is a really small single cutout in the centre, while Huawei’s is a dual cutout placed in the left corner.
Both of the phones also have invisible in-display fingerprint sensors, but using different technologies. Huawei uses an optical scanner, which means it uses a camera to take a picture of your fingerprint, while Samsung uses ultrasonic technology which doesn’t need a light to flash, and is technically more accurate since it measures depth.
Cameras
P40 Pro+ has five cameras
S20 Ultra has four
P40 Pro+ offers 10x optical zoom
S20 Ultra has 10x hybrid optical zoom
P40 Pro+ primary sensor is 50MP
Samsung primary is 108MP
Huawei has gone all in on the cameras for the P40 Pro+. The primary camera is 50MP, built on a 1/1.28-inch sensor, making it one of the largest smartphone camera sensors around for better detail, light capture and dynamic range. Samsung’s primary camera is 108MP on a slightly smaller 1/1.33-inch sensor.
Curiously, Huawei has gone with two optical zoom cameras for the P40 Pro+. One is a traditional 8-megapixel 3x optical zoom; the other is an 8-megapixel 10x periscope camera. Samsung has a 48-megapixel periscope zoom too, offering 10x hybrid zoom.
Of course, they both have ultra-wide cameras as well, with Huawei opting for a 40-megapixel sensor and Samsung going with 12 megapixels.
The additional sensor on both phones is a depth-sensing camera. You can’t take pictures with it, but it helps the cameras get a better understanding of depth and distance to produce portrait shots with background blur.
Both manufacturers also have their own versions of post processing and analysing to decide which effects to apply to a particular shot. Whether that’s making skies more blue, or plants more green and so on.
Hardware and performance
Both 5G
Huawei: Kirin 990 processor
Samsung: Exynos 990 or Snapdragon 865
Huawei: 4,200mAh battery w/40W wired or wireless charging
Samsung: 5,000mAh battery w/45W wired and 15W wireless
Both these phones are about as powerful as you can get right now. Huawei uses its own custom processor called the Kirin 990 with built-in 5G capabilities. Similarly, Samsung has either the Exynos 990 or Snapdragon 865. They’re all octa-core processors built on 7nm processes.
What that means for the everyday user is that both phones feel fast and fluid and won’t struggle to launch even the most demanding games and apps.
As for battery size, Samsung clearly has the advantage here with a 5,000mAh capacity compared to Huawei’s 4,200mAh. However, Huawei is known for its efficient battery optimisations in software, so actual battery life should still be very good.
Charging speed is similar over a cabled connection. Samsung can accept 45W to charge up quickly, although it only ships with a 25W adapter. Huawei ships with a 40W charger and can also charge wirelessly at a similar speed. Samsung’s wireless charging is much slower, at 15W.
Conclusion
A big reason to choose one of these phones over the other may end up just being software. Huawei has been forced to try its own route, using the open source version of Android that doesn’t come with Play Store or Google Play Services. That means hoping your most-used apps are on the Huawei AppGallery. While it’s improving every week, not all the most popular apps are on there yet.
From a hardware perspective, Huawei’s cameras seem to offer more, especially with the extra zoom capabilities, but Samsung’s display being noticeably bigger and having a much smaller punch-hole camera means there’s less intrusion.
In the end – although the situation is improving all the time – it’s still difficult to recommend any Huawei phone without Google Play Services, and so Samsung will still give you the most complete experience, even if Huawei’s hardware is fantastic.
Tesla now accepts bitcoin as payment for its cars in the US, CEO Elon Musk announced on Twitter. The option to pay using the cryptocurrency now appears on the company’s US website, where it’s available alongside the traditional card payment option. Musk said that the option to pay with bitcoin will be available to other countries “later this year.”
As well as confirming the availability of the new payment option, Musk offered some details on how Tesla is handling the cryptocurrency. “Tesla is using only internal & open source software & operates Bitcoin nodes directly,” he said in a followup tweet, “Bitcoin paid to Tesla will be retained as Bitcoin, not converted to fiat currency.”
Pay by Bitcoin capability available outside US later this year
— Elon Musk (@elonmusk) March 24, 2021
Tesla lays out how the bitcoin payment process works in an FAQ on its site, where it notes that users will have the option of scanning a QR code or copying and pasting its bitcoin wallet address to initiate the payment. It adds that trying to send any other form of cryptocurrency to its wallet means it “will not receive the transaction and it will likely result in a loss of funds for you.” According to Tesla’s bitcoin payment terms and conditions, its cars will continue to be priced in US dollars, and customers who choose to will pay the equivalent value in bitcoin. Tesla estimates that a $100 deposit paid today equals 0.00183659 BTC, for example.
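For illustration, Tesla’s own example implies the exchange rate it was quoting at the time. A minimal sketch of the arithmetic (the actual checkout uses a live rate, so these numbers are only a snapshot):

```python
# Tesla's published example: a $100 deposit priced at 0.00183659 BTC.
deposit_usd = 100
deposit_btc = 0.00183659

# The implied USD-per-BTC rate behind that quote.
implied_rate = deposit_usd / deposit_btc
print(round(implied_rate))  # ~54449 USD per BTC
```

Because the rate moves constantly, the BTC amount Tesla quotes for the same dollar price will differ from one checkout session to the next.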
Tesla’s terms and conditions also caution that customers need to be careful when inputting both the bitcoin address and amount to be paid. It notes in all caps that “bitcoin transactions cannot be reversed” and “if you input the bitcoin address incorrectly, your bitcoin may be irretrievably lost or destroyed.” Customers are also responsible for directly paying all bitcoin transaction fees associated with their purchase, and Tesla warns that although bitcoin payments typically take less than an hour to complete, this can extend to up to “one day or more.” And because of bitcoin volatility, Tesla warns that the value of any refund “might be significantly less” than the value of bitcoin relative to US dollars at the time of purchase.
Tesla announced its intention to start accepting bitcoin as payment a little over a month ago in its annual 10-K report, when it said it would be adding the option in the “near future.” In the same filing, the company said it had also invested a total of $1.5 billion in the cryptocurrency. The news sent the price of bitcoin up to over $43,000, an all-time high at the time. As of this writing, 1 bitcoin is now worth a little over $56,000.
A Swiss computer hacker named Till Kottmann has been charged by the US government with multiple counts of wire fraud, conspiracy, and identity theft. The indictment accuses Kottmann and co-conspirators of hacking “dozens of companies and government entities,” and posting private data and source code belonging to more than 100 firms online.
The 21-year-old Kottmann, who uses they / them pronouns and is better known as Tillie, was most recently connected to the security breach of US firm Verkada, which exposed footage from more than 150,000 of the company’s surveillance cameras. But the charges filed this week date back to 2019, with Kottmann and associates accused of targeting online code repositories (known as “gits”) belonging to major private and public sector entities, ripping their contents and sharing them to a website they founded and maintained named git.rip.
Git.rip has since been seized by the FBI, but previously shared code and data belonging to numerous companies including Microsoft, Intel, Nissan, Nintendo, Disney, AMD, Qualcomm, Motorola, Adobe, Lenovo, Roblox, and many others (though no firms are explicitly named in the indictment). The exact nature of this data varied in each case. A rip of hundreds of code repositories maintained by German automaker Daimler AG contained the source code for valuable smart car components, for example, while a breach of Nintendo’s systems (which Kottmann said did not originate from them directly but which they reshared through a Telegram channel) offered gamers rare insight into unreleased features from old games.
In interviews about earlier breaches, Kottmann noted repeatedly that the data they found was usually exposed by companies’ own poor security standards. “I often just hunt for interesting GitLab instances, mostly with just simple Google dorks, when I’m bored, and I keep being amazed by how little thought seems to go into the security settings,” Kottmann told ZDNet in May 2020. (“Google dorks” or “Google dorking” refers to the use of advanced search strings to find vulnerabilities on public servers using Google.)
In the case of the Verkada breach, Kottmann and their associates reportedly found “super admin” credentials that gave them unfettered access to the company’s systems that were “publicly exposed on the internet.” These logins allowed the hackers to look through the live feeds of more than 150,000 internet-connected cameras. These cameras were installed in various facilities including prisons, hospitals, warehouses, and Tesla factories.
Kottmann said they were motivated by a hacktivist spirit: wanting to expose the poor security work of corporations before malicious actors could cause greater damage. Kottmann told BleepingComputer last June that they didn’t always contact companies before exposing their data, but that they attempted to prevent direct harm. “I try to do my best to prevent any major things resulting directly from my releases,” they said.
After the Verkada breach, Kottmann told Bloomberg their reasons for hacking were “lots of curiosity, fighting for freedom of information and against intellectual property, a huge dose of anti-capitalism, a hint of anarchism — and it’s also just too much fun not to do it.”
The US government, not surprisingly, takes a dimmer view of these activities. “Stealing credentials and data, and publishing source code and proprietary and sensitive information on the web is not protected speech — it is theft and fraud,” Acting U.S. Attorney Tessa M. Gorman said in a press statement. “These actions can increase vulnerabilities for everyone from large corporations to individual consumers. Wrapping oneself in an allegedly altruistic motive does not remove the criminal stench from such intrusion, theft, and fraud.”
The indictment includes as evidence numerous tweets and messages sent by Kottmann using handles including @deletescape and @antiproprietary. These include a tweet sent on May 17, 2020 saying “i love helping companies open source their code;” messages to an unnamed associate soliciting “access to any confidential info, documents, binaries or source code;” and tweets sent on October 21 in which Kottmann said that “stealing and releasing” corporate data was “the morally correct thing to do.”
Kottmann is currently located in Lucerne, Switzerland, where their premises were recently raided by Swiss authorities and their devices seized. Whether or not they will be extradited to the US is unclear. Bloomberg reports that Kottmann has retained the services of Zurich lawyer Marcel Bosonnet, who previously represented Edward Snowden. The charges against Kottmann carry prison sentences of up to 20 years.