Deliveries of GMMK Pro barebone keyboards will begin in the first quarter of next year, but the ISO layout will not be available until later.
Glorious PC Gaming Race is a peripheral manufacturer that rose to prominence this year with the world's lightest RGB gaming mouse. The company has now expanded its product range to include keyboards.
According to the company, the GMMK Pro is a premium 75% layout keyboard, designed for enthusiasts, gamers and professional computer users alike. It is built around a CNC-machined aluminum frame and is equipped with a modular circuit board. As a special feature, the keyboard's switch plate is gasket-mounted, which is said to dampen key sounds and create a unique typing feel.
The barebone keyboard supports 5-pin key switches, both plate-mounted and mounted directly on the circuit board. The keyboard also has a built-in scroll wheel that adjusts the volume by default but can be reprogrammed. The keyboard supports the open source QMK and VIA firmware as well as the company's own Glorious Core software. By default, the keyboard uses the company's own GOAT stabilizers, but it also supports other screw-in or clip-on stabilizers.
In line with current trends, the obligatory RGB lighting is implemented on the sides of the keyboard, and it naturally also supports RGB-lit switches. The keyboard connects to a computer via a detachable USB Type-C cable.
Pre-orders for the Glorious PC Gaming Race GMMK Pro barebone keyboard open next Wednesday, November 11. Its recommended price is 169.99 dollars before tax, and deliveries are scheduled to begin during the first quarter of next year. A version of the keyboard with an ISO layout is also promised, but its availability is expected to slip into the second quarter of next year.

Source: Glorious PC Gaming Race
With Ceph Day on November 12, the last online day of storage2day 2020 takes place in a week. The central topic is the open source object storage Ceph and its integration into a wide variety of environments. Seven lectures guide participants through the quasi-standard of software-defined storage.
After an introduction to the distributed storage system, two uncomfortable topics come first: Ceph security and myths about object storage. The time around the lunch break belongs to the topic of integration: first, Ceph is to be married to Samba, then to become part of a container environment, in two ways: on the one hand, Ceph itself is to be operated within a container environment, on the other hand, it is to be provided as storage for containers.
Two practical reports fill the afternoon: first, Klaus Steinberger reports on a highly available virtualization cluster with Ceph and open source at LMU Munich. Then the lecture "GitOps for Storage – A Merge Request to the Productive Ceph Cluster" shows how the administrators at HanseMerkur-Versicherung applied the GitOps concept to the deployment of their Ceph cluster, taking on the challenge of marrying this approach with SUSE's Ceph deployment solution DeepSea and distributing the cluster fail-safe across two locations.
Continuation in 2021
Ceph Day concludes this year's storage2day, which is taking place online due to Corona. In February 2021, the iX conference for storage networks and data management resumes; its program continues seamlessly with the topic of storage with open source. This is followed by the days for storage architecture in March and storage performance after the Easter holidays.
Like its on-site counterpart in 2019, the online conference is aimed at admins, data center and storage managers and IT managers who want further training in the field of storage technology. Ceph Day is flanked by two Ceph workshops: the introductory workshop "Object Storage 101: The fastest way to your own Ceph Cluster" takes place all day on . November, and the more advanced "ROOK – Ceph Storage for Kubernetes" runs all day on November 17.
The teachers' group in the German Informatics Society (Gesellschaft für Informatik, GI) describes the switch to commercial solutions such as Microsoft Office 365 for the education platform planned by the Baden-Württemberg Ministry of Culture as a "big step backwards". This would affect "all areas", such as data protection, education for democracy in the digital sphere, methodology and didactics, as well as "uniformity, openness and collaboration".
Moodle open source platform
"It is very important to us that the excellent infrastructure in Baden-Württemberg's schools is maintained on an open source basis," stated the committee in a position paper that has now been published. "The center of a digital school should always be a learning management system that is really focused on learning." With the already deployed open source platform Moodle, this is available – "including open standards and interoperability".
For the teachers' group it is also questionable "whether our training and collaboration structures would cope with the parallel operation of several systems". An additional platform only makes sense if it reliably supports "an open export and exchange of the materials created". For Moodle with its associated video conferencing system BigBlueButton this is "the case now and in the future"; with proprietary systems like those from Microsoft, it is "usually not".
"Damage to the image of our technology country"
After the education platform Ella, which was declared a failure in 2018, the switch to a commercial solution could be "the next damage to the image of our technology country", warns the Gesellschaft für Informatik. Baden-Württemberg threatens to "lose its digital sovereignty in the education system".
With her plan to introduce Microsoft 365, Education Minister Susanne Eisenmann (CDU) is moving "on thin ice", it says, because "the software collects telemetry data of unknown type and scope". The fact that the legal basis for this ceased to exist in the summer in the course of the "Schrems II" judgment "does not seem to concern anyone". The Cloud Act also obliges Microsoft to "break European law" and release this information to US authorities.
Open and low-threshold
Even today, sensitive student data such as behavior, performance, health and absenteeism "should not actually be processed by teachers on digital devices", states the GI. "At least permanent storage must currently take place in the school's administration network." If this information also ended up in a Microsoft cloud, "the problem worsens". The pragmatic consequence is likely to be a ban on such storage processes: the responsibility would "again be shifted to teachers".
The Gesellschaft für Informatik therefore appeals to the state government: "Take responsibility – stay open source!" Platforms built on free software enable broad access "independent of the operating system and device". Even with the Raspberry Pi mini computer, schoolchildren could participate in digital school life. The executive could "also do justice to the freedom of teaching materials at low cost".
Data flow cannot be stopped
The Association for the Promotion of MINT Teaching (MUN), which campaigns for the subjects of mathematics, computer science, natural sciences and technology, made a similar statement and "clearly opposes the use of Microsoft Office 365 tools" in schools. Even a specially developed, more data-efficient version of the office package, such as the one now to be tested at vocational schools in cooperation with the state data protection officer, is unsuitable: due to US surveillance laws, the use of European servers does not guarantee adequate data protection, since the disclosure obligation remains.
The market strategy of the lock-in effect, which begins at school, implies "a strengthening of the Microsoft monopoly and an already dangerous technological dependence of Europe on an American company", emphasizes the MUN in its submission to Eisenmann. This development can be countered with Linux and open source programs of European origin. A rethink has to "start early, that is, at school". Other parent and teacher associations and other civil society organizations also see no place for Microsoft at educational institutions.
A new alliance is preparing to stir up the market for login services with a master key for many web services. Verimi has teamed up with the Fraunhofer Institute for Applied and Integrated Security (AISEC). Both want to combine their concepts submitted for the “Secure Digital Identities” competition of the Federal Ministry of Economics (BMWi) into a “Germany ID” (DeID). The aim is to develop a uniform standard for secure digital identities in Germany and Europe and to stand up to the data-hungry ID services from Facebook and Google, for example.
The partners want to advance the project as part of the BMWi's innovation competition "Showcase Secure Digital Identities" (SDI). Verimi is backed by companies such as the Allianz insurance group, Axel Springer, Bundesdruckerei, Daimler, Deutsche Bank, Deutsche Telekom and Lufthansa and previously led the "People-ID" consortium at SDI. AISEC, which launched the open source service re:claimID in 2019, was previously represented there with "SDIinNRW".
More than "single sign-on"
Under the umbrella of DeID, around 35 partners from research, business and public administration are applying to the BMWi for the planned three-year implementation phase. In addition to the cities of Bochum and Bonn and the companies Governikus, Procilon and the Sparkasse finance portal, according to Verimi this also includes 1&1 Mail & Media GmbH, which belongs to United Internet and operates the webmail services GMX and Web.de. The parent company is also involved in the local Verimi alternative NetID, alongside, for example, the TV broadcasters ProSiebenSat.1 and RTL. Axel Springer announced in April that it would be represented in both German login alliances in the future.
Beyond a "single sign-on" service that spares users from constantly creating new accounts with associated passwords, DeID is intended to bring together numerous local, distributed and centrally aligned identity solutions for everyday applications in many sectors, across different levels of trust. Initially, pilot projects are planned in North Rhine-Westphalia and Hesse, for example with administrative services, the e-prescription, and with banks and insurance companies (account opening).
According to AISEC expert Marian Margraf, the main challenge at DeID is to establish an ecosystem for electronic identification (eID). This must meet "high security requirements" and still be easy for all citizens to use, including on a smartphone. To do this, it is necessary to take into account the security functions of various current mobile devices and to integrate data protection in a transparent and easily traceable manner.
Uniform standards for the overarching acceptance of digital identities are required by all application partners from business, industry and the public sector, says Verimi Managing Director Roland Adrian of the project. Broad acceptance of an online ID card is also "the urgently needed catalyst for digitization in Germany". According to observers, German login services have so far not been able to make up much ground against their competitors from Silicon Valley. The SDI jury is expected to decide on the award of state funding by mid-December.
The developers of SaltStack, an open source software for automated server configuration, have released security packages and patches for several versions. The fixes address three security vulnerabilities. The Salt team rates the hazard potential of one of them (CVE-2020-17490) as "Low" for the time being, but the others (CVE-2020-16846, CVE-2020-25592) as "High" to "Critical".
As recommended by the team, SaltStack users should apply the packages (or, alternatively, the available patches) as soon as possible.
Unauthorized code execution possible
SaltStack's security advisory gives details of the gaps. According to it, unauthenticated attackers with network access to the Salt API can abuse CVE-2020-16846 to run code on vulnerable systems via the SSH client (shell injection).
CVE-2020-25592 is based on insufficient validation of credentials and tokens and could allow attackers to bypass authentication mechanisms in order to execute commands via the SSH client. CVE-2020-17490 covers security-relevant bugs in the TLS encryption module.
Security packages and patches
SaltStack's advisory does not restrict the vulnerabilities to specific versions; ultimately, all (unpatched) releases appear to be affected by the three holes.
Security packages are ready in the SaltStack repository for versions 3002.x, 3001.x, 3000.x and 2019.x (select from the dropdown menu).
Patches provided via GitLab secure the following SaltStack versions:
3002, 3001.1, 3001.2, 3000.3, 3000.4, 2019.2.5, 2019.2.6, 2018.3.5, 2017.7.4, 2017.7.8, 2016.11.3, 2016.11.6, 2016.11.10, 2016.3.4, 2016.3.6, 2016.3.8, 2015.8.10, 2015.8.13

According to the development team, users of older versions should first update to one of the above versions in order to be able to apply the respective patch.
The team behind MDN, originally started as the Mozilla Developer Network, has announced plans for the documentation site's new platform on the Mozilla Hacks web developer blog and published its first beta. A GitHub-based approach replaces the previous wiki.
The MDN team obviously has a certain sympathy for Pokémon.
(Image: Mozilla Hacks)
The new platform bears the code name Yari, a reference to the traditional Japanese lance. The blog post compares the move from the previous Kuma base to Yari with the evolution of Pokémon: in the games, evolution is often more of a metamorphosis, in which the appearance and sometimes the abilities change significantly. Evidently, the MDN team had been planning a "radical change of the platform" for a long time.
The beta form of Yari
The major upheavals only affect the backend; for readers of the documentation, the change should be largely transparent. The changes will affect those who develop the platform on the one hand, and the authors of articles on the other.
Yari's first beta has been on GitHub since November 3rd. Developers can test it out, and according to the blog post the first release is firmly planned for December 14, which suggests the software was already in a stable state at the start of the beta.
Motivation for the change
The team names four main reasons for the changeover of the platform: less administrative effort on the development side, better workflows for contributors, expanded involvement of the community, and an improved front-end architecture. In addition, the job cuts at Mozilla in August, which also involved downsizing the MDN team, are likely to have given an additional push.
Evidently, it is quite difficult to extend the Kuma platform with new functions. With Yari, a large part of the code base is expected to disappear, making the management of the project much easier.
Pull requests instead of wiki
The model is geared more towards the processes that software developers are used to: in the future, content contributions will be created like code submissions to open source projects, in the form of pull requests (PRs) instead of direct changes in the wiki. In addition, editing can be integrated into typical development processes, and MDN source files can be conveniently worked on in development environments.
The PR approach means that publication is preceded by a review process. The MDN team can look at the posts and provide feedback before new content goes live and potentially has to be changed afterwards. This promises a stronger bond with those who regularly contribute content – analogous to the communities of open source projects.
When checking changed or new contributions, additional quality-control tools such as automated tests can be integrated to better ensure that code is correct. At the same time, the frontend is to be revised, which currently has weak points in some areas.
JAMstack instead of wiki
In the old model, all clients access the content via a content delivery network (CDN), regardless of whether they are only reading articles or creating and editing them. With Yari, the creation of posts takes place on GitHub. The architecture also separates the delivery of documents from search queries and account-specific traffic. As before, the latter services live in their own Kubernetes cluster, which is, however, much smaller than before.
Architecturally, Yari relies on a JAMstack, where the first three letters stand for JavaScript, APIs and markup. The system delivers statically generated websites, and the dynamic parts are handled via APIs or serverless functions. The rendering of the web pages does not take place with every client request, as with server-side rendering, but during the page build process.
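To make the build-time idea concrete, here is a tiny Python sketch of a static build step; it is not Yari's actual code, and the directory names and renderer are invented for illustration:

from pathlib import Path

def render(source_text: str) -> str:
    # Stand-in for a real renderer; Yari processes MDN's own document format.
    return "<html><body>" + source_text.replace("\n\n", "<br><br>") + "</body></html>"

def build_site(src_dir: str = "content", out_dir: str = "build") -> None:
    # Render every source document once, at build time, so the CDN
    # later only hands out finished files.
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for doc in Path(src_dir).glob("*.txt"):
        html = render(doc.read_text(encoding="utf-8"))
        (out / (doc.stem + ".html")).write_text(html, encoding="utf-8")

if __name__ == "__main__":
    build_site()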
The new platform separates contributors from readers and is intended to speed up the delivery of documents via the CDN.
(Image: Mozilla Hacks)
The old platform read the content from a MySQL database, converted it to HTML and delivered it via a CDN, where it was cached for five minutes so that identical queries could be served without database access. In the new model, the content goes daily to an S3 instance on Amazon Web Services, which delivers it to the CDN for reading. The Git version control system plays an essential role in the content-creation ecosystem.
Detour via the IDE
Contributors may initially need to get used to the changeover. They can no longer edit content by simply clicking Edit and changing the page in a WYSIWYG editor, but have to fall back on the processes and tools used in software development.
Typically, you create new contributions in a development environment or a source code editor such as Visual Studio Code and then submit them as a pull request in the GitHub repository. For simple changes you don't necessarily have to go through local tools; you can make the adjustments via the GitHub UI.
To start with, those who write contributions have to edit all files in HTML source code. This includes checking the output in the browser before submitting the PR. In the long term, however, the team is planning to switch to Markdown as the standard format for the content.
Further details on the process and the infrastructure can be found in the blog post on Mozilla Hacks.
In the dispute over the Python library youtube-dl, which can download videos from the streaming platform, the developers of the open source software appear unimpressed by the legal threats from the other side: they have now released a new version of their software. The software had been removed from Microsoft's code-sharing platform GitHub following a DMCA cease-and-desist request by representatives of the US music industry, which sparked fierce protests in the open source community, in parts of the press and among network activists.
Nat Friedman: "This time it annoyed me"
Meanwhile, Nat Friedman, CEO of GitHub and a longstanding open source developer in the GNOME environment, has also got involved. As reported by the news site TorrentFreak, Friedman is actively trying to get youtube-dl's GitHub repository restored. "GitHub exists to help developers. We don't want to complicate their work. We want to help the youtube-dl developers get the DMCA cease-and-desist request out of the way so that we can restore the repo," the GitHub boss told TorrentFreak. Speaking to the website, Friedman also admitted that the case had annoyed him personally.
Friedman thinks the youtube-dl developers could probably bring their code back online with a few minor changes, without fear of legal consequences. It would be advisable, for instance, to remove an example that downloads copy-protected material from the program code. In addition, one would have to remove code that bypasses a measure (rolling cipher) with which YouTube prevents the download of some videos. However, it is not clear how many videos are protected in this way, nor whether it is trivial to bypass this protection manually.
Legally questionable, in PR terms a clear disaster
Some observers believe that a lawsuit by the music industry interest group RIAA against youtube-dl based on the Digital Millennium Copyright Act (DMCA) would have little chance of success, because YouTube also hosts a lot of content that is under free licenses and is not protected by copy protection that would have to be bypassed. This is how many network activists see it, including the Electronic Frontier Foundation (EFF), which sharply criticized the RIAA's DMCA injunction.
Regardless of any upcoming legal disputes, it seems clear that the RIAA scored a spectacular own goal with its actions. In addition to global public criticism, so many GitHub users duplicated the youtube-dl code in their own repositories that GitHub finally had to issue an explicit warning that users taking part in this protest run the risk of having their GitHub accounts blocked.
New version of youtube-dl is available for download
The youtube-dl developers themselves do not seem intimidated by the music industry's threats either. They have released a new version of their software, 2020.11.01.1, on their own website. The changes it contains apparently have nothing to do with the dispute with the RIAA. So far, development of the software seems to be proceeding without restrictions.
Amazon Web Services has released in open source form a new simulator and a series of machine learning tools that allow researchers to study and try to predict the spread of COVID-19 infections. It is a set of tools and data that can help shed light on the complexities of this virus, offering a spread simulator and various models to test the impact of different intervention strategies.
Although today we know somewhat more about COVID-19 than at the start of the pandemic, building a correct epidemiological model is still an arduous task. This is because it is first necessary to identify the variables that can influence the spread of the disease at the city level, then at the country level and finally at the population level. An effective intervention model must then be able to adapt strategies (closures, quarantines) by exploring the trajectories of diseases that have shown trends similar to those of COVID-19.
Studying and preventing the spread of the COVID-19 pandemic with AWS Machine Learning
The machine learning models made available by AWS estimate the progression of the disease by comparing the data with historical results. This gives scientists and researchers a simulator with which to reproduce hypothetical scenarios for different intervention approaches, along with state-level models for the US, India and European countries.
The AWS simulator is able to assign a series of probabilities to the disease variables for each individual, such as the length of time between exposure to the pathogen and the development of symptoms. It is also possible to study the dynamics at the population level, such that the passage of an individual from one state to the next is conditioned by the states of the other individuals in the population. For example, an individual may go from "susceptible" to "exposed" depending on factors such as vulnerability due to pre-existing conditions or external interventions such as social distancing.
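A minimal, illustrative sketch of such an individual-level state machine in Python follows; this is not AWS's simulator, and all probabilities are invented for demonstration:

import random

STATES = ["susceptible", "exposed", "infectious", "recovered"]

def step(population, base_rate=0.002, distancing=0.5):
    # Exposure probability grows with the number of infectious individuals
    # and is damped by interventions such as social distancing.
    infectious = sum(1 for p in population if p == "infectious")
    p_expose = min(1.0, base_rate * infectious * (1 - distancing))
    new_pop = []
    for state in population:
        if state == "susceptible" and random.random() < p_expose:
            new_pop.append("exposed")
        elif state == "exposed" and random.random() < 0.2:
            new_pop.append("infectious")
        elif state == "infectious" and random.random() < 0.1:
            new_pop.append("recovered")
        else:
            new_pop.append(state)
    return new_pop

population = ["infectious"] * 2 + ["susceptible"] * 98
for day in range(60):
    population = step(population)
print({s: population.count(s) for s in STATES})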
"Our open source code simulates COVID-19 case projections at various levels of regional granularity. The output is the projection of total confirmed cases on a specific timeline for a target state or country, for a given degree of intervention. Our solution first seeks to understand the approximate time to peak and the expected daily percentages of COVID-19 cases for the target entity (state/country) by analyzing the incidence of the disease. Next, it selects the optimal parameters using optimization techniques on a simulation model. Finally, it generates the projections of the daily and cumulative confirmed cases, starting from the beginning of the outbreak up to a specified time period in the future," reads the official AWS blog.
Amazon Web Services is not alone in releasing ML models and datasets to help develop adequate intervention measures to contain the spread of the pandemic: Google made its own public in March, while Facebook released theirs a few days ago.
Companies, authorities and other organizations cannot simply use widely deployed video conferencing systems such as Microsoft Teams, Skype, Zoom, Google Meet, GoToMeeting and Cisco WebEx, even in times of the coronavirus pandemic. In an orientation guide published on Friday, the data protection officers of the federal and state governments recommend that relevant services of US providers be "carefully checked" prior to deployment.
Adequate protection of personal information
"The largest and best-known providers of video conferencing products are based in the USA and process the data there," states the Data Protection Conference (DSK) in its handout. After the European Court of Justice (ECJ) recently declared the transatlantic Privacy Shield invalid, this instrument is no longer available to ensure adequate protection of personal information transmitted to the USA.
Anyone who bases data exports on the alternative standard contractual clauses must "analyze the legal situation in the third country with regard to official access and legal protection options for data subjects" before transmission begins, the supervisory authorities explain.
According to the DSK, further analyses are required to make "more concrete statements" about additional protective measures in light of the ECJ case law. The separate inspection obligation also applies if the contractual partner is a European subsidiary of a US company, or if European providers in turn transmit personal data to the USA.
Green light for open source software
Previously, the leading systems from overseas had already failed a short test by the Berlin data protection officer Maja Smoltczyk. The inspector gave the go-ahead for commercially available instances of the open source software Jitsi Meet, such as the service from Netways or safe-videokonferenz.de. She also rated the Tixeo Cloud, BigBlueButton instances from Werk and the messenger Wire as positive.
It would be best to operate conference services with open source software oneself, the DSK now notes. Those responsible would then also have to "have sufficient technical and personnel capacities for operation and maintenance and take suitable technical and organizational measures to protect the data". This could be challenging for smaller institutions.
Service providers and "ready-made" online services
Operation by an external service provider is also possible, but in this case, according to the analysis, "the software used or offered to participants must be examined for data leaks to the manufacturer and third parties". This includes diagnostic and telemetry data. Corresponding "calling home" must "be prevented unless there is a legal basis for it".
It becomes no less complicated when institutional users fall back on a "ready-made" online service, the inspectors point out. "The person responsible must ensure compliance with the data protection principles by selecting a suitable provider", give that provider appropriate instructions "and take their own precautions". For this purpose, they have to check the relevant contracts, conditions of use and security evidence submitted by the processor, as well as its data protection declaration.
Informed and voluntary consent often doubtful
According to the paper, anyone who wants to hold a video conference must first establish to what extent they are authorized to process the large amount of personal data associated with it, paying "particular attention to the principle of data economy". If the choice falls on the tool of an external provider, the data protection relationship to this provider must first be clarified.
Under the General Data Protection Regulation (GDPR), the legal bases that come into question for the use of a video conferencing service include, in addition to "legitimate interest", informed and voluntary consent. Especially in a professional or school context, however, voluntariness is "often doubtful", the DSK states. This applies above all when indispensable information "is only communicated in the context of a video conference".
Problem home office: transmission of picture or sound
As far as employees participate from their home office, the document sees the problem that, without the employees' consent, other participants must not be able to look into their private sphere via image or sound. The employer must therefore provide neutral backgrounds. An "unfavorable camera alignment, taking the devices into unsuitable rooms or rooms occupied by third parties, the unprepared visual and/or acoustic appearance of third parties in the video conference and similar breakdowns" are to be avoided.
On 25 pages, the data protection officers list many other points, such as adequate IT security, that must be observed. At the time the paper was written, for example, end-to-end encrypting solutions that meet these requirements and enable video conferences for a larger number of participants, even if only low or varying bandwidth or computing power is available at the endpoints they use, were "not yet marketable". Transport encryption can therefore currently be sufficient to meet the legal requirements, provided that an appropriate level of protection is guaranteed through compensatory measures.
"Only authorized persons should be able to access a video conference session and its data," write the authors. If high risks for the rights and freedoms of the participants are looming, "at least two-factor authentication according to the state of the art" must take place.
In Baden-Württemberg, a trial run lasting several weeks at 20 to 30 vocational schools is to clarify whether Microsoft Office 365 can be used in the education sector in compliance with data protection regulations. The state data protection officer Stefan Brink cleared the way for this on Friday and declared that he would take part in this voluntary pilot project of the Ministry of Culture in an advisory capacity. A "specially configured version for schools" of the US software company's office package is to be used.
Data protection improved
According to the ministry, the test for the planned education platform in the "Ländle" is to include the open source solution Moodle with the video conferencing system BigBlueButton, the messenger service Threema for teachers and another learning management system, as well as cloud-based Microsoft products. In addition to a business e-mail address for teachers, the latter comprise "classic office tools such as Word, PowerPoint, Excel", cloud storage, and Teams as an additional video conferencing and collaboration system.
For the Microsoft components, the department worked with external partners on a data protection impact assessment (DPIA). The state data protection officer rejected the first version "because it had considerable deficits in data protection law, which did not appear acceptable in such a weighty and sensitive project". In mid-October, the ministry presented a second, "considerably revised" DPIA. Although this still does not answer all data protection questions, it represents "a sufficient basis for the now upcoming pilot".
Better staffing necessary
"However, considerable imponderables remain, especially when using US service providers," warns Brink: in view of the so-called Schrems II ruling of the European Court of Justice, it is currently unclear how future data transfers from the EU to the USA will be legally possible at all. This question must ultimately be decided at the European level.
This is also an important reason for the inspector "why schools should always look at available and reliably usable alternatives to the software solutions used". In addition to the open source offerings already included in the package, he refers to the video conferencing software Jitsi, the cloud service Nextcloud and the office software OnlyOffice. In order to be able to use these permanently, the ministry would have to significantly increase "the already completely inadequate staffing of the schools with data protection officers". The state university network BelWü has also long been offering dedicated e-mail addresses for educational institutions.
A somewhat "data-efficient" software version
At the same time, the data protection advocate reports successes in his talks with Microsoft. The ministry's offering will be based on special "data-saving" software versions that restrict the outflow of telemetry data to the provider and the option to create user profiles. The company has pledged to improve encryption and "reduce its own processing purposes".
According to Brink, Microsoft has also promised "a considerable strengthening of user rights in relation to access by US security authorities", for which, among other things, "legal protection guarantees and compensation obligations" are provided. All information and measured values will be processed "exclusively in Germany". In addition, the group will provide teachers with instructions on how to use the programs in a data-saving manner.
"School does not serve the provider"
As part of the test, the head of the supervisory authority now wants to check "whether the promised deactivation of problematic processing has actually taken place" and whether Microsoft keeps its remaining promises. Otherwise, live operation of the package is out of the question. Brink emphasized: "We want the software used to serve the school, and not the school to serve the provider through the creation of profiles or product offers." All concerned should know "what data is created, where it is collected and how it is used".
Previously, parent and teacher associations, the Alliance for Humane Education and the Chaos Computer Club had railed against the Ministry of Culture's plan to include Microsoft products in the education platform. They pointed, for example, to the end of the transatlantic Privacy Shield and to the decision of the data protection conference of the federal and state governments, according to which the US provider was, at least initially, not yet able to fully meet the requirements of the General Data Protection Regulation (GDPR).
The Baden-Württemberg Minister of Education, Susanne Eisenmann, recently received a Big Brother Award for her approach. In view of the temporary compromise, the CDU politician spoke of a “good signal for the schools and school authorities in the country”. The “extensive examination of all data protection issues” was worth it.
One of the worst jobs in the world right now is being a greeter at a retail store who has to tell people to put on their face masks. Instead of making a human check for mask compliance, we can create a Raspberry Pi-powered mask detector that uses image recognition. Then unruly patrons can yell at a Raspberry Pi screen instead.
In this article, we’ll show you how to set up a Raspberry Pi Face Mask Detection System and sound a buzzer when someone is not wearing their face mask. This project was inspired by a video of a mall in Asia where an entry gate could only be activated by a user wearing a face mask.
How does the Raspberry Pi Face Mask Detector project work?
When a user approaches your webcam, the Python code, using the TensorFlow, OpenCV and imutils packages, will detect whether the user is wearing a face mask. Users wearing a face mask will see a green box around their face with the text, "Thank you. Mask On." Users not wearing a face mask will see a red box around their face with "No Face Mask Detected."
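To make the flow concrete, here is a heavily simplified sketch of such a detect-and-label loop. It is not the tutorial's detect_mask_webcam.py: it uses OpenCV's bundled Haar cascade for face detection, and "mask_detector.model" stands in as a placeholder for a trained Keras model whose output layout is assumed here.

import cv2
from tensorflow.keras.models import load_model

model = load_model("mask_detector.model")  # placeholder path, assumed model
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # Crop the face, scale it to the model's input size, and classify it.
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
        mask_prob = model.predict(face[None, ...] / 255.0)[0][0]  # assumed output shape
        if mask_prob > 0.5:
            color, label = (0, 255, 0), "Thank you. Mask On."
        else:
            color, label = (0, 0, 255), "No Face Mask Detected"
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    cv2.imshow("Mask detector", frame)
    if cv2.waitKey(1) == 27:  # ESC stops the script, as in the tutorial
        break

cap.release()
cv2.destroyAllWindows()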
How long does the Raspberry Pi mask detector project take?
Starting from a fresh install of Raspberry Pi OS, completing all elements of this project will take at least 5 hours. If you completed our previous post on Raspberry Pi Facial Recognition, you can subtract 1.5 hours for the install of OpenCV. Even better, we've included a pre-trained model so you can jump directly to a working Pi mask detection system.
ICYMI – Facial Recognition with Raspberry Pi: We recently posted a facial recognition tutorial where we used machine learning to train our Raspberry Pi to recognize specific faces. This tutorial uses many of the same principles of machine learning and AI, but today we are adding TensorFlow to identify an object, specifically a face mask. We recently featured another Raspberry Pi Tensorflow project that determined if a cat was carrying prey to its owner’s door.
Disclaimer: This article is provided with the intent for personal use. We expect our users to fully disclose and notify when they collect, use, and/or share data. We expect our users to fully comply with all national, state, and municipal laws applicable.
What You’ll Need for Raspberry Pi Face Mask Detection
Raspberry Pi 4 (Raspberry Pi Zero is not recommended for this project, and the Raspberry Pi 3 ran very slowly.)
16GB (or larger) microSD card (see best Raspberry Pi microSD cards) with a fresh install of Raspberry Pi OS
Power supply/Keyboard/Mouse/Monitor/HDMI Cable (for your Raspberry Pi)
USB Webcam or Raspberry Pi Camera
Optional: 7-inch Raspberry Pi touchscreen
Optional: Stand for Pi Touchscreen
The majority of this tutorial is based on terminal commands. If you are not familiar with terminal commands on your Raspberry Pi, we highly recommend reviewing 25+ Linux Commands Raspberry Pi Users Need to Know first.
Part 1: Install Dependencies for Raspberry Pi Face Mask Detection
In this step, we will install OpenCV, imutils, and Tensorflow.
OpenCV is an open source software library for real-time image and video processing with machine learning capabilities.
Imutils is a series of convenience functions to expedite OpenCV computing on the Raspberry Pi.
Tensorflow is an open source machine learning platform.
1. Install a fresh copy of the Raspberry Pi operating system on your 16GB or larger microSD card. Check out our article on how to set up a Raspberry Pi for the first time or how to do a headless Raspberry Pi install. When we tried this project after running 'sudo apt-get update && sudo apt-get upgrade', we failed to build/install OpenCV.
2. Plug in your webcam into one of the USB ports of your Raspberry Pi. If you are using a Raspberry Pi camera instead of a webcam, use your ribbon cable to connect it to your Pi. Boot your Raspberry Pi.
Adding LEDs and a buzzer is covered as an optional final step.
3. If you are using a Pi camera instead of a webcam, enable Camera from your Raspberry Pi configuration. Press OK and reboot your Pi.
4. Open a Terminal. You can do that by pressing CTRL + T.
5. Install OpenCV. This step takes about 2 hours. Please see Part 1 of our Raspberry Pi Facial Recognition Tutorial for full instructions on installing OpenCV. Upon completion of installing OpenCV, your terminal should look something like this:
6. Install TensorFlow. This step took about 5-10 minutes.
Part 2: Run Face Mask Detection with the Pre-Trained Model (Quick Method)
3. Run the pre-made model, trained with over 1,000 images. In your terminal, change directory (cd) into the directory you cloned from GitHub.
cd face_mask_detection
4. Run the Python 3 code to open up your webcam and start the mask detection algorithm.
python3 detect_mask_webcam.py
If you are using a Pi Camera, enter python3 detect_mask_picam.py
After a few seconds, you should see your camera view pop-up window and see a green box indicating face mask presence.
Or a red box indicating lack of face mask.
You can try experimenting with various face masks, improper and proper wearing of your face mask (i.e. face mask hanging from your ear, or face mask below the nose).
Press ESC to stop the script.
Part 3: Face Mask Model Training (Long Method)
Now that you have your face mask detector up and running, you’re probably wondering, “How does it work?”
Over one thousand photos were used to train the model that detect_mask_webcam.py uses to make the mask-or-no-mask determination. The more examples provided, the better the machine learning model: fewer photos means less accuracy.
The photos in our dataset were divided into two folders, with_mask and without_mask, and the training algorithm created a model of mask vs. no mask based on that dataset. The sample photos provided in the dataset folder you downloaded from GitHub are my own photos.
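For illustration, a training script can turn those two folders into labeled arrays roughly like this; the project's actual train_mask_detector.py does more (preprocessing specific to its network, data augmentation), so treat this as an outline only:

import os

import cv2
import numpy as np

def load_dataset(root="dataset", size=(224, 224)):
    images, labels = [], []
    for label, folder in enumerate(["without_mask", "with_mask"]):
        folder_path = os.path.join(root, folder)
        for name in os.listdir(folder_path):
            img = cv2.imread(os.path.join(folder_path, name))
            if img is None:  # skip files that are not readable images
                continue
            images.append(cv2.resize(img, size) / 255.0)
            labels.append(label)  # 0 = no mask, 1 = mask
    return np.array(images), np.array(labels)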
What if instead of hundreds of photos, we trained our Raspberry Pi Mask Detection system on 20 photos? Fortunately, we have a pre-trained model for you to test out.
From your face_mask_detection folder in your terminal, run the Python 3 code to open up your webcam with the 20 photo model.
python3 detect_mask_webcam.py --model mask_detector-20.model
If you are using a Pi Camera, enter python3 detect_mask_picam.py --model mask_detector-20.model
After a few seconds, you should see your camera view pop-up window and see a green box or a red box. You’ll find this model is not very accurate.
How to train the Raspberry Pi face mask model yourself
As a part of this tutorial, I’ve created a way for you to train the model on your own photos.
In the dataset folder within face_mask_detection on your Pi, check out the two subfolders, with_mask and without_mask.
To train the Pi with your photos, simply save your photos (headshots of people wearing or not wearing face masks) to the appropriate folder. Have fun with this and take photos of yourself and your family.
Take your own photos with your Raspberry Pi
1. Open a Terminal, press Ctrl-T.
2. Change directories into the face_mask_detection folder.
cd face_mask_detection
3. Run the Python code to take photos of yourself wearing a mask, and do the same for no-mask photos.
If using a webcam run:
python withMaskDataset.py
or
python withoutMaskDataset.py
If using a pi camera run:
python withMaskDataset-picam.py
or
python withoutMaskDataset-picam.py
4. Press your spacebar to take a photo.
5. Press q to quit when you are done taking photos.
Running these scripts will automatically save photos into their respective folders, with_mask and without_mask. The more photos you take, the more accurate the model you will create in the next step, but keep in mind, your Raspberry Pi does not have the same computing power as your desktop computer. Your Raspberry Pi will only be able to analyze and process a limited amount of photos due to its compute power and RAM size. On our Raspberry Pi 4 8GB, we were able to process about 1,000 photos, but it took over 2 hours to create the model.
Training the model for Raspberry Pi face mask detection
In this step, we will train the model based on our photos in the dataset folder, but we’ll need to install a few more packages first. The maximum number of photos the train_mask_detector.py script will be able to process will vary depending on your model of Raspberry Pi and available memory.
1. Open a Terminal, press Ctrl-T.
2. Install the sklearn and matplotlib packages on your Pi.
3. Train the model. Keep in mind that the more photos you have in the dataset folder, the longer it will take to create the model. If you get an "out of memory" error, reduce the number of photos in your dataset until you can successfully run the Python code.
cd face_mask_detection
python3 train_mask_detector.py --dataset dataset --plot mymodelplot.png --model my_mask_detector.model
In our testing, it took over 2 hours to train the model with 1,000 images.
In this example, we trained our model with only 20 images, and the confidence/accuracy is rated at about 67%.
After the script finishes running, you’ll see a new file in the face_mask_detector directory: my_mask_detector.model
4. First let’s check to see how accurate our Pi thinks this model will be. Open the newly created image called mymodelplot.png
In this image, we trained the model with 1,000 images and the training accuracy was very high.
Testing Your Raspberry Pi face mask model
Now that you’ve trained your model, let’s put it to the test!
Run the same detection script, but specify your model instead of the default model.
From the same Terminal window:
python3 detect_mask_webcam.py --model my_mask_detector.model
If you are using a Pi Camera, enter python3 detect_mask_picam.py --model my_mask_detector.model
How did you do? Let us know in the comments below.
Part 4: Adding a Buzzer and LEDs
Now that we’ve trained our model for Raspberry Pi face mask detection, we can have some fun with the results.
In this section, we add a buzzer and 2 LEDs to quickly identify if someone is wearing their face mask or not.
For this step, you'll need these add-ons:
Small Breadboard
Two 330 Ohm resistors
1 Red LED
1 Green LED
1 Buzzer
1. Wire the LEDs and buzzer as shown in the diagram below. (Always add a resistor between the positive terminal of your LED and your GPIO pin on your Pi.)
a. Red LED will be controlled by GPIO14.
b. Green LED will be controlled by GPIO15.
c. Buzzer will be activated by GPIO21.
d. Connect GND to GND on your Pi. (A minimal test sketch follows below.)
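Here is a minimal sketch of what such a test script could look like, assuming the gpiozero library and the pin assignment from the wiring list above; the tutorial's own LED-buzzer.py may differ in detail:

from time import sleep

from gpiozero import LED, Buzzer

red = LED(14)        # GPIO14 -> red LED
green = LED(15)      # GPIO15 -> green LED
buzzer = Buzzer(21)  # GPIO21 -> buzzer

# Alternate the LEDs and pulse the buzzer three times.
for _ in range(3):
    red.on()
    green.off()
    buzzer.on()
    sleep(0.5)
    red.off()
    green.on()
    buzzer.off()
    sleep(0.5)

# Leave everything off at the end.
red.off()
green.off()
buzzer.off()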
2. Test your LED and buzzer setup by running LED-buzzer.py. Open a new terminal and run the test code by typing:
cd face_mask_detection
python LED-buzzer.py
If you see your LEDs alternate on and off and hear your buzzer beep, you've successfully completed this step and can move on. If the LEDs don't light up or your buzzer doesn't work, check your wiring.
3. If your buzzer stays on after you have pressed Ctrl-C to exit the Python code, run python LED-buzzer-OFF.py to turn off the buzzer and the LEDs.
4. Test the Raspberry Pi face mask detection system. In the same terminal, run
python3 detect_mask_webcam_buzzer.py
If you are using a Pi Camera, enter python3 detect_mask_picam_buzzer.py
If you’re using your own model, add --model my_mask_detector.model as you did in the previous step.
If everything works correctly, when the script detects you are wearing a face mask, the green LED should turn on. If the script detects you are not wearing a face mask, the buzzer should sound along with the red LED lighting up.
The possibilities for this project are endless. You could continue to train your model with more photos. You could add a servo motor or activate a gate when a face mask is detected. Or you could try combining this tutorial with the automated email sending code from the Raspberry Pi Facial Recognition tutorial to send an email with a photo when someone enters without a face mask.
TimescaleDB, a scalable database specializing in time series, is now available as a second release candidate (RC), after two years of development with several beta versions and a first release candidate. The time series database supports SQL, is built on PostgreSQL as an extension and, in contrast to relational databases, scales freely.
It offers a distributed multi-node architecture which, according to the publishers, can store time series data in the petabyte range and process it particularly quickly. For self-managed installations of the software, the RC is ready for production; the final release and roll-out in the services managed by Timescale is planned for the end of the year. The database is available free of charge as open source.
Continuous aggregation via updated APIs
According to the announcement in the Timescale blog, the release candidate also provides all enterprise features free of charge and grants users more rights than before. Continuous aggregation of data is possible via updated APIs, giving users more control over the aggregation process. Tasks can now apparently be customized, and within the database it should be possible to control individual tasks and their behavior during execution more precisely with a schedule. Regarding speed of data processing, the publishers refer to rankings of the relational databases on the market, in which PostgreSQL is currently in the top 4 and apparently on a par with MongoDB or slightly behind it.
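As a rough illustration of what a hypertable and a continuous aggregate look like from client code, here is a minimal Python sketch using psycopg2. The connection string and schema are invented, and the exact API surface of the 2.0 release candidate should be checked against the changelog:

import psycopg2

# Placeholder connection string -- adjust to your own instance.
conn = psycopg2.connect("dbname=metrics user=postgres")
conn.autocommit = True  # continuous aggregates cannot be created inside a transaction
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        time   TIMESTAMPTZ NOT NULL,
        device TEXT NOT NULL,
        value  DOUBLE PRECISION
    );
""")

# Turn the plain table into a time-partitioned hypertable.
cur.execute("SELECT create_hypertable('readings', 'time', if_not_exists => TRUE);")

# A continuous aggregate that maintains hourly averages incrementally.
cur.execute("""
    CREATE MATERIALIZED VIEW readings_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket,
           device,
           avg(value) AS avg_value
    FROM readings
    GROUP BY bucket, device;
""")

conn.close()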
Why a separate database for time series?
The database is not concerned with peanuts but with particularly large data sets, often distributed across several servers and nodes, into which telemetry data flows continuously and which, for example at financial service providers or in scientific projects, can comprise over a billion data series per day. Production lines in factories, smart home devices, vehicles, the stock market, software stacks, but also private devices, for example in the health sector, continuously produce telemetry data via apps, whose classifying criterion is the time series.
Since the volume of such data series is growing and relational databases are apparently reaching their limits in collection and processing, the Timescale creators launched the project of a specialized database three and a half years ago. Companies as diverse as Bosch, Siemens, Credit Suisse, IBM, Samsung, Walmart, Uber, Microsoft and Warner Music use and support the development, according to the provider. According to the Timescale blog, in addition to the PostgreSQL community and its ecosystem, a developer community specifically interested in time series stands behind the project.
Further information
A look back at the first release candidate shows that a number of services were still chargeable at that time and that reach has apparently grown: according to the provider, the user base has increased from one million downloads back then (2018) to ten million today. Further information on the second release candidate can be found in the Timescale blog. The blog lists a number of demo videos and offers several download options; the software is available as a Docker image and in other variants. For users already familiar with TimescaleDB, the changelog should be relevant; the Timescale team has put together an upgrade guide.
The open source machine learning library PyTorch 1.7 has been released. The current version of PyTorch supports Nvidia's programming platform CUDA 11. The release includes a number of new APIs, supports NumPy-compatible Fast Fourier Transform (FFT) operations – this feature is still in beta – and offers new profiling tools. The additions to the Autograd Profiler cover TorchScript and stack traces, so that users see not only operator names in the profiler's output table but also where the operator is actually located in the code.
In terms of new frontend APIs, PyTorch 1.7 includes torch.fft, a module implementing FFT functions, supports C++ with the nn.transformer module abstraction, and can use torch.set_deterministic to steer operators towards deterministic algorithms where available. TorchElastic, which provides a strict superset of the torch.distributed.launch CLI, is now considered stable and gains further functionality for fault tolerance and elasticity.
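The determinism switch, for instance, is global; a brief sketch of opting in (torch.set_deterministic is the 1.7 API and itself still in beta):

import torch

# Ask PyTorch to prefer deterministic algorithms; operations without a
# deterministic implementation will raise an error instead of silently
# producing non-reproducible results.
torch.set_deterministic(True)

x = torch.randn(8, 8)
y = x @ x  # matmul has a deterministic path, so this runs fine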
Parity between Python and C++ APIs is getting closer
The PyTorch team has been working on the frontend APIs for C++ since PyTorch 1.5: since then, the developers have been striving for parity between the Python and C++ APIs. The current release allows developers to use the nn.transformer module abstraction from the C++ frontend. The intermediate step of loading via Python/JIT is no longer necessary; according to the blog announcement, the module abstraction can be used directly in C++ with PyTorch 1.7. The development is still in the beta stage.
Further innovations in the area of mobile devices concern Torchvision, which now supports tensor inputs, and a new Video API (still in beta). As part of this feature, transforms now inherit from nn.Module. The transforms can be implemented in TorchScript and, according to the blog post, support tensors with batch dimensions and run seamlessly on CPU and GPU devices, as the PyTorch team illustrates with a code example:
import torch
import torchvision.transforms as T

# to fix random seed, use torch.manual_seed
# instead of random.seed
torch.manual_seed(12)

transforms = torch.nn.Sequential(
    T.RandomCrop(224),
    T.RandomHorizontalFlip(p=0.3),
    T.ConvertImageDtype(torch.float),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
)
scripted_transforms = torch.jit.script(transforms)

# Note: we can similarly use T.Compose to define transforms
# transforms = T.Compose([...]) and
# scripted_transforms = torch.jit.script(torch.nn.Sequential(*transforms.transforms))

tensor_image = torch.randint(0, 256, size=(3, 256, 256), dtype=torch.uint8)

# works directly on tensors
out_image1 = transforms(tensor_image)
# on the GPU
out_image1_cuda = transforms(tensor_image.cuda())
# with batches
batched_image = torch.randint(0, 256, size=(4, 3, 256, 256), dtype=torch.uint8)
out_image_batched = transforms(batched_image)
# and has torchscript support
out_image2 = scripted_transforms(tensor_image)
Torchaudio now supports speech recognition (wav2letter) and text-to-speech (WaveRNN).
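For illustration, both models can be instantiated directly from torchaudio.models; the parameter values below are example choices, not recommended settings:

import torch
import torchaudio

# Acoustic model for speech recognition: per-frame character scores.
wav2letter = torchaudio.models.Wav2Letter(num_classes=40)
waveform = torch.randn(1, 1, 16000)  # (batch, channel, time)
logits = wav2letter(waveform)

# Vocoder for text-to-speech; the product of the upsample scales
# must equal the hop length.
wavernn = torchaudio.models.WaveRNN(
    upsample_scales=[5, 5, 8],
    n_classes=256,
    hop_length=200,
)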
Distributed training on new feet
The PyTorch team has apparently fundamentally reworked distributed training based on DDP (Distributed Data Parallel) and RPC (Remote Procedure Call); in the future, distributed training should also be possible on Windows (the development is still in the prototype stage). That PyTorch is working harder on Windows support was already a core topic of the last release. Some previously experimental features, such as user-defined C++ classes, extensions using tensor-like objects and the memory profiler, are now considered stable. Since the last release (1.6), the PyTorch team has classified the development status of features as stable, beta or prototype.
PyTorch 1.7 supports DistributedDataParallel (DDP) and collective communication on the Windows platform. The support in the current release concerns in particular the ProcessGroup and the FileStore, which are both based on the Gloo communication backend. In order to use this feature, classified as a prototype, developers must provide a file from a shared file system in init_process_group. The PyTorch team demonstrates how to do this in its blog:
# initialize the process group
dist.init_process_group(
    "gloo",
    # multi-machine example:
    # init_method="file://////{machine}/{share_folder}/file"
    init_method="file:///{your local file path}",
    rank=rank,
    world_size=world_size
)
model = DistributedDataParallel(local_model, device_ids=[rank])
Interested parties can consult a design document on GitHub and the documentation on the PyTorch website.
Frontend APIs with FFT functionality
The PyTorch team has recently been working on functionality that historically received only limited support from the ML library: the current version of PyTorch gains a new torch.fft module that implements FFT functions with the same API as the NumPy library for numerical computing in Python. Fast Fourier Transforms are common in scientific fields for signal processing. According to the announcement in the PyTorch blog, explicitly importing the module is essential in order to use it in PyTorch 1.7, as the name otherwise conflicts with the now deprecated torch.fft function.
The PyTorch team provides a code example in the blog showing what this can look like in practice and refers to the more detailed documentation:
>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])
>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])
Additional information
Notes on all of the numerous innovations and more detailed information can be found in the announcement on the PyTorch blog. Interested parties will find further links to the documentation and the download options there. All features are marked as stable, beta or prototype according to the recently adopted PyTorch conventions.
The development team behind the alternative Python runtime Pyston has surprisingly presented version 2.0. The project, originally started at Dropbox, is designed as a fork of CPython and, like PyPy, relies on just-in-time (JIT) compilation. The new release promises up to 20 percent higher speed compared to CPython. To this end, the creators have replaced the previous compiler infrastructure LLVM with DynASM.
Those believed dead live longer
Since Dropbox withdrew its support for the further development of Pyston a good three years ago, the future of the project has been in the dark. With some fundamental technical innovations, the Python implementation, now published by part of the original development team around Kevin Modzelewski, is supposed to bring a breath of fresh air and give CPython some competition. Instead of the previously used LLVM JIT engine, Pyston 2.0 uses the dynamic assembler DynASM. The assembler-generating engine, freely available under the MIT license, was originally developed as a tool for LuaJIT, but is now also intended to help Pyston – and thus Python – achieve more speed.
The developers also make use of CPython's most important optimizations, including attribute caching. Thanks to full compatibility with CPython's C API, Pyston should also score points against runtimes such as PyPy, which likewise rely on JIT compilation. Its lower memory requirement gives Pyston performance advantages for applications built on the widely used web frameworks Django and Flask.
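To see what attribute caching targets, consider this small CPython micro-benchmark; it is not a Pyston benchmark, it merely shows the cost of the repeated attribute lookups that such caching makes cheap:

import timeit

setup = "items = list(range(1000)); out = []"

# Looks up the "append" attribute on every iteration.
repeated_lookup = "for i in items: out.append(i)"
# Hoists the bound method out of the loop, mimicking a cached lookup.
cached_lookup = "append = out.append\nfor i in items: append(i)"

print("lookup each time:", timeit.timeit(repeated_lookup, setup, number=2000))
print("cached lookup:   ", timeit.timeit(cached_lookup, setup, number=2000))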
Pyston is closed source – at least for the time being
Further information on Pyston 2.0, including some benchmark results, can be found in the blog post on the release. The runtime is now available on GitHub in ready-made packages for Ubuntu 18.04 and 20.04 (x86_64). The developers want to support other operating systems on demand. Unlike Pyston up to the previous version 0.6.1, release 2.0 is no longer available as open source under the Apache 2.0 license but is, at least temporarily, closed source. The team justifies this limitation with the high cost of compiler development and the lack of a benevolent sponsor such as Dropbox was in the past. A final new business model has yet to be found, Modzelewski emphasizes in his announcement.
RISC-V designs are on the advance and are already used in many chips as co-processors – often without the hardware's user knowing. SiFive is a startup that is currently driving development particularly aggressively. As an open source hardware development, RISC-V is as important a step as Linux is in the software world.
SiFive offers numerous different Core IPs; in the online Core Designer, the various RISC-V designs can be configured into an SoC. With the HiFive Unmatched, a new developer hardware platform was presented today, which is intended to support developers of RISC-V software. A Freedom U740 serves as the processor, combining four U7 cores and one S7 core. It is therefore a Linux-capable 64-bit processor.
SiFive uses it on a mainboard in mini-ITX format. Not only should the standard format help in the simple implementation of a new development platform, but also the available connections. The board is connected to a power supply unit via an ATX connector. There is also a PCI Express slot (PCIe 3.0 x8, physically x16), Gigabit Ethernet and four USB ports.
8 GB of DDR4 memory is installed on the board, and 32 MB of QSPI flash is available. Linux can be booted from a microSD card. Faster storage can be accommodated in two M.2 slots, each with four PCIe 3.0 lanes.
The HiFive Unmatched ships with an SD card containing a bootable Linux, plus the most important system developer packages so that developers can get started right away. SiFive puts the price of the HiFive Unmatched at 665 US dollars. Pre-orders should begin shortly.