Facebook has published a translation model it calls the “Many-to-Many multilingual machine translation” (MMT) model. While most translation systems use English data as a stopover, M2M-100 skips this step and translates directly, for example from Chinese into French.
According to Facebook, most training data is available in English, which is why previous models translate, for example, from Chinese into English and from there into another language, which introduces an additional source of error. The new model handles 100 languages, in all directions. This is particularly important for the social network, since news feeds are automatically converted into the language the user has set, and two-thirds of account holders are not English speakers.
Billions of sentences for 100 languages
On BLEU, the rating scale for machine-translated texts, M2M-100 scored just as well as simple bilingual models and, according to a Facebook blog post, even better than the English-centered models. 7.5 billion sentences in the 100 languages were used for this purpose, and the model has 15 billion parameters. The mass of data that has to flow in to enable the direct translation channels was one of the major difficulties, since the required training data grows quadratically: “If we need ten million sentence pairs in each direction, we need one billion sentence pairs for 10 languages and 100 billion for 100 languages.” The data comes from the existing collections ccAligned, ccMatrix and Laser, among others, with Facebook building Laser 2.0 from them as part of this work.
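The quadratic growth is easy to check; a quick back-of-the-envelope calculation (plain Python, purely illustrative):

n_languages = 100
pairs_per_direction = 10_000_000               # ten million sentence pairs per direction
directions = n_languages * (n_languages - 1)   # every ordered language pair
print(directions)                              # 9900 translation directions
print(directions * pairs_per_direction)        # 99,000,000,000 pairs, roughly the "100 billion" quoted
print(10 * 9 * pairs_per_direction)            # ~1 billion pairs for 10 languages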
The Eclipse Foundation has published the results of its 2020 IoT (Internet of Things) developer survey to coincide with the EclipseCon currently in session. The survey is run by the Eclipse IoT Working Group, under whose umbrella 45 open source projects for the Internet of Things are currently operated, including a number of very prominent ones. The results of the sixth survey provide insights into the structure of the IoT industry, the challenges for developers, and the opportunities for companies in the IoT open source ecosystem.
For the first time, participants were also asked about their use of edge computing, which should inform the direction of the Eclipse Edge Native Working Group founded in December last year.
Key findings of the study
The survey was conducted online from May to July 2020. More than 1,650 people from different industries and organizations took part. The most important findings for the open source organization include:
In 2020, smart agriculture developed into an established focus area.
Security (39%), connectivity (26%), and data acquisition and analysis (26%) remain the three most important areas of interest for IoT developers.
Artificial intelligence (30%) was the most frequently chosen workload in edge computing.
Data protection is an increasingly important concern, named by 23 percent of respondents, as awareness of the issue is apparently growing among organizations and consumers alike.
Distributed ledger technology has gained momentum as a way to secure IoT scenarios.
Java is the most frequently used programming language in edge computing and in cloud computing (24%).
Open source dominates in the field of databases.
Regarding open source, which is at the core of all Eclipse projects, Mike Milinkovich, head of the open source organization, adds in his blog that 65 percent of respondents experiment with, use, or contribute to open source projects.
The new Git version 2.29 gives users of the open source tool for distributed version management the opportunity to test another object format, one based on the Secure Hash Algorithm SHA-256, which is considered more resistant to attacks than the common format based on SHA-1; the newer SHA-256 object format receives experimental support in the current release. Also new are some shortlog tricks: git shortlog can now group commits not only by author but can also list co-authors.
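Grouping by the Co-authored-by trailer can be combined with the usual author grouping; an illustrative invocation (assuming your commits carry Co-authored-by trailers):

$ git shortlog --group=author --group=trailer:co-authored-by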
Exclude references with negative refspecs
Git 2.29 also supports negative refspecs. As a reminder, refspecs are the mappings Git creates when cloning repositories: they let the version management system assign which content belongs where and, for example, correctly reproduce the hierarchy of branches elsewhere. Until now, developers could only use these reference markers to specify which selection of references they wanted. With negative refspecs, references can be selectively excluded for the first time.
If a refspec begins with the character ^, Git now excludes the noted reference. Developers can trigger this functionality with the following command: $ git fetch origin ‘refs/heads/*:refs/heads/*’ ^refs/heads/ref-to-exclude. The same result was achievable before, but the way there is now more elegant. Negative refspecs may contain wildcards, but according to the blog entry it is not possible to give them a specific target address.
To exclude a wildcard refspec, users can insert ^refs/heads/foo/*, for example. Negative refspecs have another special feature: unlike positive refspecs, they cannot refer to an individual object by its object ID. Negative refspecs can also be used in configuration values.
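In a repository's configuration, a negative refspec sits alongside the usual fetch lines; a hypothetical .git/config excerpt might look like this:

[remote "origin"]
	url = https://example.com/repo.git
	fetch = +refs/heads/*:refs/remotes/origin/*
	fetch = ^refs/heads/ref-to-exclude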
New hash functions with SHA-256
According to the blog announcement, the Git team plans to make SHA-256 the default in the future while continuing to support SHA-1. For the upcoming switch to SHA-256, the Git team has included a transition plan with the new release. In the future, it should also be possible to work with repositories in both formats, for which the software apparently calculates hashes in both formats for every object that users create in Git. So that users can edit repositories across formats if they contain objects with different formats, the version management system will use a translation table. References to older commits in the SHA-1 object format should remain valid, which Git makes possible by automatically converting the format using the translation table.
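The experimental format can be tried out when creating a fresh repository; an illustrative sequence (note that such repositories cannot yet exchange data with SHA-1 remotes):

$ git init --object-format=sha256 sha256-repo
$ git -C sha256-repo rev-parse --show-object-format   # should print "sha256"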
The OSGi Alliance, founded over twenty years ago, will be dissolved, and its projects will be handed over to the Eclipse Foundation. In the OSGi blog, the president of the organization announced the step, which coincides with EclipseCon 2020. Like most events this year, the conference will take place online.
The two organizations have been close for a long time. Both have deep roots in the Java environment: the OSGi specification describes a modular system and a service platform for Java that implements a dynamic component model. OSGi used to stand for “Open Services Gateway initiative”. The Eclipse Foundation hosts numerous Java projects, including Jakarta EE, the enterprise successor to Java EE, which Oracle handed over to the Eclipse Foundation three years ago.
An early symbiosis of the two projects, dating precisely to Eclipse 3.0 in 2004, is the implementation of Eclipse plug-ins as OSGi bundles. This made the development environment one of the first enterprise applications of the OSGi specification, which was originally aimed at the embedded environment, and in turn significantly shaped the specification's further development.
Founding times and members
In the past few years, the OSGi Alliance hosted its community event as part of EclipseCon, where it celebrated its twentieth anniversary in 2019. That makes it five years older than the Eclipse Foundation, established in 2004. At the time, OSGi founding members included IBM, Oracle, Sun Microsystems, Ericsson and Philips; Deutsche Telekom, Bosch, Software AG, NTT and Adobe, among others, joined later.
The two organizations also share numerous members, and the Eclipse Equinox project has been the reference implementation of the OSGi framework for several years. In March, the OSGi Alliance proposed the eighth version of the OSGi specification, which has not yet been implemented; OSGi Release 7 is still current.
Twenty years of change
Dan Bandera, president of the alliance, describes in the blog post the changes of the last twenty years and the altered conditions that ultimately led to the decision to hand the project over to Eclipse. He notes that Oracle has since taken over Sun Microsystems, and that IBM and Oracle are no longer the biggest names in the tech industry as they were at the turn of the millennium. At the same time, the open source area has developed massively. Twenty years ago its foundation stones were the Apache Software Foundation, founded in 1999, and the then slowly growing hardware support for Linux.
Open source projects are now the most important way for software developers to access open techniques and standards, and the OSGi Alliance needs open source projects as reference implementations. In addition, the “code first” approach largely characterizes open standards today. As examples, Bandera lists the Jakarta EE platform and the OASIS Open Projects.
New home
Therefore, after careful consideration, the OSGi board decided that the best step was to hand over all of the organization’s assets to the Eclipse Foundation so that further development can take place there. At the same time, the board of directors is dissolving the OSGi Alliance.
IBM Z mainframes are currently in little demand. Accordingly, IBM has to report another significant drop in sales in its Systems division: in the third quarter it was 16 percent less than a year earlier. Global Financing reports -17 percent, Global Technology Services (including Infrastructure Services and Tech Support) -4 percent, and Global Business Services (including Consulting, Application Management and Global Process Services) -5 percent. The Cloud & Cognitive Services division achieved seven percent more sales.
This division, which also has the largest margin, includes the open source company Red Hat, which IBM took over the previous year. Red Hat’s sales taken on their own are even 16 percent higher, although a considerable part of that revenue comes from intra-group transactions. Across all corporate divisions, IBM’s quarterly sales shrank by three percent to 17.56 billion US dollars.
Profits slightly increased
There were no particular geographical differences in sales development this time, according to the quarterly figures published on Monday evening. Gross profit increased by about one percent to 8.24 billion dollars, net profit by about two percent to 1.7 billion dollars. IBM’s cash flow is impressive: since the beginning of the year, the group has increased its cash reserves and immediately marketable securities from 8.9 billion to 15.6 billion dollars.
IBM CEO Arvind Krishna continues to rely on the cloud, more precisely the hybrid cloud, in which public and private cloud infrastructure are combined. “In the coming months we will continue to develop our strategy and take measures to simplify and improve our business model, invest in important areas and solidify a much more growth-oriented attitude,” the company boss promised in a conference call with financial analysts on Monday evening, pledging accelerated growth.
The corona crisis is accelerating the trend towards online censorship and surveillance – this is the central thesis of the US organization Freedom House in its new report on the status of “Internet freedom”. Governments around the world used the pandemic as a pretext to restrict and disregard rights, the authors criticize.
History shows that techniques and laws introduced in times of crisis often become permanent, said Adrian Shahbaz, co-author of the study published on Wednesday. “In retrospect we will see Covid-19, just like September 11, as a moment when governments adopted new, intrusive means to control their citizens.”
Freedom House focuses in its study on three main themes: surveillance, censorship, and the disintegration of the internet into national subnetworks under the heading of “cyber sovereignty”. Overall, the degree of internet freedom determined by Freedom House decreased for the tenth year in a row.
Mass surveillance with apps and cell phone data
In the chapter on surveillance, the authors criticize the fact that a high proportion of corona apps worldwide can be misused for surveillance. Most developers disregard data protection requirements, and the source code of most applications is not accessible.
The authors cite numerous examples, such as India’s “Aarogya Setu” app, installed around 50 million times, which sends Bluetooth and GPS data to government servers. Another app called “Jio” was used in India to collect symptom data from millions of citizens, which then ended up on servers without access protection. In Moscow, citizens have to send selfies to the authorities to prove that they are complying with quarantine. Singapore has obliged migrants to use contact tracing apps.
Other negative examples mentioned include apps from Bahrain and Turkey, while China has taken the most comprehensive and draconian measures. The authors cite the Estonian app “Hoia” as a positive example of a corona warning system with open source code and a decentralized structure. The German Corona-Warn-App is not mentioned.
However, the apps are only one of many means of monitoring: at least 30 governments, including those of Pakistan, Sri Lanka and South Korea, monitor their populations in cooperation with mobile phone providers, according to Freedom House.
Corona censorship with over 2,000 keywords
In at least 28 of the 65 countries examined, governments blocked or censored online content in order to suppress critical reporting on Covid-19, according to the chapter on censorship.
The censors proceeded particularly systematically in China: they defined more than 2,000 keywords to filter pandemic-related content from the web. Even harmless questions or observations were suppressed, and the media were given strict instructions on how to report on the virus. Bangladesh, Egypt, Venezuela, Belarus and other countries also censored or blocked corona content.
In 45 of the 65 countries, journalists or ordinary citizens were arrested or charged, according to Freedom House, for statements they had made about Covid-19. The pretext was often that they had spread false information that could endanger public order.
Internet continues to fragment
Freedom House also sees the trend towards a “splinternet” of national sub-networks as worrying. Named as a pioneer …
When you unlock your phone (Face ID) or allow Google or Apple to sort your photos, you are using facial recognition software. Many Windows PCs also let you use your face to log in. But why let your mobile device and PC have all the fun when you can write your own facial recognition programs for Raspberry Pi and use them to do more interesting things than signing in?
In this article, we’ll show you how to train your Raspberry Pi to recognize you and your family or friends. Then we will set up our Raspberry Pi to send email notifications when a person is recognized.
How does the Raspberry Pi Facial Recognition project work?
For Raspberry Pi facial recognition, we’ll utilize OpenCV, face_recognition, and imutils packages to train our Raspberry Pi based on a set of images that we collect and provide as our dataset. We’ll run train_model.py to analyze the images in our dataset and create a mapping between names and faces in the file, encodings.pickle.
After we finish training our Pi, we’ll run facial_req.py to detect and identify faces. We’ve also included additional code to trigger an email to yourself when a face is recognized.
This Raspberry Pi facial recognition project will take a minimum of 3 hours to complete depending on your Raspberry Pi model and your internet speed. The majority of this tutorial is based on running terminal commands. If you are not familiar with terminal commands on your Raspberry Pi, we highly recommend reviewing 25+ Linux Commands Raspberry Pi Users Need to Know first.
Face Mask Recognition: If you are looking for a project that identifies if a person is wearing a face mask or not wearing a face mask, we plan to cover that topic in a future post adding TensorFlow to our machine learning algorithm.
Disclaimer: This article is provided with the intent for personal use. We expect our users to fully disclose and notify when they collect, use, and/or share data. We expect our users to fully comply with all national, state, and municipal laws applicable.
What You’ll Need for Raspberry Pi Facial Recognition
Raspberry Pi 3 or 4. (Raspberry Pi Zero W is not recommended for this project.)
Power supply/microSD/Keyboard/Mouse/Monitor/HDMI Cable (for your Raspberry Pi)
USB Webcam
Optional: 7” Raspberry Pi touchscreen
Optional: Stand for Pi Touchscreen
Part 1: Install Dependencies for Raspberry Pi Facial Recognition
In this step, we will install OpenCV, face_recognition, imutils, and temporarily modify our swapfile to prepare our Raspberry Pi for machine learning and facial recognition.
OpenCV is an open source software library for processing real-time image and video with machine learning capabilities.
We will use the Python face_recognition package to compute the bounding box around each face, compute facial embeddings, and compare faces in the encoding dataset.
Imutils is a series of convenience functions to expedite OpenCV computing on the Raspberry Pi.
Plan for at least 2 hours to complete this section of the Raspberry Pi facial recognition tutorial. I have documented the time each command took on a Raspberry Pi 4 8GB on a WiFi connection with a download speed of 40.5 Mbps.
1. Plug your webcam into one of the USB ports of your Raspberry Pi. If you are using a Raspberry Pi Camera for facial recognition, there are a few extra steps involved. Please refer to the Using a Raspberry Pi Camera instead of a USB Webcam section near the bottom of this post.
2. Boot your Raspberry Pi. If you don’t already have a microSD card, see our article on how to set up a Raspberry Pi for the first time or how to do a headless Raspberry Pi install. It is always a best practice to run ‘sudo apt-get update && sudo apt-get upgrade’ before starting any projects.
3. Open a Terminal. You can do that by pressing CTRL + T.
4. Install OpenCV by running the following commands in your Terminal. This installation is based on a post from PiMyLifeUp. Copy and paste each command into your Pi’s terminal, press Enter, and allow it to finish before moving on to the next command. If prompted, “Do you want to continue? (y/n)” press y and then the Enter key.
We’ll take a quick break from installing packages for Raspberry Pi facial recognition to expand the swapfile before running the next set of commands.
To expand the swapfile, we will start by opening dphys-swapfile for editing:
sudo nano /etc/dphys-swapfile
Once the file is open, comment out the line CONF_SWAPSIZE=100 and add CONF_SWAPSIZE=2048.
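The relevant section of /etc/dphys-swapfile should then look like this:

#CONF_SWAPSIZE=100
CONF_SWAPSIZE=2048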
Press Ctrl-X, Y and then Enter to save your changes to dphys-swapfile.
This change is only temporary; we will undo it after we complete the installation of OpenCV.
For our changes to take effect, we now need to restart our swapfile by entering the following command:
sudo systemctl restart dphys-swapfile
Let’s resume package installations by entering the following commands individually into our Terminal. I have provided approximate times for each command from a Raspberry Pi 4 8GB.
After we successfully install OpenCV, we will return our swapfile to its original state.
In your terminal enter:
sudo nano /etc/dphys-swapfile
Once the file is open, uncomment CONF_SWAPSIZE=100 and delete or comment out CONF_SWAPSIZE=2048.
Press Ctrl-X, Y and then Enter to save your changes to dphys-swapfile.
Once again, we will restart our swapfile with the command:
sudo systemctl restart dphys-swapfile
5. Install face_recognition. This step took about 19 minutes.
pip install face-recognition
6. Install imutils
pip install imutils
If, when training your model (Part 2, step 15), you get errors saying “No module named imutils” or “No module named face-recognition,” install these again using pip2 instead of pip.
Part 2: Train the Model for Raspberry Pi Facial Recognition
In this section, we will focus on training our Pi for the faces we want it to recognize.
Let’s start by downloading the Python code for facial recognition.
1. Open a new terminal on your Pi by pressing Ctrl-T.
2. Copy the files containing the Python code we need.
3. Now let’s put together the dataset that we will use to train our Pi. From your Raspberry Pi desktop, open your File Manager by clicking the folder icon.
4. Navigate to the facial_recognition folder and then the dataset folder.
5. Right-Click within the dataset folder and select New Folder.
6. Enter your first name for the name of your newly created folder.
7. Click OK to finish creating your folder. This is where you’ll put photos of yourself to train the model (later).
8. Still in File Manager, navigate to the facial_recognition folder and open headshots.py in Geany.
9. On line 3 of headshots.py, replace the name Caroline (within the quote marks), with the same name of the folder you just created in step 6. Keep the quote marks around your name. Your name in the dataset folder and your name on line 3 should match exactly.
10. Press the Paper Airplane icon in Geany to run headshots.py.
A new window will open with a view of your webcam. (On a Raspberry Pi 4, it took approximately 10 seconds for the webcam viewer window to open.)
11. Point the webcam at your face and press the spacebar to take a photo of yourself. Each time you press the spacebar you are taking another photo. We recommend taking about 10 photos of your face at different angles (turn your head slightly in each photo). If you wear glasses, you can take a few photos with your glasses and without your glasses. Hats are not recommended for training photos. These photos will be used to train our model. Press Esc when you have finished taking photos of yourself.
12. Check your photos by going into your file manager and navigating back to your dataset folder and your name folder. Double-click on a single photo to view. Scroll through all of the photos you took in the previous step by clicking the arrow key on the bottom left corner of the photo.
13. Repeat steps 5 through 12 to add someone else in your family.
Now that we have put together our dataset, we are ready to train our model.
14. In a new terminal, navigate to facial_recognition by typing:
cd facial_recognition
It takes about 3-4 seconds for the Pi to analyze each photo in your dataset. For a dataset with 20 photos, it will take about 1.5 minutes for the Pi to analyze the photos and build the encodings.pickle file.
15. Run the command to train the model by entering:
python train_model.py
If you get an error message saying imutils or face-recognition modules are missing, reinstall them using pip2 instead of pip (see Part I, steps 5-6).
Code Notes (train_model.py)
Dataset: train_model.py will analyze photos within the dataset folder. Organize your photos into folders by person’s name. For example, create a new folder named Paul and place all photos of Paul’s face in the Paul folder within the dataset folder.
Encodings: train_model.py will create a file named encodings.pickle containing the criteria for identifying faces in the next step.
Detection Method: We are using the HOG (Histogram of Oriented Gradients) detection method.
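For orientation, the core of the training step looks roughly like this; a simplified sketch of what train_model.py does, not the full script:

import pickle
import cv2
import face_recognition
from imutils import paths

known_encodings, known_names = [], []

# Walk the dataset; each image's parent folder is the person's name
for image_path in paths.list_images("dataset"):
    name = image_path.split("/")[-2]
    image = cv2.imread(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Locate faces with the HOG detector and compute a 128-d embedding for each
    boxes = face_recognition.face_locations(rgb, model="hog")
    for encoding in face_recognition.face_encodings(rgb, boxes):
        known_encodings.append(encoding)
        known_names.append(name)

# Serialize names and encodings for the recognition script to load later
with open("encodings.pickle", "wb") as f:
    pickle.dump({"encodings": known_encodings, "names": known_names}, f)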
Now let’s test the model we just trained.
16. Run the command to test the model by typing:
python facial_req.py
In a few seconds, your webcam view should open up. Point the webcam at your face. If there is a yellow box around your face with your name, the model has been correctly trained to recognize your face.
Congratulations! You have trained your Raspberry Pi to recognize your face.
If you added someone in step 13, have them look at your webcam and test the model too. Press ‘q’ to stop the program.
Part 3: Setup Email Notifications for Raspberry Pi Facial Recognition
In this part, we will add email notifications to our facial recognition Python code. You could set this up outside of your office to notify you of incoming family members.
I have selected Mailgun for its simplicity; you are welcome to modify the code with the email service of your choice. Mailgun requires a valid credit card to create an account. For this project, I used the default sandbox domain in Mailgun.
1. Navigate to mailgun.com in your browser.
2. Create and/or Login to your Mailgun account.
3. Navigate to your sandbox domain and click API and then Python to reveal your API credentials.
4. Open send_test_email.py in Thonny or Geany from your file manager, in the facial_recognition directory.
5. On line 9, “https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages” replace “YOUR_DOMAIN_NAME” with your Mailgun domain.
6. On line 10, replace “YOUR_API_KEY” with your API key from Mailgun.
7. On line 12, add your email address from your Mailgun account.
8. Run the code send_test_email.py. If you receive a status code 200 and “Message: Queued” message, check your email.
When you complete this step successfully, you should receive the following email. This email may be delivered to your Spam folder.
If you wish to email a different email address other than the email address you used to set up your Mailgun account, you can enter it in Mailgun under Authorized Recipients. Don’t forget to verify your additional email address in your inbox.
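For reference, the Mailgun call at the heart of such a script follows the request pattern from Mailgun's Python documentation; a minimal sketch with the placeholders from steps 5 and 6:

import requests

def send_test_email():
    # Replace YOUR_DOMAIN_NAME and YOUR_API_KEY with your Mailgun credentials
    return requests.post(
        "https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages",
        auth=("api", "YOUR_API_KEY"),
        data={"from": "Pi <mailgun@YOUR_DOMAIN_NAME>",
              "to": ["you@example.com"],          # your Mailgun account address
              "subject": "Test from Raspberry Pi",
              "text": "Facial recognition notifications are working."})

print(send_test_email().status_code)  # 200 means the message was queued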
Adding Email Notifications to Facial Recognition
9. Open facial_req_email.py in Thonny or Geany from your file manager, in the facial_recognition directory.
10. On line 9, “https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages” replace “YOUR_DOMAIN_NAME” with your Mailgun domain.
11. On line 10, replace “YOUR_API_KEY” with your API key from Mailgun.
12. On line 12, add your email address from your Mailgun account.
13. Save your changes to facial_req_email.py.
14. From your Terminal, run the following command to invoke facial recognition with email notification:
python facial_req_email.py
As in the previous step, your webcam view should open up. Point the webcam at your face. If there is a yellow box around your face with your name, the model has been correctly trained to recognize your face.
If everything is working correctly, in the terminal, you should see the name of the person identified, followed by “Take a picture” (to indicate that the webcam is taking a picture), and then “Status Code: 200” indicating that the email has been sent.
Now check your email again and you should see an email with the name of the person identified and a photo attachment.
Code Notes (facial_req_email.py):
Emails are triggered when a new person is identified by our algorithm. The reasoning for this is simply not to trigger multiple emails when the same face is recognized repeatedly.
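Schematically, that logic is a comparison against the last name seen; a simplified sketch (send_email here is a hypothetical helper wrapping the Mailgun request from above):

import cv2

prev_name = None  # last person we emailed about

def handle_detection(name, frame):
    # Only notify when a different, known person appears in front of the camera
    global prev_name
    if name != prev_name and name != "Unknown":
        cv2.imwrite("image.jpg", frame)  # snapshot to attach to the email
        send_email(name)                 # hypothetical helper, see Part 3
    prev_name = name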
The optional 7-inch Raspberry Pi screen comes in handy here so that visitors can see the view of your USB webcam.
Using a Raspberry Pi Camera instead of a USB Webcam
This tutorial is written for a USB webcam. If you wish to use a Pi Camera instead, you will need to enable the Pi Camera and change a line in facial_req.py.
1. Enable Camera from your Raspberry Pi configuration. Press OK and reboot your Pi.
2. From your terminal install Pi Camera with the command:
pip install picamera[array]
3. In Part 2, run the file headshots_picam.py instead of headshots.py.
python headshots_picam.py
4. In the files facial_req.py and facial_req_email.py, comment out the line:
vs = VideoStream(src=0).start()
and uncomment
vs = VideoStream(usePiCamera=True).start()
5. Save the file and run.
Adding People Using Photos for Raspberry Pi Facial Recognition
At this point you may wish to add more family and friends for your Pi to recognize. If they are not readily available to run headshots.py to take their photos, you can upload photos to your Raspberry Pi instead. The key is to find clear photos of their faces (headshots work best), grouped into folders named after the corresponding person.
The npm team has released npm CLI 7.0.0. The package manager manages numerous JavaScript packages and was originally designed to work with Node.js. The seventh major version is to be rolled out with Node.js 15; in addition, it is already available for testing via the terminal: npm i -g npm@7.
Technically, npm combines a command line tool with an online database that can also be viewed via the web interface. The majority of the packages come from the open source area, for which the company npm Inc. issues free licenses. In addition, with npm Enterprise it offers a commercial version for non-publicly accessible packages and additional security checks.
New features for the npm CLI
npm 7 brings some new features to the npm command-line interface (CLI), including support for managing multiple packages from a single top-level root package (workspaces). In addition, the update brings the possibility of automatically installing peer dependencies with a new algorithm; before npm 7, developers had to manage and install peer dependencies themselves. The new algorithm ensures that a valid matching peer dependency is found at or above the position of that peer dependency in the node_modules tree.
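The top-level package management mentioned above is npm 7's workspaces feature; a minimal root package.json might look like this (illustrative names):

{
  "name": "my-root-package",
  "version": "1.0.0",
  "workspaces": [
    "packages/*"
  ]
}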
The new functions also include a package-lock format that enables deterministically reproducible builds; the format should include everything npm needs to completely build the package tree. Before version 7, the package manager ignored yarn.lock files. With the update, the npm client now uses yarn.lock as a source for package metadata and resolution instructions.
From old to new
The npm team states that the internals of the package manager …
With Eclipse MOSAIC, Fraunhofer FOKUS presents a simulation framework based on VSimRTI (Vehicle-2-X Simulation Runtime Infrastructure). TU Berlin and the Daimler Center for Automotive IT Innovations (DCAITI) have worked on VSimRTI over the past 12 years, and it has been used with around 600 partners to test mobility services and traffic scenarios.
Basic version is open source
The project operators are now making the source code of parts of the framework available on GitHub. The release comes in the context of the imminent EclipseCon from October 19 to 22, 2020, held online this time.
The GitHub repository includes the traffic simulator Eclipse SUMO, simulators for events and the environment, a series of simulators for communication, and an output generator for evaluation and visualization. The project relies on Maven and has a modular structure. The modules are divided into three categories: an interface for the runtime infrastructure (RTI) and its implementation, the core libraries for mathematical functions, spatial data, routes, communication models and data exchange, and the so-called MOSAIC Ambassadors, which ensure the integration of the various simulators within the framework.
Mobility scenarios: modeling hybrid roads
Eclipse MOSAIC is intended to integrate various aspects such as traffic density, battery charging of electric cars, and communication between different road users via a central cloud, and to make them accessible as a higher-level system. Users can choose which details they would like to examine more closely: according to the provider, the options range from rough mobility scenarios to city traffic to specific driving maneuvers of individual road users. The integrated simulators can be individually exchanged, yet each takes the information of the others into account, and all simulators run synchronously. The framework offers tools for evaluating and visualizing the results; according to the provider, this functionality is also included in the open source package.
Eclipse MOSAIC completed its practical test as part of an EU project called INFRAMIX, in which the Austrian motorway infrastructure operator ASFINAG, Siemens, BMW and operators of Spanish and German motorways were involved. The simulation environment is to be used for digitization and communication in the transition area between conventional and autonomous driving and to enable safe mixed traffic through targeted modeling and planning. Networked, automated mobility should reduce the costly expansion of variable message signs, the Fraunhofer developers announced in their blog at the end of the INFRAMIX test phase in the summer.
Simulation framework on GitHub
The open source version of the framework …
CrossOver has been released in version 20, for the first time also for Chrome OS. In addition, the API replica, which runs Microsoft applications on other operating systems, now also supports macOS Big Sur, and there are improvements for Linux users.
With CrossOver, Windows applications can now be used on Chromebooks, both online and offline. So far, this was only possible in a beta version launched two years ago. With the update for the macOS variant, Big Sur is covered first; Apple’s change of processors, however, means a lot of extra work: “We’re working hard to get CrossOver up and running for the upcoming Apple Silicon Macs.” On the previous Intel Macs, Windows could even be installed relatively easily. That will probably no longer work in the future, making services like CrossOver even more useful. Compatibility for Mac users has also been improved, for example through Steam support. “We hope our customers can run their beloved 32-bit games on Macs again,” says a blog post from CrossOver, because they want to ensure that nobody is excluded from playing the best PC games.
Download and general improvements
For Linux users, the integration with numerous desktop managers has been improved, and the update should be easier to install. However, you still have to download the new version manually; Mac users get it automatically the next time they open the application, provided the settings allow it. Chrome OS users get trial access in the download area of CodeWeavers, the developer behind CrossOver.
Overall, the developers worked on the code of the open source project Wine, which emulates the APIs, and thus …
Twenty years ago, Sun released OpenOffice for the first time and laid the foundation for the success of the open source package. The Apache Foundation, the current rights holder, congratulates itself and celebrates that the software is still free in both senses of the word. Raise your cups, long live freedom!
An open letter in the family feud
Congratulations from the Document Foundation, known for LibreOffice, which is also free, are likely to be less welcome. Although OpenOffice used to be a great package that changed the world, the letter argues, users end up losing out if they don’t know about a newer project or if one brand is better known than the other.
The authors do not explicitly state this, but the implied reproach is that OpenOffice only lives off the fame of its glory days. In a way, LibreOffice is actually the younger alternative with a different name; the project only saw the light of day in 2010. However, the story is a bit more complicated than the average fork in the open source world.
Originally launched in 1985 as the proprietary StarWriter, the software changed hands for the first time in 1999 with the takeover of its developer by Sun Microsystems, whose intention was internal use of what was by then called StarOffice. The release as OpenOffice.org, or OOo for short, followed a year later; StarOffice lived on, enriched with proprietary elements, on its basis.
A free office that was well received
In the following decade, OOo won over an extensive community of developers and users. But after Sun changed hands in 2010 and Oracle no longer seemed interested in the package, the majority of the community split off and founded the Document Foundation.
However, Oracle kept the rights to the name, and so the alternative software suite appeared for the first time in January 2011 under the name LibreOffice. Right from the start, the project explicitly saw itself as the legitimate successor to OOo. Oracle itself handed OpenOffice over to the Apache Foundation in April 2011, which has maintained it since then as Apache OpenOffice.
How well that maintenance is going, however, is a matter of debate. The Document Foundation points out, for example, that there has been no major release since 2014. That refers to version 4.1.0; the latest release, 4.1.7 from 2019, is in fact a pure maintenance release. LibreOffice, meanwhile, has reached its seventh major release and could claim the same birthday thanks to its code base and origin.
Don’t look for OpenOffice, we are better
According to its own figures, the LibreOffice project can boast a whole 15,000 commits, while OpenOffice comes to just 595. Nevertheless, the Document Foundation does not seem to be satisfied, because it …
The British company Canonical, best known for the Linux distribution Ubuntu, has introduced a high-availability function for its slim Kubernetes variant MicroK8s. It needs at least three nodes and is supposed to provide fail-safety: if a node malfunctions, the cluster heals itself.
MicroK8s is primarily aimed at development workstations and use at the edge or in the Internet of Things. The short form K8s, in which the number replaces the eight letters “ubernete”, is also used by other Kubernetes providers. The lean distribution is available as an open source project on GitHub; it comes as a single package and aims at low resource requirements and simple administration.
Three nodes are needed
The newly introduced high availability allows the lean clusters to be operated in a fail-safe manner: specifically, the cluster should continue to run smoothly if a component fails. To do this, the MicroK8s cluster must have at least three nodes. In this case the Dqlite datastore, which manages the cluster status, automatically becomes highly available.
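Schematically, such a three-node cluster is built with the existing clustering commands; an illustrative sequence, where the join address and token come from the add-node output:

sudo microk8s add-node                      # on the first node; prints a join command
sudo microk8s join 10.0.0.2:25000/<token>   # on the second and third nodes
microk8s status                             # with three nodes, reports high-availability: yes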
Dqlite is a special variant of SQLite; the name stands for Distributed SQLite. The datastore is designed for distributed applications and offers automatic failover by default. It relies on the Raft consensus algorithm, which was developed as an alternative to the Paxos protocols for managing fault tolerance. In clusters with more than three nodes, the system uses the additional nodes as a reserve and activates them in the event of a failure.
Since the open source application Krita reoriented itself from image processing software to creative painting, it has gradually caught up with established players such as Painter and Paint Tool SAI. In the open source world, Krita has long since found its own place alongside the image editor Gimp and the vector graphics program Inkscape.
Since Krita processes SVG vectors internally, many vector shapes can be copied back and forth between Inkscape and Krita without loss. In addition to text and geometric shapes, Krita also offers a freehand tool that can evaluate pressure as well as the inclination of graphics tablet pens.
For animations, Krita provides its own animation palette, a timeline, and colorful onion skins, the latter with up to 9 instances in both directions. What is missing is the calculation of intermediate frames (tweening); this function is planned, but without a date.
After the extremely extensive update to Linux 5.8, Linus Torvalds had promised a “normal” update for the first release candidate (rc1) of 5.9. It turned out differently: when the developers were still submitting a number of changes for rc7, Torvalds quickly extended the development phase for 5.9 by another week and another release candidate.
A large number of the commits in 5.9 are new and improved drivers. With the final elimination of a license gap and the now completed FSGSBASE support, there are also a few eye-catchers. Under the hood, the new release features improvements in real-time scheduling on asymmetric CPU configurations, memory management and thread prioritization.
License gap closed
Loadable kernel modules have to indicate clearly whether they are closed source or open source code under the GNU General Public License (GPL). Since GPL modules have access to “GPL-only symbols” in the kernel that are denied to proprietary modules, cheating has repeatedly occurred in the past. In particular, resourceful developers took advantage of a design gap in the kernel: GPL modules could previously depend on proprietary modules. Instead of placing the proprietary module under the GPL, the developers in question simply used a GPL-licensed, open source shim module as glue and translator between the kernel and the proprietary module.
Linux 5.9 closes this gap. In the recent past, for example, Jonathan Lemon from Facebook tried to tinker a “GPL adapter module” between the proprietary Nvidia driver and the NetGPU core for performance reasons, an incident that may have contributed to the kernel developers’ decision.
FSGSBASE finally ready for use
Linux 5.9 brings a seemingly never-ending story to an end: the new version supports the Intel instructions of the FSGSBASE family. FSGSBASE comprises a few CPU instructions for reading and setting the segment registers FS and GS directly. What sounds like a small, insignificant detail reaches deep into the system and opens new horizons for Linux on x86_64 and for secure application scenarios.
Background: threads often use the FS register to address their thread-local memory. Each thread has its own FS value and can thus transparently address its own memory. The thread doesn’t have to worry about where the memory area is actually located: it applies its offsets to the (indirect) address in the FS register. The situation is similar with the GS register, which the Linux kernel uses to manage per-CPU data.
Changing segment registers is reserved for privileged code in kernel space. If user space wants to change the values of FS or GS, this requires detours (syscalls and the associated context switches), which depress performance. What is negligible when the FS register is set once for a thread can become a drag in modern application scenarios. Intel therefore introduced the FSGSBASE instructions in 2012 with the third generation of its Core processors (code name “Ivy Bridge”). They enable FS and GS to be changed directly from user space; the syscall brake is no longer required. However, the kernel must explicitly set a special bit to activate the instructions.
The kernel relies on the FS and GS registers being correctly set when entering kernel space, and a change to GS in particular could have fatal consequences: this way, wrong data could ultimately be slipped onto CPUs and attack scenarios could be constructed. Making the kernel fit for FSGSBASE was therefore a long process: between 2012 and 2019, Intel submitted patches in seven versions that did not find their way into practice.
SGX implemented properly
The FSGSBASE support available in Linux 5.9 also benefits SGX projects such as the prominent Intel-supported Graphene project.
Intel’s Software Guard Extension (SGX) enables the creation of enclaves. These enclaves are memory areas that are sealed off by the CPU using transparent encryption and integrity protection. Even privileged processes can be prevented by the CPU from accessing these enclaves. SGX thus allows code to be executed safely and uninfluenced even on an already compromised system.
SGX projects depend on a high-performance way of setting FS and GS from user space. Until now, Graphene loaded its own small kernel module as an “emergency solution” to activate FSGSBASE. Such special paths are fraught with security risks, and fortunately they are no longer necessary: thanks to the official FSGSBASE support provided by the kernel team, SGX systems can be implemented securely, efficiently and, above all, in a controlled manner.
Flexible IP/port combinations
The Berkeley Packet Filter (BPF) introduces a new program type in Linux 5.9 called BPF_PROG_TYPE_SK_LOOKUP. Such programs are executed when the transport layer performs a lookup for a LISTEN socket, which is the case with a new connection request via TCP or when a UDP packet arrives for an unconnected socket. The BPF program can then be used to flexibly control who receives which packet and when.
BPF thus removes the restrictions of the old bind() API and allows more flexible IP and port combinations. Possible application scenarios include sockets that listen on an IP address instead of a single port, on a port range, or even on all ports. Another use case is services on different IP addresses that share a port; due to the port binding of bind(), such constellations are otherwise not permitted.
ZSTD compression
New is the possibility to use ZSTD (Zstandard) compression for the kernel and the initrd (initramfs). ZSTD is characterized by high compression rates and very fast decompression; the latter can significantly accelerate the boot process.
The kernel developers cite figures from Facebook as a reference: when the company switched from an initrd compressed with xz to one compressed with ZSTD, the decompression time when booting was reduced from twelve to three seconds. Switching the compressed kernel image from xz to ZSTD saved the company another two seconds of boot time, according to the kernel team.
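In the kernel configuration, the new compression choices appear as ZSTD options for the kernel image and the initramfs; schematically, in a 5.9 .config:

CONFIG_KERNEL_ZSTD=y
CONFIG_RD_ZSTD=y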
Asymmetry and real-time scheduling
Thanks to a patch by developer Dietmar Eggemann, the deadline scheduler for real-time tasks in Linux 5.9 is now suitable for asymmetric CPU configurations.
Unlike the POSIX real-time scheduler, which assigns CPU time to tasks on the basis of priority levels, the deadline scheduler does not work with priorities. Instead, it evaluates a task based on its required runtime, its activation period and its “deadline”: the time span within which the task should be completed. Based on these values, the kernel can determine which task needs the CPU and when.
Up to now this only worked without problems on symmetric CPU configurations. Asymmetric configurations, which combine differently powerful CPUs in one system, caused the deadline scheduler to stumble in stressful situations. This has changed with the new version: the scheduler now knows how to handle such configurations.
Capacity-based scheduling
Eggemann’s patch introduces a capacity-based calculation model: instead of assuming a homogeneous CPU capacity when calculating deadlines, the actually available, possibly differing CPU capacities are now included in the calculation. This means deadlines can be correctly determined on an asymmetric system and tasks can be distributed precisely.
However, the new solution requires that at least one CPU is not entrusted with the execution of deadline tasks. Otherwise, the task distribution can still get out of hand, as there may not be enough CPU capacity available for the actual calculation of the distribution. The developers want to address this problem of high-performance, high-load systems at a later date.
For the future, the developers are also considering how to avoid overloading powerful CPUs with small tasks, which can lead to a kind of “fragmentation” of CPU capacity: a larger task might then not find a CPU that can provide the necessary computing capacity within its deadline. Solutions for this are being considered, but not yet implemented.
Under the name Glow, new software for Markdown documents is available that displays them as a TUI application in the terminal emulator. The developers place particular emphasis on an aesthetically pleasing reading experience, but also on the cloud connection of the open source tool.
After starting, Glow searches the local directory, including the folders it contains, and displays a list of the Markdown documents it finds. If the user calls it up inside a Git repository, the tool recognizes this and browses it completely. The user can then navigate through the content with the usual less commands. With the option -w, a maximum line length can also be specified at startup, for example 60 characters, at which Glow automatically wraps the content.
Glow tries to detect the background color of the terminal emulator automatically and selects a suitable dark or light style for the output. This can also be specified with the option -s. Users can create and provide other themes as JSON files; instructions can be found on GitHub.
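Typical invocations combine these options; an illustrative session:

glow                      # list Markdown documents found below the current directory
glow -w 60 README.md      # render a document, wrapping at 60 characters
glow -s light README.md   # force the light style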
Secured in the cloud
Documents can also be uploaded to the Charm Cloud from the same developer by pressing s. If a registered user starts Glow, the software always shows the list of files saved there; however, an account is required. According to the provider, this is not about data evaluation: the uploaded documents are cryptographically secured so that only the local Glow client can decrypt them, using an SSH public/private key procedure.
Shortly after version 1 was released, …