facebook-publishes-many-to-many-multilingual-translators-as-open-source

Facebook publishes many-to-many multilingual translators as open source

Facebook has released a translation model it calls "Many-to-Many multilingual machine translation" (MMT) as open source. While most translation systems use English data as a stopover, M2M-100 skips this step and translates directly, for example from Chinese into French.

According to Facebook, most training data is available in English, which is why previous models would, for example, translate Chinese into English and only from there into the target language, which introduces an additional source of error. The new model handles 100 languages in all directions. This matters for the social network in particular, since news feeds are automatically translated into the language the user has set, and two-thirds of account holders are not English speakers.

Billions of sentences for 100 languages

On BLEU, the standard rating scale for machine-translated texts, M2M-100 scored as well as dedicated bilingual models and, according to a Facebook blog post, even better than the English-centered models. 7.5 billion sentences across the 100 languages were used for training, and the model has 15 billion parameters. The mass of data needed to enable the direct translation paths was one of the major difficulties, since the required training data grows quadratically: "If we need ten million sentence pairs in each direction, we need one billion sentence pairs for 10 languages and 100 billion for 100 languages." The data comes from existing collections such as ccAligned, ccMatrix and Laser, from which Facebook built Laser 2.0 as part of this work.

The code is available on GitHub.
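Those who want to try direct translation themselves can do so, for example, through the Hugging Face transformers port of M2M-100; a minimal sketch (the checkpoint name and API below belong to the transformers project, not to Facebook's original fairseq release):

# Direct Chinese -> French translation with M2M-100, without an English pivot
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "zh"  # source language: Chinese
inputs = tokenizer("生活就像一盒巧克力。", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),  # force French output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))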

eclipse-foundation-presents-study-on-the-status-of-the-open-source-iot

Eclipse Foundation presents study on the status of the open source IoT

The Eclipse Foundation has published the results of its 2020 IoT (Internet of Things) developer survey to coincide with EclipseCon, which is currently taking place. The survey was run by the Eclipse IoT Working Group, under whose umbrella 45 open source projects for the Internet of Things are currently maintained, including a number of very important ones. The results of the sixth survey provide insights into the structure of the IoT industry, the challenges facing developers, and the opportunities for companies in the IoT open source ecosystem.

For the first time, the participants were also asked about the use of edge computing, which should influence the orientation of the Eclipse Edge Native Working Group, which was founded in December last year.

The essence of the study

The survey was conducted online between May and July 2020. More than 1,650 people from different industries and organizations took part. The most important findings for the open source organization include:

  • Smart agriculture established itself as a priority area in 2020.
  • Security (39 percent), connectivity (26 percent), and data acquisition and analysis (26 percent) remain the three most important areas of interest for IoT developers.
  • Artificial intelligence (30 percent) was the most frequently chosen workload in the area of edge computing.
  • Data protection is an increasingly important concern for 23 percent of respondents, as awareness of the issue apparently grows among organizations and consumers alike.
  • Distributed ledger technology has gained momentum as a way to secure IoT scenarios.
  • Java is the most frequently used programming language in edge computing (%) and cloud computing (24 percent).

Touching on open source (which is the core of all Eclipse projects), Mike Milinkovich, head of the open source organization, adds in his blog that 65 percent of respondents experiment with, use, or contribute to open source projects, and that open source dominates in the field of databases.

git-2.29-reworks-hash-function-with-experimental-sha-256-support

Git 2.29 reworks hash function with experimental SHA-256 support

The new Git version 2.29 lets users of the open source tool for distributed version management test another object format, one based on the Secure Hash Algorithm (SHA) 256, which is considered more resistant to attacks than the common SHA-1-based format; the newer SHA-256 object format receives experimental support in the current release. Also new are some shortlog tricks: git shortlog can now group commits not only by author but can also list co-authors.

Exclude references with negative refspecs

Git 2.29 also adds negative refspecs. As a reminder, refspecs are the mappings Git creates when cloning repositories: the version management system uses them to track which content belongs where and can, for example, correctly reproduce the hierarchy of branches elsewhere. Until now, developers could only use these reference markers to specify which selection of references they wanted. With negative refspecs, references can be selectively excluded for the first time.

If a refspec begins with the character ^, Git now excludes the noted reference. Developers can trigger this functionality with the following command: $ git fetch origin 'refs/heads/*:refs/heads/*' '^refs/heads/ref-to-exclude'. The result was achievable before, but the way there is now more elegantly accessible. Negative refspecs can contain wildcards, but according to the blog entry it is not possible to give them a specific destination.

To exclude a wildcard refspec, users can insert something like ^refs/heads/foo/*. Negative refspecs have another special feature: unlike positive refspecs, they cannot refer to an individual object by its object ID. Negative refspecs can also be used in configuration values.
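For example, a negative refspec can be recorded in a remote's fetch configuration so that every future fetch skips a branch; a minimal sketch with a hypothetical branch name:

$ git config --add remote.origin.fetch '^refs/heads/noisy-branch'

A subsequent git fetch origin then excludes refs/heads/noisy-branch even though the default positive refspec would otherwise match it.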

New hash functions with SHA-256

According to the blog announcement, the Git team plans to make SHA-256 the default in the future while continuing to support SHA-1. For the upcoming switch to SHA-256, the Git team has included a transition plan with the new release. In the future, it should also be possible to work with repositories in both formats, for which the software apparently calculates hashes in both formats for every object that users create in Git. So that users can edit repositories across formats if they contain objects with different formats, the version management system should then use a translation table from Git. References to older commits in the SHA-1 object format should remain valid, which Git wants to make possible by automatically converting the format using the translation table.
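Anyone who wants to experiment can already create a repository in the new object format with the current release, for example:

$ git init --object-format=sha256 sha256-test

Keep in mind that the SHA-256 support is experimental: such repositories cannot yet interoperate with existing SHA-1 repositories or with most hosting services.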

Since the previous version 2.28, the calculation

osgi-alliance-hands-over-projects-to-the-eclipse-foundation-and-dissolves

OSGi Alliance hands over projects to the Eclipse Foundation and dissolves

The OSGi Alliance, founded over twenty years ago, is being dissolved, and its projects are being handed over to the Eclipse Foundation. In the OSGi blog, the president of the organization announced the step, which coincides with EclipseCon 2020. Like most events this year, the conference is taking place online.

The two organizations have been close for a long time, and both have deep roots in the Java environment: the OSGi specification describes a module system and service platform for Java that implements a dynamic component model. OSGi originally stood for "Open Services Gateway initiative". The Eclipse Foundation hosts numerous Java projects, including Jakarta EE, the enterprise successor to Java EE, which Oracle handed over to the Eclipse Foundation three years ago.

An early symbiosis of the two projects, to be precise since Eclipse 3.0 in 2004, is the implementation of Eclipse plug-ins as OSGi bundles. That made the development environment one of the first enterprise applications of the OSGi specification, which was originally aimed at the embedded environment; this use, in turn, significantly shaped the specification's further development.

Founding times and members

In the past few years, the OSGi Alliance hosted its community event as part of EclipseCon, where it celebrated its twentieth anniversary in 2019. That makes it five years older than the Eclipse Foundation, which was established in 2004. OSGi's founding members included IBM, Oracle, Sun Microsystems, Ericsson and Philips; Deutsche Telekom, Bosch, Software AG, NTT and Adobe, among others, joined later.

The two organizations also have numerous joint members, and the Eclipse Equinox project has been the reference implementation of the OSGi framework for several years. In March, the OSGi Alliance proposed the eighth version of the OSGi specification, which has not yet been implemented; OSGi Release 7 is still current.

Twenty years of change

Dan Bandera, president of the Alliance, describes in the blog post the changes of the last twenty years and the shifting conditions that ultimately led to the decision to hand the project over to Eclipse. He explains that Oracle has since taken over Sun Microsystems, and that IBM and Oracle are no longer the biggest names in the tech industry, as they were at the turn of the millennium. At the same time, the open source world has developed massively. Twenty years ago, its foundation stones were the Apache Software Foundation, founded in 1999, and the then slowly growing hardware support for Linux.

Today, open source projects are the most important way for software developers to access open technologies and standards, and the OSGi Alliance needs open source projects as reference implementations. In addition, the "code first" approach now largely characterizes open standards. As examples, Bandera lists the Jakarta EE platform and the OASIS Open Projects.

New home

After careful consideration, the OSGi board therefore decided that the best step was to hand over all of the organization's assets to the Eclipse Foundation so that further development can take place there. At the same time, the board of directors is dissolving the OSGi Alliance.

ibm-quarterly-figures:-only-the-cloud-business-is-growing

IBM quarterly figures: only the cloud business is growing

IBM Z mainframes are currently in little demand. Accordingly, IBM has to report another significant drop in revenue for its Systems division: in the third quarter it took in 16 percent less than a year earlier. Global Financing reports -17 percent, Global Technology Services (including Infrastructure Services and Tech Support) -4 percent, and Global Business Services (including Consulting, Application Management and Global Process Services) -5 percent. Only the Cloud & Cognitive Services division achieved more revenue, up seven percent.


This division, which also has the largest margin, includes the open source company Red Hat, which IBM took over last year. Taken on its own, Red Hat's revenue is even around 16 percent higher, although a considerable part of it comes from intra-group transactions. Across all corporate divisions, IBM's quarterly revenue shrank by about three percent to 17.56 billion US dollars.

Profits slightly increased

There were no particular geographical differences in revenue development this time, according to the quarterly figures published on Monday evening. Gross profit increased by about one percent to 8.24 billion dollars, net profit by about two percent to 1.7 billion dollars. IBM's cash flow is impressive: since the beginning of the year, the group has increased its cash reserves and immediately marketable securities from 8.9 billion to 15.6 billion dollars.

IBM CEO Arvind Krishna continues to rely on the cloud, more precisely the hybrid cloud, in which public and private cloud infrastructure are combined. "In the coming months we will continue to develop our strategy and take measures to simplify and improve our business model, invest in important areas and solidify a much more growth-oriented attitude," the company boss said in a conference call with financial analysts on Monday evening, promising accelerated growth.

study:-governments-use-corona-crisis-as-a-pretext-for-surveillance-and-censorship

Study: Governments use corona crisis as a pretext for surveillance and censorship

The corona crisis is accelerating the trend towards online censorship and surveillance; this is the central thesis of the US organization Freedom House in its new report on the status of "internet freedom". Governments around the world have used the pandemic as a pretext to restrict and disregard rights, the authors criticize.


History shows that technologies and laws introduced in times of crisis often become permanent, said Adrian Shahbaz, co-author of the study published on Wednesday. "In retrospect, we will see Covid-19, much like September 11, as a moment when governments adopted new, intrusive means to control their citizens."

Freedom House focuses in its study on three main themes: surveillance, censorship, and the disintegration of the internet into national subnetworks under the heading of "cyber sovereignty". Overall, the degree of internet freedom determined by Freedom House decreased for the tenth year in a row.

Mass surveillance with apps and cell phone data

In the chapter on surveillance, the authors criticize the fact that a high proportion of corona apps worldwide can be misused for surveillance: most developers disregarded data protection requirements, and the source code of most applications is not accessible.

The authors cite numerous examples, such as India's "Aarogya Setu" app, installed around 50 million times, which sends Bluetooth and GPS data to government servers. Another app called "Jio" was used in India to collect symptom data from millions of citizens, which then ended up on servers without access protection. In Moscow, citizens had to send selfies to the authorities to prove that they were complying with quarantine, and Singapore has obliged migrant workers to use contact tracing apps.

Other negative examples mentioned include apps from Bahrain and Turkey. However, China has taken the most comprehensive and draconian measures. The authors refer to the Estonian app “Hoia” as a positive example of a corona warning system with open source code and a decentralized structure. The German Corona warning app is not mentioned.

However, the apps are only one of many means of surveillance: according to Freedom House, at least 30 governments, including those of Pakistan, Sri Lanka and South Korea, monitor their populations in cooperation with mobile network providers.

Corona censorship with over 2,000 keywords

In at least 28 of the 65 countries examined, governments blocked or censored online content in order to suppress critical reporting on Covid-19, according to the chapter on censorship.

The censors proceeded particularly systematically in China: they defined more than 2,000 keywords to filter pandemic-related content from the web, and even harmless questions or observations were suppressed. The media were given strict instructions on how to report on the virus. Bangladesh, Egypt, Venezuela, Belarus and other countries also censored or blocked corona content.

In 45 of the 65 countries, journalists or ordinary citizens were arrested or charged for statements they had made about Covid-19, according to Freedom House. The pretext was often that they had spread false information that could endanger public order.

Internet continues to fragment

Freedom House also sees the trend towards a "splinternet" of national sub-networks as worrying. Named as a pioneer

how-to-train-your-raspberry-pi-for-facial-recognition

How to Train your Raspberry Pi for Facial Recognition

When you unlock your phone (Face ID) or allow Google or Apple to sort your photos, you are using facial recognition software. Many Windows PCs also let you use your face to log in. But why let your mobile device and PC have all the fun when you can write your own facial recognition programs for your Raspberry Pi and use them to do more interesting things than signing in?

In this article, we’ll show you how to train your Raspberry Pi to recognize you and your family and friends. Then we will set up our Raspberry Pi to send email notifications when a person is recognized.

How does the Raspberry Pi Facial Recognition project work?

For Raspberry Pi facial recognition, we’ll use the OpenCV, face_recognition, and imutils packages to train our Raspberry Pi based on a set of images that we collect and provide as our dataset. We’ll run train_model.py to analyze the images in our dataset and create a mapping between names and faces in the file encodings.pickle.

After we finish training our Pi, we’ll run facial_req.py to detect and identify faces. We’ve also included additional code to trigger an email to yourself when a face is recognized.

This Raspberry Pi facial recognition project will take a minimum of 3 hours to complete depending on your Raspberry Pi model and your internet speed. The majority of this tutorial is based on running terminal commands. If you are not familiar with terminal commands on your Raspberry Pi, we highly recommend reviewing 25+ Linux Commands Raspberry Pi Users Need to Know first. 

Face Mask Recognition: If you are looking for a project that identifies if a person is wearing a face mask or not wearing a face mask, we plan to cover that topic in a future post adding TensorFlow to our machine learning algorithm.

Disclaimer: This article is provided with the intent for personal use. We expect our users to fully disclose and notify when they collect, use, and/or share data. We expect our users to fully comply with all national, state, and municipal laws applicable.

What You’ll Need for Raspberry Pi Facial Recognition

  • Raspberry Pi 3 or 4. (Raspberry Pi Zero W is not recommended for this project.)
  • Power supply/microSD/Keyboard/Mouse/Monitor/HDMI Cable (for your Raspberry Pi)
  • USB Webcam
  • Optional: 7” Raspberry Pi touchscreen
  • Optional: Stand for Pi Touchscreen

Part 1: Install Dependencies for Raspberry Pi Facial Recognition

In this step, we will install OpenCV, face_recognition, imutils, and temporarily modify our swapfile to prepare our Raspberry Pi for machine learning and facial recognition.

  • OpenCV is an open source software library for real-time image and video processing with machine learning capabilities.
  • We will use the Python face_recognition package to compute the bounding box around each face, compute facial embeddings, and compare faces against the encodings dataset.
  • Imutils is a series of convenience functions to expedite OpenCV computing on the Raspberry Pi.

Plan for at least 2 hours to complete this section of the Raspberry Pi facial recognition tutorial. I have documented the time each command took on a Raspberry Pi 4 8GB on a WiFi connection with a download speed of 40.5 Mbps.

1. Plug your webcam into one of the USB ports of your Raspberry Pi. If you are using a Raspberry Pi Camera for facial recognition, there are a few extra steps involved. Please refer to the Using a Raspberry Pi Camera instead of a USB Webcam section near the bottom of this post.


2. Boot your Raspberry Pi. If you don’t already have a microSD card, see our article on how to set up a Raspberry Pi for the first time or how to do a headless Raspberry Pi install. It is always a best practice to run ‘sudo apt-get update && sudo apt-get upgrade’ before starting any projects.

3. Open a Terminal. You can do that by pressing CTRL + ALT + T.

4. Install OpenCV by running the following commands in your Terminal. This installation is based on a post from PiMyLifeUp. Copy and paste each command into your Pi’s terminal, press Enter, and allow it to finish before moving on to the next command. If prompted, “Do you want to continue? (y/n)” press y and then the Enter key.

Terminal commands (with approximate run times):

1. sudo apt install cmake build-essential pkg-config git (a few seconds)
2. sudo apt install libjpeg-dev libtiff-dev libjasper-dev libpng-dev libwebp-dev libopenexr-dev (a few seconds)
3. sudo apt install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libdc1394-22-dev libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev (4 minutes)
4. sudo apt install libgtk-3-dev libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5 (4.5 minutes)
5. sudo apt install libatlas-base-dev liblapacke-dev gfortran (1 minute)
6. sudo apt install libhdf5-dev libhdf5-103 (1 minute)
7. sudo apt install python3-dev python3-pip python3-numpy (a few seconds)

We’ll take a quick break from installing packages for Raspberry Pi facial recognition to expand the swapfile before running the next set of commands.

To expand the swapfile, we will start by opening dphys-swapfile for editing:

sudo nano /etc/dphys-swapfile

Once the file is open, comment out the line CONF_SWAPSIZE=100 and add CONF_SWAPSIZE=2048.
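After the edit, the relevant lines of /etc/dphys-swapfile should look something like this:

# CONF_SWAPSIZE=100
CONF_SWAPSIZE=2048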

Press Ctrl-X, Y and then Enter to save your changes to dphys-swapfile.

This change is only temporary; we will undo it after we complete the installation of OpenCV.


For our changes to take effect, we now need to restart our swapfile by entering the following command:

sudo systemctl restart dphys-swapfile

Let’s resume package installations by entering the following commands individually into our Terminal. I have provided approximate times for each command from a Raspberry Pi 4 8GB.

Terminal commands (with approximate run times on a Raspberry Pi 4 8GB):

git clone https://github.com/opencv/opencv.git (7 minutes)
git clone https://github.com/opencv/opencv_contrib.git (2 minutes)
mkdir ~/opencv/build (less than a second)
cd ~/opencv/build (less than a second)

The cmake configuration step takes about 5 minutes; the trailing backslashes continue one single command across several lines:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
    -D ENABLE_NEON=ON \
    -D ENABLE_VFPV3=ON \
    -D BUILD_TESTS=OFF \
    -D INSTALL_PYTHON_EXAMPLES=OFF \
    -D OPENCV_ENABLE_NONFREE=ON \
    -D CMAKE_SHARED_LINKER_FLAGS=-latomic \
    -D BUILD_EXAMPLES=OFF ..

make -j$(nproc) (one hour and 9 minutes)
sudo make install (a few seconds)
sudo ldconfig (a few seconds)

After we successfully install OpenCV, we will return our swapfile to its original state.

In your terminal enter:

sudo nano /etc/dphys-swapfile

Once the file is open, uncomment CONF_SWAPSIZE=100 and delete or comment out CONF_SWAPSIZE=2048.

Press Ctrl-X, Y and then Enter to save your changes to dphys-swapfile.

Once again, we will restart our swapfile with the command:

sudo systemctl restart dphys-swapfile

5. Install face_recognition. This step took about 19 minutes.

pip install face-recognition

6. Install imutils

pip install imutils

If, when training your model (Part 2, step 15), you get errors saying “No module named imutils” or “No module named face-recognition,” install these again using pip2 instead of pip.

Part 2: Train the Model for Raspberry Pi Facial Recognition

In this section, we will focus on training our Pi for the faces we want it to recognize.

Let’s start by downloading the Python code for facial recognition.

1. Open a new terminal on your Pi by pressing Ctrl-Alt-T.

2. Copy the files containing the Python code we need.

git clone https://github.com/carolinedunn/facial_recognition

3. Now let’s put together the dataset that we will use to train our Pi. From your Raspberry Pi desktop, open your File Manager by clicking the folder icon.

4. Navigate to the facial_recognition folder and then the dataset folder.

5. Right-Click within the dataset folder and select New Folder.


6. Enter your first name for the name of your newly created folder.


7. Click OK to finish creating your folder. This is where you’ll put photos of yourself to train the model (later).


8. Still in File Manager, navigate to the facial_recognition folder and open headshots.py in Geany.

9. On line 3 of headshots.py, replace the name Caroline (within the quote marks) with the name of the folder you just created in step 6. Keep the quote marks around your name. The folder name in the dataset folder and the name on line 3 should match exactly.
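For reference, line 3 of headshots.py is a plain variable assignment, so after editing it should look something like this (with Paul standing in for your folder name):

name = 'Paul'  # must exactly match the folder you created inside dataset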


10. Press the Paper Airplane icon in Geany to run headshots.py.

A new window will open with a view of your webcam. (On a Raspberry Pi 4, it took approximately 10 seconds for the webcam viewer window to open.)

11. Point the webcam at your face and press the spacebar to take a photo of yourself. Each time you press the spacebar you are taking another photo. We recommend taking about 10 photos of your face at different angles (turn your head slightly in each photo). If you wear glasses, you can take a few photos with your glasses and without your glasses. Hats are not recommended for training photos. These photos will be used to train our model. Press Esc when you have finished taking photos of yourself.


12. Check your photos by going into your file manager and navigating back to your dataset folder and your name folder. Double-click on a single photo to view. Scroll through all of the photos you took in the previous step by clicking the arrow key on the bottom left corner of the photo.


13. Repeat steps 5 through 11 to add someone else in your family.

Now that we have put together our dataset, we are ready to train our model.

14. In a new terminal, navigate to facial_recognition by typing:

cd facial_recognition

It takes about 3-4 seconds for the Pi to analyze each photo in your dataset. For a dataset with 20 photos, it will take about 1.5 minutes for the Pi to analyze the photos and build the encodings.pickle file.

15. Run the command to train the model by entering:

python train_model.py

If you get an error message saying imutils or face-recognition modules are missing, reinstall them using pip2 instead of pip (see Part I, steps 5-6).


Code Notes (train_model.py)

  • Dataset: train_model.py will analyze photos within the dataset folder. Organize your photos into folders by person’s name. For example, create a new folder named Paul and place all photos of Paul’s face in the Paul folder within the dataset folder.
  • Encodings: train_model.py will create a file named encodings.pickle containing the criteria for identifying faces in the next step.
  • Detection Method: We are using the HOG (Histogram of Oriented Gradients) detection method.
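To make those notes concrete, here is a condensed sketch of what a training script of this kind does, assuming the OpenCV, face_recognition and imutils packages installed in Part 1 (the repository's train_model.py may differ in its details):

import os
import pickle
import cv2
import face_recognition
from imutils import paths

knownEncodings = []
knownNames = []

# Walk every image in the dataset folder; each subfolder name is a person's name
for imagePath in paths.list_images("dataset"):
    name = imagePath.split(os.path.sep)[-2]
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; face_recognition expects RGB
    # Detect face bounding boxes with the HOG method, then compute 128-d embeddings
    boxes = face_recognition.face_locations(rgb, model="hog")
    for encoding in face_recognition.face_encodings(rgb, boxes):
        knownEncodings.append(encoding)
        knownNames.append(name)

# Serialize the names and encodings for the recognition script to load
with open("encodings.pickle", "wb") as f:
    f.write(pickle.dumps({"encodings": knownEncodings, "names": knownNames}))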

Now let’s test the model we just trained.

16. Run the command to test the model by typing:

python facial_req.py

In a few seconds, your webcam view should open up. Point the webcam at your face. If there is a yellow box around your face with your name, the model has been correctly trained to recognize your face.
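Behind that yellow box, the recognition step compares each detected face against encodings.pickle; a simplified sketch of the matching logic (the repository's facial_req.py wraps this in a video stream and drawing code):

import pickle
import face_recognition

# Load the encodings produced by train_model.py
data = pickle.loads(open("encodings.pickle", "rb").read())

def identify(rgb_frame):
    # rgb_frame: one video frame, already converted to RGB
    boxes = face_recognition.face_locations(rgb_frame)
    names = []
    for encoding in face_recognition.face_encodings(rgb_frame, boxes):
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        name = "Unknown"
        if True in matches:
            # Vote: the known name with the most matching encodings wins
            counts = {}
            for i in [i for i, m in enumerate(matches) if m]:
                counts[data["names"][i]] = counts.get(data["names"][i], 0) + 1
            name = max(counts, key=counts.get)
        names.append(name)
    return boxes, names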


Congratulations! You have trained your Raspberry Pi to recognize your face.

If you added someone else in step 13, have them look at your webcam and test the model too. Press ‘q’ to stop the program.

Part 3: Set Up Email Notifications for Raspberry Pi Facial Recognition

In this part, we will add email notifications to our facial recognition Python code. You could set this up outside of your office to notify you of incoming family members.

I have selected Mailgun for its simplicity; you are welcome to modify the code with the email service of your choice. Mailgun requires a valid credit card to create an account. For this project, I used the default sandbox domain in Mailgun.

1. Navigate to mailgun.com in your browser.

2. Create and/or Login to your Mailgun account.

3. Navigate to your sandbox domain and click API and then Python to reveal your API credentials.


4. Open send_test_email.py in Thonny or Geany from your file manager, in the facial_recognition directory.

5. On line 9, “https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages” replace “YOUR_DOMAIN_NAME” with your Mailgun domain.

6. On line 10, replace “YOUR_API_KEY” with your API key from Mailgun.

7. On line 12, add your email address from your Mailgun account.


8. Run the code send_test_email.py. If you receive a status code 200 and “Message: Queued” message, check your email.

When you complete this step successfully, you should receive the following email. This email may be delivered to your Spam folder.


If you wish to send to a different email address than the one you used to set up your Mailgun account, you can enter it in Mailgun under Authorized Recipients. Don’t forget to verify the additional email address in its inbox.


Adding Email Notifications to Facial Recognition

9. Open facial_req_email.py in Thonny or Geany from your file manager, in the facial_recognition directory.

10. On line 9, “https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages” replace “YOUR_DOMAIN_NAME” with your Mailgun domain.

11. On line 10, replace “YOUR_API_KEY” with your API key from Mailgun.

12. On line 12, add your email address from your Mailgun account.

13. Save your changes to facial_req_email.py.

14. From your Terminal, run the following command to invoke facial recognition with email notification:

python facial_req_email.py

As in the previous step, your webcam view should open up. Point the webcam at your face. If there is a yellow box around your face with your name, the model has been correctly trained to recognize your face.

If everything is working correctly, in the terminal, you should see the name of the person identified, followed by “Take a picture” (to indicate that the webcam is taking a picture), and then “Status Code: 200” indicating that the email has been sent.


Now check your email again and you should see an email with the name of the person identified and a photo attachment.


Code Notes (facial_req_email.py):

  • Emails are triggered when a new person is identified by our algorithm. The reasoning is simply to avoid triggering multiple emails while the same face remains recognized.
  • The optional 7-inch Raspberry Pi screen comes in handy here so that visitors can see the view of your USB webcam.
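A condensed sketch of that trigger, assuming the same Mailgun call used in send_test_email.py (domain, API key and addresses are placeholders you must replace; the repository version also attaches the webcam photo):

import requests

currentname = "unknown"  # the last person we emailed about

def send_message(name):
    # Mailgun's standard messages endpoint; substitute your own values
    return requests.post(
        "https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages",
        auth=("api", "YOUR_API_KEY"),
        data={"from": "Pi <mailgun@YOUR_DOMAIN_NAME>",
              "to": "you@example.com",
              "subject": name + " spotted by the Raspberry Pi",
              "text": name + " was just recognized."})

# Inside the recognition loop, after a frame yields a recognized name:
# if name != currentname:
#     currentname = name
#     send_message(name)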

Using a Raspberry Pi Camera instead of a USB Webcam

This tutorial is written for a USB webcam. If you wish to use a Pi Camera instead, you will need to enable the Pi Camera and change a line in facial_req.py.

1. Enable Camera from your Raspberry Pi configuration. Press OK and reboot your Pi.


2. From your terminal install Pi Camera with the command:

pip install picamera[array]

3. In Part 2, instead of running the file headshots.py, run the file headshots_picam.py instead.

python headshots_picam.py

4. In the files facial_req.py and facial_req_email.py, comment out the line:

vs = VideoStream(src=0).start()

and uncomment

vs = VideoStream(usePiCamera=True).start()

5. Save the file and run.


Adding People Using Photos for Raspberry Pi Facial Recognition

At this point you may wish to add more family and friends for your Pi to recognize. If they are not readily available to run headshots.py to take their photos, you can upload photos of them to your Raspberry Pi. The key is to find clear photos of each face (headshots work best), grouped into folders named for the corresponding person.
