Real-time Face Detection using Raspberry Pi – Connections and Code
In this article, we will create our own Face Recognition system using the OpenCV library on the Raspberry Pi.
Face detection systems have become very popular these days, as they can be more secure than fingerprints and typed passwords. You may have seen the face unlock feature on your smartphone, which makes things very easy. Face detection is also used in many places such as airports, railway stations, and roads for surveillance.
Here, we will build a Face Recognition system using the OpenCV library on the Raspberry Pi, since the Pi is portable enough to work as a surveillance system. I have tested this setup myself and it works reliably.
Like most Face Recognition systems, it consists of two Python scripts. The first is a training program that analyzes a set of photos of a particular person and creates a dataset (a YML file) from them.
The second is the Recognizer program, which detects a face and then uses this YML file to recognize it and display the person's name. The programs here are written for the Raspberry Pi (Linux).
What is OpenCV and How to Use it for Face Recognition?
OpenCV is an open-source library for computer vision, machine learning, and image processing. It plays a major role in real-time applications, which are very important in today's systems.
By using this library, anyone can process images and videos to identify objects, faces, and even handwriting. When integrated with libraries such as NumPy, Python can process the OpenCV array structure for analysis.
It identifies image patterns and their features, which are then used in vector space to perform mathematical operations.
If you want to know more about OpenCV, you can read this article. You will have to go through it to install OpenCV first and get it ready for face detection.
Working Process
Before we start, it's important to grasp that Face Detection and Face Recognition are two different things. In Face Detection, the software only detects the face of an individual.
In Face Recognition, the software not only detects the face but also identifies the person. It should now be clear that we need to perform Face Detection before performing Face Recognition.
A video feed from a webcam is nothing but a long sequence of images updated one after the other, and each of those images is simply a set of pixel values, each in its respective position.
There are plenty of algorithms for detecting a face in these pixels and then recognizing the person, and explaining them is beyond the scope of this tutorial. Since we are using the OpenCV library, however, Face Recognition is simple to perform and can be understood without going deeper into the underlying concepts.
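To make this concrete, here is a tiny sketch (my own example, assuming OpenCV is installed and a webcam is connected at index 0) that grabs a single frame and shows that it really is just a NumPy array of pixel values:
import cv2 #OpenCV library
cap = cv2.VideoCapture(0) #Open the webcam at index 0
ret, frame = cap.read() #Grab a single frame from the feed
if ret:
    print(frame.shape) #e.g. (480, 640, 3): rows, columns and BGR channels
    print(frame[0, 0]) #Pixel value at the top-left corner
cap.release() #Release the webcam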
So now, let's install the packages required for face detection.
Note - Please install OpenCV before proceeding with further steps.
Setup Procedure
Power your Pi with an adapter and connect it to a display monitor via an HDMI cable; you can also do the whole process in headless mode.
Install dlib: Dlib is a toolkit for real-world Machine Learning and data analysis applications. To install dlib, just enter the following command in the terminal.
pip install dlib
- This should install dlib. When it has installed successfully, entering the command again shows a message like the one below. On my setup, everything needed for face recognition is already in place, so all the packages show up as pre-installed.
- You can do the same and check whether it installed properly by typing the command again.
Install pillow: Pillow, a fork of PIL (the Python Imaging Library), is used to open, manipulate, and save images in many different formats. To install it, use the following command:
pip install pillow
Once it is installed, you'll get a success message as shown below.
Install face_recognition: The face_recognition library for Python is considered to be the simplest library for recognizing and manipulating faces. We'll be using this library to train and recognize faces. To install it, run the following command.
pip install face_recognition --no-cache-dir
- Once it is installed, you will get a success message as shown below.
The library is heavy and most people will run into memory problems while installing it on the Pi, hence the --no-cache-dir flag, which installs the library without saving the cache files.
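If you want to double-check the installs before moving on, a quick sanity check (my own suggestion, not part of the original scripts) is to import all three packages from a Python shell:
import dlib #Machine-learning toolkit that face_recognition builds on
import PIL #Pillow imaging library
import face_recognition #Face detection and recognition wrapper around dlib
print("All face recognition packages imported successfully")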
Train the system (How to feed the data)
Let's take a look at the Face_Trainer.py program. The goal of the program is to open all the images in the Face_Images directory and search for faces.
You can get the code files here.
Each image is converted to grayscale and then into a NumPy array; once a face is detected in it, the face region is cropped out. Finally, we train OpenCV's LBPH face recognizer on these cropped faces and save the result in a file called face-trainner.yml. The information in this file can later be used to recognize the faces.
The whole Trainer program is given at the end; here I will explain the most important lines.
- We start the program by importing the required modules. The cv2 module is used for image processing, NumPy is used to convert images to numerical arrays, the os module is used to navigate through directories, and PIL is used to handle images.
import cv2 #For Image processing
import numpy as np #For converting Images to Numerical array
import os #To handle directories
from PIL import Image #Pillow lib for handling images
- Next, we use the haarcascade_frontalface_default.xml classifier to detect faces in images. Make sure you have placed this XML file in your project folder, or you will run into errors.
- Then we use the recognizer variable to create a Local Binary Pattern Histogram (LBPH) Face Recognizer.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create() #Needs opencv-contrib; very old OpenCV builds used cv2.createLBPHFaceRecognizer()
- Then we have to go into the Face_Images directory to access the images inside it. This directory should be placed inside your current working directory (CWD).
- The following line is used to get the path of that folder inside the CWD.
Face_Images = os.path.join(os.getcwd(), "Face_Images") #Tell the program where we have saved the face images
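- The snippets below also rely on a few bookkeeping variables that the full program (given at the end) sets up before the loop. A minimal sketch of that initialization, assuming Face_ID starts at -1 so that the first person gets ID 0, looks like this:
Face_ID = -1 #Numeric ID of the current person, incremented for every new folder/person (assumed starting value)
pev_person_name = "" #Name of the previous person's folder, used to detect when the person changes
x_train = [] #List of cropped face images (ROIs) used as training samples
y_ID = [] #List of Face_IDs, one per training sample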
- We then use for loops to walk through each sub-directory of Face_Images and open any files that end with jpeg, jpg, or png. The path of every image is stored in a variable called path, and the name of the folder in which the images are placed (which will be the person's name) is stored in a variable called person_name.
for root, dirs, files in os.walk(Face_Images): #go to the face image directory
    for file in files: #check every file in it
        if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"): #for image files ending with jpeg, jpg or png
            path = os.path.join(root, file)
            person_name = os.path.basename(root)
- If the name of the person has changed, we increment a variable called Face_ID; this gives each person a unique Face_ID, which we will later use to look up the person's name.
if pev_person_name != person_name: #Check if the name of the person has changed
    Face_ID = Face_ID + 1 #If yes, increment the ID count
    pev_person_name = person_name
- It is much easier for OpenCV to work with grayscale images than with colored images, since the BGR channels can be ignored. So, to reduce the amount of data in each image, we convert it to grayscale and also resize it to 550x550 so that all images stay uniform.
- Make sure the face is in the middle of every photo, or it may get cropped out. Finally, we convert these images to a NumPy array to get a numerical representation of the photos, and then use the cascade classifier to detect the faces in the images and store the result in a variable called faces.
Grey_Image = Image.open(path).convert("L") #Convert the image to greyscale using Pillow
Crop_Image = Grey_Image.resize((550, 550), Image.ANTIALIAS) #Resize the grey image to 550x550 (newer Pillow versions use Image.LANCZOS); make sure your face is in the center of every image
Final_Image = np.array(Crop_Image, "uint8") #Convert the image to a NumPy array
faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5) #Detect the face in each sample image
- Once a face has been detected, we crop that area and treat it as our Region of Interest (ROI). The ROI is what the face recognizer is trained on, and every ROI face is appended to a list called x_train.
- Then we feed these ROI values, together with the Face ID values, to the recognizer, which produces the training data. This data is saved, so every time you run this program you will find that the face-trainner.yml file gets updated.
for (x, y, w, h) in faces:
    roi = Final_Image[y:y+h, x:x+w] #Crop the Region of Interest (ROI)
    x_train.append(roi)
    y_ID.append(Face_ID)

recognizer.train(x_train, np.array(y_ID)) #Train the recognizer on all collected ROIs, after every image has been processed
recognizer.save("face-trainner.yml") #Save the trained model as a YML file
- So make sure to run this program again whenever you make any changes to the photos in the Face_Images directory.
- When it runs, the Face ID, path, person name, and NumPy array are printed for debugging purposes, as shown below.
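- In the full program this is just a print statement inside the loop; a minimal version (my own sketch) could be:
print(Face_ID, path, person_name, Final_Image) #Debug output: ID, image path, person name and the pixel array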
Face recognition program and results
Now that we have our trained data ready, we can use it to recognize faces. In the Face Recognizer program, we'll get a live video feed from a USB webcam and then convert it into image frames.
- Then we use our face detection technique to detect faces in those frames and compare them against all the Face IDs that we created earlier.
- If we find a match, we draw a box around the face and write the name of the person who has been recognized. The entire program is again given at the end; the explanation is as follows.
- The program shares a lot with the trainer program, so we import the same modules that we used earlier and also load the same classifier, since we need to perform face detection again.
import cv2 #For Image processing
import numpy as np #For converting Images to Numerical array
import os #To handle directories
from PIL import Image #Pillow lib for handling images
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create() #Needs opencv-contrib; very old OpenCV builds used cv2.createLBPHFaceRecognizer()
- Next, in the variable labels, you have to write the names of the persons that you used as folder names. Make sure you follow the same order as the Face IDs. In my case, it is my name, "Apurva", and "Paul".
labels = ["Apurva", "Paul"]
- We then load the face-trainner.yml file into our program, since we will use the information from that file to recognize faces.
recognizer.read("face-trainner.yml") #Load the trained model (very old OpenCV builds used recognizer.load())
- The video feed is obtained from the USB webcam. If you have more than one camera connected, replace 0 with 1 to access the secondary camera.
cap = cv2.VideoCapture(0) #Get video feed from the camera
- Next, we break the video into frames (images), convert each frame to grayscale, and then detect the faces in it.
- Once the faces are detected, we crop that area, just like we did earlier, and save it separately as roi_gray.
ret, img = cap.read() #Read one frame from the video feed
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #Convert the video frame to greyscale
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5) #Detect faces in the frame
for (x, y, w, h) in faces:
    roi_gray = gray[y:y+h, x:x+w] #Crop the face region from the greyscale frame
    id_, conf = recognizer.predict(roi_gray) #Recognize the face
- The variable conf is the score that predict() returns along with the ID. Note that for OpenCV's LBPH recognizer this score is a distance measure, so lower values actually indicate a closer match; you may need to tune the threshold of 80 used below for your own setup. When the check passes, we get the name of the person from the ID number using the lines of code below.
- Then we draw a box around the person's face and write their name on top of the box.
if conf >= 80:
    font = cv2.FONT_HERSHEY_SIMPLEX #Font style for the name
    name = labels[id_] #Get the name from the list using the ID number
    cv2.putText(img, name, (x, y), font, 1, (0, 0, 255), 2) #Write the name above the face
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2) #Draw a box around every detected face
- Finally, we display the video feed that we just analyzed and break out of the loop when the wait key (here q) is pressed.
cv2.imshow('Preview',img) #Display the Video
if cv2.waitKey(20) & 0xFF == ord('q'): #Press q to quit
    break
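- For reference, all of the snippets from cap.read() onward sit inside a single capture loop in the full program. A rough sketch of that surrounding structure, with the usual cleanup added at the end (my assumption, since only the full listing given at the end is authoritative), looks like this:
while True:
    ret, img = cap.read() #Grab the next frame
    #...greyscale conversion, detection, recognition and drawing go here...
    cv2.imshow('Preview', img) #Display the annotated frame
    if cv2.waitKey(20) & 0xFF == ord('q'): #Press q to quit
        break
cap.release() #Release the webcam
cv2.destroyAllWindows() #Close the preview window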
- Make sure the Pi is connected to a monitor through HDMI when this program is executed. Run the program and you will see a window named Preview pop up with your video feed in it.
- If a face is detected in the video feed, you will see a box around it, and if the program can also recognize the face, it will display the person's name.
- We have trained our program to recognize me and Paul, and you can see both of us getting recognized in the snapshot below.
Conclusion
In this tutorial, we learned how to make a face recognition system with a Raspberry Pi. We created a trainer program that converts images into arrays and then used that data to detect and recognize faces. This program should work for you too; in case of any issues, comment down below and I'll try my best to resolve them. We detected my face and Paul Walker's face here; you can try adding your own photos.
Let me know your thoughts. See you later.
Until Next Time! Peace out!