| Technical Writer: Poornachandra Sarang | Technical Review: ABCOM Team | Copy Editor: Anushka Devasthale | Last Updated: July 22, 2020 | Level: Beginner | Banner Image Source: Internet |
Social media is the center of attraction for the present generation, and young people are among its most active users. Sending texts, pictures, and videos to friends and family, or posting them on a social media wall, has become the new normal of this era. Sometimes, however, a person in a photo or video may need to keep their identity private. The obvious solution is to mask out the faces with a graphics or video editor, but doing this by hand is tedious even for professional artists and video editors.
In this short tutorial, I will show you how to accomplish this with a well-developed machine learning library.
Now that you know the purpose behind masking faces, let us look at what this tutorial covers:
- Blurring the faces in a still photograph
- Blurring the faces in a video clip
- Blurring the faces in a live video capture
Consider the image shown here.
The image has a face that you would like to mask out. After the face is blurred, the photo will look as shown in the image below.
Look at this small video clip.
After blurring faces in the video, the revised video clip would look something like this.
The live feed is like the video clip in the previous section, except that it is a stream captured by a camera in real time; your ML application will detect faces in the stream and blur them on the fly.
Does all this sound interesting? The satisfaction of building it yourself is hard to put into words. Keep reading, and you will learn how to do all of this on your own.
What is OpenCV?
OpenCV (Open Source Computer Vision Library) is an open-source library of programming functions aimed at real-time computer vision. Haar Cascade is a machine learning algorithm in this library used for identifying objects in an image or a video. The algorithm is based on the concept of features proposed by Paul Viola and Michael Jones in their 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features". You will be using this algorithm in this project.
Setting Up Project
This project is Python-based, and we will be using Jupyter for development. Open a new notebook in Jupyter and rename it to MaskFaces. Import the following libraries into the project:
# OpenCV
import cv2
# to visualize images
import matplotlib.pyplot as plt
Note: I am using a Windows machine for this tutorial. This project is developed in an Anaconda-Jupyter environment. If you are not familiar with Anaconda and/or Jupyter, set them up first by following their installation guides.
A Haar Cascade is a classifier used for object detection in a given source. The haarcascade_frontalface_default.xml file defines a Haar Cascade designed by OpenCV to detect frontal faces; it is a pre-trained face detection model. To load this model in your project, you first need to specify the path to this XML file, as shown in the code snippet below:
# path to haarcascade file
cascpath = r'C:\Users\DRSARANG\Desktop\opencv\data\haarcascades\haarcascade_frontalface_default.xml'
Note: You will need to edit the path in the above statement and add the location to your XML file.
To load your XML file, use the CascadeClassifier method, as in the following code statement:
face_cascade = cv2.CascadeClassifier(cascpath)
To use the classifier, you will call its detectMultiScale method, which returns the bounding rectangles of the detected faces.
You load the desired image into your project by calling the imread method of the cv2 library, specifying the path to your image file as the function parameter.
Note that you will need to set up an appropriate path for your environment.
image = cv2.imread(r'C:\Users\DRSARANG\Desktop\opencv\data\images.jpg')
Displaying Original Image
OpenCV uses the BGR (Blue, Green, Red) channel order. If you call the cv2.imshow function to display the loaded image, it will be shown in a popup window. If you use the matplotlib library instead, the image will be shown in Jupyter's output cell within the same window; for this, we first need to convert the image from BGR to RGB format, which is done using the following code statement:
# convert image from BGR to RGB format for matplotlib display function
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
To display the image, you call the imshow method of matplotlib.
You will see the following image:
We will now write a function to detect the faces in the loaded image and blur them. We define the blur function to accept the image as an input parameter:
We detect the faces in the image by calling the detectMultiScale function on the previously loaded cascade classifier:
face = face_cascade.detectMultiScale(image, scaleFactor=1.2)
To detect both large and small faces with the same detection window, an image pyramid is constructed; the scaleFactor parameter decides the scaling used between successive levels of this pyramid.
The face variable now contains the list of faces detected in the image. We iterate through this list and blur each detected face in the following for loop:
for (x, y, w, h) in face:
We extract a cropped image from the original photo surrounding the face as follows:
img = image[y:y+h, x:x+w]
We use the GaussianBlur function of cv2 to blur this cropped image:
img = cv2.GaussianBlur(img,(99,99), 20)
Paste back this blurred rectangle into the original image:
image[y:y+h,x:x+w] = img
Repeat this process for all the detected faces, and finally return the modified image to the caller.
The entire function code is shown in the code window below for quick reference:
def blur(image):
    # detect the x, y coordinates and the width and height of each face rectangle
    face = face_cascade.detectMultiScale(image, scaleFactor=1.2)
    for (x, y, w, h) in face:
        # selecting the face area from the original image
        img = image[y:y+h, x:x+w]
        # applying Gaussian blur on the face area
        img = cv2.GaussianBlur(img, (99, 99), 20)
        # replacing the face with the blurred face
        image[y:y+h, x:x+w] = img
    return image
Now, the only task that remains is to call our blur function on the desired image. We do this using the following statement:
result = blur(image)
We now display the blurred image by calling the imshow method of matplotlib.
You will see the following image on your screen.
Next, I will show you how to mask faces in the pre-recorded video clips.
Loading Video Clip
In this section, I will show you how to mask the detected faces in a pre-recorded video clip. Let us first load the video clip by calling the VideoCapture method of cv2, as shown in the code statement below:
capture = cv2.VideoCapture(r'C:\Users\DRSARANG\Desktop\opencv\face-demographics-walking-and-pause.mp4')
Blurring the faces in a video clip is as simple as blurring them in a still photo. The trick is to extract the individual frames from the clip and apply our previously defined blur function to each extracted frame. We set up the loop for reading the frames as follows:
while True:
    # Capture frame-by-frame
    _, frame = capture.read()
    if frame is None:
        break
The read method returns a single frame. When all frames in the clip are exhausted, frame is None, and we terminate the loop.
We add an else clause to the above if statement as follows:
    else:
        # grayscale copy (not used further here; blur works on the colour frame)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur(frame)
        cv2.imshow('Video', frame)
Thus, for a valid frame, we call our blur function to detect and blur the faces in it, and then display the frame by calling the imshow method. As the frames are extracted continuously, you will see the series of modified frames as a continuous video on your screen.
To let the user quit before the entire clip has played back, we add the following code, which monitors whether the user has pressed the 'q' key on the keyboard:
    # press 'q' key to abort
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
Finally, when the program comes out of the while loop, we release the video capture and close all windows created by cv2. Note that the video is displayed in a popup window.
# When everything is done, release the capture
capture.release()
cv2.destroyAllWindows()
The entire code is shown in the code window below for quick reference.
while True:
    # Capture frame-by-frame
    _, frame = capture.read()
    if frame is None:
        break
    else:
        # converting frame to gray (not used further here)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur(frame)
        cv2.imshow('Video', frame)
    # press 'q' key to abort
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
capture.release()
cv2.destroyAllWindows()
Webcam Face Blurring
There is not much difference between a video clip stored on your disk and the feed from a live webcam. The only statement that changes in your earlier video-clip code is the way you capture the video: you just need to change the parameter passed to the VideoCapture method.
# Capturing live video on webcam
capture = cv2.VideoCapture(0)
The parameter value 0 selects the built-in webcam on your machine.
The full source for the webcam capture and blurring faces is shown in the code window below:
As before, the video is displayed in a popup window and is terminated by pressing 'q' in an active video window.
#Capturing live video on webcam capture = cv2.VideoCapture(0) while True: # Capture frame-by-frame _, frame = capture.read() if frame is None: break else: # converting frame to gray gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) blur (frame) cv2.imshow('Video', frame) #press 'q' key to abort if cv2.waitKey(1) & 0xFF == ord('q'): break # When everything is done, release the capture capture.release() cv2.destroyAllWindows()
Masking the identity of persons in photos, videos, and live streams becomes a requirement in many sensitive cases. The OpenCV library provides an easy way, in just a couple of lines of code, to detect faces in photos and videos and blur them with a built-in function. You learned to use this OpenCV functionality in a practical use case: blurring faces in photos, videos, and real-time video streams. The next challenge would be to mask out only one known identity in a group of people. Stay tuned for it!