Mukul Rawat, B.Tech. (EEE)
Poornachandra Sarang, Ph.D.
In the current era of social networking, it is quite common to share photos and video clips with friends, relatives, and even the public. There are situations where you would like to hide the identity of the people in photos and videos while sharing them with a certain group of people. The obvious approach is to mask out the faces with a graphics or movie editor, but this is a tedious task even for professional artists and video editors. In this short tutorial, I will show you how to accomplish it with a well-developed machine learning library.
Having explained the purpose behind masking faces, let me outline what you are going to learn. This tutorial will teach you the following:
- Blurring faces in a still photograph
- Blurring faces in a video clip
- Blurring faces in a live video capture
Consider the image shown here.
The image has a couple of faces that you may wish to mask out. After the faces are blurred, your photo will look as shown in this image.
Look at this small video clip.
After blurring faces in the video, the revised video clip would look like this.
The live feed is like the video clip seen in the previous section, except that it is a live stream captured by a camera in real time; your ML application will detect faces in the stream and blur them on the fly.
Does this not all sound interesting? Keep reading and you will learn how to do all of this on your own.
What is OpenCV?
OpenCV (Open Source Computer Vision Library) is an open-source library of programming functions for computer vision and image processing. Haar Cascade is a machine learning algorithm in this library that is used for identifying objects in an image or a video. The algorithm is based on the concept of features proposed by Paul Viola and Michael Jones in their 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features". You will be using this algorithm in this project.
Setting Up Project
This project is Python-based and uses Jupyter for development. Open a new notebook in Jupyter and rename it to MaskFaces. Import the following libraries into the project.
# OpenCV
import cv2
# to visualize images
import matplotlib.pyplot as plt
Note: I am using a Windows machine for this tutorial. The project is developed in the Anaconda/Jupyter environment. If you are not familiar with Anaconda and/or Jupyter, you can watch the full demo video given at the end of this course.
A Haar Cascade is a classifier that is used for object detection in a given source. The haarcascade_frontalface_default.xml file defines a Haar Cascade shipped with OpenCV for detecting frontal faces; it is a pre-trained face-detection model. To load this model in your project, you first need to specify the path to this XML file, which is done in the following code statement:
# path to haarcascade file
cascpath = r'C:\Users\DRSARANG\Desktop\opencv\data\haarcascades\haarcascade_frontalface_default.xml'
Note: You will need to modify the path in the above statement to wherever you have stored the XML file on your drive.
You use the CascadeClassifier method to load the XML file. Use the following code statement to load the classifier in your project.
face_cascade = cv2.CascadeClassifier(cascpath)
To use the classifier, you will call its detectMultiScale method, which returns the boundary rectangles of the detected faces.
You load the desired image into your project by calling the imread function of the cv2 library, specifying the path to your image file as the parameter. Note that you will need to set an appropriate path for your environment.
image = cv2.imread(r'C:\Users\DRSARANG\Desktop\opencv\data\images.jpg')
Displaying Original Image
OpenCV uses the BGR (Blue, Green, Red) channel order. If you call cv2.imshow to display the loaded image, it will be shown in a popup window. If you use the matplotlib library instead, the image will be shown in Jupyter's output cell within the same window. For this, we first need to convert the image from BGR to RGB format, which is done using the following statement:
# convert image from BGR to RGB format for matplotlib display function
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
To display the image, you call the imshow method of matplotlib.
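Putting the display step together (shown here on a small synthetic image so the snippet runs without any file on disk; in the tutorial you would pass the image loaded and converted above):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt

# stand-in for the RGB image produced by cvtColor above
image = np.full((100, 100, 3), 128, dtype=np.uint8)

plt.imshow(image)  # matplotlib expects RGB channel order
plt.axis("off")    # hide the pixel-coordinate axes
plt.show()
```

In a Jupyter notebook you can drop the backend line; the image appears inline in the output cell.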
You will see the following image:
We will now write a function to detect the faces in the loaded image and blur them. We define the blur function to accept the image as an input parameter:
We detect the faces in the image by calling detectMultiScale function on the previously loaded cascade classifier.
face = face_cascade.detectMultiScale(image,scaleFactor = 1.2)
To detect both large and small faces using the same detection window, an image pyramid is constructed. The scaleFactor parameter decides the scaling step used while constructing this pyramid.
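To get an intuition for scaleFactor, this plain-Python sketch (an illustration only, not OpenCV's internal code) lists the successive window sizes the detector effectively searches; a factor closer to 1 yields more pyramid levels, making detection slower but more thorough:

```python
# illustrative sketch of image-pyramid levels, not OpenCV internals
def pyramid_scales(scale_factor, min_size=30, max_size=480):
    sizes = []
    size = float(min_size)
    while size <= max_size:
        sizes.append(round(size))
        size *= scale_factor
    return sizes

print(len(pyramid_scales(1.05)))  # many levels: slow but thorough
print(len(pyramid_scales(1.4)))   # few levels: fast, may miss faces
```

A scaleFactor of 1.2, as used above, is a common middle ground between speed and detection quality.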
The face now contains the list of detected faces in the image. We iterate through this list to blur each detected face. We do this in the following for loop:
for (x, y, w, h) in face:
We extract a cropped image from the original photo surrounding the face as follows:
img = image[y:y+h, x:x+w]
We use the GaussianBlur function of OpenCV to blur this cropped image.
img = cv2.GaussianBlur(img,(99,99), 20)
Paste back this blurred rectangle into the original image:
image[y:y+h,x:x+w] = img
Repeat this process for all the detected faces and then finally return the modified image to the caller:
The entire function code is shown in the code window below for your quick reference:
def blur(image):
    # detect x, y coordinates and the width and height of the
    # rectangle containing each face
    face = face_cascade.detectMultiScale(image, scaleFactor=1.2)
    for (x, y, w, h) in face:
        # selecting the face area from the original image
        img = image[y:y+h, x:x+w]
        # applying Gaussian blur on the face area
        img = cv2.GaussianBlur(img, (99, 99), 20)
        # replacing the face with the blurred face
        image[y:y+h, x:x+w] = img
    return image
Now, the only task that remains to blur the faces in the photo is to call our blur function on the desired image. We do this using the following statement:
result = blur(image)
We now display the blurred image by calling the imshow method of matplotlib.
You will see the following image on your screen.
Next, I will show you how to mask faces in the pre-recorded video clips.
Loading Video Clip
In this section, I will show you how to mask the detected faces in a pre-recorded video clip. Let us first load the video clip. You do this by calling the VideoCapture method of cv2, as shown in the code statement here:
capture = cv2.VideoCapture(r'C:\Users\DRSARANG\Desktop\opencv\face-demographics-walking-and-pause.mp4')
Blurring the faces in a video clip is as simple as doing it for a still photo. The trick is to extract the individual frames from the video clip and apply our previously defined blur function to each extracted frame. We set up the loop for reading the frames in the clip as follows:
while True:
    # Capture frame-by-frame
    _, frame = capture.read()
    if frame is None:
        break
The read method returns a single frame. When all frames in the clip are exhausted, frame is None and we terminate the loop.
We add an else clause to the above if statement as follows:
    else:
        # converting frame to gray
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur(frame)
        cv2.imshow('Video', frame)
Thus, for each valid frame, we call our blur function, which detects and blurs the faces in place, and then display the frame by calling the imshow method. (The grayscale copy is computed but not actually used here; detectMultiScale works directly on the color frame.) Note that as the frames are extracted continuously, you will see the series of modified frames as a continuous video on your screen.
To let the user quit before the entire clip has played back, we add the following code to detect when the user presses the 'q' key on the keyboard.
        # press 'q' key to abort
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
Finally, when the program exits the while loop, we release the video capture and close all windows created by cv2. Note that the video is displayed in a popup window.
# When everything is done, release the capture
capture.release()
cv2.destroyAllWindows()
The entire code is shown in the code window below for your quick reference.
while True:
    # Capture frame-by-frame
    _, frame = capture.read()
    if frame is None:
        break
    else:
        # converting frame to gray
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur(frame)
        cv2.imshow('Video', frame)
        # press 'q' key to abort
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

# When everything is done, release the capture
capture.release()
cv2.destroyAllWindows()
Webcam Face Blurring
There is not much difference between a video clip stored on your disk and the feed from a live webcam. The only statement that changes from your earlier video-clip code is the way you capture the video: you just need to change the parameter passed to the VideoCapture call.
# Capturing live video on webcam
capture = cv2.VideoCapture(0)
The parameter value 0 selects the built-in webcam on your machine.
The full source for the webcam capture and blurring faces is shown in the code window below:
As before, the video is displayed in a popup window, and playback is terminated by pressing 'q' in the active video window.
# Capturing live video on webcam
capture = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    _, frame = capture.read()
    if frame is None:
        break
    else:
        # converting frame to gray
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur(frame)
        cv2.imshow('Video', frame)
        # press 'q' key to abort
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

# When everything is done, release the capture
capture.release()
cv2.destroyAllWindows()
Masking the identity of persons in photos, videos and live streams is a requirement in many sensitive situations. The OpenCV library provides an easy way, in just a few lines of code, to detect faces in photos and videos and blur them with a built-in function. You learned to use this OpenCV functionality in a practical use case of blurring faces in photos, videos and real-time video streams. The next challenge would be to mask out only a known identity in a group of people. Stay tuned for it!