We talked about haar training in the previous post. Let's review how far we have come so far:
1. Installed OpenCV
2. Did some good exercises on introductory OpenCV programming
3. Created eye and face detectors using haar cascade classifiers
4. Learnt how to create an xml file with haar training that can be used to build an object detector
Now let's delve into our main topic of biometric face recognition. In one of my previous posts, I told you about biometric face recognition, its significance and the contests related to it.
The reason I had you learn to create a face detector is that the first step in face recognition is detecting the face against a background, like the one shown below
Detecting the face is just the beginning. The next step is to normalise the face, so that only the face part is cut out. Then we apply histogram equalisation to make the face invariant to brightness and contrast. Finally, we rotate the face so that the eye coordinates sit at the same height. This is needed because we will be comparing photos for face recognition, and if the subject tilts his head, the recognition system may mistake his nose for one of the eyes. In the end, the piece that we actually need looks like this
The face recognition system compares this face with faces like it to recognise people. Incidentally, there is a saviour for us: the CSU Face Evaluation System. Just google the name and download the zipped file. The beauty of the CSU face evaluation system is that you supply an image containing a face, along with any background, together with the eye center coordinates, and it returns the face detected, normalised, equalised and rotated.
The only thing we need to supply is the eye center coordinates of the image, i.e., the x and y coordinates of the left eye center (eyeball center) and the same for the right eye. To get these coordinates, I used the eye detector that I showed you how to build in the previous posts. Since it draws a square around the eye, finding the center of that square is no big task. I will explain it to you in future posts; the next post will be on Sunday (28/08/11). For now, just download the CSU face evaluation system, try to decipher it, and go carefully through its documentation.
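Since the eye detector returns a bounding rectangle, the eye center is just the midpoint of that rectangle. A quick sketch, using the `(x, y, w, h)` rectangle format that OpenCV's detectors return (the example coordinates are made up):

```python
def eye_center(rect):
    """Return the (x, y) center of a detection rectangle (x, y, w, h)."""
    x, y, w, h = rect
    return (x + w // 2, y + h // 2)

# e.g. a hypothetical eye detected at x=30, y=40 inside a 20x20 box
print(eye_center((30, 40, 20, 20)))  # → (40, 50)
```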
In the previous posts, I used haar cascade xml files for the detection of faces, eyes, etc. In this post, I am going to show you how to create your own haar cascade classifier xml files. It took me a total of 16 hours to do it. Hope you can do it even sooner by following this post.
Note: The below is only for Linux OpenCV users. If you are a Windows user, use this link
For most of the work that is to come, you will need these executable Linux files. Here's the link for them.
Before I start, remember two important definitions
Positive images : These images contain the object to be detected
Negative images : These can contain absolutely anything, except the object to be detected
It's easier to explain with an example, so I will walk you through a step-by-step procedure to make a classifier that detects a pen. You can use the same procedure for any object you want to experiment with.
First of all, I took photographs of three of my pens along with some background; the pics looked like the one below
I took a total of 7 photographs (I didn't bother to count how many I took of each of the three pens) with my 2MP camera phone and loaded them into my computer. I then cropped each one of them using imageclipper, excluding the background and keeping only the pen. Imageclipper can be downloaded from the link below
The one thing I hated about imageclipper is that, even though it let me work fast, it required me to install a lot of other libraries. It would have been much simpler to just use GIMP to crop, since there were only 7 images. It is up to you to follow whichever way you find easier. The cropped image looks like the one below
From here on, the procedure is simple. As I said at the start of the post, after downloading the necessary Linux executables, keep them in a separate folder named "Haar_Training", where you will carry out your experiments. Now inside that folder, create three folders and fill them with the necessary data, as explained below
1. Positive_Images : In this folder, I kept all seven of my cropped images
2. Negative_Images : In this folder, I kept some other images (99 in total), which can be any images you have, as long as they are of a similar genre and do not contain the cropped object (the pen) anywhere
3. Samples : Keep it empty
Now, your Haar_Training folder looks like this
Now, navigate into the Haar_Training folder through the terminal. In the folder named "Positive_Images" the images are in png format, and in "Negative_Images" they are in ppm format. So accordingly, I collected the paths of those files in two text files named positives.dat and negatives.dat respectively, using the two commands below
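The two list files can be built with `find`, along these lines (folder and file names as described above):

```shell
# Run from inside the Haar_Training folder:
find Positive_Images -name '*.png' > positives.dat
find Negative_Images -name '*.ppm' > negatives.dat
```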
In the next step, I used the Samples folder to store the training samples, passing it as an argument along with positives.dat and negatives.dat in the command below
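Based on Naotoshi Seo's widely circulated haar-training scripts, which bundles of Linux haar-training executables like this are typically built from, the sample-creation command would look something like the following. The distortion options shown are common defaults, not values confirmed by this post, and the script/executable names may differ in your download:

```shell
# Generate 250 distorted training samples from the positive images,
# superimposed on random negatives, and store them in the samples folder.
perl createtrainsamples.pl positives.dat negatives.dat samples 250 \
  "./createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 \
  -maxzangle 0.5 -maxidev 40 -w 160 -h 20"
```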
The important arguments in the above command are
positives.dat - contains the list of positive image paths
negatives.dat - contains the list of negative image paths
samples - the folder used to store the data of the training samples
250 - the number of training samples
-w 160 -h 20 - these two signify the width and height ratio of the pen. Originally it's -w 20 -h 20 for a face, but since a pen is long and thin unlike a face, I made it 160 and 20
The next two commands use the data in the Samples folder to create a unified training data file, which is then used for the haar training
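With the old OpenCV haar-training executables, those steps would most likely be as follows; the flag values mirror the argument list discussed next, but treat the exact invocation as an assumption:

```shell
# 1. Collect every generated .vec file and merge them into one samples.vec
find samples/ -name '*.vec' > samples.dat
./mergevec samples.dat samples.vec

# 2. Run the haar training itself; metadata goes into the haarcascade folder
./haartraining -data haarcascade -vec samples.vec -bg negatives.dat \
  -npos 250 -nneg 99 -nstages 20 -mem 2048 -mode ALL -w 160 -h 20 -nonsym
```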
This creates the haar cascade xml file we want in the same Haar_Training folder, keeping all the metadata that it forms along the way in the haarcascade folder, which is created automatically. The arguments to be noted in the above command are
haarcascade - the folder in which the metadata is kept
samples.vec - the unified training data that we created just a while ago
negatives.dat - the list of negative image paths
20 - the number of stages; the more, the better, but a great time consumer
99 - the number of negative images
2048 - the amount of RAM (in MB) that can be used
It took five hours for this process to complete and generate my final pen detector. If you are a bit sceptical about the final outcome, instead of waiting for five hours you can, at any time while the above command is running, issue the command below to generate an intermediate haar cascade xml file from the data available so far.
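With the old OpenCV tools, that intermediate dump is done with convert_cascade, roughly as below (the output file name is a placeholder):

```shell
# Bundle the stages trained so far into a usable xml file
./convert_cascade --size="160x20" haarcascade haarcascade-intermediate.xml
```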
Now, finally, after five hours, I got the haar cascade xml file. I have already shown you in the previous posts how to use haar cascade xml files, so I experimented on some pens of my own. Below is the YouTube video of it.
Update (16 March, 2012): At the request of readers, I am sharing the haar classifier file for pen detection. Below is the link for it.
Please note that I used very few sample images (fewer than 100) to create it, while in reality at least 5000-10000 images are used to create a robust classifier. So it may give false positives in some cases.
Update (25 March, 2012): The link to the executable Linux files that I gave earlier loads slowly or sometimes doesn't work at all. So here's the new link for it