Viola-Jones face detection and features per weak classifier - AdaBoost

From what I've read, each weak classifier in Viola-Jones face detection gets all N training samples and only 1 feature.
1) Does that mean that I will have to check over 160k classifiers at first for a 24x24 frame, since there are that many Haar features?
2) Does each weak classifier check its own feature in each of the N training samples? I.e., it uses a region of each of the N images and consults the integral image to decide whether it is a face or not?

I've found answers to those questions:
Initially yes; during training of the cascade we have to evaluate a very large number of features. This is also why, e.g., OpenCV provides pre-trained cascades for common tasks.
Yes.
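For example, a minimal sketch of running one of the pre-trained cascades that ship with OpenCV (the image path is a placeholder):

```python
import cv2

# Load a pre-trained frontal-face cascade that ships with OpenCV
# (path assumes the opencv-python package; adjust for your install).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Detection works on a grayscale image.
img = cv2.imread("photo.jpg")            # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The detector slides and rescales its 24x24 base window internally.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```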

Related

Simple but reasonably accurate algorithm to determine if tag / keyword is related to an image

I have a hard problem to solve which is about automatic image keywording. You can assume that I have a database with 100000+ keyworded low quality jpeg images for training (low quality = low resolution about 300x300px + low compression ratio). Each image has about 40 mostly accurate keywords (data may contain slight "noise"). I can also extract some data on keyword correlations.
Given a color image and a keyword, I want to determine the probability that the keyword is related to this image.
I need a creative, understandable solution which I could implement on my own in about a month or less (I plan to use Python). What I have found so far are machine learning, neural networks and genetic algorithms. I was also thinking about generating some kind of signature for each keyword which I could then use to check against not-yet-seen images.
Crazy/novel ideas are appreciated as well if they are practicable. I'm also open to using other Python libraries.
My current algorithm is extremely complex and computationally heavy. It suggests keywords instead of calculating probability and 50% of suggested keywords are not accurate.
Given the hard requirements of the application, only coarse and simple solutions can be proposed.
For every image, use some segmentation method and keep, say, the four largest segments. Distinguish one or two of them as being background (those extending to the image borders), and the others as foreground, or item of interest.
Characterize the segments in terms of dominant color (using a very rough classification based on color primaries), and in terms of shape (size relative to the image, circularity, number of holes, dominant orientation and a few others).
Then for every keyword you can build a classifier that decides if a given image has/hasn't this keyword. After training, the classifiers will tell you if the image has/hasn't the keyword(s). If you use a fuzzy classification, you get a "probability".
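A rough sketch of that per-keyword classifier idea (the "segmentation" and features here are deliberately crude stand-ins, and scikit-learn's LogisticRegression plays the role of the fuzzy classifier since its output can be read as a probability):

```python
import numpy as np
import cv2
from sklearn.linear_model import LogisticRegression

def rough_features(img_bgr):
    """Very crude stand-ins for the segment descriptors described above."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # Coarse dominant-color description: an 8-bin hue histogram.
    hue_hist = cv2.calcHist([hsv], [0], None, [8], [0, 180]).flatten()
    hue_hist /= hue_hist.sum() + 1e-9
    # A crude foreground/background split: Otsu threshold on brightness,
    # plus the fraction of the image covered by the "foreground".
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    fg_fraction = np.count_nonzero(mask) / mask.size
    return np.concatenate([hue_hist, [fg_fraction]])

def train_keyword_classifier(images, has_keyword):
    """One binary classifier per keyword; predict_proba gives a 'probability'."""
    X = np.stack([rough_features(img) for img in images])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, has_keyword)          # has_keyword: array of 0/1 labels
    return clf

# keyword_prob = clf.predict_proba(rough_features(new_img)[None, :])[0, 1]
```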

Match Sketch(Drawing) face photo to digital color photo

I'm trying to match a sketch of a face (a drawing) to a color photo. For my research I want to find out what the challenges of matching sketch drawings to color face photos are. So far I have found:
resolution pixel difference
texture difference
distance difference
and color (not much effect)
I want to know (in technical terms) what other challenges there are and what OpenCV and JavaCV methods and algorithms are available to overcome those challenges?
Here are some examples of the sketches and the photos that are known to match them:
This problem is called multi-modal face recognition. There has been a lot of interest in comparing a high quality mugshot (modality 1) to low quality surveillance images (modality 2), another is frontal images to profiles, or pictures to sketches like the OP is interested in. Partial Least Squares (PLS) and Tied Factor Analysis (TFA) have been used for this purpose.
The key technical step is computing two linear projections, one from each modality, into a common space where two points being close means they depict the same individual. Here are some papers on this approach:
Abhishek Sharma, David W. Jacobs: Bypassing Synthesis: PLS for Face Recognition with Pose, Low-Resolution and Sketch. CVPR 2011.
S. J. D. Prince, J. H. Elder, J. Warrell, F. M. Felisberti: Tied Factor Analysis for Face Recognition across Large Pose Differences. IEEE Trans. Pattern Anal. Mach. Intell., 30(6), 970-984, 2008. Elder is a specialist in this area and has a variety of papers on the topic.
B. Klare, Z. Li, A. K. Jain: Matching Forensic Sketches to Mugshot Photos. IEEE Trans. Pattern Anal. Mach. Intell., 29 Sept. 2010.
As you can understand, this is an active research area/problem. In terms of using OpenCV to overcome the difficulties, let me give you an analogy: you need to build a house (match sketches to photos) and you're asking how having a Stanley hammer (OpenCV) will help. Sure, it will probably help. But you'll also need a lot of other resources: wood, time/money, pipes, cable, etc.
I think that James Elder's old work on the completeness of the edge map (using reconstruction by solving the Laplace equation) is quite relevant here. See the results at the end of this paper: http://elderlab.yorku.ca/~elder/publications/journals/ElderIJCV99.pdf
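To illustrate the "two linear projections into a common space" idea described above, here is a hedged sketch using scikit-learn's PLSCanonical; the feature vectors are random placeholders and this is not the exact method from the papers cited:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cross_decomposition import PLSCanonical

# Placeholder feature vectors: row i of each array describes the SAME person
# in the two modalities (photo vs. sketch). Real features would come from
# pixels or descriptors, not random numbers.
rng = np.random.default_rng(0)
train_photos, train_sketches = rng.random((200, 512)), rng.random((200, 512))
test_photos, test_sketches = rng.random((50, 512)), rng.random((50, 512))

# Learn two linear projections into a shared latent space.
pls = PLSCanonical(n_components=32)
pls.fit(train_photos, train_sketches)

# Project unseen photos (gallery) and sketches (probes) with the learned maps.
gallery_latent, probe_latent = pls.transform(test_photos, test_sketches)

# Match each probe sketch to the nearest gallery photo in the latent space.
matches = cdist(probe_latent, gallery_latent).argmin(axis=1)
```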
You could give Eigenfaces a try; though I have never tested them with sketches, I think they could at least be a good starting point for your research.
See the Wikipedia article: http://en.wikipedia.org/wiki/Eigenface and the OpenCV tutorial: http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html (which covers more than just Eigenfaces!)
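A minimal sketch of the Eigenfaces recognizer from that tutorial (it needs the opencv-contrib-python build for the cv2.face module; the images and labels below are placeholders):

```python
import cv2
import numpy as np

# Placeholder training data: grayscale images of identical size and one
# integer label per identity. In practice, load your photos/sketches here.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (100, 100), dtype=np.uint8) for _ in range(6)]
labels = np.array([0, 0, 1, 1, 2, 2], dtype=np.int32)

# Requires the cv2.face module from opencv-contrib.
model = cv2.face.EigenFaceRecognizer_create()
model.train(images, labels)

# Predict the identity of a probe image (e.g. a sketch resized to 100x100).
probe = rng.integers(0, 256, (100, 100), dtype=np.uint8)
label, confidence = model.predict(probe)
```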
OpenCV can be used for the feature extraction and machine learning required for this task. I guess you can start with the papers in the answers above, pick some basic features, and prototype a classifier with OpenCV.
I guess you might also want to detect and match feature points on the faces. If you use this approach, you will have to build the feature point detectors on your own (training the Viola-Jones detector in OpenCV with your own data is an option).

How to use Haar Feature results in Viola Jones Face Detection Algorithm

I am trying to understand the Viola-Jones face detection algorithm. In the paper they mention that there can be 160k-plus Haar features in a 24x24 pixel image.
I am struggling to understand how to determine the weak classifier. For example, if I have 10k images, faces + non-faces, I sweep one Haar feature over the entire set of images. Now, since the result of a feature is an integer value (the difference between the sum of the white area and the grey area), how can we use this integer value to determine whether the feature has correctly classified a face or a non-face image?
Thanks
Ali Umair
For each Haar-like feature, there is a threshold which indicates accept or reject. For example, the threshold may say that the difference between the dark and light areas must be greater than 10 for it to be possible that a face exists at this location.
The Haar-like features are at a very low level of the detection. They only help you quickly eliminate possibilities. You have to train the system as to which Haar-like features are the most useful in deciding whether a face might be present. If you have a Haar-like feature that fails, and that failure tells you that a face is very likely not present at the current location, you can then proceed to the next location without having to check all the other Haar-like features at the current location.
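To make the threshold idea concrete, here is a hedged sketch of training one weak classifier as a decision stump over a single feature's responses, with a polarity as in the Viola-Jones formulation (the feature values and labels below are made-up placeholders):

```python
import numpy as np

def train_weak_classifier(feature_values, labels, weights):
    """Pick the threshold/polarity that minimises weighted error for ONE feature.

    feature_values: the feature's integer response on each training image
    labels:         +1 for face, -1 for non-face
    weights:        AdaBoost sample weights (sum to 1)
    """
    best = (np.inf, None, None)                  # (error, threshold, polarity)
    for threshold in np.unique(feature_values):
        for polarity in (+1, -1):
            # Predict "face" when polarity * value is below polarity * threshold.
            preds = np.where(polarity * feature_values < polarity * threshold, 1, -1)
            error = np.sum(weights[preds != labels])
            if error < best[0]:
                best = (error, threshold, polarity)
    return best

# Example: 6 images, one feature response per image, uniform weights.
vals = np.array([12, -3, 25, 7, -10, 30])
labels = np.array([1, -1, 1, -1, -1, 1])
weights = np.full(6, 1 / 6)
error, threshold, polarity = train_weak_classifier(vals, labels, weights)
```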

Advice to consider when training a robust cascade classifier?

I'm training a cascade classifier in order to detect animals in images. Unfortunately my false positive rate is quite high (super high using Haar and LBP, acceptable using HOG). I'm wondering how I could possibly improve my classifier.
Here are my questions:
how many training samples are necessary for robust detection? I've read somewhere that 4000 positive and 800 negative samples are needed. Is that a good estimate?
how different should the training samples be? Is there a way to quantify image difference in order to include / exclude possible 'duplicate' data?
how should I deal with occluded objects? should I train only the part of the animal that is visible, or should I rather pick my ROI so that the average ROI is quite constant?
re occluded objects: animals have legs, arms, tails, heads etc. Since some body parts tend to be occluded quite often, does it make sense to select the 'torso' as the ROI?
should I try to downscale my images and train on smaller images sizes? Could this possibly improve things?
I'm open for any pointers here!
4000 positives to 800 negatives is a bad ratio. The thing with negative samples is that you need to train your system with as many of them as possible, since the AdaBoost algorithm (the core algorithm behind all Haar-like feature selection) depends highly on them. Using 4000 / 10000 would be a good improvement.
Detecting "animals" is a hard problem. Since your problem is a decision process, which is already hard, you are increasing complexity with your range of classification. Start with cats first. Have a system that detects cats. Then apply the same to dogs. Have, say, 40 systems detecting different animals and combine them for your purpose later on.
For training, do not use occluded objects as positives; i.e., if you want to detect frontal faces, train on frontal faces with only position and orientation changes, without any other object in front of them.
Downscaling is not important, as the Haar classifier itself works on windows downscaled to 24x24. Watch the whole Viola-Jones presentation when you have enough time.
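For concreteness, a hedged sketch of a cascade training invocation reflecting those numbers (it assumes OpenCV's opencv_traincascade tool is installed; the paths, sample counts, and stage count are placeholders):

```python
import subprocess

subprocess.run([
    "opencv_traincascade",
    "-data", "cascade_out/",          # where the stage/cascade XML files are written
    "-vec", "positives.vec",          # positives packed with opencv_createsamples
    "-bg", "negatives.txt",           # one negative image path per line
    "-numPos", "4000",
    "-numNeg", "10000",               # many more negatives, as advised above
    "-numStages", "20",
    "-featureType", "HAAR",
    "-w", "24", "-h", "24",           # the 24x24 base window
], check=True)
```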
Good luck.

Explaining the AdaBoost Algorithms to non-technical people

I've been trying to understand the AdaBoost algorithm without much success. I'm struggling with understanding the Viola-Jones paper on face detection as an example.
Can you explain AdaBoost in layman's terms and present good examples of when it's used?
AdaBoost is an algorithm that combines classifiers with poor performance, aka weak learners, into a bigger classifier with much higher performance.
How does it work? In a very simplified manner:
Train a weak learner.
Add it to the set of weak learners trained so far (with an optimal weight)
Increase the importance of samples that are still misclassified.
Go to 1.
There is a broad and detailed theory behind the scenes, but the intuition is just that: let each "dumb" classifier focus on the mistakes the previous ones were not able to fix.
AdaBoost is one of the most used algorithms in the machine learning community. In particular, it is useful when you know how to create simple classifiers (possibly many different ones, using different features), and you want to combine them in an optimal way.
In Viola and Jones, each different type of weak learner is associated with one of the 4 or 5 different Haar feature types.
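A minimal sketch of the loop described above, using decision stumps as the weak learners (illustrative only, not the actual Viola-Jones training code):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, n_rounds=50):
    """y must be +1/-1. Returns a list of (weight, stump) pairs."""
    n = len(y)
    w = np.full(n, 1 / n)                 # start with uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        # 1. Train a weak learner on the current weighting.
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y])
        if err >= 0.5:                    # no better than chance: stop
            break
        # 2. Add it to the ensemble with its optimal weight.
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, stump))
        # 3. Increase the importance of samples that are still misclassified.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    scores = sum(a * s.predict(X) for a, s in ensemble)
    return np.sign(scores)
```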
AdaBoost uses a number of training sample images (such as faces) to pick a number of good 'features'/'classifiers'. For face detection, a classifier is typically just a rectangle of pixels with a certain average intensity value and a relative size and position. AdaBoost will look at a number of classifiers and find out which one is the best predictor of a face based on the sample images. After it has chosen the best classifier it will continue to find another and another until some threshold is reached, and those classifiers combined together will provide the end result.
This part you may not want to share with non-technical people :) but it is interesting anyway. There are several mathematical tricks which make AdaBoost fast for face detection, such as the ability to add up all the pixel values of an image and store them in a two-dimensional array so that the value at any position is the sum of all the pixels above and to the left of that position. This array can be used to very quickly calculate the sum of pixel values in any rectangle within the image by combining the values at its four corners (bottom-right minus top-right minus bottom-left plus top-left), and dividing by the number of pixels gives the average. Using this trick you can quickly scan over an entire image looking for rectangles of different relative sizes that match or are close to a particular value.
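A small sketch of that summed-area ("integral image") trick in NumPy (illustrative; OpenCV provides cv2.integral for the same purpose):

```python
import numpy as np

def integral_image(gray):
    """ii[r, c] = sum of all pixels above and to the left of (r, c), inclusive."""
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the rectangle [top:bottom+1, left:right+1] in O(1),
    using the four corners: D - B - C + A."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# Tiny self-check on a 4x4 image.
gray = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(gray)
assert rect_sum(ii, 1, 1, 2, 2) == gray[1:3, 1:3].sum()
```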
Hope this helps.
This is understandable. Most of the papers you can find on the Internet retell the Viola-Jones and Freund-Schapire papers, which are the foundation of AdaBoost as applied to face detection in OpenCV. And they mostly consist of difficult formulas and algorithms from several mathematical areas combined.
Here is what can help you (short enough) -
1 - It is used in object detection and, mostly, in face detection/recognition. The most popular and quite good C++ library is OpenCV, originally from Intel. I take the face detection part of OpenCV as an example.
2 - First, a cascade of boosted classifiers working with sample rectangles ("features") is trained on a sample of images with faces (called positives) and without faces (negatives).
From some Googled paper:
"· Boosting refers to a general and provably effective method of producing a very accurate classifier by combining rough and moderately inaccurate rules of thumb.
· It is based on the observation that finding many rough rules of thumb can be a lot easier than finding a single, highly accurate classifier.
· To begin, we define an algorithm for finding the rules of thumb, which we call a weak learner.
· The boosting algorithm repeatedly calls this weak learner, each time feeding it a different distribution over the training data (in AdaBoost).
· Each call generates a weak classifier and we must combine all of these into a single classifier that, hopefully, is much more accurate than any one of the rules."
During this process the images are scanned to determine the distinctive areas corresponding to certain parts of every face. Complex, hypothesis-based calculations are applied (which are not so difficult to understand once you get the main idea).
This can take a week, and the output is an XML file which contains the learned information on how to quickly detect the human face, say, in frontal position on any picture (it can be any other object in the general case).
3 - After that you supply this file to the OpenCV face detection program, which runs quite fast with up to a 99% detection rate (depending on conditions).
As was mentioned here, the scanning speed can be increased greatly with the technique known as the "integral image".
And finally, these are helpful sources: Object Detection in OpenCV, and Generic Object Detection using AdaBoost from the University of California, 2008.

Resources