Supervised Descent Method (SDM) - image

Can someone explain briefly how SDM (Supervised Descent Method) for Feature Extraction works?
I searched a lot on the Internet but couldn't find what I was looking for.
Is it only for feature extraction in videos, or can it be used in both videos and images?
If someone can explain, it would be of great help.

SDM is a method for aligning shapes in images. It uses feature extractors (SIFT and HOG) in the process, but it is not itself a feature extractor.
Similar methods are ASM, AAM, and CLM, but SDM has better performance and accuracy.
During training, SDM learns a set of descent vectors that map from an initial shape configuration (different from the shapes in the database) to the ground-truth shapes in the database. Those vectors have the ability to fit a new initial configuration to the face shape in the image you want to align.
This link can help you to learn more about it: http://arxiv.org/pdf/1405.0601v1.pdf
About the code, there are some demo samples on the main page of IntraFace, but if you are looking for the source code, I don't think you can find it.
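Since no official code is available, here is a toy 1-D sketch of the training idea only: learn a linear "descent map" R by least squares from (feature, required update) pairs, then apply it iteratively to a new starting point. The `feature` function below is a made-up stand-in for the SIFT features the real method extracts around landmarks, and the whole setup is a simplification, not the actual face-alignment pipeline.

```python
import math
import random

def feature(x, target):
    # Stand-in for an appearance feature: in a real image it is computable
    # from pixels alone; here we simulate it as a nonlinear function of the
    # (unknown to the learner) offset to the target.
    return math.tanh(x - target)

# --- Training: sample perturbed starts, record (feature, required update) ---
random.seed(0)
target = 5.0
pairs = []
for _ in range(500):
    x0 = target + random.uniform(-2.0, 2.0)        # perturbed initial shape
    pairs.append((feature(x0, target), target - x0))  # supervised descent step

# Least-squares fit of update ~ R * feature (no bias; the data is symmetric).
R = sum(p * d for p, d in pairs) / sum(p * p for p, _ in pairs)

# --- Fitting: apply the learned descent map iteratively to a new start ---
x = 7.3
for _ in range(10):
    x = x + R * feature(x, target)
# x is now close to the target of 5.0
```

The real method learns a cascade of such regressors (one per iteration) over high-dimensional SIFT features, but the supervised-descent principle is the same.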

You can use vl_sift to get started; it is even more precise than their original descriptor, but it is not as fast, so it is less suited to real-time implementation.
As for their implementation, no code has been released so far. They use a specialized version with very fast histogram calculation.

Related

When should these methods be used to calculate blob orientation?

In image processing, each of the following methods can be used to get the orientation of a blob region:
1. Using second-order central moments
2. Using PCA to find the axis
3. Using the distance transform to get the skeleton and axis
4. Other techniques, like fitting an ellipse to the contour of the region
When should I consider using a specific method? How do they compare, in terms of accuracy and performance?
I'll give you a vague general answer, and I'm sure others will give you more details. This issue comes up all the time in image processing: there are N ways to solve my problem, so which one should I use? The answer is: start with the simplest one that you understand the best. For most people, that's probably 1 or 2 in your example. In most cases, they will be nearly identical and sufficient. If for some reason the techniques don't work on your data, you have now learned for yourself a case where the techniques fail. Now you need to start exploring other techniques. This is where the hard work of being an image processing practitioner comes in. There are no silver bullets; there's a grab bag of techniques that work in specific contexts, which you have to learn and figure out. When you learn this for yourself, you will become god-like among your peers.
For this specific example, if your data is roughly ellipsoidal, all these techniques will give similar results. As your data moves away from ellipsoidal (say, spider-like), PCA, second-order moments, and contour fitting will start to give poor results. The skeleton approaches are more robust, but mapping a complex skeleton to a single axis/orientation can become a very difficult problem and may require more a priori knowledge about the blob.
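For approach 1, the orientation falls out of the three second-order central moments directly. A minimal pure-Python sketch, with the blob given as a list of pixel coordinates:

```python
import math

def blob_orientation(points):
    """Orientation (radians) of a binary blob from second-order central moments."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    # Angle of the major axis; atan2 also handles the mu20 == mu02 case.
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)

# A thick diagonal strip: its major axis should come out at 45 degrees.
blob = [(t - w, t + w) for t in range(-10, 11) for w in (-1, 0, 1)]
angle_deg = math.degrees(blob_orientation(blob))
print(round(angle_deg, 1))  # 45.0
```

This is exactly what approach 2 (PCA) computes as well for two dimensions, which is why the two usually agree.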

OpenCV: Fingerprint Image and Compare Against Database

I have a database of images. When I take a new picture, I want to compare it against the images in this database and receive a similarity score (using OpenCV). This way I want to detect whether the database contains an image that is very similar to the fresh picture.
Is it possible to create a fingerprint/hash of my database images and match new ones against it?
I'm searching for an algorithm, code snippet, or technical demo, not for a commercial solution.
Best,
Stefan
As Paul R has commented, this "fingerprint/hash" is usually a set of feature vectors or feature descriptors. But most feature vectors used in computer vision are too computationally expensive to search against a database, so this task needs a special kind of feature descriptor; descriptors such as SURF and SIFT would take too much time for searching, even with various optimizations.
The only thing that OpenCV has for your task (object categorization) is an implementation of Bag of Visual Words (BOW).
It can compute a special kind of image feature and train a visual-word vocabulary. You can then use this vocabulary to find similar images in your database and compute a similarity score.
Here is the OpenCV documentation for bag of words. OpenCV also has a sample named bagofwords_classification.cpp. It is really big but might be helpful.
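To make the mechanics concrete, here is a toy pure-Python sketch of the BOW pipeline, using 2-D points as stand-ins for real SIFT/SURF descriptors (OpenCV's BOWKMeansTrainer and BOWImgDescriptorExtractor do the same steps on real features):

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, init_centers, iters=15):
    """Plain Lloyd's k-means to build the visual-word vocabulary."""
    centers = list(init_centers)
    k = len(centers)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return centers

def bow_histogram(descriptors, vocab):
    """Represent an image as a normalized histogram of nearest visual words."""
    hist = [0] * len(vocab)
    for d in descriptors:
        hist[min(range(len(vocab)), key=lambda c: dist2(d, vocab[c]))] += 1
    total = sum(hist)
    return [h / total for h in hist]

def similarity(h1, h2):
    # Histogram intersection: 1.0 means identical word distributions.
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy "training descriptors" drawn from three clusters -> 3-word vocabulary.
rng = random.Random(1)
train = [(rng.gauss(cx, 0.3), rng.gauss(cy, 0.3))
         for cx, cy in ((0, 0), (5, 0), (0, 5)) for _ in range(50)]
# Seed one center per toy cluster (real code would use k-means++-style init).
vocab = kmeans(train, [train[0], train[50], train[100]])

img_a = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)]
img_b = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)]  # same "content"
img_c = [(rng.gauss(5, 0.3), rng.gauss(0, 0.3)) for _ in range(30)]  # different "content"

sim_ab = similarity(bow_histogram(img_a, vocab), bow_histogram(img_b, vocab))
sim_ac = similarity(bow_histogram(img_a, vocab), bow_histogram(img_c, vocab))
```

Searching the database then reduces to comparing short histograms instead of raw descriptor sets, which is what makes it fast enough.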
Content-based image retrieval systems are still a field of active research: http://citeseerx.ist.psu.edu/search?q=content-based+image+retrieval
First you have to be clear about what constitutes "similar" in your context:
Similar color distribution: use something like color descriptors for subdivisions of the image, and you should get some fairly satisfying results.
Similar objects: since the computer does not know what an object is, you will not get very far unless you have some extensive domain knowledge about the object (or only a few object classes). A good overview of the current state of research can be seen here (results) and soon here.
There is no "serve all needs"-algorithm for the problem you described. The more you can share about the specifics of your problem, the better answers you might get. Posting some representative images (if possible) and describing the desired outcome is also very helpful.
This would be a good question for computer-vision.stackexchange.com, if it already existed.
You can use the pHash algorithm, store the pHash value in your database, and then compare a new image's hash against the stored ones:
double const mismatch = algo->compare(image1Hash, image2Hash);
Here the 'mismatch' value can easily tell you the similarity between two images (a lower value means the images are more alike).
pHash functions:
AverageHash
PHash
MarrHildrethHash
RadialVarianceHash
BlockMeanHash
ColorMomentHash
These functions are good enough to evaluate image similarity in most respects.
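To see what these hashes do, here is a pure-Python sketch of the simplest one (AverageHash) on an 8x8 grayscale patch. Real code would first resize the image down to 8x8 (e.g. with cv2.resize), and PHash uses a DCT instead of a plain mean, but the compare-by-Hamming-distance step is the same:

```python
def average_hash(gray8x8):
    """64-bit hash: each bit says whether a pixel is above the patch mean."""
    flat = [p for row in gray8x8 for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits: 0 = identical, 64 = completely different."""
    return bin(h1 ^ h2).count("1")

# Two nearly identical patches and one very different patch.
img1 = [[(x + y) * 8 for x in range(8)] for y in range(8)]
img2 = [[(x + y) * 8 + 1 for x in range(8)] for y in range(8)]  # slight brightness shift
img3 = [[(7 - x) * 30 for x in range(8)] for y in range(8)]

d_similar = hamming(average_hash(img1), average_hash(img2))      # 0
d_different = hamming(average_hash(img1), average_hash(img3))    # large
```

Because the hash is relative to the patch's own mean, a global brightness shift leaves it unchanged, which is why the first pair matches exactly.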

Image Processing algorithm for forensic application

Greetings,
I am trying to write an algorithm in MATLAB for detecting modifications in an image. Specifically, in the image I have to process, a person was removed (using Photoshop) and the space was filled with background pixels (a white wall). I was trying to detect repeated patterns (using background blocks), but this method is not efficient. Do you guys have any ideas on how to do this in MATLAB? Thanks in advance.
Forensic image analysis is a fairly big research field, with huge applications ranging from law enforcement to show-biz. It's a huge (but very complicated) problem with lots of parameters, so don't be surprised if you don't find a lot of code examples available.
Before you even think about the technology you're going to use to implement it (e.g. to MATLAB or not to MATLAB), you should take a step back and think about the actual algorithm. You should also do your homework and perform a research survey using a site like Google Scholar.
Here are a couple of points to get you started:
One of the biggest guys in image forensics is Hany Farid. Check out his website. Read his papers, read the papers that he cites, and the papers that cite him. Be sure to watch the videos there too.
Dealing with compressed images actually helps image forensics. Read about blocking artifacts in JPEG images (most common image compression format). This link is a starting point, don't be shy to put a bit of effort in and look it up elsewhere, like Google Scholar.
Think about how editing the image alters the artifacts -- does it destroy them, replace them, modify them in some detectable way?
Read about Fourier analysis -- it is a useful tool for image forensics
Be prepared to easily spend days or weeks on researching this problem.
Thanks for such an interesting question. Indeed, image forgery detection (as it is called) is a really big and very complex field, and there are many sub-fields (or sub-problems) within it. However, you are talking about a specific sub-problem of image forgery, which is called copy-move forgery detection. Here are some papers about it:
Detection of Copy-Move Forgery in Digital Images
Exposing Digital Forgeries by Detecting Duplicated Image Regions
You can find more papers about it on Google Scholar if you like.
Some time ago I tried to code copy-move forgery detection with an ad-hoc algorithm of my own, implemented in Python. If you want, you can read about it in my blog article (code included). The detection script is very slow and not very reliable; even so, it has over 200 lines of code and 8 adjustable parameters. This shows that even someone coding an ad-hoc algorithm for forgery detection must work very hard to make something usable.
Good luck.
double compression detection
copy-move forgery
splicing
retouching
and many more
The above are areas in which research is ongoing; recently, forgery incidents have also been found in medical images.
For copy-move you can go for a block-wise detection technique: extract features from overlapping blocks using dimensionality reduction or some transform technique, and then match the blocks.
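A minimal sketch of that block-wise idea in Python: hash every overlapping block and report pairs of distant blocks with identical content. (Real methods use DCT or PCA features per block so matches survive compression and noise; exact matching, as here, only works on uncompressed copies.)

```python
import random

def find_copy_moves(img, block=4, min_shift=6):
    """Return pairs of far-apart block positions with identical pixel content."""
    h, w = len(img), len(img[0])
    seen = {}
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(tuple(img[y + dy][x + dx] for dx in range(block))
                        for dy in range(block))
            if key in seen:
                for (py, px) in seen[key]:
                    # Require matched blocks to be far apart, so near-duplicate
                    # texture in smooth regions is not flagged.
                    if abs(py - y) + abs(px - x) >= min_shift:
                        matches.append(((py, px), (y, x)))
                seen[key].append((y, x))
            else:
                seen[key] = [(y, x)]
    return matches

# Toy image: random texture with one 6x6 region copied to another place.
rng = random.Random(0)
img = [[rng.randrange(256) for _ in range(24)] for _ in range(24)]
for dy in range(6):
    for dx in range(6):
        img[14 + dy][2 + dx] = img[2 + dy][2 + dx]   # the copy-move "forgery"

matches = find_copy_moves(img)   # 3x3 grid of 4x4 blocks fit inside the copy
```

A white wall is the hard case for this: many background blocks look alike anyway, which is why feature-based matching with a distance threshold (rather than exact equality) is needed in practice.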

Help to learn Image Search algorithm

I am a beginner in image processing. I want to write an application in C++ or in C# for
Searching an image in a list of images
Searching for a particular feature (e.g. a face) in a list of images.
Can anybody suggest where should I start from?
What all should I learn before doing this?
Where can I find the correct information regarding this?
For the second one, you should start off by learning how to solve the decision problem of whether a square patch contains a face (or whatever kind of object you are interested in). For that, I suggest you study a little bit of machine learning, the AdaBoost algorithm, Haar features, and Viola-Jones.
Once you know how to do that, the trick is really just to take a sliding window across your image, feeding the contents of that window into your detector. Then you shrink your main input image and repeat the process until your input image has gotten smaller than the minimum size input for your detector. There are, of course, several clever ways to parallelize the computation and speed it up, but the binary detector is really the interesting part of the process.
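The sliding-window-plus-pyramid loop can be sketched as follows; the `detect_window` function here is a dummy that fires on uniformly bright windows, standing in for the trained AdaBoost/Haar detector:

```python
WIN = 4  # detector's fixed input size

def detect_window(img, y, x, win=WIN):
    """Dummy binary detector: fires if the whole window is bright."""
    return all(img[y + dy][x + dx] > 200 for dy in range(win) for dx in range(win))

def downsample(img):
    """Halve each dimension by averaging 2x2 blocks (one pyramid step)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) // 4 for x in range(w)]
            for y in range(h)]

def detect_multiscale(img, win=WIN):
    hits = []
    scale = 1
    # Shrink the image until it is smaller than the detector's input size.
    while len(img) >= win and len(img[0]) >= win:
        for y in range(len(img) - win + 1):
            for x in range(len(img[0]) - win + 1):
                if detect_window(img, y, x, win):
                    # Map the hit back to original-image coordinates.
                    hits.append((y * scale, x * scale, win * scale))
        img = downsample(img)
        scale *= 2
    return hits

# 16x16 dark image with an 8x8 bright square: matched at full size by many
# small windows, and as one whole object after one downsampling step.
img = [[255 if 4 <= y < 12 and 4 <= x < 12 else 0 for x in range(16)]
       for y in range(16)]
hits = detect_multiscale(img)    # includes the scale-2 hit (4, 4, 8)
```

Real detectors add non-maximum suppression to merge the many overlapping hits into one detection per object.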
You may find some of the material linked from the CSE 517: Machine Learning - Syllabus helpful in getting into machine learning and understanding AdaBoost. You will certainly find the Viola-Jones paper of interest.

What algorithm to use to obtain Objects from an Image

I would like to know what algorithm is used to take an image, find the objects present in it, and process (give information about) them. Also, how is this done?
I agree with Sid Farkus, there is no simple answer to this question.
Maybe you can get started by checking out the Open Computer Vision Library. There is a Wiki page on object detection with links to a How-To and to papers.
You may find other examples and approaches (i.e. algorithms); it's likely that the algorithms differ by application (i.e. depending on what you actually want to detect).
There are many ways to do object detection, and it is still an open problem.
You can start with template matching. It is probably the simplest approach: you convolve the known image (IA) with the new image (IB). It is a fairly simple idea because it is like applying a filter to a signal; the filter response has a maximum where it finds the object, as shown in the video. But this technique has several cons: it does not handle variations in scale or rotation, so it has little real application.
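A pure-Python sketch of this idea using normalized cross-correlation (what cv2.matchTemplate computes with TM_CCOEFF_NORMED), so the response peaks at the object's location regardless of brightness:

```python
import math

def ncc(patch, tmpl):
    """Normalized cross-correlation between two equal-sized patches, in [-1, 1]."""
    n = len(tmpl) * len(tmpl[0])
    fp = [p for row in patch for p in row]
    ft = [t for row in tmpl for t in row]
    mp, mt = sum(fp) / n, sum(ft) / n
    num = sum((a - mp) * (b - mt) for a, b in zip(fp, ft))
    dp = math.sqrt(sum((a - mp) ** 2 for a in fp))
    dt = math.sqrt(sum((b - mt) ** 2 for b in ft))
    return num / (dp * dt) if dp and dt else 0.0

def match_template(img, tmpl):
    """Slide the template over the image; return the best (y, x) and its score."""
    th, tw = len(tmpl), len(tmpl[0])
    best_score, best_pos = -2.0, (0, 0)
    for y in range(len(img) - th + 1):
        for x in range(len(img[0]) - tw + 1):
            patch = [row[x:x + tw] for row in img[y:y + th]]
            s = ncc(patch, tmpl)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score

# Textured background with the template embedded at (3, 5).
tmpl = [[0, 100], [100, 0]]
img = [[(x * 7 + y * 13) % 50 for x in range(10)] for y in range(10)]
img[3][5], img[3][6], img[4][5], img[4][6] = 0, 100, 100, 0

best_pos, best_score = match_template(img, tmpl)   # (3, 5), score ~1.0
```

The nested loops make the cost O(image area x template area), which is why real implementations use FFT-based correlation.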
A more robust option is feature matching: create a dataset with features such as SIFT, SURF, or ORB for different objects, and with this you can train an SVM to recognize the objects.
You can also check deformable part models. However, the state of the art in object detection is based on deep learning, such as Faster R-CNN and AlexNet, which learn the features that will be used to detect/recognize the objects.
Well, this is hardly an answerable question, but for most computer vision applications a good starting point is the Hough Transform.
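A minimal sketch of the Hough transform for straight lines: each edge pixel votes for every (theta, rho) parameter cell of a line passing through it, and peaks in the accumulator correspond to detected lines.

```python
import math

def hough_lines(points, thetas=180):
    """Accumulate votes in (theta index, rho) space for each edge pixel."""
    acc = {}
    for x, y in points:
        for t in range(thetas):
            theta = math.pi * t / thetas
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# Edge pixels on the vertical line x = 10, plus a few noise points.
points = [(10, y) for y in range(20)] + [(3, 7), (15, 2), (8, 18)]
acc = hough_lines(points)
(t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
# The winning cell corresponds to the line x = 10 (rho = 10 at theta ~ 0),
# with one vote per collinear point.
```

Because voting is per pixel, the method tolerates gaps and noise in the edge map, which is what makes it a robust starting point.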
