Image processing algorithm for a forensic application

Greetings,
I am trying to write an algorithm in MATLAB for detecting modifications in an image. Specifically, in the image I have to process, a person was removed (using Photoshop) and the space was filled with background pixels (a white wall). I was trying to detect repeated patterns (using background blocks), but this method is not efficient. Do you have any ideas on how to do this in MATLAB? Thanks in advance.

Forensic image analysis is a fairly big research field, with applications ranging from law enforcement to show business. It is a large (and very complicated) problem with lots of parameters, so don't be surprised if you don't find many code examples available.
Before you even think about the technology you're going to use to implement it (e.g. to MATLAB or not to MATLAB), you should take a step back and think about the actual algorithm. You should also do your homework and perform a research survey using a site like Google Scholar.
Here's a couple of points to get you started:
One of the biggest guys in image forensics is Hany Farid. Check out his website. Read his papers, read the papers that he cites, and the papers that cite him. Be sure to watch the videos there too.
Dealing with compressed images actually helps image forensics. Read about blocking artifacts in JPEG images (the most common image compression format). This link is a starting point; don't be shy about putting in a bit of effort and looking it up elsewhere, such as on Google Scholar.
Think about how editing the image alters those artifacts -- does it destroy them, replace them, or modify them in some detectable way? (A rough code sketch of one way to measure per-region blockiness follows this list.)
Read about Fourier analysis -- it is a useful tool for image forensics.
Be prepared to easily spend days or weeks on researching this problem.
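To make the blocking-artifact idea above a little more concrete, here is a minimal sketch (in Python with OpenCV/NumPy rather than MATLAB, though the same steps translate to blockproc-style code) of one crude measure: compare pixel differences that fall on the 8x8 JPEG grid against differences inside blocks, tile by tile. A retouched region that was smoothed or resaved may show a blockiness score that differs from the rest of the image. The tile size, margins, threshold and file name are arbitrary illustrative choices, not a validated forensic method.

# Rough sketch: compare gradient energy at JPEG 8x8 block boundaries vs. inside blocks,
# computed per tile. Tiles whose "blockiness" deviates strongly from the image median
# are candidates for closer inspection. All names and thresholds are illustrative.
import cv2
import numpy as np

def blockiness_map(gray, tile=64):
    gray = gray.astype(np.float32)
    # absolute horizontal/vertical differences between neighbouring pixels
    dx = np.abs(np.diff(gray, axis=1))
    dy = np.abs(np.diff(gray, axis=0))
    h, w = gray.shape
    rows, cols = h // tile, w // tile
    scores = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            sx = dx[r*tile:(r+1)*tile, c*tile:(c+1)*tile - 1]
            sy = dy[r*tile:(r+1)*tile - 1, c*tile:(c+1)*tile]
            # differences that sit exactly on the 8-pixel JPEG block grid
            bcols = [j for j in range(sx.shape[1]) if (c*tile + j) % 8 == 7]
            brows = [i for i in range(sy.shape[0]) if (r*tile + i) % 8 == 7]
            on_grid = sx[:, bcols].mean() + sy[brows, :].mean()
            off_grid = sx.mean() + sy.mean()
            scores[r, c] = on_grid / (off_grid + 1e-6)
    return scores

if __name__ == "__main__":
    img = cv2.imread("suspect.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
    scores = blockiness_map(img)
    median = np.median(scores)
    suspicious = np.argwhere(np.abs(scores - median) > 0.5 * median)
    print("tiles with unusual blockiness:", suspicious)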

Thanks for such an interesting question. Indeed, image forgery detection (as it is called) is a really big and very complex field, and there are many sub-fields (or sub-problems) within it. However, you are describing a specific sub-problem of image forgery, which is called copy-move forgery detection. Here are some papers about it:
Detection of Copy-Move Forgery in Digital Images
Exposing Digital Forgeries by Detecting Duplicated Image Regions
You can find more papers about it on Google Scholar if you like.
Some time ago I tried to code copy-move forgery detection with my own ad-hoc algorithm implemented in Python. If you want, you can read about it in my blog article (code included). The detection script is very slow and not very reliable, yet it is over 200 lines of code and has 8 adjustable parameters. So this really shows that even someone writing an ad-hoc algorithm for forgery detection must work very hard to make something usable.
Good luck.

double compression detection
copy move forgery
splicing
retouching
and many more
These are the areas where research is ongoing; recently, forgery incidents have also been found in medical images.
For copy-move you can go for a block-wise detection technique: extract a feature from each overlapping block (using dimensionality reduction or any transform technique) and then match the blocks, roughly as sketched below.
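A very rough sketch of that block-wise idea (not the algorithm from either paper above) might look like the following in Python: slide an overlapping window over the image, reduce each block to a few low-frequency DCT coefficients, sort the features lexicographically, and keep pairs of near-identical blocks whose shift vector occurs many times. Block size, feature length and thresholds are arbitrary illustrative values; an equivalent MATLAB version would use dct2 and sortrows.

# Sketch of block-wise copy-move detection: overlapping blocks -> low-frequency DCT
# features -> lexicographic sort -> flag near-duplicate blocks with a consistent shift.
import cv2
import numpy as np
from collections import Counter

def copy_move_candidates(gray, block=16, step=4, feat_len=9, dist_thresh=1.0):
    gray = gray.astype(np.float32)
    h, w = gray.shape
    feats, positions = [], []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            d = cv2.dct(gray[y:y+block, x:x+block])
            feats.append(d[:3, :3].flatten()[:feat_len])   # low-frequency coefficients
            positions.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])                      # lexicographic sort of features
    shifts, pairs = Counter(), []
    for i in range(len(order) - 1):
        a, b = order[i], order[i + 1]
        if np.linalg.norm(feats[a] - feats[b]) < dist_thresh:
            (ya, xa), (yb, xb) = positions[a], positions[b]
            shift = (yb - ya, xb - xa)
            if shift[0] < 0 or (shift[0] == 0 and shift[1] < 0):
                shift = (-shift[0], -shift[1])             # normalise shift direction
            if abs(shift[0]) + abs(shift[1]) > block:      # ignore trivially close blocks
                shifts[shift] += 1
                pairs.append((positions[a], positions[b], shift))
    # keep only pairs whose shift vector occurs many times (consistent copy-move)
    frequent = {s for s, n in shifts.items() if n > 10}
    return [p for p in pairs if p[2] in frequent]

if __name__ == "__main__":
    img = cv2.imread("tampered.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    for src, dst, shift in copy_move_candidates(img):
        print("possible duplicate blocks:", src, dst, "shift", shift)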

Related

Supervised Descent Method (SDM)

Can someone explain briefly how SDM (Supervised Descent Method) for Feature Extraction works?
I searched a lot on the Internet but couldn't find what I was looking for.
Is it only for feature extraction in videos, or can it be used in both videos and images?
If someone can explain, it would be of great help.
SDM is a method to align shapes in images. It uses feature extractors (SIFT and HOG) in the process, but it is not a feature extractor itself.
Similar methods are ASM, AAM or CLM, but SDM has better performance and accuracy.
In the case of SDM, during the training process the system learns a set of descent vectors that move an initial shape configuration (different from the shapes in the database) toward the annotated shapes in the database. Those vectors are then able to fit a new initial configuration to the face shape in the image you want to align.
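To make the idea of "descent vectors" a bit more concrete, here is a minimal sketch of the cascaded-regression update SDM performs at test time: at each stage, features extracted around the current landmark estimate are multiplied by a learned regressor to give a shape increment. The feature extractor and the learned (R_k, b_k) matrices below are random placeholders just to show the data flow; this is not IntraFace code.

# Minimal sketch of SDM inference (cascaded regression). extract_features and the
# learned (R_k, b_k) pairs are placeholders: in the real method the features are
# SIFT/HOG patches around each landmark and (R_k, b_k) come from linear regression.
import numpy as np

def sdm_fit(image, x0, regressors, extract_features):
    """x0: initial landmark vector (2*L,); regressors: list of (R_k, b_k)."""
    x = x0.copy()
    for R_k, b_k in regressors:
        phi = extract_features(image, x)   # features around current landmarks
        x = x + R_k @ phi + b_k            # learned descent step toward the true shape
    return x

if __name__ == "__main__":
    # Toy example with random placeholders, only to show the shapes involved.
    rng = np.random.default_rng(0)
    L, feat_dim, stages = 49, 128, 4
    image = rng.random((200, 200))
    x0 = rng.random(2 * L)                 # e.g. a mean-shape initialisation
    regressors = [(rng.random((2 * L, feat_dim)) * 0.01, np.zeros(2 * L))
                  for _ in range(stages)]
    fake_features = lambda img, x: rng.random(feat_dim)
    print(sdm_fit(image, x0, regressors, fake_features)[:4])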
This link can help you to learn more about it: http://arxiv.org/pdf/1405.0601v1.pdf
As for code, there are some demo samples on the main page of IntraFace, but if you are looking for the source code, I don't think you can find it.
You can use vl_sift to get started; it is even more precise than their original descriptor, but it is not as fast as their descriptor for real-time use.
Regarding their implementation, no code has been released so far. They are using a specialized version with very fast histogram calculation.

Find people in an image

I need to implement an algorithm that takes a picture (JPEG) as input and creates a new picture as output containing only the bodies (the background is removed completely). The input picture is a vacation photo with people in it, and I need to recognize the human bodies and remove the background. Can someone suggest what algorithm to use, or what book to buy to learn such algorithms?
Check this link; it will answer your question about removing the background and performing further processing.
Neural networks are particularly useful for this kind of task, but the theory is a universe of its own; if you're doing it from scratch, that's a lot of work.
This is a segmentation problem. In the general case, segmenting images is a hard research problem (I just spent five years doing a doctorate on segmenting greyscale medical images, for example) and the way you go about it is strongly tied to the type of images with which you have to deal. The best advice I can give is to go and read the appropriate literature on segmenting colour images (e.g. use Google Scholar). In terms of books, this one's a good general-purpose introduction to image processing:
http://www.amazon.co.uk/Digital-Image-Processing-Rafael-Gonzalez/dp/0130946508/ref=sr_1_7?ie=UTF8&qid=1326236038&sr=8-7
Searching for "segmenting people in colour images" on Google seems to turn up some good links, incidentally.
I have a question for you: do you want to implement this as an algorithm? If so, it might require a lot of work (especially if you are new to the field of image processing).
Otherwise, you may try using masking techniques in image-editing software like Adobe Photoshop (that would hardly take 15 minutes, depending on how well you know it).
A good book to start with image processing techniques is: "Digital Image Processing" by Gonzalez and Woods; it starts from the basics, and explains stuff in depth.
Still, it may take a lot of time to develop an algorithm for this job, so I recommend you use a library. OpenCV (open-source computer vision) is an excellent choice. The library comes with demos, which include programs for face detection, etc. The built-in functions provide a variety of capabilities (edge detection, feature identification and extraction; you may have to use these). Here's the link:
http://opencv.willowgarage.com/wiki/
The link provides a lot of reference material that you can make use of! :)
Start with facial recognition software and algorithms; they have been the most refined over the years, and as long as all of your bodies have heads, you can use EXIF data to figure out the image capture orientation (of course you can't completely rely on that), sample the facial skin to get skin-tone ranges, and find the attached body. Anything that is not head or body would then be deleted. This process assumes that a person has roughly the same skin tone on their face as on their body and that the camera flash isn't washing it out. You could grab the flash duration and some other attributes from EXIF and adjust your ranges accordingly.
A lot of software out there can recognize faces (look at iPhoto, for example), so you'll have to use the face as a reference point, along with skin tone, to find your body edges. Your result isn't going to be perfect, but as long as your approach is sound, you'll end up with something useful.
And release your software as open source when you're done so I can use it... :)
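For what it's worth, a very crude sketch of that face-first, skin-tone approach in Python with OpenCV might look like the following. It uses the bundled Haar cascade for face detection and grows a mask from HSV ranges sampled inside each detected face; the percentile margins, morphology settings and file names are arbitrary, and the output is a rough mask rather than a clean cut-out.

# Rough sketch of the "find the face, sample its skin tone, grow a body mask" idea.
# The HSV margins and morphology parameters are arbitrary illustrative choices.
import cv2
import numpy as np

img = cv2.imread("vacation.jpg")                      # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

mask = np.zeros(gray.shape, dtype=np.uint8)
for (x, y, w, h) in faces:
    # sample the skin tone from the central part of the face box
    patch = hsv[y + h // 4: y + 3 * h // 4, x + w // 4: x + 3 * w // 4]
    flat = patch.reshape(-1, 3).astype(int)
    lo = np.clip(np.percentile(flat, 5, axis=0) - [10, 40, 40], 0, 255).astype(np.uint8)
    hi = np.clip(np.percentile(flat, 95, axis=0) + [10, 40, 40], 0, 255).astype(np.uint8)
    mask |= cv2.inRange(hsv, lo, hi)

# clean up the mask and paint everything outside it white
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
result = np.full_like(img, 255)
result[mask > 0] = img[mask > 0]
cv2.imwrite("bodies_only.png", result)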
You can download a free PDF of the book Computer Vision by Richard Szeliski from the author's website. Not only do you have a free book on algorithms, but it's a book that addresses this specific problem.
http://szeliski.org/Book/
You'll see this image at the top of that page of the author's website.
Used copies of the hardcover are available for about $62 if you check addall.com. If you spend some time doing image processing, you'll appreciate having a paper copy of at least one good general reference book.
It's tough but not impossible. I can't give you any code, but Peter Norvig gave a great talk on the value of data, and in it he shows how he was able to take a picture of a lake, remove all the houses blocking the view, and fill in the lake, boats, etc.
The computer basically learned what lakes look like and that boats go on lakes, then removed the houses and filled the area in. He explains his process (but provides no code or anything).
Here it is:
Peter Norvig - The Unreasonable Effectiveness of Data
http://www.youtube.com/watch?v=yvDCzhbjYWs

OpenCV: Fingerprint Image and Compare Against Database

I have a database of images. When I take a new picture, I want to compare it against the images in this database and receive a similarity score (using OpenCV). This way I want to detect, if I have an image, which is very similar to the fresh picture.
Is it possible to create a fingerprint/hash of my database images and match new ones against it?
I'm searching for an algorithm, a code snippet, or a technical demo, not for a commercial solution.
Best,
Stefan
As Paul R has commented, this "fingerprint/hash" is usually a set of feature vectors or feature descriptors. But most feature vectors used in computer vision are too computationally expensive to search against a database, so this task needs a special kind of feature descriptor, because descriptors such as SURF and SIFT take too much time to search even with various optimizations.
The only thing that OpenCV has for your task (object categorization) is an implementation of Bag of Visual Words (BOW).
It can compute a special kind of image feature and train a visual-word vocabulary. You can then use this vocabulary to find similar images in your database and compute a similarity score.
Here is the OpenCV documentation for bag of words. OpenCV also has a sample named bagofwords_classification.cpp. It is really big, but it might be helpful.
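As a condensed illustration of that pipeline (not the bagofwords_classification.cpp sample itself), a Python sketch using OpenCV's BOW classes might look roughly like this. The vocabulary size, file names and the Euclidean distance used as a similarity score are arbitrary choices, and it assumes a recent OpenCV build where SIFT is available and every image yields keypoints.

# Condensed sketch of a Bag of (visual) Words pipeline for image similarity with OpenCV.
import cv2
import numpy as np

db_paths = ["db_001.jpg", "db_002.jpg", "db_003.jpg"]   # hypothetical database images
query_path = "query.jpg"                                # hypothetical new picture

sift = cv2.SIFT_create()
trainer = cv2.BOWKMeansTrainer(100)                     # 100 visual words (arbitrary)

# 1) collect descriptors from the database images and build the vocabulary
for p in db_paths:
    img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    if desc is not None:
        trainer.add(np.float32(desc))
vocabulary = trainer.cluster()

# 2) build a BOW descriptor extractor on top of that vocabulary
matcher = cv2.BFMatcher(cv2.NORM_L2)
bow = cv2.BOWImgDescriptorExtractor(sift, matcher)
bow.setVocabulary(vocabulary)

def bow_histogram(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kp = sift.detect(img, None)
    return bow.compute(img, kp)           # normalised visual-word histogram

# 3) compare the query histogram against every database histogram
query_hist = bow_histogram(query_path)
for p in db_paths:
    d = np.linalg.norm(query_hist - bow_histogram(p))
    print(p, "distance:", d)              # smaller distance = more similar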
Content-based image retrieval systems are still a field of active research: http://citeseerx.ist.psu.edu/search?q=content-based+image+retrieval
First you have to be clear, what constitutes similar in your context:
Similar color distribution: Use something like color descriptors for subdivisions of the image, you should get some fairly satisfying results.
Similar objects: Since the computer does not know, what an object is, you will not get very far, unless you have some extensive domain knowledge about the object (or few object classes). A good overview about the current state of research can be seen here (results) and soon here.
There is no "serve all needs"-algorithm for the problem you described. The more you can share about the specifics of your problem, the better answers you might get. Posting some representative images (if possible) and describing the desired outcome is also very helpful.
This would be a good question for computer-vision.stackexchange.com, if it already existed.
You can use a pHash (perceptual hash) algorithm, store the hash values in your database, and then compare them with code like this:
double const mismatch = algo->compare(image1Hash, image2Hash);
Here the 'mismatch' value tells you how similar the two images are.
The available hash functions (these are the algorithms in OpenCV's img_hash module) are:
AverageHash
PHASH
MarrHildrethHash
RadialVarianceHash
BlockMeanHash
ColorMomentHash
These functions are good enough to evaluate image similarity from just about every angle.
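For reference, the same comparison in Python via OpenCV's img_hash module (shipped with opencv-contrib-python) might look roughly like this; the file names are placeholders.

# Rough Python equivalent of the compare() call above, using OpenCV's img_hash module.
import cv2

algo = cv2.img_hash.PHash_create()          # could also be AverageHash, BlockMeanHash, ...

img1 = cv2.imread("stored_in_db.jpg")       # placeholder file names
img2 = cv2.imread("fresh_picture.jpg")

hash1 = algo.compute(img1)                  # small perceptual hash, storable in a database
hash2 = algo.compute(img2)

mismatch = algo.compare(hash1, hash2)       # for PHash: Hamming distance, lower = more similar
print("mismatch:", mismatch)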

Help to learn Image Search algorithm

I am a beginner in image processing. I want to write an application in C++ or in C# for
Searching an image in a list of images
Searching for a particular feature (e.g. a face) in a list of images.
Can anybody suggest where should I start from?
What all should I learn before doing this?
Where can I find the correct information regarding this?
In terms of the second one, you should start off with learning how to solve the decision problem of whether a square patch contains a face (or whatever kind of object you are interested in). For that, I suggest you study a little bit of machine learning, the AdaBoost algorithm, Haar features, and Viola-Jones.
Once you know how to do that, the trick is really just to take a sliding window across your image, feeding the contents of that window into your detector. Then you shrink your main input image and repeat the process until your input image has gotten smaller than the minimum size input for your detector. There are, of course, several clever ways to parallelize the computation and speed it up, but the binary detector is really the interesting part of the process.
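In rough Python, that scan-and-shrink loop looks something like the sketch below; is_face stands in for whatever trained binary detector (e.g. an AdaBoost cascade over Haar features) you end up with, and the window size, stride and scale factor are arbitrary.

# Sketch of the multi-scale sliding-window loop described above. `is_face` is a
# placeholder for your trained binary detector.
import cv2

def is_face(patch):
    # placeholder: replace with your AdaBoost/Haar-feature classifier
    return False

def detect(image, window=24, stride=4, scale=1.25):
    detections = []
    current = image
    factor = 1.0
    while current.shape[0] >= window and current.shape[1] >= window:
        for y in range(0, current.shape[0] - window + 1, stride):
            for x in range(0, current.shape[1] - window + 1, stride):
                if is_face(current[y:y + window, x:x + window]):
                    # map the hit back to coordinates in the original image
                    detections.append((int(x * factor), int(y * factor),
                                       int(window * factor)))
        # shrink the image and repeat, so larger faces fit into the fixed-size window
        factor *= scale
        current = cv2.resize(current, (int(image.shape[1] / factor),
                                       int(image.shape[0] / factor)))
    return detections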
You may find some of the material linked from the CSE 517: Machine Learning - Syllabus helpful in getting into machine learning and understanding AdaBoost. You will certainly find the Viola-Jones paper of interest.

What algorithm to use to obtain Objects from an Image

I would like to know what algorithms are used to take an image, find the objects present in it, and process them (i.e. extract information about them). Also, how is this done?
I agree with Sid Farkus, there is no simple answer to this question.
Maybe you can get started by checking out the Open Computer Vision Library. There is a Wiki page on object detection with links to a How-To and to papers.
You may find other examples and approaches (i.e. algorithms); it's likely that the algorithms differ by application (i.e. depending on what you actually want to detect).
There are many ways to do object detection, and it is still an open problem.
You can start with template matching. It is probably the simplest approach and consists of sliding the known image (IA) over the new image (IB), much like applying a filter to a signal: the response has a maximum where the object is found, as shown in the video. But that technique has several cons; it does not handle variations in scale or rotation, so it has little practical use on its own.
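For completeness, here is what basic template matching looks like with OpenCV in Python (strictly speaking it computes a normalised cross-correlation rather than a convolution); the file names and the 0.8 threshold are placeholders.

# Template matching in OpenCV (normalised cross-correlation). Paths and the 0.8
# threshold are placeholder values for illustration.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)       # new image IB
template = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # known image IA

response = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

if max_val > 0.8:                      # peak in the response map = likely match
    h, w = template.shape
    print("object found at", max_loc, "size", (w, h), "score", max_val)
else:
    print("no confident match (and remember: no scale/rotation invariance)")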
Another, more robust option is feature matching, which consists of creating a dataset of features such as SIFT, SURF or ORB for different objects; with these you can train an SVM to recognize the objects.
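A small sketch of that feature-matching option using ORB follows (training an SVM on top of such features is a separate step and not shown); the file names and the 0.75 ratio-test threshold are illustrative.

# Sketch of feature matching: ORB keypoints matched with a Lowe-style ratio test.
import cv2

obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)     # known object, placeholder
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(obj, None)
kp2, des2 = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)

# ratio test keeps only distinctive matches
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(len(good), "good matches; many consistent matches suggest the object is present")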
You can also check deformable part models. However, the state of the art in object detection is based on deep learning, such as Faster R-CNN and AlexNet, which learn the features that are then used to detect/recognize the objects.
Well, this is hardly an answerable question, but for many computer vision applications a good starting point is the Hough transform.

Resources