Is there any OpenCV 2.4.2 function or class that implements an Image Background Subtraction algorithm?
The images I will be processing will be portraits, such as images from a webcam. I want to keep only the upper part of the body (from the chest upwards) and remove everything else in the background.
If possible I'd like to avoid implementing these algorithms myself. However, if none currently exist in OpenCV, then please suggest some.
I tried to look for an answer, but so far everything I've found deals with background subtraction in videos or sequences of pictures, which is not what I want. I want to process single, static, independent images only.
You can try the running-average method for this using the cvRunningAvg function in OpenCV (or any other library).
In this approach, you take the average of many frames as the background. If the camera is static, it gives a good result, especially if you have some frames containing only the background.
It also helps to apply some morphological operations, such as erosion and dilation, to remove small noise while grabbing frames.
You can read the code and an explanation here: BackGround Extraction using Running Average
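Here is a minimal sketch of the running-average idea using the modern cv2 Python API (cv2.accumulateWeighted is the successor of cvRunningAvg); the camera index, alpha, and threshold values are illustrative assumptions, not values from the linked article:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                        # webcam; any frame source works
    ret, frame = cap.read()
    avg = np.float32(frame)                          # running-average accumulator

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        cv2.accumulateWeighted(frame, avg, 0.05)     # slowly update the background
        background = cv2.convertScaleAbs(avg)        # back to uint8 for diffing
        diff = cv2.absdiff(frame, background)        # foreground = |frame - background|
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.erode(mask, None, iterations=2)   # morphology to remove small noise
        mask = cv2.dilate(mask, None, iterations=2)
        cv2.imshow("foreground mask", mask)
        if cv2.waitKey(30) & 0xFF == 27:             # Esc to quit
            break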
I want to remove background and get deer as a foreground image.
This is my source image captured by trail camera:
This is what I want to get. This output image can be a binary image or RGB.
I have worked on it and tried many methods, but each failed at some specific point. So please first understand what my exact problem is:
Images are captured by a trail camera with a motion detector; when a deer comes in front of the camera, it captures an image.
The scene changes with the weather and with day and night, so I can't use frame differencing or anything like that.
Segmentation may not work correctly because the foreground (deer) and the background have the same color in many cases.
If anyone still finds any ambiguity in my question, please ask me to clarify first and then answer; it will be appreciated.
Thanks in advance.
Here's what I would do:
As was commented on your question, you can detect the deer and then perform GrabCut to segment it from the picture.
To detect the deer, I would couple a classifier with a sliding-window approach. That means you'll have a classifier that, given a patch in the image (it can be a large patch), outputs a score of how similar that patch is to a deer. The sliding-window approach means you loop over the window size and then over the window location; for each position of the window in the image, you apply the classifier to that window and get a score of how much it "looks like" a deer. Once you've done that, threshold all the scores to keep the "best" windows, i.e. the windows most similar to a deer. The rationale is that if a deer is present at some location in the image, the classifier will output a high score for all windows that are close to or overlap the actual deer location. We would like to merge all those locations into a single location, which can be done with the groupRectangles function from OpenCV:
http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#grouprectangles
Take a look at a face detection example from OpenCV; it does essentially the same thing (sliding window + classifier), where the classifier is a Haar cascade.
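To make the sliding-window loop concrete, here is a rough sketch; score_patch is a hypothetical placeholder for whatever classifier you end up training (HOG+SVM, a CNN, ...), and the window sizes, stride, and threshold are assumptions:

    import cv2

    def score_patch(patch):
        """Hypothetical classifier: returns a deer-likeness score in [0, 1]."""
        raise NotImplementedError

    def detect_deer(image, win_sizes=(64, 128, 256), stride=16, thresh=0.8):
        candidates = []
        for size in win_sizes:                               # loop over window sizes
            for y in range(0, image.shape[0] - size, stride):
                for x in range(0, image.shape[1] - size, stride):
                    patch = image[y:y + size, x:x + size]
                    if score_patch(patch) > thresh:          # keep the "best" windows
                        candidates.append([x, y, size, size])
        # merge close/overlapping detections into single locations
        rects, weights = cv2.groupRectangles(candidates, 1, 0.2)
        return rects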
Now, I didn't say what that "deer classifier" could be. You can use HOG+SVM (both are included in OpenCV) or the much more powerful approach of running a deep convolutional neural network (deep CNN). Luckily, you don't need to train a deep CNN yourself. You can use the following packages with their "off the shelf" ImageNet networks (which are very powerful and might even be able to identify a deer without further training):
Decaf, which can be used only for research purposes:
https://github.com/UCB-ICSI-Vision-Group/decaf-release/
Or Caffe - which is BSD licensed:
http://caffe.berkeleyvision.org/
There are other packages of which you can read about here:
http://deeplearning.net/software_links/
The most common ones are Theano, Cuda ConvNets, and OverFeat (but that's really opinion-based; you should choose the best package from the list I linked to).
The "off the shelf" ImageNet networks were trained on roughly 10M images from 1000 categories. If those categories contain "deer", you can just use them as-is. If not, you can use them to extract features (a 4096-dimensional vector in the case of Decaf) and train a classifier on positive and negative images to build a "deer classifier".
Now, once you've detected the deer, meaning you have a bounding box around it, you can apply GrabCut:
http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
You'll need an initial scribble on the deer to perform GrabCut. You can just take a horizontal line in the middle of the bounding box and hope it lands on the deer's torso. A more elaborate approach would be to find the symmetry axis of the deer and use that as the scribble, but you would have to research and implement some method for extracting a symmetry axis from the image.
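For reference, a minimal GrabCut sketch along the lines of that tutorial, assuming you already have a detection bounding box from the previous step (the file name and rectangle are hypothetical):

    import cv2
    import numpy as np

    img = cv2.imread("deer.jpg")                     # hypothetical input image
    rect = (50, 50, 300, 200)                        # (x, y, w, h) from your detector
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)        # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    # keep pixels labeled definite or probable foreground
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
    result = img * fg[:, :, None]                    # deer on a black background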
That's about it. Not straightforward, but so is the problem.
Please let me know if you have any questions.
Try OpenCV's background subtraction with Mixture of Gaussians models. They should be adaptable enough for your scenes. Of course, the final performance will depend on the scenario, but it is worth trying.
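A minimal sketch of what that looks like in Python (cv2.createBackgroundSubtractorMOG2 in OpenCV 3+; in 2.4 the class is cv2.BackgroundSubtractorMOG2); the video file name is a placeholder:

    import cv2

    cap = cv2.VideoCapture("trail_camera.avi")       # hypothetical input
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        fg_mask = subtractor.apply(frame)            # 255 = foreground, 127 = shadow
        cv2.imshow("foreground", fg_mask)
        if cv2.waitKey(30) & 0xFF == 27:
            break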
Since you just want to separate the background from the foreground I think you do not need to recognize the deer. You need to recognize an object in motion in the scene. You just need to separate what is static in a significant interval of time (background) from what is not static: the deer.
There are algorithms that combine multiple frames from the same scene in order to determine the background, like THIS ONE.
You mentioned that the scene changes with the weather and with day and night across photos of different deer.
You could implement a solution where, when motion is detected, the camera takes several photos at intervals instead of a single one.
The interval has to be long enough to catch the deer in different positions (or out of the scene), yet short enough that the scene does not vary much. You may still need to deal with some brightness variation, but I think it is feasible to determine the background from these frames and then segment the deer in the "motion frame".
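A sketch of how the burst idea could work, assuming you have saved the captured frames to disk (the file names and threshold are placeholders):

    import cv2
    import numpy as np

    # hypothetical burst of 5 shots taken when motion was detected
    burst = [cv2.imread("shot_%d.jpg" % i) for i in range(5)]
    background = np.median(np.stack(burst), axis=0).astype("uint8")

    motion_frame = burst[2]                          # the frame containing the deer
    diff = cv2.absdiff(motion_frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, deer_mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)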
I am currently working on a project that analyzes traffic camera video streams. I have written an algorithm in Octave (a .m file) that returns the outlines of multiple cars as individual blobs, along with each blob's outline and center coordinates. Specifically, for the blob part, I am using bwlabel and bwboundaries, applied to a binary image I have created where 1 is car and 0 is background, according to the rest of my algorithm. The image below shows this.
http://imgur.com/25hgrUP
All of the blobs are cars, including the group of about five blobs next to each other. Those blobs are all one van, but its different colors and features have thrown off the detection system. Does anyone know of an easy way to combine blobs in close proximity into one blob? I am asking about an existing algorithm or function already in Octave packages or Matlab toolboxes. If not, I will write the code from scratch and make it happen. This question is just a call to ask whether pre-existing solutions exist, not a call to write code for me, unless you want to :).
Thanks for your help,
AeroVTP
You can solve this (to an extent) with morphological closing; in Matlab it's imclose. You'll need to be careful, though, as noise that is too close may be included, and true blobs that are too far away may be excluded.
Although imclose is a good idea, running it just once has a much "stronger" effect on the image than running the 'erode' and 'dilate' operations multiple times. I personally ran the erode and dilate functions 5 times each, in succession, to get the best results for the earlier image. Running dilate and erode separately gives you more control over the processing.
Note that morphological closing is a dilation followed by an erosion with the same structuring element, rather than repeated applications of each. In Octave, you can dilate and erode with bwmorph:

    editedImage = bwmorph(initialImage, 'dilate', 5);  % apply dilation 5 times
    editedImage = bwmorph(editedImage, 'erode', 5);    % then erode the result 5 times
Thanks to wbest for initial imClose idea.
Say I have this old manuscript. What I am trying to do is process the manuscript so that all the characters in it can be reliably recognized. What should I keep in mind, and what methods should I consider when approaching such a problem?
Please help, thank you.
Some graphics applications have macro recorders (e.g. Paint Shop Pro). They can record a sequence of operations applied to an image and store them as macro script. You can then run the macro in a batch process, in order to process all the images contained in a folder automatically. This might be a better option, than re-inventing the wheel.
I would start by playing around with the different functions manually, in order to see what they do to your image. There is an awful lot you can try: sharpening, smoothing, and noise removal with many different methods and options. You can work on the contrast in many different ways (stretch, gamma correction, expansion, and so on).
In addition, if your image has a yellowish background, then working on the red or green channel alone will probably give better results, because the blue channel will have poor contrast.
Do you mean that you want to make it easier for people to read the characters, or are you trying to improve image quality so that optical character recognition (OCR) software can read them?
I'd recommend that you select a specific goal for readability. For example, you might want readers to be able to read the text 20% faster if the image has been processed. If you're using OCR software to read the text, set a read rate you'd like to achieve. Having a concrete goal makes it easier to keep track of your progress.
The image processing book Digital Image Processing by Gonzalez and Woods (3rd edition) has a nice example showing how to convert an image like this to a black-on-white representation. Once you have black text on a white background, you can perform a few additional image processing steps to "clean up" the image and make it a little more readable.
Sample steps:
Convert the image to black and white (grayscale)
Apply a moving average threshold to the image. If the characters are usually about the same size in an image, then you shouldn't have much trouble selecting values for the two parameters of the moving average threshold algorithm.
Once the image has been converted to just black characters on a white background, try simple operations such as a morphological "close" to fill in small gaps (see the sketch after these steps).
Present the original image and the cleaned image to adult readers, and time how long it takes for them to read each sample. This will give you some indication of the improvement in image quality.
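Here is a rough sketch of steps 1-3 in Python with OpenCV; cv2.ADAPTIVE_THRESH_MEAN_C is a close relative of the moving-average threshold described above, and the block size and offset are assumptions you would tune per manuscript:

    import cv2
    import numpy as np

    img = cv2.imread("manuscript.jpg")               # hypothetical input
    gray = img[:, :, 1]                              # green channel: best contrast on a yellowish page
    # mean-of-neighborhood threshold, one value per local window
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 25, 15)
    # morphological close to clean up small gaps and specks
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)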
A technique called the Stroke Width Transform (SWT) has been discussed on SO previously. It can be used to extract character strokes from even very complex backgrounds. The SWT is harder to implement, but it can work for quite a wide variety of images:
Stroke Width Transform (SWT) implementation (Java, C#...)
The texture of the paper could present a problem for many algorithms. However, there are techniques for denoising images based on the Fast Fourier Transform (FFT), an algorithm you can use to find 1D or 2D sinusoidal patterns in an image (e.g. grid patterns). About halfway down the following page you can see examples of FFT-based techniques for removing periodic noise:
http://www.fmwconcepts.com/misc_tests/FFT_tests/index.html
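A rough sketch of the FFT-based idea with NumPy: find strong off-center peaks in the magnitude spectrum and notch them out. The peak-detection rule here is a crude assumption; the tools on that page let you mask the spectrum far more carefully:

    import cv2
    import numpy as np

    gray = cv2.imread("manuscript.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
    f = np.fft.fftshift(np.fft.fft2(gray))
    mag = np.abs(f)

    h, w = gray.shape
    cy, cx = h // 2, w // 2
    peaks = mag > 50 * np.median(mag)                # crude peak detector
    peaks[cy - 10:cy + 10, cx - 10:cx + 10] = False  # never touch the low frequencies
    f[peaks] = 0                                     # notch out the periodic noise

    restored = np.fft.ifft2(np.fft.ifftshift(f))
    restored = np.clip(np.abs(restored), 0, 255).astype("uint8")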
If you find a technique that works for the images you're testing, I'm sure a number of people would be interested to see the unprocessed and processed images.
Let's say you are taking a video (with the camera in a steady position) and a bird flies through the view of the camera. It should be possible to do image segmentation and automatically remove this bird from the video.
What is this style of algorithm called, and how is it normally accomplished?
There's a technique called Simple Interactive Object Extraction (SIOX) that identifies foreground vs. background objects in still and video images. The open-source GIMP editor has an implementation of it, and there's more information about it here.
From the overview:
SIOX stands for Simple Interactive Object Extraction and is a solution for extracting foreground from still images with very little user interaction. SIOX is fast, noise robust, and can therefore also be used for the segmentation of videos. It avoids many of the drawbacks of graph-based segmentation methods but performs about equally well on different benchmarks. SIOX is open and free (Apache License) and the authors have intentionally not patented any part of the technology. As a result, it has been integrated into several open-source image manipulation programs over the past years. SIOX is the underlying algorithm of the foreground extraction tool in the GNU Image Manipulation Program (GIMP) and is part of the tracer tool in Inkscape. SIOX originates from E-Chalk where an instructor standing in front of an electronic chalkboard is segmented. Variants of SIOX are being used for robotic vision and for improving 3D time-of-flight camera segmentation.
Here's a link to the Java Reference Implementation of SIOX.
Here's a link to the PDF with details about how a variation of the algorithm works.
You should be able to adapt it to use inter-frame interpolation to remove a specific foreground object from each frame of a video by using temporal data from surrounding frames.
If the camera is fixed and there isn't too much motion in the scene, then I would suggest a method based on background subtraction.
Step 1: Compute the background for each frame of the video. There are complicated algorithms for doing this, but a very simple and effective one is to compute the median value of every pixel in the image across a 3-second time window (longer if the object in question moves slowly). Incidentally, just performing this kind of filtering will remove most moving objects from the video if the camera is fixed, hence my earlier question about all objects vs. one object.
Step 2: Mark the regions you want to remove in each frame with a brush tool, and replace them with the background pixels. Don't bother with a fine brush or lasso tool as any non-object pixels you mark will just be replaced with their filtered version. You could probably use the same brush marks for several frames since the boundary is not so important. If the object is the only thing moving in the scene, you could just mark the entire frame and have it replaced with the background.
Anyway, to answer your more general question, the topic you want to research is called inpainting, for both images and video. There is quite a bit of literature on the subject; what I described is just a very simple method you could implement in an hour or so with OpenCV.
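A compressed sketch of those two steps, assuming the frames and the hand-painted mask have been saved to disk (file names, frame rate, and frame index are placeholders):

    import cv2
    import numpy as np

    # ~3 seconds of video at 30 fps
    frames = [cv2.imread("frame_%03d.png" % i) for i in range(90)]
    background = np.median(np.stack(frames), axis=0).astype("uint8")   # step 1

    # step 2: brush_mask is the hand-painted region covering the object
    brush_mask = cv2.imread("mask_042.png", cv2.IMREAD_GRAYSCALE) > 0
    frame = frames[42].copy()
    frame[brush_mask] = background[brush_mask]       # object replaced by background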
Sometimes two image files may be different on a file level, but a human would consider them perceptively identical. Given that, now suppose you have a huge database of images, and you wish to know if a human would think some image X is present in the database or not. If all images had a perceptive hash / fingerprint, then one could hash image X and it would be a simple matter to see if it is in the database or not.
I know there is research around this issue, and some algorithms exist, but is there any tool, like a UNIX command line tool or a library I could use to compute such a hash without implementing some algorithm from scratch?
edit: relevant code from findimagedupes, using ImageMagick
    try $image->Sample("160x160!");          # shrink (ignoring aspect ratio) to smooth detail
    try $image->Modulate(saturation=>-100);  # strip color saturation
    try $image->Blur(radius=>3,sigma=>99);   # heavy blur: keep only coarse structure
    try $image->Normalize();                 # stretch contrast
    try $image->Equalize();                  # equalize the histogram
    try $image->Sample("16x16");             # downsample to fingerprint resolution
    try $image->Threshold();                 # binarize
    try $image->Set(magick=>'mono');         # 1-bit output format
    ($blob) = $image->ImageToBlob();         # serialize: this is the fingerprint
edit: Warning! The ImageMagick $image object seems to contain information about the creation time of the image file that was read in. This means the blob you get will differ even for the same image if it was read at a different time. To make sure the fingerprint stays the same, use $image->getImageSignature() as the last step.
findimagedupes is pretty good. You can run "findimagedupes -v fingerprint images" to make it print the "perceptive hash", for example.
Cross-correlation or phase correlation will tell you if the images are the same, even with noise, degradation, and horizontal or vertical offsets. Using the FFT-based methods will make it much faster than the algorithm described in the question.
The usual algorithm doesn't work for images at different scales or rotations, though. You could pre-rotate or pre-scale them, but that's really processor-intensive. Apparently you can also do the correlation in log-polar space, which makes it invariant to rotation, translation, and scale, but I don't know the details well enough to explain that.
MATLAB example: Registering an Image Using Normalized Cross-Correlation
Wikipedia calls this "phase correlation" and also describes making it scale- and rotation-invariant:
The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.
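OpenCV exposes this directly as cv2.phaseCorrelate, which returns the translation between two images plus a response value you can threshold to decide whether they actually match. A minimal sketch (file names are placeholders):

    import cv2
    import numpy as np

    a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    print("shift:", dx, dy, "peak response:", response)   # high response = likely the same image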
A colour histogram is good for the same image after it has been resized, resampled, etc.
If you want to match different people's photos of the same landmark, it's trickier: look at Haar classifiers. OpenCV is a great free library for image processing.
I don't know the algorithm behind it, but Microsoft Live Image Search just added this capability. Picasa also has the ability to identify faces in images, and groups faces that look similar. Most of the time, it's the same person.
Some machine learning technology like a support vector machine, neural network, naive Bayes classifier or Bayesian network would be best at this type of problem. I've written one each of the first three to classify handwritten digits, which is essentially image pattern recognition.
Resize the image to a 1x1 pixel: if the two match exactly, there is a small probability they are the same picture.
Now resize to a 2x2 pixel image: if all 4 pixels match exactly, there is a larger probability they are the same.
Then 3x3: if all 9 pixels match, the chance is better still.
Then 4x4: if all 16 pixels match, better still.
And so on.
Doing it this way also lets you make efficiency improvements: if the 1x1 comparison is off by a lot, why bother checking the 2x2 grid?
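A small sketch of that coarse-to-fine loop in Python with OpenCV; the maximum grid size and the per-pixel tolerance are assumptions:

    import cv2
    import numpy as np

    def probably_same(img_a, img_b, max_size=8, tol=10):
        for n in range(1, max_size + 1):             # 1x1, 2x2, 3x3, ...
            a = cv2.resize(img_a, (n, n), interpolation=cv2.INTER_AREA)
            b = cv2.resize(img_b, (n, n), interpolation=cv2.INTER_AREA)
            if np.any(np.abs(a.astype(int) - b.astype(int)) > tol):
                return False                         # off by a lot: stop early
        return True                                  # matched at every resolution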
If you have lots of images, a color histogram can be used to get a rough measure of closeness before doing a full image comparison of each image against every other one (which is O(n^2)).
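A sketch of that histogram pre-filter with OpenCV; the HSV binning and the similarity cutoff are assumptions (cv2.HISTCMP_CORREL is the OpenCV 3+ name; older versions use cv2.cv.CV_COMP_CORREL):

    import cv2

    def hsv_hist(path):
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([img], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def roughly_close(path_a, path_b, cutoff=0.9):
        score = cv2.compareHist(hsv_hist(path_a), hsv_hist(path_b),
                                cv2.HISTCMP_CORREL)
        return score > cutoff                        # only these pairs get the full comparison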
There is DPEG, "The" Duplicate Media Manager, but its code is not open. It's a very old tool - I remember using it in 2003.
You could use diff to see whether they are really different at the file level; I guess that would eliminate a lot of useless comparisons. Then, for the algorithm, I would use a probabilistic approach: what are the chances that they look the same? I'd base that on the RGB values of each pixel. You could also use other metrics, such as luminosity.