JavaCV cvCalcBackProject() result completely black

I have a histogram which I have normalized to 1.
When I backproject it the result is entirely black.
I saw the solution for OpenCV, but JavaCV doesn't allow scaling for backprojection, and doesn't overload its cvNormalizeHist method to accept any parameters other than the histogram and a double.
Thanks

You can find examples of using cvCalcBackProject in the "OpenCV2 Cookbook" examples for JavaCV, chapter 4. For instance, a simple one is in Ex6ContentDetectionGrayscale.
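If the histogram was normalized so its bins sum to 1, the black result is expected: backprojection writes into each output pixel the value of the histogram bin its input falls into, so bins in [0, 1] produce an image that is numerically almost all zeros. The usual fix is to stretch the bins to [0, 255] before backprojecting (in JavaCV's C-style API the same effect can be had by rescaling the output with cvConvertScale(backproject, backproject, 255, 0)). A minimal sketch, using OpenCV's plain Java bindings rather than JavaCV, with hypothetical hue/hist inputs:

// Sketch only: "hue" is a single-channel hue image, "hist" the histogram
// computed from it; both are assumed, not from the question.
import java.util.Arrays;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class BackProjectFix {
    static Mat backProject(Mat hue, Mat hist) {
        // Stretch bins from [0, 1] to [0, 255] so the result is visible.
        Core.normalize(hist, hist, 0, 255, Core.NORM_MINMAX);

        Mat backproj = new Mat();
        Imgproc.calcBackProject(
                Arrays.asList(hue),       // source channel(s)
                new MatOfInt(0),          // use channel 0
                hist,                     // the rescaled histogram
                backproj,
                new MatOfFloat(0f, 180f), // hue range
                1.0);                     // extra scale factor
        return backproj;
    }
}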

Related

Matching photographed image with screenshot (or generated image based on data model)

First of all, I have to say I'm new to the field of computer vision, and I'm currently facing a problem I tried to solve with OpenCV (Java wrapper) without success.
Basically, I have a picture of a part from a model, taken by a camera (different angles, resolutions, rotations...), and I need to find the position of that part in the model.
Example Picture:
Model Picture:
So one question is: Where should I start/which algorithm should I use?
My first try was to use keypoint matching with SURF as detector and descriptor and BF (brute force) as matcher.
It worked for about 2 pictures out of 10. I used the default parameters and tried other detectors, without any improvement. (Maybe it's a question of the right parameters. But how do I find the right parameters combined with the right algorithm?...)
Two examples:
My second try was to use color to differentiate certain elements in the model and to compare the structure with the model itself (in addition to the picture of the model I also have an XML representation of the model...).
Right now I have extracted the color red out of the image and adjusted the H, S, V values manually to get the best detection for about 4 pictures, but this fails for other pictures.
Two examples:
I also tried to use edge detection (Canny, grayscale, with histogram equalization) to detect geometric structures. For some results I could imagine that it would work, but using the same Canny parameters for other pictures fails. Two examples:
As I said, I'm not familiar with computer vision and have just tried out some algorithms. My problem is that I don't know which combination of algorithms and techniques is best, and on top of that, which parameters I should use. Testing it all manually seems impossible.
Thanks in advance
gemorra
Your initial idea of using SURF features was actually very good; just try to understand how the parameters for this algorithm work and you should be able to register your images. A good starting point would be varying only the Hessian threshold, and being fearless while doing so: your features are quite well defined, so try thresholds around 2000 and above (increasing in steps of 500-1000 until you get good results is totally OK).
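A hedged sketch of that advice in OpenCV's Java bindings (SURF lives in the opencv-contrib xfeatures2d module; all names here are illustrative, not from the question):

import org.opencv.core.*;
import org.opencv.features2d.BFMatcher;
import org.opencv.xfeatures2d.SURF;

public class SurfMatchSketch {
    static MatOfDMatch matchWithSurf(Mat photo, Mat model) {
        // Start around 2000 and raise in steps of 500-1000, as suggested:
        // a higher Hessian threshold keeps only strong, well-defined features.
        SURF surf = SURF.create(2000);

        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat(), desc2 = new Mat();
        surf.detectAndCompute(photo, new Mat(), kp1, desc1);
        surf.detectAndCompute(model, new Mat(), kp2, desc2);

        // Brute-force matching with L2 norm, the usual choice for float descriptors.
        BFMatcher matcher = BFMatcher.create(Core.NORM_L2, false);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);
        return matches;
    }
}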
Alternatively, you can try to detect your ellipses, calculate an affine warp that normalizes them, and run a cross-correlation to register them. This alternative does imply much more work, but is quite fascinating. Some ideas on that normalization using the covariance matrix and its Cholesky decomposition here.
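For reference, the normalization alluded to is a whitening transform (my summary, not the linked derivation): with the region's mean \mu and covariance \Sigma,

\Sigma = L L^{\top} \quad \text{(Cholesky factorization)}, \qquad x' = L^{-1}(x - \mu)

maps the elliptical region to one with identity covariance, i.e. a circle, after which a plain cross-correlation can register the two patches.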

Correlation between two images (binary images)

I have two binary images like this. I have a data set with lots of pictures like the one at the bottom, but with different signs.
and
I would like to compare them in order to know whether it's the same figure or not (especially inside the triangle). I took a look at SIFT and SURF features, but they don't work well on this type of picture (they find matching points even though the two pictures are different, especially inside).
I have also heard about SVMs, but I don't know whether I should implement one for this type of problem.
Do you have an idea?
Thank you
I think you should not use SURF features on the binary image as you have already discarded a lot of information at that stage with your edge detector.
You could also use the linear or circular Hough transform, which in this case could tell you a lot about the differences between the images.
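A sketch of that idea in OpenCV's Java bindings (the parameter values are guesses to tune, not from the answer): run the probabilistic line transform and the circle transform on each image and compare the detected primitives.

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class HoughCompare {
    static Mat houghLines(Mat binary) {
        Mat lines = new Mat();
        // Each row of "lines" is one detected segment (x1, y1, x2, y2).
        Imgproc.HoughLinesP(binary, lines, 1, Math.PI / 180, 50, 30, 10);
        return lines;
    }

    static Mat houghCircles(Mat gray) {
        Mat circles = new Mat();
        // Each entry is (x, y, radius); HoughCircles runs Canny internally.
        Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT,
                1, 20, 100, 30, 5, 100);
        return circles;
    }
}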
If you want to find two exactly identical images, simply use hash functions like MD5.
But if you want to find related (not exactly identical) images, you are running into trouble ;). Look for artificial neural network libs...
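For the exact-duplicate case, hashing the raw file bytes is enough (note this only matches byte-identical files; any re-encoding or resizing breaks it). A minimal Java sketch with hypothetical file paths:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.Arrays;

public class Md5Compare {
    static boolean identical(String pathA, String pathB) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] hashA = md.digest(Files.readAllBytes(Paths.get(pathA)));
        byte[] hashB = md.digest(Files.readAllBytes(Paths.get(pathB)));
        // Identical bytes give identical hashes.
        return Arrays.equals(hashA, hashB);
    }
}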

How to implement chrominance filtering in OpenCV?

I am new to OpenCV. What I am trying to do is measure liquid level using a single digital camera. After a lot of searching I found a research paper with the algorithm and steps at the link below.
http://personnel.sju.edu.tw/改善師資研究成果/98年度/著作/22.pdf
But this algorithm has a step where I need to apply chrominance filtering to the captured image. OpenCV doesn't come with such built-in functionality, so how can I implement chrominance filtering? Is there any way to do this? Please help me, thanks.
I didn't manage to access the link you provided, but I guess what you want to do is segment your image based on color (chroma). In order to do that, you first have to convert your BGR image (RGB but with the red and blue channels swapped, which is the natural ordering for OpenCV) to a chrominance-based space. You can do this using the function cv::cvtColor() with an argument such as CV_BGR2YUV or CV_BGR2YCrCb.
See the full doc of the conversion function here.
After that, you can split the channels of your image with cv::split(), e.g.:
cv::Mat yuvImage;                  // filled by the cv::cvtColor() call above
std::vector<cv::Mat> yuvChannels;
cv::split(yuvImage, yuvChannels);  // e.g. [0] = Y, [1] = Cr, [2] = Cb for CV_BGR2YCrCb
Then work on the image of the resulting vector that contains your channel of interest.
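Putting the answer together in OpenCV's Java bindings, a hedged sketch of one possible chrominance filter (the Cr band 140-180, roughly reddish tones, is an assumption to tune):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class ChromaFilter {
    static Mat filterByCr(Mat bgr) {
        // Convert to a luma/chroma space so color is separate from brightness.
        Mat ycrcb = new Mat();
        Imgproc.cvtColor(bgr, ycrcb, Imgproc.COLOR_BGR2YCrCb);

        List<Mat> channels = new ArrayList<>();
        Core.split(ycrcb, channels);   // [0] = Y, [1] = Cr, [2] = Cb

        // Keep pixels whose Cr chrominance falls inside the band of interest.
        Mat mask = new Mat();
        Core.inRange(channels.get(1), new Scalar(140), new Scalar(180), mask);
        return mask;
    }
}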

What should be the ideal thresholding technique for enhancing parts of the image?

What thresholding technique should I apply to the image in order to highlight the bright regions inside the image as well as the outer boundary?
The im2bw function does not give a good result
Help!!
Edit: Most of my images have the following histogram
Edit: Found a triangle threshold method that suits my work :)
Your question isn't very easy to answer since you don't really define what an ideal solution should accomplish.
Have you tried im2bw(yourImage, 0.1)? I.e., using a threshold to decide which parts should be black and which shouldn't. I got decent results with that (depending on the purpose, of course). Try it, and if it isn't good enough, tell us in what way you need to improve it and I will try to help with some more advanced techniques!
EDIT: Using thresholds 0.1 and 0.01 respectively; perhaps something around 0.05 would be good?
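The question is about Matlab's im2bw, but for comparison, the same fixed threshold (0.1 of full scale is about 25 on 8-bit images) and the triangle method the asker eventually settled on look like this in OpenCV's Java bindings (a sketch, assuming an 8-bit grayscale input):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class ThresholdSketch {
    static Mat fixedThreshold(Mat gray) {
        Mat bw = new Mat();
        // Equivalent of im2bw(gray, 0.1): 0.1 * 255 ~ 25.
        Imgproc.threshold(gray, bw, 25, 255, Imgproc.THRESH_BINARY);
        return bw;
    }

    static Mat triangleThreshold(Mat gray) {
        Mat bw = new Mat();
        // The triangle method picks the threshold from the histogram shape
        // (available in newer OpenCV builds).
        Imgproc.threshold(gray, bw, 0, 255,
                Imgproc.THRESH_BINARY | Imgproc.THRESH_TRIANGLE);
        return bw;
    }
}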
It sounds like what you want to do is "image segmentation" (see http://en.wikipedia.org/wiki/Segmentation_(image_processing)).
Many methods are based on the Chan-Vese model, which identifies the region of interest by solving an optimization problem involving a level set function. Since you're using Matlab, this code: http://www.stanford.edu/~tagoldst/Tom_Goldstein/Split_Bregman.html should do a good job of finding the regions you are interested in.

How can I go about extracting the background from an image?

Basically, suppose that I have a fingerprint. I know the dimensions of my image, and I know that the fingerprint is black on a white background, or that it is green on a black background, or something like that.
Is there a way to process only the parts that delimit the image, in this case, the fingerprint? What I'm trying to do is basically this:
1) Delimit fingerprint
2) Extract the important points to compare to other fingerprints
3) Find best match on a database of other fingerprints that had their points previously extracted
I already have methods for 2 and 3, so now I just would have to delimit the image.
Programming language would have to be Ruby, Java or C++. Ruby preferred, then Java, and God help me if I have to use C++. I don't have any experience with image processing, but I'd like to do this with multiple common formats such as jpg, gif, png, if possible.
I think that the best way to do it is to apply an edge detection filter to your image.
There are many approaches, as suggested by Wikipedia (article), but none of them is trivial because they work on gradients or kernels. You should check Canny edge detection, which should be straightforward enough to implement: tutorial.
In any case, if you want to avoid going deep into implementation details, you should use OpenCV, a computer vision library that can do these things in a simple way. You can certainly use it from C++ and Java, and I think a wrapper for Ruby is offered too. This is a simple example using that library with the Canny algorithm.
EDIT: actually my answer covers points 2-3, so I'm wondering what you mean by delimiting the image? Keep in mind that scaling and rotation must be considered too if you want to compare different fingerprints: you need a fuzzy comparator... maybe you should work on the Fast Fourier Transform version of the image, which can handle such things in a better way.
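A minimal Canny pass in OpenCV's Java bindings, along the lines of this answer (the 50/150 hysteresis thresholds are typical starting values, not tuned ones):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class CannySketch {
    static Mat edges(Mat bgr) {
        Mat gray = new Mat(), edges = new Mat();
        Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
        // Low/high thresholds for the hysteresis step; a 1:3 ratio is common.
        Imgproc.Canny(gray, edges, 50, 150);
        return edges;
    }
}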
An easy approach could be using a threshold, like this:
Convert your image to grayscale, so you have the fingerprint in white on black.
Find a threshold value that captures most of the fingerprint.
Use the open operation (http://en.wikipedia.org/wiki/Mathematical_morphology) to remove noise.
(Experiment with dilating a few times.)
Find the center of gravity (x,y) of the image and the standard deviation (vx, vy).
Inside the box
[x-2vx, y-2vy], [x-2vx, y+2vy], [x+2vx, y+2vy], [x+2vx, y-2vy]
you will find roughly 95% of the pixels (the ±2-standard-deviation range along each axis).
You could narrow the box down to find the actual max and min pixels in it, if you have many outliers.
Use the box to clip from the original image.
It is a simple method that might work well for your situation :) A sketch of these steps follows below.
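A hedged sketch of those steps in OpenCV's Java bindings (the threshold value, kernel size, and field-style Moments access are assumptions; tune per image):

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;

public class FingerprintCrop {
    static Mat crop(Mat gray) {
        // Steps 1-2: dark fingerprint on white background becomes white on black.
        Mat bw = new Mat();
        Imgproc.threshold(gray, bw, 200, 255, Imgproc.THRESH_BINARY_INV);

        // Step 3: opening removes small speckles of noise.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
        Imgproc.morphologyEx(bw, bw, Imgproc.MORPH_OPEN, kernel);

        // Step 4: centroid and per-axis standard deviation from image moments.
        Moments m = Imgproc.moments(bw, true);
        double cx = m.m10 / m.m00, cy = m.m01 / m.m00;
        double sx = Math.sqrt(m.mu20 / m.m00), sy = Math.sqrt(m.mu02 / m.m00);

        // Step 5: clip the +-2 sigma box, clamped to the image borders.
        int x0 = (int) Math.max(0, cx - 2 * sx);
        int y0 = (int) Math.max(0, cy - 2 * sy);
        int x1 = (int) Math.min(gray.cols(), cx + 2 * sx);
        int y1 = (int) Math.min(gray.rows(), cy + 2 * sy);
        return gray.submat(new Rect(x0, y0, x1 - x0, y1 - y0));
    }
}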
