Original binary image
Find contours result
I'm dealing with images like 1. There are some small connections between the squares, as you can see. Currently I'm applying cv2.findContours directly on the image, and as a result the connected squares are detected as one big object, while I want them to be detected separately. Can someone help me make this work?
You should be able to solve this issue by applying morphological transformations + watershed transformation on your source image.
Segmenting connected contours is quite a common use case; you can find a tutorial for a similar problem in the OpenCV documentation:
https://docs.opencv.org/4.x/d3/db4/tutorial_py_watershed.html
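If it helps, here is a minimal Python/OpenCV sketch along the lines of that tutorial. The file name, kernel size and distance-transform threshold are placeholders you would tune for your own image:

```python
import cv2
import numpy as np

# "squares.png" is a placeholder for your binary image of touching squares.
img = cv2.imread("squares.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening erodes the thin connections between the squares.
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)

# Sure foreground: pixels far from any object boundary.
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)

# Sure background and the unknown region in between.
sure_bg = cv2.dilate(opened, kernel, iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the markers and let watershed split the touching squares.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1          # make the background 1 instead of 0
markers[unknown == 255] = 0    # the unknown region gets label 0
markers = cv2.watershed(img, markers)
# Each square now has its own label; watershed boundaries are marked with -1.
```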
I am working on a program that gets exact pixel values of the shoreline in a given image. What is the best way to preprocess these types of images in order to make my life easier?
A sample image:
I suppose that you want to segment the land from the water, thereby defining a path for the shoreline.
For this task I recommend using an edge detection algorithm. A simple vertical Sobel filter should be enough given the image you have provided. More details about how it works and the API call are here.
Do you have images with different meteorological conditions? Your algorithm should be robust when it comes to different lighting scenarios: night, rain etc (if that is the case).
Thresholding with respect to the tones in your image might also help, details here.
Given a properly binarized image, the contour-finding methods provided by OpenCV should do the rest of the job for you.
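Something along these lines should get you started. This is a rough sketch assuming the shoreline runs mostly horizontally (so a vertical Sobel response highlights the land/water transition); the file name and blur/threshold parameters are placeholders:

```python
import cv2
import numpy as np

# "shore.jpg" is a placeholder for your sample image.
img = cv2.imread("shore.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Vertical Sobel filter: strong response where intensity changes from top to bottom.
sobel_y = cv2.Sobel(blur, cv2.CV_64F, 0, 1, ksize=3)
edges = np.uint8(np.clip(np.abs(sobel_y), 0, 255))

# Binarize (Otsu picks a threshold from the tones in the image), then
# extract the shoreline contour from the binary result.
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
```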
I would like to ask a question I already asked on the OpenCV board but did not get an answer to: http://answers.opencv.org/question/189206/questions-about-the-fundamental-matrix-and-homographies/.
After learning about the fundamental matrix I have the following question that I could not answer by googling. The fundamental matrix is a more general case of the homography, as it is independent of the scene's structure. So I was wondering if it could be used for image stitching instead of a homography, but all the papers I found only use homographies. So I reread the material about the properties of the fundamental matrix, and now I am wondering:
Is it not possible to use the fundamental matrix for stitching because of its rank deficiency and the fact that it only relates points in image 1 to lines (epipolar lines) in image 2?
Another question I have regarding homographies: All papers I read about image stitching use homographies for rotational panoramas. What if I want to create a panorama based only on translation between images? Can I use the homography as well? The answers provided by a google search vary quite a lot.
Kind regards and thanks for your help!
Conundraah
About using the fundamental matrix for stitching:
It actually depends on how you want to stitch the images together.
The problem is that even if you estimate the fundamental matrix, you still need a homography to actually warp the images when stitching them together, so there is little point in using the fundamental matrix, unless you figure out how to handle points at different depths within the same image.
In the case of panorama images, the assumption is that the scene structure is far enough away to be treated as planar, so the translation between the cameras is comparatively negligible. If that is not the case, the translation has to be taken into account.
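To make the contrast concrete, here is a hedged sketch of the usual homography-based stitching pipeline in Python/OpenCV (the file names are placeholders, and ORB is just one possible feature choice). Note that the warping step needs a point-to-point mapping, which is exactly what the fundamental matrix cannot give you:

```python
import cv2
import numpy as np

# "left.jpg" / "right.jpg" are placeholders for two overlapping views.
img1 = cv2.imread("left.jpg")
img2 = cv2.imread("right.jpg")
g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Detect and match local features between the two images.
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(g1, None)
k2, d2 = orb.detectAndCompute(g2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# A homography (not the fundamental matrix) gives the point-to-point map
# needed to warp one image into the other's frame.
H, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)

# Warp the second image onto the first and paste the first on top.
h, w = img1.shape[:2]
result = cv2.warpPerspective(img2, H, (w + img2.shape[1], h))
result[0:h, 0:w] = img1
```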
I am new to OpenCV and I don't know much about the algorithms. I just downloaded OpenCV and tried to run the samples. One thing I noticed is that both Canny and findContours can be used to find objects. From the official OpenCV docs:
Canny:
Finds edges in an image using the [Canny86] algorithm.
findContours:
Finds contours in a binary image.
I think they have similar functionality, so what are the differences between them, and how do I choose? Please correct me if my understanding is wrong.
The most important practical difference is that findContours gives connected contours, while Canny just gives edges, which are lines that may or may not be connected to each other. To choose, I suggest that you try both on your sample application and see which gives better results.
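A small illustration of the difference, assuming a placeholder input image and the OpenCV 4.x Python API:

```python
import cv2

# "input.png" is a placeholder image.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Canny: a binary edge map; edge pixels are not grouped into objects.
edges = cv2.Canny(img, 100, 200)

# findContours: takes a binary image and returns each connected boundary
# as an ordered list of points, i.e. one entry per object outline.
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours), "contours found")
```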
I have a target image to be searched for a curve along its edges and a template image that contains the curve. What I need to achieve is to find the best match of the curve in the template image within the target image, and based on the score, to find out whether there is a match or not. That also includes rotation and resizing of the curve. The target image can be the output of a Canny Edge detector if that makes things easier.
I am considering using OpenCV (with Python or Processing/Java, or with C if those have limited access to the required functions) to make things practical and efficient; however, I could not find out whether there are any functions (or a combination of them) in OpenCV that are usable for this job. I have been reading through the OpenCV documentation and thought at first that Contours could do the job, however all the examples show closed shapes, as opposed to my case where I need to match an open curve to a part of an edge.
So is there a way to do this either by using OpenCV or with any known code or algorithm that you would suggest?
Here are some images to illustrate the problem:
My first thought was the Generalized Hough Transform. However, I don't know of any good implementation for it.
I would try SIFT or SURF first on the Canny edge image. They are usually used to find 2D areas, not 1D contours, but if you take the minimum bounding box around your contour and use that as the search pattern, it should work.
OpenCV has an implementation for that:
Features2D + Homography to find a known object
A problem may be getting a good edge image; those black shapes in the background could be distracting.
Also see this Stackoverflow answer:
Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition
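Here is a hedged sketch of that idea in Python/OpenCV: crop the template edge image to the bounding box of the curve, then match SIFT features against the target edge image, roughly as in the Features2D + Homography tutorial. File names and thresholds are placeholders, and SIFT needs an OpenCV build that includes it (e.g. OpenCV >= 4.4):

```python
import cv2
import numpy as np

# "template.png" / "target.png" are placeholders.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

tmpl_edges = cv2.Canny(template, 50, 150)
tgt_edges = cv2.Canny(target, 50, 150)

# Crop the template edge image to the bounding box of the curve pixels.
ys, xs = np.nonzero(tmpl_edges)
pattern = tmpl_edges[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# SIFT features + ratio-test matching on the two edge images.
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(pattern, None)
k2, d2 = sift.detectAndCompute(tgt_edges, None)
matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# The number (or ratio) of good matches can serve as a crude match score;
# with enough matches you could go on to estimate a homography as in the tutorial.
print(len(good), "good matches")
```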
Basically, suppose that I have a fingerprint. I know the dimensions of my image, and I know that the fingerprint is black on a white background, or that it is green on a black background, or something like that.
Is there a way to process only the parts that delimit the image, in this case, the fingerprint? What I'm trying to do is basically this:
1) Delimit fingerprint
2) Extract the important points to compare to other fingerprints
3) Find best match on a database of other fingerprints that had their points previously extracted
I already have methods for 2 and 3, so now I just would have to delimit the image.
Programming language would have to be Ruby, Java or C++. Ruby preferred, then Java, and God help me if I have to use C++. I don't have any experience with image processing, but I'd like to do this with multiple common formats such as jpg, gif, png, if possible.
I think that the best way to do it is to apply an edge detection filter to your image.
There are many approaches, as suggested by Wikipedia (article), but none of them is trivial because they work on gradients or kernels. You should check Canny edge detection, which should be straightforward enough to implement: tutorial.
In any case, if you want to avoid going deep into implementation details, you should use OpenCV, a computer vision library that can do these things in a simple way. You can certainly use it from C++ and Java, and I think a wrapper for Ruby is offered too. This is a simple example using that library with the Canny algorithm.
EDIT: actually my answer covers points 2-3, so I'm wondering what you mean by delimiting the image? Keep in mind that scaling and rotation must be considered too if you want to compare different fingerprints: you need a fuzzy comparator. Maybe you should work on the Fast Fourier Transform of the image, which can handle such things better.
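For reference, a minimal Canny call looks like this (shown with the Python bindings for brevity; the same functions exist in the C++ and Java APIs, and the file name is a placeholder):

```python
import cv2

# "fingerprint.png" is a placeholder for your input image.
img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)   # smooth before edge detection
edges = cv2.Canny(blur, 50, 150)          # thresholds need tuning per image
cv2.imwrite("fingerprint_edges.png", edges)
```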
An easy approach could be using thresholding, like this:
Convert your image to grayscale - so you have fingerprint in white on black.
Find a threshold value that gets most of the fingerprint.
Use the opening operation (http://en.wikipedia.org/wiki/Mathematical_morphology) to remove noise.
(experiment with dilate a few times)
Find the center of gravity (x,y) of the image and the standard deviation (vx, vy).
In the box [x-2vx, y-2vy], [x-2vx, y+2vy], [x+2vx, y+2vy], [x+2vx, y-2vy] you will find roughly 95.4% of the pixels along each axis (the two-standard-deviation rule, assuming a roughly Gaussian spread).
You could narrow the box down to find the actual max and min pixels in it, if you have many outliers.
Use the box to clip from the original image.
It is a simple method that might work well for your situation :)
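A quick sketch of those steps in Python/OpenCV (the file name, threshold and kernel size are placeholders you would experiment with):

```python
import cv2
import numpy as np

# "fingerprint.png" is a placeholder; assumes a dark print on a light background.
img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)

# Threshold so the fingerprint ends up white on black (Otsu picks the value).
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Opening to remove isolated noise pixels (experiment with iterations).
kernel = np.ones((3, 3), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Centre of gravity and standard deviation of the foreground pixels.
ys, xs = np.nonzero(mask)
x, y = xs.mean(), ys.mean()
vx, vy = xs.std(), ys.std()

# Clip a box of +/- 2 standard deviations around the centre.
x0, x1 = int(max(x - 2 * vx, 0)), int(min(x + 2 * vx, img.shape[1]))
y0, y1 = int(max(y - 2 * vy, 0)), int(min(y + 2 * vy, img.shape[0]))
fingerprint = img[y0:y1, x0:x1]
```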