Success or failure of the Canny edge detector - scikit-image

Following this tutorial, I detect edges using Canny edge detection and a Gaussian blur. However, my question is: how can I automatically find out whether it was actually able to detect an edge or not? Is there a relation with the longest boundary detected after the Canny edge step? Or is it as simple as checking whether the entire matrix is filled with 0s?

In this tutorial, the parameters are tuned to get rid of spurious edges.
If you only want to know which pixels are edges, it is indeed "as simple as looking for the entire matrix filled with 0s". If you need the edge pixels in order, use a contour-tracing algorithm such as the ones described here: http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/alg.html
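A minimal sketch of that check, assuming the edge map is the boolean array returned by scikit-image's feature.canny (here replaced by a toy array so the snippet is self-contained):

```python
import numpy as np

# Stand-in for the boolean edge map returned by skimage.feature.canny;
# a real call would look like: edges = feature.canny(image, sigma=2)
edges = np.zeros((5, 5), dtype=bool)
edges[2, 1:4] = True  # three edge pixels

# "Did Canny find an edge?" reduces to: is the matrix not all zeros?
found_edge = bool(edges.any())
print(found_edge)  # True

# If no edge pixel exists anywhere, detection failed:
empty = np.zeros((5, 5), dtype=bool)
print(bool(empty.any()))  # False
```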

Related

Finding correspondence of edges for image matching

I have a challenging problem to solve. The figure shows green lines derived from one image and red lines, which are the edges derived from another image. Both images are taken with the same camera, so the intrinsic parameters are the same; only the exterior parameters differ, i.e. there is a slight rotation and translation between the two shots. As can be seen in the figure, the two sets of lines are pretty close. My task is to find the correspondence between the edges derived from the 1st image and the edges derived from the 2nd image.
I have gone through a few sources that suggest taking the nearest line segment as the correspondence, by calculating Euclidean distances between the endpoints of an edge of image 1 and the edges of image 2. However, this method is not acceptable for my case: there are edges in image 1 that lie near non-corresponding edges in image 2, and this would lead to a huge number of mismatches.
After a bit more research, a few more sources referred to the Hausdorff distance. I believe this could really be a solution to my problem, and the paper
"Rucklidge, William J. "Efficiently locating objects using the
Hausdorff distance." International Journal of Computer Vision 24.3
(1997): 251-270."
seemed to be really interesting.
If I understood it correctly, the paper formulates a function for calculating the translation of model edges to image edges. However, when implementing it in MATLAB, I'm completely lost as to where to begin. I would be much obliged if I could be directed to pseudocode for the same algorithm or a MATLAB implementation of it.
Additionally, I am aware of
"Apply Hausdorff distance to tile image classification" link
and
"Hausdorff regression"
However, I'm still unsure how to minimise the Hausdorff distance.
Note 1: Computational cost is not a concern for now, but a faster algorithm is preferred.
Note 2: I am open to other algorithms and methods to solve this, as long as there is pseudocode available or an open implementation.
Have you considered MATLAB's image registration tools?
With imregister (https://www.mathworks.com/help/images/ref/imregister.html), you can just pass in both images, one as the fixed (reference) image and one as the "moving" image, and it will register them using an affine transform. The function call is just:
[optimizer, metric] = imregconfig('monomodal');
output_registered = imregister(moving,fixed,'affine',optimizer,metric);
For better visualization, use the registrationEstimator command to open a GUI in which you can import the two images and play around to register them. From there you can export code for future images.
Furthermore, if you wish to account for non-rigid transforms, there is imregdemons (https://www.mathworks.com/help/images/ref/imregdemons.html), which works in much the same way.
You can compute the Hausdorff distance using MATLAB's bwdist function. You would compute the distance transform of one image, evaluate it at the edge points of the other, and take the maximum value. (You can also take the sum instead, in which case it is called the chamfer distance.) For this problem you'll probably want the symmetric Hausdorff distance, so you would do the computation in both directions.
Both the Hausdorff and chamfer distances measure the match quality of a particular alignment. To find the best registration you'll need to try multiple alignment transformations and evaluate them all, looking for the best one. As suggested in another answer, you may find it easier to use existing registration tools than to write your own.
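The symmetric Hausdorff and chamfer computations described above can be sketched in plain NumPy. A brute-force pairwise distance matrix stands in for the bwdist distance transform here, which is fine for modest numbers of edge points (function names are my own):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    # pairwise distances between the two edge-point sets (N,2) and (M,2)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    # symmetric version: evaluate in both directions, take the worse one
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def chamfer(a, b):
    # sum of nearest-neighbour distances instead of the maximum
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).sum() + d.min(axis=0).sum()

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [0.0, 3.0]])
print(hausdorff(a, b))  # 3.0
```

Minimising this over candidate transforms is then a search loop: apply each transform to one point set, evaluate hausdorff (or chamfer), and keep the best.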

3D mesh edge detection / feature line computation algorithm

I have a program that visualizes triangular meshes and allows the users to draw on the meshes using a pen. I want to have a "snapping" mode in my system. The snapping mode performs drawing corrections for the user in the sense that the user-drawn lines are snapped to the nearest edge (or the silhouette) of that part of the mesh.
I'm looking for an algorithm that computes the edges visible on the mesh from a given point of view. By edges, I'm referring to the outlines of the shape: corner points and the lines between them (similar to the definition of an edge in computer vision/image processing, such as Canny edges).
So far I've thought of two approaches for this:
Edge detection: so far I've only found this paper. Their method is understandable, yet the implementation is not trivial (due to tensor computations and some ambiguity in their explanations). The problem with this approach is that it produces an "edge strength value" in the range [0, 1] for every vertex, where a value of 1 indicates an edge vertex with high confidence. This introduces extra thresholding parameters into the system, which I'd rather not have. Their output looks like this (range [0, 1] scaled to [0, 65535]):
Rendering or non-photorealistic methods such as the one asked in this question or this paper. They seem to be able to create the silhouette that I'm after as can be seen below:
I'm not a graphics expert and as of yet I don't know whether their methods can be used for computation of the feature lines rather than rendering.
I was wondering if anybody has ideas about a good algorithm for what I want to do. Since the system is very interactive, performance is important. The snapping feature does not have to be enabled all the time, so if the method is computationally expensive, some delay when the "snapping enabled" mode is toggled can be tolerated while the algorithm computes the edges. Also, if you know of any implementation (preferably open source), I'd be grateful if you could share it with me.
There are two types of edges that you want to detect:
silhouette edges are viewpoint dependent; they correspond to the places where the line of sight is tangent to the surface. With a triangulated model they are easy to determine, as they are shared by a front-facing triangle and a back-facing one.
"angular" edges are viewpoint independent, formed by a discontinuity in the tangent-plane direction. As a triangulated model has this kind of discontinuity along every edge, there is no exact criterion to find them; just set a threshold on the dihedral angle formed by two triangles. The threshold must be chosen so that smooth patches do not trigger it.
With this approach you will find the wanted edges in 3D.
This is not enough, as some of them are hidden by other surfaces. You have the option of integrating them as edges in the 3D model and letting the rendering engine do its job, or, if you have the courage, implementing a hidden-line removal algorithm. (The Wikipedia article is a little terse.)
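Both per-edge tests are cheap to write down. A minimal NumPy sketch (function names are mine, not from any library): an edge is a silhouette edge when its two adjacent triangles face opposite ways relative to the view direction, and an "angular" edge when the dihedral angle between the faces exceeds a threshold.

```python
import numpy as np

def triangle_normal(v0, v1, v2):
    # unnormalised face normal from the triangle's winding order
    return np.cross(v1 - v0, v2 - v0)

def is_silhouette_edge(tri_a, tri_b, view_dir):
    # shared edge of a front-facing and a back-facing triangle:
    # the signed dot products with the view direction differ in sign
    na, nb = triangle_normal(*tri_a), triangle_normal(*tri_b)
    return (na @ view_dir) * (nb @ view_dir) < 0

def is_angular_edge(tri_a, tri_b, thresh_deg=30.0):
    # viewpoint independent: threshold the angle between face normals
    na, nb = triangle_normal(*tri_a), triangle_normal(*tri_b)
    cos = (na @ nb) / (np.linalg.norm(na) * np.linalg.norm(nb))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > thresh_deg

# a "tent": two triangles sharing the ridge (0,0,1)-(0,1,1),
# wound consistently so both normals point outward
tri_a = [np.array(p, float) for p in [(0, 0, 1), (0, 1, 1), (-1, 0, 0)]]
tri_b = [np.array(p, float) for p in [(0, 0, 1), (1, 0, 0), (0, 1, 1)]]
print(is_silhouette_edge(tri_a, tri_b, np.array([1.0, 0, 0])))  # side view: True
print(is_silhouette_edge(tri_a, tri_b, np.array([0.0, 0, 1])))  # top view: False
print(is_angular_edge(tri_a, tri_b))  # 90-degree ridge: True
```

In a real mesh you would build an edge-to-triangles adjacency map once and run these tests per edge; the silhouette test has to be re-run whenever the viewpoint changes.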
Since posting the question, something else came into my head. Since 2D edge detection is a very well-studied problem, one way of tackling the problem is performing 2D edge detection on the projection image of the mesh.
In other words, given a specific view of the mesh, one could generate a 2D image. A 2D edge detection algorithm (such as Canny edge detector) could then be run on the 2D image and the results can be back-projected to 3D to determine the silhouettes of the mesh in question. One possible advantage of this is simplicity!
Edit (2017):
Even though I moved away from this, I returned to this problem again for a different purpose. To anybody else looking into it: there is a paper worth reading that discusses the various kinds of contours that can be extracted from meshes ("Suggestive Contours for Conveying Shape" by DeCarlo et al.).
Working implementations of the methods discussed in the paper are available here.

Algorithm that transforms flat object shape in bitmap to collection of Vectors in 2D Coord System

I am currently trying to figure out an algorithm that would transform a bitmap like this:
to a collection of vectors in a two-dimensional coordinate system. Unfortunately, I have come up with nothing. Has anyone heard of an algorithm that solves this problem?
This is by no means the "best" method but I tried this a while back and it worked fairly well. The only thing I would request is that the shapes be filled in.
What I did was to treat the image as a density field and apply the marching squares algorithm to it. This of course generated far too many vertices (even when not sampling at native resolution), so I did some very primitive decimation: removing vertices where the adjacent edges are nearly straight (by removing I mean replacing a vertex and its two edges with a single edge). After iterating the decimation a few times I had a vector representation with few vertices.
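The decimation step can be sketched as follows (this is my own sketch, not the original code): walk the closed polyline produced by marching squares and drop every vertex where the turn angle between the incoming and outgoing edges is below a tolerance.

```python
import numpy as np

def decimate_once(points, angle_tol_deg=5.0):
    """Drop vertices whose adjacent edges are nearly collinear.

    points: (N, 2) array of vertices of a closed polyline, in order.
    """
    keep = []
    n = len(points)
    for i in range(n):
        a = points[i] - points[i - 1]          # incoming edge
        b = points[(i + 1) % n] - points[i]    # outgoing edge
        cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        turn = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if turn > angle_tol_deg:               # keep only real corners
            keep.append(points[i])
    return np.array(keep)

# a square with a redundant midpoint on the bottom edge
poly = np.array([[0, 0], [1, 0], [2, 0], [2, 2], [0, 2]], float)
print(len(decimate_once(poly)))  # 4 -- the midpoint (1, 0) is gone
```

As in the answer above, calling this a few times in a loop removes chains of nearly collinear vertices rather than just every other one.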
Improvements might involve turning the input into a signed distance field to improve marching squares or sampling along the square edges to find intersections with the original image (jump from black to white is an intersection).
For a real algorithm you'd want to search for "vectorizing".

Robust Line Extraction from Image

I need to extract ALL the wall edges (including floor/wall and wall/door intersections) from the following image. If I use Canny detection and the (probabilistic) Hough transform, it gives me too many redundant and unnecessary lines. I was wondering if I could refine the Canny image before the Hough transform is run on it.
Input Image
This following is the canny image given by the canny detection algorithm
I am using Canny parameters of 0 and 20 for the min and max thresholds. I can't use a very high value for the max threshold, otherwise I will lose the wall edges, since the gradient is low there compared to the rest of the image.
I thought of identifying high-density clusters of points in a window and setting them to zero if the density is above some threshold.
The following is the Canny image obtained after that. You can see the wall edges are preserved.
Can anyone suggest a better way of handling this problem? I mean refining the Canny image so that I can identify clusters of random points and get rid of them by setting them to zero. I was also thinking of checking for collinear points in a window, but I don't know how effective that would be.
Any comments would be welcome.
I think you can filter out the longest, nearly vertical lines after using the Hough transform. Check out this link.
SimpleCV is just a shortcut library wrapping OpenCV functions; you don't need to use it. I don't think you will encounter problems implementing the algorithm once you get the idea.
Edit: OK, I thought more about your problem. Setting clusters to zero as a preprocessing step is actually not bad. What about increasing the window size step by step? I mean, after obtaining the second image, apply another cluster filter with twice the window size and the same threshold. I think you can go on like this, as the wall edges are hard to cancel out.
Another way: use a rectangular window (width >= 5*height) for cluster filtering, as you need vertical edges.
Another way: play with erosion and dilation and filter out blobs having a large area.
Another way: look at the top part of the image, where there are only the wall edges and the chandelier. You can search horizontally for a white pattern, then follow its neighbours to measure the length of the run of connected points. Then filter out the longer ones.
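The cluster filter under discussion can be sketched with an integral image, so each window sum costs O(1); the window size is a parameter, so it can be grown step by step as suggested above (the function name and thresholds are my own choices):

```python
import numpy as np

def suppress_dense_clusters(edges, win=15, max_fraction=0.25):
    """Zero out Canny pixels lying in windows crowded with edge pixels.

    edges: boolean edge map from Canny. Windows whose fraction of edge
    pixels exceeds max_fraction are treated as texture/clutter; long,
    isolated wall edges fall well below the threshold and survive.
    """
    h, w = edges.shape
    r = win // 2
    # integral image: ii[i, j] = sum of edges[:i, :j]
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = edges.astype(float).cumsum(0).cumsum(1)
    i0 = np.clip(np.arange(h) - r, 0, h)
    i1 = np.clip(np.arange(h) + r + 1, 0, h)
    j0 = np.clip(np.arange(w) - r, 0, w)
    j1 = np.clip(np.arange(w) + r + 1, 0, w)
    counts = (ii[np.ix_(i1, j1)] - ii[np.ix_(i0, j1)]
              - ii[np.ix_(i1, j0)] + ii[np.ix_(i0, j0)])
    area = (i1 - i0)[:, None] * (j1 - j0)[None, :]
    return edges & (counts / area <= max_fraction)

edges = np.zeros((20, 20), dtype=bool)
edges[2:8, 2:8] = True   # dense cluster (clutter)
edges[:, 15] = True      # thin vertical line (wall edge)
out = suppress_dense_clusters(edges, win=5, max_fraction=0.5)
print(out[4, 4], out[10, 15])  # False True -- cluster gone, line kept
```

Swapping the square window for a wide rectangular one, as suggested above, only changes how i0/i1 and j0/j1 are built.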

Looking for an efficient algorithm to find the boundary of a swept 2d shape

I have a piecewise curve defining a generator (think of a brush) and a piecewise curve representing the path the brush follows. I wish to generate the boundary that the generator curve creates as it is swept along the path.
This is for an engineering CAD-like application. I am looking for a general algorithm or code samples in any language.
I suggest the following papers:
"Approximate General Sweep Boundary of a 2D Curved Object" by Jae-Woo Ahn, Myung-Soo Kim and Soon-Bum Lim
"Real Time Fitting of Pressure Brushstrokes" by Thierry Pudet
"The Brush-Trajectory Approach to Figure Specification: Some Algebraic-Solutions"
The actual answer we used is too complex to post in full, but in summary:
Sample the curve at regular intervals along the transformed path.
Build a triangular mesh by joining the vertices from each sample to the next and previous samples.
Identify candidate silhouette edges as those whose neighbouring triangles' normals point in opposite directions.
Split all edges at intersections using a sweepline algorithm. This is the tricky part: we found we had to do it with BigRational arithmetic, or subtle numerical errors crept in which broke the topology.
Convert the split edges into a planar graph.
Find the closest of the split edges to some external test point.
Walk around the outside of the graph (again, all tests are done using BigRational arithmetic).
The performance of the algorithm is not brilliant, due to the BigRational calculations. However, we tried many ways to do this in floating point, and we always hit numerical edge cases where the resulting graph was not planar. If the graph is not planar, you can't walk around the outside of it.
If you have an arbitrarily complex shape translating and rotating along an arbitrary path, figuring out the swept area (and its boundary) using an exact method is going to be a really tough problem.
You might consider instead using a rendering-based approach:
start with a black canvas
densely sample the path of your moving shape
for each sample position and rotation, render the shape as white
you now have a canvas with a fairly good estimate of the swept shape
You can follow this up with these steps:
(optional) do some image processing to try to fix up any artifacts introduced by too-sparsely sampling the path of the shape
(optional) pass the canvas through an edge-finding filter to get the boundary of the swept shape
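The rendering-based approach can be sketched without any graphics API by stamping the shape onto a boolean canvas; here a disk stands in for the swept shape, and a 4-neighbour test plays the role of the edge-finding filter (all names are illustrative):

```python
import numpy as np

def sweep(canvas, path, radius):
    """Stamp a filled disk at each densely sampled path position."""
    h, w = canvas.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for cx, cy in path:
        canvas |= (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return canvas

# densely sample a sinusoidal path, then sweep a radius-4 "brush"
t = np.linspace(0.0, 1.0, 200)
path = np.stack([10 + 30 * t, 25 + 10 * np.sin(2 * np.pi * t)], axis=1)
swept = sweep(np.zeros((50, 50), dtype=bool), path, radius=4)

# boundary: swept pixels that touch at least one unswept 4-neighbour
pad = np.pad(swept, 1)
interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
            & pad[1:-1, :-2] & pad[1:-1, 2:]) & swept
boundary = swept & ~interior
print(swept.sum() > boundary.sum() > 0)  # True
```

For a real shape you would rasterise its polygon at each sampled pose instead of a disk; the canvas resolution and the path sampling density together control how good the estimate is.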