I am working on image segmentation, edge detection, and opening and closing by reconstruction in MATLAB. I am trying to identify circular objects in a very noisy image, with the aim of creating a mask from the edges of these circular objects and then superimposing that mask on the original image. After applying opening and closing by reconstruction, the watershed function to find the objects' boundaries, and a binary mask of the original image, I am able to get edges corresponding to full and half circles. This method filters out most of the noise from the image, but very few of the identified circles are complete; I mostly get half circles.
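(For context, a minimal sketch of such an opening/closing-by-reconstruction plus watershed pipeline; the file name and structuring-element size are placeholders, not my exact code:)

```matlab
% Rough sketch of the pipeline described above; file name and disk size
% are placeholders to be tuned to the actual circle radius.
I = im2double(imread('circles.png'));
if size(I,3) == 3, I = rgb2gray(I); end
se = strel('disk', 10);                  % ~ expected circle radius

Ie      = imerode(I, se);
Iobr    = imreconstruct(Ie, I);          % opening by reconstruction
Iobrd   = imdilate(Iobr, se);
Iobrcbr = imcomplement(imreconstruct(imcomplement(Iobrd), imcomplement(Iobr)));
                                         % closing by reconstruction
bw = imbinarize(Iobrcbr);                % binary mask

D = -bwdist(~bw);                        % distance transform for watershed
D(~bw) = Inf;
L = watershed(D);
edges = (L == 0) & bw;                   % ridge lines = object boundaries
```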
To get the edges of full circles, I tried the canny function for edge detection. It finds the complete edges of most of the circular objects, but it also traces the edges of the noise in the image, which prevents me from creating a good mask to superimpose on the original image.
The question, then, is whether there is an efficient method to get rid of the noise picked up by the canny function, or whether it is possible to run canny edge detection only on objects of a certain radius, since the circular objects I want to identify all have a specific radius. Attached is the original image; what causes the noise are the dark vertical bands or shadows and the bright beams of light on top of the circles. P.S. The MATLAB function "imfindcircles" does not work on my image because of the broken circular edges and the background noise.
Original image of circular objects and dark vertical lines and bright spots as noise
You can pre-process the given image before applying the Hough transform. The problem you are seeing is caused by the uneven distribution of brightness across the image. You can apply a filtering technique such as homomorphic filtering before edge detection and the Hough transform; homomorphic filtering normalizes the brightness across an image and increases contrast. Once you apply canny edge detection to this image, you can use an edge-linking algorithm to fill the gaps between detected edges and get better results from the Hough transform.
The process goes like this,
image --> homomorphic filtering --> canny edge detection --> edge linking --> Hough transform
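A minimal MATLAB sketch of the homomorphic-filtering step, assuming a grayscale input; the file name, cutoff D0, and gains gammaL/gammaH are placeholders you would tune:

```matlab
% Homomorphic filtering: log -> FFT -> high-emphasis filter -> IFFT -> exp.
I = im2double(imread('circles.png'));    % hypothetical file name
if size(I,3) == 3, I = rgb2gray(I); end

logI = log(I + 1e-6);                    % log separates illumination * reflectance
F = fft2(logI);
[M, N] = size(logI);
[u, v] = meshgrid(-floor(N/2):ceil(N/2)-1, -floor(M/2):ceil(M/2)-1);
D2 = u.^2 + v.^2;

% Gaussian high-emphasis filter: suppress low frequencies (illumination),
% boost high frequencies (reflectance). Gains and cutoff are guesses.
gammaL = 0.4; gammaH = 2.0; c = 1; D0 = 30;
H = (gammaH - gammaL) * (1 - exp(-c * D2 / D0^2)) + gammaL;
H = ifftshift(H);                        % align filter with unshifted FFT

J = exp(real(ifft2(H .* F)));            % back from the log domain
J = mat2gray(J);                         % normalize to [0,1]

BW = edge(J, 'canny');                   % Canny on the flattened image
```

The high-emphasis filter flattens the slowly varying shadows and bright bands while keeping the circle edges, which is what the later Hough stage needs.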
I am working on a task where I use the Canny edge detector to compute an edge image, with white pixels representing edges, and then I need the coordinates of these edge pixels so I can send them into another function.
Getting the coordinates of edge pixels from the edge-image matrix is usually done with OpenCV's cv::findContours(), but the algorithm inside that function is complicated and full of branching decisions, so it is not differentiable. I want to make the step that turns an edge image into 2D coordinates part of a deep learning model, so I need a differentiable and more straightforward process.
I couldn't find one; does anyone have any ideas? Thanks!
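One possible workaround, not from the thread itself: avoid producing a variable-length pixel list (that hard selection step is what breaks differentiability) and instead compute one edge-strength-weighted "soft" coordinate per grid cell. A minimal MATLAB sketch with made-up sizes:

```matlab
% Differentiable "soft" coordinate extraction: one soft-argmax per cell.
E = rand(64, 64);                 % stand-in for a soft edge map in [0,1]
cellSz = 8;                       % assumed cell size (64/8 = 8x8 cells)
[X, Y] = meshgrid(1:size(E,2), 1:size(E,1));

nCells = (size(E,1)/cellSz) * (size(E,2)/cellSz);
coords = zeros(nCells, 3);        % rows: [x y confidence]
i = 0;
for r = 1:cellSz:size(E,1)
    for c = 1:cellSz:size(E,2)
        Ec = E(r:r+cellSz-1, c:c+cellSz-1);
        Xc = X(r:r+cellSz-1, c:c+cellSz-1);
        Yc = Y(r:r+cellSz-1, c:c+cellSz-1);
        w  = Ec(:) / (sum(Ec(:)) + eps);   % normalized weights (soft-argmax)
        i = i + 1;
        coords(i,:) = [sum(w .* Xc(:)), sum(w .* Yc(:)), mean(Ec(:))];
    end
end
% Every operation above is differentiable w.r.t. E, so it can sit inside a
% network; downstream layers can gate on the confidence column instead of
% hard-selecting pixels.
```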
Recently I have been trying to find ways to detect lines in CT scans. It seems that the whole Hough transform family and similar algorithms need to work on the contours produced by an edge detector. The contours I get are not what I want, and those two steps create a lot of short line fragments, which has me stuck. Can anyone tell me what to do here? Are there methods or algorithms that work directly on the grayscale image rather than on a binary image? Using OpenCV or NumPy would be perfect! Many thanks!
Below is the test picture. I am trying to detect the straight lines at the top left and filter out the others.
You have a pretty consistent background, so I would:

Detect contours: take any pixel that is not the background color but has a background-colored neighbor.

Segment/label the contour points to form ordered "polylines":

1. create an ID buffer and set ID = 0 (background or object pixels)
2. find any not-yet-processed contour pixel
3. if none is found, stop
4. flood fill the contour in the ID buffer with ID
5. increment ID
6. go to 2

Now the ID buffer contains your labeled contours. For each contour, create an ordered list of the pixels forming its "polyline"; to speed this up you can remember each contour's start point from step #2, or even build the list directly in step #2.

Detect straight lines in the contour "polylines". That part is simple: along a straight line, the slope angle between neighboring points stays roughly constant. You can also apply regression or whatever, but the slopes or unit direction vectors must be computed from pixels that are at least 5 pixels apart from each other; otherwise rasterization artifacts will corrupt the results.
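A hedged MATLAB sketch of this approach, where bwboundaries already yields the ordered contour polylines and so collapses the labeling steps; the file name, threshold, and angle tolerance are guesses, and porting it to cv2.findContours/NumPy is straightforward:

```matlab
% Extract ordered contour "polylines" and flag runs of consistent slope.
I = imread('ct_slice.png');              % hypothetical input
if size(I,3) == 3, I = rgb2gray(I); end
BW = I > 40;                             % consistent background -> threshold

B = bwboundaries(BW, 8, 'noholes');      % ordered pixel list per contour
step = 5;                                % sample points >= 5 px apart
for k = 1:numel(B)
    p = B{k};                            % [row col], ordered along contour
    if size(p,1) < 3*step, continue; end
    q   = p(1:step:end, :);              % subsample to beat pixelation
    d   = diff(q);                       % direction vectors between samples
    ang = atan2(d(:,1), d(:,2));         % slope angle of each segment
    straight = abs(diff(ang)) < deg2rad(5);  % near-constant slope = straight
    % ... group consecutive true entries of 'straight' into line segments
end
```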
see some related stuff:
Efficiently calculating a segmented regression on a large dataset
Given n points on a 2D plane, find the maximum number of points that lie on the same straight line
I'm working on a simple mapping application for fun, and one of the things I need to do is to find (and color) all of the points that are visible from the current location. In this case, points are pixels. My map is a raster image where transparent pixels are open space, and any other pixels are opaque. (There are no semi-transparent pixels; alpha is either 0 or 100%.) In this sense, it's sort of like a regular flood fill, with the constraint that each filled pixel has to have a clear line-of-sight to the origin point. The following image shows a couple of such areas colored in (the tiny crosshairs are the origin points, and white = transparent):
(http://tinyurl.com/nf3nqa4)
In addition, what I am ultimately interested in are the points that "border" other colors, i.e., I want the list of points that make up the edge of the visible region.
My current and very inefficient solution is the modified flood-fill I described above. This approach returns correct results, but due to the need to iterate every pixel on a line to the origin for every pixel in the flood fill, it's very slow. My images are downsized and quantized, but I still need about 1MP for acceptable accuracy, and typical LoS areas are at least 100,000 pixels each.
I may well be using the wrong search terms, but I haven't been able to find any discussion of algorithms that would solve this (rasterized) LoS case.
I suspect that this could be done more efficiently if your "walls" were represented as equations (for example, polygons/triangles, circles, or ellipses) rather than simply as pixels in a raster image.
It would then be like raytracing (search for this term) in 2D. In other words, you could consider the ray/line from each pixel in the image to the point of interest and color the pixel only if it does not intersect with any object.
This method does require you to test the intersection for each pixel in the image with each object; however, if you look up raytracing you will find a number of efficient methods for testing these intersections. They will mostly be for the 3D case but it should be straightforward to convert them to 2D.
There are 3D raytracers that are very fast on MUCH larger images so this should be very doable.
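To make that concrete, here is a brute-force 2D sketch in MATLAB; the viewpoint, scene size, and circular obstacles are all made-up assumptions, and real raytracers add acceleration structures on top of the same per-ray test:

```matlab
% Visibility by 2D ray-vs-circle tests: a pixel is visible iff the segment
% from the origin to the pixel misses every obstacle disc.
origin = [250 250];                      % viewpoint (x, y)
obst   = [100 120 30; 380 300 45];       % circles as rows: [cx cy r]
W = 500; H = 500;
vis = false(H, W);

for y = 1:H
    for x = 1:W
        d = [x y] - origin;              % ray from origin to this pixel
        blocked = false;
        for k = 1:size(obst, 1)
            c = obst(k, 1:2);  r = obst(k, 3);
            % closest point on the segment origin->pixel to the centre
            t = max(0, min(1, dot(c - origin, d) / max(dot(d, d), eps)));
            p = origin + t * d;
            if norm(p - c) <= r, blocked = true; break; end
        end
        vis(y, x) = ~blocked;            % visible iff no obstacle was hit
    end
end
imshow(vis)                              % white = line-of-sight region
```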
You can try a Delaunay triangulation on each color; that is, you can try to recover the shape of each colored region with DT.
I have a large matrix (an image) and a small template, and I would like to convolve the small matrix with the larger one. For example, the blue region is the section I want used for the convolution. In other words, I could run the convolution over the whole image, but since that increases the CPU time, I would like to focus only on the desired blue part.
Is there any command in MATLAB that can be used for this convolution? Or how can I force the convolution function to use only that specific irregular section?
I doubt you can use an irregular shape directly (fast convolution is done with a 2D FFT, which requires a rectangular region). You could optimize it by finding the shape's bounding box and thus discarding the empty border.
@Nicole: I would go for fft2(im) .* fft2(smallIm, size(im,1), size(im,2)), which, after an ifft2, is the equivalent of a circular conv2(im, smallIm); to match the linear conv2 result, zero-pad both FFTs to size(im) + size(smallIm) - 1.
As far as recognizing the irregular shape goes, you can use edge detection such as canny and find the left-most, right-most, top-most, and bottom-most points (canny returns a binary (1,0) image) and build a bounding box from those values. However, this takes some time to compute, and I am not sure how much faster the whole thing will be.
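Putting the two answers together, a hedged MATLAB sketch of the bounding-box idea; the image, mask, and template here are synthetic stand-ins:

```matlab
% Convolve only a rectangular crop around the irregular region.
im      = rand(480, 640);                % stand-in large image
mask    = false(480, 640);               % stand-in irregular "blue" region
mask(120:300, 200:450) = true;
smallIm = ones(15) / 15^2;               % stand-in template (box filter)

[r, c] = find(mask);                     % bounding box of the region
pad = floor(size(smallIm) / 2);          % margin so crop edges stay valid
r1 = max(1, min(r) - pad(1));  r2 = min(size(im,1), max(r) + pad(1));
c1 = max(1, min(c) - pad(2));  c2 = min(size(im,2), max(c) + pad(2));

out = zeros(size(im));
out(r1:r2, c1:c2) = conv2(im(r1:r2, c1:c2), smallIm, 'same');
out(~mask) = 0;                          % keep only the irregular section
```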
I am writing a program in Matlab to detect a circle.
I've already managed to detect shapes such as squares, rectangles, and triangles, basically by searching for corners and determining which shape it is based on the distances between them. The images are black and white, with black as the background and white as the shape, so to find the corners I just search each pixel in the image until I find a white pixel.
However I just can't figure out how I can identify the circle.
Here is an example of what a circle input would look like:
It is difficult to say what the best method is without more information: for example, whether more than one circle may be present, whether it is always centred in the image, and how resilient the algorithm needs to be to distortions. Also whether you need to determine the location and dimensions of the shape or simply a 'yes'/'no' output.
However a really simple approach, assuming only one circle is present, is as follows:
Scan the image from top to bottom until you find the first white pixel at (x1,y1)
Scan the image from bottom to top until you find the last white pixel at (x2,y2)
Derive the diameter of the suspected circle as y2 - y1
Derive the centre of the suspected circle as ((x1+x2)/2, y1+(y2-y1)/2)
Now you are able to score each pixel in the image as to whether it matches this hypothetical circle or not. For example, if a pixel is inside the suspected circle, score 0 if it is white and 1 if it is black, and vice versa if it is outside the suspected circle.
Sum the pixel scores. If the result is zero then the image contains a perfect circle. A higher score indicates an increasing level of distortion.
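A hedged MATLAB sketch of this procedure; the input file and the acceptance tolerance are assumptions:

```matlab
% Score how well the white shape matches the circle implied by its
% top-most and bottom-most pixels.
BW = imread('shape.png') > 0;            % assumed single-channel, white shape
[rows, cols] = find(BW);

y1 = min(rows);  y2 = max(rows);         % top-most / bottom-most white rows
x1 = mean(cols(rows == y1));             % x on the top row (mean if several)
x2 = mean(cols(rows == y2));             % x on the bottom row

d  = y2 - y1;                            % suspected diameter
cx = (x1 + x2) / 2;                      % suspected centre
cy = y1 + d / 2;

[X, Y] = meshgrid(1:size(BW,2), 1:size(BW,1));
inside = (X - cx).^2 + (Y - cy).^2 <= (d/2)^2;

% white pixels outside the circle + black pixels inside it
score = sum(BW(~inside)) + sum(~BW(inside));
isCircle = score < 0.05 * sum(inside(:));   % guessed distortion tolerance
```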
I think you should read about these two topics:
Theoretical:
Binary images
Hough transform
Matlab:
Circle Detection via Standard Hough Transform
Hough native in matlab
Binary images
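If the radius is roughly known, a minimal circular Hough accumulator is also easy to hand-roll; a hedged MATLAB sketch, where the file name, radius, and vote threshold are assumptions:

```matlab
% Circular Hough transform for a roughly known radius R: every edge pixel
% votes for all candidate centres on a ring of radius R around it.
I  = imread('circle.png');               % assumed grayscale input
BW = edge(I, 'canny');

R = 20;                                  % expected circle radius in pixels
theta = linspace(0, 2*pi, 90);
[ey, ex] = find(BW);

votes = zeros(0, 1);
for k = 1:numel(ex)
    cx = round(ex(k) + R * cos(theta));  % candidate centres on a ring
    cy = round(ey(k) + R * sin(theta));
    ok = cx >= 1 & cx <= size(BW,2) & cy >= 1 & cy <= size(BW,1);
    votes = [votes; sub2ind(size(BW), cy(ok), cx(ok)).']; %#ok<AGROW>
end
acc = reshape(accumarray(votes, 1, [numel(BW) 1]), size(BW));

[bestVotes, idx] = max(acc(:));
[cyBest, cxBest] = ind2sub(size(BW), idx);   % most-voted centre
isCircle = bestVotes > 0.3 * 2*pi*R;     % guessed fraction of perimeter votes
```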