Remove incorrect pixels from polygons - image

I have generated a set of points that form the border of polygonal areas. In the image below there is an example of what I mean. The black "spots" should not be there and the line should be "clean". I need to remove those points.
Now the problem is twofold. First, I don't know what this situation is called. It's not aliasing or a jagged edge, because those points are not produced by a line-drawing algorithm, but by a contour generator.
And if not the name, then at least a push in the right direction on how to solve this would help me.
So far I have tried converting this to a chain code and simplifying it, but that didn't work very well and it was rather slow. Converting those dots to geometry and simplifying the geometry with the Ramer algorithm works better, but it destroys some "fine" detail that should stay.

You can try the following:
First search for these spots. From your figure it seems that the spots look something like the following:
1 1
1 1
That is, a square matrix of colored pixels. Such spots can easily be found by traversing the pixel matrix once.
Now once you identify these spots, you will need to check the neighbouring pixels to see what pattern the curve/line is following, and accordingly delete the unnecessary pixels. A sketch of this scan is below.
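As a minimal sketch, assuming the image is a binary grid (1 = curve pixel): the pass below finds 2x2 foreground blocks, and the removal heuristic (clear any block pixel with no foreground neighbour outside the block, since the curve does not actually pass through it) is a naive choice of mine, not a canonical rule:

    #include <vector>

    using Grid = std::vector<std::vector<int>>;   // 1 = curve pixel, 0 = empty

    // Counts foreground neighbours of (y, x) lying outside the 2x2 block
    // whose top-left corner is (by, bx).
    static int outsideNeighbours(const Grid& g, int y, int x, int by, int bx) {
        int n = 0;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int ny = y + dy, nx = x + dx;
                if (ny < 0 || nx < 0 || ny >= (int)g.size() || nx >= (int)g[0].size())
                    continue;
                bool inBlock = (ny == by || ny == by + 1) && (nx == bx || nx == bx + 1);
                if (!inBlock && g[ny][nx]) ++n;
            }
        return n;
    }

    // One pass over the pixel matrix: wherever all four pixels of a 2x2 block
    // are set, clear any of them that the curve does not pass through.
    void removeSquareSpots(Grid& g) {
        for (int y = 0; y + 1 < (int)g.size(); ++y)
            for (int x = 0; x + 1 < (int)g[0].size(); ++x)
                if (g[y][x] && g[y][x + 1] && g[y + 1][x] && g[y + 1][x + 1])
                    for (int dy = 0; dy < 2; ++dy)
                        for (int dx = 0; dx < 2; ++dx)
                            if (outsideNeighbours(g, y + dy, x + dx, y, x) == 0)
                                g[y + dy][x + dx] = 0;
    }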

Separate the contour curves and clean each one by itself.
For each contour:
If the curve is not closed, close it with a temporary line.
Flood-fill the contour curve to get a solid monochrome figure.
Run contour detection on the result. The edge of a monochrome figure will be a clean line.
Flood-fill the area outside the new contour curve.
Run contour detection one last time to restore the original contour.
Re-assemble the contours into a single bitmap. (A sketch of the fill-and-recontour core follows.)
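If OpenCV is available, the core of this (steps 2 and 3) might look roughly like the following; cleanContour and its signature are my own sketch, and the outside-fill plus final pass (steps 4 and 5) would repeat the same idea on the inverted image:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Turn one closed contour into a solid figure, then re-detect its edge;
    // the re-detected edge of the monochrome figure is a clean line.
    std::vector<cv::Point> cleanContour(const std::vector<cv::Point>& curve,
                                        cv::Size size) {
        // "Flood-fill" the contour by drawing it filled on a blank bitmap.
        cv::Mat solid = cv::Mat::zeros(size, CV_8UC1);
        std::vector<std::vector<cv::Point>> tmp{curve};
        cv::drawContours(solid, tmp, 0, cv::Scalar(255), cv::FILLED);

        // Run contour detection on the solid monochrome figure.
        std::vector<std::vector<cv::Point>> clean;
        cv::findContours(solid, clean, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
        return clean.empty() ? std::vector<cv::Point>() : clean[0];
    }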

Related

Fitting a mesh and a drawing together

Suppose you're trying to render a user's freehand drawings using a 2D triangular mesh. Start with a plain regular mesh of triangles and color their edges to match the drawing as closely as possible. To improve the results, you can move the vertices of the mesh slightly, but keep them within a certain distance of where they would be in a regular mesh so the mesh doesn't become a mess. Let's say that 1/4 of the length of an edge is a fair distance, giving the vertices room to move while keeping them out of each other's personal space.
Here is a hand-made representation of roughly what we're trying to do. Since the drawing is coming freehand from the user, it's a series of line segments taken from mouse movements.
The regular mesh is slightly distorted to allow the user's drawing to be better represented by the edges of the mesh. Unfortunately the end result looks quite bad, but perhaps we could have somehow distorted the drawing to better fit the mesh, and the combination of the two distortions would have created something far more recognizable as the original drawing.
The important thing is to preserve angles, so if the user draws a 90-degree corner it ends up looking close to a 90-degree corner, and if the user draws a straight line it doesn't end up looking like a zigzag. Aside from that, there's no reason why we shouldn't change the drawing in other ways, like translating it, scaling it and so on, because we don't need to exactly preserve distances.
One tricky test case is a perfectly vertical line. The triangular mesh in the image above can easily handle horizontal lines, but a naive approach would turn a vertical line into a jagged mess. The best technique seems to be to horizontally translate the line until it passes through each horizontal edge alternating between 1/4 and 3/4 of the way along the edge. That way we can nudge the vertices to the left or right by 1/4 and get a perfect vertical line. That's obvious to a person, but how can an algorithm be made to see that? It involves moving the line further away from vertices, which is the opposite of what we usually want.
Is there some trick to doing this? Does anyone know of a simple algorithm that gives excellent results?

How to smoothen a jagged border of an image into a straight line?

I have an image like this (thresholding, noise removal, etc. completed):
My final output should be an image without any of the jagged edges, and smaller than the given image. By this, I mean to say that the only difference between the two images must be that in the new one the jagged edges are removed, not filled in. Like so (the final image must be the region within the red border; the red border is shown only for explanation):
I was thinking of something along the lines of using Hough transforms, or of using dilations and then erosions, but nothing seems to be working (probably my fault, because I have not worked in too much detail with them before).
Note that the language I'd like to do this in is MATLAB.
There are 2 primary aims to this:
To get the edges themselves, using Hough transforms
So that the 'Extrema' property returns the desired points when using regionprops, like so:
The question, in a more concise form:
How would I go about extracting this T in MATLAB, such that it does not have rugged edges, but the overall figure is not larger than the original, as shown in the second figure above? In other words, what set of transformations (in MATLAB) would I use to smoothen the borders of the image with as little area lost as possible (but no area added), such that the ruggedness disappears?
Is there a more efficient way of extracting the corner (extrema) points as shown in figure 2 above without requiring to go through step 1?
EDIT:
A few more sample images:
NB: All images in consideration will be composed of rectangles approximately at 90° to each other, and no other figure. So smoothening an image with a curved edge, for example, would be beyond the scope of an answer to this question (or even, for that matter, a trapezium; although I think that smoothening 2 straight edges should be the same, irrespective of whether the edge has another edge parallel to it or not).
Here are a few more images, for reference:
I'm not sure if my answer would satisfy your requirements. I'm putting it here because I think it's too long for a comment.
Since you want the final output to be smaller than the input image, erode the input image. You can pick an appropriate kernel size.
Perform corner detection on this eroded image. This will give you all the strong corners, but without any order.
Trace the boundaries of the eroded image. This should give you an ordered list of boundary pixels.
Now, with the help of these ordered boundary points, you can order the corners you found earlier.
Filter for corner points that form an angle of approximately 90 degrees. You can do this by considering each set of 3 consecutive ordered corner points (the two green points and the red point in between in the image below; this is just for illustration, not corner points that I calculated). At the end of this operation you have all the red points in the image below, which lie at strong corners, in addition to the other yellow and green corner points.
Now you can either find the equation of the line connecting each 2 consecutive red points,
or
fit a least-squares line to the points between (and including) each 2 consecutive red points.
Since you did all this processing on an eroded image that is essentially smaller than the original image, you should get a smaller shape. A rough sketch of the first few steps is below.
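The question asks for MATLAB (imerode, corner, and bwtraceboundary are the relevant built-ins there), but for illustration here is the same pipeline sketched in C++ with OpenCV; the function name and the kernel size are my own choices:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch of steps 1-3: erode, find strong corners, trace the boundary.
    // Ordering the corners and filtering for ~90-degree angles (steps 4-5)
    // are left as comments, since they are plain bookkeeping on the results.
    void smoothedShape(const cv::Mat& binary) {      // binary: CV_8UC1, 0 or 255
        // 1. Erode so the result stays strictly inside the original figure.
        cv::Mat eroded;
        cv::erode(binary, eroded,
                  cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));

        // 2. Strong corners, in no particular order.
        std::vector<cv::Point2f> corners;
        cv::goodFeaturesToTrack(eroded, corners, 20, 0.01, 10);

        // 3. Ordered boundary pixels of the eroded shape.
        std::vector<std::vector<cv::Point>> boundary;
        cv::findContours(eroded, boundary, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

        // 4.-5. Walk the boundary to order the corners, keep those whose
        //        neighbours form roughly 90 degrees, then connect or
        //        least-squares-fit lines between consecutive kept corners.
    }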

Algorithm to compute set of bins bounded by a discrete contour

On a discrete grid-based plane (think: pixels of an image), I have a closed contour that can be expressed either by:
a set of 2D points (x1,y1);(x2,y2);(x3,y3);...
or a 4-connected Freeman code, with a starting point: (x1,y1) + 00001112...
I know how to switch from one to the other of these representations. This will be the input data.
I want to get the set of grid coordinates that are bounded by the contour.
Consider this example, where the red coordinates are the contour, and the gray one the starting point:
If the gray coordinate is, say, at (0,0), then I want a vector holding:
(1,1),(2,1),(3,1),(3,2)
Order is not important, and the output vector can also hold the contour itself.
Language of choice is C++, but I'm open to any existing code, algorithm, library, pointer, whatever...
I thought that maybe CGAL would have something like this, but I am unfamiliar with it and couldn't find my way through the manual, so I'm not even sure.
I also looked toward OpenCV, but I think it does not provide this algorithm (though I could be wrong?).
I was thinking about finding the bounding rectangle, then checking each of the points in the rectangle to see if they are inside/outside, but this seems suboptimal. Any ideas?
One way to solve this is with drawContours, since you already have the contour points.
Create a blank Mat and draw the contour with thickness = 1 (the boundary only).
Create another blank Mat and draw the contour with thickness = CV_FILLED (the whole area, including the boundary).
Now subtract the boundary from the filled area, e.g. bitwise_and of the filled Mat with the inverted boundary Mat (you get the filled area excluding the boundary).
Finally, check for non-zero pixels; their coordinates are the bounded bins. A sketch is below.
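A short OpenCV/C++ sketch of the above, assuming contour already holds the red coordinates (function and variable names are mine):

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point> innerBins(const std::vector<cv::Point>& contour,
                                     cv::Size size) {
        std::vector<std::vector<cv::Point>> cs{contour};

        cv::Mat boundary = cv::Mat::zeros(size, CV_8UC1);
        cv::drawContours(boundary, cs, 0, cv::Scalar(255), 1);        // boundary only

        cv::Mat filled = cv::Mat::zeros(size, CV_8UC1);
        cv::drawContours(filled, cs, 0, cv::Scalar(255), cv::FILLED); // area + boundary

        cv::Mat interior;
        cv::bitwise_and(filled, ~boundary, interior);  // filled minus boundary

        std::vector<cv::Point> bins;                   // the bounded coordinates
        cv::findNonZero(interior, bins);
        return bins;
    }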

2D raster image line of sight algorithm

I'm working on a simple mapping application for fun, and one of the things I need to do is to find (and color) all of the points that are visible from the current location. In this case, points are pixels. My map is a raster image where transparent pixels are open space, and any other pixels are opaque. (There are no semi-transparent pixels; alpha is either 0 or 100%.) In this sense, it's sort of like a regular flood fill, with the constraint that each filled pixel has to have a clear line-of-sight to the origin point. The following image shows a couple of such areas colored in (the tiny crosshairs are the origin points, and white = transparent):
(http://tinyurl.com/nf3nqa4)
In addition, what I am ultimately interested in are the points that "border" other colors, i.e., I want the list of points that make up the edge of the visible region.
My current and very inefficient solution is the modified flood-fill I described above. This approach returns correct results, but due to the need to iterate every pixel on a line to the origin for every pixel in the flood fill, it's very slow. My images are downsized and quantized, but I still need about 1MP for acceptable accuracy, and typical LoS areas are at least 100,000 pixels each.
I may well be using the wrong search terms, but I haven't been able to find any discussion of algorithms that would solve this (rasterized) LoS case.
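For reference, the per-pixel visibility test that makes the naive flood fill slow amounts to walking a Bresenham line from each candidate pixel to the origin, roughly as below (the flat alpha-array layout, nonzero = opaque, is my assumption):

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Is the straight line from (x0, y0) to (x1, y1) free of opaque pixels?
    bool visible(const std::vector<uint8_t>& alpha, int w,
                 int x0, int y0, int x1, int y1) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        while (x0 != x1 || y0 != y1) {
            if (alpha[y0 * w + x0] != 0) return false;  // opaque pixel blocks the ray
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
        return true;
    }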
I suspect that this could be done more efficiently if your "walls" were represented as equations rather than simply pixels in a raster image. For example, polygons/triangles, circles, ellipses.
It would then be like raytracing (search for this term) in 2D. In other words, you could consider the ray/line from each pixel in the image to the point of interest and color the pixel only if it does not intersect with any object.
This method does require you to test the intersection for each pixel in the image with each object; however, if you look up raytracing you will find a number of efficient methods for testing these intersections. They will mostly be for the 3D case but it should be straightforward to convert them to 2D.
There are 3D raytracers that are very fast on MUCH larger images so this should be very doable.
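The 2D ray/object intersection at the heart of that approach is just a segment-segment test; a standard orientation-based sketch (names are mine, and the touching/collinear cases are ignored for brevity):

    // Does segment p1-p2 properly cross segment q1-q2? A pixel is visible
    // when this is false for the ray to the origin against every wall segment.
    struct Pt2 { double x, y; };

    static double cross(Pt2 o, Pt2 a, Pt2 b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    bool segmentsCross(Pt2 p1, Pt2 p2, Pt2 q1, Pt2 q2) {
        double d1 = cross(q1, q2, p1), d2 = cross(q1, q2, p2);
        double d3 = cross(p1, p2, q1), d4 = cross(p1, p2, q2);
        // The segments cross iff each one straddles the line through the other.
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }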
You can try a Delaunay triangulation on each color. I mean, you can try to find the shape of each color with DT.

Raster path following algorithms

I've got a raster grid of values that looks something like the image below (white is high values, the black background value is zero).
I'm trying to write some kind of path-following code to start at the end of one of the lines and trace to the other end, going via the highest possible values (that is, the whiter the pixels chosen to be in the line the better) but still getting to the other end.
I've been struggling with this for a while, and can't seem to get anything I try to work. So I wondered, has a generic algorithm already been developed for this sort of problem? I've done a lot of searching, but most path algorithms seem to be designed to work on vectors/networks, not raster grids like this.
Any ideas?
The simplest idea probably is to use the A* algorithm, where each pixel is a node, and the cost of the node is the pixel darkness.
Update: Found a nice tutorial.
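As a concrete starting point, here is a minimal Dijkstra-style sketch, i.e. A* with a zero heuristic, over the pixel grid; the step cost (1 plus the pixel's darkness) and the flat-array layout are illustrative assumptions:

    #include <climits>
    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    // Cheapest path from start to goal over a grayscale raster, where stepping
    // onto a pixel costs 1 + its darkness (255 - brightness), so the path
    // prefers white pixels but still reaches the other end. index = y * w + x.
    std::vector<int> brightestPath(const std::vector<uint8_t>& gray,
                                   int w, int h, int start, int goal) {
        std::vector<long> dist(w * h, LONG_MAX);
        std::vector<int> prev(w * h, -1);
        using QN = std::pair<long, int>;              // (cost so far, pixel)
        std::priority_queue<QN, std::vector<QN>, std::greater<QN>> q;
        dist[start] = 0;
        q.push({0, start});
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        while (!q.empty()) {
            auto [d, u] = q.top();
            q.pop();
            if (u == goal) break;
            if (d > dist[u]) continue;                // stale queue entry
            for (int k = 0; k < 4; ++k) {
                int x = u % w + dx[k], y = u / w + dy[k];
                if (x < 0 || y < 0 || x >= w || y >= h) continue;
                int v = y * w + x;
                long nd = d + 1 + (255 - gray[v]);    // darker = more expensive
                if (nd < dist[v]) { dist[v] = nd; prev[v] = u; q.push({nd, v}); }
            }
        }
        std::vector<int> path;                        // backtrack goal -> start
        if (dist[goal] == LONG_MAX) return path;      // unreachable
        for (int v = goal; v != -1; v = prev[v]) path.push_back(v);
        return path;
    }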
One way to do this:
Filter the image to get it closer to black and white only pixels.
Draw a line through the white pixels. To do this, start at a white pixel. Draw a line from that pixel to each other white pixel a distance of 2 (or 3 or so) away, but ignore pixels near a previous line. Keep going until you've covered every pixel not close (2 or 3 pixels) from a line. You'll have to do some minor adjustments here to get it to work well.
Connect the endpoints of the lines you've drawn. If there are two endpoints near (1 or 2 pixels?) one another, connect them. You should end up with a few lines made up of a lot of short segments, possibly with some loops and forks.
Get rid of any small loops in the lines, and separate the lines at forks, so you have a few lines made of a lot of short segments.
Reduce points. For each line, check to see if it is nearly straight. If so, remove all the interior points. If not, check the two halves of the line recursively until you get down to the minimum segment lengths (see the sketch after this list).
You can optionally fit a spline curve through the lines at this point.
Profit.
It will take some tweaking to get it to work well, but it is possible to do it this way. One other variant is to outline the white sections, if they are wider than 1 or 2 or 3 pixels, and combine the double lines afterward.
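The point-reduction step is essentially the Ramer-Douglas-Peucker scheme; a minimal sketch, with the tolerance tol standing in for "nearly straight":

    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };

    // Perpendicular distance from p to the line through a and b.
    static double lineDistance(Pt p, Pt a, Pt b) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len = std::hypot(dx, dy);
        if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
        return std::fabs(dx * (p.y - a.y) - dy * (p.x - a.x)) / len;
    }

    // If every interior point is within tol of the end-to-end line, keep only
    // the endpoints; otherwise split at the farthest point and recurse.
    void reducePoints(const std::vector<Pt>& pts, size_t lo, size_t hi,
                      double tol, std::vector<Pt>& out) {
        double worst = 0.0;
        size_t split = lo;
        for (size_t i = lo + 1; i < hi; ++i) {
            double d = lineDistance(pts[i], pts[lo], pts[hi]);
            if (d > worst) { worst = d; split = i; }
        }
        if (worst > tol) {                    // not nearly straight: recurse
            reducePoints(pts, lo, split, tol, out);
            reducePoints(pts, split, hi, tol, out);
        } else {
            out.push_back(pts[lo]);           // nearly straight: drop interior
        }
    }
    // Usage: reducePoints(pts, 0, pts.size() - 1, tol, out); then append pts.back().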
I don't think you'll need a genetic algorithm or anything ridiculous; good old-fashioned recursion and dynamic programming should suffice. My initial thought is that you should be able to accomplish your goal with a breadth-first search: from your starting point, visit each neighbor whose recorded score is greater than the value of the path through the current cell (all cells start out at infinity, the cost of a black cell is infinity, and those are the paths you can prune off). Once at your destination, if it is reachable, you should be able to backtrack to find the path. It's greedy, but if your paths are as well behaved as these, it should be fine.
For paths with more gray and twists and turns, it might be a good idea to convert the raster image to a graph, with the edge weights being the grayscale values of the neighbors (or the difference in grayscale values, depending on what this data actually means). Then you can use any shortest-path algorithm based on that interpretation.
If you are doing this at a big scale or for research, you might try ant colony optimization (http://en.wikipedia.org/wiki/Ant_colony_optimization), but if you are doing this for money, just pick up something like flood fill (http://en.wikipedia.org/wiki/Flood_fill).
