How to Represent Maze of Pixels as Nodes - algorithm

How would you go about simplifying a maze that is represented in terms of pixels into nodes based on the same color? I'm writing a program that solves a maze (represented as an image) using the A* algorithm. The walls are represented as black pixels and the rest as white pixels. However, I'm worried that the space complexity will be very high for large mazes. Therefore I'm trying to come up with a way to group pixels of the same color together in order to create a graph of nodes that represents the entire matrix. That way, when I run A*, I can go node by node instead of pixel by pixel, which is much simpler.
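As an illustration of the grouping the question describes, here is a minimal sketch (my own, with hypothetical names, assuming the maze is a NumPy boolean array with True for walkable white pixels). It keeps nodes only at junctions, dead ends, and corners, and collapses straight corridor pixels into edges weighted by corridor length, so A* visits far fewer nodes:

    import numpy as np

    def maze_to_graph(maze):
        """Collapse a pixel maze into a weighted graph.

        maze: 2D bool array, True = walkable (white) pixel. Nodes are kept
        only at junctions, dead ends, and corners; straight corridor pixels
        are absorbed into edges weighted by corridor length.
        """
        h, w = maze.shape

        def neighbors(r, c):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and maze[rr, cc]:
                    yield rr, cc

        def is_node(r, c):
            ns = list(neighbors(r, c))
            if len(ns) != 2:
                return True                    # junction or dead end
            (r1, c1), (r2, c2) = ns
            return not (r1 == r2 or c1 == c2)  # corner, not a straight run

        nodes = {(r, c) for r in range(h) for c in range(w)
                 if maze[r, c] and is_node(r, c)}

        # Walk out of every node along each corridor until the next node.
        graph = {n: [] for n in nodes}
        for r, c in nodes:
            for start in neighbors(r, c):
                prev, cur, dist = (r, c), start, 1
                while cur not in nodes:
                    nxt = [p for p in neighbors(*cur) if p != prev]
                    prev, cur = cur, nxt[0]    # corridor pixel: one way onward
                    dist += 1
                graph[(r, c)].append((cur, dist))
        return graph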

Related

Fill polygon with smaller shapes (circles)

I'm just going to try and explain my problem with images:
The program receives an input (image):
There is a base polygon, which can be simplified to a circle in all situations:
Output should be something like:
There is no correct result, just good and bad ones.
To make things easier, an estimate of how many circles there should be can be given based on the area and extent of the polygon.
What I am searching for is an algorithm that does something like what is described above: cover as much as possible with the given shape, while minimizing the area of black pixels and overlapping areas.
I used k-means clustering to find the circle centers. The number of clusters is calculated as:
numberOfClusters = round(polygonArea / basePolygonArea)
The input data for the k-means algorithm are the coordinates of the white pixels.
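A minimal sketch of that calculation, assuming scikit-learn's KMeans and a boolean mask of the polygon's white pixels (the function and parameter names are mine):

    import numpy as np
    from sklearn.cluster import KMeans

    def circle_centers(mask, base_circle_area):
        """mask: 2D bool array, True = white pixel inside the polygon.
        base_circle_area: area of the base circle, in pixels."""
        ys, xs = np.nonzero(mask)                  # white-pixel coordinates
        points = np.column_stack([xs, ys]).astype(float)
        # numberOfClusters = round(polygonArea / basePolygonArea)
        k = max(1, round(len(points) / base_circle_area))
        km = KMeans(n_clusters=k, n_init=10).fit(points)
        return km.cluster_centers_                 # one circle center per cluster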

Filling a polygon with rectangles

I have a fairly smooth polygon, say an ellipse with bulges and dents, converted to a polygon of straight line segments. I wish to fill this polygon with as few rectangles as possible, but with enough of them to maintain accuracy in the small corners of the polygon. The rectangles may be of any size and in any number.
The reason for this is doing a hit test on a web page on the polygon. The only practical way is to fill it with divs and do hit tests on all the divs.
Of course there will be a minimum size for any rectangle, lest we do more than just approximate the polygon and end up recreating it with pixel-sized rectangles.
In the general case, if you want to exactly represent a digital shape with rectangles, you will need at least as many rectangles as there are pixels on the outline forming corners. If you think of a digital straight edge at 45°, that means one rectangle per pixel. This is a serious limitation. (And don't even think of non-digital shapes.)
This said, if you accept approximating the shape with a certain error, I suggest that you first shrink the shape by a constant factor of your choosing: overlay a grid on the shape and decide whether each tile belongs to the shape or not. Doing this, you turn your shape into a binary image with "big pixels", and the challenge is now to decompose this image into rectangles (exactly, this time).
I suggest a simple greedy strategy: find a large rectangle that fits entirely inside the shape, then repeat on the parts that remain.
If you apply a morphological erosion operation with a larger and larger rectangular structuring element, you will find the largest rectangle that fits in the shape image. In theory, you should try all combinations of width and height and keep the rectangle with the largest area or perimeter; this is a large amount of work. I would recommend trying growing squares first, and once you have found the largest square, continuing to grow it in whichever direction still fits.
After you have found a large rectangle, erase it from the shape image and start again, until the shape image is completely erased.
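A sketch of this greedy loop, assuming SciPy's binary_erosion and restricting the search to odd-sized squares (it skips the final directional growth suggested above; names are mine):

    import numpy as np
    from scipy.ndimage import binary_erosion

    def greedy_square_cover(shape_img):
        """shape_img: 2D bool array, the coarsened binary shape image.
        Returns (row, col, size) squares, largest first."""
        img = shape_img.copy()
        squares = []
        while img.any():
            # Erode with growing square structuring elements; survivors of
            # the last non-empty erosion are centers of largest squares.
            k, survivors = 0, img
            while True:
                side = 2 * (k + 1) + 1
                eroded = binary_erosion(img, np.ones((side, side)))
                if not eroded.any():
                    break
                k, survivors = k + 1, eroded
            size = 2 * k + 1
            r, c = np.argwhere(survivors)[0]
            img[r - k:r + k + 1, c - k:c + k + 1] = False   # erase the square
            squares.append((r - k, c - k, size))
        return squares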

Graph search algorithm without starting point

I have an edge map of a scene and would like to extract the edge which best separates the sky and terrain. This seems to be well framed as a graph traversal problem. However, popular search algorithms such as A* are reliant upon the use of a starting and ending point (other than the first and last column respectively). Are there any algorithms for graph search which do not require these parameters? I would also like to maximize some global features of the extracted edge such as smoothness.
Note: speed is a significant issue, this needs to be done in real time.
Computer vision researchers have attacked this type of problem with minimum cuts. Wikipedia has a whole article about graph cuts in computer vision. I'll sketch here the algorithm proposed by Greig, Porteous, and Seheult, who were the first to make this connection.
Suppose that we have a function from pixel colors to log likelihoods of how likely that pixel is to be sky versus terrain. Prepare a graph with a source vertex, a sink vertex, and a vertex for each pixel. Connect the source to each pixel with capacity equal to the log likelihood of that pixel being sky. Connect each pixel to the sink with capacity equal to the log likelihood of that pixel being terrain. For each pair of adjacent pixels, connect them with capacity equal to the log likelihood of them having different classifications. Compute a minimum cut. All of the vertices on the source side of the cut are classified as sky, and all of the vertices on the sink side of the cut are classified as terrain.
Alternatively, if the terrain is known to be at the bottom of the image and the sky is known to be at the top, connect the source instead to each of the top pixels and connect the bottom pixels to the sink, with infinite capacity. Then we can dispense with the log likelihoods for classifying pixels based on color, leaving the edge capacities to vary with the similarity of adjacent pixel colors.
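Here is a sketch of the first construction using networkx's max-flow/min-cut, with two simplifying assumptions of mine: the per-pixel scores are already shifted to be non-negative (capacities must be), and a single constant stands in for the pairwise log likelihood of differing classifications:

    import networkx as nx

    def segment_sky_terrain(scores, smoothness):
        """scores[r][c]: (sky, terrain) pair of non-negative weights.
        smoothness: penalty for adjacent pixels with different labels.
        Returns the set of pixel coordinates classified as sky."""
        h, w = len(scores), len(scores[0])
        G = nx.DiGraph()
        for r in range(h):
            for c in range(w):
                sky, terrain = scores[r][c]
                G.add_edge("source", (r, c), capacity=sky)
                G.add_edge((r, c), "sink", capacity=terrain)
                for rr, cc in ((r + 1, c), (r, c + 1)):   # 4-neighborhood
                    if rr < h and cc < w:
                        G.add_edge((r, c), (rr, cc), capacity=smoothness)
                        G.add_edge((rr, cc), (r, c), capacity=smoothness)
        _, (source_side, _) = nx.minimum_cut(G, "source", "sink")
        return source_side - {"source"}               # sky pixels

Note that networkx is used here only for illustration; the real-time requirement in the question would call for a dedicated max-flow implementation.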

Recreate image using only overlapping squares

I'm trying to take a source image, and recreate it on a transparent canvas using only overlapping mono-colored squares. The goal is to use as few squares as possible.
In other words, I'm taking a blank transparent image, and drawing squares of various colors until I recreate the source image, with the goal being to use as few squares as possible.
For example:
Here is a source image. It has two colors: red and green. I want to use only squares, which may overlap, to recreate the source image.
The ideal solution would be a large red square, and then two green squares drawn on top - that is what I want my algorithm to find, with any source image - the position, size, color and order of each square.
My target image that I intend to process is this:
(8x enlargement)
It has 1411 non-transparent pixels (worst case), and with a brute force solution that does not use overlapping squares, I've recreated the image using 1246 squares.
My current solution is a brute force method along the lines of:
Create a list of all colors used in the source image. Each item is a "layer". A layer has a color and a 2D array representing pixels. The order is important, but I don't know what order the layers need to be in, so it's arbitrary initially.
For each layer in the list, initialize the 2D array. Each element corresponds to a pixel in the source image. Pixels that are the same color as the layer's chosen color are marked as '1'. Pixels that belong to a layer ABOVE the current layer are marked as "don't care". All other pixels are marked as '0' (see the sketch after these steps).
Use some algorithm to process each layer, using the smallest number of squares to reach every pixel marked '1', without touching any pixels marked '0'.
Rearrange the order of layers and go back to Step 2. Do this for every possible ordering of layers, then check to see which ordering uses the least number of squares in total.
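A sketch of the masks from Steps 1 and 2, assuming the image is a 2D array of color indices and that the order lists layer colors bottom-to-top (the constant and function names are mine):

    import numpy as np

    ONE, ZERO, DONT_CARE = 1, 0, 2      # the per-pixel marks from Step 2

    def layer_masks(img, order):
        """img: 2D array of color indices. order: layer colors, bottom to
        top. Returns (color, mask) pairs in the same bottom-to-top order."""
        masks = []
        above = np.zeros(img.shape, dtype=bool)   # pixels owned by higher layers
        for color in reversed(order):             # walk from the top layer down
            mask = np.where(above, DONT_CARE,
                            np.where(img == color, ONE, ZERO))
            masks.append((color, mask))
            above |= img == color
        return masks[::-1]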
Perhaps someone has a better suggestion in a response; but brute-force testing of every permutation is not viable, because my target image has 31 colors (resulting in 31! permutations).
As for why I'm doing this? I'm trying to create an image in a game (Starbound), where I can only use squares. The lazy solution is to use a square for each pixel, but that's just too many squares.
Just a suggestion for a possible solution. I haven't tried it.
It's a greedy approach.
For every pixel, compute the largest uniform square that contains it.
Then choose the largest of all squares and mark all pixels it covers as "covered".
Then among all unmarked pixels, choose the largest covering square, and so on until no unmarked pixel remains.
Ties do not matter, just take any largest square and mark its pixels.
UPDATE: overlaps offer opportunities for reduction in the number of squares.
Consider all possible permutations of the filling order of the shapes. The shapes drawn first, on the bottom layers, can be (partly) hidden by some others. Process the shapes starting from the top layer. When you process a shape to associate every pixel with the largest uniform square that contains it, treat all covered pixels as don't care.
In the given example, fill the green squares first; then when filling the red square, the green pixels can be considered red or not, depending on convenience.
If you cannot try all permutations, then try them at random. Heuristic approaches such as genetic algorithms or simulated annealing could help. Nothing straightforward here.
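For what it's worth, here is a sketch of this greedy approach with the "don't care" overlap trick, under one simplification of mine: squares are anchored at their top-left corner rather than merely containing the seed pixel. The image is assumed to be a 2D array of color indices:

    import numpy as np

    def greedy_square_fill(img):
        """Repeatedly pick the largest square that is uniform over its
        uncovered pixels; covered pixels are "don't care", because an
        earlier pick will be drawn on top of them. Returns
        (row, col, size, color) in drawing order, bottom square first."""
        h, w = img.shape
        covered = np.zeros((h, w), dtype=bool)
        picks = []
        while not covered.all():
            best = None                            # (size, row, col, color)
            for r in range(h):
                for c in range(w):
                    if covered[r, c]:
                        continue
                    color, size = img[r, c], 1
                    # Grow while every uncovered pixel in the square matches.
                    while r + size < h and c + size < w:
                        block = img[r:r + size + 1, c:c + size + 1]
                        free = ~covered[r:r + size + 1, c:c + size + 1]
                        if np.all(block[free] == color):
                            size += 1
                        else:
                            break
                    if best is None or size > best[0]:
                        best = (size, r, c, color)
            size, r, c, color = best
            picks.append((r, c, size, color))
            covered[r:r + size, c:c + size] = True
        return picks[::-1]        # first picked = top layer = drawn last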
It would be hard to guarantee an optimal solution. The brute force search would be huge. This calls for a heuristic.
Start at the edges: walking the outside edge, find the most frequent color, and draw squares to fill the background.
Then iterate, working inwards, drawing smaller and smaller squares that cover the most wrongly colored pixels, ending with single-pixel squares.
Working inwards means reducing the size of the bounding box outside of which all pixels are the correct color. At each step, the upper limit on the size of a square is that it fits within the bounding box. Choose the squares which give the best score.
The score is based on the old vs. new color being wrong or right, so there are four possible cases for each pixel. One example per-pixel score function would be:
wrong -> wrong: 0
wrong -> right: 1
right -> right: 1
right -> wrong: -2
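For concreteness, the table as a function (the names are mine; was_right and now_right say whether the pixel matches the target color before and after drawing the candidate square):

    def pixel_score(was_right, now_right):
        if was_right and not now_right:
            return -2        # right -> wrong
        if now_right:
            return 1         # wrong -> right, or right -> right
        return 0             # wrong -> wrong

A candidate square's score is then the sum of pixel_score over all the pixels it covers.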
I think that if you always reduce the number of wrong pixels on the edge of the bounding box and never increase the size of the square, then the algorithm must halt with a solution without needing to backtrack. A backtracking solution could probably do better.
An "erosion-based" heuristic.
Consider all outline pixels, i.e. those having at least one neighbor outside the shape.
Among these pixels, choose a color (the most frequent one?).
For all outline pixels of this color, compute the largest square that does not exceed the shape.
Fill these squares, from larger to smaller, until the complete outline is covered.
Remove the correctly filled pixels and restart the procedure on the eroded shape.
In the case of the red square, all outline pixels will be covered by the red square itself, and the first filling will "consume" the whole area.
Then, removing the pixels covered in red, the two green squares will remain.
All green outline pixels will now be covered by the two green squares, and the first two fillings will "consume" the whole green area.
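A sketch of this heuristic, under simplifying assumptions of mine (squares anchored at their top-left corner; img a 2D array of color indices, with a sentinel value for pixels outside the shape):

    import numpy as np
    from collections import Counter

    def erosion_heuristic(img, outside):
        """Returns (row, col, size, color) squares in drawing order,
        bottom layer first."""
        h, w = img.shape
        shape = img != outside
        squares = []
        while shape.any():
            # Outline: shape pixels with a 4-neighbor outside the shape.
            padded = np.pad(shape, 1, constant_values=False)
            interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                        & padded[1:-1, :-2] & padded[1:-1, 2:])
            rs, cs = np.nonzero(shape & ~interior)
            # Most frequent color on the outline.
            color = Counter(img[r, c] for r, c in zip(rs, cs)).most_common(1)[0][0]
            todo = [(r, c) for r, c in zip(rs, cs) if img[r, c] == color]

            def grow(r, c):
                # Largest in-shape square with top-left corner at (r, c).
                s = 1
                while (r + s < h and c + s < w
                       and shape[r:r + s + 1, c:c + s + 1].all()):
                    s += 1
                return s

            todo.sort(key=lambda p: -grow(*p))     # fill larger to smaller
            covered = np.zeros_like(shape)
            for r, c in todo:
                if covered[r, c]:
                    continue
                s = grow(r, c)
                squares.append((r, c, s, color))
                covered[r:r + s, c:c + s] = True
            # Erode: remove the correctly filled pixels and restart.
            shape &= ~(covered & (img == color))
        return squares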

Algorithm that transforms flat object shape in bitmap to collection of Vectors in 2D Coord System

I am currently trying to figure out an algorithm that would transform a bitmap like this:
into a collection of vectors in a two-dimensional coordinate system. Unfortunately, I have figured out nothing. Has anyone heard of an algorithm that solves this problem?
This is by no means the "best" method but I tried this a while back and it worked fairly well. The only thing I would request is that the shapes be filled in.
What I did was to treat the image as a density field and apply the marching squares algorithm to it. This of course generated far too many vertices (even when not sampling at native resolution), so I did some very primitive decimation: removing vertices where the adjacent edges are nearly straight (by removing I mean replacing a vertex and its two edges with a single edge). After iterating the decimation a few times I had a low-vertex vector representation.
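A sketch of a similar pipeline using scikit-image, where find_contours provides the marching squares step and approximate_polygon (Douglas-Peucker) stands in for the hand-rolled nearly-straight-edge decimation described above:

    from skimage import measure

    def bitmap_to_polygons(mask, tolerance=1.5):
        """mask: 2D NumPy array with foreground values > 0.5. Returns one
        decimated (N, 2) vertex array per contour."""
        contours = measure.find_contours(mask.astype(float), 0.5)
        return [measure.approximate_polygon(contour, tolerance)
                for contour in contours]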
Improvements might involve turning the input into a signed distance field to improve marching squares, or sampling along the square edges to find intersections with the original image (a jump from black to white is an intersection).
For a real algorithm you'd want to search for "vectorizing".
