I have an elevation map represented by a 2D array of floats.
There are regions of this map whose edges I have collected into a single vector of edge cells (identified by their x and y coordinates).
The edge cells are not aware of which region they are associated with, nor are edge cells which are contiguous within the vector necessarily adjacent to each other in the map.
I would like to be able to uniquely identify each region based on this information (the list of edge cells for the whole map, which, again, may not be adjacent).
I have thought about trying to start at one edge cell and traverse the edge, but then the enclosed space may contain regions which should be excluded (a lake around an island which itself contains a lake). I've considered using some kind of bucket fill, but this would disrupt the valuable elevation data and I don't want to create a second array to store the information.
Any thoughts on an efficient way to go about it?
Richard,
This is a classic connected-components labeling problem, isn't it?
There are several solutions when you are allowed to store a 'state' map, i.e. an auxiliary image where pixels can be assigned discrete values. Among these methods, you can paint the edge pixels, then flood fill the enclosed regions. In this case, a single bit per pixel is enough.
If you don't want to spend extra storage on this bit, you can probably "steal" it from the floating-point values. For instance, if all elevations are positive, you can embezzle the sign bit for this purpose (and reset it afterwards); this is easily done in C by mapping a bitfield over the float.
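A minimal sketch of that trick, assuming IEEE-754 floats and non-negative elevations (memcpy is used instead of a bitfield to stay within defined behaviour):

#include <cstdint>
#include <cstring>

// The sign bit serves as a temporary 1-bit "painted/visited" flag.
inline void setFlag(float& v) {
    std::uint32_t bits;
    std::memcpy(&bits, &v, sizeof bits);
    bits |= 0x80000000u;                 // set the sign bit
    std::memcpy(&v, &bits, sizeof bits);
}

inline void clearFlag(float& v) {
    std::uint32_t bits;
    std::memcpy(&bits, &v, sizeof bits);
    bits &= 0x7FFFFFFFu;                 // clear the sign bit
    std::memcpy(&v, &bits, sizeof bits);
}

inline bool hasFlag(float v) {
    std::uint32_t bits;
    std::memcpy(&bits, &v, sizeof bits);
    return (bits & 0x80000000u) != 0;
}

Paint the edge cells with setFlag, flood fill each region using hasFlag as the "visited" test, then sweep the map once with clearFlag when you are done.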
I'm developing a robot simulation using ROS and C++.
I have created a map, which is a list of free positions in a closed room like this one:
0.1,0;0.2,0;0.3,0;...
They are (x,y) locations separated by ;. All of the locations in that list are the free locations in the map. If there is an obstacle at a location, that location won't be in the list.
How can I use that list as a map for A* search algorithm?
I've thought about converting that list into a 2D matrix, but I don't know what to put inside the matrix's cells.
Indeed, it sounds like you can simply convert the list into a 2D matrix by parsing the text file (set x to the character sequence before the comma, skip the comma, set y to the character sequence before the semicolon, skip the semicolon, convert x/y to numbers, and update the matrix accordingly using x/y as indices, etc.).
As for the matrix itself, consider simply a bird's-eye-view of the room, where a '0' in a cell represents a free space, while a '1' represents an obstacle (or the other way around; the values are arbitrary, as long as you use them consistently). And for the A* algorithm, any cell is essentially "adjacent" to the 4 neighboring cells (or whatever movement scheme you are using).
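A rough parsing sketch along those lines, assuming the coordinates lie on a regular grid whose resolution you know (the 0.1 here is just a placeholder):

#include <cmath>
#include <sstream>
#include <string>
#include <vector>

// Builds a width x height grid where 0 = free and 1 = obstacle.
std::vector<std::vector<int>> parseFreeList(const std::string& data,
                                            int width, int height,
                                            double resolution = 0.1) {
    // Start with every cell marked as an obstacle, then clear the free ones.
    std::vector<std::vector<int>> grid(height, std::vector<int>(width, 1));
    std::stringstream ss(data);
    std::string pair;
    while (std::getline(ss, pair, ';')) {            // e.g. "0.1,0"
        std::stringstream ps(pair);
        std::string xs, ys;
        std::getline(ps, xs, ',');
        std::getline(ps, ys);
        int ix = static_cast<int>(std::lround(std::stod(xs) / resolution));
        int iy = static_cast<int>(std::lround(std::stod(ys) / resolution));
        if (ix >= 0 && ix < width && iy >= 0 && iy < height)
            grid[iy][ix] = 0;                        // free cell
    }
    return grid;
}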
You will have to convert the data to a graph (as in nodes and edges, not as in function plot). In order to do that, you not only need the positions, which translate to nodes (a.k.a. vertices) but the edges. There is an edge between two nodes when you can move from one node directly to the other. In other words, there is no node in between and also no obstacle preventing the movement. Once you have that part down, you can run the A* algorithm on the resulting graph easily.
Steps:
Read and parse input data.
Store data as list of nodes.
Define the requirements for an edge to exist between two nodes.
Create a list of edges with previously defined condition and the nodes.
Run A* on your graph.
Notes:
I won't do the actual work for you because I guess it's just homework; hopefully nobody here will.
You can solve each step on its own, or even skip it and replace its results with hard-coded values.
You can also skip generating the edges altogether. You just need to adjust the A* algorithm to generate the edges on demand on every node it visits. This may or may not be simpler.
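For that last note, a sketch of generating neighbours on demand, assuming a simple 0/1 occupancy grid (the Grid type and isFree helper are placeholders for your own map representation):

#include <utility>
#include <vector>

struct Grid {
    std::vector<std::vector<int>> cells;   // 0 = free, 1 = obstacle
    bool isFree(int x, int y) const {
        return y >= 0 && y < static_cast<int>(cells.size()) &&
               x >= 0 && x < static_cast<int>(cells[y].size()) &&
               cells[y][x] == 0;
    }
};

// Returns the 4-connected neighbours A* may expand from (x, y);
// this replaces a precomputed edge list.
std::vector<std::pair<int, int>> neighbours(const Grid& g, int x, int y) {
    static const int dx[4] = { 1, -1, 0, 0 };
    static const int dy[4] = { 0, 0, 1, -1 };
    std::vector<std::pair<int, int>> out;
    for (int i = 0; i < 4; ++i)
        if (g.isFree(x + dx[i], y + dy[i]))
            out.emplace_back(x + dx[i], y + dy[i]);
    return out;
}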
I'm using a k-d tree for spatial partitioning in a ray tracer. I want to combine nearby primitives into fixed-size groups so the data in each group can be deinterleaved and processed simultaneously using SIMD instructions. What is a good, fast algorithm to find the (approximately) smallest fixed-size groups?
Ideally it would augment the k-d tree building algorithm instead of adding a separate pass, but this is complicated by the fact that the primitives are normally so close together that most primitives will belong to more than one leaf node and I can't have groups with duplicate items because floating point precision errors would mess up shadows and reflections.
I figure I'm far from the first person to try this, so a solution probably already exists, but the most relevant results I found when searching for object-grouping algorithms deal with point data and variable-size groups.
One popular way to group objects is to pick a random object, then add the "closest" k objects to a group with it. "Closest" is usually defined as giving the smallest surface area for the combined bounding box. You can also use the Hilbert curve distance: either the distance along the 3D curve between the objects' centers, or in 6D space where the bounding boxes become points:
(x_min, y_min, z_min, x_max, y_max, z_max)
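A sketch of that greedy grouping, using the combined-bounding-box surface area as the "closest" measure (the Aabb and Prim types are hypothetical stand-ins for your own structures):

#include <algorithm>
#include <limits>
#include <vector>

struct Aabb {
    float min[3], max[3];
    void expand(const Aabb& o) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], o.min[i]);
            max[i] = std::max(max[i], o.max[i]);
        }
    }
    float surfaceArea() const {
        float dx = max[0] - min[0], dy = max[1] - min[1], dz = max[2] - min[2];
        return 2.0f * (dx * dy + dy * dz + dz * dx);
    }
};

struct Prim { Aabb box; /* geometry lives elsewhere */ };

// Greedily collects groupSize primitives starting from seed; used marks
// primitives already assigned to a previous group.
std::vector<int> buildGroup(const std::vector<Prim>& prims,
                            std::vector<bool>& used,
                            int seed, int groupSize) {
    std::vector<int> group{ seed };
    used[seed] = true;
    Aabb box = prims[seed].box;
    while (static_cast<int>(group.size()) < groupSize) {
        int best = -1;
        float bestArea = std::numeric_limits<float>::max();
        for (int i = 0; i < static_cast<int>(prims.size()); ++i) {
            if (used[i]) continue;
            Aabb candidate = box;
            candidate.expand(prims[i].box);
            float area = candidate.surfaceArea();
            if (area < bestArea) { bestArea = area; best = i; }
        }
        if (best < 0) break;            // ran out of primitives
        used[best] = true;
        group.push_back(best);
        box.expand(prims[best].box);
    }
    return group;
}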
I have written a win32 api-based GUI app which uses GDI+ features such as DrawCurve() and DrawLine().
This app draws lines and curves that represent a multigraph.
The data structure for the edge is simply a struct of five int's. (x1, y1, x2, y2, and id)
If there is only one edge between two vertices, a straight line segment is drawn using DrawLine().
If there is more than one edge, curves are drawn using DrawCurve() -- here, I spread the straight-line edges about the midpoint of the two vertices, turning them into curves. A control point some number of pixels away from the midpoint is calculated using the normal line equation. If more edges are added, a point two units from the midpoint is selected, then three units the next time, and so on.
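(For reference, a sketch of how such an offset control point can be computed; this helper is illustrative, not the app's actual code.)

#include <cmath>
#include <windows.h>
#include <gdiplus.h>

// The k-th parallel edge between (x1,y1) and (x2,y2) gets a control point
// k * spacing pixels away from the segment midpoint, along its unit normal.
Gdiplus::PointF CurveControlPoint(float x1, float y1, float x2, float y2,
                                  int k, float spacing) {
    float mx = (x1 + x2) * 0.5f, my = (y1 + y2) * 0.5f;
    float dx = x2 - x1, dy = y2 - y1;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) return Gdiplus::PointF(mx, my);   // degenerate edge
    float nx = -dy / len, ny = dx / len;               // unit normal
    return Gdiplus::PointF(mx + nx * k * spacing, my + ny * k * spacing);
}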
Now I have two questions on detecting the click on edges.
What should I do to minimize the search time when finding straight-line edges?
It's quite simple to check whether the clicked pixel is on a line segment, but comparing against all edges would be inefficient if the number of edges is large. It seems possible to do it in O(log n), where n is the number of edges.
EDIT: at this point the edges (class Edge) are stored in a std::map that maps edge ids (int) to Edge objects, and I'm considering declaring another container that maps pixels to edge ids.
I'm considering using a binary search tree, but what should the key be? Or should I just use a 2D pixel array?
Can I get the array of points used by DrawCurve()? If this is impossible, then I should re-calculate the cardinal spline, get the array of points, and check if the point clicked by the user matches any point in that array.
If you have complex shaped lines you can do as follows:
Create an internal bitmap the size of your graph and fill it with black.
When you render your graph, also render the edges you want to be clickable to this bitmap, but render them with different colors. Store these color values in a table together with the corresponding ID. The important thing here is that the colors are unique.
When the graph is clicked, transfer the X and Y co-ordinates to your internal bitmap and read the pixel. If non-black, look up the color value in your table and get the associated ID.
This way you don't need to worry about the shape at all, nor do you need your own curve algorithm and so forth. The cost is extra memory; this will be a consideration, but unless it is a huge graph (in which case you can buffer the drawing) it is in most cases not an issue. You can render the internal bitmap in a second pass so the main graphics appear faster (as usual).
Hope this helps!
(tip: you can render the "internal" lines with a wider Pen so the hit area is more forgiving).
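A sketch of this scheme with GDI+, assuming GDI+ is already initialised, the off-screen bitmap was created with a 32-bit pixel format and cleared to black, and edge ids are positive (the id-to-colour encoding below is just one possible mapping):

#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

// Encode an edge id (> 0) as an opaque, non-black colour and back.
static Color IdToColor(int id)  { return Color(255, id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF); }
static int   ColorToId(Color c) { return c.GetR() | (c.GetG() << 8) | (c.GetB() << 16); }

// Render one edge into the hit-test bitmap (wider pen = more forgiving clicks).
void RenderHitEdge(Bitmap& hitMap, int id, Point a, Point b) {
    Graphics g(&hitMap);
    g.SetSmoothingMode(SmoothingModeNone);  // no anti-aliasing: colours must stay exact
    Pen pen(IdToColor(id), 5.0f);
    g.DrawLine(&pen, a, b);                 // use DrawCurve here for curved edges
}

// Returns the edge id under (x, y), or 0 if the click hit the background.
int HitTest(Bitmap& hitMap, int x, int y) {
    Color c;
    hitMap.GetPixel(x, y, &c);
    if (c.GetValue() == Color::Black) return 0;
    return ColorToId(c);
}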
I'm working on a little project and thinking about how to make maps efficient. I have a grid of numbers, say:
100110
011011
010110
If you've ever played Dungeon Keeper, the idea is that a 0 is a flat, dug-out square, and a 1 is a still-standing square.
I want to take advantage of the grid layout to minimise the number of vertices used. So instead of using individual cubes for an area like:
1111
1111
1111
I want to just use 8.
Any idea on the best approach to this, or even just the name of the type of algorithm I should use? Something that can do it quickly on the fly would be preferable, so as not to bottleneck rendering.
I agree that this is probably not going to be a performance issue, but you could represent your map in compressed form by using a (slightly modified) unbalanced quadtree.
Start with your map consisting only of 1's. You can store this as a box of size n*n in the root node of your tree.
If you want to dig out one of the boxes, you recursively walk down the tree, splitting the n*n box (or whatever you find there) using the default quadtree rules (= split an n*n box into four n/2*n/2 boxes, etc.). At some point you'll arrive at a leaf of the tree that contains only the single box you want to dig out, and you may change it from 1 to 0.
Additionally, after the leaf has changed and your recursive calls return (= you walk back up the tree towards the root node), you can check neighboring boxes for whether they may be merged. (If you have two neighboring boxes that are both dug out, you can merge them).
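A compact sketch of such a quadtree with the split-on-dig and merge-on-return behaviour (names and layout are only illustrative):

#include <array>
#include <memory>

struct QuadNode {
    int value = 1;                                   // 1 = solid, 0 = dug out (valid for leaves)
    std::array<std::unique_ptr<QuadNode>, 4> child;  // empty when this node is a leaf
    bool isLeaf() const { return !child[0]; }
};

// Digs out cell (x, y) inside a node covering a size*size block.
void dig(QuadNode& node, int size, int x, int y) {
    if (node.isLeaf()) {
        if (node.value == 0 || size == 1) { node.value = 0; return; }
        for (auto& c : node.child) {                 // split the uniform block
            c = std::make_unique<QuadNode>();
            c->value = node.value;
        }
    }
    int half = size / 2;
    int idx = (x >= half ? 1 : 0) + (y >= half ? 2 : 0);
    dig(*node.child[idx], half, x % half, y % half);

    // Merge on the way back up if all four children are now identical leaves.
    bool uniform = true;
    for (auto& c : node.child)
        uniform = uniform && c->isLeaf() && c->value == node.child[0]->value;
    if (uniform) {
        node.value = node.child[0]->value;
        for (auto& c : node.child) c.reset();
    }
}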
Another technique that is sometimes used when indexing low-dimensional data like this is a space filling curve. One that has good average locality and is reversible is the Hilbert curve. Basically, you may enumerate your boxes (dug out ones and filled ones) along the space filling curve and then use simple run-length compression.
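For the space-filling-curve variant, a sketch of the standard Hilbert-curve index computation: sort the cells of an n x n grid (n a power of two) by this index and run-length encode the 0/1 values in that order.

// Maps cell (x, y) to its distance d along the Hilbert curve over an n x n grid.
unsigned xy2d(unsigned n, unsigned x, unsigned y) {
    unsigned d = 0;
    for (unsigned s = n / 2; s > 0; s /= 2) {
        unsigned rx = (x & s) > 0;
        unsigned ry = (y & s) > 0;
        d += s * s * ((3 * rx) ^ ry);
        // Rotate the quadrant so the curve keeps its locality.
        if (ry == 0) {
            if (rx == 1) {
                x = s - 1 - x;
                y = s - 1 - y;
            }
            unsigned t = x; x = y; y = t;
        }
    }
    return d;
}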
The tree idea allows you to reduce the amount of rendered geometry (you can rescale textures, etc., to emulate n*n boxes with a single larger box). The space-filling curve will probably only save you some memory.
We are doing a scheduler for heterogeneous computing.
The tasks can be identified by their deadline and their data rate and can be regarded as points on a two-dimensional graph. See image:
The rectangle identifies tasks to be scheduled on the GPU; tasks outside it are scheduled on the CPU.
The problem is that we want to efficiently identify the parameters for creating the best rectangle, i.e. the rectangle containing the most tasks. A function determining whether or not a dot can be added to the current rectangle can be assumed present.
There can be up to 20,000 tasks (dots), and the axes can be arbitrarily long.
Are there any known algorithms / data structures solving this problem?
With the given information, you could do the following:
Start by adding the dot which is closest to the center of gravity of all the dots.
If n dots are already added, select as the (n+1)-st dot the one closest to the current rectangle. Ask your given function whether this dot can be added.
If so, inflate the rectangle so it contains this dot. Assuming all dots have unique x and y coordinates, it is always possible to add just a single dot to the rectangle.
If not, terminate.
If this is not what you want, give more information.
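A minimal sketch of these steps, treating the feasibility check as a given callback (canAdd is a placeholder name) and using plain point-to-rectangle distance for "closest":

#include <algorithm>
#include <cmath>
#include <functional>
#include <limits>
#include <vector>

struct Task { double x, y; };                       // e.g. deadline and data rate
struct Rect { double xmin, ymin, xmax, ymax; };

static double distToRect(const Task& t, const Rect& r) {
    double dx = std::max({ r.xmin - t.x, 0.0, t.x - r.xmax });
    double dy = std::max({ r.ymin - t.y, 0.0, t.y - r.ymax });
    return std::hypot(dx, dy);
}

Rect growRectangle(const std::vector<Task>& tasks,
                   const std::function<bool(const Task&)>& canAdd) {
    if (tasks.empty()) return Rect{};

    // Seed with the task closest to the centre of gravity.
    double cx = 0, cy = 0;
    for (const auto& t : tasks) { cx += t.x; cy += t.y; }
    cx /= tasks.size(); cy /= tasks.size();

    std::vector<bool> used(tasks.size(), false);
    std::size_t seed = 0;
    double best = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < tasks.size(); ++i) {
        double d = std::hypot(tasks[i].x - cx, tasks[i].y - cy);
        if (d < best) { best = d; seed = i; }
    }
    used[seed] = true;
    Rect r{ tasks[seed].x, tasks[seed].y, tasks[seed].x, tasks[seed].y };

    for (;;) {
        // Pick the unused task closest to the current rectangle.
        std::size_t next = tasks.size();
        best = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < tasks.size(); ++i) {
            if (used[i]) continue;
            double d = distToRect(tasks[i], r);
            if (d < best) { best = d; next = i; }
        }
        if (next == tasks.size() || !canAdd(tasks[next])) break;   // terminate
        used[next] = true;                                         // inflate
        r.xmin = std::min(r.xmin, tasks[next].x);
        r.ymin = std::min(r.ymin, tasks[next].y);
        r.xmax = std::max(r.xmax, tasks[next].x);
        r.ymax = std::max(r.ymax, tasks[next].y);
    }
    return r;
}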
If you mean a hierarchical cluster, you can use a spatial index or a space-filling curve to subdivide the 2D plane into quadrants. A quadrant can represent a thread or a core. Then you need to sort the dots with this function and check which quadrant has the most dots.
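As a very rough sketch of the quadrant idea, you can bucket the dots into a coarse grid and pick the densest cell (the 8x8 resolution and non-negative coordinates are arbitrary assumptions; a spatial index or Hilbert ordering would refine this):

#include <algorithm>
#include <utility>
#include <vector>

struct Dot { double x, y; };

// Returns the (column, row) of the grid cell containing the most dots.
std::pair<int, int> densestQuadrant(const std::vector<Dot>& dots,
                                    double width, double height,
                                    int cells = 8) {
    std::vector<int> count(cells * cells, 0);
    for (const auto& d : dots) {
        int cx = std::min(cells - 1, static_cast<int>(d.x / width  * cells));
        int cy = std::min(cells - 1, static_cast<int>(d.y / height * cells));
        ++count[cy * cells + cx];
    }
    int best = static_cast<int>(
        std::max_element(count.begin(), count.end()) - count.begin());
    return { best % cells, best / cells };
}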