For a given topographic map, there is a feature of interest (such as a river). There is a corresponding file arranged in rows and columns, where each cell maps 1-1 to the corresponding pixel in the map and contains the distance from that pixel to the feature.
For the purposes of triangulation, what is the best way to place x, y points over this map so that the points are closely packed where the distance is below some threshold, and spread farther and farther apart, linearly with the distance, up to some threshold distance?
Circle packing seems like the best option at this point, but I can't find compelling documentation on how this might be implemented for this use-case.
A decent example would be an image where circles are packed approximately according to intensity, and points can then be placed at the centers of the circles.
A simple way is to place sites randomly, then pick the greyscale value under each site and feed it to a weighted triangulation in which the distance function is the Euclidean distance minus the weight. From the result, take the center of gravity of each triangle, make it the new site, and repeat for x iterations.
Source: https://en.m.wikipedia.org/wiki/Stippling
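For illustration, here is a rough, hedged sketch of that relaxation in Python. It replaces the triangulation with a brute-force assignment of pixels to sites under the additively weighted distance (Euclidean distance minus weight), then moves each site to the center of gravity of its pixels. The name weight_map and all parameters are illustrative, and the O(pixels x sites) assignment is only practical for small rasters.

    import numpy as np

    def relax_sites(weight_map, n_sites=400, n_iters=10, seed=None):
        # weight_map: 2D array, one value per map pixel (illustrative name)
        rng = np.random.default_rng(seed)
        h, w = weight_map.shape
        sites = rng.uniform((0, 0), (h, w), size=(n_sites, 2))
        ys, xs = np.mgrid[0:h, 0:w]
        pixels = np.column_stack((ys.ravel(), xs.ravel())).astype(float)
        for _ in range(n_iters):
            # sample each site's weight from the map at its current position
            iy = np.clip(sites[:, 0].astype(int), 0, h - 1)
            ix = np.clip(sites[:, 1].astype(int), 0, w - 1)
            wgt = weight_map[iy, ix]
            # additively weighted distance: ||pixel - site|| - weight(site)
            d = np.linalg.norm(pixels[:, None, :] - sites[None, :, :], axis=2)
            owner = (d - wgt[None, :]).argmin(axis=1)
            # each site moves to the center of gravity of the pixels it owns
            for i in range(n_sites):
                cell = pixels[owner == i]
                if len(cell):
                    sites[i] = cell.mean(axis=0)
        return sites

Presumably the weight would be derived from the distance raster (larger where the distance to the feature is small), clamped at the threshold distance.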
I'm struggling with a 3D problem for which I'm trying to find an efficient algorithm.
I have a bounding box with given width, height, and depth.
I also have a list of spheres. That is, a center coordinate (xi,yi,zi) and radius ri for each sphere.
The spheres are guaranteed to fit within the bounding box and not to overlap each other.
Now I have a new sphere with radius r, which I have to fit inside the bounding box, not overlapping any of the previous spheres.
I also have a target point T = (x,y,z) and my goal is to fit this new sphere (given the conditions above) as close as possible to this target point.
I'm trying to construct an efficient algorithm to find an optimal position for the new sphere. Optimal as in: as close to the target point as possible. Or a "false" result if there is no space to fit this new sphere between or around the existing ones anywhere within the bounding box.
I have thought of all sorts of complex approaches, such as building some sort of parametric description of the remaining volume, starting with the bounding box and subtracting the existing spheres one by one. But it doesn't seem to lead me towards a workable solution.
Note that there are a lot of known 'sphere packing' algorithms, but they tend to just fill volumes with random spheres, and they often use a trial-and-error approach, making a certain number of random attempts and then terminating.
Whereas I have a given specific new sphere size, and I need to fit that in (or find out that it's not possible).
A possible approach is by computing the "distance map" of the spheres, i.e. the function that returns for every point (x, y, z) the distance to the closest sphere, which is also the distance to the closest center minus the radius of the corresponding sphere. The map is made of the intersection of (hyper)conical surfaces.
Then you can explore the distance map around the target point and find the closest point with a value that exceeds the target radius.
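As a simple (if brute-force) illustration of that search, here is a sketch assuming the list of centers and radii and an axis-aligned bounding box; the grid step and all names are made up for the sketch:

    import numpy as np

    def place_sphere(centers, radii, box_min, box_max, target, r, step=0.25):
        # scan a regular grid of candidate centers, kept r away from the walls
        centers, radii = np.asarray(centers, float), np.asarray(radii, float)
        axes = [np.arange(lo + r, hi - r + 1e-9, step)
                for lo, hi in zip(box_min, box_max)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
        # distance map: distance to the closest sphere surface
        d = np.linalg.norm(grid[:, None, :] - centers[None, :, :], axis=2) - radii
        ok = grid[d.min(axis=1) >= r]          # candidates with enough clearance
        if len(ok) == 0:
            return None                        # the "false" result
        return ok[np.argmin(np.linalg.norm(ok - np.asarray(target), axis=1))]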
If I am right, the distance map is directly related to the additively weighted Voronoi diagram of the sphere centers (https://en.wikipedia.org/wiki/Weighted_Voronoi_diagram), and the vertices of the diagram correspond to local maxima. Hence the closest Voronoi vertex with a value that exceeds the target radius will give a solution.
Unfortunately, the construction of this diagram won't be a barrel of laughs. Check the article "Euclidean Voronoi diagram of 3D balls and its computation via tracing edges" and its bibliography.
A possibly workable way to estimate the distance map is to discretize space in a regular grid of cubes and, for every cube, obtain a lower and an upper bound of the distance function.
For a single given sphere and a given cube, it is possible to find the minimum and maximum value analytically. Then considering all spheres, you can find the smallest maximum and smallest minimum, which are an upper and lower bound of the true distance (the largest minimum won't do). Then you keep all the spheres such that the minimum remains below that upper bound and you get a (hopefully short) list of candidates.
Here you can check the distances to the spheres in the list: if the upper bound is smaller than the target radius, you can drop the cube; if the lower bound exceeds the target radius, every point of the cube is feasible and you have found a solution.
Otherwise, if the uncertainty range on the distance function is too large, subdivide the cube in smaller ones for a more accurate estimate of the upper and lower bounds.
To obtain a solution close to the target point, you will visit the cubes by increasing distance from the target (using nested digital spheres), until you find a match.
A key point in this process is to quickly find the spheres closest to a given cube, for the initial estimates. A data structure such as a kD-tree or similar might be helpful.
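For reference, the per-cube bounds are easy to get analytically: the minimum of the distance to one sphere over a cube is attained at the cube point closest to the center, and the maximum at the corner farthest from it. A hedged sketch (names illustrative):

    import numpy as np

    def cube_bounds(cube_lo, cube_hi, centers, radii):
        cube_lo, cube_hi = np.asarray(cube_lo, float), np.asarray(cube_hi, float)
        centers, radii = np.asarray(centers, float), np.asarray(radii, float)
        # per-sphere minimum: distance from the cube point closest to the center
        nearest = np.clip(centers, cube_lo, cube_hi)
        mins = np.linalg.norm(nearest - centers, axis=1) - radii
        # per-sphere maximum: distance from the corner farthest from the center
        farthest = np.where(centers < (cube_lo + cube_hi) / 2, cube_hi, cube_lo)
        maxs = np.linalg.norm(farthest - centers, axis=1) - radii
        # lower bound and "smallest maximum" upper bound of the distance map
        return mins.min(), maxs.min()

A cube is then dropped when its upper bound is below the target radius, accepted when its lower bound exceeds it, and subdivided otherwise.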
I'm trying to design a data structure to hold/express a piecewise circular trajectory in the Euclidean plane. The trajectory is constrained to be continuous and have finite curvature everywhere, and therefore the circular arcs meet tangentially.
Storing all the circle centers, radii, and touching points would allow for inspecting the geometry anywhere in O(1) but would require explicit enforcement of the continuity and curvature constraints due to data redundancy. In my view, this would make the code messy.
Storing only the circle touching points (which are waypoints along the curve) along with the curve's initial direction would be sufficient in principle, and avoid data redundancy, but then it would be necessary to do an O(n) calculation to inspect the geometry of arc n, since that arc depends on all the arcs preceding it in the trajectory.
I would like to avoid data redundancy, but I also don't want to make the cost of geometric inspection prohibitive.
Does anyone have any high-level idea/advice to share?
For the most efficient traversal of the trajectory, if I am right you need
the ending curvilinear abscissas of every arc (cumulative),
the radii,
the starting angles,
the coordinates of the centers,
so that for a given s you find the index of the arc, then the azimuth and the coordinates of the point. (Either incrementally for a sequence of points, or by bisection for a single point.) That takes five parameters per arc.
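A minimal sketch of single-point access with this five-parameter representation (the arrays, the signed radii, and the names are assumptions of the sketch):

    import bisect, math

    # S[i]  = cumulative abscissa at the end of arc i (global)
    # r[i]  = signed radius (the sign encodes the turning direction)
    # a0[i] = starting angle of arc i, seen from its center (cx[i], cy[i])
    def point_at(s, S, r, a0, cx, cy):
        i = bisect.bisect_left(S, s)              # arc containing s, by bisection
        s0 = S[i - 1] if i > 0 else 0.0
        azimuth = a0[i] + (s - s0) / r[i]         # swept angle = arc length / radius
        return (cx[i] + abs(r[i]) * math.cos(azimuth),
                cy[i] + abs(r[i]) * math.sin(azimuth))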
Only the cumulative abscissas are global, but you can't do without them for single-point accesses. You can drop the radii and starting angles and retrieve them for any arc from the difference of curvilinear abscissas and the limit angles (see below). This reduces to three parameters.
On the other hand, knowing just the coordinates of the centers and those of the starting and ending points is enough to recover the whole geometry, and this takes two parameters per arc.
The meeting point of two arcs is found on the line through the centers, and if you know one radius, the other follows. And the limit angle is given by the direction of the line. So for an incremental traversal, this non-redundant description can do.
For convenient computation, knowing s and the arc index, consider the vectors from the center to the centers of the adjoining arcs. Rotate them so that the first becomes horizontal. The components of the other will give you the amplitude angle. The fraction (s - S_(i-1)) / (S_i - S_(i-1)) of the amplitude gives you the azimuth of the point, to which you apply the counter-rotation.
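A hedged sketch of that computation for interior arcs, ignoring the sign bookkeeping needed to distinguish internal from external tangency (all names illustrative):

    import math

    def point_on_arc(s, i, S, centers):
        cx, cy = centers[i]
        ux, uy = centers[i - 1][0] - cx, centers[i - 1][1] - cy  # to previous center
        vx, vy = centers[i + 1][0] - cx, centers[i + 1][1] - cy  # to next center
        a_u = math.atan2(uy, ux)                  # rotation making u horizontal
        # components of v rotated by -a_u give the amplitude angle
        amp = math.atan2(vy * math.cos(a_u) - vx * math.sin(a_u),
                         vx * math.cos(a_u) + vy * math.sin(a_u))
        frac = (s - S[i - 1]) / (S[i] - S[i - 1])  # fraction of the amplitude
        radius = (S[i] - S[i - 1]) / abs(amp)      # radius from length and angle
        azimuth = a_u + frac * amp                 # counter-rotation applied
        return (cx + radius * math.cos(azimuth), cy + radius * math.sin(azimuth))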
I'd store items with the data required to get info for any point of that element. For example, an arc needs x, y, initial direction, radius, and length (or end point, or angle difference, or whatever you find easiest).
Because you need continuity (same x, y, same bearing, perhaps same curvature) between two ending points, a node with these properties is needed. Notice these properties are common to arcs and straights (a straight being a special arc identified by radius = 0). So you can treat a node the same as an item.
The trajectory should be calculated before any request, so you have all the item data in advance.
The container depends on how you request info.
If the trajectory can somehow be represented on a grid, then a quad-tree may serve you better.
I guess you must find the item from an x,y or accumulated-length input. You will have to iterate through the container to find the element closest to the input data. Sorted data may help.
My choice is a simple vector with the consecutive elements, which happens to be sorted on accumulated trajectory length.
Finding by x,y in an x-sorted container (or a tree) is not so simple, because a given x,y may have perpendiculars to several items, consecutive or not, near or not, and you need to select the nearest one.
The input is a series of point coordinates (x0,y0), (x1,y1), ..., (xn,yn) (n is not very large, say ~1000). We need to create some rectangles as bounding boxes of these points. There's no need to find the globally optimal solution. The only requirement is that if the Euclidean distance between two points is less than R, they should be in the same bounding rectangle. I've searched for some time and it seems to be a clustering problem, for which the K-means method might be useful.
However, the input point coordinates don't follow any specific pattern from one data set to the next, so it may not be possible to fix a specific K for K-means. I am wondering if there is any algorithm or method that can solve this problem?
The only requirement is if the euclidean distance between two point is less than R, they should be in the same bounding rectangle
This is the definition of single-linkage hierarchical clustering cut at a height of R.
Note that this may yield overlapping rectangles.
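If SciPy is acceptable, a minimal sketch of exactly this: single linkage cut at height R, then one min/max rectangle per cluster (the function name is illustrative):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def rectangles(points, R):
        pts = np.asarray(points, dtype=float)
        # single-linkage dendrogram, cut so that merges below R stay together
        labels = fcluster(linkage(pts, method='single'), t=R, criterion='distance')
        return [(pts[labels == c].min(axis=0), pts[labels == c].max(axis=0))
                for c in np.unique(labels)]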
For much faster and highly efficient methods, have a look at bulk loading strategies for R*-trees, such as sort-tile-recursive. It won't satisfy your "only" requirement above, but it will yield well balanced, non-overlapping rectangles.
K-means is obviously not appropriate for your requirements.
With only 1000 points I would do the following:
1) Work out the distance between all pairs of points. If the distance of a pair is less than R, the two points need to go in the same bounding rectangle, so use a disjoint-set data structure (http://en.wikipedia.org/wiki/Disjoint-set_data_structure) to record this.
2) For each subset that comes out of your Disjoint set data structure, work out the min and max co-ordinates of the points in it and use this to create a bounding box for the points in this subset.
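A compact sketch of both steps (the O(n^2) pair scan is fine for ~1000 points; names are illustrative):

    import math

    def bounding_boxes(points, R):
        n = len(points)
        parent = list(range(n))
        def find(a):                      # disjoint-set find with path halving
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        # step 1: union every pair closer than R
        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(points[i], points[j]) < R:
                    parent[find(i)] = find(j)
        # step 2: min/max corner per disjoint subset
        boxes = {}
        for i, (x, y) in enumerate(points):
            x0, y0, x1, y1 = boxes.get(find(i), (x, y, x, y))
            boxes[find(i)] = (min(x0, x), min(y0, y), max(x1, x), max(y1, y))
        return list(boxes.values())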
If you have more points or are worried about efficiency, you will want to make stage (1) more efficient. One easy way is to go through the points in order of x coordinate, keeping only points at most R to the left of the most recent point seen, and using a balanced tree structure to find, among these, the points at most R above or below the most recent point, before calculating exact distances. One step up from this would be a spatial data structure, to get yet more efficiency in finding pairs within distance R of each other.
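A simplified sketch of that sweep, using a plain list for the window instead of a balanced tree (enough to show the idea):

    import math

    def close_pairs(points, R):
        pts = sorted(points)                       # sweep in order of x
        window, pairs = [], []
        for p in pts:
            # keep only points at most R to the left of the current point
            window = [q for q in window if p[0] - q[0] <= R]
            for q in window:
                # cheap vertical filter, then the exact distance test
                if abs(p[1] - q[1]) <= R and math.dist(p, q) < R:
                    pairs.append((p, q))
            window.append(p)
        return pairs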
Note that for some inputs you will get just one huge bounding box because you have long chains of points, and for some other inputs you will get bounding boxes inside bounding boxes, for instance if your points are in concentric circles.
Imagine, for example, an image of scattered points (just a random one from an image search), with the locations shown as blue points. Let's say the blue represents what I'm looking for, and I want to find the coordinates where there is the most blue, meaning the densest area or the center of most points (in such a picture it might be approximately [.5, .5]).
If I have an ArrayList of each and every blue point's (x, y) coordinates, then how do I use those points to find the center of the densest area?
There are several options, depending on what precisely you need. The simplest would be the mean, the average of all points: you sum all the points up and divide by their number.
Finding the most dense area is more complicated, because first you have to come up with a definition of "dense". One option: for each point P, find its 7 nearest neighbors N_P1...N_P7. The point P for which the 7th neighbor has the smallest distance |P - N_P7| is the point with the highest density around it, so you pick that P as the center. You can replace the 7 with any number that works for you; you could even derive it from the data set, say 1/3 of the total number of points.
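A small sketch of both options (the density variant is O(n^2 log n), fine for moderate point counts; names are illustrative):

    import math

    def mean_point(pts):
        xs, ys = zip(*pts)
        return (sum(xs) / len(pts), sum(ys) / len(pts))

    def densest_point(pts, k=7):
        # the point whose k-th nearest neighbour is closest wins
        best, best_d = None, float('inf')
        for p in pts:
            kth = sorted(math.dist(p, q) for q in pts if q is not p)[k - 1]
            if kth < best_d:
                best, best_d = p, kth
        return best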
Filling a rectangle is simple: just make a grid. But if the polygon is arbitrary, the task is not so trivial.
"Regularly" can probably be formulated as: the distance between neighboring points should be R ± alpha. But I'm not sure about this.
Maybe there is some known algorithm to achieve this.
Added:
I need to generate a net with no large holes and no big clusters of points.
Have you thought about using a force-directed layout of the points?
Scatter a number of points randomly over the bounding box of your polygon, then repeatedly apply two simple rules to adjust their location:
If a point is outside of the polygon, move it the minimum possible distance so that it lies within, i.e.: to the closest point on the polygon edge.
Points repel each other with a force inversely proportional to the distance between them, i.e.: for every point, consider every other point and compute a repulsion vector that will move the two points directly apart. The vector should be large for proximate points and small for distant points. Sum the vectors and add to the point's position.
After a number of iterations the points should settle into a steady state with an even distribution over the polygon area. How quickly this state is achieved depends on the geometry of the polygon and how you've scaled the repulsive forces between the points.
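A rough sketch of this scheme, assuming the shapely package is available for the point-in-polygon and closest-boundary-point tests (the force scaling and iteration count are arbitrary choices):

    import numpy as np
    from shapely.geometry import Point, Polygon

    def relax(polygon, n_points=150, n_iters=200, strength=0.05, seed=None):
        rng = np.random.default_rng(seed)
        minx, miny, maxx, maxy = polygon.bounds
        pts = rng.uniform((minx, miny), (maxx, maxy), size=(n_points, 2))
        for _ in range(n_iters):
            # rule 2: pairwise repulsion, magnitude inversely proportional
            # to distance (the vector diff / d^2 has length 1/d)
            diff = pts[:, None, :] - pts[None, :, :]
            d2 = (diff ** 2).sum(axis=2)
            np.fill_diagonal(d2, np.inf)
            pts += strength * (diff / d2[:, :, None]).sum(axis=1)
            # rule 1: project stray points back to the closest boundary point
            for i, (x, y) in enumerate(pts):
                if not polygon.contains(Point(x, y)):
                    ring = polygon.exterior
                    pts[i] = ring.interpolate(ring.project(Point(x, y))).coords[0]
        return pts

For polygons with holes, the projection step would also need to consider the interior rings, not just the exterior.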
You can compute a Constrained Delaunay triangulation of the polygon and use a Delaunay refinement algorithm (search with this keyword).
I have recently implemented refinement in the Fade2D library, http://www.geom.at/fade2d/html/. It takes an arbitrary polygon without self-intersections, as well as an upper bound on the radius of the circumcircle of each resulting triangle. This feature is not contained in the current release 1.02 yet, but I can compile the current development version for Linux or Win64 if you want to try it.
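Fade2D is a C++ library; for a quick experiment in Python, the same idea can be sketched with the triangle package (a wrapper of Shewchuk's Triangle). The polygon and the refinement bounds below are made-up examples:

    import numpy as np
    import triangle   # pip install triangle (wrapper of Shewchuk's Triangle)

    # the polygon as a closed loop: vertices plus the boundary segments
    verts = np.array([(0, 0), (4, 0), (4, 2), (2, 3), (0, 2)], dtype=float)
    segs = [(i, (i + 1) % len(verts)) for i in range(len(verts))]

    # 'p' = triangulate the polygon as a planar straight-line graph,
    # 'q30' = no angle below 30 degrees, 'a0.2' = no triangle area above 0.2
    mesh = triangle.triangulate({'vertices': verts, 'segments': segs}, 'pq30a0.2')
    points = mesh['vertices']   # fairly regularly spaced points over the polygon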