Algorithm for creating a specific geometric structure

I observed that some applications create a geometric structure apparently from just a set of touch points, like this example:
I wonder which algorithms can possibly help me to recreate such geometric structures?
UPDATE
In 3D printing, sometimes a support structure is needed:
Support is needed because some regions of the 3D object, i.e. overhangs, would otherwise collapse while printing. The support structure connects overhangs either to the print floor or to the 3D object itself. The geometric structure shown in the screenshot above is actually a sample support structure.

I am not a specialist in that matter and I may be missing important issues. So here is what I would naively do.
The triangles having an external normal pointing downward reveal the overhangs. When projected vertically and merged along common edges, they define polygonal regions of the base plane. You first have to build those projected polygons, find their intersections, and order the intersections by Z. (You might also want to consider the facing polygons to take the surface thickness into account.)
Now for every intersection polygon, you draw verticals down to the one just below. The projections of the verticals might be sampled from a regular grid or in some other way, to tune the density. You might also consider extending those pillars continuously from the base up to the upper surface, possibly stopping some of them earlier.
The key ingredient in this procedure is a good polygon intersection algorithm.
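For illustration, here is a minimal Python sketch of the overhang-detection and projection step, assuming the mesh is a list of triangles given as three (x, y, z) vertices with consistent outward winding, and using the Shapely library for the polygon merging; the function names and the normal-cosine threshold are my own choices.

```python
import numpy as np
from shapely.geometry import Polygon
from shapely.ops import unary_union

def needs_support(tri, cos_threshold=-0.5):
    """True if the triangle's outward normal points downward enough."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    n = np.cross(b - a, c - a)          # outward normal, assuming CCW winding
    norm = np.linalg.norm(n)
    if norm == 0.0:
        return False                    # degenerate triangle
    # n_z / |n| is the cosine between the normal and +Z; very negative = overhang
    return (n[2] / norm) < cos_threshold

def overhang_regions(mesh):
    """Project downward-facing triangles onto the XY plane and merge them."""
    footprints = [Polygon([(x, y) for x, y, _ in tri])
                  for tri in mesh if needs_support(tri)]
    return unary_union([p for p in footprints if p.is_valid and p.area > 0.0])
```

Pillars can then be dropped from a grid of sample points inside the merged region, and Shapely's `intersection` of two such merged regions gives the Z-ordered polygon intersections described above.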

Related

Interpolating missing contour lines between existing contour lines

Contour lines (aka isolines) are curves that trace constant values across a 2D scalar field. For example, in a geographical map you might have contour lines to illustrate the elevation of the terrain by showing where the elevation is constant. In this case, let's store contour lines as lists of points on the map.
Suppose you have a map that has several contour lines at known elevations, and otherwise you know nothing about the elevations of the map. What algorithm would you use to fill in additional contour lines to approximate the unknown elevations of the map, assuming the landscape is continuous and doesn't do anything surprising?
It is easy to find advice about interpolating the elevation of an individual point using contour lines. There are also algorithms like Marching Squares for turning point elevations into contour lines, but none of these exactly capture this use case. We don't need the elevation of any particular point; we just want the contour lines. Certainly we could solve this problem by filling an array with estimated elevations and then using Marching Squares to estimate the contour lines based on the array, but the two steps of that process seem unnecessarily expensive and likely to introduce artifacts. Surely there is a better way.
IMO, just about all methods will amount to somehow reconstructing the 3D surface by interpolation, even if implicitly.
You may try flattening the curves (turning them into polylines) and triangulating the resulting polygons that they define. (There will be a step of closing the curves that end on the border of the domain.)
By intersecting the triangles with a new level (using linear interpolation along the sides), you will obtain new polylines corresponding to new isocurves. Notice that the intersections with the old levels recreate the old polylines, which is sound.
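As a concrete illustration of that intersection step, here is a small Python sketch for a single flat triangle with (x, y, z) vertices (vertices lying exactly on the queried level would need extra care; the helper name is mine):

```python
def iso_segment(tri, c):
    """Segment (two XY points) where a triangle crosses level c, or None."""
    pts = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - c) * (z1 - c) < 0:       # this side straddles the level
            t = (c - z0) / (z1 - z0)      # linear interpolation parameter
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return tuple(pts) if len(pts) == 2 else None
```

Chaining the segments produced by neighbouring triangles yields the new polylines.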
You may apply a post-smoothing to the curves, but you will have no guarantee of retrieving the original curves and cannot prevent close curves from crossing each other.
Beware that increasing the density of points along the curves will give you a false feeling of accuracy, as the error due to the spacing of the isolines will remain (indeed the reconstructed surface will be cone-like, with one of the curvatures being zero; the surface inside the bottommost and topmost lines will be flat).
As an alternative to using flat triangles, one may think of a scheme where you compute a gradient vector at every vertex (e.g. from a least-squares fit of a plane on the vertex and its neighbors), and use this information to generate a bivariate polynomial surface in the triangle. You must do this in such a way that the values along a side coincide for the two triangles that share it. (Unfortunately, I have no formula to give you.)
The isolines are then obtained by a further subdivision of the triangle into smaller triangles, with a flat approximation.
Actually, this is not very different from taking sample points, (Delaunay) triangulating them, and fitting piecewise continuous patches to the triangles.
Whatever method you use, be it 2D or 3D, it is useful to reason about what happens if you sweep the range of z values in a continuous way. This thought experiment does reconstruct a 3D surface, which will possess continuity and smoothness properties.
A possible improvement over the crude "flat triangulation" model could be to extend every triangle side between two iso-polylines with sides leading to the next iso-polylines. This way, higher-order (cubic) interpolation can be achieved, giving a smoother reconstruction.
Anyway, you can be sure that this will introduce discontinuities or other types of artifacts.
A mixed method (sketched in code below):
flatten the isolines to polylines;
triangulate the polygons formed by the polylines and the borders;
on every node, estimate the surface gradient (least-squares fit of a plane to the node and its neighbors);
in every triangle, consider the two sides along which you need to interpolate and compute the derivatives at the endpoints (from the known gradients and the side directions);
use Hermite interpolation along these sides and solve for the desired iso-levels;
join the points obtained on both sides.
This method should be a good tradeoff between complexity and smoothness. It does reconstruct a continuous surface (except maybe for the remark below).
Note that in some cases, you will obtain three solutions of the cubic. If there are three on each side, join them in order. Otherwise, make a decision on which to join and use the remaining two to close the curve.
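Under the assumptions above, here is a small Python sketch of the Hermite step: f0 and f1 are the values at the side's endpoints, d0 and d1 the directional derivatives along the side (pre-multiplied by the side length), and we solve the Hermite cubic h(t) = c on [0, 1] with numpy's polynomial root finder. The function name is mine.

```python
import numpy as np

def hermite_iso_params(f0, f1, d0, d1, c):
    """Parameters t in [0, 1] where the cubic Hermite interpolant equals c."""
    # h(t) = f0*(2t^3-3t^2+1) + d0*(t^3-2t^2+t) + f1*(-2t^3+3t^2) + d1*(t^3-t^2)
    coeffs = [2*f0 + d0 - 2*f1 + d1,      # t^3
              -3*f0 - 2*d0 + 3*f1 - d1,   # t^2
              d0,                         # t^1
              f0 - c]                     # t^0, with the level moved across
    roots = np.roots(coeffs)
    return sorted(t.real for t in roots
                  if abs(t.imag) < 1e-9 and 0.0 <= t.real <= 1.0)
```

The returned list can indeed contain up to three parameters, which is exactly the three-solutions case just discussed.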

Sparse (Pseudo) Infinite Grid Data Structure for Web Game

I'm considering trying to make a game that takes place on an essentially infinite grid.
The grid is very sparse: there are certain small regions of relatively high density, and relatively few isolated nonempty cells.
The amount of the grid in use is too large to implement naively but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that)
This needs to be easy to persist.
Here are the operations I may want to perform (reasonably efficiently) on this grid:
Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
Set individual cells or blit small regions (the player is making a move)
Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
Find some regions with approximately a given density (player spawning location)
Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching)
Approximate convex hull for a region
Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process).
Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were?
Thanks a lot, everyone!
I think you can do everything using quadtrees, as others have suggested, and maybe a few additional data structures. Here's a bit more detail:
Asking for cell contents, setting cell contents: these are the basic quadtree operations (see the sketch after this list).
Rough shape/outline: Given a rectangle, go down sufficiently many steps within the quadtree that most cells are empty, and make the nonempty subcells at that level black, the others white.
Region with approximately given density: if the density you're looking for is high, then I would maintain a separate index of all objects in your map. Take a random object and check the density around that object in the quadtree. Most objects will be near high density areas, simply because high-density areas have many objects. If the density near the object you picked is not the one you were looking for, pick another one.
If you're looking for low-density, then just pick random locations on the map - given that it's a sparse map, that should typically give you low density spots. Again, if it doesn't work right try again.
Approximate shortest path: if this is a not-too-frequent operation, then create a rough graph of the area "between" the starting point A and end point B, for some suitable definition of between (maybe the square containing the circle with the midpoint of AB as center and 1.5*AB as diameter, except if that diameter is less than a certain minimum, in which case... experiment). Make the same type of grid that you would use for the rough shape / outline, then create (say) a Delaunay triangulation of the black points. Do a shortest path on this graph, then overlay that on the actual map and refine the path to one that makes sense given the actual map. You may have to redo this at a few different levels of refinement - start with a very rough graph, then "zoom in" taking two points that you got from the higher level as start and end point, and iterate.
If you need to do this very frequently, you'll want to maintain this type of graph for the entire map instead of reconstructing it every time. This could be expensive, though.
Approx convex hull: again start from something like the rough shape, then take the convex hull of the black points in that.
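To make the quadtree part concrete, here is a minimal Python sketch of a region quadtree supporting the set-cell and rectangle-query operations from the list above (a neighbourhood request is just a small rectangle query). The bucket size, the power-of-two region assumption, and all names are my own choices; a production version would also want deletion, node pooling, and persistence.

```python
class QuadTree:
    MAX_ITEMS = 8                                # bucket size before splitting

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # square region; size is a power of two
        self.items = {}                          # (cx, cy) -> cell contents
        self.children = None                     # four sub-quadrants, or None

    def _child_for(self, cx, cy):
        half = self.size // 2
        ix = 1 if cx >= self.x + half else 0
        iy = 1 if cy >= self.y + half else 0
        return self.children[iy * 2 + ix]

    def set_cell(self, cx, cy, value):
        if self.children is None:
            self.items[(cx, cy)] = value
            if len(self.items) > self.MAX_ITEMS and self.size > 1:
                self._split()
        else:
            self._child_for(cx, cy).set_cell(cx, cy, value)

    def _split(self):
        half = self.size // 2
        self.children = [QuadTree(self.x,        self.y,        half),
                         QuadTree(self.x + half, self.y,        half),
                         QuadTree(self.x,        self.y + half, half),
                         QuadTree(self.x + half, self.y + half, half)]
        for (cx, cy), v in self.items.items():
            self._child_for(cx, cy).set_cell(cx, cy, v)
        self.items = {}

    def query_rect(self, x0, y0, x1, y1, out=None):
        """All nonempty cells with x0 <= cx <= x1 and y0 <= cy <= y1."""
        out = {} if out is None else out
        if x1 < self.x or y1 < self.y or \
           x0 >= self.x + self.size or y0 >= self.y + self.size:
            return out                           # rectangle misses this node
        if self.children is None:
            out.update({k: v for k, v in self.items.items()
                        if x0 <= k[0] <= x1 and y0 <= k[1] <= y1})
        else:
            for child in self.children:
                child.query_rect(x0, y0, x1, y1, out)
        return out
```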
I'm not sure if this would be easy to put into a relational database; a file-based storage could work but it would be impractical to have a write operation be concurrent with anything else, which you would probably want if you want to allow this to grow to a reasonable number of players (per world / map, if there are multiple worlds / maps). I think in that case you are probably best off keeping a separate process alive... and even then making this properly respect multithreading is going to be a headache.
A kd-tree or a quadtree is a good data structure to solve your problem. The latter especially is a clever way to address the grid and to reduce the 2D complexity to a 1D complexity. Quadtrees are also used in many map applications like Bing and Google Maps. Here is a good start: Nick's quadtree spatial index Hilbert curve blog.
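One concrete way to get that 2D-to-1D reduction for relational storage is a Morton (Z-order) key, which interleaves the bits of the cell coordinates so that nearby cells tend to get nearby keys; the Hilbert curve from the linked blog has better locality but is fiddlier to implement. A small Python sketch (names mine):

```python
def morton_key(x, y, bits=32):
    """Interleave the low `bits` bits of non-negative x and y into one integer."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits at even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits at odd positions
    return key
```

Stored as an indexed integer column, a rectangular region can then be fetched with a handful of key-range scans (with some over-fetching filtered out afterwards).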

Packing arbitrary polygons within an arbitrary boundary

I was wondering if anybody could point me to the best algorithm/heuristic to fit my particular polygon packing problem. I am given a single polygon as a boundary (convex or concave, may also contain holes) and a single "fill" polygon (also convex or concave, but without holes), and I need to fill the boundary polygon with a specified number of fill polygons. (I'm working in 2D.)
Many of the polygon packing heuristics I've found assume that the boundary and/or filling polygons will be rectangular and also that the filling polygons will be of different sizes. In my case, the filling polygons may be non-rectangular, but all will be exactly the same.
Maybe this is a particular type of packing problem? If somebody has a definition for this type of polygon packing I'll gladly google away, but so far I've not found anything which is similar enough to be of great use.
Thanks.
The question you ask is very hard. To put this in perspective, the (much) simpler case where you're packing the interior of your bounded polygon with non-overlapping disks is already hard, and disks are the simplest possible "packing shape" (with any other shape you have to consider orientation as well as size and center location).
In fact, I think it's an open problem in computational geometry to determine for an arbitrary integer N and arbitrary bounded polygonal region (in the Euclidean plane), what is the "optimal" (in the sense of covering the greatest percentage of the polygon interior) packing of N inscribed non-overlapping disks, where you are free to choose the radius and center location of each disk. I'm sure the "best" answer is known for certain special polygonal shapes (like rectangles, circles, and triangles), but for arbitrary shapes your best "heuristic" is probably:
1. Start your shape counter at N.
2. Add the largest "packing shape" you can fit completely inside the polygonal boundary without overlapping any other packing shapes.
3. Decrement your shape counter.
4. If your shape counter is > 0, go to step 2.
I say "probably" because "largest first" isn't always the best way to pack things into a confined space. You can dig into that particular flavor of craziness by reading about the bin packing problem and knapsack problem.
EDIT: Step 2 by itself is hard. A reasonable strategy would be to pick an arbitrary point on the interior of the polygon as the center and "inflate" the disk until it touches either the boundary or another disk (or both), and then "slide" the disk while continuing to inflate it so that it remains inside the boundary without overlapping any other disks until it is "trapped" - with at least 2 points of contact with the boundary and/or other disks. But it isn't easy to formalize this "sliding process". And even if you get the sliding process right, this strategy doesn't guarantee that you'll find the biggest "inscribable disk" - your "locally maximal" disk could be trapped in a "lobe" of the interior which is connected by a narrow "neck" of free space to a larger "lobe" where a larger disk would fit.
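As a rough illustration only, here is a Python sketch that sidesteps the hard sliding process with plain rejection sampling: try many random interior centers and keep the one admitting the largest disk. It uses Shapely, all names are mine, and it inherits the "trapped in a lobe" weakness just described.

```python
import random
from shapely.geometry import Point

def largest_disk(region, placed, tries=2000):
    """Best (center, radius) found for a disk inside `region` (a Shapely
    Polygon) that avoids the already-placed (center, radius) disks."""
    minx, miny, maxx, maxy = region.bounds
    best_center, best_r = None, 0.0
    for _ in range(tries):
        p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
        if not region.contains(p):
            continue
        r = region.boundary.distance(p)     # room to the outer ring and any holes
        for center, radius in placed:
            r = min(r, p.distance(center) - radius)   # room to other disks
        if r > best_r:
            best_center, best_r = p, r
    return best_center, best_r
```

Calling this N times and appending each result to `placed` gives the greedy loop from the numbered steps above.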
Thanks for the replies. My requirements were such that I was able to simplify the problem by not having to deal with orientation, and I simplified even further by only really worrying about the bounding box of the fill element. With these two simplifications the problem became much easier, and I used a stripe-like filling algorithm in conjunction with a spatial hash grid (since there were existing elements I was not allowed to fill over).
With this approach I simply divided the fill area into stripes and created a spatial hash grid to register the existing elements within the fill area. I created a second spatial hash grid to register the fill area itself. (Since my stripes were not guaranteed to lie within the bounding area, this made checking whether my fill element was in the fill area a little faster: I could just query the grid, and if all the cells where my fill element was to be placed were full, I knew the fill element was inside the fill area.) After that, I iterated over each stripe and placed a fill element wherever the hash grids would allow. This is certainly not an optimal solution, but it ended up being all that was required for my particular situation, and it was pretty fast as well. I found the required information about creating a spatial hash grid here. I got the idea for filling by stripes from this article.
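For readers unfamiliar with the structure, here is a minimal Python sketch of a spatial hash grid of the kind described (not the asker's actual code; names and the bounding-box registration policy are mine):

```python
from collections import defaultdict

class SpatialHashGrid:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)        # (col, row) -> registered objects

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, obj, minx, miny, maxx, maxy):
        """Register an object's bounding box in every cell it overlaps."""
        x0, y0 = self._key(minx, miny)
        x1, y1 = self._key(maxx, maxy)
        for cx in range(x0, x1 + 1):
            for cy in range(y0, y1 + 1):
                self.cells[(cx, cy)].append(obj)

    def query(self, minx, miny, maxx, maxy):
        """All objects registered in cells overlapping the query box."""
        x0, y0 = self._key(minx, miny)
        x1, y1 = self._key(maxx, maxy)
        found = set()
        for cx in range(x0, x1 + 1):
            for cy in range(y0, y1 + 1):
                found.update(self.cells.get((cx, cy), ()))
        return found
```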
This type of problem is very complex to solve geometrically.
If you can accept a good solution instead of the 100% optimal solution, then you can solve it with a raster algorithm. You draw (rasterize) the boundary polygon into one in-memory image and the fill polygon into another in-memory image. You can then more easily search for a place where the fill polygon will fit in the boundary polygon by overlaying the two images with various (X, Y) offsets for the fill polygon and checking the pixel values. When you find a place where the fill polygon fits, you clear its pixels in the boundary image and repeat until there are no more places where the fill polygon fits.
The keywords to google for are: rasterization, overlay, algorithm.
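Here is a minimal Python sketch of that loop, assuming the two polygons have already been rasterized into boolean numpy masks (for instance with PIL's ImageDraw): `free` is True where the boundary interior is still unfilled, and `fill_mask` is True on the fill polygon's pixels. Names are mine, and the brute-force scan can be sped up considerably (e.g. with FFT-based correlation).

```python
import numpy as np

def pack_by_raster(free, fill_mask):
    """Greedily stamp `fill_mask` wherever it fits inside `free`;
    returns the (row, col) offsets used and mutates `free` in place."""
    H, W = free.shape
    h, w = fill_mask.shape
    placements = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = free[r:r + h, c:c + w]
            if np.all(window[fill_mask]):   # every fill pixel lands on free space
                window[fill_mask] = False   # claim those pixels
                placements.append((r, c))
    return placements
```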
If your fill polygon is the shape of a jigsaw piece, many algorithms will miss the interlocking alignment. (I don't know what to suggest in that case)
One approach to the general problem that works well when the boundary is much larger than the fill pieces is to tile an infinite plane with the pieces in the best way you can, and then look for the optimum alignment of the boundary on this plane.

How to subsample a 2D polygon?

I have polygons that define the contours of counties in the UK. These shapes are very detailed (10k to 20k points each), which makes the related computations (is point X in polygon P?) quite computationally expensive.
Thus, I would like to "subsample" my polygons, to obtain a similar shape but with less points. What are the different techniques to do so?
The trivial one would be to take one every N points (thus subsampling by a factor N), but this feels too "crude". I would rather do some averaging of points, or something of that flavor. Any pointer?
Two solutions spring to mind:
1) since the map of the UK is reasonably squarish, you could choose to render a bitmap with the counties. Assign each a specific colour, and then render the borders with a 1 or 2 pixel thick black line. This means you'll only have to perform the expensive interior/exterior calculation if a sample happens to lie on the border. The larger the bitmap, the less often this will happen.
2) simplify the county outlines. You can use the Ramer–Douglas–Peucker algorithm to recursively simplify the boundaries. Just make sure you cache the results. You may also have to solve this not for entire county boundaries but for shared boundaries only, to ensure no gaps appear between neighbouring counties. This might be quite tricky.
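For reference, here is a compact Python sketch of Ramer–Douglas–Peucker for an open polyline of (x, y) tuples; `epsilon` is the maximum allowed deviation in map units. A closed county ring would be split at two well-separated points first, and shared borders simplified once so neighbouring counties stay watertight.

```python
import math

def rdp(points, epsilon):
    """Recursively drop points closer than `epsilon` to the local chord."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1e-12
    # find the interior point farthest from the chord
    dmax, imax = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x0) - dx * (py - y0)) / length
        if d > dmax:
            dmax, imax = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]     # whole span is close enough to the chord
    left = rdp(points[:imax + 1], epsilon)
    right = rdp(points[imax:], epsilon)
    return left[:-1] + right               # drop the duplicated split point
```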
Here you can find a project dealing exactly with your issues. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter" type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Here you can request a copy of the paper.
Seemingly they planned to offer an online service for requesting calculations, but I didn't test it, and probably it isn't running.
HTH!
Polygon triangulation should help here. You'll still have to check many polygons, but these are triangles now, so they are easier to check and you can use some optimizations to determine only a small subset of polygons to check for a given region or point.
As it seems you have all the algorithms you need for polygons, not only for triangles, you can also merge several triangles that are too small after triangulation or if triangle count gets too high.

Algorithm for simplifying 3d surface?

I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains a lot more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon for not having a better definition than "simplified, smoother"). The underlying surface is not a mathematical one so I'm not hoping to fit the data set to some mathematical function.
Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick and dirty results with an algorithm that just merges adjacent tris that have similar normals.
I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the center, the one closest to the average, or the average itself, etc.). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase/decrease detail on the fly.
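A minimal Python sketch of that sub-volume idea, binning the points into a cubic grid and keeping the per-cell centroid as the representative (the averaging also gives a little of the desired smoothing); the names and the centroid choice are mine.

```python
import numpy as np

def grid_downsample(points, cell):
    """points: (N, 3) array -> (M, 3) array with one centroid per occupied cell."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / cell).astype(np.int64)   # which cubic cell each point is in
    sums = {}
    for key, p in zip(map(tuple, keys), pts):
        acc = sums.setdefault(key, [np.zeros(3), 0])
        acc[0] += p                                # running sum of coordinates
        acc[1] += 1                                # running point count
    return np.array([s / n for s, n in sums.values()])
```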
I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the sides emerging from each vertex and take the dot products of pairs (?) of them. The points representing very shallow "hills" will subtend huge angles (near 180 degrees) and have small dot products.
Those vertices with the smallest numbers would then be candidates for removal. The vertices around them will then form a plane.
Or something like that.
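Here is one rough Python rendering of that idea, assuming the mesh is given as a point array plus an adjacency map from each vertex to its neighbours. The score is the minimum pairwise dot product of the normalized edges leaving a vertex; values near -1 mean some pair of edges runs almost straight through it, suggesting low curvature. This is only a heuristic, and all names are mine.

```python
import numpy as np
from itertools import combinations

def flatness_scores(points, adjacency):
    """Map each vertex to the min pairwise dot product of its edge directions."""
    scores = {}
    for v, nbrs in adjacency.items():
        dirs = []
        for n in nbrs:
            e = np.asarray(points[n], dtype=float) - np.asarray(points[v], dtype=float)
            norm = np.linalg.norm(e)
            if norm > 0.0:
                dirs.append(e / norm)
        if len(dirs) >= 2:
            scores[v] = min(float(np.dot(a, b)) for a, b in combinations(dirs, 2))
    return scores   # vertices scoring near -1 are candidates for removal
```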
Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction is used to find a meshed surface to fit the point cloud; however, this method yields lots of triangles. You can then apply a mesh reduction technique to reduce the polygon count in a way that minimizes error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
There exist several different techniques for point-based surface model simplification, including:
clustering;
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the Conference on Visualization '02, pages 163–170, Washington, DC, 2002. IEEE.
Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you can choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of 'near') can be considered to contain similar information, and so reduced to single representatives for each such group.
Could you give some more details?
It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud you should first get rid of noise artefacts by building a profile of the kind of noise that you have, including its frequency and directional characteristics, and performing a reduction matched to that noise profile. Good normal vectors are helpful for that.
Here is a document about 5-6 simplification methods using Delaunay, Voronoi, and k-nearest-neighbour maths:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
Here is a recent C++ version:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp
