Mesh simplification for multiple layers - computational-geometry

There is a landscape in my game that consists of several layers (each is a separate mesh). I want to apply mesh simplification to all the layers, but I found that there are many intersections between different layers after applying simplification in the game engine. Is there a simplification algorithm that avoids intersections (or reduces their number), or how can I avoid the intersections after simplification? (Any open source code repo or library would be appreciated.)

If I have understood correctly, the problem arises because you are decimating each layer independently, so each decimation process has no information about the others.
One idea you can try is to quantize the coordinates of your vertices after each decimation: instead of letting the coordinates take arbitrary values, force them to be multiples of a given step using rounding or truncation. This increases the probability that vertices from two different layers end up with the same values. Depending on your layers, it may be enough to quantize only one coordinate to avoid the impression of overlap between layers.
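As an illustration, here is a minimal sketch in Python/NumPy; the `step` value is a made-up tuning parameter you would pick per scene, not something prescribed by any engine:

```python
import numpy as np

def quantize_vertices(vertices, step=0.05):
    """Snap every coordinate to the nearest multiple of `step`, so
    vertices from different layers land on the same grid values."""
    v = np.asarray(vertices, dtype=float)
    return np.round(v / step) * step

def quantize_axis(vertices, axis=1, step=0.05):
    """Quantize only one coordinate (e.g. the height), which can
    already be enough to hide the overlap between layers."""
    v = np.asarray(vertices, dtype=float).copy()
    v[:, axis] = np.round(v[:, axis] / step) * step
    return v
```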
Another idea is to use a simplification algorithm based on "vertex clustering decimation", which produces an effect similar to quantization. In this case you can run the vertex clustering on a point cloud that contains the vertices of every layer together, so that the representative vertex chosen for each cluster is shared between all the layers.
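A hedged sketch of that idea, using a simple uniform grid as the clustering criterion (the cell size is again a tuning parameter of your own choosing):

```python
import numpy as np

def cluster_vertices(all_layer_vertices, cell=0.5):
    """Grid-based vertex clustering over the combined point cloud of
    every layer: vertices falling in the same grid cell are replaced
    by one shared representative (the cell average)."""
    v = np.asarray(all_layer_vertices, dtype=float)
    keys = np.floor(v / cell).astype(np.int64)            # cell index per vertex
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    reps = np.zeros((inverse.max() + 1, v.shape[1]))
    for d in range(v.shape[1]):
        reps[:, d] = np.bincount(inverse, weights=v[:, d]) / counts
    # map every original vertex (from any layer) to its shared representative
    return reps[inverse]
```

Because the cloud is clustered jointly, vertices of different layers that fall in the same cell are guaranteed to map to the same representative.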

Related

Algorithm for creating a specific geometric structure

I have observed that some applications create a geometric structure apparently from just a set of touch points, like this example:
I wonder which algorithms can possibly help me to recreate such geometric structures?
UPDATE
In 3D printing, sometimes a support structure is needed:
The need for support is due to the collapse of some regions of the 3D object, i.e. overhangs, while printing. The support structure is supposed to connect overhangs either to the print floor or to the 3D object itself. The geometric structure shown in the screenshot above is actually a sample support structure.
I am not a specialist in that matter and I may be missing important issues. So here is what I would naively do.
The triangles whose outward normal points downward will reveal the overhangs. When projected vertically and merged along common edges, they define polygonal regions of the base plane. You first have to build those projected polygons, find their intersections, and order the intersections by Z. (You might also want to consider the facing polygons to take the surface thickness into account.)
Now, for every intersection polygon, you draw verticals down to the one just below. The projections of the verticals might be sampled from a regular grid or in some other way, to tune the density. You might also consider running those pillars continuously from the base plate up to the upper surface, possibly stopping some of them earlier.
The key ingredient in this procedure is a good polygon intersection algorithm.
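The first step, finding the downward-facing triangles, is easy to prototype. Here is a hedged NumPy sketch; the `cos_limit` threshold is an assumption that you would tune to your printer's maximum overhang angle:

```python
import numpy as np

def overhang_triangles(vertices, faces, cos_limit=-0.5):
    """Return the indices of faces whose outward normal points
    downward, i.e. whose normal z-component is below `cos_limit`
    (an assumed printability threshold)."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    n = np.cross(b - a, c - a)                    # face normals (unnormalized)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return np.nonzero(n[:, 2] < cos_limit)[0]     # strongly downward-facing
```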

Finding vertices in a mesh that are within certain proximity of each other

I have a 3D mesh that comprises a certain number of vertices.
I know that there are some vertices that are really close to one another. I want to find groups of these, so that I can normalize them.
I could build a k-d tree and do a basic nearest-neighbour search (NNS), but that doesn't scale so well if I don't have a reference point.
I want to find these groups in relation to all points.
In my searches I also found k-means, but I cannot seem to wrap my head around its scientific descriptions to figure out whether that's really what I need.
I'm not well versed in spatial algorithms in general. I know where one can apply them (for instance, in this case), but I lack the actual know-how to even have the correct keywords.
So, yeah, what algorithms are meant for such a task?
Simple idea that might work:
Compute a slightly enlarged bounding volume for each vertex in the mesh. For instance, if you use a sphere, give it a small radius, e.g. equal to the length of the smallest edge of the mesh.
Compute the intersections between the vertices' bounding volumes. Use a collision detection algorithm for that, such as I-Collide. Use a disjoint-set data structure to group the points that are in collision.
Merge all the points residing in the same set.
You can fine-tune the algorithm by changing the size of the bounding volumes. You can also use this algorithm as a starting point for a k-means algorithm or another sound clustering technique.
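Here is a minimal sketch of the same idea in Python, assuming SciPy is available. It swaps I-Collide for a k-d tree pair query (two equal spheres collide exactly when their centres are within twice the sphere radius, so `radius` below plays that role):

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_close_vertices(vertices, radius):
    """Group vertices closer than `radius` with a disjoint-set,
    then replace each group by its average position."""
    v = np.asarray(vertices, dtype=float)
    parent = list(range(len(v)))

    def find(i):                        # disjoint-set find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # all pairs within `radius`, via a k-d tree instead of I-Collide
    for i, j in cKDTree(v).query_pairs(radius):
        parent[find(i)] = find(j)       # union the two groups

    groups = {}
    for i in range(len(v)):
        groups.setdefault(find(i), []).append(i)
    return np.array([v[idx].mean(axis=0) for idx in groups.values()])
```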

The logic behind Skeletal Animation in OpenGL

What are the bone weights, bone influences, joints, offset and local matrices used for? There is no article on the internet that explains the logic well. I still don't know whether every bone has a different model that is later combined with the other models, nor how to handle these matrices, how to set up bones, or how to combine and skin them... I would be glad if you can share any articles or your knowledge about skeletal animation in OpenGL.
Some of these terms will differ between implementations. Rather than try to provide concrete definitions, I'd prefer to give a very rough overview of how skinning works, as I think that's what you're asking.
Also, this is not particularly GL specific...
The main idea is that you take a polygonal model and attach it to a skeleton. Each vertex in the model is assigned to one or more bones. A bone, in this context, is really just a transformation, though they are typically visualised as a skeleton, as the bones are usually authored in a hierarchy and will naturally resemble an actual skeleton.
A 'joint' may just mean 'bone', or in other contexts may in fact refer to how two bones are connected and articulated...
As the bones are hierarchical, they will have 'local' transforms which describe their transformation relative to a parent. At runtime the transforms will usually be concatenated such that they are all in the same space.
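As a hedged illustration (not tied to any particular engine), concatenating the local matrices down the hierarchy can look like this; the `parents` array holding one parent index per bone is my own convention here:

```python
import numpy as np

def global_transforms(local_matrices, parents):
    """Concatenate local bone matrices down the hierarchy so every
    bone ends up in the same (model) space. parents[i] is the index
    of bone i's parent, or -1 for the root; parents are assumed to
    come before their children in the array."""
    globals_ = np.empty_like(local_matrices)
    for i, p in enumerate(parents):
        globals_[i] = local_matrices[i] if p < 0 else globals_[p] @ local_matrices[i]
    return globals_
```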
The assigning of vertices to bones is done using weights. Weights will usually add up to 1 for each vertex. Weights can be automatically assigned by proximity to a bone, but will typically be hand-adjusted by an artist. Often they are 'painted' onto the 3D model using a tool in the art package.
At runtime, vertices are transformed by each bone they are influenced by, and the final vertex position is the weighted average of the result of those different transforms. How that weighted average is calculated can vary, but that's the general approach.
However, for runtime applications it is usually important to keep the number of different bones influencing a vertex to a minimum, and there may well be an upper limit which is relatively small: 4, perhaps. So instead of providing each vertex with a weight for every bone in the skeleton, it is common to provide a fixed number of joint indices, with corresponding weights.
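Putting the two previous paragraphs together, a linear-blend-skinning sketch with a fixed limit of 4 influences per vertex might look like the following. All names are illustrative, and `bone_matrices` is assumed to already include the inverse bind ("offset") matrices:

```python
import numpy as np

def skin_vertices(positions, bone_indices, bone_weights, bone_matrices):
    """Linear blend skinning: each vertex carries 4 bone indices and
    4 weights summing to 1; its final position is the weighted
    average of the 4 bone-transformed positions."""
    out = np.zeros_like(positions)
    homo = np.hstack([positions, np.ones((len(positions), 1))])  # homogeneous coords
    for k in range(4):                              # the fixed influence limit
        idx = bone_indices[:, k]
        w = bone_weights[:, k:k + 1]
        # transform every vertex by its k-th influencing bone
        transformed = np.einsum('nij,nj->ni', bone_matrices[idx], homo)
        out += w * transformed[:, :3]
    return out
```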
Note that you generally do not do anything with texture coordinates when skinning, but you will almost certainly have to recalculate normals and possibly tangent vectors. Again, how to do that can vary between implementations.

How to subsample a 2D polygon?

I have polygons that define the contours of counties in the UK. These shapes are very detailed (10k to 20k points each), which makes the related computations (is point X inside polygon P?) quite computationally expensive.
I would therefore like to "subsample" my polygons, to obtain a similar shape with fewer points. What are the different techniques for doing so?
The trivial one would be to keep one in every N points (thus subsampling by a factor of N), but this feels too "crude". I would rather do some averaging of points, or something of that flavour. Any pointers?
Two solutions spring to mind:
1) since the map of the UK is reasonably squarish, you could choose to render a bitmap with the counties. Assign each a specific colour, and then render the borders with a 1 or 2 pixel thick black line. This means you'll only have to perform the expensive interior/exterior calculation if a sample happens to lie on the border. The larger the bitmap, the less often this will happen.
2) simplify the county outlines. You can use the Ramer–Douglas–Peucker algorithm to recursively simplify the boundaries (see the sketch below). Just make sure you cache the results. You may also have to apply it not to entire county boundaries but to shared boundaries only, to ensure there are no gaps between neighbouring counties. This might be quite tricky.
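If Shapely is an option, its simplify() implements Douglas-Peucker, so a minimal sketch (with dummy data standing in for a real county outline) is:

```python
from shapely.geometry import Polygon

# A stand-in for a county outline with thousands of points
county = Polygon([(0, 0), (5, 0.01), (10, 0), (10, 10), (0, 10)])

# `tolerance` is the maximum deviation allowed, in the polygon's
# coordinate units; preserve_topology=True keeps the result valid.
simplified = county.simplify(tolerance=0.1, preserve_topology=True)
print(len(county.exterior.coords), "->", len(simplified.exterior.coords))
```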
Here you can find a project dealing with exactly your issue. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Here you can request a copy of the paper.
It seems they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running.
HTH!
Polygon triangulation should help here. You'll still have to check many shapes, but they are triangles now, so they are easier to check, and you can use a spatial index to narrow the test down to a small subset of triangles for a given region or point.
Since it seems you have all the algorithms you need for general polygons, not only for triangles, you can also merge triangles that are too small after triangulation, or merge them if the triangle count gets too high.
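The per-triangle test itself is cheap; here is a hedged barycentric sketch in NumPy:

```python
import numpy as np

def point_in_triangle(p, a, b, c):
    """Barycentric point-in-triangle test (2D): cheap enough to run
    against the small candidate set a spatial index gives you."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return v >= 0 and w >= 0 and v + w <= 1

# inside / outside a right triangle
tri = [np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])]
print(point_in_triangle(np.array([0.2, 0.2]), *tri))  # True
print(point_in_triangle(np.array([0.8, 0.8]), *tri))  # False
```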

Find a similarity of two vector shapes

I'm looking for any information/algorithms related to comparing vector graphics. E.g. say there are two point collections or vector files containing two almost identical figures; I want to determine that the first figure is about 90% similar to the second one.
A common way to test for similarity is with image moments. Central moments are intrinsically invariant to translation, and if the objects you compare might be scaled or rotated you can use moments that are also invariant to those transformations, such as Hu moments.
Most of the programs I know would require rasterized versions of the vector objects, but the moments can be calculated directly from the vector graphics using a Green's theorem approach. A more simplistic approach, which just identifies unique (unordered) vertex configurations, would be to convert the Hu moment integrals into sums over the vertices; in a physics analogy, this replaces the continuous object with equal point masses at each vertex.
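For a concrete starting point, OpenCV exposes both pieces: cv2.moments() on a contour integrates along the polygon (a Green's theorem approach, so no rasterization is needed), and cv2.matchShapes() compares the resulting Hu invariants. A minimal sketch, assuming OpenCV is installed and using made-up example shapes:

```python
import cv2
import numpy as np

# Two similar quadrilaterals given directly as vertex lists;
# b is a scaled and translated copy of a
a = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], np.float32).reshape(-1, 1, 2)
b = np.array([[1, 1], [9, 1], [9, 7], [1, 7]], np.float32).reshape(-1, 1, 2)

# matchShapes compares the Hu invariants of the two contours;
# 0 means identical up to translation, scale and rotation
score = cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)
print(f"dissimilarity: {score:.4f}")  # near 0 for these two shapes
```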
There is a paper on a tool called VISTO that sorts vector graphics images (using moments, I think), which should certainly be useful for more details.
You could search for fingerprint matching algorithms. Fingerprints are usually converted to a set of points with their relative location to each other, which makes it basically the same problem as yours.
You could rasterize the shapes to a non-vector graphic and then apply standard image analysis techniques like SIFT keypoints, etc.
