I have multiple 2D polygons built up from points in the y and z directions, and each of these "faces" is located at a coordinate x. I want to show this as a solid model and therefore need to triangulate the points between the sections.
This would be easy if the points were evenly distributed and there were an equal number of points on each section. But that is not the case.
One section can have 4 points, and the next can have 32. Does anyone know of any algorithms or methods to do this?
I attached a picture that shows how the cross sections can look.
http://i.stack.imgur.com/f6B91.jpg
For the case of parallel slices, you can have a look at Boissonnat, Geiger (1993); for the general case, see Boissonnat, Memari (2007), which also references other work.
One solution is to create a transformation that develops the section points onto a plane, use a Delaunay triangulation to triangulate these points, and then map the triangles back into your coordinate system. In the sample given, you could develop the points radially by taking the centre of gravity (mean coordinate) of each section, and using the distance and bearing to this point as your developed coordinates. This is a method I've seen used to triangulate the inside of tunnels.
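For illustration, here is a minimal sketch of the radial development described above, with hypothetical point types; it converts one section's points to (bearing, distance) coordinates around the section's mean point, which could then be handed to a 2D Delaunay triangulator such as CGAL or Triangle:

    #include <cmath>
    #include <vector>

    struct Point3 { double x, y, z; };   // a cross-section point (x selects the section)
    struct Point2 { double u, v; };      // developed coordinates: bearing and radius

    // Develop one section's points radially around their mean coordinate.
    // Only a sketch: the Delaunay step itself is left to a library.
    std::vector<Point2> developSection(const std::vector<Point3>& section)
    {
        std::vector<Point2> developed;
        if (section.empty()) return developed;

        // centre of gravity (mean coordinate) of the section in the y-z plane
        double cy = 0.0, cz = 0.0;
        for (const Point3& p : section) { cy += p.y; cz += p.z; }
        cy /= section.size();
        cz /= section.size();

        developed.reserve(section.size());
        for (const Point3& p : section) {
            double dy = p.y - cy, dz = p.z - cz;
            double bearing = std::atan2(dz, dy);   // angle around the centre
            double radius  = std::hypot(dy, dz);   // distance to the centre
            developed.push_back({bearing, radius});
        }
        return developed;
    }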
I observed that some applications create a geometric structure apparently from just a set of touch points, like this example:
I wonder which algorithms can possibly help me to recreate such geometric structures?
UPDATE
In 3D printing, sometimes a support structure is needed:
The need for support is due to the collapse of some 3D object regions, i.e. overhangs, while printing. The support structure is supposed to connect overhangs either to the print floor or to the 3D object itself. The geometric structure shown in the screenshot above is actually a sample support structure.
I am not a specialist in that matter and I may be missing important issues. So here is what I would naively do.
The triangles having an external normal pointing downward will reveal the overhangs. When projected vertically and merged by common edges, they define polygonal regions of the base plane. You first have to build those projected polygons, find their intersections, and order the intersections by Z. (You might also want to consider the facing polygons to take the surface thickness into account.)
Now for every intersection polygon, you draw verticals to the one just below. The projections of the verticals might be sampled from a regular grid or in some other way, to tune the density. You might also consider sampling those pillars continuously from the basement to the upper surface, possibly stopping some of them earlier.
The key ingredient in this procedure is a good polygon intersection algorithm.
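As a small illustration of the very first step, here is a sketch (with hypothetical mesh types, triangles stored as three vertices each) that collects the downward-facing triangles whose vertical projections form the support polygons:

    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    // Cross product of the two edge vectors gives the (unnormalised) face normal;
    // this assumes the mesh vertices are ordered so the normal points outward.
    static Vec3 faceNormal(const Triangle& t)
    {
        Vec3 e1{t.b.x - t.a.x, t.b.y - t.a.y, t.b.z - t.a.z};
        Vec3 e2{t.c.x - t.a.x, t.c.y - t.a.y, t.c.z - t.a.z};
        return {e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x};
    }

    // Keep the triangles whose external normal points downward; their vertical
    // projections are the polygonal regions to merge and intersect afterwards.
    std::vector<Triangle> overhangTriangles(const std::vector<Triangle>& mesh)
    {
        std::vector<Triangle> overhangs;
        for (const Triangle& t : mesh)
            if (faceNormal(t).z < 0.0)
                overhangs.push_back(t);
        return overhangs;
    }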
I've got an outline (a list of points) for a plane I want to generate. The plane is quite big and I need evenly distributed vertices inside the outline. Each vertex has a color value from red to green to visualize some data in the plane. I need to visualize the data as precisely as possible, in real time.
My idea was to simply create a grid and adjust all the vertices outside of the outline. This turned out to be quite complex.
This is a quick example of what I want to achieve.
Is there any algorithm that solves this problem?
Is there another way to generate a mesh from an outline with evenly distributed vertices?
It sounds like you want to do something like this:
1) First triangulate your polygon to create a mesh. There are plenty of options: https://en.wikipedia.org/wiki/Polygon_triangulation
2) Then, while any edge in the mesh is too long (meaning that the points at either end might be too far apart), add the midpoint of the longest edge to the mesh, dividing the adjacent triangles in two (sketched below).
The result is a mesh with every point within a limited distance of other points in every direction. The resulting mesh will not necessarily be optimal, in that it may have more points than are strictly required, but it will probably satisfy your needs.
If you need to reduce the number of points and avoid thin triangles, you can apply Delaunay edge flipping around each candidate edge first: https://en.wikipedia.org/wiki/Delaunay_triangulation#Visual_Delaunay_definition:_Flipping
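Here is a minimal sketch of the subdivision loop in step 2, assuming the triangulation is kept as a flat list of 2D triangles. A real implementation would also track edge adjacency so that both triangles sharing a long edge are split together and no T-junctions appear; that bookkeeping is omitted here for brevity:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };
    struct Tri { Pt a, b, c; };

    static double dist(const Pt& p, const Pt& q)
    {
        return std::hypot(p.x - q.x, p.y - q.y);
    }

    // Refine the triangulation until no edge is longer than maxLen by
    // bisecting the longest edge of every oversized triangle.
    std::vector<Tri> refine(std::vector<Tri> tris, double maxLen)
    {
        bool changed = true;
        while (changed) {
            changed = false;
            std::vector<Tri> next;
            for (const Tri& t : tris) {
                double ab = dist(t.a, t.b), bc = dist(t.b, t.c), ca = dist(t.c, t.a);
                double longest = std::max({ab, bc, ca});
                if (longest <= maxLen) { next.push_back(t); continue; }
                changed = true;
                if (longest == ab) {
                    Pt m{(t.a.x + t.b.x) / 2, (t.a.y + t.b.y) / 2};
                    next.push_back({t.a, m, t.c});
                    next.push_back({m, t.b, t.c});
                } else if (longest == bc) {
                    Pt m{(t.b.x + t.c.x) / 2, (t.b.y + t.c.y) / 2};
                    next.push_back({t.b, m, t.a});
                    next.push_back({m, t.c, t.a});
                } else {
                    Pt m{(t.c.x + t.a.x) / 2, (t.c.y + t.a.y) / 2};
                    next.push_back({t.c, m, t.b});
                    next.push_back({m, t.a, t.b});
                }
            }
            tris.swap(next);
        }
        return tris;
    }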
Although it's not totally clear from the question, the marching cubes algorithm, adapted to two dimensions (marching squares), comes to mind. A detailed description of the two-dimensional version can be found here.
Delaunay meshing can create evenly distributed vertices inside a shape. The image below shows a combined grid- and Delaunay-mesh. You may have a look here.
I have a set of 3d points that lie in a plane. Somewhere on the plane, there will be a hole (which is represented by the lack of points), as in this picture:
I am trying to find the contour of this hole. Other solutions out there involve finding convex/concave hulls but those apply to the outer boundaries, rather than an inner one.
Is there an algorithm that does this?
If you know the plane (which you could determine by PCA), you can project all points into this plane and continue with the 2D coordinates. Thus, your problem reduces to finding boundary points in a 2D data set.
Your data looks as if it might be uniformly sampled (independently per axis). Then, a very simple check might be sufficient: Calculate the centroid of the - let's say 30 - nearest neighbors of a point. If the centroid is very far away from the original point, you are very likely on a boundary.
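A minimal sketch of that check, assuming the points are already projected to 2D and using a brute-force neighbour search for clarity (a k-d tree would be used in practice); the threshold depends on your sampling spacing:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct P2 { double x, y; };

    // A point is a boundary candidate when the centroid of its k nearest
    // neighbours is displaced from it by more than `threshold` (interior
    // points have neighbours on all sides, so the centroid stays close).
    bool isBoundaryPoint(const std::vector<P2>& pts, std::size_t index,
                         std::size_t k, double threshold)
    {
        const P2& p = pts[index];

        // Brute-force neighbour search: copy the other points and partition
        // so that the k closest come first.
        std::vector<P2> others;
        others.reserve(pts.size() - 1);
        for (std::size_t i = 0; i < pts.size(); ++i)
            if (i != index) others.push_back(pts[i]);

        k = std::min(k, others.size());
        std::nth_element(others.begin(), others.begin() + k, others.end(),
            [&](const P2& a, const P2& b) {
                return std::hypot(a.x - p.x, a.y - p.y) <
                       std::hypot(b.x - p.x, b.y - p.y);
            });

        double cx = 0.0, cy = 0.0;
        for (std::size_t i = 0; i < k; ++i) { cx += others[i].x; cy += others[i].y; }
        cx /= k; cy /= k;

        return std::hypot(cx - p.x, cy - p.y) > threshold;
    }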
A second approach might be recording the directions in which you have neighbors. I.e. keep something like a bit field for the discretized directions (e.g. angles in 10° steps, which will give you 36 entries). Then, for every neighbor, calculate its direction and mark that direction, including a few of the adjacent directions, as occupied. E.g. if your neighbor is in the direction of 27.4°, you could mark the direction bits 1, 2, and 3 as occupied. This additional surrounding space will influence how fine-grained the result will be. You might also want to make it depend on the distance of the neighbor (i.e. treat the neighbors as circles and find the angular range that is spanned by the circle). Finally, check if all directions are occupied. If not, you are on a boundary.
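And a corresponding sketch of the direction-occupancy idea, using 36 sectors of 10° and marking one adjacent sector on each side of every neighbour's direction (the point type is the same hypothetical 2D struct as in the previous sketch):

    #include <bitset>
    #include <cmath>
    #include <vector>

    struct P2 { double x, y; };

    // For every neighbour of p, mark its discretised direction (10° sectors)
    // plus one adjacent sector on each side. An empty sector means there is
    // no neighbour in that direction, so p is likely on a boundary.
    bool isBoundaryByDirections(const P2& p, const std::vector<P2>& neighbours)
    {
        const double twoPi = 2.0 * std::acos(-1.0);
        std::bitset<36> occupied;
        for (const P2& n : neighbours) {
            double angle = std::atan2(n.y - p.y, n.x - p.x);   // [-pi, pi]
            if (angle < 0.0) angle += twoPi;                   // [0, 2*pi)
            int sector = static_cast<int>(angle / twoPi * 36.0) % 36;
            occupied.set(sector);
            occupied.set((sector + 1) % 36);
            occupied.set((sector + 35) % 36);
        }
        return !occupied.all();
    }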
Alpha shapes can give you both the inner and outer boundaries.
1. convert to 2D by projecting the points onto your plane; see this related QA dealing with that: C++ plane interpolation from a set of points
2. find holes in the 2D point set; simply apply this related QA: Finding holes in 2d point sets?
3. project the found holes back to 3D; again see the link in #1
Sorry for the almost link-only answer, but both links are here on SO/SE and, combined, deal exactly with your issue. At first I was tempted to flag your question as a duplicate and leave this in a comment, but this is more readable.
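As a small aid to steps #1 and #3, here is a sketch of the projection into the plane and back, assuming the plane is given by a point on it and two orthonormal in-plane basis vectors (which you would obtain from the plane fit in the first linked QA); all types and names here are illustrative:

    #include <vector>

    struct V3 { double x, y, z; };
    struct V2 { double u, v; };

    static double dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Project 3D points into the plane's 2D coordinate system.
    // `origin` is any point on the plane; `e1` and `e2` are orthonormal
    // vectors spanning the plane (e.g. from the plane fit / PCA).
    std::vector<V2> toPlane(const std::vector<V3>& pts,
                            const V3& origin, const V3& e1, const V3& e2)
    {
        std::vector<V2> out;
        out.reserve(pts.size());
        for (const V3& p : pts) {
            V3 d{p.x - origin.x, p.y - origin.y, p.z - origin.z};
            out.push_back({dot(d, e1), dot(d, e2)});
        }
        return out;
    }

    // Map 2D plane coordinates (e.g. the found hole contour) back to 3D.
    V3 toWorld(const V2& q, const V3& origin, const V3& e1, const V3& e2)
    {
        return {origin.x + q.u * e1.x + q.v * e2.x,
                origin.y + q.u * e1.y + q.v * e2.y,
                origin.z + q.u * e1.z + q.v * e2.z};
    }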
I am trying to write a rigid body simulator, and during simulation I am interested not only in whether two objects collide, but also in the point and normal of collision. I have found lots of resources that tell whether two OBBs are colliding or not using the separating axis theorem. I am also interested in the 3D representation of OBBs. Now, if I know the axis with the minimum overlap region for two colliding OBBs, is there any way to find the point and normal of collision? Also, there are two major cases of collision: first, point-face, and second, edge-edge.
I tried to Google this problem, but almost every solution only detects collision as true or false.
Kindly somebody help!
Look at the scene in the direction of the motion (in other terms, apply a change of coordinates such that this direction becomes vertical, and drop the altitude). You get a 2D figure.
Considering the faces of the two boxes that face each other, you will see two hexagons, each split into three parallelograms.
Then
Detect the intersections between the edges in 2D. From the section ratios along the edges, you can determine the actual z distances.
For all vertices, determine the face they fall on in the other box; and from the 3D equations, the piercing point of the viewing line into the face plane, hence the distance. (Repeat this for the vertices of A and B.)
Comparing the distances will tell you which collision happens first and give you the coordinates of the first meeting point (in the transformed system, then back in absolute coordinates).
The point-in-face problem is easy to implement, as the faces are convex polygons.
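For instance, a minimal sketch of such a point-in-face test, assuming the face has already been reduced to a convex 2D polygon with counter-clockwise vertex order (names are illustrative):

    #include <cstddef>
    #include <vector>

    struct P2 { double x, y; };

    // A point lies inside a convex polygon (vertices in counter-clockwise
    // order) if it is on the left side of, or exactly on, every edge.
    bool pointInConvexPolygon(const P2& p, const std::vector<P2>& poly)
    {
        const std::size_t n = poly.size();
        for (std::size_t i = 0; i < n; ++i) {
            const P2& a = poly[i];
            const P2& b = poly[(i + 1) % n];
            double cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
            if (cross < 0.0)   // strictly to the right of edge a->b
                return false;
        }
        return true;
    }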
I've been working with Boost geometry, mostly for manipulating polygons. I was using the built-in centroid method (http://www.boost.org/doc/libs/1_55_0/libs/geometry/doc/html/geometry/reference/algorithms/centroid/centroid_2.html) for calculating the geometric (bary)center of my polygons, but recently, after outputting the coordinates of the points composing a specific polygon (and analyzing them on the side with some Python scripts), I realized that the centroid coordinates that method was giving me do not correspond to the geometric mean of the points of the polygon.
I'm in two dimensions and putting it into equations, I should have:
x_{\text{centroid}} = \frac{1}{N} \sum_{i=1}^{N} x_i, where N is the number of points composing the polygon,
and the same for the y coordinates. I'm now suspecting that this could have to do with the fact that the boost geometry library is not just looking at the points on the edge of the polygon (its outer ring) but treating it as a filled object.
Does any of you have some experience in manipulating these functions?
Btw, I am using:
point my_center(0,0);
bg::centroid(my_polygon,my_center);
to compute the centroid.
Thank you.
In Boost.Geometry the algorithm proposed by Bashein and Detmer [1] is used by default for the calculation of a centroid of Areal Geometries.
The reason is that the simple average method fails for a case where many closely spaced vertices are placed at one side of a Polygon.
[1] Gerard Bashein and Paul R. Detmer. “Centroid of a Polygon”. Graphics Gems IV, Academic Press, 1994, pp. 3–6
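To see the difference, here is a small sketch of both quantities for a ring of 2D vertices: the plain vertex average the question computes, and the area-weighted centroid of the filled polygon (the standard shoelace formulation; the algorithm of Bashein and Detmer [1] computes the same quantity with better numerical behaviour). Types and names are illustrative:

    #include <cstddef>
    #include <vector>

    struct P2 { double x, y; };

    // Plain average of the ring's vertices (what the question's check computes).
    P2 vertexAverage(const std::vector<P2>& ring)
    {
        P2 c{0.0, 0.0};
        for (const P2& p : ring) { c.x += p.x; c.y += p.y; }
        c.x /= ring.size();
        c.y /= ring.size();
        return c;
    }

    // Area-weighted centroid of the filled polygon (shoelace formula).
    // Assumes a simple, non-degenerate ring; it differs from the vertex
    // average whenever the vertices are unevenly spaced along the outline.
    P2 areaCentroid(const std::vector<P2>& ring)
    {
        double a = 0.0, cx = 0.0, cy = 0.0;
        const std::size_t n = ring.size();
        for (std::size_t i = 0; i < n; ++i) {
            const P2& p = ring[i];
            const P2& q = ring[(i + 1) % n];
            double cross = p.x * q.y - q.x * p.y;
            a  += cross;
            cx += (p.x + q.x) * cross;
            cy += (p.y + q.y) * cross;
        }
        a *= 0.5;
        return {cx / (6.0 * a), cy / (6.0 * a)};
    }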
That's what the centroid is -- the mean of the infinite number of points making up the filled polygon. It sounds like what you want is not the centroid, but just the average of the vertices.
Incidentally, "geometric mean" has a different definition than you think, and is not in any way applicable to this situation.
The centroid of a polygon is the mass center of the plane figure (for example, a paper sheet), not the center of its vertices only.