I need to create a triangle mesh from a set of points. The set has very few points, so it doesn't need to be fast or optimised (I will deal with 100 points maximum). The mesh needs to be a constrained Delaunay triangulation. In the image below I show (on the left) the set of points I start from (blue and red dots). I also know the connections between these points (the outline in black). The mesh needs to look like the example on the right (including the edges in grey that form the outer and inner triangles).
I can't use libraries.
I looked at many different algorithms. There are many and it's easy to get confused. I would like to know if there is a naive and thus hopefully simpler algorithm I can use in order to produce the mesh on the right? A brute-force approach is fine (PS: I can do a Delaunay triangulation).
Thanks for all the answers.
I went through the process of developing a solution to this problem, so I thought I would share my own experience, hoping that people facing this problem will find the insight useful.
So, from my own experience implementing an algorithm, I came to the following conclusions:
There is no really quick way to solve this problem. It is not reasonable to think it can be achieved in just 50 lines of code. In fact, the routine that I wrote (in C++) is about 400 to 500 lines (hard to tell with comments). So it is reasonably compact, yet challenging, and it took me 2 to 3 days to get the logic right (it can be tricky).
I found the algorithm proposed by Sloan in "A Fast Algorithm for Generating Constrained Delaunay Triangulations" to be perfectly well suited for the problem at hand. The reality when it comes to Delaunay triangulation, which was a new subject for me, is that there are a lot of different algorithmic approaches, and the research is pretty old. So for a newcomer it's really hard to know where to start.
2.1. It's hard to know which algorithms are recent, easy to understand, and simple and fast to implement.
2.2. Generally, once you have understood the principle, it's mostly a matter of coding the logic in the most efficient way (and it seems that this is what most of the algorithms/papers are competing over).
2.3. I found the paper by Sloan to be understandable and very well explained. If you follow the logic and the instructions, you can really implement a constrained Delaunay triangulation.
So in conclusion:
I recommend the Sloan paper because it includes an explanation of how to create a Delaunay triangulation, followed by a constrained triangulation if necessary.
To answer my own question: there is no real brute-force shortcut to this problem. Implementing this technique simply requires implementing the full logic, and most implementations must require more or less the same amount of work.
With the nuance that I wasn't looking for many optimisations, because my point sets are really small. So I am sure many algorithms are better than the one described by Sloan; they probably propose optimized data structures and algorithms designed to minimize steps such as point insertion into the triangulation, etc.
So anyway, Sloan worked. A small image to illustrate the answer and make it more attractive ;-)
EDIT
This is production code, so alas I can't share it... It could lead to me being fired. The process is very simple though. You look for the intersections between a segment (your constraint) and all edges in the model. Then, for each intersected edge, you swap the diagonal between the 2 triangles that this edge belongs to. If the new diagonal still intersects the segment, then add the new diagonal back onto the stack of intersected edges for this segment. If the new diagonal doesn't intersect the segment, then add it to the stack of newly created edges. Keep processing the stack of intersected edges until it's empty.
Then, once this is finished, you need to process the list of newly created edges. For each one of them, check that the Delaunay criterion is respected. If not, swap the diagonal of the pair of triangles this edge belongs to. Simple ...
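The workhorse predicate in that loop is deciding whether the constraint segment properly crosses a triangulation edge. A minimal sketch of that test using orientation signs (a sketch of mine, not code from the paper; the names are made up):

    struct Pt { double x, y; };

    // Twice the signed area of triangle (a, b, c); the sign gives the
    // orientation of c relative to the directed line a -> b.
    double orient(Pt a, Pt b, Pt c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // True if the constraint segment (p, q) properly crosses edge (a, b).
    // Touching at an endpoint does not count as a crossing here.
    bool segmentsCross(Pt p, Pt q, Pt a, Pt b) {
        double d1 = orient(p, q, a), d2 = orient(p, q, b);
        double d3 = orient(a, b, p), d4 = orient(a, b, q);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

Degenerate inputs (a vertex lying exactly on the constraint) need extra handling in a real implementation.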
This is just what the paper describes ...
Point Set:
26.9375 10.6875
32.75 9.96875
31.375 4.875
27.6562 2.0625
23.9375 -0.75
18.1562 -0.75
10.875 -0.75
6.60938 3.73438
2.34375 8.21875
2.34375 16.3125
2.34375 24.6875
6.65627 29.3125
10.9688 33.9375
17.8438 33.9375
24.5 33.9375
28.7188 29.4062
32.9375 24.875
32.9375 16.6562
32.9375 16.1562
32.9062 15.1562
8.15625 15.1562
8.46875 9.6875
11.25 6.78125
14.0312 3.875
18.1875 3.875
21.2812 3.875
23.4687 5.5
25.6562 7.125
8.46875 19.7812
27 19.7812
26.625 23.9688
24.875 26.0625
22.1875 29.3125
17.9062 29.3125
14.0312 29.3125
11.3906 26.7188
8.75 24.125
These are x/y coordinates (z = 0).
Segments:
0 1
1 3
3 5
5 7
7 9
9 11
11 13
13 15
15 17
17 19
19 20
20 22
22 24
24 26
26 0
28 29
29 31
31 33
33 35
35 28
Indices start at 0 (0 -> first vertex in vertex list)
I tried it with alpha shapes, with good results for a few shapes (https://concavehull.codeplex.com/), but it's nowhere near the original constrained Delaunay triangulation.
Here is my alpha-shape algorithm: https://alphashape.codeplex.com.
A simple approach seems to be to implement an ear clipping algorithm, without optimisations such as hash grids or quad trees. For ear clipping you just check every three consecutive vertices a, b, and c. If b is convex and no other vertex of the polygon lies inside the triangle abc, then you can clip this triangle, reducing the boundary of the polygon by one vertex, b.
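A minimal sketch of that ear test, assuming a counter-clockwise polygon (the helper names are mine):

    #include <vector>

    struct Pt { double x, y; };

    // 2D cross product of (a - o) and (b - o); > 0 means a left turn.
    double cross(Pt o, Pt a, Pt b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // b is an ear tip if it is convex and no other polygon vertex lies
    // strictly inside triangle (a, b, c).
    bool isEar(Pt a, Pt b, Pt c, const std::vector<Pt>& otherVertices) {
        if (cross(a, b, c) <= 0) return false;   // b is reflex or collinear
        for (const Pt& p : otherVertices)
            if (cross(a, b, p) > 0 && cross(b, c, p) > 0 && cross(c, a, p) > 0)
                return false;                    // p blocks this ear
        return true;
    }

In practice only the reflex vertices of the remaining polygon need to be tested against the candidate triangle.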
Additionally, you have to store the neighbourhood relations: each triangle references its (at most three) neighbours.
When the triangulation is finished, you convert it to the constrained Delaunay triangulation (CDT). This can be done by edge flipping: check the circumcircle of every triangle. If no vertex of a neighbouring triangle lies inside the circumcircle, the triangle conforms to the CDT; otherwise, flip the edge of the triangle where the violation occurs.
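The circumcircle check is the standard in-circle determinant; a sketch, assuming the triangle (a, b, c) is counter-clockwise:

    struct Pt { double x, y; };

    // > 0 exactly when d lies inside the circumcircle of the
    // counter-clockwise triangle (a, b, c); the edge shared with the
    // neighbour that contributes d should then be flipped.
    double inCircle(Pt a, Pt b, Pt c, Pt d) {
        double ax = a.x - d.x, ay = a.y - d.y;
        double bx = b.x - d.x, by = b.y - d.y;
        double cx = c.x - d.x, cy = c.y - d.y;
        return (ax * ax + ay * ay) * (bx * cy - cx * by)
             - (bx * bx + by * by) * (ax * cy - cx * ay)
             + (cx * cx + cy * cy) * (ax * by - bx * ay);
    }

One caveat for the constrained variant: an edge that is part of the input outline must never be flipped, even when it fails this test.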
Edit, due to @Betterdev in the comments below: possible holes in the input polygon can be added to the initial boundary by adding a bridge. As a preprocessing step, one can connect a vertex of a hole to a vertex of the boundary by a "double" edge. This is always possible and makes each hole part of the main polygon boundary, and it works well with ear clipping. Storing the neighbour relations across these bridges is vital to the flipping, however.
I previously worked on a vector graphics package, so I can't tell you how many hours I've stared at that exact "e" graphic. I eventually settled on the earcut library for triangulation of point data. It's extremely fast and much simpler compared to libraries such as libtess-2.
Related
I can't find a universal solution to "The Spider and the Fly Problem" (finding the shortest path between two points on the surface of a cuboid). Everybody solves one specific case, but what about when the two points can be anywhere?
My idea was to create an algorithm that considers various nets of a cuboid, calculates the shortest paths in 2D, and then returns the minimum, but I have no idea how to generate these nets (I guess hardcoding all combinations is not the best way).
Simplistic approach (only works where the points are on the same or adjacent faces)
Flatten the cube structure to 2D as follows...
Start with a face containing one of the two points. If this also contains the other point, you can stop there and the solution is trivial.
There are only 4 neighbouring faces. If any of them contain the other point, you can place that face adjoining the first, and plot the straight line.
Otherwise, the points are on opposing faces. You need to try placing the final face adjoining each of the 4 neighbouring faces, and choose the shortest of the 4 alternatives. This will not always give the best solution, but it's not far off, and it's cheap.
Generic approach
Jim Propp's surface distance conjecture is that for a centrally symmetric convex compact body, the greatest surface distance between two points is achieved only for pairs which are opposites through the centre. My conjecture, based on that, would be that the shortest distance is approximately where the plane made by the two points and the centre of the body meets the surface. So you simply need to find where that plane intersects the faces using 3D geometry, and use the faces that are crossed by the shorter of the two alternatives when looking at possible routes. If the plane runs along an edge of the cube (e.g. if the points are on opposite faces and are both between the centre of the face and the corner of the face, and those corners are linked by an edge), then routes through both faces should be considered, although I speculate they will be of equivalent length.
This solution is more generic, and it also handles scenarios where the points are on the same face, connected faces, or opposite faces.
The only problem with this approach arises when the line between the two points passes through the centre of the body, which by definition means that the two points are exactly opposite each other: the 3 points are then collinear, so there isn't a plane...
I think this is a good question, for which the answer is not at all obvious. In the smooth realm, it is an extraordinarily difficult problem. Geodesics (shortest paths) on a sphere (which is a smooth analog of a cube) are easy to find. Geodesics on a biaxial ellipsoid (an ellipsoid of revolution; one cross section is a circle) are much harder to find. Finding geodesics on a triaxial ellipsoid (a smooth analog of a general cuboid) was a challenging unsolved problem in the first half of the 19th century. See the Wikipedia page.
On the other hand, geodesics on a cuboid are made from straight line segments, so they are much simpler. But some of the difficulty of the problem remains.
You may be able to find some literature on the subject if you search for the term "net". A polyhedron cut along some edges so that it can be flattened is often called a "net". I was able to quickly find a site that claims (without proof) that there are just 11 different nets for a cub(oid). But I agree with you that hard coding all the variations is not the best way.
It's not even obvious to me that the approach using nets will work for all polyhedra. I think I see an argument that will work for cuboids, but for general polyhedra, even convex polyhedra, it is not known whether they must have even one net. See the Wikipedia page. I think a satisfying solution to the problem on cuboids should work more generally on polyhedra, and the net idea seems to be insufficiently general, in my view.
What I'm thinking might work is a dynamic programming solution, where you look at the different edges your path can pass through between the initial and final points. There is a hierarchy of edges (those on the starting face; those containing a vertex on the starting face; those on the faces adjacent to the starting face; etc.). For each point on each of those edges you can find the minimum distance to the start point, culminating in the minimum distance from the end point to the start point.
Another way to think about this is to use something akin to the reflection principle, except instead of reflections, we use rotations in space which rotate the polyhedron about one of its edges so that the other face adjacent to the edge becomes coplanar with the starting face. Then we don't have to worry about whether we have a good net or not. You just pick a sequence of edges so that the final point is eventually rotated onto the plane of the initial face. The sequence of edges is finite because any loop is not part of a minimal path. I'll think about how I might be able to communicate this idea better.
I solved the problem for cubes and cuboids by discretizing the cube edges, generating a big graph, and solving the graph shortest-path problem. You can specify a start point (sx, sy, 0), and the algorithm will determine all shortest paths to target points on the top face (z = 1), here for 19 * 19 target points. Cube edges are divided into 100 parts. The graph with these settings has n = 1558 vertices and m = 464000(!) edges; the inner loop of floyd_warshall_path(), which updates the shortest-path distances, is executed n³ = 3,781,833,112 times (this takes less than 1 minute on a Raspberry Pi 400). Orange shortest paths flow through 3 cuboid faces, blue ones through 4. The algorithm generates an OpenSCAD file as output. Details are in this posting; all code is in the GitHub repo.
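For reference, the distance-update core of Floyd-Warshall is tiny; a generic sketch over a dense distance matrix (the actual floyd_warshall_path() in the repo additionally records the paths themselves):

    #include <cstddef>
    #include <vector>

    // d[i][j] holds the edge length between vertices i and j, a large
    // sentinel value where there is no edge, and 0 on the diagonal.
    // Afterwards d[i][j] is the shortest-path distance; O(n^3) updates.
    void floydWarshall(std::vector<std::vector<double>>& d) {
        const std::size_t n = d.size();
        for (std::size_t k = 0; k < n; ++k)
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t j = 0; j < n; ++j)
                    if (d[i][k] + d[k][j] < d[i][j])
                        d[i][j] = d[i][k] + d[k][j];
    }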
P.S.:
I experimented with a 1 x 1 x 3 cuboid and was able to find examples where the shortest path between two points needs to pass through 5 faces. The code is submitted to the GitHub repo, and details are in this forum posting.
Orange shortest paths pass through 3 faces, blue through 4, and the new yellow shortest paths pass through 5 faces! A "mirror" at the bottom allows the bottom face, with the start point, to be seen as well. This time the cuboid edges are divided into 150 parts (149 inner vertices), and there are 49 * 49 top-face target points for a single start point on the bottom face:
This time I implemented cuboid shortest paths completely differently: no graph, just geometric distance calculations in the 28 possible foldings of the cuboid into the plane. Details are in this new forum thread:
Efficient cuboid surface shortest path problem application
The much increased efficiency, together with the sliders, allows changing the x/y coordinates of the bottom-face point, the number of divisions in the X/Y direction, and which folding to display, with instantaneous display after any change. This makes it possible to play with a cuboid and "see" how the shortest paths change (on the top face as well as on the side and bottom faces).
Scaled to 50% size, the 0.5 fps animation shows the 6 foldings containing any shortest path. The animation corresponds to a clockwise traversal of the OpenSCAD 3D top face shown on the right.
With added top and bottom face view.
For anyone still interested, this question was solved on Stack Exchange by "Intelligenti pauca" (with some nice diagrams); here is the original link:
https://math.stackexchange.com/questions/3023721/finding-the-shortest-path-between-two-points-on-the-surface-of-a-cube
The "Simplistic approach" from "Richardissimo" was on the right track, you just need to check a few more cases.
Intelligenti pauca:
If the two points belong to adjacent faces, you have to check three different possible unfoldings to find the shortest path. In the diagram below I represented the first point (red) and the second point (black) in three possible relative positions: the middle position occurs when the path goes through the common edge; in the other cases the path traverses one of the faces adjacent to both faces. The other possible positions are clearly longer than these.
image1
If the two points belong to opposite faces, then 12 different possible positions have to be checked: see the diagram below.
image2
After mapping the points like this, you can calculate the distances as normal on a plane and take min(possible distances) as your shortest path length.
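To make the final step concrete, here is a sketch of the adjacent-face case with an assumed setup of my own (not from the answer): a W x D x H axis-aligned box, P = (px, pz) on the front face (y = 0) and Q = (qx, qy) on the top face (z = H). The three candidate unfoldings map Q into the plane of the front face, matching the three positions in the first diagram:

    #include <algorithm>
    #include <cmath>

    // Shortest surface distance for points on two adjacent faces of a
    // W x D x H box, taken as the minimum over the three unfoldings.
    double adjacentFaceDistance(double W, double D, double H,
                                double px, double pz, double qx, double qy) {
        (void)D; // depth affects which detours are valid, not their lengths
        // 1. Straight over the common edge (y = 0, z = H).
        double d1 = std::hypot(px - qx, pz - (H + qy));
        // 2. Detour over the right face (x = W): unfold right, then top.
        double d2 = std::hypot(px - (W + qy), pz - (H + (W - qx)));
        // 3. Detour over the left face (x = 0): unfold left, then top.
        double d3 = std::hypot(px + qy, pz - (H + qx));
        return std::min({d1, d2, d3});
    }

The opposite-face case works the same way, just with the 12 unfoldings from the second diagram instead of 3.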
I'm currently trying to find a way to take irregularly shaped polygons and divide them into as few quadrilaterals as possible.
I can't find an obvious out-of-the-box algorithm anywhere that does this, so I'm thinking of going down one of two possible routes:
1. Getting the optimal triangulation first, and then converting these to quadrilaterals
2. Trying to alter the CGAL optimal_convex_partitions function from their 2D polygon partitioning package to create quadrilateral partitions: https://doc.cgal.org/latest/Partition_2/group__PkgPolygonPartitioning2.html#ga3ca9fb1f363f9f792bfbbeca65ad5cc5
I'm a total beginner at computational geometry, so I'd just like to know if either of these approaches seems like a fool's errand before I try to learn C++. If anyone knows anything about the best possible approach, that'd be even better. Thanks!
(Edit) Including a sample polygon. None of them should have holes, though they may have complex exteriors and concavity.
I assume that if you start with triangles and then try to merge two adjacent triangles into one quadrilateral in a greedy fashion, you may end up with many isolated triangles.
I'm not sure how the convex partition may come in handy.
You may find useful information in the articles below. As far as I know, finite element analysis requires that the input object be composed of triangles or quads, so there has been some research done in this direction. Here are two papers that might be relevant:
Ted D. Blacker and Michael B. Stephenson, "Paving: a new approach to automated quadrilateral mesh generation", Int. J. Num. Meth. Engg., Vol. 32, 811-847 (1991)
Jinwoo Choi and Yohngjo Kim, "Development of a New Algorithm for Automatic Generation of a Quadrilateral Mesh", International Journal of CAD/CAM, Vol. 10, No. 2 (2011)
I'm far from being an expert on this subject, but I'm certain that these algorithms can be implemented using CGAL...
I have successfully implemented the marching cubes algorithm. I used the standard materials as a reference, but I rewrote it entirely from scratch. It works, but I am observing the ambiguities that lead to holes in the mesh.
I was considering the marching tetrahedrons algorithm, which supposedly does not suffer from ambiguities. I fail to see how this is possible.
The marching tetrahedrons algorithm uses six tetrahedrons in place of a cube, with triangulations for each tetrahedron. But suppose I were to implement the marching cubes algorithm, and for each of the 256 cases simply choose the triangulation that is the "sum" (union) of the cube's tetrahedrons' triangulations? As far as I know, this is what marching tetrahedrons does - so why does that magically fix the ambiguities?
There are 16 unique cases, I think, and the 240 others are just reflections/rotations of those 16. I remember reading in a paper somewhere that to resolve ambiguities you need 33 cases. Could this be related to why marching tetrahedrons somehow doesn't suffer from these problems?
So, questions:
Why does marching tetrahedrons not suffer from ambiguities?
If it doesn't, why don't people just use the marching cubes algorithm, but with the tetrahedrons' triangulations instead?
I feel like I'm missing something here. Thanks.
Okay, well, I've just finished implementing my version of marching tetrahedrons, and while I easily saw ambiguities lead to problems in the marching cubes mesh, the marching tetrahedrons mesh seems to be consistently topologically correct. There are some annoying features along very thin regions, where some vertices can't quite decide which side of the divide they want to be on, but the mesh is always watertight.
In answer to my questions:
To resolve ambiguities in the marching cubes algorithm, as far as I can tell, one evaluates the function more carefully in the cell. In the tetrahedrons algorithm, one explicitly samples the center of the cell and polygonizes to that. I suspect that because the tetrahedral mesh includes this vertex in particular, ambiguities are implicitly handled. The other extra vertices on the side probably also have something to do with it. As a key point, the function is actually being sampled in more places when you go to refine it.
I'm pretty sure they do. My marching tetrahedrons algorithm does just that, and I think that, internally, it's doing the same thing as the classic marching tetrahedrons algorithm. In my implementation, the tetrahedrons' triangles are all listed for each possible cube, which I suspect makes it faster than figuring out the one or two triangles for each individual tetrahedron.
If I had the time and attention span (neither of which I do), it might be beneficial to remesh the inside of each cube to use fewer triangles, which I think wouldn't hurt.
To answer the question "Why does the marching tetrahedrons algorithm not have ambiguities?", it is first necessary to understand why ambiguities arise in marching cubes in the first place.
Ambiguities may occur when there are two diagonally opposite "positive" vertices and two diagonally opposite "negative" vertices in a cube. It took me some time to wrap my mind around it, but the problem with ambiguities is that they theoretically allow isosurface patches to be created for adjacent cubes that are incompatible with one another. That's the obvious part. The interesting part is that two adjacent isosurface patches from two ambiguous configurations are incompatible if (and only if) one of them separates the "negative" vertices and the other one separates the "positive" vertices.
Here is a relevant quote from Rephael Wenger's great book "Isosurfaces: Geometry, Topology, and Algorithms" (I can't post more than 2 links, so I've merged all relevant images from the book into a single one):
The border of a cube's three-dimensional isosurface patch defines an isocontour on each of the cube's square facets. If some configuration's isosurface patch separates the negative vertices on the facet while an adjacent configuration's isosurface patch separates the positive ones, then the isosurface edges on the common facet will not align. The isosurface patches in Figure 2.16 do not separate the positive vertices on any facet. Moreover, the derived isosurface patches in any rotation or reflection of the configurations also do not separate positive vertices on any facet. Thus the isosurface patches in any two adjacent cubes are properly aligned on their boundaries. An equally valid, but combinatorially distinct, isosurface table could be generated by using isosurface patches that do not separate the negative vertices on any square facet.
What this means is that if all used ambiguous configurations follow the same pattern (i.e. always separate "negative" vertices), then it is impossible to produce a topologically incorrect surface. And problems will arise if you use configurations "from both worlds" for a single isosurface.
A surface constructed using the same ambiguity-resolution pattern may still contain unwanted errors like this (taken from the article "Efficient implementation of Marching Cubes' cases with topological guarantees" by Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares), but it will be, as you said, watertight.
To achieve this, you will need to use the look-up table based on the 22 unique configurations (not the standard 14 or 15) shown in Figure 2.16.
Now, back to the original question at last: why does marching tetrahedrons not suffer from ambiguities? For the same reason there would be no ambiguities in marching cubes, if done as described above: because you arbitrarily choose to use one of the two available variants of ambiguous-configuration resolution. In marching cubes it is not obvious at all (at least for me; I had to do a lot of digging) that this is even an option, but in marching tetrahedrons it is done for you by the algorithm itself. Here is another quote from Rephael Wenger's book:
The regular grid cubes have ambiguous configurations while the tetrahedral decomposition does not. What happened to the ambiguous configurations? These configurations are resolved by the choice of triangulation. For instance, in Figure 2.31, the first triangulation gives an isosurface patch with two components corresponding to 2B-II in Figure 2.22, while the second gives an isosurface patch with one component corresponding to 2B-I.
Note how cubes are sliced into tetrahedrons in two different ways in Figure 2.31. The choice of this slicing or the other is the secret sauce that resolves the ambiguities.
One may ask oneself: if the ambiguity problem can be resolved just by using the same pattern for all cubes, then why are there so many books and papers about more complicated solutions? Why do I need the Asymptotic Decider and all that stuff? As far as I can tell, it all comes down to what you need to achieve. If topological correctness (as in, no holes) is enough for you, then you do not need all the advanced stuff. If you want to resolve problems like those shown in the "Efficient implementation of Marching Cubes" article mentioned above, then you need to dive deeper.
I highly recommend reading the relevant chapters of Rephael Wenger's book "Isosurfaces: Geometry, Topology, and Algorithms" to better understand the nature of these algorithms, what the problems are, where they come from, and how they can be solved.
As was pointed out by Li Xiaosheng, the basics can be better understood by carefully examining the marching squares algorithm first. Actually, the whole answer was laid down by Li Xiaosheng; I've just expanded the explanations a bit.
Take the following 2D example (which introduces ambiguities):
01
10
If we divide this square into two triangles, we will get different results depending on which diagonal we choose to divide it along.
Along the 0-0 diagonal we get the triangles (010, 010), while for the 1-1 diagonal we get the triangles (101, 101). Obviously, different decompositions of the square lead to different results. Either is correct, and the same holds for 3D cubes.
MT does not actually resolve the ambiguities, but it can produce topologically consistent results by choosing the same decomposition strategy for all cubes. That is how it avoids suffering from ambiguities.
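To make that concrete, here is a tiny marching-triangles sketch run on exactly this cell (the corner values are from the example above; the rest is my own scaffolding). A triangle edge either crosses the iso-level or it doesn't, so every triangle has 0 or exactly 2 crossing edges, and there is nothing left to disambiguate once the diagonal is fixed:

    #include <cstdio>

    struct P { double x, y; };

    // Linearly interpolate the iso-crossing point on edge (a, b).
    P isoCrossing(P a, double va, P b, double vb, double iso) {
        double t = (iso - va) / (vb - va);
        return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
    }

    // Emit the contour segment (if any) inside one triangle.
    void marchTriangle(const P p[3], const double v[3], double iso) {
        P hit[2]; int n = 0;
        for (int i = 0; i < 3; ++i) {
            int j = (i + 1) % 3;
            if ((v[i] < iso) != (v[j] < iso))
                hit[n++] = isoCrossing(p[i], v[i], p[j], v[j], iso);
        }
        if (n == 2)
            std::printf("segment (%.2f,%.2f)-(%.2f,%.2f)\n",
                        hit[0].x, hit[0].y, hit[1].x, hit[1].y);
    }

    int main() {
        // The ambiguous checkerboard cell: 1s on one diagonal, 0s on the
        // other, iso-level 0.5.
        P c[4] = {{0,0},{1,0},{1,1},{0,1}};
        double v[4] = {1, 0, 1, 0};
        // Fixed split along the (0,0)-(1,1) diagonal; mixing this choice
        // with the other diagonal across cells is what breaks contours.
        P t1[3] = {c[0], c[1], c[2]}; double v1[3] = {v[0], v[1], v[2]};
        P t2[3] = {c[0], c[2], c[3]}; double v2[3] = {v[0], v[2], v[3]};
        marchTriangle(t1, v1, 0.5);
        marchTriangle(t2, v2, 0.5);
    }

Both emitted segments cut off a 0-corner, i.e. this split separates the 0s and keeps the 1s connected; the other diagonal would do the opposite. Either is fine, as long as every cell uses the same strategy.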
I have a detailed 2D polygon (representing a geographic area) that is defined by a very large set of vertices. I'm looking for an algorithm that will simplify and smooth the polygon, (reducing the number of vertices) with the constraint that the area of the resulting polygon must contain all the vertices of the detailed polygon.
For context, here's an example of the edge of one complex polygon:
My research:
I found the Ramer–Douglas–Peucker algorithm, which will reduce the number of vertices - but the resulting polygon will not contain all of the original polygon's vertices. See this article: Ramer-Douglas-Peucker on Wikipedia.
I considered expanding the polygon (I believe this is also known as outward polygon offsetting). I found these questions: Expanding a polygon (convex only) and Inflating a polygon. But I don't think this will substantially reduce the detail of my polygon.
Thanks for any advice you can give me!
Edit
As of 2013, most links below are not functional anymore. However, I've found the cited paper, algorithm included, still available at this (very slow) server.
Here you can find a project dealing with exactly your issue. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Samples:
Here you can request a copy of the paper.
Seemingly they planned to offer an online service for requesting calculations, but I didn't test it, and probably it isn't running.
HTH!
I think Visvalingam’s algorithm can be adapted for this purpose - by skipping removal of triangles that would reduce the area.
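A minimal sketch of that adaptation as I read it (my own interpretation, assuming a counter-clockwise simple polygon): removing a reflex vertex replaces a dent with a straight edge, so the polygon only ever grows, and every original vertex stays inside the result:

    #include <cstddef>
    #include <vector>

    struct Pt { double x, y; };

    // Twice the signed area of triangle (a, b, c); < 0 = clockwise turn,
    // i.e. b is a reflex vertex of a counter-clockwise polygon.
    double cross(Pt a, Pt b, Pt c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // Repeatedly remove the reflex vertex with the smallest effective
    // (triangle) area, Visvalingam-style, until the target size is
    // reached or no reflex vertex remains.
    std::vector<Pt> simplifyOutward(std::vector<Pt> poly, std::size_t target) {
        while (poly.size() > target && poly.size() > 3) {
            int best = -1;
            double bestArea = 1e300;
            for (std::size_t i = 0; i < poly.size(); ++i) {
                Pt a = poly[(i + poly.size() - 1) % poly.size()];
                Pt c = poly[(i + 1) % poly.size()];
                double cr = cross(a, poly[i], c);
                if (cr < 0 && -cr / 2 < bestArea) {  // reflex vertices only
                    bestArea = -cr / 2;
                    best = static_cast<int>(i);
                }
            }
            if (best < 0) break;  // polygon is convex: nothing safe to remove
            poly.erase(poly.begin() + best);
        }
        return poly;
    }

As the next answer notes, a removal can still introduce a self-intersection with a distant part of the outline, so a robust version needs an extra check before each erase.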
I had a very similar problem: I needed an inflating simplification of polygons.
I wrote a simple algorithm that removes a concave point (this will increase the polygon's size) or removes a convex edge (between 2 convex points) and extends the adjacent edges. Either of these 2 operations removes one point from the polygon.
I chose to remove the point or edge that leads to the smallest area variation. You can repeat this process until the simplification is acceptable to you (for example, no more than 200 points).
The 2 main difficulties were making the algorithm fast (by avoiding computing the vertex/edge removal variation twice and keeping the candidates sorted) and avoiding the introduction of self-intersections in the process (not very easy to do or to explain, but possible with limited computational complexity).
In fact, after looking more closely, it is a similar idea to Visvalingam's, with an adaptation for edge removal.
That's an interesting problem! I never tried anything like this, but here's an idea off the top of my head... apologies if it makes no sense or wouldn't work :)
Calculate a convex hull, that might be way too big / imprecise
Divide the hull into N slices, for example joining each one of the hull's vertices to the center
Calculate the intersection of your object with each slice
Repeat recursively for each intersection (calculating the intersection's hull, etc)
Each level of recursion should give a better approximation... when you've reached a satisfying level, merge all the hulls from that level to get the final polygon.
Does that sound like it could do the job?
To some degree I'm not sure what you are trying to do, but it seems you have two very good answers. One is Ramer–Douglas–Peucker (DP) and the other is computing the alpha shape (also called a concave hull, non-convex hull, etc.). I found a more recent paper describing alpha shapes and linked it below.
I personally think DP with polygon expansion is the way to go. I'm not sure why you think it won't substantially reduce the number of vertices. With DP you supply a factor and you can make it anything you want to the point where you end up with a triangle no matter what your input. Picking this factor can be hard but in your case I think it's the best method. You should be able to determine the factor based on the size of the largest bit of detail you want to go away. You can do this with direct testing or by calculating it from your source data.
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf
I've written a simple modification of Douglas-Peucker that might be helpful to anyone having this problem in the future: https://github.com/prakol16/rdp-expansion-only
It's identical to DP except that it pushes a line segment outwards a bit if the points that it would remove are outside the polygon. This guarantees that the resulting simplified polygon contains all the original polygon, but it has almost the same number of line segments as the original DP algorithm and is usually reasonably good at approximating the original shape.
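For comparison, the unmodified baseline that repo builds on, as a minimal recursive sketch (function and variable names are mine):

    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };

    // Perpendicular distance from p to the line through a and b.
    double distToLine(Pt p, Pt a, Pt b) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len = std::hypot(dx, dy);
        if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
        return std::fabs((p.x - a.x) * dy - (p.y - a.y) * dx) / len;
    }

    // Collect indices of surviving vertices strictly between first and last.
    void rdp(const std::vector<Pt>& pts, int first, int last,
             double eps, std::vector<int>& keep) {
        double dmax = 0.0;
        int idx = first;
        for (int i = first + 1; i < last; ++i) {
            double d = distToLine(pts[i], pts[first], pts[last]);
            if (d > dmax) { dmax = d; idx = i; }
        }
        if (dmax > eps) {            // the farthest point survives; recurse
            rdp(pts, first, idx, eps, keep);
            keep.push_back(idx);
            rdp(pts, idx, last, eps, keep);
        }                            // else: all points in between are dropped
    }

Seed keep with the first index, call rdp over the whole range, then append the last index. The linked modification kicks in on the "dropped" branch: when the dropped points fall outside the simplified polygon, the segment is pushed outwards instead.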
Here is a problem I am trying to solve:
I have an irregular shape. How would I go about evenly distributing 5 points on this shape so that the distance between each pair of points is equal?
David says this is impossible, but in fact there is an answer out of left field: just put all your points on top of each other! They'll all have the same distance to all the other points: zero.
In fact, that's the only algorithm that has a solution (i.e. all pairwise distances are the same) regardless of the input shape.
I know the question asks to put the points "evenly", but since that's not formally defined, I expect that was just an attempt to explain "all pairwise distances are the same", in which case my answer is "even".
This is mathematically impossible. It will only work for a small subset of base shapes.
There are however some solutions you might try:
Analytic approach. Start with a point P0, create a sphere around P0, and intersect it with the base shape, giving you a set of curves C0. Then create another point P1 somewhere on C0. Again, create a sphere around P1 and intersect it with C0, giving you a set of points C1; your third point P2 will be one of the points in C1. And so on and so forth. This approach guarantees the distance constraints, but it also depends heavily on the initial conditions.
Iterative approach. Essentially form-finding: you create some points on the object and create springs between the ones that share a distance constraint. Then you solve the spring forces and move your points accordingly. This will most likely push them away from the base shape, so you need to pull them back onto it. Repeat until your points are no longer moving or until the distance constraint has been satisfied within tolerance (see the sketch after this answer).
Sampling approach. Convert your base geometry into a voxel space, and start scooping out all the voxels that are too close to a newly inserted point. This makes sure you never get two points too close together, but it also suffers from tolerance (and probably performance) issues.
If you can supply more information regarding the nature of your geometry and your constraints, a more specific answer becomes possible.
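A minimal sketch of the iterative (spring) approach, using a unit sphere as a stand-in base shape so that pulling the points back onto the shape is just normalisation (all constants are arbitrary):

    #include <cmath>
    #include <cstdio>

    struct V3 { double x, y, z; };

    int main() {
        V3 p[5] = {{1,0,0},{0,1,0},{0,0,1},{-1,0.2,0},{0,-1,0.3}};
        const double rest = 1.6;   // target distance for every pair
        const double step = 0.05;  // relaxation rate
        for (int it = 0; it < 2000; ++it) {
            for (int i = 0; i < 5; ++i)
                for (int j = i + 1; j < 5; ++j) {
                    double dx = p[j].x - p[i].x, dy = p[j].y - p[i].y,
                           dz = p[j].z - p[i].z;
                    double d = std::sqrt(dx*dx + dy*dy + dz*dz);
                    double f = step * (d - rest) / d;  // spring along the pair
                    p[i].x += f*dx; p[i].y += f*dy; p[i].z += f*dz;
                    p[j].x -= f*dx; p[j].y -= f*dy; p[j].z -= f*dz;
                }
            for (int i = 0; i < 5; ++i) {  // pull back onto the base shape
                double n = std::sqrt(p[i].x*p[i].x + p[i].y*p[i].y
                                   + p[i].z*p[i].z);
                p[i].x /= n; p[i].y /= n; p[i].z /= n;
            }
        }
        for (int i = 0; i < 5; ++i)        // report the pairwise distances
            for (int j = i + 1; j < 5; ++j) {
                double dx = p[j].x - p[i].x, dy = p[j].y - p[i].y,
                       dz = p[j].z - p[i].z;
                std::printf("d(%d,%d) = %.3f\n", i, j,
                            std::sqrt(dx*dx + dy*dy + dz*dz));
            }
    }

Since 5 exactly equidistant points don't exist on a sphere, the printed distances settle near, but not at, a common value, which illustrates the "within tolerance" caveat above.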
For folks stumbling across here in the future, check out Lloyd's algorithm.
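A discrete sketch of Lloyd's algorithm, approximating the Voronoi regions with a dense point sample of the shape (the unit square stands in for the irregular shape here). It spreads the points evenly rather than making them exactly equidistant:

    #include <cstdio>
    #include <vector>

    struct P2 { double x, y; };

    int main() {
        // Dense sample of the shape (here: a 100 x 100 grid on the unit square).
        std::vector<P2> samples;
        for (int i = 0; i < 100; ++i)
            for (int j = 0; j < 100; ++j)
                samples.push_back({(i + 0.5) / 100, (j + 0.5) / 100});
        P2 sites[5] = {{0.1,0.1},{0.2,0.8},{0.5,0.5},{0.9,0.3},{0.7,0.9}};
        for (int it = 0; it < 50; ++it) {
            double sx[5] = {}, sy[5] = {};
            int cnt[5] = {};
            for (const P2& s : samples) {      // assign sample to nearest site
                int best = 0; double bd = 1e300;
                for (int k = 0; k < 5; ++k) {
                    double d = (s.x - sites[k].x) * (s.x - sites[k].x)
                             + (s.y - sites[k].y) * (s.y - sites[k].y);
                    if (d < bd) { bd = d; best = k; }
                }
                sx[best] += s.x; sy[best] += s.y; ++cnt[best];
            }
            for (int k = 0; k < 5; ++k)        // move site to region centroid
                if (cnt[k]) { sites[k].x = sx[k] / cnt[k];
                              sites[k].y = sy[k] / cnt[k]; }
        }
        for (int k = 0; k < 5; ++k)
            std::printf("site %d: (%.3f, %.3f)\n", k, sites[k].x, sites[k].y);
    }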
The only way to position 5 points equally distant from one another (other than the trivial solution of putting them all in the same place) is in 4+ dimensional space. It is mathematically impossible to have 5 equally distanced objects in 3D.
Four is the most you can have in 3D and that shape is a tetrahedron.