Generate many normally distributed 3D points with a minimum distance - algorithm

I want to generate a couple of million 3D points. Every point should keep a minimum distance to its nearest neighbor, and in the center there should be a ball in which no points are generated.
I prototyped this in MATLAB in 2D and got promising results (picture below) with DistMesh (http://persson.berkeley.edu/distmesh/).
fd = @(p) max( sqrt(sum(p.^2,2)) - 100, -(sqrt(sum(p.^2,2)) - 10) );
fh = @(p) (normcdf(sqrt(sum(p.^2,2)), 0, sqrt(20^2 + 20^2)) - 0.5)*2;
[p,t] = distmesh( fd, fh, 0.1, [-100,-100;100,100], [],[], 25);
I got 366,035 points in 40 seconds.
Then I tried the same in 3D, but after 8 minutes and about 40,000 points I ran out of RAM (16 GB).
I have something in mind, but I don't know what to google for. I once read an article where every point had weights acting like springs connecting it to the other points; these springs are then simulated until the system reaches equilibrium. In my case, every point would have a "spring" to the other points within the minimum distance and one "spring" to the center following the normal distribution.
Other implementations I tried used octrees, k-d trees and other structures that limit the set of possible neighbors.
Are there any better implementations or algorithms? Or did I use MATLAB wrong? This was the first time I ever used MATLAB for prototyping.
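Not part of the original thread, but here is a minimal sketch of the neighbor-limiting idea mentioned above: simple dart throwing with a uniform hash grid whose cell size equals the minimum distance, so each candidate is only tested against the 27 surrounding cells. It assumes a fixed minimum distance (unlike the radius-dependent spacing of the DistMesh prototype); all function and parameter names (sample_min_distance_points, sigma, r_inner, r_outer) are illustrative.

import numpy as np

def sample_min_distance_points(n_target, r_min, r_inner=10.0, r_outer=100.0, sigma=30.0, seed=0):
    # Dart throwing with a uniform hash grid of cell size r_min, so each candidate
    # is only compared against points in the 27 neighbouring cells, not all points.
    rng = np.random.default_rng(seed)
    grid = {}                                   # integer cell index -> accepted points
    points = []
    while len(points) < n_target:
        p = rng.normal(0.0, sigma, size=3)      # radially concentrated candidate
        r = np.linalg.norm(p)
        if r < r_inner or r > r_outer:          # keep the empty inner ball / outer bound
            continue
        key = tuple((p // r_min).astype(int))
        neighbours = (grid.get((key[0] + i, key[1] + j, key[2] + k), [])
                      for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1))
        if all(np.dot(p - q, p - q) >= r_min * r_min for cell in neighbours for q in cell):
            grid.setdefault(key, []).append(p)
            points.append(p)
    return np.array(points)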

Related

Algorithm to find best fitting point on a plane

I am working on a pathfinding system for my game that uses A*, and I need to position the nodes so that they keep a minimum distance from the other points.
I wonder if there is an algorithm that would allow me to find the best-fitting point on a plane or on a line (between neighboring points), as close as possible to a specified position, while maintaining the minimum distance to the neighbors.
Basically I need an algorithm that, given the input (in pseudocode) min distance = 2, original position = (1, 1) and a set of existing points, would do this:
In the example the shape is a triangle and the point can be calculated using the Pythagorean theorem, but I need it to work for any shape.
Your problem is not an easy one. If you draw the "forbidden areas", they form a complex region made of the union of disks.
Then there are two cases:
if the new point belongs to the allowed area, you are done;
otherwise you need to find the nearest allowed point.
It is easy to see whether a point is allowed, by computing all distances. But finding the nearest allowed point is more challenging. (By the way, this point could be very far away.)
If the target point lies inside a circle, the nearest candidate location is either the orthogonal projection of the target onto one of the circles, or an intersection point of two circles. Compute all these candidate points and check whether each of them is allowed. Then keep the nearest allowed candidate.
In red, the allowed candidates. In black, the forbidden candidates.
For N points, this is an O(N³) process. It can probably be reduced by a factor of N by means of computational geometry techniques, but at the price of much higher implementation complexity.
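A minimal 2D sketch of this candidate enumeration, assuming all forbidden disks share the same radius r (the minimum distance) and that the nearest allowed point is realized by one of the candidates; points are NumPy arrays and all names are illustrative.

import numpy as np
from itertools import combinations

def nearest_allowed_point(target, centers, r):
    # target: desired 2D position; centers: existing points; r: minimum allowed distance.
    target = np.asarray(target, float)
    centers = np.asarray(centers, float)
    eps = 1e-9
    allowed = lambda q: np.all(np.linalg.norm(centers - q, axis=1) >= r - eps)
    if allowed(target):
        return target
    candidates = []
    for c in centers:                           # orthogonal projection onto each circle
        d = target - c
        n = np.linalg.norm(d)
        if n > 0:
            candidates.append(c + d * (r / n))
    for c1, c2 in combinations(centers, 2):     # intersections of pairs of circles
        d = np.linalg.norm(c2 - c1)
        if 0 < d < 2 * r:
            h = np.sqrt(r * r - (d / 2) ** 2)
            mid = (c1 + c2) / 2
            perp = np.array([c1[1] - c2[1], c2[0] - c1[0]]) / d
            candidates.extend((mid + h * perp, mid - h * perp))
    candidates = [q for q in candidates if allowed(q)]
    return min(candidates, key=lambda q: np.linalg.norm(q - target))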

How to compute the shortest distance from a point to a triangle using tessellated data

I have to solve a distance problem, and I'm getting pretty frustrated because I don't know how to do it despite having tried nearly everything I've found on the web. Here's my problem:
I work in the automotive industry and we use tessellated data (like STL; in my case the JT format). I have a part that needs to be welded, and I have the coordinates of the weld point. To ensure that the weld point is located correctly, I want to check whether the weld point collides with the part. If it does, the part can be welded; otherwise the weld point would be in the air and it could not be welded. Therefore I want to calculate the distance between the part (which in this format is basically a set of triangles or polygons) and the point. If the distance to one of the triangles is less than the given radius of the weld point, there is a collision, the weld point is located correctly and the part can be welded.
A how-to, pseudocode or anything else useful would be very much appreciated. I'm coding in C++ using the JTOpen Toolkit. Please note that the point does not necessarily have to lie within the triangle. Maybe an example will help you and me understand the problem/answers (no collision in the following example):
Let v1, v2, v3 be the vertices of a triangle and (px, py, pz) the coordinates of the weld point (radius 1.8). I also get normals (n1, n2, n3) at every vertex, but I don't know what to do with them...
v1 = (-273.439, -787.775, 854.273)
v2 = (-274.247, -788.085, 855.244)
v3 = (-272.077, -787.864, 855.377)
p  = (140.99, -787.78, 458.93)
n1 = (-0.113447, 0.97007, 0.214693)
n2 = (-0.113423, 0.970069, 0.214712)
n3 = (-0.110158, 0.969844, 0.217413)
Thank you in advance!
The locus of the points at a given distance from a triangle is a complex surface made of:
two triangles parallel to the original one, at the given distance;
three half-cylinders corresponding to points at equal distance from the edges;
sphere portions corresponding to points at equal distance from the vertices.
If you look at the triangle face-on, you will observe that these surfaces are delimited by
the three triangle sides,
the six normals to the sides at the vertices.
Hence, to find the distance from a given point, project it orthogonally onto the plane of the triangle and locate it among the 7 regions delimited by these half-lines and segments. With an appropriate spatial rotation, this classification can be done in 2D. Then, depending on the region, use either the distance to the plane, to an edge or to a vertex.
Note that in the case of a tessellation, several triangles have to be considered. If there are many of them, an acceleration structure will be needed. This is a broad and somewhat technical topic.
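A sketch of the closest-point computation for a single triangle. It uses the well-known barycentric region classification (as in Ericson's Real-Time Collision Detection) rather than the 2D rotation described above, but it distinguishes the same vertex/edge/face regions; inputs are NumPy arrays and the function names are illustrative. Loop it over all triangles (with an acceleration structure for large tessellations) and compare the smallest distance with the weld point radius.

import numpy as np

def closest_point_on_triangle(p, a, b, c):
    # Returns the point of triangle (a, b, c) closest to p, by classifying p
    # against the vertex, edge and face regions of the triangle.
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                  # vertex region A
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                  # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab          # edge region AB
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                  # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac          # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)   # edge region BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)                 # face region

def point_triangle_distance(p, a, b, c):
    return np.linalg.norm(p - closest_point_on_triangle(p, a, b, c))

With the example data above, point_triangle_distance(p, v1, v2, v3) comes out far larger than the weld point radius of 1.8, consistent with the "no collision" statement in the question.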

3D mesh direction detection

I have a 3D mesh consisting of triangle polygons. My mesh can be either oriented left or right:
I'm looking for a method to detect mesh direction: right vs left.
So far I tried to use mesh centroid:
Compare centroid to bounding-box (b-box) center
See if centroid is located left of b-box center
See if centroid is located right of b-box center
But the problem is that the centroid and the b-box center do not differ reliably in most cases.
I wonder what is a quick algorithm to detect my mesh direction.
Update
An idea proposed by @collapsar is to order the convex hull points clockwise and investigate the longest edge:
UPDATE
Another approach, suggested by @YvesDaoust, is to investigate two specific regions of the mesh:
Count the vertices in two predefined regions of the bounding box. This is a fairly simple O(N) procedure.
Unless your dataset is sorted in some way, you can't be faster than O(N). But if the point density allows it, you can subsample by taking, say, every tenth point while applying the procedure.
You could also keep your centroid idea, but apply it to a subpart of the mesh.
The efficiency of an algorithm to solve your problem will depend on the data structures that represent your mesh. You might need to be more specific about them in order to obtain a sufficiently performant procedure.
The algorithms are presented in an informal way. For a more rigorous analysis, math.stackexchange might be a more suitable place to ask (or another contributor is more adept to answer ...).
The algorithms are heuristic by nature. Proposals 1 and 3 will work fine for meshes whose boundary is mostly locally convex (skipping a rigorous mathematical definition here). Proposal 2 should be less dependent on the mesh shape (and can easily be tuned to cater for ill-behaved shapes).
Proposal 1 (Convex Hull, 2D)
Let M be the set of mesh points, projected onto a 'suitable' plane as suggested by the graphics you supplied.
Compute the convex hull CH(M) of M.
Order the n points of CH(M) in clockwise order relative to any point inside CH(M) to obtain a point sequence seq(P) = (p_0, ..., p_(n-1)), with p_0 being an arbitrary element of CH(M). Note that this is usually a by-product of the convex hull computation.
Find the longest edge of the convex polygon implied by CH(M).
Specifically, find k, such that the distance d(p_k, p_((k+1) mod n)) is maximal among all d(p_i, p_((i+1) mod n)); 0 <= i < n;
Consider the vector (p_k, p_((k+1) mod n)).
If the y coordinate of its head is greater than that of its tail (ie. its projection onto the line ((0,0), (0,1)) is oriented upwards) then your mesh opens to the left, otherwise to the right.
Step 3 exploits the condition that the mesh boundary be mostly locally convex. Thus the convex hull polygon sides are basically short, with the exception of the side that spans the opening of the mesh.
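A minimal 2D sketch of Proposal 1, assuming SciPy is available and that the mesh points have already been projected onto a suitable plane; the function name opening_direction is illustrative.

import numpy as np
from scipy.spatial import ConvexHull

def opening_direction(points_2d):
    # points_2d: (n, 2) mesh points projected onto a suitable plane.
    hull = ConvexHull(points_2d)
    seq = points_2d[hull.vertices[::-1]]        # SciPy gives CCW order in 2D; reverse to CW
    edges = np.roll(seq, -1, axis=0) - seq      # edge vectors p_k -> p_((k+1) mod n)
    k = np.argmax(np.hypot(edges[:, 0], edges[:, 1]))
    # Longest clockwise edge pointing upwards means the mesh opens to the left.
    return "left" if edges[k, 1] > 0 else "right"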
Proposal 2 (bisector sampling, 2D)
Order the mesh points by their x coordinates into a sequence seq(M).
Split seq(M) into two halves; let seq_left(M) and seq_right(M) denote the two parts.
Repeat the following steps for both point sets.
3.1. Select randomly 2 points p_0, p_1 from the point set.
3.2. Find the midpoint (bisecting point) p_01 of the line segment (p_0, p_1).
3.3. Test whether p_01 lies within the mesh.
3.4. Keep a count on failed tests.
Statistically, the mesh point subset that 'contains' the opening will produce more failures for the same given number of tests run on each partition. Alternative test criteria will work as well, eg. recording the average distance d(p_0, p_1) or the average length of (p_0, p_1) portions outside the mesh (both higher on the mesh point subset with the opening). Cut off repetition of step 3 if the difference of test results between both halves is 'sufficiently pronounced'. For ill-behaved shapes, increase the number of repetitions.
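A sketch of Proposal 2 with a simple failure count; point_in_mesh is an assumed, user-supplied predicate (not defined here) that reports whether a 2D point lies inside the mesh, and all other names are illustrative.

import numpy as np

def opening_half(points_2d, point_in_mesh, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    order = points_2d[np.argsort(points_2d[:, 0])]       # step 1: sort by x
    halves = {"left": order[:len(order) // 2],           # step 2: split into two halves
              "right": order[len(order) // 2:]}
    failures = {}
    for side, pts in halves.items():                     # step 3: repeated midpoint tests
        fails = 0
        for _ in range(trials):
            i, j = rng.choice(len(pts), size=2, replace=False)
            mid = (pts[i] + pts[j]) / 2                  # midpoint of the segment (p_0, p_1)
            fails += not point_in_mesh(mid)
        failures[side] = fails
    return max(failures, key=failures.get)               # the opening produces more failures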
Proposal 3 (Convex Hull, 3D)
For the sake of completeness only, as your problem description suggests that the analysis effectively takes place in 2D.
Similar to Proposal 1, the computations can be performed in 3D. The convex hull of the mesh points then implies a convex polyhedron whose faces should be ordered by area. Select the face with the maximum area and compute its outward-pointing normal which indicates the direction of the opening from the perspective of the b-box center.
The computation gets more complicated if there is much variation in the side lengths of the minimal bounding box of the mesh points, i.e. if there is a plane in which most of the variation of the mesh point coordinates occurs. In the graphics you've supplied that would be the plane in which the mesh points are rendered, assuming that their coordinates do not vary much along the axis perpendicular to that plane.
The solution is to identify such a plane and project the mesh points onto it, then resort to proposal 1.
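For completeness, a sketch of Proposal 3 with SciPy. Note that SciPy triangulates the hull, so a large flat face may be split into several coplanar facets that would have to be merged before comparing areas (ignored here); the function name is illustrative.

import numpy as np
from scipy.spatial import ConvexHull

def opening_normal_3d(points):
    hull = ConvexHull(points)
    tri = points[hull.simplices]                               # hull facets as triangles
    areas = 0.5 * np.linalg.norm(np.cross(tri[:, 1] - tri[:, 0],
                                          tri[:, 2] - tri[:, 0]), axis=1)
    k = int(np.argmax(areas))
    return hull.equations[k, :3]                               # outward normal of the largest facet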

Snapping vector to a point from a grid on a sphere (icosahedron)

Here is a problem that will turn your brain inside out; I've been trying to deal with it for quite some time already.
Suppose you have a sphere located at the origin of 3D space. The sphere is segmented into a grid of equidistant points. The procedure that forms the grid isn't that important, but what seems simplest to me is to use the regular 3D computer graphics sphere generation procedure (the algorithm that forms the sphere is described in the picture below).
Now, after I have such a sphere (i.e. an icosahedron of some degree), I need a computationally trivial procedure that is capable of snapping (the angle of) a random unit vector to its closest icosahedron edge points. It is also acceptable if the vector is snapped to the center point of the triangle it intersects.
I would like to emphasise that it is important that the procedure be computationally trivial. This means that procedures that actually create a sphere in memory and then search among every triangle of the sphere are not a good idea, because such a search requires access to the global heap and RAM, which is slow, and I need to perform this procedure millions of times on low-end mobile hardware.
The procedure should yield its result through a set of mathematical equations based on only two values: the vector and the degree of the icosahedron (i.e. of the sphere).
Any thoughts? Thank you in advance!
============
Edit
One afterthought that just came to my mind: it seems that in the diagram below, step 3 (i.e. projecting each new vertex onto the unit sphere) is not important at all, because after bisection, projecting every vertex onto the sphere preserves all angular characteristics of the bisected shape we are trying to snap to. So the task simplifies to identifying the coordinates of the bisected sub-triangle that is penetrated by the vector.
Make a table with 20 entries for the top-level icosahedron face coordinates - for example, build them from the wiki coordinate set:
The vertices of an icosahedron centered at the origin with an edge length of 2 and a circumscribed sphere radius of 2 sin(2π/5) are described by circular permutations of
V[] = (0, ±1, ±ϕ)
where ϕ = (1 + √5)/2 is the golden ratio (also written τ).
and calculate the corresponding central vectors C[] (the sum of the three vertex vectors of every face).
Find the closest central vector using the maximum of the dot product (DP) of your vector P with all C[]. Perhaps it is possible to reduce the number of checks by accounting for the components of P (for example, if the dot product of P and some V[i] is negative, there is no sense in considering the faces that are neighbors of V[i]). I am not sure this elimination takes less time than a direct full comparison of the DPs with all centers.
When the big triangle face is determined, project P onto the plane of that face and get the coordinates of P' in u-v (decompose AP' by AB and AC, where A, B, C are the face vertices).
Multiply u, v by 2^N (the degree of subdivision):
u' = u * 2^N
v' = v * 2^N
iu = Floor(u')
iv = Floor(v')
fu = Frac(u')
fv = Frac(v')
The integer part of u' is the "row" of the small triangle, the integer part of v' is the "column". The fractional parts are trilinear coordinates inside the small triangle face, so we can choose the smallest value among fu, fv, 1-fu-fv to get the closest vertex. Calculate this closest vertex and normalize the vector if needed.
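A sketch of the face table and the dot-product face selection (the first two steps above); the u-v decomposition and subdivision lookup then proceed as described. The face list is derived here by brute force from the vertex coordinates purely for illustration, and all names are assumptions, not part of the original answer.

import numpy as np
from itertools import combinations, product

PHI = (1 + np.sqrt(5)) / 2

def icosahedron():
    # 12 vertices: circular permutations of (0, +-1, +-phi), edge length 2.
    verts = []
    for s1, s2 in product((-1, 1), repeat=2):
        base = (0.0, s1 * 1.0, s2 * PHI)
        for r in range(3):
            verts.append((base[r % 3], base[(r + 1) % 3], base[(r + 2) % 3]))
    V = np.array(verts)
    # 20 faces: every triple of mutually adjacent vertices (pairwise distance 2).
    faces = [f for f in combinations(range(12), 3)
             if all(abs(np.linalg.norm(V[a] - V[b]) - 2.0) < 1e-9
                    for a, b in combinations(f, 2))]
    return V, faces

def select_face(p, V, faces):
    # Central vectors C[] and the face maximizing dot(P, C).
    C = np.array([V[list(f)].sum(axis=0) for f in faces])
    return faces[int(np.argmax(C @ np.asarray(p, float)))]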
It's not equidistant; you can see this if you study this version:
It's a problem of geodesic dome frequency, and some people have spent time researching all known methods to do that geometry: http://geo-dome.co.uk/article.asp?uname=domefreq - see, that guy is a self-labelled geodesizer :)
One page told me that the progression goes like this: 2 + 10·4^N (12, 42, 162, ...)
You can simplify it down to a simple flat fractal triangle, where every triangle divides into 4 smaller triangles, and each time the subdivision is rotated 12 times around the sphere.
Logically, it is only one triangle rotated 12 times, and if you solve the code for that one side, then you have the lowest-computation version of the geodesic sphere.
If you don't want to keep the 12 sides as a series of arrays and you want a lower-memory version, then you can read about midpoint subdivision code; there are a lot of versions of midpoint subdivision.
I may have completely missed something; it's just that there isn't a truly equidistant geodesic dome, because a triangle doesn't map to a sphere - only the icosahedron itself does.

An efficient way to simulate many particle collisions?

I would like to write a small program simulating many particle collisions, starting first in 2D (I would extend it to 3D later on), to (in 3D) simulate the convergence towards the Boltzmann distribution and also to see how the distribution evolves in 2D.
I have not yet started programming, so please don't ask for code samples; it is a rather general question that should help me get started. The physics behind this problem is no issue for me; it is rather the fact that I will have to simulate at least 200-500 particles to achieve a reasonably good speed distribution, and I would like to do that in real time.
Now, for every time step I would first update the positions of all particles and then check for collisions to update the new velocity vectors. That, however, involves a lot of checking, since I would have to see whether every single particle collides with every other particle.
I found this post on more or less the same problem, and the approach used there was also the only one I could think of. I am afraid, however, that it will not work very well in real time, because it would involve too many collision checks.
So now: even if this approach works performance-wise (getting, say, 40 fps), can anybody think of a way to avoid unnecessary collision checks?
My own idea is to split the board (or, in 3D, the space) into squares (cubes) whose dimensions are at least the diameter of the particles, and to only check for collisions if the centres of two particles lie within adjacent squares of the grid...
I would be happy to hear more ideas, as I would like to increase the number of particles as much as I can while still keeping a real-time calculation/simulation going.
Edit: All collisions are purely elastic, without any other forces doing work on the particles. The initial situation will be determined by some user-chosen variables for picking random starting positions and velocities.
Edit 2: I found a good and very helpful paper on the simulation of particle collisions here. Hopefully it helps people who are interested in more depth.
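Not part of the original thread, but here is a 2D sketch of the uniform-grid idea from the question (cell size at least one particle diameter, so only particles in the same or adjacent cells need a narrow-phase check). Positions are assumed to be a NumPy (n, 2) array; all names are illustrative.

import numpy as np
from collections import defaultdict
from itertools import combinations, product

def candidate_pairs(positions, cell_size):
    # Hash every particle into a grid cell, then only pair up particles whose
    # cells are identical or adjacent. cell_size should be >= particle diameter.
    cells = defaultdict(list)
    for idx, p in enumerate(positions):
        cells[tuple((p // cell_size).astype(int))].append(idx)
    pairs = set()
    for (cx, cy), members in cells.items():
        pairs.update(combinations(sorted(members), 2))          # same cell
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) == (0, 0):
                continue
            for a in members:                                   # adjacent cells
                for b in cells.get((cx + dx, cy + dy), []):
                    pairs.add((min(a, b), max(a, b)))
    return pairs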
If you think about it, particles moving in a plane are really a 3D system where the three dimensions are x, y and time (t).
Let's say a "time step" goes from t0 to t1. For each particle, you create a 3D line segment going from P0(x0, y0, t0) to P1(x1, y1, t1), based on the current particle position, velocity and direction.
Partition the 3D space into a 3D grid, and link each 3D line segment to the cells it crosses.
Now each grid cell must be checked. If it is linked to 0 or 1 segments, it needs no further check (mark it as checked). If it contains 2 or more segments, you need to check for collisions between them: compute the 3D collision point Pt, shorten the two segments so that they end at this point (and remove their links to the cells they no longer cross), then create two new segments going from Pt to the newly computed P1 points according to the new directions/velocities of the particles. Add these new line segments to the grid and mark the cell as checked. Adding a line segment to the grid turns all crossed cells back to the unchecked state.
When there are no more unchecked cells in your grid, you have resolved your time step.
EDIT
For 3D particles, adapt above solution to 4D.
Octrees are a nice form of 3D space-partitioning grid in this case, as you can "bubble up" the checked/unchecked status to quickly find the cells requiring attention.
A good high level example of spatial division is to think about the game of pong, and detecting collisions between the ball and a paddle.
Say the paddle is in the top left corner of the screen, and the ball is near the bottom left corner of the screen...
--------------------
|▌ |
| |
| |
| ○ |
--------------------
It's not necessary to check for collisions every time the ball moves. Instead, split the playing field in two, right down the middle. Is the ball in the left-hand side of the field? (a simple point-inside-rectangle test)
Left Right
|
---------|----------
|▌ | |
| | |
| | |
| ○ | |
---------|----------
|
If the answer is yes, split the left-hand side again, this time horizontally, so that we have a top-left and a bottom-left partition.
Left Right
|
---------|----------
|▌ | |
| | |
----------- |
| | |
| ○ | |
---------|----------
|
Is the ball in the same top-left corner of the screen as the paddle? If not, there is no need to check for a collision! Only objects that reside in the same partition need to be tested for collision with each other. By doing a series of simple (and cheap) point-inside-rectangle checks, you can easily save yourself from doing a more expensive shape/geometry collision check.
You can continue splitting the space into smaller and smaller chunks until an object spans two partitions. This is the basic principle behind BSP (a technique pioneered in early 3D games like Quake). There is a whole bunch of theory on the web about spatial partitioning in 2 and 3 dimensions.
http://en.wikipedia.org/wiki/Space_partitioning
In 2 dimensions you would often use a BSP or quadtree. In 3 dimensions you would often use an octree. However the underlying principle remains the same.
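A minimal point-quadtree sketch of this partitioning idea, purely illustrative (production code would also handle objects with extent that span several partitions); class and parameter names are assumptions.

class Quadtree:
    # Minimal point quadtree: a node splits into four children once it holds
    # more than `cap` points (and its box is still reasonably large).
    def __init__(self, x0, y0, x1, y1, cap=4):
        self.box = (x0, y0, x1, y1)
        self.cap = cap
        self.points = []
        self.children = None

    def insert(self, p):
        x0, y0, x1, y1 = self.box
        if not (x0 <= p[0] <= x1 and y0 <= p[1] <= y1):
            return False
        if self.children is not None:
            return any(c.insert(p) for c in self.children)
        self.points.append(p)
        if len(self.points) > self.cap and (x1 - x0) > 1e-6:
            self._split()
        return True

    def _split(self):
        x0, y0, x1, y1 = self.box
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [Quadtree(x0, y0, mx, my, self.cap),
                         Quadtree(mx, y0, x1, my, self.cap),
                         Quadtree(x0, my, mx, y1, self.cap),
                         Quadtree(mx, my, x1, y1, self.cap)]
        pts, self.points = self.points, []
        for p in pts:
            any(c.insert(p) for c in self.children)

    def query(self, x0, y0, x1, y1):
        # Collect stored points inside the query rectangle.
        bx0, by0, bx1, by1 = self.box
        if bx1 < x0 or x1 < bx0 or by1 < y0 or y1 < by0:
            return []
        hits = [p for p in self.points if x0 <= p[0] <= x1 and y0 <= p[1] <= y1]
        if self.children is not None:
            for c in self.children:
                hits.extend(c.query(x0, y0, x1, y1))
        return hits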
You can think along the lines of 'divide and conquer'. The idea is to identify orthogonal parameters which don't impact each other. For example, one can think of splitting the momentum components along the 2 axes in 2D (3 axes in 3D) and computing collisions/positions independently. Another way to identify such parameters is to group particles which are moving perpendicular to each other: even if they collide, the net momentum along those lines doesn't change.
I agree the above doesn't fully answer your question, but it conveys a fundamental idea which you may find useful here.
Let us say that at time t, for each particle, you have:
P position
V speed
and an N*(N-1)/2 array of information between particles A(i) and A(j), where i < j; you use symmetry to evaluate an upper triangular matrix instead of a full N*(N-1) grid.
MAT[i][j] = { dx, dy, dz, sx, sy, sz }
which means that, with respect to particle i, particle j is at a distance made up of the three components dx, dy and dz, and has a delta-vee (relative velocity) multiplied by dt given by sx, sy, sz.
To move to instant t+dt, you tentatively update the positions of all particles based on their speed:
px[i] += dx[i] // px,py,pz make up vector P; dx,dy,dz is vector V premultiplied by dt
py[i] += dy[i] // Which means that we could use "particle 0" as a fixed origin
pz[i] += dz[i] // except you can't collide with the origin, since it's virtual
Then you check the whole N*(N-1)/2 array and tentatively calculate the new relative distance between every pair of particles:
dx1 = dx + sx
dy1 = dy + sy
dz1 = dz + sz
DN = dx1*dx1+dy1*dy1+dz1*dz1 # This is the new squared distance
If DN < D², with D the diameter of a particle, you have had a collision during the dt that just passed.
You then calculate exactly where this happened, i.e. you calculate the exact d't of the collision, which you can do from the old squared distance D2 (dx*dx+dy*dy+dz*dz) and the new DN: it's
d't = [(SQRT(D2)-D)/(SQRT(D2)-SQRT(DN))]*dt
(the time needed to reduce the distance from SQRT(D2) to D, at a speed that covers the distance SQRT(D2)-SQRT(DN) in time dt). This assumes that particle j, seen from the reference frame of particle i, hasn't "overshot".
It is a heftier calculation, but you only need it when you actually get a collision.
Knowing d't, and d"t = dt - d't, you can repeat the position calculation on Pi and Pj using dx*d't/dt etc., and obtain the exact positions P of particles i and j at the instant of collision; you update the speeds, then integrate for the remaining d"t and get the positions at the end of the time step dt.
Note that if we stopped here this method would break if a three-particle collision took place, and would only handle two-particle collisions.
So instead of running the calculations we just mark that a collision occurred at d't for particles (i,j), and at the end of the run, we save the minimum d't at which a collision occurred, and between whom.
I.e., say we check particles 25 and 110 and find a collision at 0.7 dt; then we find a collision between 110 and 139 at 0.3 dt. There are no collisions earlier than 0.3 dt.
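A sketch of the tentative update plus the earliest-collision search over all pairs, using the d't formula above; it is O(N²) per step, works on NumPy (n, 3) arrays, and all names are illustrative.

import numpy as np
from itertools import combinations

def earliest_collision(P, V, dt, D):
    # P, V: (n, 3) positions and velocities. Returns (d't, i, j) for the earliest
    # collision inside the step, or (None, None, None) if no pair closes to distance D.
    best = (None, None, None)
    for i, j in combinations(range(len(P)), 2):
        d0 = P[j] - P[i]                        # relative position at the start of dt
        d1 = d0 + (V[j] - V[i]) * dt            # tentative relative position at the end
        D2, DN = np.dot(d0, d0), np.dot(d1, d1)
        if DN < D * D <= D2:                    # the pair ends the step overlapping
            t_col = (np.sqrt(D2) - D) / (np.sqrt(D2) - np.sqrt(DN)) * dt
            if best[0] is None or t_col < best[0]:
                best = (t_col, i, j)
    return best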
We enter the collision-updating phase and "collide" 110 and 139, updating their positions and speeds. Then we repeat the 2*(N-2) calculations for each (i, 110) and (i, 139).
We will discover that there probably still is a collision with particle 25, but now at 0.5 dt, and maybe, say, another between 139 and 80 at 0.9 dt. 0.5 dt is the new minimum, so we repeat the collision calculation between 25 and 110, and so on, suffering a slight slow-down in the algorithm for each collision.
Thus implemented, the only risk now is that of "ghost collisions", i.e., a particle is at D > diameter from a target at time t-dt, and is at D > diameter on the other side at time t.
This you can only avoid by choosing a dt so that no particle ever travels more than half its own diameter in any given dt. Actually, you might use an adaptive dt based on the speed of the fastest particle. Ghost glancing collisions are still possible; a further refinement is to reduce dt based on the nearest distance between any two particles.
This way, it is true that the algorithm slows down considerably in the vicinity of a collision, but it speeds up enormously when collisions aren't likely. If the minimum distance (which we calculate at almost no cost during the loop) between two particles is such that the fastest particle (which also we find out at almost no cost) can't cover it in less than fifty dts, that's a 4900% speed increase right there.
Anyway, in the generic no-collision case we have now done five sums (actually more like thirty-four, due to array indexing), three products and several assignments for every particle pair. If we include the (k,k) pair to take into account the particle update itself, we have a good approximation of the cost so far.
This method has the advantage of being O(N^2) - it scales with the number of particles - instead of being O(M^3) - scaling with the volume of space involved.
I'd expect a C program on a modern processor to be able to manage in real time a number of particles in the order of the tens of thousands.
P.S.: this is actually very similar to Nicolas Repiquet's approach, including the necessity of slowing down in the 4D vicinity of multiple collisions.
Until a collision between two particles (or between a particle and a wall) happens, the integration is trivial. The approach here is to calculate the time of the first collision, integrate until then, then calculate the time of the second collision, and so on. Let's define tw[i] as the time the i-th particle takes to hit the first wall. It is quite easy to calculate, although you must take the diameter of the sphere into account.
The calculation of the time tc[i,j] of the collision between two particles i and j takes a little more effort, and follows from studying their distance d as a function of time:
d^2=Δx(t)^2+Δy(t)^2+Δz(t)^2
We ask whether there exists a positive t such that d^2 = D^2, D being the diameter of the particles (or the sum of the two radii, if you want them to differ). Now consider the first term of the sum on the RHS:
Δx(t)^2 = (x[i](t) - x[j](t))^2
        = (x[i](t0) - x[j](t0) + (u[i] - u[j])t)^2
        = (x[i](t0) - x[j](t0))^2 + 2(x[i](t0) - x[j](t0))(u[i] - u[j])t + (u[i] - u[j])^2 t^2
where the new terms appearing define the law of motion of the two particles for the x coordinate,
x[i](t)=x[i](t0)+u[i]t
x[j](t)=x[j](t0)+u[j]t
and t0 is the time of the initial configuration. Let (u[i], v[i], w[i]) be the three velocity components of the i-th particle. Doing the same for the other two coordinates and summing up, we get a 2nd-order polynomial equation in t,
at^2+2bt+c=0,
where
a=(u[i]-u[j])^2+(v[i]-v[j])^2+(w[i]-w[j])^2
b=(x[i](t0)-x[j](t0))(u[i]-u[j]) + (y[i](t0)-y[j](t0))(v[i]-v[j]) + (z[i](t0)-z[j](t0))(w[i]-w[j])
c=(x[i](t0)-x[j](t0))^2 + (y[i](t0)-y[j](t0))^2 + (z[i](t0)-z[j](t0))^2-D^2
Now, there are many criteria to evaluate the existence of a real solution, etc. You can look into that later if you want to optimize it. In any case you get tc[i,j], and if it is complex or negative you set it to plus infinity. To speed things up, remember that tc[i,j] is symmetric, and you also want to set tc[i,i] to infinity for convenience.
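A sketch of the tc[i,j] computation from the coefficients above: the earlier root of a t^2 + 2 b t + c = 0, or +infinity when it is complex or negative. Inputs are assumed to be 3-component NumPy arrays; the function name is illustrative.

import numpy as np

def pair_collision_time(pi, pj, vi, vj, D):
    # Coefficients of a t^2 + 2 b t + c = 0 for the distance between particles i and j.
    dp = np.asarray(pi, float) - np.asarray(pj, float)
    dv = np.asarray(vi, float) - np.asarray(vj, float)
    a = np.dot(dv, dv)
    b = np.dot(dp, dv)
    c = np.dot(dp, dp) - D * D
    disc = b * b - a * c                        # reduced discriminant
    if a == 0.0 or disc < 0.0:
        return np.inf                           # no relative motion or complex roots
    t = (-b - np.sqrt(disc)) / a                # earlier of the two roots
    return t if t > 0.0 else np.inf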
Then you take the minimum tmin of the array tw and of the matrix tc, and integrate in time for the time tmin.
You now subtract tmin from all elements of tw and tc.
In case of an elastic collision with the wall of the i-th particle, you just flip the velocity of that particle, and recalculate only tw[i] and tc[i,k] for every other k.
In case of a collision between two particles, you recalculate tw[i],tw[j] and tc[i,k],tc[j,k] for every other k. The evaluation of an elastic collision in 3D is not trivial, maybe you can use this
http://www.atmos.illinois.edu/courses/atmos100/userdocs/3Dcollisions.html
As for how the process scales: you have an initial overhead that is O(n^2). Then the integration between two time steps is O(n), and hitting a wall or a collision requires O(n) recalculation. But what really matters is how the average time between collisions scales with n. There should be an answer somewhere in statistical physics for this :-)
Don't forget to add further intermediate timesteps if you want to plot a property against time.
You can define a repulsive force between particles, proportional to 1/(distance squared). At each iteration, calculate all the forces between particle pairs, add up the forces acting on each particle, compute the particle's acceleration, then its velocity and finally its new position. Collisions are handled naturally this way. But dealing with interactions between particles and walls is a different problem and must be handled in another way.
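A sketch of one such force-based step: naive O(n²) force accumulation followed by explicit Euler integration. P and V are assumed to be NumPy float arrays; k (force constant) and m (mass) are illustrative parameters.

import numpy as np

def repulsion_step(P, V, dt, k=1.0, m=1.0):
    # Pairwise repulsive force with magnitude k / d^2, summed per particle,
    # then acceleration -> velocity -> position (explicit Euler).
    F = np.zeros_like(P)
    for i in range(len(P)):
        for j in range(i + 1, len(P)):
            d = P[i] - P[j]
            r2 = np.dot(d, d)
            f = k * d / (r2 * np.sqrt(r2))      # (k / r^2) times the unit vector from j to i
            F[i] += f
            F[j] -= f
    V = V + (F / m) * dt
    return P + V * dt, V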

Resources