Is point A nearby point B in 3D - distance check - algorithm

I am looking for an efficient algorithm for checking whether one point is near another in 3D.
sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) < radius
This doesn't seem very fast, and I don't actually need that much accuracy. How else could I do this?

Square the distance and drop the call to sqrt(); that's much faster:
(x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2 < radius * radius
Of course, in many cases at least radius * radius can be computed ahead of time and stored as e.g. squaredRadius.
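A minimal C# sketch of this squared-distance test (the method name IsNear and the squaredRadius parameter are illustrative, not from the original post):

// Squared-distance proximity test: no sqrt call needed.
// Pass radius * radius, precomputed once, as squaredRadius.
static bool IsNear(double x1, double y1, double z1,
                   double x2, double y2, double z2,
                   double squaredRadius)
{
    double dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
    return dx * dx + dy * dy + dz * dz < squaredRadius;
}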

Well, if you can be content with a cube distance rather than a spherical distance, a pretty naive implementation would look like this:
Math.Abs(x2-x1) < radius && Math.Abs(y2-y1) < radius && Math.Abs(z2-z1) < radius
You can use your own favourite methods of optimising Math.Abs if it proves a bottleneck.
I should also add that if one of the dimensions generally varies less than other dimensions then putting that one last should lead to a performance gain. For example if you are mainly dealing with objects on a "ground" x-y plane then check the z axis last, as you should be able to rule out collisions earlier by using the x and y checks.

If you do not need high accuracy, you can check whether the 2nd point is inside a cube (side length 2a) centered on the 1st point, rather than a sphere:
|x2-x1|<a && |y2-y1|<a && |z2-z1|<a

Because of pipelined processor architectures it is nowadays, in most cases, more efficient to do the FPU calculation twice than to branch once. In case of a branch misprediction you stall for ages (in CPU terms).
So I would rather go the calculation way, not the branching way.

If you don't need the accuracy, you can check whether you are in a cube rather than a sphere.
There are options here as well: you can pick the cube that encloses the sphere (no false negatives), the cube with the same volume as the sphere (some false positives and negatives, but the maximum error is minimized), or the cube that is contained within the sphere (no false positives); see the sketch below for the corresponding half-side lengths.
This technique also extends well to higher dimensions.
If you want to get all the points near another one, some form of spatial indexing may also be appropriate (a kd-tree, perhaps).
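For reference, a small C# sketch of the three cube choices for a sphere of radius r; the constants follow from elementary geometry, and the helper name is illustrative:

// Half-side length 'a' of the axis-aligned cube used in the |x2-x1| < a checks,
// for a proximity sphere of radius r.
static double CubeHalfSide(double r, string choice) => choice switch
{
    "enclosing"   => r,                                          // cube encloses the sphere: no false negatives
    "equalVolume" => r * Math.Cbrt(4.0 * Math.PI / 3.0) / 2.0,   // same volume as the sphere: error balanced
    "inscribed"   => r / Math.Sqrt(3.0),                         // cube inside the sphere: no false positives
    _ => throw new ArgumentException("unknown choice", nameof(choice))
};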

If you have to check against many other points, you could consider using a spatial ordering method to quickly discover points, that are near a certain location. Have a look at this link:
wiki link

If we were going to optimise this because it was being run billions of times, I would solve it by using unwind's method and then parallelizing it with SIMD. There are a few different ways to do that. You might simply do all the subtractions (x2-x1, y2-y1, z2-z1) in one op, and then the multiplies in one op as well. That way you parallelize inside the method without redesigning your algorithm.
Or you could write a bulk version which calculates (x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2 - r^2 on many elements and stores the results in an array. You may get better throughput, but it means redesigning your algorithm, and it depends on what the tests are used for.
You could also easily optimize this using something like OpenMP if you were really doing lots of tests in a row.
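A rough C# sketch of the per-call SIMD variant described above, using System.Numerics.Vector3 (which the JIT maps onto SIMD instructions on supported hardware); the names are illustrative:

using System.Numerics;

// Do the three subtractions in one vector op, then compare the squared
// distance against the precomputed squared radius.
static bool IsNear(Vector3 a, Vector3 b, float squaredRadius)
{
    Vector3 d = b - a;                        // (x2-x1, y2-y1, z2-z1) in one op
    return Vector3.Dot(d, d) < squaredRadius; // dx*dx + dy*dy + dz*dz
}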

Use max(abs(x1-x2), abs(y1-y2), abs(z1-z2)) (the Chebyshev distance) and compare it against the radius.

After dropping the square root, if the values get larger, it's better to apply a log.

This does the cube-distance, and if you are doing a lot of points, most of the time it only does the first test.
close = (abs(x2-x1) < r && abs(y2-y1) < r && abs(z2-z1) < r);

Related

How do I synchronize scale and position of map and point layers in d3.js?

I've seen many example maps in d3 where points added to a map automatically align as expected, but in code I've adapted from http://bl.ocks.org/bycoffe/3230965 the points I've added do not line up with the map below.
Example here: https://naltmann.github.io/d3-geo-collision/
(the points should match up with some major US cities)
I'm pretty sure the difference is due to the code around scale/range, but I don't know how to unify them between the map and points.
Aligning geographic features geographically with your example will be challenging - first you are projecting points and then scaling x,y:
node.cx = xScale(projection(node.coordinates)[0]);
node.cy = yScale(projection(node.coordinates)[1]);
The ranges for the scales are interesting in that both limits of both ranges are negative; this might be an attempt to rectify the positioning of points due to the cumulative nature of the forces on the points:
.on('tick', function(e) {
k = 10 * e.alpha;
for (i=0; i < nodes.length; i++) {
nodes[i].x += k * nodes[i].cx
nodes[i].y += k * nodes[i].cy
This is challenging as if we remove the scales, the points move farther and farther right and down. This cumulative nature means that with each tick the points drift further and further from recognizable geographic coordinates. This is fine when dealing with a set of geographic data that undergoes the same transformation, but when dealing with a background that doesn't undergo the same transformation, it's a bit hard.
I'll note that if you want a map width of 1800 and a height of 900, you should set the mercator projection's translate to [1800/2,900/2] and the scale to something like 1800/Math.PI/2
The disconnection between geographic coordinates and force coordinates appears to be very difficult to rectify. Any solution for this particular layout and dimensions is likely to fail on different layouts and dimensions.
Instead I'd suggest attempting to use only a projection to place coordinates and not cumulatively adding force changes to each point. This is the short answer to your question.
For a longer answer, my first thought was to get rid of the collision function and use an anchor point linked to a floating point for each city, only drawing the floating point (using link distance to keep them close). This is likely a cleaner solution, but one that is unfortunately completely different than what you've attempted.
However, my second thoughts were more towards keeping your example, but removing the scales (and the cumulative forces) and reducing the forces to zero so that the collision function can work without interference. Based on those thoughts, here's a demonstration of a possible solution.

Any faster method to move things in a circle?

Currently I'm using Math.cos and Math.sin to move objects in a circle in my game; however, after reading a bit about it, I suspect it's slow (I haven't done proper tests yet, though).
Are there any ways to calculate this faster? I've been reading that one alternative could be a sort of hash table with stored pre-calculated results, like people used in the old days before computers.
Any input is appreciated.
Expanding on my comment, if you don't have any angular acceleration (the angular velocity stays constant -- this is a requirement for the object to remain traveling in a circle with constant radius without changing the center-pointing force, e.g. via tension in a string), then you can use the following strategy:
1) Compute B = angular_velocity * time_step_size. This is how much angle change the object needs to go through in a single time step.
2) Compute sinb = sin(B) and cosb = cos(B).
3)
Note that we want to change the angle from A to A+B (the object is going counterclockwise). In this derivation, the center of the circle we're orbiting is given by the origin.
Since the radius of the circle is constant, we know r*sin(A+B) = y_new = r*sin(A)*cos(B) + r*cos(A)*sin(B) = y_old*cos(B) + x_old*sin(B), and r*cos(A+B) = x_new = r*cos(A)*cos(B) - r*sin(A)*sin(B) = x_old*cos(B) - y_old*sin(B).
We've removed the cosine and sine of anything we don't already know, so the Cartesian coordinates can be written as
x_new = x_old*cosb - y_old*sinb
y_new = x_old*sinb + y_old*cosb
No more cos or sin calls except in an initialization step which is called once. Obviously, this won't save you anything if B keeps changing for whatever reason (either angular velocity or time step size changes).
You'll notice this is the same as multiplying the position vector by a fixed rotation matrix. You can translate by the circle center and translate back if you don't want to only consider circles with a center at the origin.
First Edit
As #user5428643 mentions, this method is numerically unstable over time due to drift in the radius. You can probably correct this by periodically renormalizing x and y (x_new = x_old * r_const / sqrt(x_old^2 + y_old^2) and similarly for y every few thousand steps -- if you implement this, save the factor r_const / sqrt(x_old^2 + y_old^2) since it is the same for both x and y). I'll think about it some more and edit this answer if I come up with a better fix.
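A small C# sketch of this incremental-rotation scheme, including the periodic renormalization described above (the renormalization interval and all names are illustrative):

// Rotate (x, y) around the origin by a fixed angle step each frame,
// using sin/cos computed once, and renormalize occasionally to counter drift.
double angularVelocity = 1.0, timeStepSize = 1.0 / 60.0, radius = 100.0;
double b = angularVelocity * timeStepSize;          // angle change per step
double sinb = Math.Sin(b), cosb = Math.Cos(b);      // initialization, done once

double x = radius, y = 0.0;
for (long step = 1; step <= 1_000_000; step++)
{
    double xNew = x * cosb - y * sinb;
    double yNew = x * sinb + y * cosb;
    x = xNew;
    y = yNew;

    if (step % 10_000 == 0)                         // periodic renormalization
    {
        double scale = radius / Math.Sqrt(x * x + y * y);
        x *= scale;
        y *= scale;
    }
}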
Second Edit
Some more comments on the numerical drift over time:
I did a couple of tests in C++ and python. In C++ using single precision floats, there is sizable drift even after 1 million time steps when B = 0.1. I used a circle with radius 1. In double precision, I didn't notice any drift visually after 100 million steps, but checking the radius shows that it is contaminated in the lower few digits. Doing the renormalization on every step (which is unnecessary if you're just doing visualization) results in an approximately 4 times slower running time versus the drifty version. However, even this version is about 2-3 times faster than using sin and cos on every iteration. I used full optimization (-O3) in g++. In python (using the math package) I only got a speed up of 2 between the drifty and normalized versions, however the sin and cos version actually slots in between these two -- it's almost exactly halfway between these two in terms of run time. Renormalizing every once in a few thousand steps would still make this faster, but it's not nearly as big a difference as my C++ version would indicate.
I didn't do too much scientific testing to get the timings, just a few tests with 1 million to 1 billion steps in increments of 10.
Sorry, not enough rep to comment.
The answers by #neocpp and #oliveryas01 would both be perfectly correct without roundoff error.
The answer by #oliveryas01, just using sine and cosine directly, and precalculating and storing many values if necessary, will work fine.
However, #neocpp's answer, repeatedly rotating by small angles using a rotation matrix, is numerically unstable; over time, the roundoff error in the radius will tend to grow exponentially, so if you run your programme for a long time the objects will slowly move off the circle, spiralling either inwards or outwards.
You can see this mathematically with a little numerical analysis: at each stage, the squared radius is approximately multiplied by a number which is approximately constant and approximately equal to 1, but almost certainly not exactly equal to 1 due to inexactness of floating point representations.
Of course, if you're using double precision numbers and only trying to achieve a simple visual effect, this error may not be large enough to matter to you.
I would stick with sine and cosine if I were you. They're the most efficient way to do what you're trying to do. If you really want maximum performance then you should generate an array of x and y values from the sine and cosine values, then plug that array's values into the circle's position. This way you aren't running sine and cosine repeatedly; you only run them for one cycle.
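A hedged C# sketch of that precomputation (the table size, radius, centre, and loop bounds are made up for illustration):

// Precompute one full revolution of positions once, then index into the table.
const int Steps = 360;                       // table resolution, arbitrary
double radius = 100.0, centerX = 400.0, centerY = 300.0;
var xs = new double[Steps];
var ys = new double[Steps];
for (int i = 0; i < Steps; i++)              // the only sin/cos calls, one cycle
{
    double angle = 2.0 * Math.PI * i / Steps;
    xs[i] = radius * Math.Cos(angle);
    ys[i] = radius * Math.Sin(angle);
}

// In the game loop: one table lookup per frame, no trig calls.
for (int frame = 0; frame < 1000; frame++)
{
    int index = frame % Steps;
    double objX = centerX + xs[index];
    double objY = centerY + ys[index];
    // ... draw the object at (objX, objY) ...
}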
Another possibility that completely avoids the trig functions would be to use a polar-coordinate model, where you set the distance and angle. For example, you could set the x coordinate to be the distance, and the rotation to be the angle, as in...
var gameBoardPin:Sprite = new Sprite();
var gameEntity:Sprite = new YourGameEntityHere();
gameBoardPin.addChild( gameEntity );
...and in your loop...
// move gameEntity relative to the center of gameBoardPin
gameEntity.x = circleRadius;
// rotating gameBoardPin around its center causes gameEntity to orbit at circleRadius
gameBoardPin.rotation = desiredAngleForMovingObject;
gameBoardPin's x,y coordinates would be set to the center of rotation for gameEntity. So, if you wanted the gameEntity to rotate with a 100 pixel tether around the center of the stage, you might...
gameBoardPin.x = stage.stageWidth / 2;
gameBoardPin.y = stage.stageHeight / 2;
gameEntity.x = 100;
...and then in the loop you might...
desiredAngleForMovingObject += 2;
gameBoardPin.rotation = desiredAngleForMovingObject;
With this method you're using degrees instead of radians.

Path Tracing algorithm - Need help understanding key point

So the Wikipedia page for path tracing (http://en.wikipedia.org/wiki/Path_tracing) contains a naive implementation of the algorithm with the following explanation underneath:
"All these samples must then be averaged to obtain the output color. Note this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance-sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray - which is the only ray through which any radiance will be reflected - is zero. In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte-Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be 1)."
The part I'm having trouble understanding is the part in bold. I am familiar with PDFs but I am not quite sure how they fit into here. If we stick to the mirror example, what would be the PDF value we would divide by? Why? How would I go about finding the PDF value to divide by if I was using an arbitrary BRDF value such as a Phong reflection model or Cook-Torrance reflection model, etc? Lastly, why do we divide by the PDF instead of multiply? If we divide, don't we give more weight to a direction with a lower probability?
Let's assume that we have only materials without color (greyscale). Then, their BRDF at each point can be expressed as a single-valued function
float BRDF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit);
Here, phi and theta are the azimuth and zenith angles of the two rays under consideration. For pure Lambertian reflection, this function would look like this:
float lambertBRDF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit)
{
return albedo*1/pi*cos(theta_out);
}
albedo ranges from 0 to 1 - this measures how much of the incoming light is reemitted. The factor 1/pi ensures that the integral of BRDF over all outgoing vectors does not exceed 1. With the naive approach of the Wikipedia article (http://en.wikipedia.org/wiki/Path_tracing), one can use this BRDF as follows:
Color TracePath(Ray r, depth) {
/* .... */
Ray newRay;
newRay.origin = r.pointWhereObjWasHit;
newRay.direction = RandomUnitVectorInHemisphereOf(normal(r.pointWhereObjWasHit));
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected*lambertBRDF(r.phi,r.theta,newRay.phi,newRay.theta,r.pointWhereObjWasHit);
}
As mentioned in the article and by Ross, this random sampling is unfortunate because it traces incoming directions (newRays) from which little light is reflected with the same probability as directions from which there is lots of light. Instead, directions from which much light is reflected to the observer should be selected preferentially, so that the sample rate per contribution to the final color is equal over all directions. For that, one needs a way to generate random rays from a probability distribution. Let's say there exists a function that can do that; it takes as input the desired PDF (which should ideally be equal to the BRDF) and the incoming ray:
vector RandomVectorWithPDF(function PDF(p_i,t_i,p_o,t_o,point x), Ray incoming)
{
// this function is responsible to create random Rays emanating from x
// with the probability distribution PDF. Depending on the complexity of PDF,
// this might be somewhat involved. It is possible, however, to do it for Lambertian
// reflection (how exactly is math, not programming):
vector randomVector;
if(PDF==lambertBRDF)
{
float phi = uniformRandomNumber(0,2*pi);
float rho = acos(sqrt(uniformRandomNumber(0,1))); // zenith angle, measured from the normal, for cosine-weighted sampling
// (the corresponding elevation above the surface would be pi/2 - rho)
randomVector = getVectorFromAzimuthZenithAndNormal(phi,rho,normal(incoming.whereObjectWasHit));
}
else // deal with other PDFs
return randomVector;
}
The code in the TracePath routine would then simply look like this:
newRay.direction = RandomVectorWithPDF(lambertBRDF,r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected;
Because the bright directions are preferred in the choice of samples, you do not have to weight them again by applying the BRDF as a scaling factor to reflected. However, if PDF and BRDF are different for some reason, you would have to scale down the output whenever PDF > BRDF (if you picked too many samples from the respective direction) and scale it up when you picked too few.
In code:
newRay.direction = RandomVectorWithPDF(PDF,r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected*BRDF(...)/PDF(...);
The output is best, however, if BRDF/PDF is equal to 1.
The question remains: why can't one always choose the perfect PDF, which is exactly equal to the BRDF? First, some random distributions are harder to compute than others. For example, if there were a slight variation in the albedo parameter, the algorithm would still do much better with the non-naive sampling than with uniform sampling, but the correction term BRDF/PDF would be needed for the slight variations. Sometimes it might even be impossible to do at all. Imagine a colored object with different reflective behavior for red, green, and blue - you could either render in three passes, one for each color, or use an average PDF which fits all color components approximately, but none perfectly.
How would one go about implementing something like Phong shading? For simplicity, I still assume that there is only one color component, and that the ratio of diffuse to specular reflection is 60% / 40% (the notion of ambient light makes no sense in path tracing). Then my code would look like this:
if(uniformRandomNumber(0,1)<0.6) //diffuse reflection
{
newRay.direction=RandomVectorWithPDF(lambertBRDF,r);
reflected = TracePath(newRay,depth+1)/0.6;
}
else //specular reflection
{
newRay.direction=RandomVectorWithPDF(specularPDF,r);
reflected = TracePath(newRay,depth+1)*specularBRDF/specularPDF/0.4;
}
return emittance + reflected;
Here specularPDF is a distribution with a narrow peak around the reflected ray (theta_in=theta_out, phi_in=phi_out+pi) for which a way to create random vectors is available, and specularBRDF returns the specular intensity from Phong's model (http://en.wikipedia.org/wiki/Phong_reflection_model).
Note how the PDFs are modified by 0.6 and 0.4 respectively.
I'm by no means an expert in ray tracing, but this seems to be classic Monte Carlo: you have lots of possible rays, and you choose one uniformly at random and then average over lots of trials. The distribution you used to choose one of the rays was uniform (they were all equally likely), so you don't have to do any clever re-normalising.
However, perhaps there are lots of possible rays to choose from, but only a few would lead to useful results. We therefore bias towards picking those 'useful' possibilities with higher probability, and then re-normalise (we are not choosing the rays uniformly any more, so we can't just take the average). This is importance sampling.
The mirror example seems to be the following: only one possible ray will give a useful result. If we choose a ray at random, the probability that we hit that useful ray is zero: this is a property of conditional probability on continuous spaces (it's not actually continuous, it's implicitly discretised by your computer, so it's not quite true...): the probability of hitting something specific when there are infinitely many things must be zero. Thus we are re-normalising by something with probability zero - standard conditional probability definitions break when we consider events with probability zero, and that is where the problem would come from.
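To make the divide-by-the-PDF step concrete, here is a minimal, hypothetical C# Monte Carlo sketch (a 1D integral rather than a renderer; the integrand and sampling distribution are invented purely for illustration):

// Estimate the integral of f(x) = x^2 over [0, 1] (true value 1/3) two ways:
// uniform sampling, and importance sampling with pdf p(x) = 2x.
// Each sample contributes f(x) / p(x), exactly as in the path-tracing estimator.
var rng = new Random(42);
int n = 100_000;
double uniformSum = 0.0, importanceSum = 0.0;

for (int i = 0; i < n; i++)
{
    double u = rng.NextDouble();

    // Uniform sampling: p(x) = 1, so the weight is just f(x).
    uniformSum += u * u;

    // Importance sampling: draw x with pdf p(x) = 2x (inverse CDF: x = sqrt(u)),
    // then weight the sample by f(x) / p(x).
    double x = Math.Sqrt(u);
    importanceSum += (x * x) / (2.0 * x);
}

Console.WriteLine($"uniform:    {uniformSum / n}");     // ~0.3333
Console.WriteLine($"importance: {importanceSum / n}");  // ~0.3333, with lower variance

Directions favoured by the PDF are drawn more often, and dividing each sample by the PDF compensates for that so the estimate stays unbiased; that is why the weight is a division rather than a multiplication.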

Algorithm for following the path of ridges on a 3D image

I'm trying to find an algorithm (or algorithm ideas) for following a ridge on a 3D image, derived from a digital elevation model (DEM). I've managed to get a very basic program working which just iterates across each row of the image, marking a ridge line wherever it finds a large change in aspect (i.e. from < 180 degrees to > 180 degrees).
However, the lines this produces aren't brilliant; there are often gaps and various strange artefacts. I'm hoping to extend this by using some sort of algorithm to follow the ridge lines, thus producing lines that are complete (that is, no gaps) and more accurate.
A number of people have mentioned snake algorithms to me, but they don't seem to be quite what I'm looking for. I've also done a lot of searching about path-finding algorithms, but again, they don't seem to be quite the right thing.
Does anyone have any suggestions for types or algorithms or specific algorithms I should look at?
Update: I've been asked to add some more detail on the exact area I'll be applying this to. It's working with gridded elevation data of sand dunes. I'm trying to extract the crests of these sand dunes, which look similar to the boundaries between drainage basins but can be far more complex (for example, there can be multiple sand dunes very close to each other with gradually merging crests).
You can get a good estimate of the ridges using sign changes of the curvature. Note that the curvature will be near zero in flat regions, so 1/curvature blows up there, while along sharp ridges 1/curvature is small. Hence possible pseudo-code for a ridge detection algorithm could be:
for each face in the mesh
    compute 1/curvature
    if abs(1/curvature) < zeroTolerance
        flag face as ridge
    else
        continue
(zeroTolerance is a number near but not equal to zero e.g. 0.003 etc)
Also Meshlab provides a module for normal & curvature estimation on most formats. You can test the idea using it, before you code it up.
I don't know what your data is like or how much automation you need. This won't work if it consists of peaks without clear ridges (but then you probably wouldn't be asking the question).
startPoint = highest point in DEM (or on ridge)
curPoint = startPoint;
line += curPoint;
Loop
curPoint = highest point adjacent to curPoint not in line; // (Don't backtrack)
line += curPoint;
Repeat
Curious what the real solution turns out to be.
Edited to add: depending on the coarseness of your data set, 'point' can be a single point or a smoothed average of a local region of points.
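A concrete C# sketch of that greedy walk over a gridded DEM (all names are illustrative, and "adjacent" here means the 8 neighbouring cells):

// Requires System and System.Collections.Generic.
// Follow a ridge greedily: from the start cell, repeatedly step to the highest
// neighbouring cell that is not already on the line.
static List<(int x, int y)> FollowRidge(double[,] dem, (int x, int y) start, int maxSteps = 10000)
{
    int h = dem.GetLength(0), w = dem.GetLength(1);
    var line = new List<(int x, int y)> { start };
    var visited = new HashSet<(int x, int y)> { start };
    var cur = start;

    for (int step = 0; step < maxSteps; step++)
    {
        (int x, int y)? best = null;
        double bestHeight = double.NegativeInfinity;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
            {
                if (dx == 0 && dy == 0) continue;
                var next = (x: cur.x + dx, y: cur.y + dy);
                if (next.x < 0 || next.x >= w || next.y < 0 || next.y >= h) continue;
                if (visited.Contains(next)) continue;      // don't backtrack
                if (dem[next.y, next.x] > bestHeight)
                {
                    bestHeight = dem[next.y, next.x];
                    best = next;
                }
            }
        if (best == null) break;                           // dead end: all neighbours visited
        cur = best.Value;
        line.Add(cur);
        visited.Add(cur);
    }
    return line;
}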
http://en.wikipedia.org/wiki/Ridge_detection
You can treat the elevation as you would a grayscale color, then use a 2D edge recognition filter. There are lots of edge recognition methods available. The best would depend on your specific needs.
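For illustration, a minimal C# sketch of that idea, treating the DEM as a grayscale image and using a Sobel gradient-magnitude filter (one specific edge detector; the array layout and threshold are made up):

// dem[y, x] holds elevations; mark cells whose Sobel gradient magnitude exceeds a threshold.
static bool[,] EdgeMask(double[,] dem, double threshold)
{
    int h = dem.GetLength(0), w = dem.GetLength(1);
    var mask = new bool[h, w];
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
        {
            // Sobel kernels for the horizontal and vertical gradients.
            double gx = -dem[y - 1, x - 1] + dem[y - 1, x + 1]
                        - 2 * dem[y, x - 1] + 2 * dem[y, x + 1]
                        - dem[y + 1, x - 1] + dem[y + 1, x + 1];
            double gy = -dem[y - 1, x - 1] - 2 * dem[y - 1, x] - dem[y - 1, x + 1]
                        + dem[y + 1, x - 1] + 2 * dem[y + 1, x] + dem[y + 1, x + 1];
            mask[y, x] = Math.Sqrt(gx * gx + gy * gy) > threshold;
        }
    return mask;
}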

How to speed up marching cubes?

I'm using this marching cube algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate.
Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this.
I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls?
Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option...
I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
I know this is a bit old, but I recently implemented Marching Cubes based on much the same source. There is a LOT of inefficiency here. At a minimum if you were doing something like
for (int x=0; x<densityArrayWidth; x++)
for (int z=0; z<densityArrayLength; z++)
for (int y=0; y<densityArrayHeight; y++)
Polygonize(Gridcell, isolevel, Triangles)
Look at how many times you'd be reallocating the edgeTable and triTable! Those immediately need to move out to the overall class. I ditched the gridCell object as well, going directly from the points/values to the triangles.
In short, it isn't just the algorithmic complexity; memory allocations (and the base version does a huge number of them) take time too.
Just in case anyone else ends up here: dead-space elimination through a coarser sampling rate makes virtually no difference at all. Any remotely safe coarser sampling (i.e. allowing a border for sampling artifacts) ends up grabbing most of the grid anyway in any remotely non-trivial field.
Speeding up the underlying field evaluation (with heavy memoisation) seemed to mostly solve the performance problems.
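A hedged sketch of what such memoisation might look like in C# (the field delegate and cache key are invented for illustration; in the original question the field calculation itself was outside the asker's control):

using System;
using System.Collections.Generic;

// Cache field-strength evaluations so each grid corner is computed only once,
// even though up to eight neighbouring cubes share it.
class MemoisedField
{
    private readonly Func<int, int, int, double> field;   // the expensive evaluation
    private readonly Dictionary<(int x, int y, int z), double> cache = new Dictionary<(int, int, int), double>();

    public MemoisedField(Func<int, int, int, double> field) { this.field = field; }

    public double Evaluate(int x, int y, int z)
    {
        var key = (x, y, z);
        if (!cache.TryGetValue(key, out double value))
        {
            value = field(x, y, z);
            cache[key] = value;
        }
        return value;
    }
}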
Try marching tetrahedra instead -- the math is simpler, allowing you to consider fewer cases per cell.
Each cube has 12 edges. If you go through each cube and find 12 intersection points, you are doing 4 times too many intersection calculations: you only have to compute the 3 edges meeting at the bottom-left corner of each cube, with an extra row along the top-right of the zone, and then use a special lookup to access all the values that you have already found. I'm going to write a topic on this because it needs to be discussed and it's complicated.
Also, test for areas in space that need polygons by assessing the iso level using an octree, and skip areas far from the iso level.
I had a look at propagation, but it isn't all that reliable or efficient.
