Path generation for non-intersecting disc movement on a plane - algorithm

What I'm looking for
I have 300 or fewer discs of equal radius on a plane. At time 0 each disc is at a position. At time 1 each disc is at a potentially different position. I'm looking to generate a 2D path for each disc for times between 0 and 1 such that the discs do not intersect and the paths are relatively efficient (short) and of low curvature if possible. (for example, straight lines are preferable to squiggly lines)
Lower computation time is generally more important than exactness of solution. (for example, a little intersection is okay, and I don't necessarily need an optimal result)
However, discs shouldn't teleport through each other, stop or slow abruptly, or change direction abruptly -- the "smoother" the better. The only exceptions are times 0 and 1.
Paths can be expressed in sampled or piecewise-linear form (or better) -- I'm not worried about having truly smooth paths via splines. (I can approximate that if I need to.)
What I've tried
You can see a demo of my best attempt (via Javascript + WebGL). Be warned, it will load slowly on older computers due to the computations involved. It appears to work in Firefox/Chrome/IE11 under Windows.
In this demo I've represented each disc as an "elastic band" in 3D (that is, each disc has a position at each time) and ran a simple game-style physics engine that resolves constraints and treats each point in time like a mass with springs to the previous/next time. ('Time' in this case is just the third dimension.)
This actually works pretty well for small N (<20), but in common test cases (for example, start with the discs arranged in a circle and move each disc to the opposite point on the circle) it fails to generate convincing paths, since the constraints and elasticity propagate slowly through the springs. (For example, if I slice time into 100 discrete levels, tension in the elastic bands only propagates one level per simulation cycle.) This makes good solutions require many (>10000) iterations, which is tediously slow for my application. It also fails to reasonably resolve many N>40 cases, but that may simply be because I can't feasibly run enough iterations.
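To make the elastic-band idea concrete, here is a minimal sketch of one relaxation pass (not my demo's actual code; the Vec2/Path types, the name relax_iteration, the stiffness value and the half-overlap push are simplified placeholders). Each disc's path is a list of positions over discretised time; a pass pulls every interior sample toward the average of its temporal neighbours, then pushes apart any two discs that overlap within the same time slice:

#include <cmath>
#include <vector>

struct Vec2 { double x, y; };
using Path = std::vector<Vec2>;            // disc position at each discrete time level

// One relaxation pass over all paths (illustrative sketch, not the demo's code).
void relax_iteration(std::vector<Path> &paths, double radius, double stiffness)
{
    const int T = (int)paths[0].size();
    // 1) spring step: pull each interior sample toward its temporal neighbours
    for (Path &p : paths)
        for (int t = 1; t + 1 < T; t++)
        {
            p[t].x += stiffness * (0.5 * (p[t - 1].x + p[t + 1].x) - p[t].x);
            p[t].y += stiffness * (0.5 * (p[t - 1].y + p[t + 1].y) - p[t].y);
        }
    // 2) constraint step: separate any overlapping pair of discs in each time slice
    for (int t = 1; t + 1 < T; t++)
        for (size_t i = 0; i < paths.size(); i++)
            for (size_t j = i + 1; j < paths.size(); j++)
            {
                double dx = paths[j][t].x - paths[i][t].x;
                double dy = paths[j][t].y - paths[i][t].y;
                double d  = std::sqrt(dx * dx + dy * dy);
                double overlap = 2.0 * radius - d;
                if (overlap <= 0.0 || d < 1e-9) continue;
                double px = 0.5 * overlap * dx / d;   // push each disc half the overlap apart
                double py = 0.5 * overlap * dy / d;
                paths[i][t].x -= px; paths[i][t].y -= py;
                paths[j][t].x += px; paths[j][t].y += py;
            }
}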
What else I've tried
My initial attempt was a hill climber that started with straight-line paths which were gradually mutated. Candidate solutions that scored better than the current best replaced it. The score accounted for the amount of intersection (complete overlap scored worse than just grazing) and the length of the paths (shorter paths scored better).
This produced some surprisingly good results, but unreliably, likely getting stuck in local minima very often. It was extremely slow for N>20. I tried applying a few techniques (simulated annealing, a genetic algorithms approach, etc) in an attempt to get around the local minima issue, but I never had much success.
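For illustration, a bare-bones version of such a hill climber, reusing the Vec2/Path types from the sketch above (the scoring weights, the mutation scheme and the names score/hill_climb are simplified placeholders, not my original code):

#include <cmath>
#include <random>
#include <vector>

// Lower is better: total path length plus a heavy penalty for pairwise overlap.
double score(const std::vector<Path> &paths, double radius)
{
    double s = 0.0;
    const int T = (int)paths[0].size();
    for (const Path &p : paths)                              // path length term
        for (int t = 1; t < T; t++)
            s += std::hypot(p[t].x - p[t - 1].x, p[t].y - p[t - 1].y);
    for (int t = 0; t < T; t++)                              // intersection penalty term
        for (size_t i = 0; i < paths.size(); i++)
            for (size_t j = i + 1; j < paths.size(); j++)
            {
                double d = std::hypot(paths[j][t].x - paths[i][t].x,
                                      paths[j][t].y - paths[i][t].y);
                if (d < 2.0 * radius) s += 100.0 * (2.0 * radius - d);
            }
    return s;
}

void hill_climb(std::vector<Path> &best, double radius, int iterations)
{
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> jitter(-radius, radius);
    std::uniform_int_distribution<size_t> pick_disc(0, best.size() - 1);
    std::uniform_int_distribution<int>    pick_time(1, (int)best[0].size() - 2);

    double best_score = score(best, radius);
    for (int it = 0; it < iterations; it++)
    {
        std::vector<Path> candidate = best;
        size_t d = pick_disc(rng);                           // mutate one interior waypoint
        int    t = pick_time(rng);
        candidate[d][t].x += jitter(rng);
        candidate[d][t].y += jitter(rng);
        double s = score(candidate, radius);
        if (s < best_score) { best = std::move(candidate); best_score = s; }
    }
}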
What I'm trying
I'm optimizing the "elastic band" model so that tension and constraints propagate much more quickly in the time dimension. This would save a good number of iterations in many cases; however, in highly constrained scenarios (for example, many discs trying to cross the same location) an untenable number of iterations would still be required. I'm no expert on how to solve constraints or propagate springs more quickly (I've tried reading a few papers on non-stretchable cloth simulation, but I haven't been able to figure out whether they apply), so I'd be interested to know if there's a good way to go about this.
Ideas on the table
Spektre has implemented a very fast RTS-style unit movement algorithm that works admirably well. It's fast and elegant, however it suffers from RTS-movement style problems: sudden direction changes, units can stop abruptly to resolve collisions. Additionally, units do not all arrive at their destination at the same time, which is essentially an abrupt stop. This may be a good heuristic to make viable non-smooth paths after which the paths could be resampled in time and a "smoothing" algorithm could be run (much like the one used in my demo.)
Ashkan Kzme has suggested that the problem may be related to network flows. It would appear that the minimum-cost flow problem could work, as long as space and time could be discretized in a reasonable manner and the running times could be kept down. The advantage here is that it's a well-studied set of problems, but sudden velocity changes would still be an issue and some sort of "smoothing" post-step may be desirable. The stumbling block I'm currently having is deciding on a network representation of space-time that wouldn't result in discs teleporting through each other.
Jay Kominek posted an answer that uses a nonlinear optimizer to optimize quadratic Bezier curves with some promising results.

I have played with this for fun a bit and here is the result:
Algorithm:
process each disc:
set its speed to a * destination_vector (a is a multiplicative constant)
then limit the speed to the constant v
test whether the new iterated position conflicts with any other disc
if it does, rotate the speed in one direction by some angle step ang
loop until a free direction is found or the full circle has been covered
if no free direction is found, mark the disc as stuck
This is how it looks for the circle to inverse-circle paths:
This is how it looks for random to random paths:
Stuck discs are yellow (none in these cases) and discs that are not moving are already at their destination. This can also get stuck if there is no path, for example if a disc already at its destination encircles another disc's destination. To avoid that you would also need to move the colliding disc... You can play with the ang, a, v constants to get a different appearance, and you could also try a random direction for the angle rotation to avoid the swirling/twister movement.
Here is the source code I used (C++):
//---------------------------------------------------------------------------
const int discs =23; // number of discs
const double disc_r=5; // disc radius
const double disc_dd=4.0*disc_r*disc_r; // (2*disc_r)^2 = squared distance at which two discs touch
struct _disc
{
double x,y,vx,vy; // actual position
double x1,y1; // destination
bool _stuck; // is currently stuck?
};
_disc disc[discs]; // discs array
//---------------------------------------------------------------------------
void disc_generate0(double x,double y,double r) // circle position to inverse circle destination
{
int i;
_disc *p;
double a,da;
for (p=disc,a=0,da=2.0*M_PI/double(discs),i=0;i<discs;a+=da,i++,p++)
{
p->x =x+(r*cos(a));
p->y =y+(r*sin(a));
p->x1=x-(r*cos(a));
p->y1=y-(r*sin(a));
p->vx=0.0;
p->vy=0.0;
p->_stuck=false;
}
}
//---------------------------------------------------------------------------
void disc_generate1(double x,double y,double r) // random position to random destination
{
int i,j;
_disc *p,*q;
double a,da;
Randomize();
for (p=disc,a=0,da=2.0*M_PI/double(discs),i=0;i<discs;a+=da,i++,p++)
{
for (j=-1;j<0;)
{
p->x=x+(2.0*Random(r))-r;
p->y=y+(2.0*Random(r))-r;
for (q=disc,j=0;j<discs;j++,q++)
if (i!=j)
if (((q->x-p->x)*(q->x-p->x))+((q->y-p->y)*(q->y-p->y))<disc_dd)
{ j=-1; break; }
}
for (j=-1;j<0;)
{
p->x1=x+(2.0*Random(r))-r;
p->y1=y+(2.0*Random(r))-r;
for (q=disc,j=0;j<discs;j++,q++)
if (i!=j)
if (((q->x1-p->x1)*(q->x1-p->x1))+((q->y1-p->y1)*(q->y1-p->y1))<disc_dd)
{ j=-1; break; }
}
p->vx=0.0;
p->vy=0.0;
p->_stuck=false;
}
}
//---------------------------------------------------------------------------
void disc_iterate(double dt) // iterate positions
{
int i,j,k;
_disc *p,*q;
double v=25.0,a=10.0,x,y;
const double ang=10.0*M_PI/180.0,ca=cos(ang),sa=sin(ang);
const int n=int(2.0*M_PI/ang); // number of angle steps in a full circle
for (p=disc,i=0;i<discs;i++,p++)
{
p->vx=a*(p->x1-p->x); if (p->vx>+v) p->vx=+v; if (p->vx<-v) p->vx=-v;
p->vy=a*(p->y1-p->y); if (p->vy>+v) p->vy=+v; if (p->vy<-v) p->vy=-v;
x=p->x; p->x+=(p->vx*dt);
y=p->y; p->y+=(p->vy*dt);
p->_stuck=false;
for (k=0,q=disc,j=0;j<discs;j++,q++)
if (i!=j)
if (((q->x-p->x)*(q->x-p->x))+((q->y-p->y)*(q->y-p->y))<disc_dd)
{
k++; if (k>=n) { p->x=x; p->y=y; p->_stuck=true; break; }
double ox=p->vx,oy=p->vy; // rotate velocity by +ang (temporaries so vy uses the old vx)
p->vx=+(ox*ca)+(oy*sa);
p->vy=-(ox*sa)+(oy*ca);
p->x=x+(p->vx*dt);
p->y=y+(p->vy*dt);
j=-1; q=disc-1;
}
}
}
//---------------------------------------------------------------------------
Usage is simple:
call generate0/1 with center and radius of your plane where discs will be placed
call iterate (dt is time elapsed in seconds)
draw the scene
if you want to change this to use t=<0,1>
loop iterate until all discs are at their destination or a timeout is hit
remember any change in speed for each disc in a list
you need the position or speed vector and the time at which it occurs
after the loop, rescale all the recorded times to the range <0,1> (see the sketch after this list)
render/animate the rescaled lists
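A minimal sketch of that recording-and-rescaling idea (the Waypoint struct, the record_and_rescale name, the arrival tolerance and the timeout handling are illustrative, not part of the code above; it builds on the disc/disc_iterate globals):

#include <vector>

struct Waypoint { double t, x, y; };      // time and position of one recorded sample

std::vector<std::vector<Waypoint>> record_and_rescale(double dt, double timeout)
{
    std::vector<std::vector<Waypoint>> tracks(discs);
    double t = 0.0;
    bool done = false;
    while (!done && t < timeout)
    {
        disc_iterate(dt);
        t += dt;
        done = true;
        for (int i = 0; i < discs; i++)
        {
            tracks[i].push_back({ t, disc[i].x, disc[i].y });
            double dx = disc[i].x1 - disc[i].x, dy = disc[i].y1 - disc[i].y;
            if (dx * dx + dy * dy > 0.01) done = false;    // this disc is not there yet
        }
    }
    for (auto &track : tracks)                             // rescale recorded times to <0,1>
        for (Waypoint &w : track) w.t /= t;
    return tracks;
}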
[Notes]
My test runs in real time, but I did not apply the <0,1> range and did not use too many discs, so you need to test whether this is fast enough for your setup.
To speed up you can:
enlarge the angle step
after a rotation, test the collision against the last collided disc first, and only when that is free test the rest...
segment the discs into regions (overlapping by a radius) and handle each region separately (see the grid sketch below)
I also think some field approach could speed things up, such as creating a field map once in a while to better determine the obstacle-avoidance direction
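To illustrate the region idea, here is a minimal uniform-grid broad phase (the names build_grid/nearby, the cell size and the hash are illustrative, not part of the answer): only discs in the same or a neighbouring cell need the exact distance test, which cuts the O(N^2) collision loop down considerably.

#include <cmath>
#include <unordered_map>
#include <vector>

using Grid = std::unordered_map<long long, std::vector<int>>;

static long long cell_key(long long cx, long long cy) { return cx * 1000003LL + cy; }

// Hash every disc into a square cell of size 2*disc_r.
Grid build_grid()
{
    Grid g;
    const double cell = 2.0 * disc_r;
    for (int i = 0; i < discs; i++)
        g[cell_key((long long)std::floor(disc[i].x / cell),
                   (long long)std::floor(disc[i].y / cell))].push_back(i);
    return g;
}

// Candidate neighbours of disc i: everything in the 3x3 block of cells around it.
std::vector<int> nearby(const Grid &g, int i)
{
    std::vector<int> out;
    const double cell = 2.0 * disc_r;
    long long cx = (long long)std::floor(disc[i].x / cell);
    long long cy = (long long)std::floor(disc[i].y / cell);
    for (long long dx = -1; dx <= 1; dx++)
        for (long long dy = -1; dy <= 1; dy++)
        {
            auto it = g.find(cell_key(cx + dx, cy + dy));
            if (it == g.end()) continue;
            for (int j : it->second) if (j != i) out.push_back(j);
        }
    return out;
}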
[edit1] some tweaks to avoid infinite oscillations around obstacles
With more discs, some of them get stuck bouncing around an already stopped disc. To avoid that, just change the ang step direction once in a while; this is the result:
You can see the oscillating bouncing before the finish.
This is the changed source:
void disc_iterate(double dt) // iterate positions
{
int i,j,k;
static int cnt=0;
_disc *p,*q;
double v=25.0,a=10.0,x,y;
const double ang=10.0*M_PI/180.0,ca=cos(ang),sa=sin(ang);
const int n=double(2.0*M_PI/ang);
// process discs
for (p=disc,i=0;i<discs;i++,p++)
{
// compute and limit speed
p->vx=a*(p->x1-p->x); if (p->vx>+v) p->vx=+v; if (p->vx<-v) p->vx=-v;
p->vy=a*(p->y1-p->y); if (p->vy>+v) p->vy=+v; if (p->vy<-v) p->vy=-v;
// store old and compute new position
x=p->x; p->x+=(p->vx*dt);
y=p->y; p->y+=(p->vy*dt);
p->_stuck=false;
// test if colliding
for (k=0,q=disc,j=0;j<discs;j++,q++)
if (i!=j)
if (((q->x-p->x)*(q->x-p->x))+((q->y-p->y)*(q->y-p->y))<disc_dd)
{
k++; if (k>=n) { p->x=x; p->y=y; p->_stuck=true; break; } // if full circle covered? stop
if (int(cnt&128)) // change the rotation direction every 128 iterations
{
// rotate +ang
double ox=p->vx,oy=p->vy; // temporaries so vy uses the old vx
p->vx=+(ox*ca)+(oy*sa);
p->vy=-(ox*sa)+(oy*ca);
}
else{
//rotate -ang
double ox=p->vx,oy=p->vy; // temporaries so vy uses the old vx
p->vx=+(ox*ca)-(oy*sa);
p->vy=+(ox*sa)+(oy*ca);
}
// update new position and test from the start again
p->x=x+(p->vx*dt);
p->y=y+(p->vy*dt);
j=-1; q=disc-1;
}
}
cnt++;
}

It isn't perfect, but my best idea has been to move the discs along quadratic Bezier curves. That means you've got just 2 free variables per disc that you're trying to find values for.
At that point, you can "plug" an error function into a nonlinear optimizer. The longer you're willing to wait, the better your solution will be, in terms of discs avoiding each other.
Only one actual hit:
This one doesn't bother displaying hits; the discs actually start overlapped:
I've produced a full example, but the key is the error function to be minimized, which I reproduce here:
double errorf(unsigned n, const double *pts, double *grad,
void *data)
{
problem_t *setup = (problem_t *)data;
double error = 0.0;
for(int step=0; step<setup->steps; step++) {
double t = (1.0+step) / (1.0+setup->steps);
for(int i=0; i<setup->N; i++)
quadbezier(&setup->starts[2*i],
&pts[2*i],
&setup->stops[2*i],
t,
&setup->scratch[2*i]);
for(int i=0; i<setup->N; i++)
for(int j=i+1; j<setup->N; j++) {
double d = distance(&setup->scratch[2*i],
&setup->scratch[2*j]);
d /= RADIUS;
error += (1.0/d) * (1.0/d);
}
}
return error / setup->steps;
}
Ignore n, grad and data. setup describes the specific problem being optimized, number of discs, and where they start and stop. quadbezier does the Bezier curve interpolation, placing its answer into ->scratch. We check ->steps points part way along the path, and measure how close the discs are to one another at each step. To make the optimization problem smoother, it doesn't have a hard switch when the discs start touching, it just tries to keep them all as far apart from one another as possible.
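For readers who don't open the repository, here is a plausible sketch of the two helpers the error function relies on (the real definitions are in the repository linked below; these are just illustrative stand-ins for their behaviour):

#include <math.h>

// Evaluate a quadratic Bezier curve at parameter t: p0 and p2 are the fixed
// endpoints, p1 is the free control point being optimized; out receives (x, y).
void quadbezier(const double *p0, const double *p1, const double *p2,
                double t, double *out)
{
    double u = 1.0 - t;
    out[0] = u*u*p0[0] + 2.0*u*t*p1[0] + t*t*p2[0];
    out[1] = u*u*p0[1] + 2.0*u*t*p1[1] + t*t*p2[1];
}

// Euclidean distance between two 2D points stored as (x, y) pairs.
double distance(const double *a, const double *b)
{
    double dx = a[0] - b[0], dy = a[1] - b[1];
    return sqrt(dx*dx + dy*dy);
}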
Completely compilable code, Makefile and some Python for turning a bunch of quadratic bezier curves into a series of images is available at https://github.com/jkominek/discs
Performance is a bit sluggish on huge numbers of points, but there are a number of options for improvement.
If the user is making minor tweaks to the starting and finishing positions, then after every tweak, rerun the optimization in the background, using the previous solution as the new starting point. Fixing up a close solution should be faster than recreating it from scratch every time.
Parallelize the n^2 loop over all points.
Check to see if other optimization algorithms will do better on this data. Right now it starts with a global optimization pass, and then does a local optimization pass. There are algorithms which already "know" how to do that sort of thing, and are probably smarter about it.
If you can figure out how to compute the gradient function for free or close to it, I'm sure it would be worth doing so and switching to algorithms that can make use of the gradient information. It might be worth it even if the gradient isn't cheap.
Replace the whole steps thing with a suboptimization that finds the t at which the two discs are closest, and then uses that distance for the error. Figuring out the gradient for that suboptimization should be much easier.
Better data structures for the intermediate points, so you don't perform a bunch of unnecessary distance calculations for discs that are very far apart.
Probably more?

The usual solution for this kind of problem is to use what is called a "heat map" (or "influence map"). For every point in the field, you compute a "heat" value. The disks move towards high values and away from cold values. Heat maps are good for your type of problem because they are very simple to program, yet can generate sophisticated, AI-like behavior.
For example, imagine just two disks. If your heat map rule is equi-radial, then the disks will just move towards each other, then back away, oscillating back and forth. If your rule randomizes intensity on different radials, then the behavior will be chaotic. You can also make the rule depend on velocity in which case disks will accelerate and decelerate as they move around.
Generally speaking, the heat map rule should make areas "hotter" as they approach some optimal distance from a disk. Places that are too near a disk, or too far away, get "colder". By changing this optimal distance you can determine how closely the disks congregate.
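As a toy illustration of such a rule (entirely a sketch, not taken from the articles below): each disk contributes a bump of heat that peaks at a chosen optimal distance from its centre and falls off when a sample point is nearer or farther than that.

#include <cmath>
#include <vector>

struct Disk { double x, y; };

// Heat at a query point: sum of per-disk bumps that are maximal at optimal_dist
// from each disk's centre and decay away from that ring.
double heat_at(double px, double py, const std::vector<Disk> &disks,
               double optimal_dist, double falloff)
{
    double heat = 0.0;
    for (const Disk &d : disks)
    {
        double dist = std::hypot(px - d.x, py - d.y);
        double miss = dist - optimal_dist;                 // 0 exactly at the optimal distance
        heat += std::exp(-(miss * miss) / (falloff * falloff));
    }
    return heat;
}
// A disk is then steered toward nearby sample points with higher heat values.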
Here are a couple of articles with example code showing how to use heat maps:
http://haufler.org/2012/05/26/beating-the-scribd-ai-challenge-implementing-traits-through-heuristics-part-1/
http://www.gamedev.net/page/resources/_/technical/artificial-intelligence/the-core-mechanics-of-influence-mapping-r2799
Game AI Pro, Volume 2, chapter on Heat Maps

I don't have enough rep to comment yet, so sorry for the non-answer.
But to the RTS angle, RTS's generally use the A* algorithm for path finding. Is there a reason you're insisting on using a physics-based model?
Secondly, the attempt you linked, which operates rather smoothly but with the acceleration in the middle, behaves the way I initially expected. Since your model treats it as a rubber band, it is basically looking for which way to rotate for the shortest path to the desired location.
If you aren't worried about a physical approach, I would attempt the following:
Try to move directly toward the target. If a disc collides, it should attempt to roll clockwise around its most recent collision until it reaches a position 90 degrees from the vector between its current location and the target location.
If we assume a test case of 5 in a row at the top of a box and five in a row at the bottom, they will move directly toward each other until they collide. The entire top row will slide to the right until they fall over the edge of the bottom row as it moves to the left and floats over the edge of the top row. (Think of what the whiskey and water shot glass trick looks like when it starts)
Since the motion is not determined by a potential energy stored in the spring which will accelerate the object during a rotation, you have complete control over how the speed changes during the simulation.
In a circular test like you have above, if all disks are initialized with the same speed, the entire clump will go to the middle, collide and twist as a unit for approximately a quarter turn at which point they will break away and head for their goal.
If the timing is lightly randomized, I think you'll get the behavior you're looking for.
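A rough sketch of that move-or-roll rule, as I read the description above (the step_disc helper, the step size and the single-blocker handling are my own assumptions):

#include <cmath>

struct Pt { double x, y; };

// Advance one disc by `step` toward its target; if that would overlap the
// blocking disc, slide along the blocker's clockwise tangent instead.
Pt step_disc(Pt pos, Pt target, Pt blocker, double radius, double step)
{
    double tx = target.x - pos.x, ty = target.y - pos.y;
    double tl = std::hypot(tx, ty);
    if (tl < 1e-9) return pos;                             // already at the target
    Pt next = { pos.x + step * tx / tl, pos.y + step * ty / tl };

    if (std::hypot(next.x - blocker.x, next.y - blocker.y) >= 2.0 * radius)
        return next;                                       // direct step is free

    double rx = pos.x - blocker.x, ry = pos.y - blocker.y; // radial vector from blocker
    double rl = std::hypot(rx, ry);
    return { pos.x + step * (ry / rl),                     // clockwise tangent in y-up
             pos.y - step * (rx / rl) };                   // coordinates (flips if y is down)
}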
I hope this helps.

Related

Algorithm for evenly arranging steps in 2 directions

I am currently programming the controller for a CNC machine, and therefore I need to get the number of stepper motor steps in each direction to get from point A to B.
For example, point A's coordinates are x=0 and y=0 and B's coordinates are x=15 and y=3. So I have to go 15 steps on the x axis and 3 on the y axis.
But how do I interleave those two values in a way that is smooth (i.e. not all of x first and then all of y, which results in really ugly lines)?
In my example with x=15 and y=3 I want it arranged like that:
for 3 times do:
x:4 steps y:0 steps
x:1 steps y:1 step
But how can I get these numbers from an algorithm?
I hope you get what my problem is, thanks for your time,
Luca
There are 2 major issues here:
trajectory
this can be handled by any interpolation/rasterization like:
DDA
Bresenham
The DDA is your best option, as it can easily handle any number of dimensions and can be computed in both integer and floating-point arithmetic. It's also faster (this was not true in the x386 days, but CPU architecture has changed since then).
And even if you have just a 2D machine, the interpolation itself will most likely be multidimensional, as you will probably add other quantities like holding force, tool rpm, pressures for whatever, etc... These have to be interpolated along your line in the same way.
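For reference, a minimal integer DDA over an arbitrary number of axes (a sketch of the standard technique, not code from this answer): the dominant axis steps on every iteration, and every other axis accumulates an error term and steps when it overflows, which interleaves the steps evenly.

#include <cstdio>
#include <cstdlib>
#include <vector>

// delta[a] = signed number of steps to travel on axis a; one "step axis" line
// is printed per motor pulse, interleaved Bresenham/DDA style.
void dda_steps(const std::vector<long> &delta)
{
    size_t n = delta.size(), major = 0;
    for (size_t a = 1; a < n; a++)
        if (std::labs(delta[a]) > std::labs(delta[major])) major = a;
    long len = std::labs(delta[major]);
    std::vector<long> acc(n, 0);
    for (long i = 0; i < len; i++)
        for (size_t a = 0; a < n; a++)
        {
            if (a == major)
            {   std::printf("step axis %d dir %+d\n", (int)a, delta[a] > 0 ? 1 : -1); continue; }
            acc[a] += std::labs(delta[a]);                  // error term for the minor axis
            if (2 * acc[a] >= len && delta[a] != 0)
            {   acc[a] -= len;
                std::printf("step axis %d dir %+d\n", (int)a, delta[a] > 0 ? 1 : -1); }
        }
}

int main() { dda_steps({ 15, 3 }); }                        // the x=15, y=3 example above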
speed
This one is much, much more complicated. You need to drive your motors smoothly from the start position to the end, taking these into account:
line start/end speeds so you can smoothly connect more lines together
top speed (dependent on the manufacturing process, usually constant for each tool)
motor/mechanics resonance
motor speed limits: start/stop and top
When writing about speed I mean the frequency [Hz] of the motor steps, or the physical speed of the tool [m/s] or [mm/s].
Linear interpolation is not good for this; I am using cubics instead, as they can be smoothly connected and provide a good shape for the speed change. See:
How can i produce multi point linear interpolation?
The interpolation cubic (a form of Catmull-Rom) is exactly what I use for tasks like this (and I derived it for this very purpose).
The main problem is the startup of the motor. You need to drive from 0 Hz up to some frequency, but the usual stepper motor has resonances at the lower frequencies, and as these cannot be avoided on multidimensional machines, you need to spend as little time at such frequencies as possible. There are also other means of handling this: shifting the resonance of the kinematics by adding weights or changing the shape, and adding inertial dampeners on the motors themselves (rotary motors only).
So the usual speed control for a single start/stop line looks like this:
So you should have 2 cubics, one for starting up and one for stopping, dividing your line into 2 joined segments. You have to do it so the start and stop frequencies are configurable...
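A small sketch of such a ramp (an illustration using a cubic Hermite ease between a configurable start frequency and the top frequency; not this answer's code):

#include <cstdio>

// Step frequency [Hz] at step i of a `total`-step line: cubic ease from f_start
// up to f_top over the first `ramp` steps, cruise, then ease back down at the end.
double step_frequency(long i, long total, long ramp, double f_start, double f_top)
{
    auto ease = [](double t) { return t * t * (3.0 - 2.0 * t); };   // smooth 0..1 cubic
    if (i < ramp)          return f_start + (f_top - f_start) * ease((double)i / ramp);
    if (i > total - ramp)  return f_start + (f_top - f_start) * ease((double)(total - i) / ramp);
    return f_top;
}

int main()
{
    for (long i = 0; i <= 100; i += 10)                              // tiny demo: 100-step line
        std::printf("step %3ld -> %6.1f Hz\n", i, step_frequency(i, 100, 25, 50.0, 400.0));
}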
Now how to merge speed and time? I am using discrete non-linear time for this:
Find start point (time) of each cycle in a sine wave
It's the same process, but instead of time there is angle. The frequency of the sine wave changes linearly there, so that part you need to replace with the cubic. Also, you don't have a sine wave, so instead use the resulting time as the interpolation parameter for the DDA... or compare it with the time of the next step, and if it is bigger or equal, do the step and compute the next one...
Here is another example of this technique:
how to control the speed of animation, using a Bezier curve?
This one actually does exactly what you should be doing... interpolate the DDA with the speed controlled by a cubic curve.
When done, you need to build another layer on top of this that configures the speeds for each line of the trajectory so that the result is as fast as possible while matching your machine speed limits, and also matching the tool speed if possible. This part is the most complicated one...
To show you what is ahead of you: when I put all this together, my CNC interpolator came to ~166 KByte of pure C++ code, not counting dependent libs like vector math, dynamic lists, communication, etc... The whole control code is ~2.2 MByte.
If your controller can issue commands faster than the steppers can actually turn, you probably want to use some kind of event-driven timer-based system. You need to calculate when you trigger each of the motors so that the motion is distributed evenly on both axes.
The longer motion should be programmed as fast as it can go (that is, if the motor can do 100 steps per second, pulse it every 1/100th of a second) and the other motion at longer intervals.
Edit: the paragraph above assumes that you want to move the tool as fast as possible. This is not normally the case. Usually, the tool speed is given, so you need to calculate the speed along X and Y (and maybe also Z) axes separately from that. You also should know what tool travel distance corresponds to one step of the motor. So you can calculate the number of steps you need to do per time unit, and also duration of the entire movement, and thus time intervals between successive stepper pulses along each axis.
So you program your timer to fire after the smallest of the calculated time intervals, pulse the corresponding motor, program the timer for the next pulse, and so on.
This is a simplification because motors, like all physical objects, have inertia and need time to accelerate/decelerate. So you need to take this into account if you want to produce smooth movement. There are more considerations to be taken into account. But this is more about physics than programming. The programming model stays the same. You model your machine as a physical object that reacts to known stimuli (stepper pulses) in some known way. Your program calculates timings for stepper pulses from the model, and sits in an event loop, waiting for the next time event to occur.
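As a concrete illustration of that calculation (example numbers only; the step size and tool speed below are assumptions): given the tool speed and the per-axis travel, work out the move duration and the pulse interval for each axis.

#include <cmath>
#include <cstdio>

int main()
{
    const double mm_per_step = 0.01;          // assumed tool travel per motor step
    const double tool_speed  = 20.0;          // assumed commanded tool speed, mm/s
    const double move_x = 15.0, move_y = 3.0; // travel on each axis, mm

    double length   = std::hypot(move_x, move_y);   // length of the move, mm
    double duration = length / tool_speed;          // time for the whole move, s
    double pulses_x = move_x / mm_per_step;         // motor pulses needed per axis
    double pulses_y = move_y / mm_per_step;

    // The timer is always programmed for whichever axis is due to pulse sooner.
    std::printf("duration %.3f s, X pulse every %.6f s, Y pulse every %.6f s\n",
                duration, duration / pulses_x, duration / pulses_y);
    return 0;
}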
Consider Bresenham's line drawing algorithm - he invented it for plotters many years ago. (Also the DDA one.)
In your case the X/Y displacements have a common divisor GCD=3 > 1, so the steps alternate evenly, but in the general case they won't be distributed so uniformly.
You should take the ratio between the distances along each coordinate, and then alternate between steps along the coordinate with the longest distance and steps that move a single unit on both coordinates.
Here is an implementation in JavaScript -- using only the simplest of its syntax:
function steps(a, b) {
const dx = Math.abs(b.x - a.x);
const dy = Math.abs(b.y - a.y);
const sx = Math.sign(b.x - a.x); // sign = -1, 0, or 1
const sy = Math.sign(b.y - a.y);
const longest = Math.max(dx, dy);
const shortest = Math.min(dx, dy);
const ratio = shortest / longest;
const series = [];
let longDone = 0;
let remainder = 0;
for (let shortStep = 0; shortStep < shortest; shortStep++) {
const steps = Math.ceil((0.5 - remainder) / ratio);
if (steps > 1) {
if (dy === longest) {
series.push( {x: 0, y: (steps-1)*sy} );
} else {
series.push( {x: (steps-1)*sx, y: 0} );
}
}
series.push( {x: sx, y: sy} );
longDone += steps;
remainder += steps*ratio-1;
}
if (longest > longDone) {
if (dy === longest) {
series.push( {x: 0, y: (longest-longDone)*sy} ); // keep the sign, consistent with the loop above
} else {
series.push( {x: (longest-longDone)*sx, y: 0} );
}
}
return series;
}
// Demo
console.log(steps({x: 0, y: 0}, {x: 3, y: 15}));
Note that the first segment is shorter than all the others, so that it is more symmetrical with how the sequence ends near the second point. If you don't like that, then replace the occurrence of 0.5 in the code with either 0 or 1.

How do I synchronize scale and position of map and point layers in d3.js?

I've seen many example maps in d3 where points added to a map automatically align as expected, but in code I've adapted from http://bl.ocks.org/bycoffe/3230965 the points I've added do not line up with the map below.
Example here: https://naltmann.github.io/d3-geo-collision/
(the points should match up with some major US cities)
I'm pretty sure the difference is due to the code around scale/range, but I don't know how to unify them between the map and points.
Aligning geographic features geographically with your example will be challenging - first you are projecting points and then scaling x,y:
node.cx = xScale(projection(node.coordinates)[0]);
node.cy = yScale(projection(node.coordinates)[1]);
The ranges for the scales are interesting in that both limits of both ranges are negative; this might be an attempt to rectify the positioning of points due to the cumulative nature of the forces on the points:
.on('tick', function(e) {
k = 10 * e.alpha;
for (i=0; i < nodes.length; i++) {
nodes[i].x += k * nodes[i].cx
nodes[i].y += k * nodes[i].cy
This is challenging as if we remove the scales, the points move farther and farther right and down. This cumulative nature means that with each tick the points drift further and further from recognizable geographic coordinates. This is fine when dealing with a set of geographic data that undergoes the same transformation, but when dealing with a background that doesn't undergo the same transformation, it's a bit hard.
I'll note that if you want a map width of 1800 and a height of 900, you should set the mercator projection's translate to [1800/2,900/2] and the scale to something like 1800/Math.PI/2
The disconnection between geographic coordinates and force coordinates appears to be very difficult to rectify. Any solution for this particular layout and dimensions is likely to fail on different layouts and dimensions.
Instead I'd suggest attempting to use only a projection to place coordinates and not cumulatively adding force changes to each point. This is the short answer to your question.
For a longer answer, my first thought was to get rid of the collision function and use an anchor point linked to a floating point for each city, only drawing the floating point (using link distance to keep them close). This is likely a cleaner solution, but one that is unfortunately completely different than what you've attempted.
However, my second thoughts were more towards keeping your example, but removing the scales (and the cumulative forces) and reducing the forces to zero so that the collision function can work without interference. Based on those thoughts, here's a demonstration of a possible solution.

Path Tracing algorithm - Need help understanding key point

So the Wikipedia page for path tracing (http://en.wikipedia.org/wiki/Path_tracing) contains a naive implementation of the algorithm with the following explanation underneath:
"All these samples must then be averaged to obtain the output color. Note this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance-sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray - which is the only ray through which any radiance will be reflected - is zero. In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte-Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be 1)."
The part I'm having trouble understanding is the part in bold (dividing the reflectance by the probability density function of the sampling scheme). I am familiar with PDFs but I am not quite sure how they fit in here. If we stick to the mirror example, what would be the PDF value we would divide by? Why? How would I go about finding the PDF value to divide by if I was using an arbitrary BRDF such as the Phong reflection model or the Cook-Torrance reflection model, etc.? Lastly, why do we divide by the PDF instead of multiplying? If we divide, don't we give more weight to a direction with a lower probability?
Let's assume that we have only materials without color (greyscale). Then, their BRDF at each point can be expressed as a single-valued function:
float BDRF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit);
Here, phi and theta are the azimuth and zenith angles of the two rays under consideration. For pure Lambertian reflection, this function would look like this:
float lambertBRDF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit)
{
return albedo*1/pi*cos(theta_out);
}
albedo ranges from 0 to 1 - this measures how much of the incoming light is reemitted. The factor 1/pi ensures that the integral of BRDF over all outgoing vectors does not exceed 1. With the naive approach of the Wikipedia article (http://en.wikipedia.org/wiki/Path_tracing), one can use this BRDF as follows:
Color TracePath(Ray r, depth) {
/* .... */
Ray newRay;
newRay.origin = r.pointWhereObjWasHit;
newRay.direction = RandomUnitVectorInHemisphereOf(normal(r.pointWhereObjWasHit));
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected*lambertBRDF(r.phi,r.theta,newRay.phi,newRay.theta,r.pointWhereObjWasHit);
}
As mentioned in the article and by Ross, this random sampling is unfortunate because it traces incoming directions (newRay's) from which little light is reflected with the same probability as directions from which there is lots of light. Instead, directions from which much light is reflected to the observer should be selected preferentially, so that the sample rate per contribution to the final color is equal over all directions. For that, one needs a way to generate random rays from a probability distribution. Let's say there exists a function that can do that; this function takes as input the desired PDF (which ideally should be equal to the BRDF) and the incoming ray:
vector RandomVectorWithPDF(function PDF(p_i,t_i,p_o,t_o,point x), Ray incoming)
{
// this function is responsible to create random Rays emanating from x
// with the probability distribution PDF. Depending on the complexity of PDF,
// this might be somewhat involved. It is possible, however, to do it for Lambertian
// reflection (how exactly is math, not programming):
vector randomVector;
if(PDF==lambertBRDF)
{
float phi = uniformRandomNumber(0,2*pi);
float rho = acos(sqrt(uniformRandomNumber(0,1))); // zenith angle for cosine-weighted sampling
float theta = pi/2-rho; // elevation (not needed below; rho is the zenith angle)
randomVector = getVectorFromAzimuthZenithAndNormal(phi,rho,normal(incoming.whereObjectWasHit));
}
else // deal with other PDFs
return randomVector;
}
The code in the TracePath routine would then simply look like this:
newRay.direction = RandomVectorWithPDF(lambertBRDF,r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected;
Because the bright directions are preferred in the choice of samples, you do not have to weight them again by applying the BRDF as a scaling factor to reflected. However, if PDF and BRDF are different for some reason, you would have to scale down the output whenever PDF>BRDF (if you picked too many samples from the respective direction) and enhance it when you picked too few.
In code:
newRay.direction = RandomVectorWithPDF(PDF,r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected*BRDF(...)/PDF(...);
The output is best, however, if BRDF/PDF is equal to 1.
The question remains: why can't one always choose the perfect PDF which is exactly equal to the BRDF? First, some random distributions are harder to compute than others. For example, if there was a slight variation in the albedo parameter, the algorithm would still do much better for the non-naive sampling than for uniform sampling, but the correction term BRDF/PDF would be needed for the slight variations. Sometimes, it might even be impossible to do at all. Imagine a colored object with different reflective behavior for red, green and blue - you could either render in three passes, one for each color, or use an average PDF, which fits all color components approximately, but none perfectly.
How would one go about implementing something like Phong shading? For simplicity, I still assume that there is only one color component, and that the ratio of diffuse to specular reflection is 60% / 40% (the notion of ambient light makes no sense in path tracing). Then my code would look like this:
if(uniformRandomNumber(0,1)<0.6) //diffuse reflection
{
newRay.direction=RandomVectorWithPDF(lambertBRDF,r);
reflected = TracePath(newRay,depth+1)/0.6;
}
else //specular reflection
{
newRay.direction=RandomVectorWithPDF(specularPDF,r);
reflected = TracePath(newRay,depth+1)*specularBRDF/specularPDF/0.4;
}
return emittance + reflected;
Here specularPDF is a distribution with a narrow peak around the reflected ray (theta_in=theta_out, phi_in=phi_out+pi) for which a way to create random vectors is available, and specularBRDF returns the specular intensity from Phong's model (http://en.wikipedia.org/wiki/Phong_reflection_model).
Note how the PDFs are modified by 0.6 and 0.4 respectively.
I'm by no means an expert in ray tracing, but this seems to be classic Monte Carlo: you have lots of possible rays, and you choose one uniformly at random and then average over lots of trials.
The distribution you used to choose one of the rays was uniform (they were all equally likely), so you don't have to do any clever re-normalising.
However, perhaps there are lots of possible rays to choose from, but only a few would lead to useful results. We therefore bias towards picking those 'useful' possibilities with higher probability, and then re-normalise (we are not choosing the rays uniformly any more, so we can't just take the average). This is importance sampling.
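In symbols (the standard importance-sampled Monte Carlo estimator, added here for reference rather than quoted from the answer): if the sampled directions \omega_k are drawn from a density p, the reflected radiance is estimated as

L_o \approx \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\omega_k, \omega_o)\, L_i(\omega_k)\, \cos\theta_k}{p(\omega_k)}

which is exactly why each sample's contribution gets divided by the PDF value of the direction that was actually chosen: directions that are picked more often must count for proportionally less, so the estimate stays unbiased.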
The mirror example seems to be the following: only one possible ray will give a useful result.
If we choose a ray at random, then the probability we hit that useful ray is zero: this is a property of conditional probability on continuous spaces (it's not actually continuous, it's implicitly discretised by your computer, so it's not quite true...): the probability of hitting something specific when there are infinitely many things must be zero.
Thus we are re-normalising by something with probability zero - standard conditional probability definitions break when we consider events with probability zero, and that is where the problem would come from.

circle-circle collision problem

I have a problem with circle-circle collision detection. I used the following algorithm:
func collision(id,other.id)
{
var vaP1,vaP2,dis,va1,vb1,va2,vb2,vp1,vp2,dx,dy,dt;
if (id!=other.id)
{
dx=other.x-x;
dy=other.y-y;
dis=sqrt(sqr(dx)+sqr(dy));
if dis<=radius+other.radius
{
//normalize
dx/=dis;
dy/=dis;
//calculate the component of velocity in the direction
vp1=hspeed*dx+vspeed*dy;
vp2=other.hspeed*dx+other.vspeed*dy;
if (vp1-vp2)!=0
{
dt=(radius+other.radius-dis)/(vp1-vp2);
//move the balls back so they just touch
x-=hspeed*dt;
y-=vspeed*dt;
other.x-=other.hspeed*dt;
other.y-=other.vspeed*dt;
//projection of the velocities in these axes
va1=(hspeed*dx+vspeed*dy);
vb1=(vspeed*dx-hspeed*dy);
va2=(other.hspeed*dx+other.vspeed*dy);
vb2=(other.vspeed*dx-other.hspeed*dy);
//new velocities in these axes. take into account the mass of each ball.
vaP1=(va1+bounce*(va2-va1))/(1+mass/other.mass);
vaP2=(va2+other.bounce*(va1-va2))/(1+other.mass/mass);
hspeed=vaP1*dx-vb1*dy;
vspeed=vaP1*dy+vb1*dx;
other.hspeed=vaP2*dx-vb2*dy;
other.vspeed=vaP2*dy+vb2*dx;
//we moved the balls back in time, so we need to move them forward
x+=hspeed*dt;
y+=vspeed*dt;
other.x+=other.hspeed*dt;
other.y+=other.vspeed*dt;
}
}
}
x=ball 1 x-position
y=ball 1 y-position
other.x= ball 2 x position
other.y=ball 2 y position
This algorithm works well when I have a ball image of 40 x 40 pixels and the ball center is (20,20), meaning the image consists of only the ball. But the problem arises when the image size is 80 x 80 and the ball center position is (60,60), meaning the ball is in the lower right corner with radius 20.
In this case multiple collisions occur, meaning that the portion
x+=hspeed*dt;
y+=vspeed*dt;
other.x+=other.hspeed*dt;
other.y+=other.vspeed*dt;
is unable to separate the balls / the velocity does not change according to the collision.
I have changed the value of x, which was the image center (40,40), to the ball center (60,60) by adding 20, but the result is the same. Can anyone tell me what the problem is? I think the algorithm is correct because it works nicely in all other cases and lots of people have used it. The problem is changing the position from the image center to the ball center. What correction should I make for this, or any ideas? If someone wants to help, please give me an e-mail address so that I can send my full project.
I didn't have the mental power to digest your entire question, but here are my 2 cents on how to solve your problem:
1) The simplest way to detect a collision between two circles is to check whether their distance is less than the sum of the two radii. (I might be wrong with the math, so correct me if I am.)
Circle c1,c2;
float distance = DISTANCE(c1.center,c2.center);
if(distance < c1.radius + c2.radius)
{
// collision .. BOOOOOOM
}
2) Try to use accurate data types. Try not to convert floats to integers without checking overflow, underflow and decimal points. Better still, just use floats.
3) Write a log and trace through your values. See if there are any obvious math errors.
4) Break down your code to its simplest portion. Try to remove all that velocity computation to get the simplest movements to help you debug.
I will not give you the answer that you are looking for, and I am not sure someone else will. The amount of code that must be deciphered to get you the answer may not warrant the reward. What I would recommend is to loosen the coupling in your algorithm. The function above is doing way too much work.
Ideally you would have a collision detection routine that concentrates only on the collision and not on advancing the balls, something like the function shown below; that would allow other developers to help you more easily if you still had a problem.
function(firstCircleCenterX, firstCircleCenterY, secondCircleCenterX, secondCircleCenterY, firstCircleRadius, secondCircleRadius)
{
...this code should concentrate on logic to determine collision
...use pythagoran theory to find distance between the two centers
...if the distance between the two centers is less than (firstCircleRadius + secondCircleRadius) then you have a collision
...return true or false depending on collision
}
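A concrete version of that decoupled check, as a sketch following the pseudocode above (my own code, not the answerer's):

#include <cmath>

// True when the two circles touch or overlap: compare the squared distance
// between centers against the squared sum of the radii (no sqrt needed).
bool circlesCollide(double x1, double y1, double r1,
                    double x2, double y2, double r2)
{
    double dx = x2 - x1, dy = y2 - y1;
    double reach = r1 + r2;
    return dx * dx + dy * dy <= reach * reach;
}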

How to speed up marching cubes?

I'm using this marching cube algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeomtry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate.
Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this.
I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls?
Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option...
I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
I know this is a bit old, but I recently implemented Marching Cubes based on much the same source. There is a LOT of inefficiency here. At a minimum if you were doing something like
for (int x=0; x<densityArrayWidth; x++)
for (int z=0; z<densityArrayLength; z++)
for (int y=0; y<densityArrayHeight; y++)
Polygonize(Gridcell, isolevel, Triangles)
Look at how many times you'd be reallocating the edgeTable and Tritable! Those immediately need to move out to the overall class. I ditched the gridCell object as well, going directly from the points/values to the triangles.
In short it isn't just the algorithmic complexity, memory allocations (and in the base this does a huge amount of them) take time also.
Just in case anyone else ends up here, dead-space elimination through a coarser sampling rate makes virtually no difference at all. Any remotely safe (ie: allowing a border for sampling artifacts) coarser sampling ends up grabbing most of the grid anyway in any remotely non-trivial field.
Speeding up the underlying field evaluation (with heavy memoisation) seemed to mostly solve the performance problems.
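One simple form of that memoisation, for illustration (a sketch only; it assumes the field is sampled at integer grid corners whose indices fit in 16 bits, and uses a toy sphere field as a stand-in for the real calculation): each corner is shared by up to eight cubes, so caching its value avoids most repeated evaluations.

#include <cmath>
#include <cstdint>
#include <unordered_map>

// Stand-in for the expensive field evaluation (here: a sphere of radius 10).
double fieldStrength(int x, int y, int z)
{
    return 10.0 - std::sqrt(double(x * x + y * y + z * z));
}

// Memoised wrapper: look the corner up in a cache before recomputing it.
double cachedField(int x, int y, int z)
{
    static std::unordered_map<uint64_t, double> cache;
    uint64_t key = (uint64_t(uint16_t(x)) << 32) |
                   (uint64_t(uint16_t(y)) << 16) |
                    uint64_t(uint16_t(z));
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;
    double v = fieldStrength(x, y, z);
    cache.emplace(key, v);
    return v;
}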
Try marching tetrahedra instead -- the math is simpler, allowing you to consider fewer cases per cell.
Each cube has 12 edges; if you go through each cube and find all 12 intersection points, you are doing 4 times too many intersection-point calculations. You only need to compute the 3 edges at the bottom-left corner of each cube (plus an extra row at the top-right corner of the zone), and then use a lookup to access all the values that you have already found. I'm going to write a topic on this because it needs to be discussed and it's complicated.
Also, test for areas in space that need polygons by assessing the iso level using an octree, and skip areas far from the iso level.
I had a look at propagation, but it isn't that reliable or efficient.
