Go float64 does not work for latitude and longitude - go

I'm trying to parse latitude and longitude from a JSON object with full precision, and I picked float64 for the job. However, float64 seems to round the number, and I'm not sure what to do to avoid the rounding.
I've created a quick snippet so you can execute the problem:
http://play.golang.org/p/9g6Imn-7GK
package main

import (
    "encoding/json"
    "fmt"
    "reflect"
)

type Position struct {
    Lat float64 `json:"lat"`
    Lon float64 `json:"lon"`
}

func main() {
    s := `{"lat":13.519004709972312,"lon": -13.519004709972312}`
    pos := Position{}
    json.Unmarshal([]byte(s), &pos)
    if !reflect.DeepEqual(s, &pos) {
        fmt.Printf("\nExpected %#v\nbut got %#v", s, pos)
    }
}

A practical solution:
Do nothing.
The difference in the numbers is about a tenth of the width of a single small atom, and your measurements can't possibly be that precise.
The eighth decimal place (you have 15 in your numbers) represents a distance of about 1.1mm. I doubt if your measurements are accurate to this degree, and anything more is getting really silly. The 5th decimal place is about 1.1m, which is in the realm of sanity, and not affected by floating point errors.
The Wikipedia page on Decimal Degrees may be helpful in determining which values are reasonable for your project.
Some considerations:
There are two potential issues at play:
Floating point:
Some reading that might shed light on floating point issues:
What Every Programmer Should Know About Floating-Point Arithmetic or Why don’t my numbers add up?
What Every Computer Scientist Should Know About Floating-Point Arithmetic
If you read these, and understand how floating point works in practice, you may be enlightened, and understand what's happening and how to work around it.
Precision of measurement:
This, in my opinion, is the bigger issue. One number you posted was 13.519004709972312, which was displayed as 13.519004709972313. Whether the value has "changed" or not (see point 1), every software calculator I tried returned 0 for the difference between these values, which is telling in itself.
Doing the calculation by hand reveals a difference of 0.000000000000001 between the values. That is, a 1 in the 15th decimal place, or 10^-15.
The Wikipedia page on Latitude says:
the meridian length of 1 degree of latitude on the sphere is 111.2 km.
Working backward from this, the difference in locations represented by the 15th decimal place in a latitude corresponds to a distance of approximately 0.00000011mm, or 0.11nanometers.
From The Physics Factbook's Diameter of an Atom page:
An atom is a million times smaller than the thickest human hair. The diameter of an atom ranges from about 0.1 to 0.5 nanometers
Therefore, your measurement would be "off" by at most 1/10 of the diameter of a single atom.
Even if all my calculations were off by a million or billion times, the distances would still be so small that they would not matter in practice!
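You can put numbers on this yourself: the gap between adjacent float64 values (one ulp) near these latitudes, converted to millimeters, is far below anything measurable. A quick sketch (Python 3.9+ purely for illustration; Go's float64 and Python's float are the same IEEE-754 double):

```python
import math

lat = 13.519004709972312

# One ulp ("unit in the last place"): the gap between lat and the next
# representable double. This is the worst-case representation error scale.
ulp_deg = math.ulp(lat)

# One degree of latitude is roughly 111.2 km = 1.112e8 mm.
MM_PER_DEGREE = 111.2e6
error_mm = ulp_deg * MM_PER_DEGREE

print(ulp_deg)   # on the order of 1e-15 degrees
print(error_mm)  # on the order of 1e-7 mm, i.e. a fraction of a nanometer
```

So the worst-case representation error is around a fifth of a nanometer of latitude.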


Any faster method to move things in a circle?

Currently I'm using Math.cos and Math.sin to move objects in a circle in my game, however I suspect it's slow (I haven't done proper tests yet) after reading a bit about it.
Are there any faster ways to calculate this? I've been reading that one alternative could be a sort of lookup table of pre-calculated results, the way people did it before the computer age.
Any input is appreciated.
Expanding on my comment, if you don't have any angular acceleration (the angular velocity stays constant -- this is a requirement for the object to remain traveling in a circle with constant radius without changing the center-pointing force, e.g. via tension in a string), then you can use the following strategy:
1) Compute B = angular_velocity * time_step_size. This is how much angle change the object needs to go through in a single time step.
2) Compute sinb = sin(B) and cosb = cos(B).
3) Note that we want to change the angle from A to A+B (the object is going counterclockwise). In this derivation, the center of the circle we're orbiting is the origin.
Since the radius of the circle is constant, we know r*sin(A+B) = y_new = r*sin(A)*cos(B) + r*cos(A)*sin(B) = y_old*cos(B) + x_old*sin(B), and r*cos(A+B) = x_new = r*cos(A)*cos(B) - r*sin(A)*sin(B) = x_old*cos(B) - y_old*sin(B).
We've removed the cosine and sine of anything we don't already know, so the Cartesian coordinates can be written as
x_new = x_old*cosb - y_old*sinb
y_new = x_old*sinb + y_old*cosb
No more cos or sin calls except in an initialization step which is called once. Obviously, this won't save you anything if B keeps changing for whatever reason (either angular velocity or time step size changes).
You'll notice this is the same as multiplying the position vector by a fixed rotation matrix. You can translate by the circle center and translate back if you don't want to only consider circles with a center at the origin.
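A minimal sketch of this scheme (Python for brevity; in a game loop `step` would run once per frame), showing that sin/cos are only called at initialization:

```python
import math

def make_stepper(angular_velocity, dt):
    # The only sin/cos calls happen here, once, at initialization.
    b = angular_velocity * dt
    sinb, cosb = math.sin(b), math.cos(b)
    def step(x, y):
        # Fixed rotation matrix applied to the position vector each frame.
        return x * cosb - y * sinb, x * sinb + y * cosb
    return step

# Orbit a point starting at (1, 0) around the origin.
step = make_stepper(angular_velocity=1.0, dt=0.1)
x, y = 1.0, 0.0
for _ in range(100):
    x, y = step(x, y)
# After 100 steps the accumulated angle is 10 radians,
# so (x, y) should be very close to (cos(10), sin(10)).
```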
First Edit
As @user5428643 mentions, this method is numerically unstable over time due to drift in the radius. You can probably correct this by periodically renormalizing x and y (x_new = x_old * r_const / sqrt(x_old^2 + y_old^2), and similarly for y, every few thousand steps; if you implement this, compute the factor r_const / sqrt(x_old^2 + y_old^2) once, since it is the same for both x and y). I'll think about it some more and edit this answer if I come up with a better fix.
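The renormalization itself is cheap; a sketch (Python), sharing the correction factor between both axes as suggested:

```python
import math

def renormalize(x, y, r_const):
    # One sqrt every few thousand steps; the factor is the same for x and y.
    k = r_const / math.sqrt(x * x + y * y)
    return x * k, y * k

# A point that has drifted slightly off the unit circle:
x, y = renormalize(1.0001, 0.0002, 1.0)
print(math.hypot(x, y))  # back to 1.0, up to double rounding
```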
Second Edit
Some more comments on the numerical drift over time:
I did a couple of tests in C++ and Python. In C++ with single precision floats, there is sizable drift even after 1 million time steps with B = 0.1 (I used a circle of radius 1). In double precision I didn't notice any drift visually after 100 million steps, but checking the radius shows that its lower few digits are contaminated.
Renormalizing on every step (which is unnecessary if you're just doing visualization) makes the code roughly 4 times slower than the drifty version, but even that is about 2-3 times faster than calling sin and cos on every iteration. I used full optimization (-O3) in g++.
In Python (using the math package) I only got a factor of 2 between the drifty and normalized versions, and the sin/cos version falls almost exactly halfway between the two in run time. Renormalizing only once every few thousand steps would still make this faster, but the difference is not nearly as big as my C++ version would indicate.
I didn't do too much scientific testing to get the timings, just a few tests with 1 million to 1 billion steps in increments of 10.
Sorry, not enough rep to comment.
The answers by @neocpp and @oliveryas01 would both be perfectly correct without roundoff error.
The answer by @oliveryas01, just using sine and cosine directly, and precalculating and storing many values if necessary, will work fine.
However, @neocpp's answer, repeatedly rotating by small angles using a rotation matrix, is numerically unstable; over time, the roundoff error in the radius will tend to grow exponentially, so if you run your programme for a long time the objects will slowly move off the circle, spiralling either inwards or outwards.
You can see this mathematically with a little numerical analysis: at each stage, the squared radius is approximately multiplied by a number which is approximately constant and approximately equal to 1, but almost certainly not exactly equal to 1 due to inexactness of floating point representations.
Of course, if you're using double precision numbers and are only trying to achieve a simple visual effect, this error may not be large enough to matter to you.
I would stick with sine and cosine if I were you. They're the most efficient way to do what you're trying to do. If you really want maximum performance, then generate an array of x and y values from the sine and cosine values, then plug that array's values into the circle's position. This way, you aren't running sine and cosine repeatedly, only once for a single cycle.
Another possibility that completely avoids the trig functions would be to use a polar-coordinate model, where you set the distance and angle. For example, you could set the x coordinate to be the distance, and the rotation to be the angle, as in...
var gameBoardPin:Sprite = new Sprite();
var gameEntity:Sprite = new YourGameEntityHere();
gameBoardPin.addChild( gameEntity );
...and in your loop...
// move gameEntity relative to the center of gameBoardPin
gameEntity.x = circleRadius;
// rotate gameBoardPin from its center causes gameEntity to rotate at the circleRadius
gameBoardPin.rotation = desiredAngleForMovingObject;
gameBoardPin's x,y coordinates would be set to the center of rotation for gameEntity. So, if you wanted the gameEntity to rotate with a 100 pixel tether around the center of the stage, you might...
gameBoardPin.x = stage.stageWidth / 2;
gameBoardPin.y = stage.stageHeight / 2;
gameEntity.x = 100;
...and then in the loop you might...
desiredAngleForMovingObject += 2;
gameBoardPin.rotation = desiredAngleForMovingObject;
With this method you're using degrees instead of radians.

Size limitation when drawing to CanvasRenderingContext2D

I HAVE HEAVILY EDITED THIS QUESTION TO PROVIDE MORE DETAILS
I came across a limitation when drawing to CanvasRenderingContext2D via EaselJS framework. I have objects like this:
But when the position of those objects surpasses a couple million pixels, the drawings start to crumble apart. This is the same object with x position 58524928. (The parent container is moved to -58524928 so that we can see the object on stage.) The more I offset the object, the more it will crumble. Also, when I try to move the object - drag it with the mouse - it will "jump" as if it were snapping to a large grid.
This is the EaselJS framework, and the shapes are ultimately drawn to the CanvasRenderingContext2D via the drawImage() method. Here is a snippet from the code:
ctx.drawImage(cacheCanvas, this._cacheOffsetX+this._filterOffsetX, this._cacheOffsetY+this._filterOffsetY, cacheCanvas.width/scale, cacheCanvas.height/scale);
I suppose it has something to do with the limited number of real numbers in JavaScript:
Note that there are infinitely many real numbers, but only a finite number of them (18437736874454810627, to be exact) can be represented exactly by the JavaScript floating-point format. This means that when you're working with real numbers in JavaScript, the representation of the number will often be an approximation of the actual number.
Source: JavaScript: The Definitive Guide
Can someone confirm/reject my assumption? 58 million (58524928) does not seem like much to me. Is it some inefficiency of EaselJS, or is it a limit of the canvas?
PS:
Scaling has no effect. I have drawn everything 1000 times smaller and 1000 times closer with no effect. Equally, if you scale the object up 1000 times while keeping x at 58 million, it will not look crumbled. But move it to 50 billion and you are back where you started. Basically, offset divided by size is a constant limit for detail.
EDIT
Here is an example: jsfiddle.net/wzbsbtgc/2. Basically there are two separate problems:
If I use huge numbers as parameters for the drawing itself (red curve) it will be distorted. This can be avoided by using smaller numbers and moving the DisplayObject instead (blue curve).
In both cases it is not possible to move the DisplayObject by 1px. I think this is explained in GameAlchemist's post.
Any advice/workaround for the second problem is welcome.
It appears that Context2D uses lower precision numbers for transforms. I haven't confirmed the precision yet, but my guess is that it is using floats instead of doubles.
As such, with higher values, the transform method (and other similar C2D methods) that EaselJS relies on loses precision, similar to what GameAlchemist describes. You can see this issue reproduced using pure C2D calls here:
http://jsfiddle.net/9fLff2we/
The best workaround that I can think of, would be to precalculate the "final" values external to the transform methods. Normal JS numbers are higher precision than what C2D is using, so this should solve the issue. Really rough example to illustrate:
http://jsfiddle.net/wzbsbtgc/3/
The behavior that you see is related to the way figures are represented in the IEEE 754 standard.
While JavaScript uses 64-bit floats, WebGL uses only 32-bit floats, and since most (?all?) canvases are WebGL accelerated, all your numbers will be (down)converted before the draw.
The IEEE 754 32-bit standard uses 32 bits to represent a number: 1 bit for the sign, 8 exponent bits, and 23 bits for the mantissa. Together with the mantissa's implicit leading bit, that gives 24 bits of effective precision.
We can have full precision for integers only in the [-2^24, 2^24] range, that is, up to 16,777,216 (16+ million).
Beyond that point, the exponent will be used, and we'll lose the weak bits of the mantissa.
For instance, (2^24 + 1) = 16,777,217 is too big, it can't fit into 24 bits, so it will be stored as 16,777,216
- we lost the final '1' -
The grid effect that you see is linked to this precision loss: a figure such as 58524928 is of magnitude 2^25, which would take 26 integer bits; only 24 are available, so the 2 lowest bits are lost and representable values around it are spaced 4 apart. We have, for instance:
58524928 + 1 == 58524928
So when using a figure that is near 58524928, it will either be rounded to 58524928, OR 'jump' to the nearest representable figure: you have your grid effect.
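You can reproduce this snapping outside the canvas by round-tripping values through 32-bit storage (Python's struct module here just stands in for the float conversion the canvas does internally):

```python
import struct

def to_float32(x):
    # Store a Python float (a 64-bit double) in 4 bytes and read it back,
    # simulating the down-conversion to a 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

x = 58524928.0
print(to_float32(x + 1))  # 58524928.0 -> the +1 is lost
print(to_float32(x + 3))  # 58524932.0 -> snapped to the next multiple of 4
```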
Solution ?
-->> Change the units you are using for your applications, to have much smaller figures. Maybe you're using mm --> use meters or kilometers.
Mind that the precision you are using is an illusion: display resolution is the first limit, and the mouse is 1 pixel precise at most, so even with a 4K display, there's no way 32-bit floats should be a limit.
Choose the right measure unit to fit all your coordinates in a smaller range and you'll solve your issue.
More clearly: you must change the units you are using for the display. This does not mean you have to trade accuracy: you just have to do the translation + scaling yourself before drawing. That way you still use JavaScript's IEEE 64-bit accuracy, and you no longer have those 32-bit rounding issues.
(You might override the x, y properties with getters/setters:
Object.defineProperty(targetObject, 'x', {
    get: function() { return view.pixelWidth * (this.rx - view.left) / view.width; }
});
)
You can use any sized drawing coordinates that you desire.
Canvas will clip your drawing to the display area of the canvas element.
For example, here's a demo that starts drawing a line from x = -50000000 and finishes on the canvas. Only the visible portion of the line is rendered. All non-visible (off-canvas) points are clipped.
var canvas=document.getElementById("canvas");
var ctx=canvas.getContext("2d");
var cw=canvas.width;
var ch=canvas.height;
ctx.beginPath();
ctx.moveTo(-50000000,100);
ctx.lineTo(150,100);
ctx.stroke();
body{ background-color: ivory; padding:10px; }
#canvas{border:1px solid red;}
<h4>This line starts at x = negative 50 million!</h4>
<canvas id="canvas" width=300 height=300></canvas>
Remember that the target audience for a W3C standard is mainly browser vendors. The unsigned long value (or 2^32) addresses more the underlying system for creating a bitmap by the browser. The standard says values in this range are valid, but there is no guarantee the underlying system will be able to provide a bitmap that large (most browsers today limit the bitmap to much smaller sizes than this). You stated that you don't mean the canvas element itself, but the linked reference is the interface definition of the element, so I just wanted to point that out in regard to the number range.
From the JavaScript side of things, where we developers usually are, and with the exception of typed arrays, there is no such thing as ulong etc. There is only Number (aka unrestricted double), which is signed and stores numbers in 64 bits, formatted as IEEE-754.
The valid range for Number is:
Number.MIN_VALUE = 5e-324
Number.MAX_VALUE = 1.7976931348623157e+308
You can use any values in this range with canvas for your vector paths. Canvas will clip them to the bitmap based on the current transformation matrix when the paths are rasterized.
If by drawing you mean another bitmap (i.e. Image, Canvas, Video), then it will be subject to the same system and browser capabilities/restrictions as the target canvas itself. Positioning (direct or via transformation) is limited (in sum) by the range of a Number.

How to tilt compensate my magnetometer ? Tried a lot

I am trying to tilt compensate a magnetometer (BMX055) reading, and I have tried various approaches I found online; not a single one works.
I actually tried almost every result I found on Google.
I run this on an AVR; it would be extra awesome to find something that works without complex functions (trigonometry etc.) for angles up to 50 degrees.
I have a fused gravity vector (int16 signed in a float) from gyro+acc (1g gravity=16k).
attitude.vect_mag.x/y/z is a float but contains a 16bit integer ranging from around -250 to +250 per axis.
Currently I try this code:
float rollRadians = attitude.roll * DEG_TO_RAD / 10;
float pitchRadians = attitude.pitch * DEG_TO_RAD / 10;
float cosRoll = cos(rollRadians);
float sinRoll = sin(rollRadians);
float cosPitch = cos(pitchRadians);
float sinPitch = sin(pitchRadians);
float Xh = attitude.vect_mag.x * cosPitch + attitude.vect_mag.z * sinPitch;
float Yh = attitude.vect_mag.x * sinRoll * sinPitch + attitude.vect_mag.y * cosRoll - attitude.vect_mag.z *sinRoll * cosPitch;
float heading = atan2(Yh, Xh);
attitude.yaw = heading*RAD_TO_DEG;
The result is meaningless, but the values without tilt compensation are correct.
The uncompensated formula:
atan2(attitude.vect_mag.y,attitude.vect_mag.x);
works fine (when not tilted)
I am sort of clueless what is going wrong, the normal atan2 returns a good result (when balanced) but using the wide spread formulas for tilt compensation completely fails.
Do I have to keep the mag vector values within a specific range for the trigonometry to work ?
Any way to do the compensation without trig functions ?
I'd be glad for some help.
Update:
I found that the BMX055 magnetometer has the X and Y axes swapped, and the Y axis is also negated (*-1).
The sin/cos functions now seem to lead to a better result.
I am trying to implement the suggested vector algorithms, struggling so far :)
Let us see.
(First, forgive me a bit of style nagging. The keyword volatile means that the variable may change even if we do not change it ourselves in our code. This may happen with a memory location that is written by another process (an interrupt request in the AVR context). For the compiler, volatile means that the variable always has to be loaded from and stored to memory when used. See:
http://en.wikipedia.org/wiki/Volatile_variable
So, most likely you do not want any such qualifier on your floats.)
Your input:
three 12-bit (11 bits + sign) integers representing accelerometer data
three approximately 9-bit (8 bits + sign) integers representing the magnetic field
Good news (well...) is that your resolution is not that big, so you can use integer arithmetic, which is much faster. Bad news is that there is no simple magical one-liner that would solve your problem.
First of all, what would you like to have as the compass bearing when the device is tilted? Should the device act as if it was not tilted, or should it actually show the correct projection of the magnetic field lines on the screen? The latter is how an ordinary compass acts (if the needle moves at all when tilted). In that case you should not compensate for anything, and the device can show the fancy vertical tilt of the magnetic lines when rolled sideways.
In any case, try to avoid trigonometry; it takes a lot of code space and time. Vector arithmetic is much simpler, and most of the time you can make do with multiplies and adds.
Let us try to define your problem in vector terms. Actually you have two space vectors to start with, m pointing to the direction of the magnetic field, g to the direction of gravity. If I have understood your intention correctly, you need to have vector d which points along some fixed direction in the device. (If I think of a mobile phone, d would be a vector parallel to the screen left or right edges.)
With vector mathematics this looks rather simple:
g is a normal to a horizontal (truly horizontal) plane
the projection of m on this plane defines the direction a horizontal compass would show
the projection of d on the plane defines the "north" on the compass face
the angle between m and d gives the compass bearing
Now that we are not interested in the magnitude of the magnetic field, we can scale everything as we want. This removes the need for unit vectors, which are expensive to calculate.
So, the maths will be something along these lines:
# component of m in the horizontal plane normal to g (. represents dot product)
mp := m - g (m.g) / (g.g)
# component of d in the same plane
dp := d - g (d.g) / (g.g)
# angle between mp and dp
cos2 := (mp.dp)^2 / (mp.mp * dp.dp)
sgn1 := sign(mp.dp)
# create a vector 90 rotated from d on the plane defined by g (x is cross product)
drot := dp x g
sin2 := (mp.drot)^2 / (mp.mp * drot.drot)
sgn2 := sign(mp.drot)
After this you will have the sin^2 and cos^2 of the compass direction. You need to create a resolving function for one quadrant and then determine the correct quadrant by using the signs. The resolving function may sound difficult, but actually you just need a table-lookup function for sin2/cos2 or cos2/sin2 (whichever is smaller). It is relatively fast, and only a few points are required in the lookup (with linear interpolation, even fewer).
So, as you can see, there are no trig functions around, and not even any square roots. Vector dots and crosses are just multiplies. The only slightly challenging trick is to scale the fixed-point arithmetic to the correct scale in each calculation.
You might notice that there is a lot of room for optimization, as the same values are used several times. The first step is to get the algorithm run on a PC with floating point with the correct results. The optimizations come later.
(Sorry, I am not going to write the actual code here, but if there is something that needs clarifying, I'll be glad to help.)
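Since no actual code was given above, here is one possible transcription of the vector pseudocode into plain floating point (Python, for checking the algorithm on a PC as suggested; the fixed-point scaling and the quadrant-resolving lookup are left out):

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def reject(v, g):
    # Component of v lying in the plane whose normal is g.
    s = dot(v, g) / dot(g, g)
    return (v[0] - g[0]*s, v[1] - g[1]*s, v[2] - g[2]*s)

def compass(m, g, d):
    mp = reject(m, g)      # horizontal part of the magnetic field
    dp = reject(d, g)      # horizontal part of the device axis
    drot = cross(dp, g)    # dp rotated 90 degrees within the plane
    cos2 = dot(mp, dp)**2 / (dot(mp, mp) * dot(dp, dp))
    sin2 = dot(mp, drot)**2 / (dot(mp, mp) * dot(drot, drot))
    return cos2, sin2, dot(mp, dp) >= 0, dot(mp, drot) >= 0

# Device flat, gravity straight down, device axis pointing along the field:
c2, s2, _, _ = compass(m=(0, 1, 0), g=(0, 0, -1), d=(0, 1, 0))
# -> cos^2 = 1, sin^2 = 0: bearing 0
c2_e, s2_e, _, _ = compass(m=(1, 0, 0), g=(0, 0, -1), d=(0, 1, 0))
# -> cos^2 = 0, sin^2 = 1: bearing +/- 90 degrees (sign picks the quadrant)
```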

XNA 2D Camera losing precision

I have created a 2D camera (code below) for a top-down game. Everything works fine when the player's position is close to 0.0x and 0.0y.
Unfortunately, as distance increases the transform seems to have problems. At around 0.0x, 30e7y (yup, that's 300 million y) the camera starts to shudder when the player moves (the camera gets updated with the player position at the end of each update). At really big distances, a billion plus, the camera won't even track the player, as I'm guessing whatever error is in the matrix is amplified by too much.
My question is: Is there either a problem in the matrix, or is this standard behavior for extreme numbers.
Camera Transform Method:
public Matrix getTransform()
{
    Matrix transform;
    transform = (Matrix.CreateTranslation(new Vector3(-position.X, -position.Y, 0)) *
                 Matrix.CreateRotationZ(rotation) * Matrix.CreateScale(new Vector3(zoom, zoom, 1.0f)) *
                 Matrix.CreateTranslation(new Vector3((viewport.Width / 2.0f), (viewport.Height / 2.0f), 0)));
    return transform;
}
Camera Update Method:
This requests the object's position given its ID; it returns a basic Vector2, which is then set as the camera's position.
if (camera.CameraMode == Camera2D.Mode.Track && cameraTrackObject != Guid.Empty)
{
    camera.setFocus(quadTree.getObjectPosition(cameraTrackObject));
}
If anyone can see an error or enlighten me as to why the matrix struggles, I would be most grateful.
I have actually found the reason for this, it was something I should have thought of.
I'm using single-precision floating point, which only has about 7 significant digits of precision. That's fine for smaller numbers (up to around the 2.5 million mark, I have found). Anything over this and the multiplication functions in the matrix start to suffer precision errors as the floats get truncated.
The best solution for my particular problem is to introduce some artificial scaling (I need the very large numbers as the simulation is set in space). I have limited my worlds to 5 million units squared (+/- 2.5 million units) and will come up with another way of granulating the world.
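The size of those jumps grows with the coordinate: in single precision, adjacent representable values near magnitude x are roughly x / 2^23 apart. A quick illustration (Python here, just to show the spacing that XNA's floats are subject to):

```python
import math
import struct

def to_float32(x):
    # Round-trip a double through 32-bit storage.
    return struct.unpack('f', struct.pack('f', x))[0]

def ulp32(x):
    # Spacing between adjacent single-precision values near x.
    return 2.0 ** (math.floor(math.log2(abs(x))) - 23)

print(ulp32(2.5e6))  # 0.25 -> sub-pixel, the camera looks smooth
print(ulp32(30e7))   # 32.0 -> positions snap in 32-unit steps, camera shudders
print(ulp32(1e9))    # 64.0 -> at a billion plus, tracking falls apart
```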
I also found a good answer about this here:
Vertices shaking with large camera position values
And a good article that discusses floating points in more detail:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Thank you for the views and comments!!

Path Tracing algorithm - Need help understanding key point

So the Wikipedia page for path tracing (http://en.wikipedia.org/wiki/Path_tracing) contains a naive implementation of the algorithm with the following explanation underneath:
"All these samples must then be averaged to obtain the output color. Note this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance-sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray - which is the only ray through which any radiance will be reflected - is zero. In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte-Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be 1)."
The part I'm having trouble understanding is the part in bold, about dividing the reflectance by the PDF. I am familiar with PDFs, but I am not quite sure how they fit in here. If we stick to the mirror example, what would be the PDF value we would divide by? Why? How would I go about finding the PDF value to divide by if I were using an arbitrary BRDF such as the Phong reflection model or the Cook-Torrance reflection model? Lastly, why do we divide by the PDF instead of multiplying? If we divide, don't we give more weight to a direction with a lower probability?
Let's assume that we have only materials without color (greyscale). Then, their BRDF at each point can be expressed as a single-valued function
float BRDF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit);
Here, phi and theta are the azimuth and zenith angles of the two rays under consideration. For pure Lambertian reflection, this function would look like this:
float lambertBRDF(phi_in, theta_in, phi_out, theta_out, pointWhereObjWasHit)
{
    return albedo*1/pi*cos(theta_out);
}
albedo ranges from 0 to 1 - this measures how much of the incoming light is reemitted. The factor 1/pi ensures that the integral of BRDF over all outgoing vectors does not exceed 1. With the naive approach of the Wikipedia article (http://en.wikipedia.org/wiki/Path_tracing), one can use this BRDF as follows:
Color TracePath(Ray r, depth) {
    /* .... */
    Ray newRay;
    newRay.origin = r.pointWhereObjWasHit;
    newRay.direction = RandomUnitVectorInHemisphereOf(normal(r.pointWhereObjWasHit));
    Color reflected = TracePath(newRay, depth + 1);
    return emittance + reflected*lambertBRDF(r.phi, r.theta, newRay.phi, newRay.theta, r.pointWhereObjWasHit);
}
As mentioned in the article and by Ross, this random sampling is unfortunate because it traces incoming directions (newRay's) from which little light is reflected with the same probability as directions from which there is lots of light. Instead, directions from which much light is reflected to the observer should be selected preferentially, to have an equal sample rate per contribution to the final color over all directions. For that, one needs a way to generate random rays from a probability distribution. Let's say there exists a function that can do that; this function takes as input the desired PDF (which, ideally, should be equal to the BRDF) and the incoming ray:
vector RandomVectorWithPDF(function PDF(p_i,t_i,p_o,t_o,point x), Ray incoming)
{
    // This function is responsible for creating random rays emanating from x
    // with the probability distribution PDF. Depending on the complexity of PDF,
    // this might be somewhat involved. It is possible, however, to do it for
    // Lambertian reflection (how exactly is math, not programming):
    vector randomVector;
    if(PDF == lambertBRDF)
    {
        float phi = uniformRandomNumber(0, 2*pi);
        float zenith = acos(sqrt(uniformRandomNumber(0, 1)));
        randomVector = getVectorFromAzimuthZenithAndNormal(phi, zenith, normal(incoming.whereObjectWasHit));
    }
    else
    {
        // deal with other PDFs
    }
    return randomVector;
}
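The Lambertian branch above is cosine-weighted hemisphere sampling. A quick numerical sanity check (Python; a standalone check, not renderer code): with zenith = acos(sqrt(u)) we get cos(zenith) = sqrt(u), whose average should come out as 2/3, exactly what the pdf cos(theta)/pi over the hemisphere predicts:

```python
import math
import random

random.seed(42)
N = 200_000

# zenith = acos(sqrt(u))  =>  cos(zenith) = sqrt(u)
mean_cos = sum(math.sqrt(random.random()) for _ in range(N)) / N
print(mean_cos)  # close to 2/3, the analytic expectation for cosine weighting
```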
The code in the TracePath routine would then simply look like this:
newRay.direction = RandomVectorWithPDF(lambertBRDF, r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected;
Because the bright directions are preferred in the choice of samples, you do not have to weight them again by applying the BRDF as a scaling factor to reflected. However, if PDF and BRDF are different for some reason, you would have to scale down the output whenever PDF > BRDF (if you picked too many samples from the respective direction) and scale it up when you picked too few.
In code:
newRay.direction = RandomVectorWithPDF(PDF,r);
Color reflected = TracePath(newRay, depth + 1);
return emittance + reflected*BRDF(...)/PDF(...);
The output is best, however, if BRDF/PDF is equal to 1.
The question remains: why can't one always choose the perfect PDF, which is exactly equal to the BRDF? First, some random distributions are harder to compute than others. For example, if there were a slight variation in the albedo parameter, the algorithm would still do much better for the non-naive sampling than for uniform sampling, but the correction term BRDF/PDF would be needed for the slight variations. Sometimes it might even be impossible to do at all. Imagine a colored object with different reflective behavior for red, green, and blue - you could either render in three passes, one for each color, or use an average PDF which fits all color components approximately, but none perfectly.
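The divide-by-the-PDF rule is just ordinary Monte Carlo integration, and a toy 1-D integral shows it (Python; a hypothetical example, unrelated to any renderer): estimate the integral of f(x) = 3x^2 over [0, 1] (which is exactly 1) while drawing samples from the non-uniform pdf p(x) = 2x, weighting each sample by f/p:

```python
import random

random.seed(1)

def f(x):
    return 3.0 * x * x   # integral over [0, 1] is exactly 1

def p(x):
    return 2.0 * x       # pdf we sample from (favors large x, like a BRDF peak)

N = 100_000
total = 0.0
for _ in range(N):
    u = 1.0 - random.random()    # uniform in (0, 1], avoids division by zero
    x = u ** 0.5                 # inverse-CDF sampling of p(x) = 2x
    total += f(x) / p(x)         # weight by f/p, just like reflected*BRDF/PDF

print(total / N)  # close to 1.0, the true integral
```

Without the division by p(x), the estimate would be biased toward the heavily sampled region; dividing by the PDF is what makes the weighted average converge to the true integral.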
How would one go about implementing something like Phong shading? For simplicity, I still assume that there is only one color component, and that the ratio of diffuse to specular reflection is 60% / 40% (the notion of ambient light makes no sense in path tracing). Then my code would look like this:
if(uniformRandomNumber(0,1) < 0.6) // diffuse reflection
{
    newRay.direction = RandomVectorWithPDF(lambertBRDF, r);
    reflected = TracePath(newRay, depth+1) / 0.6;
}
else // specular reflection
{
    newRay.direction = RandomVectorWithPDF(specularPDF, r);
    reflected = TracePath(newRay, depth+1) * specularBRDF / specularPDF / 0.4;
}
return emittance + reflected;
Here specularPDF is a distribution with a narrow peak around the reflected ray (theta_in=theta_out, phi_in=phi_out+pi) for which a way to create random vectors is available, and specularBRDF returns the specular intensity from Phong's model (http://en.wikipedia.org/wiki/Phong_reflection_model).
Note how the PDFs are modified by 0.6 and 0.4 respectively.
I'm by no means an expert in ray tracing, but this seems to be classic Monte Carlo: you have lots of possible rays, and you choose one uniformly at random and then average over lots of trials. The distribution you used to choose one of the rays was uniform (they were all equally likely), so you don't have to do any clever re-normalising.
However, perhaps there are lots of possible rays to choose from, but only a few would lead to useful results. We therefore bias towards picking those 'useful' possibilities with higher probability, and then re-normalise (we are not choosing the rays uniformly any more, so we can't just take the average). This is importance sampling.
The mirror example seems to be the following: only one possible ray will give a useful result. If we choose a ray at random, then the probability that we hit that useful ray is zero. This is a property of conditional probability on continuous spaces (it's not actually continuous, it's implicitly discretised by your computer, so it's not quite true...): the probability of hitting something specific when there are infinitely many things must be zero.
Thus we are re-normalising by something with probability zero. Standard conditional probability definitions break down when we consider events with probability zero, and that is where the problem comes from.
