ray tracing lighting

I am trying to implement specular and diffuse lighting for a simple sphere ray tracing application, but I am having problems with my vectors.
I am trying to use the following to update the light, but the generated image looks exactly the same, so I know I am doing something wrong. I assume I am messing up the vectors in some way. hit is the sphere that has been hit, and mindis is the distance to that sphere's intersection point. pir, pig, pib are the RGB components of the color.
P3D intersection = ray.position.add(ray.direction).scale(mindis);
P3D l = intersection.sub(light).normalize();
P3D n = hit.center.sub(intersection).normalize();
double dot = l.dot(n);
P3D f = l.add(n).scale(-2.0 * dot);
double dot2 = f.dot(ray.direction);
pir += dot2 * 20;
pig += dot2 * 20;
pib += dot2 * 20;

Perhaps the first line should be:
P3D intersection = ray.position.add(ray.direction.scale(mindis));
Also
P3D f = l.add(n.scale(-2.0 * dot));
f appears to be the direction that the light bounces off the sphere. This would typically be the opposite direction of the ray, so you probably want
double dot2 = -f.dot(ray.direction);
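Putting those three corrections together, here is a minimal GLSL-style sketch of the whole sequence (vec3 stands in for P3D, all names are placeholders for the question's variables, and the *20 scaling is kept as in the question):
vec3 shade(vec3 rayPos, vec3 rayDir, vec3 lightPos, vec3 sphereCenter, float mindis)
{
    vec3 intersection = rayPos + rayDir * mindis;      // point actually on the sphere
    vec3 l = normalize(intersection - lightPos);       // incident light direction
    vec3 n = normalize(sphereCenter - intersection);   // normal as the question computes it
    float d = dot(l, n);
    vec3 f = l - 2.0 * d * n;                          // bounce direction, i.e. reflect(l, n)
    float dot2 = -dot(f, rayDir);                      // flipped before comparing with the ray
    return vec3(dot2 * 20.0);                          // added to pir, pig, pib in the question
}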

Related

Calculating a transformation matrix to place an object on a sphere in glsl

I'm trying to generate some matrices to place trees on a planet on the GPU. The position of each tree is predetermined, based on a biome map and various heightmap data, but this data is GPU resident so I can't do this on the CPU. At the moment I'm instancing using the geometry shader; this will change to traditional instancing if performance is bad, and I'd then compute the model matrices for each tree in a compute shader.
I've got as far as trying to use a modified version of lookAt(), but I can't get it working, and even if I did, the trees would be perpendicular to the planet instead of standing up. I know I can define a rotation using 3 axes (the normal of the sphere, a tangent and a bitangent), but given that I don't care what direction these tangents and bitangents point in at the moment, what would be a quick way to calculate this matrix in GLSL? Thanks!
void drawInstance(vec3 offset)
{
    // Grab the model's position from the model matrix
    vec3 modelPos = vec3(modelMatrix[3][0], modelMatrix[3][1], modelMatrix[3][2]);
    // Add the offset
    modelPos += offset;
    // Eye = where the new pos is, look in x direction for now; planet is at origin so up is just the modelPos normalized
    mat4 m = lookAt(modelPos, modelPos + vec3(1, 0, 0), normalize(modelPos));
    // lookAt is intended as a camera matrix, fix this
    m = inverse(m);
    vec3 pos = gl_in[0].gl_Position.xyz;
    gl_Position = vp * m * vec4(pos, 1.0);
    EmitVertex();
    pos = gl_in[1].gl_Position.xyz;
    gl_Position = vp * m * vec4(pos, 1.0);
    EmitVertex();
    pos = gl_in[2].gl_Position.xyz;
    gl_Position = vp * m * vec4(pos, 1.0);
    EmitVertex();
    EndPrimitive();
}
void main()
{
    vp = proj * view;
    mvp = proj * view * modelMatrix;
    drawInstance(vec3(0, 20, 0));
    // drawInstance(vec3(0, 20, 0));
    // drawInstance(vec3(0, 20, -40));
    // drawInstance(vec3(40, 40, 0));
    // drawInstance(vec3(-40, 0, 0));
}
I would recommend taking a different approach completely.
First, don't use geometry shaders for replicating geometry. That's what glDrawArraysInstanced is for.
Second, it's hard to define such a matrix procedurally. This is related to the Hairy Ball Theorem.
Instead I would generate a bunch of random rotations on the CPU. Use this method to create a uniformly distributed quaternion. Pass that quaternion to the vertex shader as a single vec4 instanced attribute. In the vertex shader:
Offset the tree vertex by (0, 0, radiusOfThePlanet) so that it's located at the north pole (assuming Z-axis is up).
Apply the quaternion rotation (it will rotate around planet center so the tree stays on the surface).
Apply the planet model-view and camera projection matrices as usual.
This will yield an unbiased uniformly distributed random set of trees.
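A rough vertex-shader sketch of those three steps (attribute and uniform names here are placeholders, not taken from the question's code):
#version 330 core
in vec3 position;         // tree-local vertex position
in vec4 treeRotation;     // per-instance, uniformly distributed unit quaternion (x, y, z, w)
uniform float planetRadius;
uniform mat4 modelView;   // planet model-view
uniform mat4 projection;

// Rotate v by the unit quaternion q: v' = v + 2 * cross(q.xyz, cross(q.xyz, v) + q.w * v)
vec3 rotateByQuat(vec3 v, vec4 q)
{
    return v + 2.0 * cross(q.xyz, cross(q.xyz, v) + q.w * v);
}

void main()
{
    // 1. Move the tree to the north pole (assuming the Z axis is up).
    vec3 p = position + vec3(0.0, 0.0, planetRadius);
    // 2. Rotate about the planet's center; the tree stays on the surface and stays upright.
    p = rotateByQuat(p, treeRotation);
    // 3. Planet model-view and projection as usual.
    gl_Position = projection * modelView * vec4(p, 1.0);
}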
Found a solution to the problem which allows me to place objects on the surface of a sphere facing in the correct directions. Here is the code:
mat4 m = mat4(1);
vec3 worldPos = getWorldPoint(sphericalCoords);
//Add a random number to the world pos, then normalize it so that it is a point on a unit sphere slightly different to the world pos. The vector between them is a tangent. Change this value to rotate the object once placed on the sphere
vec3 xAxis = normalize(normalize(worldPos + vec3(0.0,0.2,0.0)) - normalize(worldPos));
//Planet is at 0,0,0 so world pos can be used as the normal, and therefore the y axis
vec3 yAxis = normalize(worldPos);
//We can cross the y and x axis to generate a bitangent to use as the z axis
vec3 zAxis = normalize(cross(yAxis, xAxis));
//This is our rotation matrix!
mat3 baseMat = mat3(xAxis, yAxis, zAxis);
//Fill this into our 4x4 matrix
m = mat4(baseMat);
//Transform m by the Radius in the y axis to put it on the surface
mat4 m2 = transformMatrix(mat4(1), vec3(0,radius,0));
m = m * m2;
//Multiply by the MVP to project correctly
m = mvp * m;
//Draw an instance of your object
drawInstance(m);

Programmatically generate simple UV Mapping for models

Coming from this question, I'm trying to generate UV mappings programmatically with Three.js for some models. I need this because my models are being generated programmatically too, and I need to apply a simple texture to them. I have read here and successfully generated UV mapping for some simple 3D text, but when applying the same mapping to more complex models it just doesn't work.
The texture I'm trying to apply is something like this:
The black background is just transparent in the PNG image. I need to apply this to my models; it's just a glitter effect, so I don't care about the exact position on the model. Is there any way to create a simple UV map programmatically for these cases?
I'm using this code from the linked question which works great for planar models but doesn't work for non-planar models:
assignUVs = function( geometry ) {
    geometry.computeBoundingBox();
    var max = geometry.boundingBox.max;
    var min = geometry.boundingBox.min;
    var offset = new THREE.Vector2(0 - min.x, 0 - min.y);
    var range = new THREE.Vector2(max.x - min.x, max.y - min.y);
    geometry.faceVertexUvs[0] = [];
    var faces = geometry.faces;
    for (i = 0; i < geometry.faces.length; i++) {
        var v1 = geometry.vertices[faces[i].a];
        var v2 = geometry.vertices[faces[i].b];
        var v3 = geometry.vertices[faces[i].c];
        geometry.faceVertexUvs[0].push([
            new THREE.Vector2( ( v1.x + offset.x ) / range.x, ( v1.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v2.x + offset.x ) / range.x, ( v2.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v3.x + offset.x ) / range.x, ( v3.y + offset.y ) / range.y )
        ]);
    }
    geometry.uvsNeedUpdate = true;
}
You need to be more specific. Here, I'll apply UV mapping programmatically
for (i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2( 0, 0 ),
        new THREE.Vector2( 0, 0 ),
        new THREE.Vector2( 0, 0 ),
    ]);
}
Happy?
There are an infinite number of ways of applying UV coordinates. How about this:
for (i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2( Math.random(), Math.random() ),
        new THREE.Vector2( Math.random(), Math.random() ),
        new THREE.Vector2( Math.random(), Math.random() ),
    ]);
}
There's no RIGHT answer. Whatever you want to do is up to you. It's kind of like asking how to apply pencil to paper.
Sorry to be so snarky; I'm just pointing out that the question is, in one sense, nonsensical.
Anyway, there are a few common methods for applying a texture.
Spherical mapping
Imagine your model is translucent, there's a sphere inside made of film, and inside the sphere is a point light, so that it projects (like a movie projector) from the sphere in all directions. So you do the math to compute the correct UVs for that situation.
To get a point on the sphere, multiply your points by the inverse of the world matrix for the sphere, then normalize the result. After that, though, there's still the problem of how the texture itself is mapped to the imaginary sphere, for which again there are an infinite number of ways.
The simplest way is, I guess, the Mercator projection, which is how most 2D maps of the world work. They have the problem that lots of space is wasted at the north and south poles. Assuming x, y, z are the normalized coordinates mentioned in the previous paragraph, then:
U = Math.atan2(z, x) / Math.PI * 0.5 - 0.5;
V = 0.5 - Math.asin(y) / Math.PI;
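As a rough GLSL version of the two paragraphs above (sphereWorldInverse is a made-up uniform name for the inverse world matrix of the imaginary sphere):
uniform mat4 sphereWorldInverse;

vec2 sphericalUV(vec3 worldPos)
{
    // Move the point into the sphere's space and normalize, as described above.
    vec3 p = normalize((sphereWorldInverse * vec4(worldPos, 1.0)).xyz);
    float u = atan(p.z, p.x) / 3.14159265 * 0.5 - 0.5;  // lands in [-1, 0]; REPEAT wrapping (or adding 1.0) brings it into [0, 1]
    float v = 0.5 - asin(p.y) / 3.14159265;
    return vec2(u, v);
}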
Projection Mapping
This is just like a movie: you have a 2D image being projected from a point. Imagine you pointed a movie projector (or a projection TV) at a chair and computed where each point of the chair lands in the projected image. Computing these points is exactly like computing the 2D image from 3D data, which nearly all WebGL apps do. Usually they have a line in their vertex shader like this:
gl_Position = matrix * position;
Where matrix = worldViewProjection. You can then do
clipSpace = gl_Position.xy / gl_Position.w
You now have x,y values that go from -1 to +1. You then convert them to 0 to 1 for UV coords:
uv = clipSpace * 0.5 + 0.5;
Of course normally you'd compute UV coordinates at init time in JavaScript but the concept is the same.
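A small GLSL sketch of the same idea, with projectorViewProjection standing in for the hypothetical projector's combined view-projection matrix:
uniform mat4 projectorViewProjection;

vec2 projectorUV(vec3 worldPos)
{
    vec4 clip = projectorViewProjection * vec4(worldPos, 1.0);
    vec2 ndc = clip.xy / clip.w;   // -1 to +1
    return ndc * 0.5 + 0.5;        // 0 to 1 for UV coords
}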
Planar Mapping
This is almost the same as projection mapping, except imagine the projector, instead of being a point, is the same size as you want the projection to be. In other words, with projection mapping, as you move your model closer to the projector the picture being projected will get smaller, but with planar mapping it won't.
Following the projection mapping example the only difference here is using an orthographic projection instead of a perspective projection.
Cube Mapping?
This is effectively planar mapping from 6 directions. It's up to you to decide which UV coordinates get which of the 6 planes. I'd guess most of the time you'd take the normal of the triangle to see which plane it most faces, then do planar mapping from that plane.
Actually I might be getting my terms mixed up. You can also do real cube mapping where you have a cube texture, but that requires U,V,W instead of just U,V. For that it's the same as the sphere example except you just use the normalized coordinates directly as U,V,W.
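A quick sketch of the "planar map from whichever plane the triangle most faces" idea (just an illustration, not a full cube-mapping implementation):
vec2 dominantAxisUV(vec3 pos, vec3 normal)
{
    vec3 a = abs(normal);
    if (a.x >= a.y && a.x >= a.z) return pos.zy;  // mostly faces +/-X: project onto the ZY plane
    if (a.y >= a.z)               return pos.xz;  // mostly faces +/-Y: project onto the XZ plane
    return pos.xy;                                // mostly faces +/-Z: project onto the XY plane
}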
Cylindrical mapping
This is like sphere mapping, except assume there's a tiny cylinder projecting onto your model. Unlike a sphere, a cylinder has an orientation, but basically you can move the points of the model into the orientation of the cylinder. Then, assuming x, y, z are now relative to the cylinder (in other words, you multiplied them by the inverse of the matrix that represents the orientation of the cylinder):
U = Math.atan2(x, z) / Math.PI * 0.5 + 0.5
V = y
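And the cylindrical version as a GLSL sketch (cylinderWorldInverse is a made-up uniform for the inverse of the cylinder's orientation matrix):
uniform mat4 cylinderWorldInverse;

vec2 cylindricalUV(vec3 worldPos)
{
    vec3 p = (cylinderWorldInverse * vec4(worldPos, 1.0)).xyz;  // point in cylinder space
    float u = atan(p.x, p.z) / 3.14159265 * 0.5 + 0.5;
    float v = p.y;  // typically rescaled by the cylinder's height
    return vec2(u, v);
}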
2 more solutions
Maybe you want Environment Mapping?
Here's one example and here's another.
Maybe you should consider using a modeling package like Maya or Blender that have UV editors and UV projectors built in.

GLSL Shader: Mapping Bars in Polar-Coordinates

I'd like to create a polar representation of this shader: https://www.shadertoy.com/view/4sfSDN
So that it looks like in this screenshot:
http://postimg.org/image/uwc34jxxz/
I know the basics of the polar system: how to calculate r and ϕ. But I can only use those values with a texture2D() load function on an image.
When I only have an amplitude value, like in the shader above, I don't get it working.
r should somehow be based on the amplitude, but then I don't know how to draw the circle without the texture2D() function... I can draw a circle with r only, but then there are no different amplitudes. Or do I even need to fill a matrix with the generated bars in a loop and load the circle from there?
I'm quite sure it is possible, given the insane shaders on Shadertoy, but I don't quite get it...
Can anyone point me to a solution?
From the shader you posted I think it should be enough to simply transform the uv to polar coordinates.
So what you are looking for are angle and radius from the center. First let us transform the uv so it gives the vector pointing from the center:
uv = fragCoord - (iResolution*.5);
Next try to normalize it. Since the view is not square the normalization transform should only be by 1 coordinate such that
if (iResolution.x > iResolution.y)
{
    uv = uv / iResolution.y;
}
else
{
    uv = uv / iResolution.x;
}
This will kind of produce a fit effect, but you may just hard-code one or the other if you need to. min can be used if available (uv = uv/min(iResolution.x, iResolution.y)) to remove the condition.
So at this point the uv vector points from the center toward the pixel position in a coordinate system that is normalized in one dimension.
Now to get the angle you may simply use atan(uv.y, uv.x). To get the radius you then need length(uv).
The radius in your case will be in range [0, .5] along the shorter dimension, so you may multiply it by 2.0. This is a factor you may later change to get the desired effect, so that the maximum value is not hitting the border but maybe reaching 80% or so (just play around with it).
The angle is in range [-Pi, Pi], and the docs say it does not work for x = 0, which you will need to handle yourself. The angle must now be transformed to be in range [.0, 1.0] to be used as a texture coordinate:
angle = angle/(Pi*2.0) + .5
So now construct the new uv
uv = vec2(angle, radius)
And use the same shader you did before.
You will also need to keep in mind that the radius may be larger than 1.0 in the corners, which may produce a wrong texture access. In such cases it would be best to discard the fragment.
From the Shadertoy:
#define M_PI 3.1415926535897932384626433832795
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy - (iResolution.xy * .5);
    uv = uv / min(iResolution.x, iResolution.y);
    float angle = atan(uv.y, uv.x);
    angle = angle / (M_PI * 2.0) + .5;
    float radius = length(uv);
    uv = vec2(angle, radius * 2.0);
    float bars = 24.;
    float fft = texture2D( iChannel0, vec2(floor(uv.x * bars) / bars, 0.25) ).x;
    float amp = (fft - uv.y) * 100.;
    fragColor = vec4(amp, 0., 0., 1.0);
}

Rotate scene about Up vector in jsc3d

I'm using jsc3d to load and display some 3D objects on a canvas. The viewer already has a built-in feature that allows rotating the "view coordinates" (correct me if I'm wrong) about the Y axis by dragging the mouse.
The rotation is performed through a classic rotation matrix, and finally the transformation matrix is multiplied by this rotation matrix.
The rotation about the Y axis is calculated in a way that resembles a circular movement around the whole scene of loaded objects:
JSC3D.Matrix3x4.prototype.rotateAboutYAxis = function(angle) {
    if (angle != 0) {
        angle *= Math.PI / 180;
        var c = Math.cos(angle);
        var s = Math.sin(angle);
        var m00 = c * this.m00 + s * this.m20;
        var m01 = c * this.m01 + s * this.m21;
        var m02 = c * this.m02 + s * this.m22;
        var m03 = c * this.m03 + s * this.m23;
        var m20 = c * this.m20 - s * this.m00;
        var m21 = c * this.m21 - s * this.m01;
        var m22 = c * this.m22 - s * this.m02;
        var m23 = c * this.m23 - s * this.m03;
        this.m00 = m00; this.m01 = m01; this.m02 = m02; this.m03 = m03;
        this.m20 = m20; this.m21 = m21; this.m22 = m22; this.m23 = m23;
    }
};
Now, dragging the mouse will apply this rotation about the Y axis to the whole world, like on the left side in the picture below. Is there a way to apply a rotation about the Up vector, keeping it in its initial position, like it appears on the right side?
I tried something like that:
var rotY = (x - viewer.mouseX) * 360 / viewer.canvas.height;
var rotMat = new JSC3D.Matrix3x4; // identity
rotMat.rotateAboutYAxis(rotY);
viewer.rotMatrix.multiply(rotMat);
but it has no effect.
What operations shall be applied to my rotation matrix to achieve a rotation about the Up vector?
Sample: https://jsfiddle.net/4xzjnnar/1/
This 3D library already has some built-in functions to allow scene rotation about the X, Y, and Z axes, so there is no need to implement new matrix operations for that: we can use the existing functions rotateAboutXAxis, rotateAboutYAxis and rotateAboutZAxis, which apply an in-place matrix multiplication for the desired rotation angle in degrees.
The scene in JSC3D is transformed by a 3x4 matrix where the rotation is stored in the first 3 values of each row.
After applying a scene rotation and/or translation, applying a subsequent rotation about the Up vector is a problem of calculating a rotation about an arbitrary axis.
A very clean and didactic explanation how to solve this problem is described here: http://ami.ektf.hu/uploads/papers/finalpdf/AMI_40_from175to186.pdf
1. Translate the P0(x0, y0, z0) axis point to the origin of the coordinate system.
2. Perform the appropriate rotations to make the axis of rotation coincident with the z-coordinate axis.
3. Rotate about the z-axis by the angle θ.
4. Perform the inverse of the combined rotation transformation.
5. Perform the inverse of the translation.
Now it's easy to write a function for that, because we can use the functions already available in JSC3D (the translation part is omitted here).
JSC3D.Viewer.prototype.rotateAboutUpVector = function(angle) {
    angle %= 360;
    /* pitch, counter-clockwise rotation about the Y axis */
    var degX = this.rpy[0], degZ = this.rpy[2];
    this.rotMatrix.rotateAboutXAxis(-degX);
    this.rotMatrix.rotateAboutZAxis(-degZ);
    this.rotMatrix.rotateAboutYAxis(angle);
    this.rotMatrix.rotateAboutZAxis(degZ);
    this.rotMatrix.rotateAboutXAxis(degX);
}
Because all the above-mentioned functions work in degrees, we need to get back the actual Euler angles from the rotation matrix (simplified):
JSC3D.Viewer.prototype.calcRollPitchYaw = function() {
    var m = this.rotMatrix;
    var radians = 180 / Math.PI;
    var angleX = Math.atan2(-m.m12, m.m22) * radians;
    var angleY = Math.asin(m.m01) * radians;
    var angleZ = Math.atan2(-m.m01, m.m00) * radians;
    this.rpy[0] = angleX;
    this.rpy[1] = angleY;
    this.rpy[2] = angleZ;
}
The tricky part here is that we always need to get back the current rotation angles as they result from the applied rotations, so a separate function must be used to store the current Euler angles every time a rotation is applied to the scene.
For that, we can use a very simple structure:
JSC3D.Viewer.prototype.rpy = [0, 0, 0];
This will be the final result:

How does depth work in a frustum environment?

I need some help understanding the basics of a frustum transformation. Mainly, how depth works.
The following uses a viewport of 768x1024. Using an orthogonal projection, a square of 768x768 (z defaults to 0) with no translation or scaling, and glViewport(0, 0, 768, 1024), this square easily fills the width of the frame:
Now when I change the projection to a frustum and mess with the z translation, the square scales appropriately due to the perspective changes.
Here is the same square in such an environment:
I can play with this z translation, as well as the near and far parameters of the frustum matrix, and make the square change its apparent onscreen size accordingly. Fine.
But what I cannot figure out is the obvious relationship between its onscreen size and these depth parameters.
For example, suppose I want to use a frustum but have the square fill the frame width, as in my first example image above. How do I achieve this?
I would think that if the z translation matched the near plane, then you'd essentially have a square "right in front of the camera", filling the frame. But I cannot figure out a way to achieve this. If my near is 1 and my z translation is -1, then the square should be sitting right on the near plane itself (right?!), filling the width of the frame (where the frustum's left and right planes are the same as in the orthogonal projection).
I could paste a bunch of code here to show what I'm doing, but I think the concept here is clear. I just want to figure out where the near plane actually is and how to situate something on it, as this will help me understand how the frustum is working.
Okay here is the relevant code I'm using, where width=768 and height=1024.
My vertex shader is simply gl_Position = Projection * Modelview * Position;
My projection matrix (frustum) is thus:
Frustum(-width/2, width/2, -height/2, height/2, 1,10);
This function is:
static Matrix4<T> Frustum(T left, T right, T bottom, T top, T near, T far)
{
    T a = 2 * near / (right - left);
    T b = 2 * near / (top - bottom);
    T c = (right + left) / (right - left);
    T d = (top + bottom) / (top - bottom);
    T e = - (far + near) / (far - near);
    T f = -2 * far * near / (far - near);
    Matrix4 m;
    m.x.x = a; m.x.y = 0; m.x.z = 0; m.x.w = 0;
    m.y.x = 0; m.y.y = b; m.y.z = 0; m.y.w = 0;
    m.z.x = c; m.z.y = d; m.z.z = e; m.z.w = -1;
    m.w.x = 0; m.w.y = 0; m.w.z = f; m.w.w = 1;
    return m;
}
My square is just two 2D triangles with a default z = 0 and an x range from -768/2 at the left edge to 768/2 at the right edge. The square is clearly working properly, as my first image above shows, using the orthogonal projection. (Though I switched to the frustum projection for this question.)
To draw the square, I translate the Modelview with:
Translate(0, 0, -1);
Using:
static Matrix4<T> Translate(T x, T y, T z)
{
    Matrix4 m;
    m.x.x = 1; m.x.y = 0; m.x.z = 0; m.x.w = 0;
    m.y.x = 0; m.y.y = 1; m.y.z = 0; m.y.w = 0;
    m.z.x = 0; m.z.y = 0; m.z.z = 1; m.z.w = 0;
    m.w.x = x; m.w.y = y; m.w.z = z; m.w.w = 1;
    return m;
}
As you can see, the translation should put the square on the near plane, yet it looks like this:
If I instead translate by -1.01, just to be sure I avoid near clipping, the result is the same. If I do not translate at all, thus z = 0, the square does not appear, as you'd expect, since it would be behind the camera.
In your frustum matrix, m.w.w should be 0, not 1. This will fix your problem. (With m.z.w = -1 and m.w.w = 0, the clip-space w component comes out as -z_eye, which is exactly what the perspective divide expects; the stray +1 skews every divide.)
But the mistake isn't your fault. It's my fault! I'm actually the one who wrote that code in the first place, and unfortunately it has proliferated. It's an erratum in my book (iPhone 3D Programming), which is where it first appeared.
Feeling very guilty about this!
"If my near is 1 and my z translation is -1, then the square should be sitting right on the near plane itself (right?!)"
Yes.
"...filling the width of the frame (where the frustum's left and right planes are the same as the orthogonal projection)."
Not necessarily. The near plane has the extents given by the left, right, bottom and top parameters of glFrustum. A rectangle going exactly to those bounds will snugly fit the viewport when placed at the near plane distance. In this question's setup left and right are -384 and 384, so the 768-wide square translated to z = -1 (the near distance) will exactly fill the viewport width once the matrix is fixed.
