Apply matrix transformation to a sphere - performance

I have a Sphere structure that looks like this:
struct Sphere {
    vec3 _center;
    float _radius;
};
How do I apply a 4x4 transformation matrix to that sphere? The matrix may contain a scale factor, a rotation (which obviously will not affect the sphere) and a translation.
The current approach I'm using contains three length() calls (each with a sqrt() inside), which are pretty slow.
glm::vec3 extractTranslation(const glm::mat4 &m)
{
    glm::vec3 translation;
    // Extract the translation
    translation.x = m[3][0];
    translation.y = m[3][1];
    translation.z = m[3][2];
    return translation;
}

glm::vec3 extractScale(const glm::mat4 &m) // should work only if matrix is calculated as M = T * R * S
{
    glm::vec3 scale;
    scale.x = glm::length( glm::vec3(m[0][0], m[0][1], m[0][2]) );
    scale.y = glm::length( glm::vec3(m[1][0], m[1][1], m[1][2]) );
    scale.z = glm::length( glm::vec3(m[2][0], m[2][1], m[2][2]) );
    return scale;
}

float extractLargestScale(const glm::mat4 &m)
{
    glm::vec3 scale = extractScale(m);
    return glm::max(scale.x, glm::max(scale.y, scale.z));
}

void Sphere::applyTransformation(const glm::mat4 &transformation)
{
    glm::vec4 center = transformation * glm::vec4(_center, 1.0f);
    float largestScale = extractLargestScale(transformation);
    set(glm::vec3(center)/* / center.w */, _radius * largestScale);
}
I wonder if anyone knows of a more efficient way to do this?

This is a question about efficiency, specifically about avoiding the square root. One idea is to defer the square root until the last moment: since squaring is monotonically increasing for non-negative numbers, comparing squared lengths gives the same ordering as comparing lengths. So you can replace the three sqrt calls with a single one.
#include <glm/gtx/norm.hpp>
#include <algorithm>
#include <cmath>
glm::vec3 extractScale(const glm::mat4 &m)
{
    // length2 returns length squared, i.e. v·v -- no square root involved
    return glm::vec3(glm::length2( glm::vec3(m[0]) ),
                     glm::length2( glm::vec3(m[1]) ),
                     glm::length2( glm::vec3(m[2]) ));
}
void Sphere::applyTransformation(const glm::mat4 &transformation)
{
    glm::vec4 center = transformation * glm::vec4(_center, 1.0f);
    glm::vec3 scalesSq = extractScale(transformation);
    // max_element returns an iterator, so dereference it;
    // scalesSq.length() gives the dimension here, i.e. 3
    float const maxScaleSq = *std::max_element(&scalesSq[0], &scalesSq[0] + scalesSq.length());
    // one sqrt once you know the largest of the three
    float const largestScale = std::sqrt(maxScaleSq);
    set(glm::vec3(center), _radius * largestScale);
}
Aside:
Can the scale be non-uniform too? From the code it looks like it could. A non-uniform scale means the scaling ratios along the different axes aren't the same; e.g. S(1, 2, 4) is non-uniform while S(2, 2, 2) is uniform. See this intuitive primer on transformations to understand them better; it has animations to demonstrate such differences.
If you had a non-uniform scale, transforming the radius with the largest scale isn't right: the sphere would actually become an ellipsoid, so just scaling the radius isn't correct. You'd have to transform the sphere into an ellipsoid with semi-principal axes of differing lengths.
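As a minimal sketch of that caveat, assuming as above that the matrix is composed as M = T * R * S (the helper name below is made up for illustration): the semi-axes of the resulting ellipsoid are just the radius scaled per axis.
#include <glm/glm.hpp>

// Semi-axes of the ellipsoid a sphere of the given radius becomes under the
// (possibly non-uniform) scale extracted column-wise from M = T * R * S.
glm::vec3 ellipsoidSemiAxes(const glm::mat4 &m, float radius)
{
    glm::vec3 scale(glm::length(glm::vec3(m[0])),
                    glm::length(glm::vec3(m[1])),
                    glm::length(glm::vec3(m[2])));
    return radius * scale; // all components equal only when the scale is uniform
}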

Related

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self-education. In my camera class I am using glm::lookAt() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top and bottom clipping planes. However, I seem to be either doing something wrong for my near/far planes, or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would shift/translate a point P0 of the plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.0f along the z-axis, as expected for planes perpendicular to my z basis vector.
So when testing a point such as (0, 0, -5) in world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so when testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;

Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
    viewMatrix{ glm::lookAt(eye, center, UPvec) },
    perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
    frustumLeftPlane {setPlane(0, 1)},
    frustumRighPlane {setPlane(0, 0)},
    frustumBottomPlane {setPlane(1, 1)},
    frustumTopPlane {setPlane(1, 0)},
    frstumNearPlane {setPlane(2, 0)},
    frustumFarPlane {setPlane(2, 1)},
The frustum objects are based off the following struct:
struct Plane
{
    glm::vec4 normal;
    float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how far the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter, since normalizing changes the length of the normal. If you want to normalize a plane equation, you also have to apply the same division to the d component:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point-to-plane distance check like dist = n · x - d (for a given point x, normal n and offset d, where · is the dot product), which can then be written as dist = [n, d] · [x, -1].
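A minimal sketch of side note 1, assuming GLM and the convention used in the note (offset d measured along the normal, so the signed distance is n · x - d); the function name is made up for illustration:
#include <glm/glm.hpp>

// Plane packed into a single vec4: xyz = unit normal n, w = offset d along n.
float signedDistance(const glm::vec4 &plane, const glm::vec3 &point)
{
    // dist = [n, d] · [x, -1] = n·x - d
    return glm::dot(plane, glm::vec4(point, -1.0f));
}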
Side note 2: Most software and also hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.
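For illustration, a minimal sketch of that post-projection test against the OpenGL clip volume, assuming clip = projection * view * vec4(p, 1.0) and testing before the perspective divide:
#include <glm/glm.hpp>

// A point is inside the clip volume when -w <= x, y, z <= w.
bool insideClipVolume(const glm::vec4 &clip)
{
    return -clip.w <= clip.x && clip.x <= clip.w &&
           -clip.w <= clip.y && clip.y <= clip.w &&
           -clip.w <= clip.z && clip.z <= clip.w;
}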

Calculating a transformation matrix to place an object on a sphere in glsl

I'm trying to generate some matrices to place trees on a planet on the GPU. The position of each tree is predetermined, based on a biome map and various heightmap data, but this data is GPU-resident so I can't do this on the CPU. At the moment I'm instancing using the geometry shader; this will change to traditional instancing if performance is bad, and I'd then compute the model matrices for each tree in a compute shader.
I've got as far as trying to use a modified version of lookAt() but I can't get it working, and even if I did, the trees would be perpendicular to the planet instead of standing up. I know I can define a rotation matrix using 3 axes (the normal of the sphere, a tangent and a bitangent), but given I don't care what direction these tangents and bitangents point at the moment, what would be a quick way to calculate this matrix in GLSL? Thanks!
void drawInstance(vec3 offset)
{
    //Grab the model's position from the model matrix
    vec3 modelPos = vec3(modelMatrix[3][0], modelMatrix[3][1], modelMatrix[3][2]);
    //Add the offset
    modelPos += offset;
    //Eye = where the new pos is, look in x direction for now, planet is at origin so up is just the modelPos normalized
    mat4 m = lookAt(modelPos, modelPos + vec3(1,0,0), normalize(modelPos));
    //Lookat is intended as a camera matrix, fix this
    m = inverse(m);

    vec3 pos = gl_in[0].gl_Position.xyz;
    gl_Position = vp * m * vec4(pos, 1.0);
    EmitVertex();

    pos = gl_in[1].gl_Position.xyz;
    gl_Position = vp * m * vec4(pos, 1.0);
    EmitVertex();

    pos = gl_in[2].gl_Position.xyz;
    gl_Position = vp * m * vec4(pos, 1.0);
    EmitVertex();

    EndPrimitive();
}
void main()
{
    vp = proj * view;
    mvp = proj * view * modelMatrix;
    drawInstance(vec3(0,20,0));
    // drawInstance(vec3(0,20,0));
    // drawInstance(vec3(0,20,-40));
    // drawInstance(vec3(40,40,0));
    // drawInstance(vec3(-40,0,0));
}
I would recommend taking a completely different approach.
First, don't use geometry shaders for replicating geometry. That's what glDrawArraysInstanced is for.
Second, it's hard to define such a matrix procedurally. This is related to the Hairy Ball Theorem.
Instead, I would generate a bunch of random rotations on the CPU as uniformly distributed quaternions (one standard way to do this is sketched after these steps). Pass each quaternion to the vertex shader as a single vec4 instanced attribute. In the vertex shader:
Offset the tree vertex by (0, 0, radiusOfThePlanet) so that it's located at the north pole (assuming the Z-axis is up).
Apply the quaternion rotation (it rotates around the planet center, so the tree stays on the surface).
Apply the planet model-view and camera projection matrices as usual.
This will yield an unbiased, uniformly distributed random set of trees.
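A minimal CPU-side sketch, assuming GLM and Shoemake's algorithm for uniform random rotations; the function name is made up for illustration:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <cmath>
#include <random>

// Shoemake's method: three uniform samples in [0, 1) give a uniformly
// distributed unit quaternion, uploaded later as a per-instance vec4 attribute.
glm::quat uniformRandomQuat(std::mt19937 &rng)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    const float u1 = dist(rng), u2 = dist(rng), u3 = dist(rng);
    const float a = std::sqrt(1.0f - u1);
    const float b = std::sqrt(u1);
    const float twoPi = 6.28318530718f;
    return glm::quat(b * std::cos(twoPi * u3),  // w
                     a * std::sin(twoPi * u2),  // x
                     a * std::cos(twoPi * u2),  // y
                     b * std::sin(twoPi * u3)); // z
}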
Found a solution to the problem which allows me to place objects on the surface of a sphere facing in the correct directions. Here is the code:
mat4 m = mat4(1);
vec3 worldPos = getWorldPoint(sphericalCoords);
//Add a random number to the world pos, then normalize it so that it is a point on a
//unit sphere slightly different to the world pos. The vector between them is a tangent.
//Change this value to rotate the object once placed on the sphere
vec3 xAxis = normalize(normalize(worldPos + vec3(0.0,0.2,0.0)) - normalize(worldPos));
//Planet is at 0,0,0 so world pos can be used as the normal, and therefore the y axis
vec3 yAxis = normalize(worldPos);
//We can cross the y and x axis to generate a bitangent to use as the z axis
vec3 zAxis = normalize(cross(yAxis, xAxis));
//This is our rotation matrix!
mat3 baseMat = mat3(xAxis, yAxis, zAxis);
//Fill this into our 4x4 matrix
m = mat4(baseMat);
//Transform m by the Radius in the y axis to put it on the surface
mat4 m2 = transformMatrix(mat4(1), vec3(0,radius,0));
m = m * m2;
//Multiply by the MVP to project correctly
m = mvp* m;
//Draw an instance of your object
drawInstance(m);

Why are my specular highlights elliptical?

I think these should be circular. I assume there is something wrong with my normals but I haven't found anything wrong with them. Then again, finding a good test for the normals is difficult.
Here is the image:
Here is my shading code for each light, leaving out the recursive part for reflections:
lighting = ( hit.obj.ambient + hit.obj.emission );
const glm::vec3 view_direction = glm::normalize(eye - hit.pos);
const glm::vec3 reflection = glm::normalize(( static_cast<float>(2) * ( glm::dot(view_direction, hit.normal) * hit.normal ) ) - view_direction);
for(int i = 0; i < numused; ++i)
{
    glm::vec3 hit_to_light = (lights[i].pos - hit.pos);
    float dist = glm::length(hit_to_light);
    glm::vec3 light_direction = glm::normalize(hit_to_light);
    Ray lightray(hit.pos, light_direction);
    Intersection blocked = Intersect(lightray, scene, verbose ? verbose : false);
    if( blocked.dist >= dist)
    {
        glm::vec3 halfangle = glm::normalize(view_direction + light_direction);
        float specular_multiplier = pow(std::max(glm::dot(halfangle, hit.normal), 0.f), shininess);
        glm::vec3 attenuation_term = lights[i].rgb * (1.0f / (attenuation + dist * linear + dist*dist * quad));
        glm::vec3 diffuse_term = hit.obj.diffuse * ( std::max(glm::dot(light_direction, hit.normal), 0.f) );
        glm::vec3 specular_term = hit.obj.specular * specular_multiplier;
    }
}
And here is the line where I transform the object space normal to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
Using the full Phong model instead of Blinn-Phong, I get teardrop highlights:
If I color pixels according to the (absolute value of the) normal at the intersection point, I get the following image (r = x, g = y, b = z):
I've solved this issue. It turns out that the normals were all just slightly off, but not enough that the image colored by normals could depict it.
I found this out by computing the normals on spheres with a uniform scale and a translation.
The problem occurred in the line where I transformed the normals to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
I assumed that the homogeneous coordinate would be 0 after the transformation because it was zero beforehand (rotations and scales do not affect it, and because it is 0, neither can translations). However, it is not 0 because the matrix is transposed, so the bottom row was filled with the inverse translations, causing the homogeneous coordinate to be nonzero.
The 4-vector is then normalized and the result is assigned to a 3-vector. The constructor for the 3-vector simply removes the last entry, so the normal was left unnormalized.
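A minimal sketch of the fix described above, assuming transinv is the inverse-transpose of the model matrix: transform with its upper-left 3x3 only (which discards the troublesome bottom row) and normalize as a plain 3-vector.
// Rotate/scale the object-space normal with the 3x3 part of the inverse-transpose,
// so no translation terms leak in, then normalize in 3D.
glm::vec3 objectNormal = glm::normalize(p - sphere_center);
*norm = glm::normalize(glm::mat3(transinv) * objectNormal);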
Here's the final picture:

glm - Decompose mat4 into translation and rotation?

For purposes of lerping I need to decompose a 4x4 matrix into a quaternion and a vec3.
Grabbing the quaternion is simple, as you can just pass the matrix into the constructor, but I can't find a way to grab the translation.
Surely there must be a way?
It looks like glm 0.9.6 supports matrix decomposition
http://glm.g-truc.net/0.9.6/api/a00204.html
#include <glm/gtx/matrix_decompose.hpp>
glm::mat4 transformation; // your transformation matrix.
glm::vec3 scale;
glm::quat rotation;
glm::vec3 translation;
glm::vec3 skew;
glm::vec4 perspective;
glm::decompose(transformation, scale, rotation, translation, skew, perspective);
glm::vec3(m[3]) is the position vector (assuming m is a glm::mat4).
As of glm 0.9.8.1 you have to include:
#include <glm/gtx/matrix_decompose.hpp>
To use it:
glm::mat4 transformation; // your transformation matrix.
glm::vec3 scale;
glm::quat rotation;
glm::vec3 translation;
glm::vec3 skew;
glm::vec4 perspective;
glm::decompose(transformation, scale, rotation, translation, skew,perspective);
Keep in mind that the resulting quaternion is not correct.
It returns its conjugate!
To fix this add this to your code:
rotation=glm::conjugate(rotation);
I figured I'd post an updated and complete answer for 2019. Credit where it's due: this is based on valmo's answer and includes some items from Konstantinos Roditakis's answer, as well as some additional info I ran into.
Anyway, as of version 0.9.9 you can still use the experimental matrix decomposition: https://glm.g-truc.net/0.9.9/api/a00518.html
First, and the part I am adding because I don't see it anywhere else, is that you will get an error unless you define the following before the include below:
#define GLM_ENABLE_EXPERIMENTAL
Next, you have to include:
#include <glm/gtx/matrix_decompose.hpp>
Finally, an example of use:
glm::mat4 transformation; // your transformation matrix.
glm::vec3 scale;
glm::quat rotation;
glm::vec3 translation;
glm::vec3 skew;
glm::vec4 perspective;
glm::decompose(transformation, scale, rotation, translation, skew,perspective);
Also, the Quaternion, as stated in Konstantinos Roditakis's answer, is indeed incorrect and can be fixed by applying the following:
rotation = glm::conjugate(rotation);
I made my own decompose function that doesn't need "skew" and "perspective" components.
void decomposeMtx(const glm::mat4& m, glm::vec3& pos, glm::quat& rot, glm::vec3& scale)
{
    pos = m[3];
    for (int i = 0; i < 3; i++)
        scale[i] = glm::length(glm::vec3(m[i]));
    const glm::mat3 rotMtx(
        glm::vec3(m[0]) / scale[0],
        glm::vec3(m[1]) / scale[1],
        glm::vec3(m[2]) / scale[2]);
    rot = glm::quat_cast(rotMtx);
}
If you don't need scale either, it can be further simplified:
void decomposeMtx(const glm::mat4& m, glm::vec3& pos, glm::quat& rot)
{
    pos = m[3];
    rot = glm::quat_cast(m);
}
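Since the question is about lerping, here is a minimal usage sketch under the assumption that the two endpoint matrices (named t0 and t1 here purely for illustration) contain no skew, perspective or scale:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 lerpTransforms(const glm::mat4 &t0, const glm::mat4 &t1, float alpha)
{
    glm::vec3 p0, p1;
    glm::quat r0, r1;
    decomposeMtx(t0, p0, r0); // the simplified helper from above (no scale)
    decomposeMtx(t1, p1, r1);
    const glm::vec3 p = glm::mix(p0, p1, alpha);   // linear interpolation of positions
    const glm::quat r = glm::slerp(r0, r1, alpha); // spherical interpolation of rotations
    return glm::translate(glm::mat4(1.0f), p) * glm::mat4_cast(r);
}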
Sorry for being late. Actually, the reason you have to conjugate the resulting quat is the wrong subtraction order of the matrix components when calculating the x, y, z components of the quaternion.
Here is an explanation and sample code of how it should be.
So basically, in glm's decompose() method (matrix_decompose.inl file), we have:
orientation.x = root * (Row[1].z - Row[2].y);
orientation.y = root * (Row[2].x - Row[0].z);
orientation.z = root * (Row[0].y - Row[1].x);
When it should be:
orientation.x = root * (Row[2].y - Row[1].z);
orientation.y = root * (Row[0].z - Row[2].x);
orientation.z = root * (Row[1].x - Row[0].y);
Also see this implementation, which looks very close to the one found in GLM but is the correct one.

Opengl-es 1.x add shadow?

Is there an easy way to add shadows in OpenGL ES 1.x, or only in 2.0?
For projecting a shadow on a plane there's a simple way (not very efficient, but simple).
This function is not mine; I forget where I found it. What it does is create a projection matrix that maps everything you draw onto a single plane.
static inline void glShadowProjection(float *l, float *e, float *n)
{
    float d, c;
    float mat[16];

    // These are c and d (corresponding to the tutorial)
    d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    c = e[0]*n[0] + e[1]*n[1] + e[2]*n[2] - d;

    // Create the matrix. OpenGL uses column by column ordering.
    mat[0]  = l[0]*n[0]+c;
    mat[4]  = n[1]*l[0];
    mat[8]  = n[2]*l[0];
    mat[12] = -l[0]*c-l[0]*d;

    mat[1]  = n[0]*l[1];
    mat[5]  = l[1]*n[1]+c;
    mat[9]  = n[2]*l[1];
    mat[13] = -l[1]*c-l[1]*d;

    mat[2]  = n[0]*l[2];
    mat[6]  = n[1]*l[2];
    mat[10] = l[2]*n[2]+c;
    mat[14] = -l[2]*c-l[2]*d;

    mat[3]  = n[0];
    mat[7]  = n[1];
    mat[11] = n[2];
    mat[15] = -d;

    // Finally multiply the matrices together *plonk*
    glMultMatrixf(mat);
}
Use it like this:
Draw your object.
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts); // Machado
Supply it with a light source position, a plane where the shadow will be projected, and the plane's normal.
float lightPosition[] = {383.0, 461.0, 500.0, 0.0};
float n[] = { 0.0, 0.0, -1.0 }; // Normal vector for the plane
float e[] = { 0.0, 0.0, beltOrigin+1 }; // Point on the plane
glShadowProjection(lightPosition, e, n);
OK, the shadow matrix is applied.
Change the drawing color to something that fits.
glColor4f(0.3, 0.3, 0.3, 0.9);
Draw your object again.
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts); // Machado
That is why this is not efficient: the more complex the object, the more triangles you waste just for a shadow.
Also remember that every manipulation you made to the unshadowed object needs to be done again after the shadow matrix is applied.
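A rough sketch of that draw order, assuming fixed-function GL ES 1.x; applyObjectTransforms() and drawObject() are hypothetical placeholders for whatever transforms and draws your geometry:
glPushMatrix();
applyObjectTransforms();                 // normal, unshadowed pass
drawObject();
glPopMatrix();

glPushMatrix();
glShadowProjection(lightPosition, e, n); // flatten everything onto the plane first
applyObjectTransforms();                 // then repeat the same object transforms
glColor4f(0.3f, 0.3f, 0.3f, 0.9f);
drawObject();
glPopMatrix();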
For more complex stuff the subject is a bit broad, and depends a lot on your scene and complexity.
Projective texture-mapped shadows, like they were done with OpenGL 1.2 without shaders, are also possible. Look for older shadow mapping tutorials written between 1999 and 2002.
