OpenGL ES 1.x: add shadows? - opengl-es

Is there an easy way to add shadows in opengl-es 1.x? Or only in 2.0?

For projecting a shadow onto a plane there's a simple way (not very efficient, but simple).
This function is not mine; I forget where I found it. What it does is create a projection matrix that maps everything you draw onto a single plane.
static inline void glShadowProjection(float *l, float *e, float *n)
{
    float d, c;
    float mat[16];

    // These are c and d (corresponding to the tutorial)
    d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    c = e[0]*n[0] + e[1]*n[1] + e[2]*n[2] - d;

    // Create the matrix. OpenGL uses column-major ordering.
    mat[0]  = l[0]*n[0] + c;
    mat[4]  = n[1]*l[0];
    mat[8]  = n[2]*l[0];
    mat[12] = -l[0]*c - l[0]*d;

    mat[1]  = n[0]*l[1];
    mat[5]  = l[1]*n[1] + c;
    mat[9]  = n[2]*l[1];
    mat[13] = -l[1]*c - l[1]*d;

    mat[2]  = n[0]*l[2];
    mat[6]  = n[1]*l[2];
    mat[10] = l[2]*n[2] + c;
    mat[14] = -l[2]*c - l[2]*d;

    mat[3]  = n[0];
    mat[7]  = n[1];
    mat[11] = n[2];
    mat[15] = -d;

    // Finally multiply the matrices together *plonk*
    glMultMatrixf(mat);
}
Use it like this:
Draw your object.
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts); // Machado
Supply the function with a light source position, a point on the plane where the shadow will be projected, and that plane's normal.
float lightPosition[] = {383.0, 461.0, 500.0, 0.0};
float n[] = { 0.0, 0.0, -1.0 };         // Normal vector of the plane
float e[] = { 0.0, 0.0, beltOrigin+1 }; // A point on the plane
glShadowProjection(lightPosition, e, n);
OK, the shadow matrix is now applied.
Change the drawing color to something that fits.
glColor4f(0.3, 0.3, 0.3, 0.9);
Draw your object again.
glDrawArrays(GL_TRIANGLES, 0, machadoNumVerts); // Machado
This is why the approach is not efficient: the more complex the object, the more triangles you waste just to draw its shadow.
Also remember that every transformation you applied to the unshadowed object needs to be repeated after the shadow matrix is applied.
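To make that ordering concrete, here is a minimal sketch of one frame using this technique (the drawObject() helper and the objX/objY/objZ placement values are hypothetical stand-ins for the glDrawArrays call and transformations above):

// Minimal frame sketch for the OpenGL ES 1.x fixed-function pipeline.
// drawObject() is a hypothetical helper wrapping the glDrawArrays call shown above.
glMatrixMode(GL_MODELVIEW);

// 1. Draw the real object with its usual transformations.
glPushMatrix();
glTranslatef(objX, objY, objZ);            // the object's normal placement
drawObject();
glPopMatrix();

// 2. Draw the shadow: apply the plane-projection matrix first,
//    then repeat the exact same object transformations after it.
glPushMatrix();
glShadowProjection(lightPosition, e, n);   // flattens everything onto the plane
glTranslatef(objX, objY, objZ);            // same placement, applied after the shadow matrix
glColor4f(0.3f, 0.3f, 0.3f, 0.9f);
drawObject();
glPopMatrix();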
For more complex situations the subject is a bit broad, and the right approach depends a lot on your scene and its complexity.

Projective texture mapped shadows like they were done with OpenGL-1.2 without shaders are possible. Look for older shadow mapping tutorials, written between 1999 and 2002.

Related

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self-education. In my camera class I am using glm::lookAt() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top, and bottom clipping planes. However, I seem to be either doing something wrong for my near/far planes, or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX + bY + cZ + d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would be shifting/translating the point P0 of a plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.0f along the z-axis, as expected for planes perpendicular to my z basis vector.
So when testing a point such as (0, 0, -5) in world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;

Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
    viewMatrix{ glm::lookAt(eye, center, UPvec) },
    perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
    frustumLeftPlane {setPlane(0, 1)},
    frustumRighPlane {setPlane(0, 0)},
    frustumBottomPlane {setPlane(1, 1)},
    frustumTopPlane {setPlane(1, 0)},
    frstumNearPlane {setPlane(2, 0)},
    frustumFarPlane {setPlane(2, 1)},
The frustum objects are based off the following struct:
struct Plane
{
    glm::vec4 normal;
    float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point to plane distance check like dist = n * x - d (for a given point x, normal n, offset d, * is dot product), which can then be written as dist = [n, d] * [x, -1].
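As a small illustration of that convention (a sketch only, assuming the plane is packed into a glm::vec4 whose xyz part is the unit normal n and whose w component is the offset d):

#include <glm/glm.hpp>

// Signed distance from point x to the plane, with the plane stored as vec4(n, d).
// Follows the formula from side note 1: dist = n · x - d = [n, d] · [x, -1].
float signedDistance(const glm::vec4& plane, const glm::vec3& x)
{
    return glm::dot(plane, glm::vec4(x, -1.0f));
}

A point lies exactly on the plane when the distance is zero; the sign tells you which side of the plane it is on.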
Side note 2: Most software and also hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.

Apply matrix transformation to a sphere

I have a Sphere structure that looks like this
struct Sphere {
    vec3 _center;
    float _radius;
};
How do I apply a 4x4 transformation matrix to that sphere? The matrix may contain a scale factor, a rotation (which obviously will not affect the sphere), and a translation.
The current approach I'm using contains three length() methods (that have sqrt() in them) which are pretty slow.
glm::vec3 extractTranslation(const glm::mat4 &m)
{
    glm::vec3 translation;
    // Extract the translation
    translation.x = m[3][0];
    translation.y = m[3][1];
    translation.z = m[3][2];
    return translation;
}

glm::vec3 extractScale(const glm::mat4 &m) // should work only if matrix is calculated as M = T * R * S
{
    glm::vec3 scale;
    scale.x = glm::length( glm::vec3(m[0][0], m[0][1], m[0][2]) );
    scale.y = glm::length( glm::vec3(m[1][0], m[1][1], m[1][2]) );
    scale.z = glm::length( glm::vec3(m[2][0], m[2][1], m[2][2]) );
    return scale;
}

float extractLargestScale(const glm::mat4 &m)
{
    glm::vec3 scale = extractScale(m);
    return glm::max(scale.x, glm::max(scale.y, scale.z));
}

void Sphere::applyTransformation(const glm::mat4 &transformation)
{
    glm::vec4 center = transformation * glm::vec4(_center, 1.0f);
    float largestScale = extractLargestScale(transformation);
    set(glm::vec3(center)/* / center.w */, _radius * largestScale);
}
I wonder if anyone knows of a more efficient way to do this?
This is a question about efficiency and specifically to avoid doing the square root. One idea would be to defer doing the square root until the last moment. Since length and length squared are increasing functions starting at 0, comparing length squared is the same as comparing length. So you could avoid the three calls to length and make it one.
#include <glm/gtx/norm.hpp>
#include <algorithm>
#include <cmath>

glm::vec3 extractScale(const glm::mat4 &m)
{
    // length2 returns length squared i.e. v·v
    // no square root involved
    return glm::vec3(glm::length2( glm::vec3(m[0]) ),
                     glm::length2( glm::vec3(m[1]) ),
                     glm::length2( glm::vec3(m[2]) ));
}

void Sphere::applyTransformation(const glm::mat4 &transformation)
{
    glm::vec4 center = transformation * glm::vec4(_center, 1.0f);
    glm::vec3 scalesSq = extractScale(transformation);
    float const maxScaleSq = *std::max_element(&scalesSq[0], &scalesSq[0] + scalesSq.length()); // length gives the dimension here i.e. 3
    // one sqrt when you know the largest of the three
    float const largestScale = std::sqrt(maxScaleSq);
    set(glm::vec3(center), _radius * largestScale);
}
Aside:
Can the scale be non-uniform too? From the code it looks like it could. A non-uniform scale means the scaling ratios along the different axes aren't the same; e.g. S(1, 2, 4) is non-uniform while S(2, 2, 2) is uniform. See this intuitive primer on transformations to understand them better; it has animations to demonstrate such differences.
If the scale is non-uniform, transforming the radius with the largest scale isn't right: the sphere would actually become an ellipsoid, so just scaling the radius isn't correct. You'd have to transform the sphere into an ellipsoid with semi-principal axes of differing lengths.
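As a rough sketch of that last point (purely illustrative and not from the original answer; the Ellipsoid struct is hypothetical, and the matrix is assumed to be M = T * R * S with non-zero scale): the transformed sphere becomes an ellipsoid whose axis directions are the normalized columns of the upper-left 3x3 block, and whose semi-axis lengths are the radius times those columns' lengths.

#include <glm/glm.hpp>

// Hypothetical ellipsoid: center, three unit axis directions, three semi-axis lengths.
struct Ellipsoid {
    glm::vec3 center;
    glm::vec3 axes[3];
    glm::vec3 semiLengths;
};

// Sketch: the ellipsoid a sphere maps to under m (assumed M = T * R * S, scale != 0).
Ellipsoid transformSphere(const glm::vec3& center, float radius, const glm::mat4& m)
{
    Ellipsoid e;
    e.center = glm::vec3(m * glm::vec4(center, 1.0f));
    for (int i = 0; i < 3; ++i) {
        glm::vec3 col(m[i]);          // i-th basis vector after rotation and scale
        float len = glm::length(col); // scale factor along that axis
        e.axes[i] = col / len;
        e.semiLengths[i] = radius * len;
    }
    return e;
}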

Why is this basic "rotate around the origin" failing to work?

I've done this a hundred times, but this is my first time with a manually constructed cube made of "sticks", which are 3D lines. It's built around the origin, extending 5 units from the origin in each of the X, Y, and Z directions.
When I rotate it, I'm still "inside it" and it rotates around me (the camera). I'm applying a translation and a rotation, so I'm stymied as to what I'm doing wrong.
Here's the basic code to rotate the box, by which I mean generate its world matrix:
float rotateX = 0.0f, rotateY = 0.0f, rotateZ = 0.0f;
XMFLOAT4 positionBox = XMFLOAT4(0, 0, -50, 1); // Camera at origin looking at this
XMMATRIX matrixCubeWorld;

void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
    auto pCamera = g_GameServices.GetService<CWorldCamera>();
    XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
    XMMATRIX rotation = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
    matrixCubeWorld = rotation * translation;
    if (GetKeyState('X') < 0)
        rotateX = RotateAround(rotateX, fElapsedTime);
    if (GetKeyState('Y') < 0)
        rotateY = RotateAround(rotateY, fElapsedTime);
}
And when I set up to draw, I use that matrix:
D3D11_MAPPED_SUBRESOURCE MappedResource;
V(pd3dImmediateContext->Map(_pVertexShaderVariables, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource));
auto pCB = reinterpret_cast<VSCB3DLineChangesEveryFrame *>(MappedResource.pData);
pCB->_gWorldViewProj = matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix();
pd3dImmediateContext->Unmap(_pVertexShaderVariables, 0);
return hr;
...and the shader is as simple as can be:
VertexShaderOutput Line3DVertexShaderFunction(float3 position : POSITION, float4 color : COLOR, float2 tex : TEXCOORD0)
{
    VertexShaderOutput output;
    output.position = mul(float4(position, 1), _gWorldViewProj);
    output.color = color;
    output.tex = tex;
    return output;
}
So do I have a bug or a misunderstanding? I've tried with the inverse of the translation, thinking that would 'bring it back to the origin before rotating' but didn't improve it.
The transformations look good, imho.
Maybe it's due to the fact that XMMatrixTranslationFromVector only uses the 3D part (x, y, z) of the vector, as the documentation (MSDN) says.
Also make sure that the RotateAround function and the camera's view/projection matrices give correct results.
Best regards.

OpenGL ortho, perspective and frustum projections

I am trying to understand OpenGL projections on a single point. I am using QGLWidget for the rendering context and QMatrix4x4 for the projection matrix. Here are the vertex shader and the draw function:
attribute vec4 vPosition;
uniform mat4 projection;
uniform mat4 modelView;

void main()
{
    gl_Position = projection * vPosition;
}
void OpenGLView::Draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(programObject);
    glViewport(0, 0, width(), height());
    qreal aspect = (qreal)800 / ((qreal)600);
    const qreal zNear = 3.0f, zFar = 7.0f, fov = 45.0f;

    QMatrix4x4 projection;
    projection.setToIdentity();
    projection.ortho(-1.0f, 1.0f, -1.0f, 1.0f, -20.0f, 20.0f);
    // projection.frustum(-1.0f, 1.0f, -1.0f, 1.0f, -20.0f, 20.0f);
    // projection.perspective(fov, aspect, zNear, zFar);

    position.setToIdentity();
    position.translate(0.0f, 0.0f, -5.0f);
    position.rotate(0, 0, 0, 0);

    QMatrix4x4 mvpMatrix = projection * position;
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            tempMat[r][c] = mvpMatrix.constData()[r*4 + c];
    glUniformMatrix4fv(projection, 1, GL_FALSE, (float*)&tempMat[0][0]);

    // Draw point at 0,0
    GLfloat f_RefPoint[2];
    glUniform4f(color, 1, 0, 1, 1);
    glPointSize(15);
    f_RefPoint[0] = 0;
    f_RefPoint[1] = 0;
    glEnableVertexAttribArray(vertexLoc);
    glVertexAttribPointer(vertexLoc, 2, GL_FLOAT, 0, 0, f_RefPoint);
    glDrawArrays(GL_POINTS, 0, 1);
}
Observations:
1) projection.ortho: the point is rendered on the window, and translating it to different z-axis values has no effect.
2) projection.frustum: the point is drawn on the window only when it is translated with translate(0.0f, 0.0f, -20.0f).
3) projection.perspective: the point is never rendered on the screen.
Could someone help me understand this behaviour?
1) projection.ortho: That is simply how an orthographic projection works; translating along the z-axis does not change the size of what you see. I suggest you search for some images or videos about the differences between the different projections.
2) projection.frustum: I don't know how you expect to see a single point move under a Z translation, but if you had a square it would become smaller as you translate it farther away (with ortho it would stay the same). There is an issue here: you use -20.0f for zNear, while this value should be positive. The values passed to this method are in most cases derived from a field of view, aspect ratio, and so on. In any case, you will not be able to see anything closer than zNear or farther than zFar.
3) projection.perspective: This is the same as frustum, but it already takes parameters such as field of view and aspect ratio. The reason you do not see anything is that your zNear is at 3.0f and the point is 0.0f away. By translating the point you will be able to see it; try translating it by anything from 3.0f to 7.0f (3.0f is your zNear and 7.0f is your zFar). Alternatives are increasing zFar or translating the projection matrix backwards. Mostly, in your case, I suggest adding some "look at" system on top of the projection matrix, as it gives you easy-to-use tools to manipulate your "camera": in most cases you set a point you are looking from, a point you are looking at, and an up vector.
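For illustration, a minimal sketch of the perspective-plus-lookAt setup suggested above (a guess at a working configuration with Qt's QMatrix4x4, not the asker's actual code; the hard-coded values are placeholders):

#include <QMatrix4x4>
#include <QVector3D>

// Build a perspective projection and a simple "look at" view with QMatrix4x4.
QMatrix4x4 projection;
projection.perspective(45.0f, 800.0f / 600.0f, 0.1f, 100.0f); // zNear must be positive

QMatrix4x4 view;
view.lookAt(QVector3D(0.0f, 0.0f, 5.0f),   // point you are looking from
            QVector3D(0.0f, 0.0f, 0.0f),   // point you are looking at
            QVector3D(0.0f, 1.0f, 0.0f));  // up vector

QMatrix4x4 model;                          // identity: the point stays at the origin
QMatrix4x4 mvp = projection * view * model;
// The origin now sits 5 units in front of the camera, between zNear (0.1) and zFar (100).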

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0) and the box is rendered correctly. See this image below:
When the position of the box (not the camera) changes, it is rendered correctly. For example, the next image shows the box at (1, 0, 0) and the camera still at (0, 0, -2):
However, as soon as the camera moves left or right, the box should go to the opposite direction, but it looks twisted instead. Here is an example when the camera is at (1, 0, -2) and looking at (1, 0, 100). The box is still at (0, 0, 0):
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix;    // A matrix to store the rotation information
D3DXMATRIX scalingMatrix;     // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);

// Make the scene being centered on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;

// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());

// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));

D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
                   &cameraPosition,
                   &lookAtPosition,
                   &upDirection);

RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);

// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
                           (FLOAT)(D3DXToRadian(45)), // Field of view
                           width / height,            // Aspect ratio
                           1.0f,                      // Near view-plane
                           100.0f);                   // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
    VOut output;
    output.color = color;
    output.texcoord = texcoord;
    output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
    return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are setting finalMatrix with a row-major matrix, but HLSL expects a column-major matrix by default. The solution is to use D3DXMatrixTranspose before updating the constant buffer, or to declare the matrix as row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
    row_major float4x4 finalMatrix;
}
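Alternatively, a minimal sketch of the transpose route on the C++ side (reusing the same finalMatrix and constant-buffer update from the question):

// Transpose on the CPU so the shader can keep HLSL's default column-major layout.
D3DXMATRIX transposedMatrix;
D3DXMatrixTranspose(&transposedMatrix, &finalMatrix);
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &transposedMatrix, 0, 0);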
