Why is this basic "rotate around the origin" failing to work?

I've done this a hundred times, but this is my first time with a manually constructed cube made of "sticks", which are 3D lines. It's built around the origin, extending 5 units from the origin along each of the X, Y, and Z axes.
When I rotate it, I'm still "inside it" and it rotates around me (the camera). I'm applying a translation and rotation, so I'm stymied as to what I'm doing wrong.
Here's the basic code to rotate the box, by which I mean generate its world matrix:
float rotateX = 0.0f, rotateY = 0.0f, rotateZ = 0.0f;
XMFLOAT4 positionBox = XMFLOAT4(0, 0, -50, 1); // Camera at origin looking at this
XMMATRIX matrixCubeWorld;
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
    auto pCamera = g_GameServices.GetService<CWorldCamera>();
    XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
    XMMATRIX rotation = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
    matrixCubeWorld = rotation * translation;
    if (GetKeyState('X') < 0)
        rotateX = RotateAround(rotateX, fElapsedTime);
    if (GetKeyState('Y') < 0)
        rotateY = RotateAround(rotateY, fElapsedTime);
}
And when I set up to draw, I use that matrix:
D3D11_MAPPED_SUBRESOURCE MappedResource;
V(pd3dImmediateContext->Map(_pVertexShaderVariables, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource));
auto pCB = reinterpret_cast<VSCB3DLineChangesEveryFrame *>(MappedResource.pData);
pCB->_gWorldViewProj = matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix();
pd3dImmediateContext->Unmap(_pVertexShaderVariables, 0);
return hr;
...and the shader is as simple as can be:
VertexShaderOutput Line3DVertexShaderFunction(float3 position : POSITION, float4 color : COLOR, float2 tex : TEXCOORD0)
{
    VertexShaderOutput output;
    output.position = mul(float4(position, 1), _gWorldViewProj);
    output.color = color;
    output.tex = tex;
    return output;
}
So do I have a bug or a misunderstanding? I've tried using the inverse of the translation, thinking that would 'bring it back to the origin before rotating', but that didn't improve anything.

The transformations look good, IMHO.
Maybe it's due to the fact that XMMatrixTranslationFromVector only uses the x, y, and z components of the vector it's given, as the documentation (MSDN) says.
Also make sure that the RotateAround function and the camera's view/projection matrices give correct results.
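For reference, a RotateAround along these lines would be one way to sanity-check it (a minimal sketch; the rotation speed and the wrap-around are my assumptions, since the original function isn't shown):
float RotateAround(float angle, float fElapsedTime)
{
    const float speed = XM_PIDIV2; // quarter turn per second (assumed)
    angle += speed * fElapsedTime;
    if (angle > XM_2PI) angle -= XM_2PI; // keep the angle in [0, 2*pi)
    return angle;
}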
Best regards.

Related

Creating a rotate3D() function for PMatrix3D in Processing

Some time ago, I coded a little fidgetable logo based on CSS transforms alone.
You can fiddle with it at https://document.paris/
The result feels nice; it feels natural to click/touch and drag to rotate the logo.
I remember banging my head against the walls until I found out that I could simply chain CSS transforms:
transform: matrix3d(currentMatrix) rotate3d(x, y, z, angle);
And most importantly, to get the currentMatrix I would simply do m = $('#logobackground').css('transform'); with jQuery; the browser would magically return the computed matrix instead of the raw CSS, which saved me from dealing with matrices directly or from infinitely stacking rotate3d() properties.
So the hardest part was calculating the rotate3D arguments (x, y, z, angle) based on mouse inputs. In theory I shouldn't have problems porting this part to Java, so I'll just skip over it.
Now I'm trying to do the exact same thing in Processing, and there are two problems:
There is no rotate3D() in Processing.
There is no browser to apply/chain transformations and return the current matrix state automatically.
Here's the plan/implementation I'm working on:
I need a "currentMatrix" to apply every frame to the scene
PMatrix3D currentMatrix = new PMatrix3D();
In setup() I set it to the "identity matrix", which from what I understand is equivalent to "no transformation".
// set currentMatrix to identity Matrix
currentMatrix.set(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1);
Every frame I would calculate a transformation matrix and apply it to the currentMatrix.
Then I would apply this matrix to the scene.
// Apply Matrix to the currentMatrix
void mouseRotate() {
    float diag = sqrt(pow(width,2)+pow(height,2));
    float x = deltaX()/ diag * 10; // deltaX = difference between the previous mouseX and the current mouseX
    float y = deltaY()/ diag * 10; // deltaY = same for the Y axis
    float angle = sqrt( pow(x, 2) + pow(y, 2) );
    currentMatrix.apply( rotate3D(y,x,0,angle) );
}
// Apply Matrix to the scene
applyMatrix(currentMatrix);
PMatrix3D reference : https://processing.github.io/processing-javadocs/core/processing/core/PMatrix3D.html
ApplyMatrix() reference : https://processing.org/reference/applyMatrix_.html
All I need to do then is to implement the rotate3D css transform as a function which returns a transformation matrix.
Based on what I found on this page https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/rotate3d()
I implemented this first function :
PMatrix3D rotate3D(float x, float y, float z, float a) {
    PMatrix3D rotationMatrix = new PMatrix3D();
    rotationMatrix.set(
        1+(1-cos(a))*(pow(x,2)-1), z*sin(a)+x*y*(1-cos(a)), -y*sin(a)+x*z*(1-cos(a)), 0,
        -z*sin(a)+x*y*(1-cos(a)), 1+(1-cos(a))*(pow(y,2)-1), x*sin(a)+y*z*(1-cos(a)), 0,
        y*sin(a)+x*z*(1-cos(a)), -x*sin(a)+y*z*(1-cos(a)), 1+(1-cos(a))*(pow(z,2)-1), 0,
        0, 0, 0, 1
    );
    return rotationMatrix;
}
and based on what I found on this page https://drafts.csswg.org/css-transforms-2/#Rotate3dDefined
I implemented this other function :
PMatrix3D rotate3Dbis(float getX, float getY, float getZ, float getA) {
    float sc = sin(getA/2)*cos(getA/2);
    float sq = pow(sin(getA/2),2);
    float normalizer = sqrt( pow(getX,2) + pow(getY,2) + pow(getZ,2) );
    float x = getX/normalizer;
    float y = getY/normalizer;
    float z = getZ/normalizer;
    PMatrix3D rotationMatrix = new PMatrix3D();
    rotationMatrix.set(
        1-2*(pow(y,2)+pow(z,2))*sq, 2*(x*y*sq-z*sc), 2*(x*z*sq+y*sc), 0,
        2*(x*y*sq+z*sc), 1-2*(pow(x,2)+pow(z,2))*sq, 2*(y*z*sq-x*sc), 0,
        2*(x*z*sq-y*sc), 2*(y*z*sq+x*sc), 1-2*(pow(x,2)+pow(y,2))*sq, 0,
        0, 0, 0, 1
    );
    return rotationMatrix;
}
When testing, they don't produce exactly the same result with the same inputs (although the differences are kind of "symmetric", which makes me think they are at least somehow equivalent?). Also, rotate3Dbis() has a tendency to produce NaNs, especially when I'm not moving the mouse (x & y = 0).
But most importantly, in the end it doesn't work. Instead of rotating, the drawing just zooms out progressively when I'm using rotate3D(), and rotate3Dbis() doesn't render correctly because of the NaNs.
The overall question:
I'm trying to get guidance from people who understand transformation matrices, and trying to narrow down where the issue is. Are my Processing/Java implementations of rotate3D() flawed? Or does the issue come from somewhere else? And are my rotate3D() and rotate3Dbis() functions equivalent?
You might get away with simply rotating on the X and Y axes, as you already mentioned, using the previous and current mouse coordinates:
PVector cameraRotation = new PVector(0, 0);
void setup(){
    size(900, 900, P3D);
    rectMode(CENTER);
    strokeWeight(9);
    strokeJoin(MITER);
}
void draw(){
    // update "camera" rotation
    if (mousePressed){
        cameraRotation.x += -float(mouseY-pmouseY);
        cameraRotation.y += float(mouseX-pmouseX);
    }
    background(255);
    translate(width * 0.5, height * 0.5, 0);
    rotateX(radians(cameraRotation.x));
    rotateY(radians(cameraRotation.y));
    rect(0, 0, 300, 450);
}
The Document Paris example you've shared also uses easing. You can have a look at this minimal easing Processing example.
Here's a version of the above with easing applied:
PVector cameraRotation = new PVector();
PVector cameraTargetRotation = new PVector();
float easing = 0.01;
void setup(){
    size(900, 900, P3D);
    rectMode(CENTER);
    strokeWeight(9);
    strokeJoin(MITER);
}
void draw(){
    // update "camera" rotation
    if (mousePressed){
        cameraTargetRotation.x += -float(mouseY-pmouseY);
        cameraTargetRotation.y += float(mouseX-pmouseX);
    }
    background(255);
    translate(width * 0.5, height * 0.5, 0);
    // ease rotation
    rotateX(radians(cameraRotation.x -= (cameraRotation.x - cameraTargetRotation.x) * easing));
    rotateY(radians(cameraRotation.y -= (cameraRotation.y - cameraTargetRotation.y) * easing));
    fill(255);
    rect(0, 0, 300, 450);
    fill(0);
    translate(0, 0, 3);
    rect(0, 0, 300, 450);
}
Additionally there's a library called PeasyCam which can make this much simpler.
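For example, a minimal PeasyCam sketch (the camera distance here is arbitrary):
import peasy.*;

PeasyCam cam;

void setup(){
    size(900, 900, P3D);
    rectMode(CENTER);
    // drag the mouse to orbit around the scene origin from 500 units away
    cam = new PeasyCam(this, 500);
}
void draw(){
    background(255);
    rect(0, 0, 300, 450);
}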
If you do want to implement your own version using PMatrix3D, here are a couple of tips that could save you time:
When you instantiate PMatrix3D() it starts as the identity matrix. If you have transformations applied, you can call reset() to get back to identity.
If you want to rotate a PMatrix3D() around an axis, the rotate(float angleInRadians, float axisX, float axisY, float axisZ) override should help.
Additionally you could get away without PMatrix3D since resetMatrix() will reset the global transformation matrix and you can call rotate(float angleInRadians, float axisX, float axisY, float axisZ) directly.
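For instance, a minimal sketch of that last route (my own example; the axis and speed are arbitrary, and it relies on the transformation matrix being reset automatically at the start of each draw()):
float angle = 0;

void setup(){
    size(900, 900, P3D);
}
void draw(){
    background(255);
    translate(width * 0.5, height * 0.5, 0);
    angle += 0.01;
    rotate(angle, 0, 1, 0); // rotate around the Y axis, directly on the global matrix
    box(300);
}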
Part of the answer is a fix added to the first rotate3D function.
I needed to normalize the x,y,z values to avoid the weird scaling.
I'm posting the current state of the code (I'm skipping a few parts for the sake of simplicity):
// Mouse movement since last frame on X axis
float deltaX() {
    return (float)(mouseX-pmouseX);
}
// Mouse movement since last frame on Y axis
float deltaY() {
    return (float)(mouseY-pmouseY);
}
// Convert user input into an angle and an amount to rotate by
void mouseRotate() {
    double diag = Math.sqrt(Math.pow(width,2)+Math.pow(height,2));
    double x = deltaX()/ diag * 50;
    double y = -deltaY()/ diag * 50;
    double angle = Math.sqrt( x*x + y*y );
    currentMatrix.apply( rotate3D((float)y,(float)x,0,(float)angle) );
}
// Convert those values into a rotation matrix
PMatrix3D rotate3D(float getX, float getY, float getZ, float getA) {
    float normalizer = sqrt( getX*getX + getY*getY + getZ*getZ );
    float x = 0;
    float y = 0;
    float z = 0;
    if (normalizer != 0) {
        x = getX/normalizer;
        y = getY/normalizer;
        z = getZ/normalizer;
    }
    float x2 = pow(x,2);
    float y2 = pow(y,2);
    float z2 = pow(z,2); // z is always 0 in mouseRotate(), but compute it for generality
    float sina = sin(getA);
    float f1cosa = 1-cos(getA);
    PMatrix3D rotationMatrix = new PMatrix3D(
        1+f1cosa*(x2-1), z*sina+x*y*f1cosa, -y*sina+x*z*f1cosa, 0,
        -z*sina+x*y*f1cosa, 1+f1cosa*(y2-1), x*sina+y*z*f1cosa, 0,
        y*sina+x*z*f1cosa, -x*sina+y*z*f1cosa, 1+f1cosa*(z2-1), 0,
        0, 0, 0, 1
    );
    return rotationMatrix;
}
// Draw
void draw() {
    mouseRotate();
    applyMatrix(currentMatrix);
    object.render();
}
I thought that using this method would allow me to "stack" cumulative rotations relative to the screen and not relative to the object. But the result always seems to apply the rotation relative to the drawn object.
I am not using a camera because I basically only want to rotate the object on itself. I'm actually a bit lost at the moment on what I should rotate, and when, so that newly applied rotations are relative to the user while previously applied rotations are preserved.
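One thing that may be worth trying (an assumption on my part, not something verified against the full sketch): PMatrix3D.apply() post-multiplies, so each new rotation is expressed in the object's already-rotated local frame, while preApply() pre-multiplies, which keeps each new rotation relative to the screen axes while preserving the accumulated ones:
// Pre-multiply the new rotation so it is applied in screen space
void mouseRotate() {
    double diag = Math.sqrt(Math.pow(width,2)+Math.pow(height,2));
    double x = deltaX()/ diag * 50;
    double y = -deltaY()/ diag * 50;
    double angle = Math.sqrt( x*x + y*y );
    currentMatrix.preApply( rotate3D((float)y,(float)x,0,(float)angle) );
}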

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self-education. In my camera class I am using glm::lookAt() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top, and bottom clipping planes. However, for my near/far planes I am either doing something wrong or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which the point P0 of the plane is shifted/translated along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.0f along the z-axis, as expected for planes perpendicular to my z basis vector.
So when testing a point such as (0, 0, -5) in world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so, testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;
Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
    viewMatrix{ glm::lookAt(eye, center, UPvec) },
    perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
    frustumLeftPlane {setPlane(0, 1)},
    frustumRightPlane {setPlane(0, 0)},
    frustumBottomPlane {setPlane(1, 1)},
    frustumTopPlane {setPlane(1, 0)},
    frustumNearPlane {setPlane(2, 0)},
    frustumFarPlane {setPlane(2, 1)},
The frustum objects are based on the following struct:
struct Plane
{
    glm::vec4 normal;
    float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with a plane is a point-to-plane distance check like dist = n * x - d (for a given point x, normal n, offset d; * is the dot product), which can then be written as dist = [n, d] * [x, -1].
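For illustration, a small sketch of that layout (my own example, following the n * x - d convention above; with the extraction code from the question, where temp[3] enters with a plus sign, you would dot with (x, 1) instead):
#include <glm/glm.hpp>

// plane.xyz holds the normalized normal n, plane.w holds the offset d
float signedDistance(const glm::vec4& plane, const glm::vec3& point)
{
    return glm::dot(plane, glm::vec4(point, -1.f)); // n * x - d
}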
Side note 2: Most software and also hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.

OpenGL ortho, perspective and frustum projections

I am trying to understand OpenGL projections using a single point. I am using QGLWidget for the rendering context and QMatrix4x4 for the projection matrix. Here are the vertex shader and the draw function:
attribute vec4 vPosition;
uniform mat4 projection;
uniform mat4 modelView;
void main()
{
    gl_Position = projection * vPosition;
}
void OpenGLView::Draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glUseProgram(programObject);
    glViewport(0, 0, width(), height());
    qreal aspect = (qreal)800 / ((qreal)600);
    const qreal zNear = 3.0f, zFar = 7.0f, fov = 45.0f;
    QMatrix4x4 projection;
    projection.setToIdentity();
    projection.ortho(-1.0f,1.0f,-1.0f,1.0f,-20.0f,20.0f);
    // projection.frustum(-1.0f,1.0f,-1.0f,1.0f,-20.0f,20.0f);
    // projection.perspective(fov,aspect,zNear, zFar);
    position.setToIdentity();
    position.translate(0.0f, 0.0f, -5.0f);
    position.rotate(0,0,0, 0);
    QMatrix4x4 mvpMatrix = projection * position;
    for (int r=0; r<4; r++)
        for (int c=0; c<4; c++)
            tempMat[r][c] = mvpMatrix.constData()[ r*4 + c ];
    glUniformMatrix4fv(projection, 1, GL_FALSE, (float*)&tempMat[0][0]);
    // Draw point at 0,0
    GLfloat f_RefPoint[2];
    glUniform4f(color,1, 0,1,1);
    glPointSize(15);
    f_RefPoint[0] = 0;
    f_RefPoint[1] = 0;
    glEnableVertexAttribArray(vertexLoc);
    glVertexAttribPointer(vertexLoc, 2, GL_FLOAT, 0, 0, f_RefPoint);
    glDrawArrays (GL_POINTS, 0, 1);
}
Observations:
1) projection.ortho: the point is rendered in the window, and translating the point to different z values has no effect.
2) projection.frustum: the point is drawn in the window only when it is translated as translate(0.0f, 0.0f, -20.0f).
3) projection.perspective: the point is never rendered on the screen.
Could someone help me understand this behaviour?
1) That is just how the ortho projection works: translating along the z-axis doesn't change the size of what you draw. I suggest you search for some images or videos about the differences between the projection types.
2) I don't know how you would see a point translated in the Z coordinate, but if you had a square it would become smaller as you translate it further away (with ortho it would stay the same). There is an issue here, though: you use -20.0f for zNear, while this value should be positive. The values passed to this method should in most cases be generated from a field of view, an aspect ratio, and so on. In any case, you will not be able to see anything closer than zNear or further than zFar.
3) This is the same as frustum, but it already takes a field of view and an aspect ratio as parameters. The reason you do not see anything is that your zNear is at 3.0f and the point is 0.0f away. By translating the point you will be able to see it, but try translating it by anything from 3.0f to 7.0f (3.0f is your zNear and 7.0f is your zFar). Alternatives are increasing zFar or translating the projection matrix backwards. In your case I would mostly suggest adding a "look at" system on top of the projection matrix, as it gives you easy-to-use tools to manipulate your "camera": in most cases you set a point you are looking from, a point you are looking at, and an up vector.
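For instance, a minimal sketch of the look-at route with Qt (the eye position and clip planes here are arbitrary):
#include <QMatrix4x4>
#include <QVector3D>

QMatrix4x4 projection;
projection.perspective(45.0f, aspect, 0.1f, 100.0f); // field of view, aspect ratio, zNear, zFar
QMatrix4x4 view;
view.lookAt(QVector3D(0, 0, 5),  // eye: 5 units in front of the origin
            QVector3D(0, 0, 0),  // center: the point being drawn
            QVector3D(0, 1, 0)); // up vector
QMatrix4x4 mvpMatrix = projection * view; // the point at the origin is now 5 units away, between zNear and zFar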

Processing, Rotate rectangle using matrix?

I am trying to rotate a rectangle without using the rotate function, instead using a matrix. I know how to rotate a line using a matrix, but all my attempts to rotate a rectangle have failed.
I don't think this is useful, but here is my code that rotates the line.
float[][] rotation;
float[] position;
float theta = 180;
float pointX;
float pointY;
void setup() {
    frameRate(60);
    size(600, 600);
    pointX = 0;
    pointY = 0;
    rotation = new float[2][2];
    position = new float[8];
}
void draw() {
    background(200);
    theta = mouseX;
    position[0] = mouseY;
    position[1] = mouseY;
    position[2] = -mouseY;
    position[3] = mouseY;
    rotation[0][0] = cos(radians(theta));
    rotation[0][1] = -sin(radians(theta));
    rotation[1][0] = sin(radians(theta));
    rotation[1][1] = cos(radians(theta));
    float newpos[] = new float[8];
    newpos[0] += position[0] * rotation[0][0];
    newpos[1] += position[1] * rotation[0][1];
    translate(width/2, height/2);
    line(0, 0, pointX+newpos[0], pointY+newpos[1]);
    line(0, 0, pointX+newpos[0] * -1, pointY+newpos[1] * -1);
}
Although the lines behave properly, it is by chance... You have a crucial part of the calculation of the new x and y of the point not as it should be. As you can find on Wikipedia, you calculate the sin and cos entries of the matrix properly, but when creating the new point you don't compute the full matrix-vector product:
x' = x*cos(theta) - y*sin(theta)
y' = x*sin(theta) + y*cos(theta)
Your code takes only one term of each ( position[0] * rotation[0][0] and position[1] * rotation[0][1] ) instead of combining both components of the point with a full row of the matrix; note also that newpos[1] reads from row 0 of the rotation matrix instead of row 1.
Start by having a look at pushMatrix()/popMatrix() and coordinate spaces.
Have a look at Daniel Shiffman's tutorial as well, it's pretty well explained.
If you need to get lower level than this, have a look at the PMatrix2D class.
Notice there is a rotate() function. After you rotate, you can simply apply the matrix (using applyMatrix()), though you might as well use push/pop matrix calls.
Another option is to multiply the vectors (the rectangle's corners) by the rotation matrix and draw the resulting transformed points, as in the sketch below.
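A minimal sketch of that last option (the corner coordinates are arbitrary):
float theta;

void setup() {
    size(600, 600);
}
void draw() {
    background(200);
    theta = radians(mouseX);
    float c = cos(theta);
    float s = sin(theta);
    // the four corners of a rectangle centered on the origin
    float[][] corners = { {-100, -50}, {100, -50}, {100, 50}, {-100, 50} };
    translate(width/2, height/2);
    beginShape();
    for (float[] p : corners) {
        // full 2x2 matrix-vector product for each corner: (x', y') = (x*cos - y*sin, x*sin + y*cos)
        vertex(p[0] * c - p[1] * s, p[0] * s + p[1] * c);
    }
    endShape(CLOSE);
}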

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0), and the box is rendered correctly.
When the position of the box (not the camera) changes, it is still rendered correctly. For example, with the box at (1, 0, 0) and the camera still at (0, 0, -2), everything looks right.
However, as soon as the camera moves left or right, the box should shift in the opposite direction, but it looks twisted instead. For example, this happens when the camera is at (1, 0, -2) looking at (1, 0, 100), with the box still at (0, 0, 0).
Here is how I set up my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix; // A matrix to store the rotation information
D3DXMATRIX scalingMatrix; // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Make the scene being centered on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;
// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());
// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));
D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
&cameraPosition,
&lookAtPosition,
&upDirection);
RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);
// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
(FLOAT)(D3DXToRadian(45)), // Horizontal field of view
width / height, // Aspect ratio
1.0f, // Near view-plane
100.0f); // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
    VOut output;
    output.color = color;
    output.texcoord = texcoord;
    output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
    return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are setting finalMatrix with a row-major matrix, but HLSL by default expects a column-major matrix. The solution is to use D3DXMatrixTranspose before updating the constants, or to declare the matrix row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
    row_major float4x4 finalMatrix;
}
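For the first option, a sketch of the transpose route (reusing the names from the code above):
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Transpose so the default column-major HLSL packing sees the intended layout
D3DXMatrixTranspose(&finalMatrix, &finalMatrix);
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);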
