Negative scale in axis Y using applyMatrix does not work - three.js

I am creating a web page to illustrate 3D transformations and I am using Three.js. I have run into a problem when I apply a negative scale on the Y axis: the object is not affected (a face inversion should happen, but it doesn't). Negative scales on the X or Z axis work fine, however. Any help? This is my code:
var m = new THREE.Matrix4(
scaleX, 0, 0, 0,
0, scaleY, 0, 0,
0, 0, scaleZ, 0,
0, 0, 0, 1
);
cube.applyMatrix(m);
If I use cube.scale.set(scaleX, scaleY, scaleZ) the first transformation is performed correctly, but I can't chain it with other transformations. My application requires that the user can apply several transformations in the same scene.
Thanks in advance

Your matrix is not correct.
Try with:
var m = new THREE.Matrix4(
1, 0, 0, scaleX,
0, 1, 0, scaleY,
0, 0, 1, scaleZ,
0, 0, 0, 1
);
cube.applyMatrix(m);
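If the goal is to let the user chain several transformations, one possible approach (a sketch only, not verified against the three.js revision used in the question) is to build each step with the Matrix4 helper methods and multiply them into a single matrix before calling applyMatrix; the rotation and translation values below are placeholders:
var scale = new THREE.Matrix4().makeScale( scaleX, scaleY, scaleZ );
var rotate = new THREE.Matrix4().makeRotationY( Math.PI / 4 );   // placeholder angle
var translate = new THREE.Matrix4().makeTranslation( 1, 0, 0 );  // placeholder offset
// Compose right-to-left: scale first, then rotation, then translation.
var m = new THREE.Matrix4();
m.multiplyMatrices( translate, rotate );
m.multiply( scale );
cube.applyMatrix( m );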

Related

Gridmap Node set_cell_item() rotation of the tile object

I'm developing a procedural map using a 3D GridMap in Godot:
set_cell_item(x: int, y: int, z: int, item: int, orientation: int = 0)
With the last parameter I can set up the orientation of the object... but it looks like it ranges from -1 to 1... so, only 3 options?
Using them makes my tile rotate around the Z axis, and I want it to rotate around the Y axis. The docs point me to
get_orthogonal_index()
But I didn't understand how to use it.
The value goes from 0 to 23, where 0 is no rotation. The documentation of get_orthogonal_index says:
This function considers a discretization of rotations into 24 points on unit sphere, lying along the vectors (x,y,z) with each component being either -1, 0, or 1, and returns the index of the point best representing the orientation of the object. It is mainly used by the GridMap editor. For further details, refer to the Godot source code.
What the 24 rotations are is not easy to visualize. However, suffice it to say they are the rotational symmetries of the cube. In other words, they are all the ways you can take a nondescript cube and rotate it such that it looks the same after the rotation (it is rotated, but being a nondescript cube, it looks the same).
Now, the question is: in what order are these rotations?
Well, wonder no more. Thanks to the magic of looking at the Godot source code, these are the rotations:
// --- x --- --- y --- --- z ---
Basis( 1, 0, 0, 0, 1, 0, 0, 0, 1), // 0
Basis( 0, -1, 0, 1, 0, 0, 0, 0, 1), // 1
Basis(-1, 0, 0, 0, -1, 0, 0, 0, 1), // 2
Basis( 0, 1, 0, -1, 0, 0, 0, 0, 1), // 3
Basis( 1, 0, 0, 0, 0, -1, 0, 1, 0), // 4
Basis( 0, 0, 1, 1, 0, 0, 0, 1, 0), // 5
Basis(-1, 0, 0, 0, 0, 1, 0, 1, 0), // 6
Basis( 0, 0, -1, -1, 0, 0, 0, 1, 0), // 7
Basis( 1, 0, 0, 0, -1, 0, 0, 0, -1), // 8
Basis( 0, 1, 0, 1, 0, 0, 0, 0, -1), // 9
Basis(-1, 0, 0, 0, 1, 0, 0, 0, -1), // 10
Basis( 0, -1, 0, -1, 0, 0, 0, 0, -1), // 11
Basis( 1, 0, 0, 0, 0, 1, 0, -1, 0), // 12
Basis( 0, 0, -1, 1, 0, 0, 0, -1, 0), // 13
Basis(-1, 0, 0, 0, 0, -1, 0, -1, 0), // 14
Basis( 0, 0, 1, -1, 0, 0, 0, -1, 0), // 15
Basis( 0, 0, 1, 0, 1, 0, -1, 0, 0), // 16
Basis( 0, -1, 0, 0, 0, 1, -1, 0, 0), // 17
Basis( 0, 0, -1, 0, -1, 0, -1, 0, 0), // 18
Basis( 0, 1, 0, 0, 0, -1, -1, 0, 0), // 19
Basis( 0, 0, 1, 0, -1, 0, 1, 0, 0), // 20
Basis( 0, 1, 0, 0, 0, 1, 1, 0, 0), // 21
Basis( 0, 0, -1, 0, 1, 0, 1, 0, 0), // 22
Basis( 0, -1, 0, 0, 0, -1, 1, 0, 0) // 23
These are Basis values. They describe the orientation by specifying the directions of the axes: the first three numbers are the x axis, followed by three numbers for the y axis, and three more for the z axis.
The first one, is no rotation at all:
Basis( 1, 0, 0, 0, 1, 0, 0, 0, 1), // 0
Notice that the x axis is 1,0,0, which means it is oriented along x. The y axis is 0,1,0… you guessed it, oriented along y, and 0,0,1 for the z axis being just z. So no rotation, as expected.
As you can see, the first four indices give you rotations that keep the z axis untouched; in other words, rotations around the z axis.
Since you want rotation around the y axis, let us pick the ones that keep the y axis untouched:
Basis( 1, 0, 0, 0, 1, 0, 0, 0, 1), // 0
Basis(-1, 0, 0, 0, 1, 0, 0, 0, -1), // 10
Basis( 0, 0, 1, 0, 1, 0, -1, 0, 0), // 16
Basis( 0, 0, -1, 0, 1, 0, 1, 0, 0), // 22
As for the order… 0 is no rotation, and 10 is a half turn, since the other two axes are flipped. Thus, the sequence is either 0, 22, 10, 16 or 0, 16, 10, 22, depending on whether you want a positive or negative rotation.

DirectX 9 - drawing a 2D sprite in its exact dimensions

I'm trying to build a simple 2D game using DirectX9, and I want to be able to use sprite dimensions and coordinates with no scaling applied.
The book that I'm following ("Introduction to 3D Game Programming with DirectX 9.0c" by Frank Luna) shows a trick using Direct3D's sprite functions to render graphics in 2D, but the book code still sets up a camera using D3DXMatrixLookAtLH and D3DXMatrixPerspectiveFovLH, and the sprite images get scaled in perspective. How do I set up the view and projection so that sprites are rendered at their original dimensions and X-Y coordinates address actual pixel locations within the window?
UPDATE
Although this might not be the ideal solution, I did come up with a workaround. I realized that if I set up the projection matrix with a 90-degree field of view and the near plane at z=0, then all I have to do is look at the origin (0, 0, 0) with D3DXMatrixLookAtLH and step back by half of the screen width (the height of an isosceles right triangle is half of the base).
So for my client area being 400 x 400, the following settings worked for me:
// get client rect
RECT R;
GetClientRect(hWnd, &R);
float width = (float)R.right;
float height = (float)R.bottom;
// step back by 400/2=200 and look at the origin
D3DXMATRIX V;
D3DXVECTOR3 pos(0.0f, 0.0f, (-width*0.5f) / (width/height)); // see "UPDATE 2" below
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&V, &pos, &target, &up);
d3dDevice->SetTransform(D3DTS_VIEW, &V);
// PI x 0.5 -> 90 degrees, set the near plane to z=0
D3DXMATRIX P;
D3DXMatrixPerspectiveFovLH(&P, D3DX_PI * 0.5f, width/height, 0.0f, 5000.0f);
d3dDevice->SetTransform(D3DTS_PROJECTION, &P);
Turning off all the texture filters (or setting them to D3DTEXF_POINT) seems to give the most pixel-accurate result.
Another important thing to note was that CreateWindowEx() with a requested 400 x 400 size returned a client area of something like 387 x 362, so I had to check with GetClientRect(), calculate the difference, and readjust the window size using SetWindowPos() after the initial creation.
The screenshot below shows the result of taking the steps mentioned above. The original bitmap (right) is rendered with no scaling/stretching applied in the app (left)... finally!
UPDATE 2
I hadn't tested the above method for when the aspect ratio isn't 1:1, so I adjusted the code: the amount you step back for the camera position should be window_width * 0.5 / aspect_ratio (where aspect_ratio is width/height).
The DirectX Tool Kit SpriteBatch class is designed to do exactly what you describe. When drawing with Direct3D, clip-space coordinates run from (-1,-1) to (1,1), with (-1,-1) in the lower-left corner.
The following sets up the matrix that lets you specify positions in screen coordinates with (0,0) in the upper-left corner.
// Compute the matrix.
float xScale = (mViewPort.Width > 0) ? 2.0f / mViewPort.Width : 0.0f;
float yScale = (mViewPort.Height > 0) ? 2.0f / mViewPort.Height : 0.0f;
switch( rotation )
{
case DXGI_MODE_ROTATION_ROTATE90:
return XMMATRIX
(
0, -yScale, 0, 0,
-xScale, 0, 0, 0,
0, 0, 1, 0,
1, 1, 0, 1
);
case DXGI_MODE_ROTATION_ROTATE270:
return XMMATRIX
(
0, yScale, 0, 0,
xScale, 0, 0, 0,
0, 0, 1, 0,
-1, -1, 0, 1
);
case DXGI_MODE_ROTATION_ROTATE180:
return XMMATRIX
(
-xScale, 0, 0, 0,
0, yScale, 0, 0,
0, 0, 1, 0,
1, -1, 0, 1
);
default:
return XMMATRIX
(
xScale, 0, 0, 0,
0, -yScale, 0, 0,
0, 0, 1, 0,
-1, 1, 0, 1
);
}
In Direct3D 9 the pixel centers were defined a little differently than in Direct3D 10/11/12, so the typical solution in the legacy API was to add a (0.5, 0.5) half-pixel offset to all the positions. You don't need to do this with Direct3D 10/11/12.

Three.js WebGL GPGU Birds Example modify shape & number of vertices

I've been tasked with re-purposing the core of this Three.js example for a project demo. http://threejs.org/examples/#webgl_gpgpu_birds The problem is that I've also been asked to change the elements from birds formed of 3 triangles to a different shape. To form the new shape I'll need 3-4x that many triangles/vertices.
Because of the nature of how the example is set up, and the fact that the number of birds and their vertices are created, organized and animated through the buffer geometry and shaders, doing this is difficult (at least for me so far). I've gone through the demo and tried changing everything I can to alter the looping point for what the shader treats as a single bird.
Does anyone smarter than me have any insight (or experience) tweaking this demo to modify the shapes? I have successfully updated the triangles and vertices to what the new shape should be by adding more triangles/vertices to it.
EXAMPLE: (adding 3 more of these to form the new triangles)
verts_push(
2, 0, 0,
1, 1, 0,
0, 0, 0
);
verts_push(
-2, 0, 0,
-1, 1, 0,
0, 0, 0
);
verts_push(
0, 0, -15,
20, 0, 0,
0, 0, 0
);
EXAMPLE: Doubling the number of triangles/vertices.
var triangles = BIRDS * 6;
var points = triangles * 3;
var vertices = new THREE.BufferAttribute( new Float32Array( points * 3 ), 3 );
But somewhere in the code I'm not able to find what I need to modify/multiply so that the shader correctly counts the vertices and knows that the birds are no longer 3 triangles with 9 vertices but now 6 triangles with 18 vertices (or more). I either get errors, or odd shapes with vertices moving that I don't want to move, because whatever I've added doesn't match the vertex count the shader expects. So far I have had no luck getting it to work properly. Any help would be appreciated!
After a lot of trial and error I figured out what I needed to modify to both add more triangles and change what the example considers a single bird. For my purposes I tripled the number of triangles per "bird" by making the following changes.
Edit 1: Multiplied the number of triangles by an additional 3
var triangles = BIRDS * 3 * 3;
Edit 2: Increase the number of vertex sets pushed from 3 to 9
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
Edit 3: In the for loop make the following change to the birdVertex attribute
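// one bird is now 9 triangles * 3 = 27 vertices, so the vertex index wraps at 27 instead of 9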
birdVertex.array[ v ] = v % (9 * 3);
This successfully gave me enough vertices to play with in order to start creating new shapes, as well as control over the points I wanted within the birdVS shader.
This worked even with older versions of Three.js (I tested with r76).
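Putting the three edits together, the relevant setup looks roughly like this (a rough consolidation against the r76-era webgl_gpgpu_birds example; BIRDS, verts_push and birdVertex are names from the original demo, and the zeroed coordinates are placeholders):
var TRIANGLES_PER_BIRD = 9;                   // the original example uses 3
var VERTS_PER_BIRD = TRIANGLES_PER_BIRD * 3;  // 27 vertices per bird
var triangles = BIRDS * TRIANGLES_PER_BIRD;   // Edit 1
var points = triangles * 3;
var vertices = new THREE.BufferAttribute( new Float32Array( points * 3 ), 3 );
for ( var f = 0; f < BIRDS; f++ ) {
    // Edit 2: push 9 zeroed triangles per bird instead of 3
    for ( var t = 0; t < TRIANGLES_PER_BIRD; t++ ) {
        verts_push( 0, 0, 0,  0, 0, 0,  0, 0, 0 );
    }
}
// Edit 3: each bird's vertex index wraps at 27 rather than 9
for ( var v = 0; v < triangles * 3; v++ ) {
    birdVertex.array[ v ] = v % VERTS_PER_BIRD;
}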

WebGL LookAt works on -z but broke on +z?

I'm trying to build my own camera class in WebGL for learning purposes. It's almost complete, but I have one slight issue with the lookAt function. If I translate my object in the -z direction it works fine, but once I move it in the +z direction it acts as if it's doing the opposite, and the object begins to stretch across the screen if I move the camera up or down.
This is what I currently have:
var zAxis = new Vector(cameraPosition.x, cameraPosition.y, cameraPosition.z)
.subtract(target)
.normalize();
var xCross = math.cross(up.flatten(3), zAxis.flatten(3));
var xAxis = new Vector(xCross[0], xCross[1], xCross[2]).normalize();
var yCross = math.cross(zAxis.flatten(3), xAxis.flatten(3));
var yAxis = new Vector(yCross[0], yCross[1], yCross[2]).normalize();
var orientation = [
xAxis.x, xAxis.y, xAxis.z, 0,
yAxis.x, yAxis.y, yAxis.z, 0,
zAxis.x, zAxis.y, zAxis.z, 0,
0, 0, 0, 1
];
var translation = [
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
cameraPosition.x, cameraPosition.y, cameraPosition.z, 1
];
this.multiply(translation);
this.multiply(orientation);
Does anyone know what I've done wrong?
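For comparison, here is a minimal, self-contained sketch of the conventional lookAt construction using plain arrays (the helper functions below are illustrative, not part of the question's Vector/math library). It returns the camera matrix as a column-major array; the view matrix is the inverse of this matrix:
function subtract(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function cross(a, b) {
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) {
    var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return len > 0 ? [v[0] / len, v[1] / len, v[2] / len] : [0, 0, 0];
}
function lookAt(eye, target, up) {
    var zAxis = normalize(subtract(eye, target)); // camera looks down its -z axis
    var xAxis = normalize(cross(up, zAxis));
    var yAxis = cross(zAxis, xAxis);              // already unit length
    return new Float32Array([
        xAxis[0], xAxis[1], xAxis[2], 0,
        yAxis[0], yAxis[1], yAxis[2], 0,
        zAxis[0], zAxis[1], zAxis[2], 0,
        eye[0],   eye[1],   eye[2],   1
    ]);
}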

How to rotate (after a 90 degrees rotation in x axis ) in the new coordinates on the y axis in openGL es 2.0

I'm rotating a cube 90 degrees around the x axis; after that I want to rotate it another 90 degrees around the y axis, but it doesn't give the result I expect, since it was already rotated before.
I'd like the rotation to happen, let's say, in world coordinates... I think my current code is resetting the identity matrix, but if I remove that line nothing renders. Here is my code:
public void onDrawFrame(GL10 arg0) {
// GLES20.glEnable(GLES20.GL_TEXTURE_CUBE_MAP);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
GLES20.glUseProgram(iProgId);
cubeBuffer.position(0);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 0, cubeBuffer);
GLES20.glEnableVertexAttribArray(iPosition);
texBuffer.position(0);
GLES20.glVertexAttribPointer(iTexCoords, 3, GLES20.GL_FLOAT, false, 0, texBuffer);
GLES20.glEnableVertexAttribArray(iTexCoords);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, iTexId);
GLES20.glUniform1i(iTexLoc, 0);
Matrix.setIdentityM(m_fIdentity, 0);
if(rotating == true)
{
rotate();
}
Matrix.rotateM(m_fIdentity, 0, -xAngle, 0, 1, 0);
Matrix.rotateM(m_fIdentity, 0, -yAngle, 1, 0, 0);
Matrix.multiplyMM(m_fVPMatrix, 0, m_fViewMatrix, 0, m_fIdentity, 0);
Matrix.multiplyMM(m_fVPMatrix, 0, m_fProjMatrix, 0, m_fVPMatrix, 0);
// Matrix.translateM(m_fVPMatrix, 0, 0, 0, 1);
GLES20.glUniformMatrix4fv(iVPMatrix, 1, false, m_fVPMatrix, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, 36, GLES20.GL_UNSIGNED_SHORT, indexBuffer);
// GLES20.glDisable(GLES20.GL_TEXTURE_CUBE_MAP);
}
