Triangle is not drawn when the glOrtho z range is between 0 and 1.
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho( -1,1,-1,1,-1,1 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
// Draw a triangle at z = 0.5
glBegin( GL_TRIANGLES );
glColor3f( 1, 0, 0 );
glVertex3f( -0.5, -0.5, 0.5 );
glVertex3f( 0.5, -0.5, 0.5 );
glVertex3f( 0.0, 0.5, 0.5 );
glEnd();
It displays a red triangle. It's fine for me.
But when I change the near clipping plane to 0, it displays nothing.
The triangle is drawn at z = 0.5, and the near/far range is between 0 and 1.
So why is the triangle not drawn?
The following code is used to display the triangle at z = 0.5:
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho( -1,1,-1,1,0,1 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
// Draw a triangle at z = 0.5. This triangle is not displayed.
glBegin( GL_TRIANGLES );
glColor3f( 1, 0, 0 );
glVertex3f( -0.5, -0.5, 0.5 );
glVertex3f( 0.5, -0.5, 0.5 );
glVertex3f( 0.0, 0.5, 0.5 );
glEnd();
Note that your triangle is being drawn BEHIND the camera. In eye space, the positive Z direction points backward, out of the screen toward the viewer, and the negative Z direction points forward, toward the far plane. In glOrtho, however, the near and far values are only distances in front of the camera, not actual Z coordinates.
In your first example, the near plane is -1 and the far plane is 1, so the frustum ranges from -1 unit in front of the camera (i.e. 1 unit behind it) to 1 unit in front of it (from 1 behind to 1 in front). Therefore, your object behind the camera was drawn.
In your second example, the near plane is 0 units in front of the camera and the far plane is 1 unit in front of it (from the origin to 1 unit in front), so your object behind the camera was not drawn.
To fix it, draw your triangle in front of the camera, at z = -0.5 instead.
// Setup ortho projection from the origin.
glOrtho( -1,1,-1,1,0,1 );
...
// Draw a triangle in front of the camera, at -0.5 z
glVertex3f( -0.5, -0.5, -0.5 );
glVertex3f( 0.5, -0.5, -0.5 );
glVertex3f( 0.0, 0.5, -0.5 );
There are also other methods such as reversing the Z axis, if you prefer to use positive Z values to represent forward:
// Setup ortho projection from the origin, with the Z axis reversed.
// (the far plane is behind the near plane.)
glOrtho( -1,1,-1,1,0,-1 );
...
// Draw a triangle at 0.5 z
glVertex3f( -0.5, -0.5, 0.5 );
glVertex3f( 0.5, -0.5, 0.5 );
glVertex3f( 0.0, 0.5, 0.5 );
The nearVal and farVal parameters do not specify the z values of the clipping planes, but their distances from the camera. In fixed-function OpenGL, eye space is defined with the camera at the origin, looking in the -z direction.
glOrtho() is simply defined so that positive values refer to positions in front of the viewer (consistent with the behavior of glFrustum()) and negative values to positions behind the viewer. So the planes actually sit at z = -nearVal and z = -farVal in eye space.
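A quick way to see this numerically (a sketch added for illustration, not part of the original answer): glOrtho's third row maps eye-space z to NDC z as z_ndc = -2*z/(far-near) - (far+near)/(far-near), and anything outside [-1, 1] is clipped.
#include <cstdio>
// NDC depth produced by glOrtho's third row for a given eye-space z.
static float orthoNdcZ( float zEye, float nearVal, float farVal )
{
    return -2.0f * zEye / ( farVal - nearVal )
         - ( farVal + nearVal ) / ( farVal - nearVal );
}
int main()
{
    // glOrtho( ..., 0, 1 ): planes sit at z = 0 and z = -1 in eye space.
    printf( "z=+0.5 -> NDC z=%+.1f (outside [-1,1], clipped)\n", orthoNdcZ(  0.5f, 0.0f, 1.0f ) ); // -2.0
    printf( "z=-0.5 -> NDC z=%+.1f (inside, visible)\n",         orthoNdcZ( -0.5f, 0.0f, 1.0f ) ); // +0.0
    // glOrtho( ..., -1, 1 ): planes sit at z = 1 and z = -1, so z = +0.5 passes.
    printf( "z=+0.5 -> NDC z=%+.1f (inside, visible)\n",         orthoNdcZ(  0.5f, -1.0f, 1.0f ) ); // -0.5
    return 0;
}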
I found a UV test image to use as a texture.
As shown below, I created a triangle with three vertices and corresponding UV coordinates:
let geometry = new THREE.BufferGeometry();
geometry.setAttribute(
  "position",
  new THREE.BufferAttribute(
    new Float32Array([-1.0, 1.0, 0, -1.0, -1.0, 0, 1.0, 1.0, 0]),
    3
  )
);
geometry.setAttribute(
  "uv",
  new THREE.BufferAttribute(new Float32Array([0.0, 1.0, 0, 0, 1.0, 1.0]), 2)
);
let mesh = new THREE.Mesh(
  geometry,
  new THREE.MeshBasicMaterial({
    map: new THREE.TextureLoader().load("/1.png"),
  })
);
As shown in the figure below, the vertex order is counterclockwise, so the face pointing out of the screen toward us is the front face.
Then I clone the mesh and scale it by -1 on the x axis:
let clonedMesh = mesh.clone(true);
scene.add(mesh);
clonedMesh.scale.set(-1, 1, 1);
clonedMesh.position.x = 2;
scene.add(clonedMesh);
The winding order then becomes clockwise (as shown below), so the front face should point into the screen. Since MeshBasicMaterial only renders front faces by default, it stands to reason that the clonedMesh should not be visible.
I'm trying to build a simple 2D game using DirectX9, and I want to be able to use sprite dimensions and coordinates with no scaling applied.
The book that I'm following ("Introduction to 3D Game Programming with DirectX 9.0c" by Frank Luna) shows a trick using Direct3D's sprite functions to render graphics in 2D, but the book code still sets up a camera using D3DXMatrixLookAtLH and D3DXMatrixPerspectiveFovLH, and the sprite images get scaled in perspective. How do I set up the view and projection so that sprites are rendered at their original dimensions and X-Y coordinates can be addressed as actual pixel locations within the window?
UPDATE
Although this might not be the ideal solution, I did come up with a workaround. I realized that if I set up the projection matrix with a 90-degree field of view and the near plane at z=0, then all I have to do is look at the origin (0, 0, 0) with D3DXMatrixLookAtLH and step back by half of the screen width (the height of an isosceles right triangle is half of the base).
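For reference, the geometry behind the trick, using only the standard perspective relation:
visible_half_height(d) = d * tan(fov / 2) = d * tan(45 deg) = d
so stepping back by d = client_height / 2 makes the z = 0 plane span exactly client_height units vertically, i.e. one world unit per pixel (which is also why UPDATE 2 below reduces the step-back to half the client height).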
So for my client area being 400 x 400, the following settings worked for me:
// get client rect
RECT R;
GetClientRect(hWnd, &R);
float width = (float)R.right;
float height = (float)R.bottom;
// step back by 400/2=200 and look at the origin
D3DXMATRIX V;
D3DXVECTOR3 pos(0.0f, 0.0f, (-width*0.5f) / (width/height)); // see "UPDATE 2" below
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&V, &pos, &target, &up);
d3dDevice->SetTransform(D3DTS_VIEW, &V);
// PI x 0.5 -> 90 degrees, set the near plane to z=0
D3DXMATRIX P;
D3DXMatrixPerspectiveFovLH(&P, D3DX_PI * 0.5f, width/height, 0.0f, 5000.0f);
d3dDevice->SetTransform(D3DTS_PROJECTION, &P);
Turning off all the texturing filters (or setting to D3DTEXF_POINT) seems to get the best pixel-accurate feel.
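For completeness, a minimal sketch of that filter setup in Direct3D 9 (assuming the sprite texture is bound to sampler stage 0):
d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_POINT );
d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_POINT );
d3dDevice->SetSamplerState( 0, D3DSAMP_MIPFILTER, D3DTEXF_NONE ); // no mipmap filtering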
Another important thing to note was that CreateWindowEx() with a requested 400 x 400 size returned a client area of something like 387 x 362, so I had to check with GetClientRect(), calculate the difference, and readjust the window size using SetWindowPos() after the initial creation.
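Here is one way to sketch that readjustment; this variant uses AdjustWindowRectEx to compute the outer window size for a desired client size (instead of diffing GetClientRect as described above), and assumes the window has no menu:
RECT want = { 0, 0, 400, 400 }; // desired client area
AdjustWindowRectEx( &want, (DWORD)GetWindowLong( hWnd, GWL_STYLE ), FALSE,
                    (DWORD)GetWindowLong( hWnd, GWL_EXSTYLE ) );
SetWindowPos( hWnd, NULL, 0, 0,
              want.right - want.left, want.bottom - want.top,
              SWP_NOMOVE | SWP_NOZORDER );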
The screenshot below shows the result of taking the steps mentioned above. The original bitmap (right) is rendered with no scaling/stretching applied in the app (left)... finally!
UPDATE 2
I didn't test the above method for when the aspect ratio isn't 1:1. I've adjusted the code: the amount you step back for your camera position should be window_width * 0.5 / aspect_ratio (where aspect_ratio is width/height).
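Worked through, that step-back distance simplifies to half the client height:
d = (width * 0.5) / (width / height) = height * 0.5
e.g. for an 800 x 600 client area: 800 * 0.5 / (800 / 600) = 300 = 600 / 2.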
The DirectX Tool Kit SpriteBatch class is designed to do exactly what you describe. When drawing with Direct3D, screen coordinates run from (-1,-1) to (1,1), with (-1,-1) in the lower-left corner.
This sets up the matrix that will let you specify positions in screen coordinates with (0,0) in the upper-left corner.
// Compute the matrix.
float xScale = (mViewPort.Width > 0) ? 2.0f / mViewPort.Width : 0.0f;
float yScale = (mViewPort.Height > 0) ? 2.0f / mViewPort.Height : 0.0f;
switch( rotation )
{
case DXGI_MODE_ROTATION_ROTATE90:
    return XMMATRIX
    (
        0, -yScale, 0, 0,
        -xScale, 0, 0, 0,
        0, 0, 1, 0,
        1, 1, 0, 1
    );

case DXGI_MODE_ROTATION_ROTATE270:
    return XMMATRIX
    (
        0, yScale, 0, 0,
        xScale, 0, 0, 0,
        0, 0, 1, 0,
        -1, -1, 0, 1
    );

case DXGI_MODE_ROTATION_ROTATE180:
    return XMMATRIX
    (
        -xScale, 0, 0, 0,
        0, yScale, 0, 0,
        0, 0, 1, 0,
        1, -1, 0, 1
    );

default:
    return XMMATRIX
    (
        xScale, 0, 0, 0,
        0, -yScale, 0, 0,
        0, 0, 1, 0,
        -1, 1, 0, 1
    );
}
In Direct3D 9, pixel centers were defined a little differently than in Direct3D 10/11/12, so the typical solution in the legacy API was to add a half-pixel offset of (0.5, 0.5) to all positions. You don't need to do this with Direct3D 10/11/12.
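For illustration only (a hypothetical variant, not part of DirectX Tool Kit): in the default, unrotated case above, that Direct3D 9 half-pixel offset could be folded into the matrix translation, shifting every position half a pixel before the scale is applied:
return XMMATRIX
(
    xScale, 0, 0, 0,
    0, -yScale, 0, 0,
    0, 0, 1, 0,
    -1.0f - 0.5f * xScale, 1.0f + 0.5f * yScale, 0, 1 // (x - 0.5, y - 0.5) folded in
);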
I am trying to position my text model mesh on screen. Using the code below, it draws the mesh as the code suggests, with the left of the mesh at the center of the screen. But I would like to position it at the left edge of the screen, and this is where I get stuck. If I uncomment the Matrix.translateM line, I would expect the mesh to be positioned at the left of the screen, but it seems that the translation is being scaled (!?).
A few scenarios I have tried:
a.) Matrix.scaleM only (no Matrix.translateM): the left of the mesh is positioned at 0.0f (center of screen), with the correct scale.
b.) Matrix.translateM only (no Matrix.scaleM): the left of the mesh is positioned at -1.77f (the left edge of the screen) correctly, but the scale is incorrect.
c.) Matrix.translateM then Matrix.scaleM, or Matrix.scaleM then Matrix.translateM: the scale is correct, but the position is incorrect. The translation appears to be scaled, leaving the mesh much closer to the center than to the left edge of the screen.
I am using OpenGL ES 2.0 in Android Studio programming in Java.
Screen bounds (as setup from Matrix.orthoM)
left: -1.77, right: 1.77 (center is 0.0), bottom: -1.0, top: 1.0 (center is 0.0)
Mesh height is 1.0f, so if no Matrix.scaleM, the mesh takes the entire screen height.
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f; // 64px height to projection matrix
Matrix.setIdentityM(modelMatrix, 0);
Matrix.scaleM(modelMatrix, 0, scale, scale, scale); // these two lines
//Matrix.translateM(modelMatrix, 0, -ratio, 0.0f, 0.0f); // these two lines
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mMVPMatrix, 0, modelMatrix, 0);
Thanks, Ed Halferty and Matic Oblak, you are both correct. As Matic suggested, I have now put Matrix.translateM first and Matrix.scaleM second. I have also ensured that the MVP matrix is indeed model-view-projection, and not projection-view-model.
Also, with Matrix.translateM now set to -1.0f for the model mesh, it sits at the left edge of the screen, which is better than -1.77f in any case.
Correct position + scale, thanks!
float ratio = (float) 1920.0f / 1080.0f;
float scale = 64.0f / 1080.0f;
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, -1.0f, 0.0f, 0.0f);
Matrix.scaleM(modelMatrix, 0, scale, scale, scale);
Matrix.setIdentityM(mMVPMatrix, 0);
Matrix.orthoM(mMVPMatrix, 0, -ratio, ratio, -1.0f, 1.0f, -1.0f, 1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, modelMatrix, 0, mMVPMatrix, 0);
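For reference, the order matters because android.opengl.Matrix post-multiplies, so calling translateM then scaleM builds M = T * S:
M = T * S  =>  M * v = T * (S * v)   (translation left unscaled)
M = S * T  =>  M * v = S * (T * v)   (translation scaled too)
With M = S * T, the translation of -1.0f on x is shrunk by the scale factor 64/1080 ≈ 0.059, which is exactly the "position much closer to the center" symptom from scenario (c) in the question.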
I'm using three.js to create an interactive data visualisation. This visualisation involves rendering 68000 nodes, where each different node has a different size and color.
Initially I tried to do this by rendering meshes, but that proved to be very expensive. My current attempt is to use a three.js particle system, with each point being a node in the visualisation.
I can control the color and size of each point, but only up to a limit. On my card, the maximum size for a GL point seems to be 63 pixels. As I zoom into the visualisation, the points get larger, up to that limit, and then stay at 63 pixels.
I'm using a vertex & fragment shader currently:
vertex shader:
attribute float size;
attribute vec3 ca;
varying vec3 vColor;
void main() {
    vColor = ca;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_PointSize = size * ( 300.0 / length( mvPosition.xyz ) );
    gl_Position = projectionMatrix * mvPosition;
}
Fragment shader:
uniform vec3 color;
uniform sampler2D texture;
varying vec3 vColor;
void main() {
    gl_FragColor = vec4( color * vColor, 1.0 );
    gl_FragColor = gl_FragColor * texture2D( texture, gl_PointCoord );
}
These are copied almost verbatim from one of the three.js examples.
I'm totally new to GLSL, but I'm looking for a way to draw points larger than 63 pixels. Could I, for example, draw a mesh for any point larger than a certain size, but use a gl_Point otherwise? Are there any other workarounds for drawing points larger than 63 pixels?
You can make your own point system by building arrays of unit quads plus a center point for each, then expanding each quad by the point size in GLSL.
So you'd have 2 buffers. One buffer is just a 2D unit quad, repeated for however many points you want to draw.
var unitQuads = new Float32Array([
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
]);
The second one holds your point positions, except each position needs to be repeated 4 times:
var points = new Float32Array([
p1.x, p1.y, p1.z, p1.x, p1.y, p1.z, p1.x, p1.y, p1.z, p1.x, p1.y, p1.z,
p2.x, p2.y, p2.z, p2.x, p2.y, p2.z, p2.x, p2.y, p2.z, p2.x, p2.y, p2.z,
p3.x, p3.y, p3.z, p3.x, p3.y, p3.z, p3.x, p3.y, p3.z, p3.x, p3.y, p3.z,
p4.x, p4.y, p4.z, p4.x, p4.y, p4.z, p4.x, p4.y, p4.z, p4.x, p4.y, p4.z,
p5.x, p5.y, p5.z, p5.x, p5.y, p5.z, p5.x, p5.y, p5.z, p5.x, p5.y, p5.z,
]);
Set up your buffers and attributes:
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, unitQuads, gl.STATIC_DRAW);
gl.enableVertexAttribArray(unitQuadLoc);
gl.vertexAttribPointer(unitQuadLoc, 2, gl.FLOAT, false, 0, 0);
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);
gl.enableVertexAttribArray(pointLoc);
gl.vertexAttribPointer(pointLoc, 3, gl.FLOAT, false, 0, 0);
In your GLSL shader, compute the gl_PointSize you want, then multiply the unitQuad by that size in either view space or screen space. Screen space would match what gl_PointSize does, but often people want their points to scale in 3D like normal geometry, in which case view space is what you want.
attribute vec2 a_unitQuad;
attribute vec4 a_position;
uniform mat4 u_view;
uniform mat4 u_viewProjection;
void main() {
    float fake_gl_pointsize = 150.0;
    // Get the xAxis and yAxis in view space.
    // These are unit vectors, so they represent moving perpendicular to the view.
    vec3 x_axis = u_view[0].xyz;
    vec3 y_axis = u_view[1].xyz;
    // Multiply them by the desired size.
    x_axis *= fake_gl_pointsize;
    y_axis *= fake_gl_pointsize;
    // Multiply them by the unitQuad coords to make a quad around the origin.
    vec3 local_point = x_axis * a_unitQuad.x + y_axis * a_unitQuad.y;
    // Add in the position where you actually want the quad.
    local_point += a_position.xyz;
    // Now do the normal math you'd do in a shader.
    gl_Position = u_viewProjection * vec4( local_point, 1.0 );
}
I'm not sure that made any sense, but there's a more complicated yet working sample here.
Can I do something like draw a mesh for any points larger than a certain size, but use a gl_point otherwise?
Not in WebGL.
You can draw your particle system as a series of quads (i.e., two triangles each). But that's about it.
Running on iPad.
I'm mapping a 256x256 texture onto a quad, and I'm trying to render it at exactly the same size as the actual image. The quad looks correct (the shape is right and the texture is mapped correctly), but it is only about 75% of the size of the actual .png.
I'm not sure why.
The code is characterized as follows (excerpts below):
The screen is 768x1024, and the window is 768x1024 as well.
glViewport(0, 0, 768, 1024); // aspect ratio 1:1.333
glOrthof(-0.5f, 0.5f, -0.666f, 0.666f, -1.0f, 1.0f); // matching aspect ratio with 0,0 centered
// Sets up an array of values to use as the sprite vertices.
// 0.25 of 1024 is 256 pixels, so the quad (centered on 0,0) spans
// -0.125, -0.125 to 0.125, 0.125 (bottom-left corner and upper-right corner)
GLfloat spriteVertices[] = {
-0.125f, -0.125f,
0.125f, -0.125f,
-0.125f, 0.125f,
0.125f, 0.125f,
};
// Sets up an array of values for the texture coordinates.
const GLshort spriteTexcoords[] = {
0, 0,
1, 0,
0, 1,
1, 1,
};
followed by the appropriate calls to:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
glTexCoordPointer(2, GL_SHORT, 0, spriteTexcoords);
then
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Why is my sprite smaller than 256x256 when rendered?
Your output is 192x192 (approximately) because your quad is the wrong size. It's 0.25x0.25, and the "unit length" direction is X, which is 768 pixels wide, so 0.25 * 768 = 192. If you changed your glOrthof so that top/bottom were -0.5 and +0.5 (with the appropriate correction to X), it would work.
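Worked through with that fix: glOrthof(-0.375f, 0.375f, -0.5f, 0.5f, -1.0f, 1.0f) maps an X span of 0.75 onto 768 pixels and a Y span of 1.0 onto 1024 pixels, so the 0.25 x 0.25 quad renders at (0.25 / 0.75) * 768 = 256 by (0.25 / 1.0) * 1024 = 256 pixels, matching the 256x256 texture.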