Cameras and Modelview in OpenGL ES (WebGL) - opengl-es

I'm having a little trouble with my OpenGL transformations -- I have a vertex shader that sets gl_Position to projection * view * model * vertex. I have code that generates a view matrix by inverting the model matrix of a camera in space, but when I set the object the camera is looking at to rotate in space, it seems as if the camera is rotating instead.
What could be causing this?

Apparently I had projection * model * view * vertex instead. Oops! With that ordering the model transform is applied after the view transform, so the object's rotation acts in eye space and looks like the camera rotating instead.
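For reference, a minimal vertex shader with the intended ordering (the uniform and attribute names here are just illustrative):
uniform mat4 projection;
uniform mat4 view; // inverse of the camera's model matrix
uniform mat4 model;
attribute vec4 vertex;
void main() {
    gl_Position = projection * view * model * vertex;
}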

Related

Using a custom vertex shader with THREE.Points

I see that Three.js has a PointsMaterial to draw a geometry as points rather than as triangles. However, I want to manipulate the vertices with my own vertex shader, using a ShaderMaterial. In raw WebGL I could just call gl.drawArrays with gl.POINTS instead of gl.TRIANGLES. How can I tell the renderer to draw the geometry as points? Is there a better way to go about this?
A little addition: I had no joy until I added gl_PointSize to my vertex shader:
void main() {
    gl_PointSize = 100.0;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
I found the answer in the GPU particle system example.
Found my solution right after asking the question: just create a THREE.Points object instead of a THREE.Mesh, using whatever geometry and ShaderMaterial you want.
const points = new THREE.Points(geometry, new THREE.ShaderMaterial(parameters));
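A fuller sketch of the same setup (the shader bodies are illustrative; projectionMatrix, modelViewMatrix, and position are built-ins that three.js injects into every ShaderMaterial vertex shader):
const material = new THREE.ShaderMaterial({
    vertexShader: `
        void main() {
            gl_PointSize = 10.0;
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }`,
    fragmentShader: `
        void main() {
            gl_FragColor = vec4(1.0);
        }`
});
const points = new THREE.Points(geometry, material);
scene.add(points);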

webgl - model coordinates to screen coordinates

I'm having an issue calculating the screen coordinates of a vertex. This is not specifically a WebGL issue; it's more of a general 3D graphics question.
The sequence of matrix transformations I'm using is:
result_vec4 = perspective_matrix * camera_matrix * model_matrix * vertex_coords_vec4
model_matrix is the transformation of a vertex from its local coordinate system into the global scene coordinate system.
My understanding is that the final result_vec4 is in clip space, and should therefore be in the [-1, 1] range. That is not what I'm getting; result_vec4 just ends up containing values that don't correspond to the correct screen position of the vertex.
Does anyone have any ideas as to what might be the issue here?
Thank you very much for any thoughts.
result_vec4 is actually already in clip space; to get the [-1, 1] normalized device coordinates you need to project it onto the hyperplane w = 1 with the perspective division:
result_vec4 /= result_vec4.w
After this division, result_vec4.xyz will be in [-1, 1].
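From there it is one more step to pixels. A sketch of the whole chain in JavaScript (transformVec4 is a hypothetical helper that multiplies a vec4 by a 4x4 matrix; the canvas names are illustrative):
// clip space, exactly as in the question
var clip = transformVec4(perspective_matrix,
           transformVec4(camera_matrix,
           transformVec4(model_matrix, vertex_coords_vec4)));
// perspective division: clip space -> normalized device coordinates in [-1, 1]
var ndcX = clip[0] / clip[3];
var ndcY = clip[1] / clip[3];
// viewport transform: NDC -> pixels (y flipped, since NDC y points up)
var screenX = (ndcX * 0.5 + 0.5) * canvas.width;
var screenY = (1.0 - (ndcY * 0.5 + 0.5)) * canvas.height;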

scene node transform order

I have a simple question about scene graph implementation. A scene node records its transform relative to its parent scene node. The full transform matrix is built as Trans(local) * Trans(parent) * Trans(root). The local transform matrix is created from translation, scale, and rotation, in the order SRT: scale * rotation * translation. But now I have a model that is not modeled at its geometric center, so before applying a rotation we need a local-space translation that moves the origin to the geometric center; only then can we rotate around any axis. In this case we need the transform order translate * rotate, not the default rotate * translate.
The question is: in an Ogre-like engine, how do I rotate a scene node that has such a model attached?
I have solved it myself. First, create an extra scene node whose translation moves the model onto its geometric center, and attach the model to that node. Then make that node a child of any other normal scene node and do whatever you want with the parent.
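Engines differ in API, but the same pivot pattern expressed in three.js terms (a sketch; the center offsets are illustrative) would be:
var pivot = new THREE.Object3D();                // the extra node, placed where the rotation should happen
mesh.position.set(-centerX, -centerY, -centerZ); // local translation: geometric center onto the pivot's origin
pivot.add(mesh);
parentNode.add(pivot);
pivot.rotation.y += 0.1;                         // rotating the pivot now spins the model around its center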

3D sprites, writing correct depth buffer information

I am writing a particle engine for iOS using MonoTouch and OpenTK. My approach is to project the coordinate of each particle and then draw a correctly scaled, textured rectangle at that screen location.
It works fine, but I have trouble calculating the correct depth value, so that the sprite correctly occludes, and is occluded by, the 3D objects in the scene.
This is the code I am using today:
// d = distance to the projection plane
float d = (float)(1.0 / Math.Tan(MathHelper.DegreesToRadians(fovy / 2f)));
Vector3 screenPos;
Vector3.Transform(ref objPos, ref viewMatrix, out screenPos);
float depth = 1 - d / -screenPos.Z;
Then I draw a triangle strip at that screen coordinate, using the depth value calculated above as the z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I tried various ways and algorithms without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection matrices (or whatever you've set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading straight from that, you'd transform as:
z_clip = -((far + near) / (far - near)) * z_eye - (2 * far * near) / (far - near)
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
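Putting that together, a worked sketch in JavaScript (assumes a glFrustum-style projection, eye-space z negative in front of the camera, and the default [0, 1] depth range):
function windowDepth(zEye, near, far) {
    var zClip = -((far + near) / (far - near)) * zEye - (2.0 * far * near) / (far - near);
    var wClip = -zEye;            // w_clip for a standard perspective projection
    var zDevice = zClip / wClip;  // normalized device z in [-1, 1]
    return zDevice * 0.5 + 0.5;   // map to the depth-buffer range
}
// sanity check: windowDepth(-near, near, far) == 0, windowDepth(-far, near, far) == 1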
EDIT: as you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS, perform a normal transform in your vertex shader, and set gl_PointSize to the size in pixels that you want the point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.
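A sketch of that shader pair (the uniform and attribute names are illustrative):
// vertex shader
attribute vec4 a_position;
uniform mat4 u_modelViewProjection;
uniform float u_pointSizePx;
void main() {
    gl_Position = u_modelViewProjection * a_position; // depth falls out of the normal transform
    gl_PointSize = u_pointSizePx;                     // on-screen size in pixels
}
// fragment shader
precision mediump float;
uniform sampler2D u_texture;
void main() {
    gl_FragColor = texture2D(u_texture, gl_PointCoord); // gl_PointCoord spans [0, 1] across the point
}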

How do I use a vertex shader to multiply my vertex data by a uniform?

This question came out of an earlier problem I had. Basically, I was trying to implement orthographic scaling in my shader by modifying the scale components of my projection matrix, but that wasn't possible. What I actually had to do was scale the verts before sending them in to my shader via a draw call. That works like a charm...
But, of course, the issue is that in software now I'm responsible for scaling all my verts before handing them off to the shader. This makes me wonder if it would be possible to have a vertex shader do this. I imagine it is, but I can't figure it out.
What I'm doing is just going through all 4 of my verts (held in float vertices[8]) and applying *= scale; to be slightly more accurate, I multiply the X and Y components separately by scaleX and scaleY.
How can I do this same thing in a vertex shader?
Replace gl_Vertex with (gl_Vertex * scale) everywhere in your vertex shader. Or, if you're using a user-defined input for your coordinates, put * scale on that.
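A minimal sketch of the user-defined-input version (the attribute and uniform names are illustrative):
attribute vec4 a_position;
uniform vec2 u_scale;      // (scaleX, scaleY), uploaded once instead of rescaling verts on the CPU
uniform mat4 u_projection;
void main() {
    gl_Position = u_projection * vec4(a_position.xy * u_scale, a_position.z, a_position.w);
}
On the application side you would set it with something like gl.uniform2f(gl.getUniformLocation(program, "u_scale"), scaleX, scaleY); before drawing.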
