Translation and rotation around center - opengl-es

I'm trying to achieve something simple: apply a translation on the X axis, and rotate the object around its center by a fixed angle.
As far as I know, to achieve this it's necessary to move the object to the center, rotate, and move it back to its original position. Okay. The problem, however, is that the object appears to rotate around its local axes and perform the last translation along those axes, so it ends up in the wrong position.
This is my code:
public void draw(GL10 gl) {
    gl.glLoadIdentity();
    GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 1, 0);
    gl.glTranslatef(x, 0, 0);
    gl.glTranslatef(-x, 0, 0);
    gl.glRotatef(-80, 0, 1, 0);
    gl.glTranslatef(x, 0, 0);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL10.GL_CW);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indicesBuffer);
}
Before the rotation the object should be at (0, 0, 0). It rotates correctly, but then it moves toward the screen, as if the x axis were pointing at me (80°).
Note: I left only "opengl" as a tag, since this is a general OpenGL question; the answer should not be Android-specific.

This is the deprecated way of doing this, but I guess that is no excuse for not answering the question.
When multiple transforms are applied to a vertex, OpenGL multiplies the matrices in reverse order. For example, if a vertex is transformed by MA first and by MB second, OpenGL computes MB x MA before multiplying the vertex. So the last transform in your code is applied first, and the first transform is applied last.
gl.glPushMatrix();
gl.glTranslatef(globalX, 0, 0);  // 4. move the object back to its global position
gl.glTranslatef(localX, 0, 0);   // 3. apply any local movement
gl.glRotatef(-80, 0, 1, 0);      // 2. rotate around the origin
gl.glTranslatef(-globalX, 0, 0); // 1. move to the origin (applied first!)
gl.glPopMatrix();
1. First move from wherever you are in the hierarchy of transforms to the origin.
2. Then rotate around that origin.
3. Apply any local movement along the axes.
4. Move the object back to its global position.
Use glPushMatrix() and glPopMatrix() to undo the changes for elements at the same level of relative positioning, that is, elements that share the parent to which they are relatively positioned.
The push preserves the translations of previous (parent) objects, which OpenGL applies after the operations in the local code above, because the matrix stack behaves like an ordinary LIFO stack.
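To make the reverse-order rule concrete, here is a small self-contained numeric sketch (plain Python with 2D homogeneous coordinates rather than OpenGL calls; the pivot value is made up for illustration). Building T(pivot) · R · T(−pivot), which matches the glTranslate/glRotate call order above, leaves the pivot fixed, so the rotation really happens around the pivot:

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, p):
    """Transform a 2D point by a 3x3 homogeneous matrix."""
    v = (p[0], p[1], 1)
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(2))

pivot = (2.0, 0.0)

# Last call issued = first transform applied, so the call sequence
# translate(pivot); rotate(90); translate(-pivot) builds this product:
m = mat_mul(translate(2, 0), mat_mul(rotate(90), translate(-2, 0)))

print(apply(m, pivot))       # pivot stays (numerically) at (2, 0)
print(apply(m, (3.0, 0.0)))  # a nearby point orbits it, landing near (2, 1)
```

Multiplying in the opposite order, T(−pivot) · R · T(pivot), would move the pivot instead of holding it fixed, which is exactly the misplaced-object symptom in the question.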

Related

Changing an object's position without changing its position visually

I have an instance of Object3D. Visually it's at (0, 0, 0), but its actual position is shifted (and the positions of this object's internals are shifted as well). Basically, the geometry of this object is offset in a way that compensates for the position shift. So, I want to keep it visually at (0, 0, 0) while changing its position to (0, 0, 0).
In simple words, I want to move the object's position offset into the object's geometry.
How can I do it?
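No answer is included in this excerpt, but the operation being asked about, baking a parent-level offset into the vertex data so the position can be reset, can be sketched language-agnostically (plain Python standing in for the scene-graph API; the names `position`, `vertices`, and `bake_position` are illustrative, not the Object3D API):

```python
def bake_position(position, vertices):
    """Fold a parent-level offset into the vertex data, then zero the offset.

    The world-space location of every vertex (position + vertex) is unchanged,
    so the object looks the same, but its position becomes (0, 0, 0).
    """
    baked = [tuple(p + v for p, v in zip(position, vtx)) for vtx in vertices]
    return (0.0, 0.0, 0.0), baked

# Made-up data: the object's position is shifted, its geometry compensates.
position = (5.0, -2.0, 0.0)
vertices = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

new_position, new_vertices = bake_position(position, vertices)
# For every vertex i: position + vertices[i] == new_position + new_vertices[i],
# so nothing moves visually, while new_position is (0, 0, 0).
```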

How do I correctly render a texture in a quad in Metal?

My issue arises because a quad is just two triangles: the texture is not rendered consistently on each triangle, and it breaks across the border between the two. I'm using a screenshot of my lovely Minecraft house as an example texture:
Rendered textured quad
As you can see toward the top left and the bottom right of the screenshot, the texture seems to have been cut or folded somehow; it's distorted. And the distortion I speak of is NOT from the fact that it's being applied to a trapezoid; it's that it's applied inconsistently to the two triangles that constitute the trapezoid.
Original screenshot
So how can I fix this?
In viewDidLoad:
let VertexDescriptor = MTLVertexDescriptor()
let Attribute1Offset = MemoryLayout<simd_float3>.stride
let Attribute2Offset = Attribute1Offset+MemoryLayout<simd_float4>.stride
VertexDescriptor.attributes[0].format = .float3
VertexDescriptor.attributes[1].format = .float4
VertexDescriptor.attributes[1].offset = Attribute1Offset
VertexDescriptor.attributes[2].format = .float2
VertexDescriptor.attributes[2].offset = Attribute2Offset
VertexDescriptor.layouts[0].stride = Attribute2Offset+MemoryLayout<simd_float2>.stride
PipelineDescriptor.vertexDescriptor = VertexDescriptor
let TextureLoader = MTKTextureLoader(device: Device)
Texture = try? TextureLoader.newTexture(URL: Bundle.main.url(forResource: "Texture.png", withExtension: nil)!)
Vertices:
//First four = position, second four = color, last two = texture coordinates
let Vertices: [Float] = [-0.5, 0.5, 0, 0, 1, 1, 0, 1, 0, 0,
                          0.5, 0.5, 0, 0, 0, 1, 1, 1, 1, 0,
                          1,  -1,  0, 0, 0, 1, 0, 1, 1, 1,
                         -1,  -1,  0, 0, 0, 0, 1, 1, 0, 1]
Types in Shaders.metal
typedef struct {
    float4 Position [[attribute(0)]];
    float4 Color [[attribute(1)]];
    float2 TexCoord [[attribute(2)]];
} VertexIn;
typedef struct {
    float4 Position [[position]];
    float4 Color;
    float2 TexCoord;
} VertexOut;
Bear with me, I use PascalCase because I think camelCase is ugly; I just don't like it. Anyway, how do I correctly place a texture on a quad made of two triangles so it won't look all weird?
As you know, Metal performs perspective-correct vertex attribute interpolation on your behalf by using the depth information provided by the z coordinate of your vertex positions.
You're subverting this process by distorting the "projected" shape of the quad without providing perspective information to the graphics pipeline. This means that you need to pass along a little extra information in order to get correct interpolation. Specifically, you need to include "depth" information in your texture coordinates and perform the "perspective" divide manually in the fragment shader.
For a fully general solution, consult this answer. For a simple fix in the case of symmetrical scaling about the vertical axis of the quad, use float3 texture coordinates instead of float2: set z to the scale factor introduced by your pseudo-perspective projection, and set x such that dividing it by z yields the value x would have had without the divide.
For example, if the distance between the top two vertices is half that of the bottom two vertices (as appears to be the case in your screenshot), set the upper-left texture coordinate to (0, 0, 0.5) and the upper-right texture coordinate to (0.5, 0, 0.5). Pass these "3D" texture coordinates through to your fragment shader, then divide by z before sampling:
half4 color = myTexture.sample(mySampler, in.texCoords.xy / in.texCoords.z);
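To see numerically why the manual divide fixes the seam, here is a small sketch (plain Python, not Metal; the 0.5 scale factor matches the example above). Along the right edge of the trapezoid, every sample should read texture u = 1. Interpolating a plain float2 coordinate drifts away from 1, while interpolating the pre-multiplied (u·q, q) pair and dividing afterwards recovers the correct value:

```python
def lerp(a, b, t):
    return a + (b - a) * t

# Right edge of the trapezoid: the bottom-right vertex carries u = 1 with
# scale q = 1, the top-right vertex is stored pre-multiplied as
# (u*q, q) = (0.5, 0.5), since the top edge is half as wide as the bottom.
bottom = (1.0, 1.0)  # (u*q, q)
top = (0.5, 0.5)

t = 0.5  # halfway up the edge

# Plain float2 interpolation of u: 1 -> 0.5 gives 0.75 at the midpoint (wrong).
naive_u = lerp(1.0, 0.5, t)

# Interpolate both components, then divide, as the fragment shader does:
correct_u = lerp(bottom[0], top[0], t) / lerp(bottom[1], top[1], t)  # 1.0
```

The same division applied per-fragment is exactly the `in.texCoords.xy / in.texCoords.z` sample in the answer above.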

Drawing equilateral triangle in the middle

I want to draw a triangle in the middle of the scene so I pushed vertices like this and it worked:
geom.vertices.push(new THREE.Vector3(-1, 0, 0));
geom.vertices.push(new THREE.Vector3(1, 0, 0));
geom.vertices.push(new THREE.Vector3(0, 1, 0));
My question is: if I reorder the push calls, the triangle is not rendered, and I don't understand why. For instance:
geom.vertices.push(new THREE.Vector3(1, 0, 0));
geom.vertices.push(new THREE.Vector3(-1, 0, 0));
geom.vertices.push(new THREE.Vector3(0, 1, 0));
Why is that happening? I can't find anything about this in the documentation.
In three.js, each face has a normal attribute, computed when you call geometry.computeFaceNormals(). If a face normal points toward the camera, the face is drawn; otherwise it isn't. How is it decided which side of the face is the front side? It depends on the order of the vertices: if you look at a face and its 3 vertices, taken in index order, wind counterclockwise, you are looking at the front side; otherwise it is the back side. This is called the winding order.
I guess you did not change the indices in the face declaration, new THREE.Face3(0, 1, 2), but if you change the order of the vertices in their array, the effect is of course the same.
So you are actually looking at your face from the back side. You can see this by changing the camera's point of view (change the order and move the camera to the back in this fiddle).
If you don't want to worry about the winding order and want to be able to see the faces from any point of view, three.js offers that in the material parameters; just add
side:THREE.DoubleSide //default is THREE.FrontSide, there also is THREE.BackSide
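The winding-order rule can be checked with a few lines of arithmetic (plain Python rather than three.js; the sign convention assumes the standard WebGL screen orientation, with x to the right and y up). The sign of the triangle's signed area tells you which way the vertices wind:

```python
def signed_area(a, b, c):
    """Twice the signed area of triangle abc in screen space (x, y).

    Positive -> counterclockwise winding (front-facing by default),
    negative -> clockwise (back-facing, culled unless DoubleSide is set).
    """
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

# The two vertex orders from the question, projected onto the screen plane:
working = [(-1, 0), (1, 0), (0, 1)]  # renders
broken = [(1, 0), (-1, 0), (0, 1)]   # does not render

print(signed_area(*working))  # 2  -> counterclockwise, drawn
print(signed_area(*broken))   # -2 -> clockwise, culled
```

Swapping any two vertices flips the sign, which is exactly why reordering the push calls makes the triangle disappear.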

glDrawPixels() with 0.375 translation

I've noticed some strange behaviour with glDrawPixels() when using a 0.375 translation. This is my GL initialization:
width = 640; height = 480;
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, height, 0, 0, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375, 0.375, 0.0);
Now I want to draw a 640x30 pixel buffer into the last 30 rows of my GL window, so I do the following:
glRasterPos2i(0, 480);
glDrawPixels(640, 30, GL_RGBA, GL_UNSIGNED_BYTE, pixelbuffer);
Unfortunately, nothing gets drawn by this code, and glGetError() returns 0. The interesting thing is that as soon as I remove the call to glTranslatef(0.375, 0.375, 0.0), everything works fine!
So could somebody explain why this 0.375 translation on both axes confuses glDrawPixels()? Is it somehow rounded to 1.0 internally, making my glDrawPixels() call draw beyond the context's boundaries so that it gets clipped by OpenGL? That's the only explanation I can think of, but I don't understand why OpenGL would round a 0.375 translation to 1.0... it should round down to 0.0 instead, shouldn't it?
The point (0, 480) lies exactly on one of your clipping planes, given your projection matrix. Your sub-pixel shift pushes the point past the breaking point, and the raster position is clipped. In GL, glRasterPos (...) invalidates all following raster operations as long as the specified position is clipped (which, in this case, it is).
You could use glRasterPos2i (0, 479), which is altogether more meaningful given the dimensions of your window anyway. You could also drop the whole charade and use glWindowPos2i (...) instead of relying on your projection and modelview matrices to position the raster coordinate in window space.
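The clipping can be verified with the orthographic mapping itself (a plain-Python check using the glOrtho parameters from the question, not anything OpenGL executes). glOrtho(0, 640, 480, 0, 0, 1) maps y to y_ndc = 2y/(top − bottom) − (top + bottom)/(top − bottom) = 1 − y/240, and a point survives only while −1 ≤ y_ndc ≤ 1:

```python
def ortho_y_ndc(y, bottom=480.0, top=0.0):
    """Map modelview-space y to normalized device coordinates, per glOrtho."""
    return 2.0 * y / (top - bottom) - (top + bottom) / (top - bottom)

def is_clipped(y):
    ndc = ortho_y_ndc(y)
    return not (-1.0 <= ndc <= 1.0)

print(ortho_y_ndc(480.0))    # -1.0: exactly on the clip plane, still inside
print(is_clipped(480.0))     # False -> works without the translate
print(is_clipped(480.375))   # True  -> after glTranslatef, the raster position is clipped
```

So nothing is rounded to 1.0: the raster position starts exactly on the boundary, and any positive shift, even 0.375, moves it outside the clip volume.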
I can't answer your question on why glTranslatef stops glDrawPixels from working, but I can tell you that this isn't the way to select where to draw. Check the man page for glDrawPixels for a bit more info; it will point you to glRasterPos and glWindowPos.

Translating with GLKMatrix4Translate Seems to Move About the Camera, Not the Origin

I'm trying to enable a user to pan an object up/down and left/right in OpenGL ES. I'm using GLKit for all of the drawing and movement, and I've enabled touch events to track how the user wants to move the object. I'm using GLKMatrix4Translate to pan the object, but for some reason the movement has a rotational component as well.
I gather the translation points from the user's touch and store them in a CGPoint:
CGPoint center;
I use center.x and center.y for the X and Y positions I want to translate to. I perform the translation with this line:
GLKMatrix4 modelViewMatrix = GLKMatrix4Translate(GLKMatrix4Identity, center.x, center.y, 0.0f);
Any ideas?
I figured out what the problem was here. I stopped using GLKMatrix4Translate and replaced it with GLKMatrix4MakeLookAt, which lets you move the camera around, giving the effect I was looking for.
Simply using the code below results in the same problem I was already seeing: the model rotates as it pans.
GLKMatrix4MakeLookAt(0, 0, 7,
                     center.x, center.y, 0,
                     0, 1, 0);
What this says is that the camera's eye sits at (0, 0, 7), looking at the point (center.x, center.y, 0), with the y axis pointing up. The placement of the eye is the problem: if the model is rotating (which it is), you need to move the eye to follow the newly rotated point.
Replacing the above with the code below seems to do the trick.
GLKMatrix4MakeLookAt(rotation.x, rotation.y, 7,
                     center.x, center.y, 0,
                     0, 1, 0);
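As a sanity check on the convention the answer relies on, here is a minimal look-at construction in plain Python (a sketch, not the GLKit implementation; the function names are made up). The first three arguments are the eye position, the next three are the point being looked at, and the target ends up straight ahead on the view-space −z axis:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def look_at(eye, center, up):
    """Build a view transform (rotation rows + eye), GLKMatrix4MakeLookAt-style."""
    f = normalize(tuple(c - e for c, e in zip(center, eye)))  # forward
    s = normalize(cross(f, up))                               # right
    u = cross(s, f)                                           # true up
    return (s, u, tuple(-c for c in f)), eye

def to_view_space(m, p):
    """Transform a world-space point into the camera's view space."""
    rows, eye = m
    d = tuple(pc - ec for pc, ec in zip(p, eye))
    return tuple(dot(row, d) for row in rows)

view = look_at(eye=(0.0, 0.0, 7.0), center=(0.0, 0.0, 0.0), up=(0.0, 1.0, 0.0))

print(to_view_space(view, (0.0, 0.0, 7.0)))  # the eye maps to the view-space origin
print(to_view_space(view, (0.0, 0.0, 0.0)))  # the target lands on -z: (0, 0, -7)
```

Moving the eye argument (as the corrected call above does with rotation.x, rotation.y) therefore shifts the whole view without re-aiming it at a stale target.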