OpenGL ortho, perspective and frustum projections - opengl-es

I am trying to understand OpenGL projections on a single point. I am using QGLWidget for the rendering context and QMatrix4x4 for the projection matrix. Here are the vertex shader and the draw function:
attribute vec4 vPosition;
uniform mat4 projection;
uniform mat4 modelView;
void main()
{
    gl_Position = projection * vPosition;
}
void OpenGLView::Draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(programObject);
    glViewport(0, 0, width(), height());

    qreal aspect = (qreal)800 / ((qreal)600);
    const qreal zNear = 3.0f, zFar = 7.0f, fov = 45.0f;

    QMatrix4x4 projection;
    projection.setToIdentity();
    projection.ortho(-1.0f, 1.0f, -1.0f, 1.0f, -20.0f, 20.0f);
    // projection.frustum(-1.0f, 1.0f, -1.0f, 1.0f, -20.0f, 20.0f);
    // projection.perspective(fov, aspect, zNear, zFar);

    position.setToIdentity();
    position.translate(0.0f, 0.0f, -5.0f);
    position.rotate(0, 0, 0, 0);

    QMatrix4x4 mvpMatrix = projection * position;
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            tempMat[r][c] = mvpMatrix.constData()[r * 4 + c];
    glUniformMatrix4fv(projection, 1, GL_FALSE, (float*)&tempMat[0][0]);

    // Draw point at 0,0
    GLfloat f_RefPoint[2];
    glUniform4f(color, 1, 0, 1, 1);
    glPointSize(15);
    f_RefPoint[0] = 0;
    f_RefPoint[1] = 0;
    glEnableVertexAttribArray(vertexLoc);
    glVertexAttribPointer(vertexLoc, 2, GL_FLOAT, 0, 0, f_RefPoint);
    glDrawArrays(GL_POINTS, 0, 1);
}
Observations:
1) projection.ortho: the point is rendered in the window, and translating it to different z-axis values has no effect.
2) projection.frustum: the point is drawn in the window only when it is translated as translate(0.0f, 0.0f, -20.0f).
3) projection.perspective: the point is never rendered on the screen.
Could someone help me understand this behaviour?

1) The ortho projection works this way: depth does not change the projected size, so translating the point along the z-axis has no visible effect (as long as it stays between the near and far planes). I suggest you search for some images or videos about the differences between the projection types.
2) I don't know how you would notice a point being translated along Z, but if you had a square it would become smaller as you translate it further away (with ortho it would stay the same). There is an issue here: you use -20.0f for zNear, while this value should be positive. The values passed to this method are in most cases generated from a field of view, aspect ratio and near/far distances. In any case you will not be able to see anything closer than zNear or further than zFar.
3) perspective is the same as frustum, but it already takes field of view and aspect ratio as parameters. The reason you do not see anything is that your zNear is at 3.0f while the point is 0.0f away. By translating the point you will be able to see it; try translating it by anything from 3.0f to 7.0f (3.0f is your zNear and 7.0f is your zFar). Alternatives are increasing zFar or translating the projection matrix backwards. In your case I would mostly suggest adding some "look at" system on top of the projection matrix, as it gives you easy-to-use tools to manipulate your "camera": in most cases you set a point you are looking from, a point you are looking at, and an up vector.
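For illustration, here is a minimal sketch of the perspective case with Qt: the point is placed between zNear and zFar and the combined matrix is uploaded. The uniform location variable mvpLocation is hypothetical and not part of the code above.
// Sketch only: place the point inside [zNear, zFar] and upload the combined matrix.
// "mvpLocation" is a hypothetical uniform location, not from the original code.
QMatrix4x4 projection;
projection.perspective(45.0f, 800.0f / 600.0f, 3.0f, 7.0f); // fov, aspect, zNear = 3, zFar = 7
QMatrix4x4 modelView;
modelView.translate(0.0f, 0.0f, -5.0f);                     // distance 5 lies inside [3, 7]
QMatrix4x4 mvpMatrix = projection * modelView;
// QMatrix4x4::constData() is laid out so that GL_FALSE (no transpose) is correct here
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpMatrix.constData());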

Related

OpenGL simple antialiased polygon grid shader

How to make a test grid pattern with antialiased lines in a fragment shader?
I remember I found this challenging, so I'll post the answer here for my future self and for anyone who wants the same effect.
This shader is meant to be rendered "above" the already textured plane in a separate render call. The reason I'm doing that is that in my program I am generating the texture of the surface through several render calls, slowly building it up layer by layer. And then I wanted to make a simple black grid over it, so I make the last render call do this.
That's why the base color here is (0,0,0,0), basically a nothing. Then I can use GL mixing patterns to overlay the result of this shader over whatever my texture is.
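For reference, that kind of alpha-blended overlay pass is typically set up roughly like this (a generic sketch, not the program's actual code):
// Generic sketch of an alpha-blended overlay pass:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// draw the textured plane first, then draw the grid pass on top;
// fragments where the grid shader outputs (0,0,0,0) leave the texture visible.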
Note that you needn't do that separately. You can just as easily modify this code to display a certain color (like smooth grey) or even a texture of your choice. Simply pass the texture to the shader and modify the last line accordingly.
Also note that I use constants that I set up during shader compilation. Basically, I just load the shader string, but before passing it to the shader compiler I search and replace the __CONSTANT_SOMETHING with the actual value I want. Don't forget that that's all text, so you need to replace it with text, for example:
//java code
shaderCode = shaderCode.replaceFirst("__CONSTANT_SQUARE_SIZE", String.valueOf(GlobalSettings.PLANE_SQUARE_SIZE));
Let me share the code I use for anti-aliased grids; it might help with the complexity. All I've done is use the texture coordinates to paint a grid on a plane. I used GLSL's genType fract(genType x) to repeat texture space. Then I used the absolute value function to essentially calculate each pixel's distance to the grid line. The rest of the operations are to interpret that as a color.
You can play with this code directly on Shadertoy.com by pasting it into a new shader.
If you want to use it in your code, the only lines you need are the part starting at the gridSize variable and ending with the grid variable.
iResolution.y is the screen height, uv is the texture coordinate of your plane.
gridSize and width should probably be supplied with a uniform variable (see the host-side sketch after the listing).
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // aspect correct pixel coordinates (for shadertoy only)
    vec2 uv = fragCoord / iResolution.xy * vec2(iResolution.x / iResolution.y, 1.0);
    // get some diagonal lines going (for shadertoy only)
    uv.yx += uv.xy * 0.1;
    // for every unit of texture space, I want 10 grid lines
    float gridSize = 10.0;
    // width of a line on the screen plus a little bit for AA
    float width = (gridSize * 1.2) / iResolution.y;
    // chop up into grid
    uv = fract(uv * gridSize);
    // abs version
    float grid = max(
        1.0 - abs((uv.y - 0.5) / width),
        1.0 - abs((uv.x - 0.5) / width)
    );
    // Output to screen (for shadertoy only)
    fragColor = vec4(grid, grid, grid, 1.0);
}
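If you do move gridSize and width into uniforms as suggested above, the host-side setup might look roughly like this (a sketch; the program handle, screenHeight and the uniform names are placeholders, not part of the shader above):
// Hypothetical uniform setup for the grid shader:
GLint gridSizeLoc = glGetUniformLocation(program, "u_gridSize");
GLint widthLoc    = glGetUniformLocation(program, "u_lineWidth");
glUseProgram(program);
glUniform1f(gridSizeLoc, 10.0f);                      // 10 grid lines per unit of texture space
glUniform1f(widthLoc, (10.0f * 1.2f) / screenHeight); // line width plus a little AA margin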
Happy shading!
Here're my shaders:
Vertex:
#version 300 es
precision highp float;
precision highp int;
layout (location=0) in vec3 position;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform vec2 coordShift;
uniform mat4 modelMatrix;
out highp vec3 vertexPosition;
const float PLANE_SCALE = __CONSTANT_PLANE_SCALE; //assigned during shader compilation
void main()
{
    // generate position data for the fragment shader
    // does not take view matrix or projection matrix into account
    // TODO: +3.0 part is contingent on the actual mesh. It is supposed to be its lowest possible coordinate.
    // TODO: the mesh here is 6x6 with -3..3 coords. I normalize it to 0..6 for correct fragment shader calculations
    vertexPosition = vec3((position.x + 3.0) * PLANE_SCALE + coordShift.x, position.y, (position.z + 3.0) * PLANE_SCALE + coordShift.y);
    // position data for the OpenGL vertex drawing
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Note that I calculate vertexPosition here and pass it to the fragment shader. This is so that my grid "moves" when the object moves. The thing is, in my app I have the ground basically stuck to the main entity. The entity (call it a character or whatever) doesn't move across the plane or change its position relative to the plane. But to create the illusion of movement - I calculate the coordinate shift (relative to the square size) and use that to calculate vertex position.
It's a bit complicated, but I thought I would include that. Basically, if the square size is set to 5.0 (i.e. we have a 5x5 meter square grid), then coordShift of (0,0) would mean that the character stands in the lower left corner of the square; coordShift of (2.5,2.5) would be the middle, and (5,5) would be top right. After going past 5, the shifting loops back to 0. Go below 0 - it loops to 5.
So basically the grid ever "moves" within one square, but because it is uniform - the illusion is that you're walking on an infinite grid surface instead.
Also note that you can make the same thing work with multi-layered grids, for example where every 10th line is thicker. All you really need to do is make sure your coordShift represents the largest distance your grid pattern shifts.
Just in case someone wonders why I made it loop - it's for precision's sake. Sure, you could just pass the character's raw coordinate to the shader, and it'll work fine around (0,0), but as you get 10000 units away you will notice some serious precision glitches, like your lines getting distorted or even "fuzzy" like they're made out of brushes.
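For illustration, the wrap-around described above boils down to a floating-point modulo of the character's world position by the square size. A minimal sketch with hypothetical names (my application code is Java, but the idea is language-neutral):
#include <cmath>

// Keep the shift within [0, squareSize) so precision never degrades,
// no matter how far the character travels.
float wrapShift(float worldCoord, float squareSize)
{
    float shift = std::fmod(worldCoord, squareSize);
    if (shift < 0.0f)
        shift += squareSize; // fmod can return negative values; loop back into [0, squareSize)
    return shift;
}

// coordShift = (wrapShift(character.x, squareSize), wrapShift(character.z, squareSize))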
Here's the fragment shader:
#version 300 es
precision highp float;
in highp vec3 vertexPosition;
out mediump vec4 fragColor;
const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;
void main()
{
    // calculate derivatives
    // (must be done at the start before conditionals)
    float dXy = abs(dFdx(vertexPosition.z)) / 2.0;
    float dYy = abs(dFdy(vertexPosition.z)) / 2.0;
    float dXx = abs(dFdx(vertexPosition.x)) / 2.0;
    float dYx = abs(dFdy(vertexPosition.x)) / 2.0;

    // find and fill horizontal lines
    int roundPos = int(vertexPosition.z / squareSize);
    float remainder = vertexPosition.z - float(roundPos) * squareSize;
    float width = max(dYy, dXy) * 2.0;

    if (remainder <= width)
    {
        float diff = (width - remainder) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }
    if (remainder >= (squareSize - width))
    {
        float diff = (remainder - squareSize + width) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }

    // find and fill vertical lines
    roundPos = int(vertexPosition.x / squareSize);
    remainder = vertexPosition.x - float(roundPos) * squareSize;
    width = max(dYx, dXx) * 2.0;

    if (remainder <= width)
    {
        float diff = (width - remainder) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }
    if (remainder >= (squareSize - width))
    {
        float diff = (remainder - squareSize + width) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }

    // fill base color
    fragColor = vec4(0, 0, 0, 0);
    return;
}
It is currently built for 1-pixel-thick lines only, but you can control thickness by controlling the "width".
Here, the first important part is the dFdx / dFdy functions. These are GLSL functions, and I'll simply say that they let you determine how much space in WORLD coordinates your fragment takes on the screen, based on the Z-distance of that spot on your plane.
Well, that was a mouthful. I'm sure you can figure it out if you read docs for them though.
Then I take the maximum of those outputs as width. Basically, depending on the way your camera is looking you want to "stretch" the width of your line a bit.
remainder - is basically how far this fragment is from the line that we want to draw in world coordinates. If it's too far - we don't need to fill it.
If you simply take the max here, you will get a non-antialiased line 1 pixel wide. It'll basically look like a perfect 1-pixel line shape from MS Paint.
But by increasing the width, you make those straight segments stretch further and overlap.
You can see that I compare remainder with line width here. The greater the width - the bigger the remainder can be to "hit" it. I have to compare this from both sides, because otherwise you're only looking at pixels that are close to the line from the negative coord side, and discount the positive, which could still be hitting it.
Now, for the simple antialiasing effect, we need to make those overlapping segments "fade out" as they near their ends. For this purpose, I calculate the fraction to see how deeply the remainder is inside the line. When the fraction equals 1, this means that our line that we want to draw basically goes straight through the middle of the fragment that we're currently drawing. As the fraction approaches 0, it means the fragment is farther and farther away from the line, and should thus be made more and more transparent.
Finally, we do this from both sides for horizontal and vertical lines separately. We have to do them separately because dFdx / dFdy needs to be different for vertical and horizontal lines, so we can't do them in one formula.
And at last, if we didn't hit any of the lines close enough - we fill the fragment with transparent color.
I'm not sure if that's THE best code for the task - but it works. If you have suggestions let me know!
P.S. The shaders are written for OpenGL ES, but they should work for OpenGL too.

Applying a perspective transformation matrix from GIMP into a GLSL shader

So I'm trying to add a rotation and a perspective effect to an image in the vertex shader. The rotation works just fine, but I'm unable to make the perspective effect work. I'm working in 2D.
The rotation matrix is generated from the code but the perspective matrix is a bunch of hardcoded values I got from GIMP by using the perspective tool.
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
    0.58302f, -0.29001f, 103.0f,
    -0.00753f, 0.01827f, 203.0f,
    -0.00002f, -0.00115f, 1.0f
});
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
uniform mat3 u_rotation;
uniform mat3 u_perspective;
varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    vec3 vec = vec3(a_texCoord0 * 500.0, 1.0);
    vec = vec * u_perspective;
    vec = vec3((vec.xy / vec.z) / 500.0, 0.0);
    vec -= vec3(0.5, 0.5, 0.0);
    vec = vec * u_rotation;
    v_texCoords = vec.xy + vec2(0.5);
    gl_Position = u_projTrans * a_position;
}
For the rotation, I'm offsetting the origin so that it rotates around the center instead of the top left corner.
Pretty much everything I know about GIMP's perspective tool comes from http://www.math.ubc.ca/~cass/graphics/manual/pdf/ch10.ps. It suggested I would be able to reproduce what GIMP does after reading it, but it turns out I can't. The result shows nothing (no pixels), while removing the perspective part shows the image rotating properly.
As mentioned in the link, I'm dividing by vec.z to convert my homogeneous coordinates back to a 2D point. I'm not using the origin shifting for the perspective transformation, as the link mentions (p. 11) that the top left corner is used as the origin:
There is one thing to be careful about - the origin of GIMP
coordinates is at the upper left, with y increasing downwards.
EDIT:
Thanks to @Rabbid76's answer, it's now showing something! However, it's not transforming my texture the way the matrix was transforming my image in GIMP.
My transformation matrix on GIMP was supposed to do something a bit like that:
But instead, it looks like something like that:
This is what I think from what I can see from the actual result:
https://imgur.com/X56rp8K (Image used)
(As pointed out, the texture parameter is clamp-to-edge instead of clamp-to-border, but that's beside the point.)
It looks like it's doing the exact opposite of what I'm looking for. I tried offsetting the origin to the center of the image and to the bottom left before applying the matrix, without success. This is a new result, but it's still the same problem: how do I apply the GIMP perspective matrix in a GLSL shader?
EDIT2:
With more testing, I can confirm that it's doing the "opposite". Using this simple downscale transformation matrix:
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
    0.75f, 0f, 50f,
    0f, 0.75f, 50f,
    0f, 0f, 1.0f
});
The result is an upscaled version of the image:
If I invert the matrix programmatically, it works for the simple scaling matrix! But for the perspective matrix, it shows that:
https://imgur.com/v3TLe2d
EDIT3:
Thanks to @Rabbid76 again: it turned out that applying the rotation after the perspective matrix effectively does the rotation before it, and I end up with a result like this: https://imgur.com/n1vWq0M
It is almost it! The only problem is that the image is VERY squished. It's just like the perspective matrix was applied multiple times. But if you look carefully, you can see it rotating while in perspective just like I want it. The problem now is how to unsquish it to get a result just like I had in GIMP. (The root problem is still the same, how to take a GIMP matrix and apply it in a shader)
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
The matrix
 0.58302  -0.29001  103.0
-0.00753   0.01827  203.0
-0.00002  -0.00115    1.0
is a 2D perspective transformation matrix. It operates on 2D homogeneous coordinates.
See 2D affine and perspective transformation matrices
Since the matrix which is displayed in GIMP is the transformation from the perspective to the orthogonal view, the inverse matrix has to be used for the transformation.
The inverse matrix can be calculated by calling inv().
The matrix is set up to transform a Cartesian coordinate in the range [0, 500] to a homogeneous coordinate in the range [0, 500].
Your assumption is correct: you have to scale the input from the range [0, 1] to [0, 500] and the output from [0, 500] back to [0, 1].
But you have to scale the 2D Cartesian coordinates.
Further, you have to do the rotation after the perspective projection and the perspective divide.
It may be necessary (depending on the bitmap and the texture coordinate attributes) to flip the V coordinate of the texture coordinates.
And most important, the transformation has to be done per fragment in the fragment shader.
Note, since this transformation is not linear (it is a perspective transformation), it is not sufficient to calculate the texture coordinates at the corner points only.
vec2 Project2D( in vec2 uv_coord )
{
    const float scale = 500.0;

    // flip Y
    //vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
    vec2 uv = uv_coord.xy;

    // uv_h: homogeneous coordinate (3 components) in range [0, 500]
    vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;

    // uv_p: perspective divide and downscale [0, 500] -> [0, 1]
    vec3 uv_p = vec3(uv_h.xy / uv_h.z / scale, 1.0);

    // rotate
    uv_p = vec3(uv_p.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0);

    return uv_p.xy;
}
Of course you can do the transformation in the vertex shader too.
But then you have to pass the 2D homogeneous coordinate from the vertex shader to the fragment shader.
This is similar to setting a clip space coordinate on gl_Position.
The difference is that you have a 2D homogeneous coordinate and not a 3D one, and you have to do the perspective divide manually in the fragment shader:
Vertex shader:
attribute vec2 a_texCoord0;
varying vec3 v_texCoords_h;
uniform mat3 u_perspective;

vec3 Project2D( in vec2 uv_coord )
{
    const float scale = 500.0;

    // flip Y
    //vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
    vec2 uv = uv_coord.xy;

    // uv_h: homogeneous coordinate (3 components) in range [0, 500]
    vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;

    // downscale
    return vec3(uv_h.xy / scale, uv_h.z);
}

void main()
{
    v_texCoords_h = Project2D( a_texCoord0 );
    .....
}
Fragment shader:
varying vec3 v_texCoords_h;
uniform mat3 u_rotation;

void main()
{
    // perspective divide
    vec2 uv = v_texCoords_h.xy / v_texCoords_h.z;

    // rotation
    uv = (vec3(uv.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0)).xy;
    .....
}
See the preview, where I used the following 2D projection matrix, which is the inverse of the matrix displayed in GIMP:
2.452f, 2.6675f, -388.0f,
0.0f, 7.7721f, -138.0f,
0.00001f, 0.00968f, 1.0f
Further note that, in contrast to u_projTrans, u_perspective is initialized in row-major order.
Because of that, you have to multiply the vector from the left by u_perspective:
vec_h = vec3(vec.xy * 500.0, 1.0) * u_perspective;
But you have to multiply the vector from the right by u_projTrans:
gl_Position = u_projTrans * a_position;
See GLSL Programming/Vector and Matrix Operations
and Data Type (GLSL)
Of course this may change if you transpose the matrix when you set it with glUniformMatrix*.
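As a concrete illustration of that last point (a sketch; "loc" and "rowMajorData" are placeholder names, and note that OpenGL ES 2.0 / WebGL require the transpose argument to be GL_FALSE):
// Row-major 3x3 data, e.g. the GIMP matrix above, uploaded two ways (desktop GL):
glUniformMatrix3fv(loc, 1, GL_FALSE, rowMajorData); // data kept as-is: shader must compute vec * u_perspective
glUniformMatrix3fv(loc, 1, GL_TRUE,  rowMajorData); // data transposed on upload: shader computes u_perspective * vec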

OpenGL, Projection Matrix - Front of box is smaller...?

I'm in the process of learning WebGL and I'm trying to understand how to build a perspective matrix. I think I almost have it... I'm just stuck on one small problem: when I multiply my verts by the projection matrix, I expect the front of the box being looked at to get bigger, but instead it gets smaller and the back gets bigger. I've attached a screenshot:
(the green side is the front)
My perspective matrix looks like this..
var aspectRatio = 600 / 600;
var fieldOfView = 30;
var near = 1;
var far = 2;
myPerspectiveMatrix = [
    1 / Math.tan(fieldOfView / 2), 0, 0, 0,
    0, 1 / Math.tan(fieldOfView / 2), 0, 0,
    0, 0, (near + far) / (near - far), (2 * (near * far)) / (near - far),
    0, 0, -1, 0
];
app.uniformMatrix4fv(uPerspectiveMatrix, false, new Float32Array(myPerspectiveMatrix));
And my vertex shader is..
attribute vec3 aPosition;
attribute vec4 aColor;
uniform mat4 uModelMatrix;
uniform mat4 uPerspectiveMatrix;
varying lowp vec4 vColor;
void main()
{
    gl_Position = uPerspectiveMatrix * vec4(aPosition, 5.0);
    //gl_Position = uPerspectiveMatrix * uModelMatrix * vec4(aPosition, 2.0);
    vColor = aColor;
}
What's likely happening here is that your triangles are being drawn in the wrong winding order (clockwise as opposed to counter-clockwise, or vice versa), so you are seeing the "inside" of the box.
There are myriad ways of fixing this. My recommendation would be to fix the winding order of the indices you are using to draw the box.
Alternatively, the quick fix would be to perhaps change the "front face" using glFrontFace.
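For example (a sketch; in WebGL the equivalent calls are gl.frontFace, gl.enable and gl.cullFace):
glFrontFace(GL_CW);      // flip which winding counts as front-facing (the default is GL_CCW)
glEnable(GL_CULL_FACE);  // or enable culling so faces wound the wrong way are discarded
glCullFace(GL_BACK);     //   instead of being drawn over the front of the box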

Retrieve Vertices Data in THREE.js

I'm creating a mesh with a custom shader. Within the vertex shader I'm modifying the original position of the geometry vertices. Then I need to access to this new vertices position from outside the shader, how can I accomplish this?
In lieu of transform feedback (which WebGL 1.0 does not support), you will have to use a passthrough fragment shader and floating-point texture (this requires loading the extension OES_texture_float). That is the only approach to generate a vertex buffer on the GPU in WebGL. WebGL does not support pixel buffer objects either, so reading the output data back is going to be very inefficient.
Nevertheless, here is how you can accomplish this:
This will be a rough overview focusing on OpenGL rather than anything Three.js specific.
First, encode your vertex array this way (add a 4th component for index):
Vec4 pos_idx : xyz = Vertex Position, w = Vertex Index (0.0 through NumVerts-1.0)
Storing the vertex index as the w component is necessary because OpenGL ES 2.0 (WebGL 1.0) does not support gl_VertexID.
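For instance, building that array on the CPU could look roughly like this (a sketch; "positions" holding NumVerts xyz triples is an assumption, not from the original question):
#include <vector>

// Pack each vertex position plus its index into a vec4-style array,
// since OpenGL ES 2.0 / WebGL 1.0 has no gl_VertexID.
std::vector<float> pos_idx(NumVerts * 4);
for (int i = 0; i < NumVerts; ++i) {
    pos_idx[i * 4 + 0] = positions[i * 3 + 0]; // x
    pos_idx[i * 4 + 1] = positions[i * 3 + 1]; // y
    pos_idx[i * 4 + 2] = positions[i * 3 + 2]; // z
    pos_idx[i * 4 + 3] = (float)i;             // vertex index in the w component
}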
Next, you need a 2D floating-point texture:
MaxTexSize = Query GL_MAX_TEXTURE_SIZE
Width  = MaxTexSize;
Height = max (ceil (NumVerts / MaxTexSize), 1);
Create an RGBA floating-point texture with those dimensions and use it as FBO color attachment 0.
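In plain GL calls, that setup might look roughly like this (a sketch in desktop-GL style; in WebGL the calls are analogous and OES_texture_float must be enabled first):
// Float texture used as render target for the processed vertices.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0, GL_RGBA, GL_FLOAT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE before drawing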
Vertex Shader:
#version 100
attribute vec4 pos_idx;
uniform int width;  // Width of floating-point texture
uniform int height; // Height of floating-point texture
varying vec4 vtx_out;

void main (void)
{
    float idx = pos_idx.w;

    // Position this vertex so that it occupies a unique pixel
    // (GLSL ES 1.00 has no integer % operator, so use mod())
    vec2 xy_idx = vec2 (mod (idx, float (width)) / float (width),
                        floor (idx / float (width)) / float (height)) * vec2 (2.0) - vec2 (1.0);
    gl_Position = vec4 (xy_idx, 0.0, 1.0);

    //
    // Do all of your per-vertex calculations here, and output to vtx_out.xyz
    //

    // Store the index in the W component
    vtx_out.w = idx;
}
Passthrough Fragment Shader:
#version 100
precision highp float; // fragment shaders have no default float precision in ES 2.0
varying vec4 vtx_out;

void main (void)
{
    gl_FragData [0] = vtx_out;
}
Draw and Read Back:
// Draw your entire vertex array for processing (as `GL_POINTS`) into the FBO
glDrawArrays (GL_POINTS, 0, NumVerts);

// With the FBO still bound, read the results back into an array `verts`
// (WebGL has no glGetTexImage, so read from the framebuffer instead)
glReadPixels (0, 0, Width, Height, GL_RGBA, GL_FLOAT, verts);
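Each pixel read back is one processed vertex, and the index stored in its fourth component tells you which vertex it is (a sketch; where you store the result is up to you):
// Decode the read-back pixels: xyz plus the original vertex index in w.
for (int i = 0; i < NumVerts; ++i) {
    float x = verts[i * 4 + 0];
    float y = verts[i * 4 + 1];
    float z = verts[i * 4 + 2];
    int originalIndex = (int)(verts[i * 4 + 3] + 0.5f); // round the stored vertex index
    // write (x, y, z) back into your CPU-side vertex data at originalIndex
}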

Perspective correct texturing of trapezoid in OpenGL ES 2.0

I have drawn a textured trapezoid, however the result does not appear as I had intended.
Instead of appearing as a single unbroken quadrilateral, a discontinuity occurs at the diagonal line where its two comprising triangles meet.
This illustration demonstrates the issue:
(Note: the last image is not intended to be a 100% faithful representation, but it should get the point across.)
The trapezoid is being drawn using GL_TRIANGLE_STRIP in OpenGL ES 2.0 (on an iPhone). It's being drawn completely facing the screen, and is not being tilted (i.e. that's not a 3D sketch you're seeing!)
I have come to understand that I need to perform "perspective correction," presumably in my vertex and/or fragment shaders, but I am unclear how to do this.
My code includes some simple Model/View/Projection matrix math, but none of it currently influences my texture coordinate values. Update: the previous statement is incorrect, according to a comment by user infact.
Furthermore, I have found this tidbit in the ES 2.0 spec, but do not understand what it means:
The PERSPECTIVE CORRECTION HINT is not supported because OpenGL
ES 2.0 requires that all attributes be perspectively interpolated.
How can I make the texture draw correctly?
Edit: Added code below:
// Vertex shader
attribute vec4 position;
attribute vec2 textureCoordinate;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;

void main()
{
    gl_Position = modelViewProjectionMatrix * position;
    texCoord = textureCoordinate;
}

// Fragment shader
uniform sampler2D texture;
varying mediump vec2 texCoord;

void main()
{
    gl_FragColor = texture2D(texture, texCoord);
}
// Update and Drawing code (uses GLKit helpers from iOS)
- (void)update
{
    float fov = GLKMathDegreesToRadians(65.0f);
    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    projectionMatrix = GLKMatrix4MakePerspective(fov, aspect, 0.1f, 50.0f);
    viewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); // zoom out
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(shaders[SHADER_DEFAULT]);

    GLKMatrix4 modelMatrix = GLKMatrix4MakeScale(0.795, 0.795, 0.795); // arbitrary scale
    GLKMatrix4 modelViewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);
    GLKMatrix4 modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, GL_FALSE, modelViewProjectionMatrix.m);

    glBindTexture(GL_TEXTURE_2D, textures[TEXTURE_WALLS]);
    glUniform1i(uniforms[UNIFORM_TEXTURE], 0);

    glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, wall.vertexArray);
    glVertexAttribPointer(ATTRIB_TEXTURE_COORDINATE, 2, GL_FLOAT, GL_FALSE, 0, wall.texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, wall.vertexCount);
}
(I'm taking a bit of a punt here, because your picture does not show exactly what I would expect from texturing a trapezoid, so perhaps something else is happening in your case - but the general problem is well known)
Textures will not (by default) interpolate correctly across a trapezoid. When the shape is triangulated for drawing, one of the diagonals will be chosen as an edge, and while that edge is straight through the middle of the texture, it is not through the middle of the trapezoid (picture the shape divided along a diagonal - the two triangles are very much not equal).
You need to provide more than a 2D texture coordinate to make this work - you need to provide a 3D (or rather, projective) texture coordinate, and perform the perspective divide in the fragment shader, post-interpolation (or else use a texture lookup function which will do the same).
The following shows how to provide texture coordinates for a trapezoid using old-school GL functions (which are a little easier to read for demonstration purposes). The commented-out lines are the 2d texture coordinates, which I have replaced with projective coordinates to get the correct interpolation.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 0, 480, 1, 1000);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

const float trap_wide = 600;
const float trap_narrow = 300;
const float mid = 320;

glBegin(GL_TRIANGLE_STRIP);
glColor3f(1, 1, 1);

// glTexCoord4f(0, 0, 0, 1);
glTexCoord4f(0, 0, 0, trap_wide);
glVertex3f(mid - trap_wide / 2, 10, -10);

// glTexCoord4f(1, 0, 0, 1);
glTexCoord4f(trap_narrow, 0, 0, trap_narrow);
glVertex3f(mid - trap_narrow / 2, 470, -10);

// glTexCoord4f(0, 1, 0, 1);
glTexCoord4f(0, trap_wide, 0, trap_wide);
glVertex3f(mid + trap_wide / 2, 10, -10);

// glTexCoord4f(1, 1, 0, 1);
glTexCoord4f(trap_narrow, trap_narrow, 0, trap_narrow);
glVertex3f(mid + trap_narrow / 2, 470, -10);

glEnd();
The third coordinate is unused here as we're just using a 2D texture. The fourth coordinate will divide the other two after interpolation, providing the projection. Obviously if you divide it through at the vertices, you'll see you get the original texture coordinates.
Here's what the two renderings look like:
If your trapezoid is actually the result of transforming a quad, it might be easier/better to just draw that quad using GL, rather than transforming it in software and feeding 2D shapes to GL...
What you are trying to do here is skewed texturing. A sample fragment shader is as follows:
precision mediump float;
varying vec4 vtexCoords;
uniform sampler2D sampler;

void main()
{
    gl_FragColor = texture2DProj(sampler, vtexCoords);
}
Two things here should look different:
1) We are using varying vec4 vtexCoords; - the texture coordinates are 4-dimensional.
2) texture2DProj() is used instead of texture2D().
Based on the lengths of the short and long sides of your trapezium, you will assign the texture coordinates. The following URL might help:
http://www.xyzw.us/~cass/qcoord/
The accepted answer gives the correct solution and explanation but for those looking for a bit more help on the OpenGL (ES) 2.0 pipeline...
const GLfloat L = 2.0;
const GLfloat Z = -2.0;
const GLfloat W0 = 0.01;
const GLfloat W1 = 0.10;

/** Trapezoid shape as two triangles. */
static const GLKVector3 VERTEX_DATA[] = {
    {{-W0, 0, Z}},
    {{+W0, 0, Z}},
    {{-W1, L, Z}},
    {{+W0, 0, Z}},
    {{+W1, L, Z}},
    {{-W1, L, Z}},
};

/** Add a 3rd coord to your texture data. This is the perspective divisor needed in frag shader */
static const GLKVector3 TEXTURE_DATA[] = {
    {{0, 0, 0}},
    {{W0, 0, W0}},
    {{0, W1, W1}},
    {{W0, 0, W0}},
    {{W1, W1, W1}},
    {{0, W1, W1}},
};
////////////////////////////////////////////////////////////////////////////////////
// frag.glsl
varying vec3 v_texPos;
uniform sampler2D u_texture;

void main(void)
{
    // Divide the 2D texture coords by the third projection divisor
    gl_FragColor = texture2D(u_texture, v_texPos.st / v_texPos.p);
}
Alternatively, in the shader, as per @maverick9888's answer, you can use texture2DProj(), though for iOS / OpenGL ES 2 it still only supports a vec3 input...
void main(void)
{
    gl_FragColor = texture2DProj(u_texture, v_texPos);
}
I haven't really benchmarked it properly but for my very simple case (a 1d texture really) the division version seems a bit snappier.
