OpenGL - Texture origin is top-left - user-interface

I have a little problem:
I know that in UVs, the origin (0, 0) is at the bottom left.
I've been making progress learning OpenGL (I use LWJGL 3), and I wanted to build a user interface.
My thinking is that to display a button, I just need to display a texture containing its text.
So I created, as usual, a new shader (vertex shader + fragment shader).
But to my great surprise, my texture is upside down! Why?
It's very weird.
This is my vertex shader:
#version 330

in vec3 position;
in vec2 texCoords;

out vec2 pass_texCoords;

void main() {
    gl_Position = vec4(position, 1.0);
    pass_texCoords = texCoords;
}
This is my fragment shader:
#version 330

in vec2 pass_texCoords;

out vec4 out_Color;

uniform sampler2D texSampler;

void main() {
    out_Color = texture(texSampler, pass_texCoords);
}
This is how I create my VAO
(for the moment I plan to display a single triangle):
my vertices: -0.5, 0.5, 0, -0.5, -0.5, 0, 0.5, -0.5, 0
my texCoords: 0, 1, 0, 0, 1, 0
my indices: 0, 1, 2
And now how I display it:
//bind shader, bind VAO, enable VBOs
GL13.glActiveTexture(GL13.GL_TEXTURE0);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture);
GL11.glDrawElements(GL11.GL_TRIANGLES, 3, GL11.GL_UNSIGNED_INT, 0);
//disableVBOs, unbind VAO, unbind shader
My triangle looks like this (but the texture inside is inverted):
| \
| \
| \
|_________
Need more code?
I know how to invert the texture manually, but I think the best approach is to find the actual problem...
And if someone knows a better way to add a user interface, I'm more than interested!
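For what it's worth, the usual culprit here is not the shader: most image loaders return pixel rows top-to-bottom, while glTexImage2D treats the first row it receives as v = 0, i.e. the bottom of the texture, so the upload itself is what flips the image. One common workaround, sketched below on the vertex shader above (flipping the image's rows at load time works just as well), is to flip the V coordinate:
#version 330

in vec3 position;
in vec2 texCoords;

out vec2 pass_texCoords;

void main() {
    gl_Position = vec4(position, 1.0);
    // Compensate for the image being uploaded top row first:
    // OpenGL samples v = 0 at the bottom of the texture.
    pass_texCoords = vec2(texCoords.x, 1.0 - texCoords.y);
}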

Related

Jittering vertices

I'm trying to get rid of spatial jitter. To be more precise, I'm trying to replicate Three.js behavior concerning matrix manipulation.
My current rendering pipeline (WebGL with m4.js)
There is a scene object, a camera and a mesh. The mesh has its position (mesh.position) set to the position constant shown below, and the camera is floating somewhere near it.
Vertices have positions relative to mesh.position:
new Float32Array([
    -5, 0, -5,
     5, 0, -5,
     0, 0,  5
])
Important part of render loop:
const position = {x: 6428439.8443510765, y: 0, z: 4039717.5286310893}; // mesh is located there
camera.updateMatrixWorld();
camera.updateMatrixWorldInverse();
let modelViewMatrix = m4.multiply(camera.matrixWorldInverse, mesh.matrixWorld);
material.uniforms.projectionMatrix = {type: 'Matrix4fv', value: camera.projectionMatrix};
material.uniforms.modelViewMatrix = {type: 'Matrix4fv', value: modelViewMatrix};
material.use();
mesh.draw(material);
Vertex shader:
#version 300 es
precision highp float;
in vec3 position;
in vec3 color;
out vec3 vColor;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
void main() {
    vColor = color;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Fragment shader:
#version 300 es
precision highp float;
out vec4 FragColor;
in vec3 vColor;
uniform float uSample;
void main() {
    FragColor = vec4(vColor * uSample, 1.0);
}
Result:
When moving or rotating the camera around the mesh, a spatial jitter effect is observed, which is not the expected behavior.
I've implemented the same scene using Three.js, and as expected there is no jitter while moving vertices or the camera: Codepen link. Three.js should work exactly the same way as my implementation, but obviously I'm missing something.
It turns out that m4.js, which is part of the webgl-3d-math library used on webgl2fundamentals.org, stores its matrices in Float32Array objects. Maybe this approach helps performance, but it also causes some confusion, since JavaScript's Number is a 64-bit float.
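To see why this bites at these magnitudes: a 32-bit float has a 24-bit significand, so for values between 2^22 (4,194,304) and 2^23 (8,388,608) the spacing between representable numbers is 0.5. The mesh position x = 6428439.8443510765 therefore rounds to 6428440.0 the moment it enters a Float32Array, and nearby positions snap in half-unit steps - which is exactly the jitter observed. Consistent with this diagnosis, the usual remedy is to do the matrix math in ordinary 64-bit JavaScript numbers and only convert to Float32Array when uploading uniforms; by then the large, nearly equal camera and mesh translations have cancelled, leaving small camera-relative values that fit comfortably in 32 bits.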

OpenGL, Projection Matrix - Front of box is smaller...?

I'm in the process of learning WebGL, and I'm trying to understand how to build a perspective matrix. I think I almost have it... I'm just stuck on one small problem: when I multiply my verts by the projection matrix, I expect the front of the box being looked at to get bigger, but instead it gets smaller and the back gets bigger. I've attached a screenshot:
(the green side is the front)
My perspective matrix looks like this:
var aspectRatio = 600 / 600;
var fieldOfView = 30;
var near = 1;
var far = 2;
myPerspectiveMatrix = [
    1 / Math.tan(fieldOfView / 2), 0, 0, 0,
    0, 1 / Math.tan(fieldOfView / 2), 0, 0,
    0, 0, (near + far) / (near - far), (2 * (near * far)) / (near - far),
    0, 0, -1, 0
];
app.uniformMatrix4fv(uPerspectiveMatrix, false, new Float32Array(myPerspectiveMatrix));
And my vertex shader is:
attribute vec3 aPosition;
attribute vec4 aColor;
uniform mat4 uModelMatrix;
uniform mat4 uPerspectiveMatrix;
varying lowp vec4 vColor;
void main()
{
    gl_Position = uPerspectiveMatrix * vec4(aPosition, 5.0);
    //gl_Position = uPerspectiveMatrix * uModelMatrix * vec4(aPosition, 2.0);
    vColor = aColor;
}
What's likely happening here is that your triangles are being drawn in the wrong winding order (clockwise as opposed to counter-clockwise, or vice versa), so you are seeing the "inside" of the box.
There are myriad ways of fixing this. My recommendation would be to fix the winding order of the indices you are using to draw the box.
Alternatively, the quick fix would be to change the "front face" using glFrontFace.
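For example (assuming the indices currently wind clockwise when seen from outside the box): swapping any two indices of each triangle reverses its winding, or a single call such as gl.frontFace(gl.CW) (glFrontFace(GL_CW) in desktop GL) tells the pipeline to treat clockwise triangles as front-facing - which matters as soon as face culling is enabled via gl.enable(gl.CULL_FACE).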

OpenGL basic Vertex Shader

I am new to shader concepts and I am trying to implement an 8x8 sprite in OpenGL ES.
I want to move the texture in the vertex shader but I can't figure out how to do this; my code may be wrong, so feel free to correct me.
If I change this line in the vertex shader, the texture scales, but I want to move it, not scale it!:
v_TexCoordinate = a_TexCoordinate * vec2(1.5, 1.5);
So I should apply addition instead, but I don't know how to do it (maybe there is another way).
Vertex shader:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
uniform mat4 u_TextureMatrix;
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
This is my draw function:
private void drawMagia()
{
    GLES20.glUseProgram(mMagiaProgramHandle);
    mTextureMatrixHandle = GLES20.glGetUniformLocation(mMagiaProgramHandle, "u_TextureMatrix");
    mMagiaTextureCoordinateHandle = GLES20.glGetAttribLocation(mMagiaProgramHandle, "a_TexCoordinate");

    mMagiaPositions.position(0);
    GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
            0, mMagiaPositions);
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    // Pass in the normal information.
    mMagiaNormals.position(0);
    GLES20.glVertexAttribPointer(mNormalHandle, mNormalDataSize, GLES20.GL_FLOAT, false,
            0, mMagiaNormals);
    GLES20.glEnableVertexAttribArray(mNormalHandle);

    // Pass in the texture coordinate information.
    mMagiaTextureCoordinates.position(0);
    GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mTextureCoordinateDataSize, GLES20.GL_FLOAT, false,
            0, mMagiaTextureCoordinates);
    GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);

    // This multiplies the view matrix by the model matrix, and stores the
    // result in the MVP matrix (which currently contains model * view).
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

    // Pass in the modelview matrix.
    GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);
    GLES20.glUniformMatrix4fv(mTextureMatrixHandle, 1, false, mTextureMatrix, 0);

    // This multiplies the modelview matrix by the projection matrix, and
    // stores the result in the MVP matrix (which now contains model * view * projection).
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

    // Pass in the combined matrix.
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

    // Pass in the light position in eye space.
    GLES20.glUniform3f(mLightPosHandle, mLightPosInEyeSpace[0], mLightPosInEyeSpace[1], mLightPosInEyeSpace[2]);

    // Draw the square.
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}
You can add an offset instead of multiplying:
v_TexCoordinate = a_TexCoordinate + vec2(1.5, 1.5);
Also, your texture's wrap mode should be set to clamp.
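To make such an offset select a sprite frame rather than an arbitrary shift, here is a minimal sketch of the idea - assuming the 8x8 in the question means a sheet of 8 columns by 8 rows, and using a hypothetical uniform u_Frame holding the (column, row) of the frame to display:
uniform mat4 u_MVPMatrix;
uniform vec2 u_Frame; // hypothetical uniform: (column, row) of the frame to show

attribute vec4 a_Position;
attribute vec2 a_TexCoordinate;

varying vec2 v_TexCoordinate;

void main()
{
    // a_TexCoordinate spans [0,1] x [0,1] over the quad; (uv + frame) / 8.0
    // shrinks it into one cell of the 8x8 sheet and moves it to that cell.
    v_TexCoordinate = (a_TexCoordinate + u_Frame) / 8.0;
    gl_Position = u_MVPMatrix * a_Position;
}
Dividing at the end keeps the resulting coordinates inside [0,1], so the wrap mode never comes into play for in-range frames.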

Working around gl_PointSize limitations in three.js / webGL

I'm using three.js to create an interactive data visualisation. This visualisation involves rendering 68000 nodes, where each different node has a different size and color.
Initially I tried to do this by rendering meshes, but that proved to be very expensive. My current attempt is to use a three.js particle system, with each point being a node in the visualisation.
I can control the color and size of each point, but only up to a limit. On my card, the maximum size for a GL point seems to be 63 pixels. As I zoom in to the visualisation, points get larger - up to a point - and then remain at 63 pixels.
I'm using a vertex & fragment shader currently:
vertex shader:
attribute float size;
attribute vec3 ca;

varying vec3 vColor;

void main() {
    vColor = ca;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_PointSize = size * ( 300.0 / length( mvPosition.xyz ) );
    gl_Position = projectionMatrix * mvPosition;
}
Fragment shader:
uniform vec3 color;
uniform sampler2D texture;

varying vec3 vColor;

void main() {
    gl_FragColor = vec4( color * vColor, 1.0 );
    gl_FragColor = gl_FragColor * texture2D( texture, gl_PointCoord );
}
These are copied almost verbatim from one of the three.js examples.
I'm totally new to GLSL, but I'm looking for a way to draw points larger than 63 pixels. Can I do something like draw a mesh for any points larger than a certain size, but use a gl_point otherwise? Are there any other work-arounds I can use to draw points larger than 63 pixels?
You can make your own point system by making arrays of unit quads + the center point, then expanding by size in GLSL.
So, you'd have 2 buffers. One buffer is just a 2D unit quad repeated for however many points you want to draw.
var unitQuads = new Float32Array([
    -0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
    -0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
    -0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
    -0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
    -0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
]);
The second one holds your points, except each position needs to be repeated 4 times:
var points = new Float32Array([
    p1.x, p1.y, p1.z, p1.x, p1.y, p1.z, p1.x, p1.y, p1.z, p1.x, p1.y, p1.z,
    p2.x, p2.y, p2.z, p2.x, p2.y, p2.z, p2.x, p2.y, p2.z, p2.x, p2.y, p2.z,
    p3.x, p3.y, p3.z, p3.x, p3.y, p3.z, p3.x, p3.y, p3.z, p3.x, p3.y, p3.z,
    p4.x, p4.y, p4.z, p4.x, p4.y, p4.z, p4.x, p4.y, p4.z, p4.x, p4.y, p4.z,
    p5.x, p5.y, p5.z, p5.x, p5.y, p5.z, p5.x, p5.y, p5.z, p5.x, p5.y, p5.z,
]);
Set up your buffers and attributes:
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, unitQuads, gl.STATIC_DRAW);
gl.enableVertexAttribArray(unitQuadLoc);
gl.vertexAttribPointer(unitQuadLoc, 2, gl.FLOAT, false, 0, 0);
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);
gl.enableVertexAttribArray(pointLoc);
gl.vertexAttribPointer(pointLoc, 3, gl.FLOAT, false, 0, 0);
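Note: since the buffers above store only four vertices per quad while gl.TRIANGLES consumes six, you'll also need an index buffer and a gl.drawElements call - for example indices 0, 1, 2, 2, 1, 3 for the first quad, with each subsequent quad's indices offset by 4.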
In your GLSL shader, compute the gl_PointSize you want, then multiply the unitQuad by that size in view space or screen space. Screen space would match what gl_PointSize does, but often people want their points to scale in 3D like normal geometry, in which case view space is what you want.
attribute vec2 a_unitQuad;
attribute vec4 a_position;

uniform mat4 u_view;
uniform mat4 u_viewProjection;

void main() {
    float fake_gl_pointsize = 150.0;
    // Get the xAxis and yAxis in view space.
    // These are unit vectors, so they represent moving perpendicular to the view.
    vec3 x_axis = u_view[0].xyz;
    vec3 y_axis = u_view[1].xyz;
    // Multiply them by the desired size.
    x_axis *= fake_gl_pointsize;
    y_axis *= fake_gl_pointsize;
    // Multiply them by the unitQuad to make a quad around the origin.
    vec3 local_point = x_axis * a_unitQuad.x + y_axis * a_unitQuad.y;
    // Add in the position where you actually want the quad.
    local_point += a_position.xyz;
    // Now do the normal math you'd do in a shader.
    gl_Position = u_viewProjection * vec4(local_point, 1.0);
}
I'm not sure that made any sense, but there's a more complicated, working sample here.
Can I do something like draw a mesh for any points larger than a certain size, but use a gl_point otherwise?
Not in WebGL.
You can draw your particle system as a series of quads (i.e. two triangles). But that's about it.

Perspective correct texturing of trapezoid in OpenGL ES 2.0

I have drawn a textured trapezoid, however the result does not appear as I had intended.
Instead of appearing as a single unbroken quadrilateral, a discontinuity occurs at the diagonal line where its two comprising triangles meet.
This illustration demonstrates the issue:
(Note: the last image is not intended to be a 100% faithful representation, but it should get the point across.)
The trapezoid is being drawn using GL_TRIANGLE_STRIP in OpenGL ES 2.0 (on an iPhone). It's being drawn completely facing the screen, and is not being tilted (i.e. that's not a 3D sketch you're seeing!)
I have come to understand that I need to perform "perspective correction," presumably in my vertex and/or fragment shaders, but I am unclear how to do this.
My code includes some simple Model/View/Projection matrix math, but none of it currently influences my texture coordinate values. Update: The previous statement is incorrect, according to a comment by user infact.
Furthermore, I have found this tidbit in the ES 2.0 spec, but do not understand what it means:
The PERSPECTIVE CORRECTION HINT is not supported because OpenGL
ES 2.0 requires that all attributes be perspectively interpolated.
How can I make the texture draw correctly?
Edit: Added code below:
// Vertex shader
attribute vec4 position;
attribute vec2 textureCoordinate;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
void main()
{
    gl_Position = modelViewProjectionMatrix * position;
    texCoord = textureCoordinate;
}
// Fragment shader
uniform sampler2D texture;
varying mediump vec2 texCoord;
void main()
{
    gl_FragColor = texture2D(texture, texCoord);
}
// Update and Drawing code (uses GLKit helpers from iOS)
- (void)update
{
    float fov = GLKMathDegreesToRadians(65.0f);
    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    projectionMatrix = GLKMatrix4MakePerspective(fov, aspect, 0.1f, 50.0f);
    viewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); // zoom out
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(shaders[SHADER_DEFAULT]);

    GLKMatrix4 modelMatrix = GLKMatrix4MakeScale(0.795, 0.795, 0.795); // arbitrary scale
    GLKMatrix4 modelViewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);
    GLKMatrix4 modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, GL_FALSE, modelViewProjectionMatrix.m);

    glBindTexture(GL_TEXTURE_2D, textures[TEXTURE_WALLS]);
    glUniform1i(uniforms[UNIFORM_TEXTURE], 0);

    glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, wall.vertexArray);
    glVertexAttribPointer(ATTRIB_TEXTURE_COORDINATE, 2, GL_FLOAT, GL_FALSE, 0, wall.texCoords);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, wall.vertexCount);
}
(I'm taking a bit of a punt here, because your picture does not show exactly what I would expect from texturing a trapezoid, so perhaps something else is happening in your case - but the general problem is well known)
Textures will not (by default) interpolate correctly across a trapezoid. When the shape is triangulated for drawing, one of the diagonals will be chosen as an edge, and while that edge is straight through the middle of the texture, it is not through the middle of the trapezoid (picture the shape divided along a diagonal - the two triangles are very much not equal).
You need to provide more than a 2D texture coordinate to make this work - you need to provide a 3D (or rather, projective) texture coordinate, and perform the perspective divide in the fragment shader, post-interpolation (or else use a texture lookup function which will do the same).
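Concretely, writing q for the projective component: supply (s*q, t*q, 0, q) at each vertex; those values interpolate linearly across the triangle, and dividing the first two by the last in the fragment shader recovers perspective-correct (s, t). The example below picks q proportional to the width of the trapezoid at each vertex.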
The following shows how to provide texture coordinates for a trapezoid using old-school GL functions (which are a little easier to read for demonstration purposes). The commented-out lines are the 2d texture coordinates, which I have replaced with projective coordinates to get the correct interpolation.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,640,0,480,1,1000);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
const float trap_wide = 600;
const float trap_narrow = 300;
const float mid = 320;
glBegin(GL_TRIANGLE_STRIP);
glColor3f(1,1,1);
// glTexCoord4f(0,0,0,1);
glTexCoord4f(0,0,0,trap_wide);
glVertex3f(mid - trap_wide/2,10,-10);
// glTexCoord4f(1,0,0,1);
glTexCoord4f(trap_narrow,0,0,trap_narrow);
glVertex3f(mid - trap_narrow/2,470,-10);
// glTexCoord4f(0,1,0,1);
glTexCoord4f(0,trap_wide,0,trap_wide);
glVertex3f(mid + trap_wide/2,10,-10);
// glTexCoord4f(1,1,0,1);
glTexCoord4f(trap_narrow,trap_narrow,0,trap_narrow);
glVertex3f(mid + trap_narrow/2,470,-10);
glEnd();
The third coordinate is unused here as we're just using a 2D texture. The fourth coordinate will divide the other two after interpolation, providing the projection. Obviously, if you divide through at the vertices, you'll see you get back the original texture coordinates.
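As a worked check: the narrow end's coordinate (trap_narrow, 0, 0, trap_narrow) = (300, 0, 0, 300) divides to (300/300, 0/300) = (1, 0), and the wide end's (0, trap_wide, 0, trap_wide) = (0, 600, 0, 600) divides to (0, 1) - the same corners the commented-out 2D coordinates supplied. The difference lies entirely in what happens between the vertices: numerator and denominator each interpolate linearly, and the per-fragment division then reproduces the projective (non-linear) mapping that plain 2D coordinates cannot express.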
Here's what the two renderings look like:
If your trapezoid is actually the result of transforming a quad, it might be easier/better to just draw that quad using GL, rather than transforming it in software and feeding 2D shapes to GL...
What you are trying to do here is skewed texturing. A sample fragment shader is as follows:
precision mediump float;

varying vec4 vtexCoords;
uniform sampler2D sampler;

void main()
{
    gl_FragColor = texture2DProj(sampler, vtexCoords);
}
The 2 things which should look different are:
1) We are using varying vec4 vtexCoords; - the texture coordinates are 4-dimensional.
2) texture2DProj() is used instead of texture2D().
Based on the lengths of the short and long sides of your trapezium, you assign the texture coordinates. The following URL might help:
http://www.xyzw.us/~cass/qcoord/
The accepted answer gives the correct solution and explanation, but for those looking for a bit more help with the OpenGL (ES) 2.0 pipeline...
const GLfloat L = 2.0;
const GLfloat Z = -2.0;
const GLfloat W0 = 0.01;
const GLfloat W1 = 0.10;
/** Trapezoid shape as two triangles. */
static const GLKVector3 VERTEX_DATA[] = {
{{-W0, 0, Z}},
{{+W0, 0, Z}},
{{-W1, L, Z}},
{{+W0, 0, Z}},
{{+W1, L, Z}},
{{-W1, L, Z}},
};
/** Add a 3rd coord to your texture data. This is the perspective divisor needed in frag shader */
static const GLKVector3 TEXTURE_DATA[] = {
{{0, 0, W0}}, // divisor must be W0 (not 0), or the division in the fragment shader becomes 0/0
{{W0, 0, W0}},
{{0, W1, W1}},
{{W0, 0, W0}},
{{W1, W1, W1}},
{{0, W1, W1}},
};
////////////////////////////////////////////////////////////////////////////////////
// frag.glsl
varying vec3 v_texPos;
uniform sampler2D u_texture;
void main(void)
{
    // Divide the 2D texture coords by the third projection divisor
    gl_FragColor = texture2D(u_texture, v_texPos.st / v_texPos.p);
}
Alternatively, in the shader, as per #maverick9888's answer, you can use texture2DProj(), though for iOS / OpenGL ES 2.0 it still only supports a vec3 input...
void main(void)
{
    gl_FragColor = texture2DProj(u_texture, v_texPos);
}
I haven't benchmarked it properly, but for my very simple case (really a 1D texture) the division version seems a bit snappier.
