So, I'm trying to make sense of some code before I copy and paste it into my application. In OpenGL I'm seeing some variables typed as in and out, but I see no such thing in the following code snippet. From what I understand, the vertex shader "magically" gets the input for the "in"-typed variables via the program, which incidentally can have a fragment and a vertex shader attached to it (the program). Here's the code:
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec2 aVertexPosition;
attribute vec2 aPlotPosition;
varying vec2 vPosition;
void main(void) {
gl_Position = vec4(aVertexPosition, 1.0, 1.0);
vPosition = aPlotPosition;
}
</script>
So, my question is: by attaching an appropriate program here, will aVertexPosition and aPlotPosition both be properly initialized, and furthermore, can vPosition be used somewhere else in my app, namely the fragment shader?
Let me try and explain how the GPU pipeline I/O works:
Each Vertex has a set of attributes associated with it. Given your example code:
attribute vec2 aVertexPosition;
attribute vec2 aPlotPosition;
You are saying that each vertex has a 2D vertex position and plot position. If you added:
attribute vec3 vNormal;
Then every vertex would also have a normal. You could think of these as vertex "properties".
You must tell the GPU where to fetch the values for each of these attributes.
Each vertex attribute is assigned an attribute array index when the shader is compiled. You must enable each attribute array index that your shader requires:
enableVertexAttribArray(int attributeIndex);
Once you've enabled it, you want to bind the attribute array to a vertex buffer:
bindBuffer(ARRAY_BUFFER, buffer);
You now describe how to fetch the attribute with this call:
vertexAttribPointer(int attributeIndex, int size, int type, bool normalized, int stride, int offset);
Given your example code:
vertexAttribPointer(0, 2, FLOAT, false, 16, 0); // vertex position
vertexAttribPointer(1, 2, FLOAT, false, 16, 8); // plot position
The stride (16 here) is the number of bytes from the start of one vertex to the start of the next. Each vertex consists of 4 floats and each float is 4 bytes wide, hence 16. The offset is where the attribute starts within a vertex: the vertex position is at byte 0 of the vertex and the plot position is at byte 8.
You can think of these as describing how to index into an array. The Nth vertex:
aVertexPosition.x = BUFFER[offset + N * stride + sizeof(FLOAT) * 0];
aVertexPosition.y = BUFFER[offset + N * stride + sizeof(FLOAT) * 1];
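Putting those calls together, here is a sketch of the host-side WebGL setup for the two attributes in the question. It assumes gl is the rendering context and program is the linked program containing the shader above; the data values are purely illustrative.
// Look up the attribute locations assigned at link time.
var aVertexPosition = gl.getAttribLocation(program, "aVertexPosition");
var aPlotPosition = gl.getAttribLocation(program, "aPlotPosition");

// Interleaved layout: [vx, vy, px, py] per vertex -> 4 floats = 16-byte stride.
// The numbers below are illustrative placeholders.
var vertexData = new Float32Array([
//   vx,    vy,    px,    py
    -1.0,  -1.0,  -2.0,  -1.5,
     1.0,  -1.0,   1.0,  -1.5,
     1.0,   1.0,   1.0,   1.5,
    -1.0,   1.0,  -2.0,   1.5
]);

var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

// Enable each attribute and describe how to fetch it from the bound buffer.
gl.enableVertexAttribArray(aVertexPosition);
gl.enableVertexAttribArray(aPlotPosition);
gl.vertexAttribPointer(aVertexPosition, 2, gl.FLOAT, false, 16, 0); // bytes 0-7 of each vertex
gl.vertexAttribPointer(aPlotPosition, 2, gl.FLOAT, false, 16, 8);   // bytes 8-15 of each vertex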
Vertex attributes are fetched automatically for you by the GPU and filled in before your vertex shader function is executed. Yes, your vertex shader's main is called once for every single vertex you draw.
The outputs of the vertex shader stage are the 'varying' variables. They are 'varying' because they are interpolated across the surface of the primitive (triangle) between vertices. You write the values out for each vertex, but when the triangle is rasterized into fragments, each fragment gets the interpolated value of each varying variable. The fragment shader is run for every fragment (pixel) that is "covered" by the draw call. If you draw a small triangle that covers a 4x4 patch of pixels, then the fragment shader is executed 16 times.
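To connect that back to the question: the fragment shader attached to the same program declares the same varying and receives the interpolated value. A minimal companion shader could look like the following (the body is just a placeholder, not the fragment shader from the original example):
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vPosition; // the interpolated aPlotPosition written by the vertex shader
void main(void) {
    // Placeholder: visualize the interpolated plot position as a colour.
    gl_FragColor = vec4(vPosition, 0.0, 1.0);
}
</script>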
Concisely:
Vertex Shader Inputs: Vertex Attributes & Uniform values (not covered)
Vertex Shader Outputs: Varying Values at each vertex
Fragment Shader Inputs: Varying Values for a given fragment (pixel)
Fragment Shader Outputs: Color & Depth values which are stored in the color and depth buffer.
Vertex Shader is run for every vertex in the draw call.
Fragment shader is run for every "covered" or "lit" fragment (pixel) in the draw call.
In other, newer versions of OpenGL, which have more shader stages than just vertex and fragment, in and out are used instead of attribute and varying.
attribute corresponds to in for a vertex shader.
varying corresponds to out for a vertex shader.
varying corresponds to in for a fragment shader.
(Uniforms are unchanged: a uniform stays a uniform in every shader stage.)
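For illustration, here is the question's vertex shader rewritten for GLSL ES 3.00 (WebGL 2); only the qualifiers change. The matching fragment shader would declare in vec2 vPosition and an explicit out vec4 colour variable instead of gl_FragColor.
#version 300 es
in vec2 aVertexPosition;  // was: attribute
in vec2 aPlotPosition;    // was: attribute
out vec2 vPosition;       // was: varying
void main(void) {
    gl_Position = vec4(aVertexPosition, 1.0, 1.0);
    vPosition = aPlotPosition;
}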
Related
In a scenario where vertices are displaced in the vertex shader, how to retrieve their transformed positions in WebGL / Three.js?
Other questions here suggest writing the positions to a texture and then reading the pixels, but the resulting values don't seem to be correct.
In the example below the position is passed to the fragment shader without any transformations:
// vertex shader
varying vec4 vOut;
void main() {
gl_Position = vec4(position, 1.0);
vOut = vec4(position, 1.0);
}
// fragment shader
varying vec4 vOut;
void main() {
gl_FragColor = vOut;
}
Then reading the output texture, I would expect pixel[0].r to be identical to positions[0].x, but that is not the case.
Here is a jsfiddle showing the problem:
https://jsfiddle.net/brunoimbrizi/m0z8v25d/2/
What am I missing?
Solved. Quite a few things were wrong with the jsfiddle mentioned in the question (a sketch of the resulting vertex shader follows this list):
width * height should be equal to the vertex count. A PlaneBufferGeometry with 4 by 4 segments results in 25 vertices. 3 by 3 results in 16. Always (w + 1) * (h + 1).
The positions in the vertex shader need a nudge of 1.0 / width.
The vertex shader needs to know about width and height, they can be passed in as uniforms.
Each vertex needs an attribute with its index so it can be correctly mapped.
Each position should be one pixel in the resulting texture.
The positions should be drawn into the resulting texture as gl.POINTS with gl_PointSize = 1.0.
Working jsfiddle: https://jsfiddle.net/brunoimbrizi/m0z8v25d/13/
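A minimal sketch of the vertex shader those points add up to; the attribute and uniform names (aIndex, uWidth, uHeight) are illustrative, not copied from the fiddle:
// vertex shader
attribute float aIndex; // per-vertex index, 0 .. vertexCount - 1
uniform float uWidth;   // output texture width in pixels
uniform float uHeight;  // output texture height in pixels
varying vec4 vOut;
void main() {
    // Map the vertex index to the centre of "its" pixel in the output texture.
    float col = mod(aIndex, uWidth);
    float row = floor(aIndex / uWidth);
    vec2 pixel = (vec2(col, row) + 0.5) / vec2(uWidth, uHeight); // 0..1
    gl_Position = vec4(pixel * 2.0 - 1.0, 0.0, 1.0);             // clip space
    gl_PointSize = 1.0;
    vOut = vec4(position, 1.0); // write the (displaced) position out as the "colour"
}
The fragment shader stays as in the question (gl_FragColor = vOut), and the geometry is drawn with gl.POINTS into a render target whose pixel count equals the vertex count.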
You're not writing the vertices out correctly.
https://jsfiddle.net/ogawzpxL/
First off, you're clipping the geometry, so your vertices actually end up outside the view, and you see the middle of the quad without any vertices.
You can use the uv attribute to render the entire quad in the view.
gl_Position = vec4( uv * 2. - 1. , 0. ,1.);
Everything in the buffer represents some point on the quad. What seems to be tricky is that when you render, the pixel will sample right next to your vertex. In the fiddle I've applied an offset to the world-space position by how much it would be in pixel space, and it didn't really work.
The reason why it seems to work with points is that this is all probably wrong :) If you want to transform only the vertices, then you need to store them properly in the texture. You can use points for this, but ideally they wouldn't be spaced out so much. In your scenario, they would fill the first couple of rows of the texture (since it's much larger than it could be).
You might start running into problems as soon as you try to apply this to something other than PlaneGeometry. In which case this problem has to be broken down.
Having a 16-bit uint texture in my C++ code, I would like to use it for z-testing in an OpenGL ES 3.0 app. How can I achieve this?
To give some context, I am making an AR app where virtual objects can be occluded by real objects. The depth texture of the real environment is generated, but I can't figure out how to apply it.
In my app, I first use glTexImage2D to render the backdrop image from the camera feed, then I draw some virtual objects. I would like the objects to be transparent based on the depth texture. Ideally, the occlusion testing should not be binary but gradual, so that I can alpha-blend the objects with the background near the occlusion edges.
I can pass and read the depth texture in the fragment shader, but not sure how to use it for z-testing instead of rendering.
Let's assume you have a depth texture uniform sampler2D u_depthmap and the internal format of the depth texture is a floating-point format.
To read the texel from the texture that the current fragment lies on, you have to know the size of the viewport (uniform vec2 u_viewport_size). gl_FragCoord contains the window-relative coordinate (x, y, z, 1/w) values for the fragment. So the texture coordinate for the depth map is calculated by:
vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
The depth from the depth texture u_depthmap is given in range [0.0, 1.0], because of the internal floating point format. The depth of the fragment is contained in the gl_FragCoord.z, in range [0.0, 1.0], too.
That means that the depth of the map and the depth of the fragment can be calculated as follows:
uniform sampler2D u_depthmap;
uniform vec2 u_viewport_size;
void main()
{
    vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
    float map_depth = texture(u_depthmap, map_uv).x;
    float frag_depth = gl_FragCoord.z;
    .....
}
Note, map_depth and frag_depth are both in the range [0.0, 1.0]. If they were both generated with the same projection (especially the same near and far planes), then they are comparable. This means you have to ensure that the shader generates the same depth values as the ones in the depth map, for the same point in the world. If this is not the case, then you have to linearize the depth values and calculate the view-space Z coordinate.
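Building on that, here is a sketch of the gradual occlusion test the question asks for; the fade width and the v_color varying are assumptions, not part of the original answer.
#version 300 es
precision mediump float;
uniform sampler2D u_depthmap;
uniform vec2 u_viewport_size;
in vec4 v_color; // the virtual object's shaded colour, assumed to come from earlier shading
out vec4 fragColor;
void main()
{
    vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
    float map_depth = texture(u_depthmap, map_uv).x;
    float frag_depth = gl_FragCoord.z;

    // Soft-edge width in depth-buffer units; an assumption to tune for your scene.
    float fade = 0.002;

    // 1.0 when the virtual fragment is clearly in front of the real depth,
    // 0.0 when clearly behind, with a smooth ramp of width 2 * fade in between.
    float visibility = 1.0 - smoothstep(map_depth - fade, map_depth + fade, frag_depth);

    fragColor = vec4(v_color.rgb, v_color.a * visibility);
}
This assumes alpha blending is enabled when the virtual objects are drawn; fragments well behind the real depth then become fully transparent, and fragments near the occlusion edge fade out.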
I want to map a texture in the form of a lower right euclidean triangle to a hyperbolic triangle on the Poincare Disk, which looks like this:
Here's the texture (the top left triangle of the texture is transparent and unused). You might recognise this as part of Escher's Circle Limit I:
And this is what my polygon looks like (it's centred at the origin, which means that two edges are straight lines, however in general all three edges will be curves as in the first picture):
The centre of the polygon is the incentre of the euclidean triangle formed by its vertices, and I'm UV mapping the texture using its incentre, dividing it into the same number of faces as the polygon has and mapping each face onto the corresponding polygon face. However, the result looks like this:
If anybody thinks this is solvable using UV mapping I'd be happy to provide some example code, however I'm beginning to think this might not be possible and I'll have to write my own shader functions.
UV mapping is a method of mapping a texture onto an OpenGL polygon. The texture is always sampled in Euclidean space using xy coordinates in the range [0, 1].
To overlay your texture onto a triangle on a Poincare disc, keep hold of the Euclidean coordinates in your vertices, and use these to sample the texture.
The following code is valid for OpenGL ES 3.0.
Vertex shader:
#version 300 es
//these should go from 0.0 to 1.0
in vec2 euclideanCoords;
in vec2 hyperbolicCoords;
out vec2 uv;
void main() {
//set z = 0.0 and w = 1.0
gl_Position = vec4(hyperbolicCoords, 0.0, 1.0);
uv = euclideanCoords;
}
Fragment shader:
#version 300 es
uniform sampler2D escherImage;
in vec2 uv;
out vec4 colour;
void main() {
colour = texture(escherImage, uv);
}
I am rendering a simple torus in WebGL. Rotating the vertices works fine, but I have a problem with the normals. When rotated around a single axis, they keep the correct direction, but as the rotation around a second axis increases, the normals start rotating the wrong way, up until one of the rotations is 180°; then the normals rotate in the complete opposite of what they should.
I assume the problem lies with the quaternion used for rotation, but I have not been able to determine what is wrong.
Here is a (slightly modified, but it still shows the problem) jsfiddle of my project: https://jsfiddle.net/dt509x8h/1/
In the html-part of the fiddle there is a div containing all the data from the obj-file I am reading to generate the torus (although a lower resolution one).
vertex shader:
attribute vec4 aVertexPosition;
attribute vec3 aNormalDirection;
uniform mat4 uMVPMatrix;
uniform mat3 uNMatrix;
varying vec3 nrm;
void main(void) {
gl_Position = uMVPMatrix * aVertexPosition;
nrm = aNormalDirection * uNMatrix;
}
fragment shader:
varying vec3 nrm;
void main(void) {
gl_FragColor = vec4(nrm, 1.0);
}
Updating the matrices (run when there has been input):
mat4.perspective(pMatrix, Math.PI*0.25, width/height, clipNear, clipFar); //This is actually not run on input, it is just here to show the creation of the perspective matrix
mat4.fromRotationTranslation(mvMatrix, rotation, position);
mat3.normalFromMat4(nMatrix, mvMatrix);
mat4.multiply(mvpMatrix, pMatrix, mvMatrix);
var uMVPMatrix = gl.getUniformLocation(shaderProgram, "uMVPMatrix");
var uNMatrix = gl.getUniformLocation(shaderProgram, "uNMatrix");
gl.uniformMatrix4fv(uMVPMatrix, false, mvpMatrix);
gl.uniformMatrix3fv(uNMatrix, false, nMatrix);
Creating the rotation quaternion (called when mouse has moved):
var d = vec3.fromValues(lastmousex-mousex, mousey-lastmousey, 0.0);
var l = vec3.length(d);
vec3.normalize(d,d);
var axis = vec3.cross(vec3.create(), d, [0,0,1]);
vec3.normalize(axis, axis);
var q = quat.setAxisAngle(quat.create(), axis, l*scale);
quat.multiply(rotation, q, rotation);
Rotating the torus only around the Y-axis, the normals point in the right directions:
Rotating the torus around two axes. The normals are pointing all over the place:
I am using glMatrix v2.3.2 for all matrix and quaternion operations.
Update:
It seems that rotating only around the Z axis (by setting the input axis for quat.setAxisAngle explicitly to [0,0,1], or by using quat.rotateZ) also causes the normals to rotate in the opposite direction.
Zeroing the z-component of the axis does not help.
Update2:
Rotating by quat.rotateX(q, q, l*scale); quat.rotateY(q, q, l*scale); quat.multiply(rotation, q, rotation); seems correct, but as soon as rotation around Z is introduced the z normals start to move around.
Using the difference in x or y mouse-values instead of l causes all normals to move, and so does using largely different scale-values for x and y.
Update3: changing the order of multiplication in the shader to uNMatrix * aNormalDirection causes the normals to always rotate the wrong way.
In my case, the problem was with how I loaded the data from an .obj-file. I had inverted the z-position of all vertices, but the normals were generated from the non-inverted vertices.
Using non-inverted z-positions and flipping the normal-matrix multiplication fixed the issues.
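In shader terms, the second half of that fix is just swapping the operands in the vertex shader above, so the mat3 multiplies the normal as a column vector:
nrm = uNMatrix * aNormalDirection; // matrix on the left: normal treated as a column vector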
I have read some tutorials about GLSL.
In certain tutorials the position attribute is a vec4, in others a vec3.
I know that the matrix operations need a vec4, but is it worth sending an additional element?
Isn't it better to send a vec3 and later cast it in the shader with vec4(position, 1.0)?
Less data in memory, so it should be faster? Or should we pack an extra element to avoid the cast?
Any tips on which is better?
layout(location = 0) in vec4 position;
MVP*position;
or
layout(location = 0) in vec3 position;
MVP*vec4(position,1.0);
For vertex attributes, this will not matter. The 4th component is automatically expanded to 1.0 when it is absent.
That is to say, if you pass a 3-component vertex attribute pointer to a 4-component vector, GL will fill in W with 1.0 for you. I always go with this route; it avoids having to explicitly write vec4 (...) when doing matrix multiplication on the position, and it also avoids wasting memory storing the 4th component.
This works for 2D coordinates too, by the way. A 2D coordinate passed to a vec4 attribute becomes vec4 (x, y, 0.0, 1.0). The general rule is this: all missing components are replaced with 0.0 except for W, which is replaced with 1.0.
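For example, with the shader declaring a vec4 input, the host side can still supply three floats per vertex. WebGL-style calls are shown below; positionBuffer is an illustrative name, and location 0 matches the question's layout(location = 0) declaration.
// GLSL side: layout(location = 0) in vec4 position;  -- the buffer only stores x, y, z
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);      // tightly packed vec3 data
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0); // size = 3; W is filled in as 1.0
gl.enableVertexAttribArray(0);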
However, to people who are unaware of the behavior of GLSL in this circumstance, it can be confusing. I suppose this is why most tutorials never touch on this topic.