Calculate TBN matrix in vertex shader vs fragment shader? - three.js

I think we can calculate the TBN matrix in either the vertex shader or the fragment shader.
// calculated in the vertex shader
varying mat3 vTbnMatrix;
void main() {
    vec3 n = normalMatrix * aNormal;
    vec3 t = normalMatrix * aTangent.xyz;
    vec3 b = cross(n, t) * aTangent.w;
    vTbnMatrix = mat3(t, b, n);
    // ... set gl_Position as usual
}
// calculated in the fragment shader
varying vec3 vNormal;    // transformed by normalMatrix
varying vec3 vTangent;   // transformed by normalMatrix
varying vec3 vBitangent;
void main() {
    mat3 tbnMatrix = mat3(vTangent, vBitangent, vNormal);
}
What's the difference? I would expect computing the TBN matrix in the vertex shader to be faster, since the vertex shader typically runs far fewer times than the fragment shader.
But when I dove into the source code of the three.js and PlayCanvas engines, I found that they both calculate the TBN matrix in the fragment shader.
What's the advantage of calculating in the fragment shader?

The TBN matrix should, in theory, be orthonormal.
In three.js, the normal, tangent, and bi-tangent are passed as varyings, interpolated across the face of the primitive, renormalized, and the TBN matrix is constructed in the fragment shader. This way, the matrix columns are (1) of unit length, and (2) presumably close to orthogonal.
You could compute the bi-tangent in the fragment shader, instead, to force it to be orthogonal to the interpolated normal and tangent, but doing so may not make that much difference, as the interpolated normal and tangent are not guaranteed to be orthogonal anyway.
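Concretely, the fragment-shader construction looks something like this (a minimal sketch with illustrative names, not three.js's actual shader chunks):

// fragment shader: rebuild the TBN from interpolated, renormalized varyings
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;

void main() {
    vec3 n = normalize(vNormal);
    vec3 t = normalize(vTangent);
    vec3 b = normalize(vBitangent);
    // columns are unit length; near-orthogonal if the varyings were
    mat3 tbn = mat3(t, b, n);
    // ... use tbn to bring the sampled tangent-space normal into view space
}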
three.js r.102

Related

When does interpolation happen between the vertex and fragment shaders in this WebGL program?

Background
I'm looking at this example code from the WebGL2 library PicoGL.js.
It describes a single triangle (three vertices: (-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)), each of which is assigned a color (red, green, blue) by the vertex shader:
#version 300 es
layout(location=0) in vec4 position;
layout(location=1) in vec3 color;
out vec3 vColor;
void main() {
    vColor = color;
    gl_Position = position;
}
The vColor output is passed to the fragment shader:
#version 300 es
precision highp float;
in vec3 vColor;
out vec4 fragColor;
void main() {
    fragColor = vec4(vColor, 1.0);
}
and together they render a triangle whose interior is a smooth color gradient between the three corner colors.
Question(s)
My understanding is that the vertex shader is called once per vertex, whereas the fragment shader is called once per pixel.
However, the fragment shader references the vColor variable, which is assigned only once per vertex, yet there are many more pixels than vertices!
The resulting image clearly shows a color gradient - why?
Does WebGL automatically interpolate values of vColor for pixels in between vertices? If so, how is the interpolation done?
Yes, WebGL automatically interpolates between the values supplied to the 3 vertices.
Copied from this site
A linear interpolation from one value to another would be this formula:

result = (1 - t) * a + t * b

Where t is a value from 0 to 1 representing some position between a and b: 0 at a and 1 at b.
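(That linear form is exactly what GLSL's built-in mix() computes:)

result = mix(a, b, t);   // == (1.0 - t) * a + t * b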
For varyings, though, WebGL uses this formula:

result = ((1 - t) * a / aW + t * b / bW) / ((1 - t) / aW + t / bW)

Where aW is the W that was set on gl_Position.w when the varying was set to a, and bW is the W that was set on gl_Position.w when the varying was set to b.
The site linked above shows how that formula generates perspective-correct texture-mapping coordinates when interpolating varyings. It also shows an animation of the varyings changing.
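As a sanity check, the formula is easy to transcribe directly. This is a hypothetical helper for illustration only; WebGL applies the correction automatically and never exposes it:

// perspective-correct interpolation of a varying between two vertices,
// where aW and bW are the gl_Position.w values at those vertices
vec3 perspectiveInterp(vec3 a, float aW, vec3 b, float bW, float t) {
    vec3  numerator   = (1.0 - t) * a / aW + t * b / bW;
    float denominator = (1.0 - t) / aW + t / bW;
    return numerator / denominator;
}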
The Khronos OpenGL wiki - Fragment Shader has the answer. Namely:
Each fragment has a Window Space position, a few other values, and it contains all of the interpolated per-vertex output values from the last Vertex Processing stage.
(Emphasis mine)

Draw GL_TRIANGLE_STRIP based on centre point and size

I am rendering TRIANGLE_STRIPs in OpenGL ES 2.0. I was wondering: would it be possible to modify the vertex shader so that, instead of feeding it 4 texture vertices, you give it only one vertex representing the centre of the TRIANGLE_STRIP, plus parameters for the texture width and height?
Assuming my texture vertex is:
GLfloat textureVertices[] = {
    x, y
};
Can the vertex shader be modified to work with a texSize uniform, which would represent the width/height of the TRIANGLE_STRIP?
attribute highp vec4 position;
attribute lowp vec4 inputPointCoordinate;

uniform mat4 MVP;
uniform lowp vec4 vertexColor;
uniform float texSize;

varying lowp vec2 textureCoordinate;
varying lowp vec4 color;

void main()
{
    gl_Position = MVP * position;
    textureCoordinate = inputPointCoordinate.xy;
    color = vertexColor;
}
No, at least not in the vertex shader alone. The vertex shader still has to be invoked for each of the distinct corner vertices, each with different attribute values, so that the coordinate you receive in the fragment shader is interpolated.
What you actually can do is pass a center into the vertex shader, which is then multiplied by the same matrix as the vertex coordinates. Besides that, you need some kind of radius (or a texture-dimensions vector), which will probably need to be scaled if the matrix contains scale as well. You can then pass both of these values to the fragment shader (as varyings) and compute the texture coordinates there from those two parameters and the fragment position.
A similar procedure is used to draw a very nice circle or sphere using only 2 triangles (a square), but I do not suggest you do this, as you will only lose performance and it is quite a lot of work...
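For illustration, the fragment-shader half of that idea could look something like this (a sketch under assumed names: vCenter and vHalfSize would be varyings holding the projected center in window coordinates and the quad's half-size in pixels):

varying mediump vec2 vCenter;     // quad center, in window coordinates
varying mediump float vHalfSize;  // half the on-screen quad size, in pixels
uniform sampler2D u_texture;      // hypothetical sampler name

void main()
{
    // remap this fragment's offset from the center into the [0, 1] range
    mediump vec2 uv = (gl_FragCoord.xy - vCenter) / (2.0 * vHalfSize) + 0.5;
    gl_FragColor = texture2D(u_texture, uv);
}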

glsl vector*matrix different to hlsl

I have two (identical) shaders, one in hlsl and one in glsl. In the pixel shader, I am multiplying a vector by a matrix for normal transformations.
The code is essentially:
HLSL
float3 v = ...;
float3x3 m = ...;
float3 n = mul(v, m);
GLSL
vec3 v = ...;
mat3 m = ...;
vec3 n = v * m;
This should do a row vector multiplication, yet in glsl it doesn't. If I explicitly type out the algorithm, it works for both.
Both the GLSL and HLSL specs, from what I can tell, say they should do a row-vector multiply if the vector is on the left-hand side, which it is.
The other confusing thing is that I multiply a vector by a matrix in the vertex shader with the vector on the left, yet that works fine in both glsl and hlsl. This leads me to guess that it is only an issue in the fragment/pixel shader.
I pass the matrix from the vertex shader to the fragment shader using:
out vec3 out_vs_TangentToWorldX;
out vec3 out_vs_TangentToWorldY;
out vec3 out_vs_TangentToWorldZ;
out_vs_TangentToWorldX = tangent * world3D;
out_vs_TangentToWorldY = binormal * world3D;
out_vs_TangentToWorldZ = normal * world3D;
and in the fragment shader I reconstruct it with:
in vec3 out_vs_TangentToWorldX;
in vec3 out_vs_TangentToWorldY;
in vec3 out_vs_TangentToWorldZ;
mat3 tangentToWorld;
tangentToWorld[0] = out_vs_TangentToWorldX;
tangentToWorld[1] = out_vs_TangentToWorldY;
tangentToWorld[2] = out_vs_TangentToWorldZ;
HLSL matrices are row-major; GLSL matrices are column-major. So if you pass your matrix into the GLSL shader using the same memory layout as you pass it into HLSL, your HLSL rows become GLSL columns, and you have to multiply on the other side to get the same effect. The same thing happens in your reconstruction above: assigning the row vectors to tangentToWorld[0], [1], and [2] makes them columns, so the matrix you build is the transpose of the one you intended.
Just use
vec3 n = m * v;
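The two conventions are related by a transpose, which you can verify directly (a sketch; transpose() requires desktop GLSL 1.20+):

// the row-vector and column-vector products are transposes of each other
vec3 n1 = v * m;             // v treated as a row vector
vec3 n2 = transpose(m) * v;  // identical result, v as a column vector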

WebGL - which API to use?

I want to draw multiple polygon shapes, where each shape has its own set of vertices.
I want to be able to position these shapes independently of each other.
Which API can I use to set the a_Position attribute for the vertex shader?
A) gl.vertexAttrib3f
B) gl.vertexAttribPointer + gl.enableVertexAttribArray
Thanks.
Your question makes it sound like you're really new to WebGL? Maybe you should read some tutorials? But in answer to your question:
gl.vertexAttrib3f only lets you supply a single constant value to a GLSL attribute so you'll need to use gl.vertexAttribPointer and gl.enableVertexAttribArray. You'll also need to set up buffers with your vertex data.
gl.vertexAttrib3f's only point is arguably to let you pass in a constant in the case that you have a shader that uses multiple attributes but you don't have data for all of them. For example, let's say you have a shader that uses a texture (and so needs texture coordinates) and also has vertex colors. Something like this:
vertex shader
attribute vec4 a_position;
attribute vec2 a_texcoord;
attribute vec4 a_color;

varying vec2 v_texcoord;
varying vec4 v_color;

uniform mat4 u_matrix;

void main() {
    gl_Position = u_matrix * a_position;
    // pass texcoord and vertex colors to the fragment shader
    v_texcoord = a_texcoord;
    v_color = a_color;
}
fragment shader
precision mediump float;

varying vec2 v_texcoord;
varying vec4 v_color;

uniform sampler2D u_texture;

void main() {
    vec4 textureColor = texture2D(u_texture, v_texcoord);
    // multiply the texture color by the vertex color
    gl_FragColor = textureColor * v_color;
}
This shader requires vertex colors. If your geometry doesn't have vertex colors then you have 2 options: (1) use a different shader, or (2) turn off the attribute for vertex colors and set it to a constant color, probably white.
gl.disableVertexAttribArray(aColorLocation);
gl.vertexAttrib4f(aColorLocation, 1, 1, 1, 1);
Now you can use the same shader even though you have no vertex color data.
Similarly, if you have no texture coordinates, you could bind a white 1-pixel texture and set the texture coordinates to some constant.
gl.disableVertexAttribArray(aTexcoordLocation);
gl.vertexAttrib2f(aTexcoordLocation, 0, 0);
gl.bindTexture(gl.TEXTURE_2D, some1x1PixelWhiteTexture);
In that case you could also decide what color to draw with by setting the vertex color attribute.
gl.vertexAttrib4f(aColorLocation, 1, 0, 1, 1); // draw in magenta

GLSL: gl_FragCoord issues

I am experimenting with GLSL for OpenGL ES 2.0. I have a quad and a texture I am rendering. I can successfully do it this way:
//VERTEX SHADER
attribute highp vec4 vertex;
attribute mediump vec2 coord0;

uniform mediump mat4 worldViewProjection;

varying mediump vec2 tc0;

void main()
{
    // Transforming The Vertex
    gl_Position = worldViewProjection * vertex;
    // Passing The Texture Coordinate Of Texture Unit 0 To The Fragment Shader
    tc0 = vec2(coord0);
}
//FRAGMENT SHADER
varying mediump vec2 tc0;

uniform sampler2D my_color_texture;

void main()
{
    gl_FragColor = texture2D(my_color_texture, tc0);
}
So far so good. However, I'd like to do some pixel-based filtering, e.g. a median filter. So I'd like to work in pixel coordinates rather than in normalized ones (tc0), and then convert the result back to normalized coords. Therefore I'd like to use gl_FragCoord instead of a uv attribute (tc0), but I don't know how to get back to normalized coords, because I don't know the range of gl_FragCoord. Any idea how I could get it? I've got this far, using a fixed value for the 'normalization', though it's not working perfectly: it causes stretching and tiling (but at least it shows something):
//FRAGMENT SHADER
varying mediump vec2 tc0;

uniform sampler2D my_color_texture;

void main()
{
    gl_FragColor = texture2D(my_color_texture, gl_FragCoord.xy / vec2(256, 256));
}
So, the simple question is: what should I use in place of vec2(256, 256) so that I get the same result as if I were using the uv coords?
Thanks!
gl_FragCoord is in screen coordinates, so to get normalized coords you need to divide by the viewport width and height. You can use a uniform variable to pass that information to the shader, since there is no built-in variable for it.
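A sketch of that (u_resolution is a hypothetical uniform you would fill from JavaScript with the viewport size; note this reproduces tc0 only when the quad exactly fills the viewport):

uniform sampler2D my_color_texture;
uniform mediump vec2 u_resolution; // viewport size in pixels

void main()
{
    // gl_FragCoord is in window pixels; dividing by the viewport
    // size maps it into the [0, 1] range
    mediump vec2 uv = gl_FragCoord.xy / u_resolution;
    gl_FragColor = texture2D(my_color_texture, uv);
}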
You can also sample the texture with un-normalized coordinates if you are:
sampling with texture() from a GL_TEXTURE_RECTANGLE, or
sampling with texelFetch() from a regular texture or a texture buffer.
Note that neither option is available in OpenGL ES 2.0.
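For example, with texelFetch (desktop GLSL 1.30+ or GLSL ES 3.00, so not the ES 2.0 targeted above), a texel can be read by integer pixel coordinates with no normalization at all:

// lookup by integer texel coordinates; the last argument is the mip level
vec4 texel = texelFetch(my_color_texture, ivec2(gl_FragCoord.xy), 0);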
