GLSL vector*matrix gives different results to HLSL mul(vector, matrix)

I have two (identical) shaders, one in hlsl and one in glsl. In the pixel shader, I am multiplying a vector by a matrix for normal transformations.
The code is essentially:
HLSL
float3 v = ...;
float3x3 m = ...;
float3 n = mul(v, m);
GLSL
vec3 v = ...;
mat3 m = ...;
vec3 n = v * m;
This should do a row vector multiplication, yet in glsl it doesn't. If I explicitly type out the algorithm, it works for both.
From what I can tell, both the GLSL and HLSL specs say this should be a row-vector multiply when the vector is on the left-hand side, which it is.
The other confusing thing is that I multiply a vector by a matrix in the vertex shader with the vector on the left, yet that works fine in both glsl and hlsl. This leads me to guess that it is only an issue in the fragment/pixel shader.
I pass the matrix from the vertex shader to the fragment shader using:
out vec3 out_vs_TangentToWorldX;
out vec3 out_vs_TangentToWorldY;
out vec3 out_vs_TangentToWorldZ;
out_vs_TangentToWorldX = tangent * world3D;
out_vs_TangentToWorldY = binormal * world3D;
out_vs_TangentToWorldZ = normal * world3D;
and in the fragment shader I reconstruct it with:
in vec3 out_vs_TangentToWorldX;
in vec3 out_vs_TangentToWorldY;
in vec3 out_vs_TangentToWorldZ;
mat3 tangentToWorld;
tangentToWorld[0] = out_vs_TangentToWorldX;
tangentToWorld[1] = out_vs_TangentToWorldY;
tangentToWorld[2] = out_vs_TangentToWorldZ;

HLSL matrices are row-major (m[0] is the first row), while GLSL matrices are column-major (m[0] is the first column). So if you pass your matrix into the GLSL shader with the same memory layout you use for HLSL, your HLSL rows become GLSL columns. The same thing happens in your fragment shader: tangentToWorld[0] addresses the first column of the mat3, not the first row, so the matrix you reconstruct there is the transpose of the one you built in HLSL. You therefore need the column-vector multiplication in GLSL to get the same effect as in HLSL.
Just use
vec3 n = m * v;
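Alternatively, if you want to keep the HLSL-style row-vector form v * m, you can transpose the matrix when you rebuild it in the fragment shader. A minimal sketch, assuming a GLSL version that provides transpose() (desktop GLSL 1.20+ or ES 3.0):
mat3 tangentToWorld = transpose(mat3(out_vs_TangentToWorldX,   // the varyings hold the HLSL rows,
                                     out_vs_TangentToWorldY,   // but mat3(a, b, c) places them in
                                     out_vs_TangentToWorldZ)); // the columns, so transpose them back
vec3 n = v * tangentToWorld; // row-vector multiply, now matches HLSL mul(v, m)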

Related

Calculate TBN matrix in vertex shader vs fragment shader?

I think we can calculate the TBN matrix in either the vertex shader or the fragment shader.
// calculate in vs
varying mat3 vTbnMatrix;
void main() {
vec3 n = normalMatrix * aNormal;
vec3 t = normalMatrix * vec3(aTangent.xyz);
vec3 b = cross(n, t) * aTangent.w;
vTbnMatrix = mat3(t, b, n);
}
// calculate in fs
varying vec3 vNormal; // transformed by normalMatrix
varying vec3 vTangent; // transformed by normalMatrix
varying vec3 vBitangent;
void main() {
mat3 tbnMatrix = mat3(vTangent, vBitangent, vNormal);
}
What's the difference? I think we should calculate the TBN matrix in the vertex shader, since doing the work per-vertex rather than per-fragment should be faster.
But when I dive into the source code of the THREE.js and PlayCanvas engines, they both calculate the TBN matrix in the fragment shader.
What's the advantage of calculating in the fragment shader?
The TBN matrix should, in theory, be orthonormal.
In three.js, the normal, tangent, and bi-tangent are passed as varyings, interpolated across the face of the primitive, renormalized, and the TBN matrix is constructed in the fragment shader. This way, the matrix columns are (1) of unit length, and (2) presumably close to orthogonal.
You could compute the bi-tangent in the fragment shader, instead, to force it to be orthogonal to the interpolated normal and tangent, but doing so may not make that much difference, as the interpolated normal and tangent are not guaranteed to be orthogonal anyway.
three.js r.102
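For illustration, a minimal fragment-shader sketch of that approach, reusing the varying names from the question; the normal-map sampler and UV varying are assumptions added for the example:
precision mediump float;
varying vec3 vNormal;        // interpolated per fragment, no longer unit length
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec2 vUv;            // assumed texture coordinate
uniform sampler2D normalMap; // assumed tangent-space normal map
void main() {
    // renormalize after interpolation, then build the TBN from the unit vectors
    vec3 n = normalize(vNormal);
    vec3 t = normalize(vTangent);
    vec3 b = normalize(vBitangent);
    mat3 tbnMatrix = mat3(t, b, n); // columns: tangent, bitangent, normal
    vec3 mapN = texture2D(normalMap, vUv).xyz * 2.0 - 1.0; // unpack [0,1] -> [-1,1]
    vec3 perturbedNormal = normalize(tbnMatrix * mapN);
    gl_FragColor = vec4(perturbedNormal * 0.5 + 0.5, 1.0); // visualize the result
}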

OpenGL ES diffuse shader not working

I'm implementing simple ray tracing for spheres in a fragment shader and I'm currently working on the function that computes color for a diffusely shaded sphere. Here is the code for the function:
vec3 shadeSphere(vec3 point, vec4 sphere, vec3 material) {
vec3 color = vec3(1.,2.,3.);
vec3 N = (point - sphere.xyz) / sphere.w;
vec3 diffuse = max(dot(Ldir, N), 0.0);
vec3 ambient = material/5;
color = ambient + Lrgb * diffuse * max(0.0, N * Ldir);
return color;
}
I'm getting errors on the two lines where I'm using the max function. I took the line that assigns max(dot(Ldir, N), 0.0) from the WebGL cheat sheet, which uses max(dot(ec_light_dir, ec_normal), 0.0);
For some reason, my implementation is not working as I'm getting the error:
ERROR: 0:38: 'max' : no matching overloaded function found
What could be the problem with either of these max calls?
There are two max calls in your shader; it's the second one that's the problem.
max(0.0, N * Ldir) makes no sense: N is a vec3, and there is no overload of max that takes max(float, vec3). There is an overload max(vec3, float), so swap the arguments to
`max(N * Ldir, 0.0)`
and it might work. As written, your shader is NOT a valid ES 2.0 shader. Maybe it's being compiled by a driver that is not spec compliant (i.e. the driver has a bug); WebGL tries to follow the spec 100%.
The dot product is a scalar value, not a vec3; you need to either store it in a float
float diffuse = max(dot(Ldir, N),0.0);
or initialize a vec3 with it
vec3 diffuse = vec3(max(dot(Ldir, N),0.0));
Same goes for the ambient term. Usually both diffuse and ambient terms are just scalars.
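Putting both answers together, a corrected version of the function might look like this (a sketch only; Ldir and Lrgb are assumed to be declared elsewhere in the shader, as in the original code):
vec3 shadeSphere(vec3 point, vec4 sphere, vec3 material) {
    vec3 N = (point - sphere.xyz) / sphere.w; // outward unit normal of the sphere
    float diffuse = max(dot(Ldir, N), 0.0);   // scalar Lambert term, clamped at zero
    vec3 ambient = material / 5.0;            // float literal: ESSL 1.00 has no implicit int-to-float conversion
    vec3 color = ambient + Lrgb * diffuse;    // no second max term needed
    return color;
}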

Weird precision behavior in OpenGL ES 2.0

To implement this idea, I wrote the following two versions of my vertex and fragment shaders:
// Vertex:
precision highp int;
precision highp float;
uniform vec4 r_info;
attribute vec2 s_coords;
attribute vec2 r_coords;
varying vec2 t_coords;
void main (void) {
int w = int(r_info.w);
int x = int(r_coords.x) + int(r_coords.y) * int(r_info.y);
int y = x / w;
x = x - y * w;
y = y + int(r_info.x);
t_coords = vec2(x, y) * r_info.z;
gl_Position = vec4(s_coords, 0.0, 1.0);
}
// Fragment:
precision highp float;
uniform sampler2D sampler;
uniform vec4 color;
varying vec2 t_coords;
void main (void) {
gl_FragColor = vec4(color.rgb, color.a * texture2D(sampler, t_coords).a);
}
vs.
// Vertex:
precision highp float;
attribute vec2 s_coords;
attribute vec2 r_coords;
varying vec2 t_coords;
void main (void) {
t_coords = r_coords;
gl_Position = vec4(s_coords, 0.0, 1.0);
}
// Fragment:
precision highp float;
precision highp int;
uniform vec4 r_info;
uniform sampler2D sampler;
uniform vec4 color;
varying vec2 t_coords;
void main (void) {
int w = int(r_info.w);
int x = int(t_coords.x) + int(t_coords.y) * int(r_info.y);
int y = x / w;
x = x - y * w;
y = y + int(r_info.x);
gl_FragColor = vec4(color.rgb, color.a * texture2D(sampler, vec2(x, y) * r_info.z).a);
}
The only difference between them (I hope) is the location where the texture coordinates are transformed. In the first version, the math happens in the vertex shader, in the second one it happens in the fragment shader.
Now, the official OpenGL ES SL 1.0 specification states that "[t]he vertex language must provide an integer precision of at least 16 bits, plus a sign bit" and "[t]he fragment language must provide an integer precision of at least 10 bits, plus a sign bit" (chapter 4.5.1). If I understand correctly, this means that with just a minimal implementation, the precision I can get in the vertex shader should be better than in the fragment shader, correct? For some reason, though, the second version of the code works correctly while the first version leads to a bunch of rounding errors. Am I missing something?
Turns out I fundamentally misunderstood how things work... Maybe I still do, but let me answer my question based on my current understanding:
I thought that for every pixel that is rendered, first the Vertex Shader and then the Fragment Shader are executed. But, if I now understand correctly, the Vertex Shader is only called once for each vertex of the triangle primitives (which kind of makes sense given its name, too...).
So, the first version of my code above only calculates the correct texture coordinate at the actual corner points (vertices) of the triangles that I'm drawing. For all other pixels in the triangle, the texture coordinate is simply a linear interpolation between those corner coordinates. Of course, since my formula isn't linear (including rounding and modulo-operations), this leads to the wrong texture-coordinates for each individual pixel.
The second version, though, applies the non-linear transformation to the texture coordinates at each pixel location, giving the correct texture coordinates everywhere.
So, the generalized learning (and the reason I didn't just delete the question):
All non-linear texture-coordinate transformations must be done in the fragment shader.

get the view coordinate in a point sprite

If you pass a varying view-space position from the vertex shader to a fragment shader then the fragment shader can know the fragment's position relative to the camera (0,0,0 in view-space). This is useful for per-pixel lighting etc. E.g.:
precision mediump float;
attribute vec3 vertex;
uniform mat4 pMatrix, mvMatrix;
varying vec4 pos;
void main() {
pos = (mvMatrix * vec4(vertex,1.0));
gl_Position = pMatrix * pos;
}
However, if you are rendering GL_POINTS and setting gl_PointSize in the vertex shader, how can the fragment shader determine each fragment's position (since the pos passed in the example above will be the sprite's centre point)?
Simple answer: stop using point sprites. Really, they're terrible.
Less simple answer: stop passing the view-space position to the fragment shader entirely. Instead, use gl_FragCoord to compute the view-space position, based on viewport data and so forth. There's even sample GLSL code for it:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
You'll need to give your fragment shader the viewport and invPersMatrix values. gl_DepthRange is built-in. eyePos is what you're looking for.
There's probably a faster way to do it that takes advantage of the fact that you're drawing a screen-aligned quad. It would involve the point size and using gl_PointCoord.
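A rough sketch of that gl_PointCoord idea; everything here is an assumption for illustration: pos is the view-space centre passed from the vertex shader above, and pointSizeView is an added uniform giving the point's diameter in view-space units:
precision mediump float;
varying vec4 pos;            // view-space centre of the point sprite
uniform float pointSizeView; // assumed uniform: point diameter in view-space units
void main() {
    // gl_PointCoord runs from 0..1 across the sprite; recentre and flip y
    vec2 offset = (gl_PointCoord - 0.5) * vec2(1.0, -1.0) * pointSizeView;
    // the sprite is screen-aligned, so offsetting the centre in view-space xy
    // approximates this fragment's view-space position
    vec3 viewPos = pos.xyz + vec3(offset, 0.0);
    gl_FragColor = vec4(viewPos, 1.0); // placeholder output
}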

How can a fragment shader use the color values of the previously rendered frame?

I am learning to use shaders in OpenGL ES.
As an example: Here's my playground fragment shader which takes the current video frame and makes it grayscale:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
highp float avrg = (theColor[0] + theColor[1] + theColor[2]) / 3.0;
theColor[0] = avrg; // r
theColor[1] = avrg; // g
theColor[2] = avrg; // b
gl_FragColor = theColor;
}
theColor represents the current pixel. It would be cool to also get access to the previous pixel at this same coordinate.
Out of curiosity, I would like to add or multiply the color of the current pixel with the color of the pixel at the same coordinate in the previously rendered frame.
How could I keep the previous pixels around and pass them in to my fragment shader in order to do something with them?
Note: It's OpenGL ES 2.0 on the iPhone.
You need to render the previous frame to a texture, using a Framebuffer Object (FBO), then you can read this texture in your fragment shader.
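On the shader side, a minimal sketch (the FBO setup itself happens in application code): assuming the previous frame was rendered into a texture bound to a second sampler, here called previousFrame, the fragment shader can combine the two:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D previousFrame; // assumed: the texture the last frame was rendered into
void main() {
    highp vec4 current = texture2D(videoFrame, textureCoordinate);
    highp vec4 previous = texture2D(previousFrame, textureCoordinate);
    gl_FragColor = mix(previous, current, 0.5); // e.g. a simple feedback/trail effect
}
Note that you need two textures that you swap ("ping-pong") each frame, so you never read from the texture you are currently rendering into.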
The dot intrinsic function that Damon refers to is a code implementation of the mathematical dot product. I'm not supremely familiar with OpenGL, but mathematically a dot product goes like this:
Given a vector a and a vector b, the 'dot' product a 'dot' b produces a scalar result c:
c = a.x * b.x + a.y * b.y + a.z * b.z
Most modern graphics hardware (and most CPUs, for that matter) can perform this kind of operation in one pass. In your particular case, you could compute your average easily with a dot product like so:
highp vec4 weights = vec4(1.0/3.0, 1.0/3.0, 1.0/3.0, 0.0); // 4th component zero
I always get the 4th component in homogeneous vectors and matrices mixed up for some reason.
highp float avg = dot(theColor, weights);
This will multiply each component of theColor by 1/3 (and the 4th component by 0), and then add them together.
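Applied to the grayscale shader from the question, main might then become (a sketch using the same names as above):
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
    highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
    highp vec4 weights = vec4(1.0/3.0, 1.0/3.0, 1.0/3.0, 0.0);
    highp float avg = dot(theColor, weights);
    gl_FragColor = vec4(avg, avg, avg, theColor.a); // keep the original alpha
}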
