Weird precision behavior in OpenGL ES 2.0 - opengl-es

To implement this idea, I wrote the following two versions of my vertex and fragment shaders:
// Vertex:
precision highp int;
precision highp float;
uniform vec4 r_info;
attribute vec2 s_coords;
attribute vec2 r_coords;
varying vec2 t_coords;
void main (void) {
int w = int(r_info.w);
int x = int(r_coords.x) + int(r_coords.y) * int(r_info.y);
int y = x / w;
x = x - y * w;
y = y + int(r_info.x);
t_coords = vec2(x, y) * r_info.z;
gl_Position = vec4(s_coords, 0.0, 1.0);
}
// Fragment:
precision highp float;
uniform sampler2D sampler;
uniform vec4 color;
varying vec2 t_coords;
void main (void) {
gl_FragColor = vec4(color.rgb, color.a * texture2D(sampler, t_coords).a);
}
vs.
// Vertex:
precision highp float;
attribute vec2 s_coords;
attribute vec2 r_coords;
varying vec2 t_coords;
void main (void) {
t_coords = r_coords;
gl_Position = vec4(s_coords, 0.0, 1.0);
}
// Fragment:
precision highp float;
precision highp int;
uniform vec4 r_info;
uniform sampler2D sampler;
uniform vec4 color;
varying vec2 t_coords;
void main (void) {
int w = int(r_info.w);
int x = int(t_coords.x) + int(t_coords.y) * int(r_info.y);
int y = x / w;
x = x - y * w;
y = y + int(r_info.x);
gl_FragColor = vec4(color.rgb, color.a * texture2D(sampler, vec2(x, y) * r_info.z).a);
}
The only difference between them (I hope) is the location where the texture coordinates are transformed. In the first version, the math happens in the vertex shader, in the second one it happens in the fragment shader.
Now, the official OpenGL ES SL 1.0 Specification states that "[t]he vertex language must provide an integer precision of at least 16 bits, plus a sign bit" and "[t]he fragment language must provide an integer precision of at least 10 bits, plus a sign bit" (section 4.5.1). If I understand correctly, this means that even on a minimal implementation, the integer precision available in the vertex shader should be better than that in the fragment shader, correct? For some reason, though, the second version of the code works correctly while the first version leads to a bunch of rounding errors. Am I missing something?

Turns out I fundamentally misunderstood how things work... Maybe I still do, but let me answer my question based on my current understanding:
I thought that for every pixel that is rendered, first the Vertex Shader and then the Fragment Shader are executed. But, if I now understand correctly, the Vertex Shader is only called once for each vertex of the triangle primitives (which kind of makes sense given its name, too...).
So, the first version of my code above only calculates the correct texture coordinate at the actual corner points (vertices) of the triangles that I'm drawing. For all other pixels in the triangle, the texture coordinate is simply a linear interpolation between those corner coordinates. Of course, since my formula isn't linear (it includes rounding and modulo operations), this leads to the wrong texture coordinates for each individual pixel.
The second version, though, applies the non-linear transformation to the texture coordinates at each pixel location, giving the correct texture coordinates everywhere.
So, the generalized learning (and the reason I didn't just delete the question):
All non-linear texture-coordinate transformation must be done in the fragment shader.
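For illustration, here is a minimal sketch of how the same work could be split (untested, and assuming r_coords carries whole-number texel indices, as in the shaders above): the flat index is an affine function of r_coords, so it can safely be computed per vertex and interpolated, while only the non-linear floor/mod step is deferred to the fragment shader.
// Vertex (sketch):
precision highp float;
uniform vec4 r_info;
attribute vec2 s_coords;
attribute vec2 r_coords;
varying float v_index;
void main (void) {
    // Affine in r_coords, so linear interpolation across the triangle is exact.
    v_index = r_coords.x + r_coords.y * r_info.y;
    gl_Position = vec4(s_coords, 0.0, 1.0);
}
// Fragment (sketch):
precision highp float;
uniform vec4 r_info;
uniform sampler2D sampler;
uniform vec4 color;
varying float v_index;
void main (void) {
    // The rounding/modulo step is non-linear and must run per fragment.
    float w = r_info.w;
    float i = floor(v_index + 0.5);   // snap away interpolation noise
    float x = mod(i, w);
    float y = floor(i / w) + r_info.x;
    gl_FragColor = vec4(color.rgb, color.a * texture2D(sampler, vec2(x, y) * r_info.z).a);
}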

Related

In which coordinate space should the vertex be passed to fragment shader?

I am clipping in the fragment shader (setting the transparency to 0/1) based on the cut-off vertex (v_cutPos) and the current vertex (v_currPos) that I get from the vertex shader. These two vertices are passed as world coordinates.
Now, the cut-off logic works fine, but the cut itself is not smooth (it has to follow a certain shape). When I pass the same vertices after converting them to clip space, the cut is much smoother (or finer).
Is there any explanation for this?
//fragment shader
precision highp float;
varying mediump vec4 v_color;
varying vec4 v_currPos;
varying vec4 v_cutPos;
/* returns 0 if pt is inside box otherwise 1 */
float insideCutArea(vec2 pt, vec2 cutPos)
{
return float(pt.y > cutPos.y);
}
void main(void)
{
float transparency = insideCutArea(v_currPos.xy, v_cutPos.xy);
gl_FragColor = vec4(v_color.xyz, v_color.w * transparency);
}
//vertex shader
varying mediump vec4 v_color;
uniform vec3 cutPos;
varying vec4 v_currPos;
varying vec4 v_cutPos;
void main(void)
{
// ... other transformations ...
v_cutPos = myPMVMatrix * vec4(cutPos,1.0); //cut is not fine when not multiplying with the matrix
gl_Position = myPMVMatrix * vec4(validVertex, 1.0);
v_currPos = myPMVMatrix * vec4(validVertex, 1.0); //cut is not fine when not multiplying with the matrix
v_color = color;
}
PS: This question was previously closed due to a lack of clarity. I have created it again with code explaining what I have done.

get the view coordinate in a point sprite

If you pass a varying view-space position from the vertex shader to a fragment shader then the fragment shader can know the fragment's position relative to the camera (0,0,0 in view-space). This is useful for per-pixel lighting etc. E.g.:
precision mediump float;
attribute vec3 vertex;
uniform mat4 pMatrix, mvMatrix;
varying vec4 pos;
void main() {
pos = (mvMatrix * vec4(vertex,1.0));
gl_Position = pMatrix * pos;
}
However, if you are rendering GL_POINTS and setting gl_PointSize in the vertex shader, how can the fragment shader determine each fragment's position (as the pos passed in the example above will be for the sprite's centre point)?
Simple answer: stop using point sprites. Really, they're terrible.
Less simple answer: stop passing the view-space position to the fragment shader entirely. Instead, use gl_FragCoord to compute the view-space position, based on viewport data and so forth. There's even sample GLSL code for it:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
You'll need to give your fragment shader the viewport and invPersMatrix values. gl_DepthRange is built-in. eyePos is what you're looking for.
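For reference, the snippet above assumes declarations along these lines (the names match those already used in the code; the comments describe the assumed layout):
uniform vec4 viewport;       // x, y = viewport origin; z, w = viewport width, height in pixels
uniform mat4 invPersMatrix;  // inverse of the projection matrix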
There's probably a faster way to do it that takes advantage of the fact that you're drawing a screen-aligned quad. It would involve the point size and using gl_PointCoord.
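A rough sketch of that idea (not from the original answer; it assumes the sprite is camera-facing, and v_pointSizeView is a hypothetical varying holding the point's size already converted to view-space units in the vertex shader):
precision mediump float;
varying vec4 pos;               // view-space centre, as in the vertex shader above
varying float v_pointSizeView;  // hypothetical: sprite edge length in view-space units
void main() {
    // gl_PointCoord runs 0..1 left-to-right and top-to-bottom across the sprite,
    // so flip y to get a view-space offset from the centre.
    vec2 offset = (gl_PointCoord - 0.5) * vec2(1.0, -1.0) * v_pointSizeView;
    vec3 eyePos = pos.xyz + vec3(offset, 0.0);
    gl_FragColor = vec4(normalize(eyePos) * 0.5 + 0.5, 1.0); // placeholder output for the sketch
}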

An outline/sharp transition in a fragment shader

I would like to create a sharp transition effect between pixels in my fragment shader, but I'm not sure how I could do this.
In my vertex shader I have a varying float x; and in my fragment shader I use this value to set the opacity of the color. I quantize the current value to produce a layering effect. What I'd like to do, at the very minimum level of the effect, is produce a distinct border (a different color entirely). For example, if x > 0.1 for a pixel and x < 0.1 for any neighboring pixel, then the resulting color should be black.
I don't see any way in GLSL to gain access to neighbouring pixels (I could be wrong). How could I achieve such an effect? I'm limited to OpenGL ES 2.0 (though if it's not possible at all on this version, any solution would still be helpful).
You are correct that you cannot access neighboring pixels. This is because there is no guarantee of the order in which pixels are written; they are all drawn in parallel. If you could read neighboring pixels in the framebuffer, you would get inconsistent results.
However, you can do this in a post-process if you want. Draw your whole scene into a framebuffer texture, and then draw that texture to the screen with a filtering shader.
When drawing from a texture in your shader you can sample neighboring texels all you want, so you could easily compare the delta between two neighboring texels.
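For instance, a minimal comparison pass along those lines might look like this (a sketch only; inputImageTexture is assumed to hold the previously rendered scene, and u_texelSize is a hypothetical uniform set to vec2(1.0/width, 1.0/height) of that texture):
precision mediump float;
varying vec2 textureCoordinate;       // 0..1 coordinate across the full-screen quad
uniform sampler2D inputImageTexture;  // the scene rendered in the first pass
uniform vec2 u_texelSize;             // hypothetical: one texel in texture coordinates
void main() {
    float here  = texture2D(inputImageTexture, textureCoordinate).a;
    float right = texture2D(inputImageTexture, textureCoordinate + vec2(u_texelSize.x, 0.0)).a;
    float above = texture2D(inputImageTexture, textureCoordinate + vec2(0.0, u_texelSize.y)).a;
    // Paint black wherever a neighbour lands on the other side of the 0.1 threshold.
    float edge = float((here > 0.1) != (right > 0.1) || (here > 0.1) != (above > 0.1));
    gl_FragColor = mix(texture2D(inputImageTexture, textureCoordinate), vec4(0.0, 0.0, 0.0, 1.0), edge);
}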
If your OpenGL ES implementation supports the OES_standard_derivatives extension, you can get the rate of change of your variable by forward/backward differencing with neighboring pixels in the 2×2 quad being shaded:
float outline(float t, float threshold, float width)
{
return clamp(width - abs(threshold - t) / fwidth(t), 0.0, 1.0);
}
This function returns the coverage for a line of the specified width where t ≈ threshold, using fwidth to determine how far it is from the cutoff. Note that fwidth(t) is equivalent to abs(dFdx(t)) + abs(dFdy(t)) and calculates the width in Manhattan distance, which may overfatten diagonal lines. If you prefer Euclidean distance:
float outline(float t, float threshold, float width)
{
float dx = dFdx(t);
float dy = dFdy(t);
float ewidth = sqrt(dx * dx + dy * dy);
return clamp(width - abs(threshold - t) / ewidth, 0.0, 1.0);
}
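In an OpenGL ES 2.0 fragment shader the derivative functions also have to be enabled with an #extension directive, and the outline value is typically used to blend toward the border colour. A sketch of how the function above might be used (x and the colours are stand-ins for whatever the layering effect already produces):
#extension GL_OES_standard_derivatives : enable
precision mediump float;
varying float x;   // the value driving the layering effect, from the vertex shader

// outline() as defined in the first snippet above:
float outline(float t, float threshold, float width)
{
    return clamp(width - abs(threshold - t) / fwidth(t), 0.0, 1.0);
}

void main()
{
    vec4 layerColor  = vec4(1.0, 1.0, 1.0, x);  // placeholder for the existing effect
    vec4 borderColor = vec4(0.0, 0.0, 0.0, 1.0);
    // Blend in a roughly 1.5 px black border wherever x crosses the 0.1 threshold.
    gl_FragColor = mix(layerColor, borderColor, outline(x, 0.1, 1.5));
}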
In addition to Pivot's implementation based on derivatives, you can grab neighboring pixels from a source image using an offset based on the pixel dimensions of that source. The inverse of the width or height in pixels is the offset from the current texture coordinate that you'll need to use here.
For example, here is a vertex shader I've used to calculate these offsets for the eight pixels that surround a central one:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform highp float texelWidth;
uniform highp float texelHeight;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
void main()
{
gl_Position = position;
vec2 widthStep = vec2(texelWidth, 0.0);
vec2 heightStep = vec2(0.0, texelHeight);
vec2 widthHeightStep = vec2(texelWidth, texelHeight);
vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);
textureCoordinate = inputTextureCoordinate.xy;
leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;
topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;
bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
}
and here's a fragment shader that uses this to perform Sobel edge detection:
precision mediump float;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
float bottomLeftIntensity = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
float topRightIntensity = texture2D(inputImageTexture, topRightTextureCoordinate).r;
float topLeftIntensity = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
float bottomRightIntensity = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;
float leftIntensity = texture2D(inputImageTexture, leftTextureCoordinate).r;
float rightIntensity = texture2D(inputImageTexture, rightTextureCoordinate).r;
float bottomIntensity = texture2D(inputImageTexture, bottomTextureCoordinate).r;
float topIntensity = texture2D(inputImageTexture, topTextureCoordinate).r;
float h = -topLeftIntensity - 2.0 * topIntensity - topRightIntensity + bottomLeftIntensity + 2.0 * bottomIntensity + bottomRightIntensity;
float v = -bottomLeftIntensity - 2.0 * leftIntensity - topLeftIntensity + bottomRightIntensity + 2.0 * rightIntensity + topRightIntensity;
float mag = length(vec2(h, v));
gl_FragColor = vec4(vec3(mag), 1.0);
}
I pass in the texelWidth and texelHeight uniforms, which are 1/width and 1/height of the image, respectively. This does require you to track the input image width and height, but it should work on all OpenGL ES devices, not just those with the derivative extensions.
I do the texture offset calculations in the vertex shader for two reasons: so that offset calculations only need to be performed once per vertex instead of once per fragment, and more importantly because some of the tile-based deferred renderers react very poorly to dependent texture reads where texture offsets are calculated in a fragment shader. The performance can be up to 20X higher for a shader program that removes these dependent texture reads on these devices.

GLSL Shader - How to calculate the height of a texture?

In this question I asked how to create a "mirrored" texture, and now I want to move this "mirrored" image down the y-axis by the height of the image.
I tried something like this with different values of HEIGHT but I cannot find a proper solution:
// Vertex Shader
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
attribute highp vec4 a_position;
attribute lowp vec4 a_color;
attribute highp vec2 a_texcoord;
varying lowp vec4 v_color;
varying highp vec2 v_texCoord;
void main()
{
highp vec4 pos = a_position;
pos.y = pos.y - HEIGHT;
gl_Position = (u_projectionMatrix * u_modelViewMatrix) * pos;
v_color = a_color;
v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y);
}
What you are actually changing in your code snippet is the Y position of your vertices, which is most certainly not what you want to do.
a_position is your model-space position: the coordinate system centered on your quad (I'm assuming you're using a quad to display the texture).
If you instead apply the offset after the projection, you will be able to move the image up and down on screen; so change the gl_Position value:
(u_projectionMatrix * u_modelViewMatrix) * pos + vec4(0.0, HEIGHT, 0.0, 0.0)
Note that you will then be working in clip space, so take the dimensions of your viewport into account.
Finally, a better way to achieve the effect you want is to use a rotation matrix to flip and tilt the image.
You would then combine this matrix with the rotation of your image (combine it with the model-view matrix).
You can choose to either multiply the model matrices by the view projection on the CPU:
original_mdl_mat = ...;
rotated_mdl_mat = Matrix.CreateTranslation(0, -image.Height, 0) * Matrix.CreateRotationY(180) * original_mdl_mat;
mvm_original_mat = Projection * View * original_mdl_mat;
mvm_rotated_mat = Projection * View * rotated_mdl_mat;
or on the GPU:
uniform highp mat4 u_model;
uniform highp mat4 u_viewMatrix;
uniform highp mat4 u_projectionMatrix;
gl_Position = (u_projectionMatrix * u_viewMatrix * u_model) * pos;
The coordinates passed to texture2D always address the source in the normalized range [0, 1] on both axes, regardless of the original texture size and aspect ratio. So a knee-jerk answer is that the height of a texture is always 1.0.
If you want to know the height of the source image comprising the texture in pixels then you'll need to supply that yourself — probably as a uniform — since it isn't otherwise exposed.
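Tying the two answers together, one possible sketch (illustrative only; u_imageHeightPx and u_viewportHeightPx are hypothetical uniforms you would supply yourself): an offset of H pixels corresponds to 2.0 * H / viewportHeight in normalized device coordinates, and it has to be scaled by w before being added to the clip-space position so that it survives the perspective divide.
// Vertex shader sketch: drop the mirrored quad by the image height in pixels.
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
uniform highp float u_imageHeightPx;    // hypothetical: source image height in pixels
uniform highp float u_viewportHeightPx; // hypothetical: viewport height in pixels
attribute highp vec4 a_position;
attribute highp vec2 a_texcoord;
varying highp vec2 v_texCoord;
void main()
{
    vec4 clipPos = u_projectionMatrix * u_modelViewMatrix * a_position;
    // Convert the pixel offset to NDC and pre-multiply by w so it survives the divide.
    clipPos.y -= (2.0 * u_imageHeightPx / u_viewportHeightPx) * clipPos.w;
    gl_Position = clipPos;
    v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y);
}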

Desktop GLSL without ftransform()

I'm porting a codebase of mine from fixed-function OpenGL 1.x to OpenGL 2.x - Technically OpenGL ES 2.0, but I'm still coding on the desktop, just keeping in mind the limitations that ES 2.0 imposes which are similar to the 3.1 'new' profile.
Problem is, it seems like for anything other than 2D, creating a shader passing in the modelviewprojection matrix as a uniform does not work. Normally I get a black screen, but if I set the Z value of all my vertices to 0 I get stuff to show up.
Putting my shaders in RenderMonkey works when I have ES 2.0 mode enabled, but on standard desktop GL it's just a black screen (no compiler errors/warnings):
vert shader:
uniform mat4 mvp_matrix;
uniform mat4 obj_matrix;
uniform vec4 u_color;
attribute vec3 a_vertex;
attribute vec2 a_texcoord0;
varying vec4 v_color;
varying vec2 v_texcoord0;
void main(void)
{
v_color = u_color;
gl_Position = mvp_matrix * (obj_matrix * vec4(a_vertex, 1.0));
v_texcoord0 = a_texcoord0;
}
frag shader:
uniform sampler2D t_texture0;
varying vec2 v_texcoord0;
varying vec4 v_color;
void main(void)
{
vec4 color = texture2D(t_texture0, v_texcoord0);
gl_FragColor = color * v_color;
}
I am passing in the matrices as glUniformMatrix4fv(location, 1, GL_FALSE, mvpMatrix);
This shader works like gold for anything drawn in 2D. What am I doing wrong here? Or am I required to use ftransform() on desktop GL?
One thing I think needs a bit of clarification:
A model matrix transforms an object from object coordinates to world coordinates.
A view matrix transforms the world coordinates to eye coordinates.
A projection matrix converts eye coordinates to clip coordinates.
Based on standard naming conventions, the mvpMatrix is projection * view * model, in that order. There are no other matrices that you need to multiply by. Projection is your projection matrix (either ortho or perspective), view is the camera transform matrix (NOT the modelview), and model is the position, scale, and rotation of your object.
I believe the issue lies either in multiplying matrices that don't need to be multiplied together or in multiplying matrices in the wrong order (matrix multiplication isn't commutative).
If you haven't already solved this, I would recommend sending all 3 matrices over separately and then dumping the values back out to make sure there are no issues sending the matrices over.
Vertex shader:
attribute vec4 a_vertex;
attribute vec2 a_texcoord0;
varying vec2 v_texcoord0;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main(void)
{
gl_Position = projection * view * model * a_vertex;
v_texcoord0 = a_texcoord0;
}
Fragment Shader:
uniform sampler2D t_texture0;
uniform vec4 u_color;
varying vec2 v_texcoord0;
void main(void)
{
vec4 color = texture2D(t_texture0, v_texcoord0);
gl_FragColor = color * u_color;
}
Also, I moved the color uniform to the fragment shader; passing it through as a varying is unnecessary when all the vertices have the same color.
