GLSL - different precision in different parts of fragment shader - opengl-es

I have a simple fragment shader that draws a test grid pattern.
I don't really have a problem, but I've noticed a weird behavior that's inexplicable to me. Don't mind the weird constants: they get filled in during shader assembly, before compilation. Also, vertexPosition is the actual calculated position in world space, so I can move the shader texture when the mesh itself moves.
Here's the code of my shader:
#version 300 es
precision highp float;
in highp vec3 vertexPosition;
out mediump vec4 fragColor;
const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_base = __CONSTANT_COLOR_BASE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;
float minWidthX;
float minWidthY;
vec3 color_green = vec3(0.0,1.0,0.0);
void main()
{
    // calculate l1 border positions
    float dimension = squareSize;
    int roundX = int(vertexPosition.x / dimension);
    int roundY = int(vertexPosition.z / dimension);
    float remainderX = vertexPosition.x - float(roundX)*dimension;
    float remainderY = vertexPosition.z - float(roundY)*dimension;
    vec3 dyX = dFdy(vec3(vertexPosition.x, vertexPosition.y, 0));
    vec3 dxX = dFdx(vec3(vertexPosition.x, vertexPosition.y, 0));
    minWidthX = max(length(dxX), length(dyX));
    vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
    vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
    minWidthY = max(length(dxY), length(dyY));
    // fill l1 squares
    if (remainderX <= minWidthX)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }
    if (remainderY <= minWidthY)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }
    // fill base color
    fragColor = vec4(color_base, 1.0);
    return;
}
So, with this code everything works well.
I then wanted to optimize it a little by moving the calculations that only concern the horizontal lines below the vertical-lines check, because those calculations are useless if the vertical-lines check is true. Like this:
#version 300 es
precision highp float;
in highp vec3 vertexPosition;
out mediump vec4 fragColor;
const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_base = __CONSTANT_COLOR_BASE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;
float minWidthX;
float minWidthY;
vec3 color_green = vec3(0.0,1.0,0.0);
void main()
{
    // calculate l1 border positions
    float dimension = squareSize;
    int roundX = int(vertexPosition.x / dimension);
    int roundY = int(vertexPosition.z / dimension);
    float remainderX = vertexPosition.x - float(roundX)*dimension;
    float remainderY = vertexPosition.z - float(roundY)*dimension;
    vec3 dyX = dFdy(vec3(vertexPosition.x, vertexPosition.y, 0));
    vec3 dxX = dFdx(vec3(vertexPosition.x, vertexPosition.y, 0));
    minWidthX = max(length(dxX), length(dyX));
    // fill l1 squares
    if (remainderX <= minWidthX)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }
    vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
    vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
    minWidthY = max(length(dxY), length(dyY));
    if (remainderY <= minWidthY)
    {
        fragColor = vec4(color_l1, 1.0);
        return;
    }
    // fill base color
    fragColor = vec4(color_base, 1.0);
    return;
}
But even though it seems this should not affect the result at all, it does, by quite a bit.
Below are the two screenshots. The first one is the original code; the second is the "optimized" one, which works badly.
Original version:
Optimized version (looks much worse):
Notice how the lines became "fuzzy", even though seemingly no numbers should have changed at all.
Note: this isn't because minWidthX/Y are global. I initially optimized by making them local.
I also initially moved the roundY and remainderY calculations below the X check as well, and the result is the same.
Note 2: I tried adding the highp keyword to each of those calculations specifically, but that doesn't change anything (not that I expected it to, but I tried nevertheless).
Could anyone please explain to me why this happens? I would like to know for my future shaders, and actually I would like to optimize this one as well. I would like to understand the principle behind precision loss here, because it doesn't make any sense to me.

For the answer I'll refer to the OpenGL ES Shading Language 3.20 Specification, which is the same as the OpenGL ES Shading Language 3.00 Specification on this point.
8.14.1. Derivative Functions
[...] Derivatives are undefined within non-uniform control flow.
and further
3.9.2. Uniform and Non-Uniform Control Flow
When executing statements in a fragment shader, control flow starts as uniform control flow; all fragments enter the same control path into main(). Control flow becomes non-uniform when different fragments take different paths through control-flow statements (selection, iteration, and jumps).[...]
That means that the result of the derivative functions in the first case (of your question) is well defined.
But in the second case it is not:
if (remainderX <= minWidthX)
{
fragColor = vec4(color_l1, 1.0);
return;
}
vec3 dyY = dFdy(vec3(0, vertexPosition.y, vertexPosition.z));
vec3 dxY = dFdx(vec3(0, vertexPosition.y, vertexPosition.z));
because the return statement acts like a selection, and all the code after the block containing the return statement is in non-uniform control flow.
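The practical consequence is that everything which needs dFdx/dFdy must be computed before the first fragment can return. A minimal sketch of how the "optimized" main() could be arranged so the derivatives stay in uniform control flow (this essentially restores the first version's ordering; only derivative-free work may be deferred past the branch):

```glsl
void main()
{
    // ... roundX/roundY/remainderX/remainderY computed as before ...

    // Derivatives must be taken in uniform control flow, i.e. before any return:
    minWidthX = max(length(dFdx(vec3(vertexPosition.xy, 0.0))),
                    length(dFdy(vec3(vertexPosition.xy, 0.0))));
    minWidthY = max(length(dFdx(vec3(0.0, vertexPosition.yz))),
                    length(dFdy(vec3(0.0, vertexPosition.yz))));

    if (remainderX <= minWidthX) { fragColor = vec4(color_l1, 1.0); return; }
    // From here on control flow is non-uniform, but no more derivatives are
    // taken, so the remaining comparisons are safe:
    if (remainderY <= minWidthY) { fragColor = vec4(color_l1, 1.0); return; }
    fragColor = vec4(color_base, 1.0);
}
```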

Related

Showing Point Cloud Structure using Lighting in Three.js

I am generating a point cloud representing a rock using Three.js, but am facing a problem with visualizing its structure clearly. In the second screenshot below, I would like to be able to denote the topography of the rock, like the corner of the structure (shown better in the third screenshot), in a more explicit way, as I want to be able to maneuver around the rock and select different points. I have rocks that are more sparse (the structure is harder to see because the points are very far apart) and more dense (the structure is harder to see from afar because the points are all mashed together, like in the first screenshot, but even when closer to the rock), and finding a generalized way to approach this problem has been difficult.
I posted about this problem before here, thinking that representing the "depth" of the rock into the screen would suffice, but after attempting the proposed solution I still could not find a nice way to represent the topography better. Is there a way to add a light source that my shaders can pick up on? I want to see whether I can represent the colors differently based on their orientation to the source. Using different software, a friend was able to produce the image below. Is there a way to simulate this in Three.js?
For context, I am using Points with a BufferGeometry and ShaderMaterial. Below is the shader code I currently have:
Vertex:
precision mediump float;
varying vec3 vColor;
attribute float alpha;
varying float vAlpha;
uniform float scale;
void main() {
    vAlpha = alpha;
    vColor = color;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    #ifdef USE_SIZEATTENUATION
        //bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );
        //if ( isPerspective ) gl_PointSize *= ( scale / -mvPosition.z );
    #endif
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
and
Fragment:
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
precision mediump float;
varying vec3 vColor;
varying float vAlpha;
uniform vec2 u_depthRange;
float LinearizeDepth(float depth, float near, float far)
{
    float z = depth * 2.0 - 1.0; // back to NDC
    return (2.0 * near * far / (far + near - z * (far - near)) - near) / (far - near);
}
void main() {
    float r = 0.0, delta = 0.0, alpha = 1.0;
    vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
    r = dot(cxy, cxy);
    float lineardepth = LinearizeDepth(gl_FragCoord.z, u_depthRange[0], u_depthRange[1]);
    if (r > 1.0) {
        discard;
    }
    // Reset back to 1.0 instead of using the lineardepth method above
    gl_FragColor = vec4(vColor, 1.0);
}
Thank you so much for your help!

Raycasting with InstancedMesh, InstancedBufferGeometry, custom shader

Basically, I can't get raycasting to work with them. My guess is that my matrix coordinate calculation method is wrong, but I don't know how to do it right.
I set the vertex position and offset in the vertex shader, and in the InstancedMesh I set the same offset, expecting that the raycast would return an instanceId, but nothing intersects. You can find my entire code here.
I tried to adapt an official raycasting example here, but can't figure out what I did wrong. My hodgepodge uses InstancedMesh, InstancedBufferGeometry, and a custom shader together. My objective is to learn how it works.
My question is: where did I go wrong?
My vertex shader:
precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec4 color;
attribute vec3 offset;
varying vec3 vPosition;
varying vec4 vColor;
void main() {
    vColor = vec4(color);
    vPosition = offset*1.0 + position;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( vPosition, 1.0 );
    // if gl_Position is not set, nothing is shown
}
My InstancedMesh matrix setting:
for (let i = 0; i < SQUARE_COUNT; i++) {
    transform.position.set(offsets[i], offsets[i+1], offsets[i+2]);
    transform.updateMatrix();
    mesh.setMatrixAt(i, transform.matrix);
}
The offsets are set beforehand as follows:
for (let i = 0; i < SQUARE_COUNT; i++) {
    offsets.push(0 + i*0.05, 0 + i*0.05, 0 + i*0.05); // same is set in InstancedMesh
    colors.push(Math.random(), Math.random(), Math.random(), Math.random());
}
The raycaster has no awareness of any nonstandard transformation that you do in your vertex shader. It's just the way it works. It has no way of knowing that you are doing:
vPosition = offset*1.0 + position;
in your shader.
It works by assuming that you are running the bog standard vertex shader with no additional transforms. It assumes that every object you are casting against has a well defined/computed bounding box as well.
If you are going to use raycasting, you may have to make a non-rendered scene that represents your objects in their final rendered positions, and cast against that.

Weird behaviour with OpenGL Uniform Buffers on OSX

I am having some weird behaviour with uniform buffers in my hobby OpenGL 4.1 engine.
On Windows everything works fine (both Intel and Nvidia GPUs), but on my MacBook (also Intel) it doesn't.
So, to explain what is happening on OSX: if I hardcode all my uniform buffer variables in the actual fragment shader code, then I am able to render perfectly fine, but if I set them back to the variables, I get nothing.
I had a look at the OpenGL state using apitrace, and all the variable values are perfect, so I am a bit confused as to what is going on here.
I am hoping this is just a code bug and not some underlying issue with the drivers.
Below is the fragment shader code where if I hardcode all the DirectionLight variables everything works fine.
#version 410
struct DirectionalLightData
{
    vec4 Colour;
    vec3 Direction;
    float Intensity;
};
layout(std140) uniform ObjectBuffer
{
    mat4 Model;
};
layout(std140) uniform FrameBuffer
{
    mat4 Projection;
    mat4 View;
    DirectionalLightData DirectionalLight;
    vec3 ViewPos;
};
uniform sampler2D PositionMap;
uniform sampler2D NormalMap;
uniform sampler2D AlbedoSpecMap;
layout(location = 0) in vec2 TexCoord;
out vec4 FinalColour;
float CalcDiffuseContribution(vec3 lightDir, vec3 normal)
{
    return max(dot(normal, -lightDir), 0.0f);
}
float CalcSpecularContribution(vec3 lightDir, vec3 viewDir, vec3 normal, float specularExponent)
{
    vec3 reflectDir = reflect(lightDir, normal);
    vec3 halfwayDir = normalize(lightDir + viewDir);
    return pow(max(dot(normal, halfwayDir), 0.0f), specularExponent);
}
float CalcDirectionLightFactor(vec3 viewDir, vec3 lightDir, vec3 normal)
{
    float diffuseFactor = CalcDiffuseContribution(lightDir, normal);
    float specularFactor = CalcSpecularContribution(normal, viewDir, normal, 1.0f);
    return diffuseFactor * specularFactor;
}
void main()
{
    vec3 position = texture(PositionMap, TexCoord).rgb;
    vec3 normal = texture(NormalMap, TexCoord).rgb;
    vec3 albedo = texture(AlbedoSpecMap, TexCoord).rgb;
    vec3 viewDir = normalize(ViewPos - position);
    float directionLightFactor = CalcDirectionLightFactor(viewDir, DirectionalLight.Direction, normal) * DirectionalLight.Intensity;
    FinalColour.rgb = albedo * directionLightFactor * DirectionalLight.Colour.rgb;
    FinalColour.a = 1.0f * DirectionalLight.Colour.a;
}
Here is the order in which I update and bind the UBOs (I pulled these calls from apitrace, as there is too much code to copy-paste here):
glGetActiveUniformBlockName(5, 0, 255, NULL, FrameBuffer);
glGetUniformBlockIndex(5, FrameBuffer) = 0;
glGetActiveUniformBlockName(5, 1, 255, NULL, ObjectBuffer);
glGetUniformBlockIndex(5, ObjectBuffer) = 1;
glBindBuffer(GL_UNIFORM_BUFFER, 1);
glMapBufferRange(GL_UNIFORM_BUFFER, 0, 172,GL_MAP_WRITE_BIT);
memcpy(0x10b9f8000, [binary data, size = 172 bytes], 172);
glUnmapBuffer(GL_UNIFORM_BUFFER);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, 2);
glBindBufferBase(GL_UNIFORM_BUFFER, 1, 1);
glBindBuffer(GL_UNIFORM_BUFFER, 2);
glMapBufferRange(GL_UNIFORM_BUFFER, 0, 64, GL_MAP_WRITE_BIT);
memcpy(0x10b9f9000, [binary data, size = 64 bytes], 64);
glUnmapBuffer(GL_UNIFORM_BUFFER);
glUniformBlockBinding(5, 1, 0);
glUniformBlockBinding(5, 0, 1);
glDrawArrays(GL_TRIANGLES, 0, 6);
Note that the FrameBuffer UBO has ID 1 and ObjectBuffer UBO has ID 2
I think that when you are using the std140 layout, your data members must be properly aligned: you cannot freely mix vec4 with vec3 or float. Either keep all variables mat4 and vec4, or don't use the std140 layout, and instead calculate the UBO alignment and the offsets of your variables on the application side and set the values accordingly. See the usage of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT.
As an experiment, change all variables to mat4 and vec4 and see whether your issue goes away.
If you did not use the std140 layout for a block, you will need to query the byte offset for each uniform within the block. The OpenGL specification explains the storage of each of the basic types, but not the alignment between types. Struct members, just like regular uniforms, each have a separate offset that must be individually queried.
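As an illustration of the std140 rules applied to the FrameBuffer block from the shader above (offsets derived from the std140 layout rules by hand, not queried from a driver): a vec3 has a base alignment of 16 bytes, and a struct is aligned to a multiple of 16, so padding appears around the vec3 members:

```glsl
layout(std140) uniform FrameBuffer
{
    mat4 Projection;                       // offset   0, size 64
    mat4 View;                             // offset  64, size 64
    DirectionalLightData DirectionalLight; // struct aligned to 16 bytes:
                                           //   vec4  Colour     offset 128
                                           //   vec3  Direction  offset 144 (vec3 aligned like vec4)
                                           //   float Intensity  offset 156 (packs into the vec3's padding)
    vec3 ViewPos;                          // offset 160, occupies 160..171
};
// The CPU-side data you memcpy into the buffer must match these offsets
// exactly (160 + 12 = 172 bytes, which matches the apitrace dump above).
```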
After a few days of digging I seem to have found the issue.
I was not calling glBindBufferBase() after binding a different shader program.
Such a silly mistake caused me so much grief.
Thanks everyone for the help.

Drawing a circle in fragment shader

I am a complete noob when it comes to creating shaders. Or, better said, I just learned about it yesterday.
I am trying to create a really simple circle. I thought I had finally figured it out, but it turns out to be too large. It should match the size of the DisplayObject the filter is applied to.
The fragment shader:
precision mediump float;
varying vec2 vTextureCoord;
vec2 resolution = vec2(1.0, 1.0);
void main() {
    vec2 uv = vTextureCoord.xy / resolution.xy;
    uv -= 0.5;
    uv.x *= resolution.x / resolution.y;
    float r = 0.5;
    float d = length(uv);
    float c = smoothstep(d, d+0.003, r);
    gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
}
Example using Pixi.js:
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
    attribute vec2 aVertexPosition;
    attribute vec2 aTextureCoord;
    uniform mat3 projectionMatrix;
    varying vec2 vTextureCoord;
    void main(void)
    {
        gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
        vTextureCoord = aTextureCoord;
    }
`;
var fragShader = `
    precision mediump float;
    varying vec2 vTextureCoord;
    vec2 resolution = vec2(1.0, 1.0);
    void main() {
        vec2 uv = vTextureCoord.xy / resolution.xy;
        uv -= 0.5;
        uv.x *= resolution.x / resolution.y;
        float r = 0.5;
        float d = length(uv);
        float c = smoothstep(d, d+0.003, r);
        gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
    }
`;
var filter = new PIXI.Filter(vertexShader, fragShader);
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
Pixi.js's vTextureCoord does not go from 0 to 1.
From the docs
V4 filters differ from V3. You can't just add in the shader and assume that texture coordinates are in the [0,1] range.
...
Note: vTextureCoord multiplied by filterArea.xy is the real size of bounding box.
If you want to get the pixel coordinates, use uniform filterArea, it will be passed to the filter automatically.
uniform vec4 filterArea;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
They are in pixels. That won't work if we want something like "fill the ellipse into a bounding box". So, let's pass the dimensions too! PIXI doesn't do it automatically; we need a manual fix:
filter.apply = function(filterManager, input, output)
{
    this.uniforms.dimensions[0] = input.sourceFrame.width;
    this.uniforms.dimensions[1] = input.sourceFrame.height;
    // draw the filter...
    filterManager.applyFilter(this, input, output);
};
Let's combine it in the shader!
uniform vec4 filterArea;
uniform vec2 dimensions;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
vec2 normalizedCoord = pixelCoord / dimensions;
Here's your snippet updated.
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
    attribute vec2 aVertexPosition;
    attribute vec2 aTextureCoord;
    uniform mat3 projectionMatrix;
    varying vec2 vTextureCoord;
    void main(void)
    {
        gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
        vTextureCoord = aTextureCoord;
    }
`;
var fragShader = `
    precision mediump float;
    varying vec2 vTextureCoord;
    uniform vec2 dimensions;
    uniform vec4 filterArea;
    void main() {
        vec2 pixelCoord = vTextureCoord * filterArea.xy;
        vec2 uv = pixelCoord / dimensions;
        uv -= 0.5;
        float r = 0.5;
        float d = length(uv);
        float c = smoothstep(d, d+0.003, r);
        gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
    }
`;
var filter = new PIXI.Filter(vertexShader, fragShader);
filter.apply = function(filterManager, input, output)
{
    this.uniforms.dimensions[0] = input.sourceFrame.width;
    this.uniforms.dimensions[1] = input.sourceFrame.height;
    // draw the filter...
    filterManager.applyFilter(this, input, output);
};
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
It seems you've stumbled upon weird floating-point precision problems: the texture coordinates (vTextureCoord) in your fragment shader aren't strictly in the (0, 1) range. Here's what I got when I added the line gl_FragColor = vec4(vTextureCoord, 0, 1):
It seems good, but if we inspect it closely, the lower-right pixel should be (1, 1, 0), but it isn't:
The problem goes away if, instead of setting the size to 500 by 500, we use a power-of-two size (say, 512 by 512):
The other possible way to mitigate the problem would be to circumvent Pixi's code that computes the projection matrix and provide your own that transforms a smaller quad into the desired screen position.

offset error after the 2nd texture access

I want my fragment shader to traverse a serialized quad tree.
When an inner node is found, the rg values are interpreted as an index into the same texture.
A blue value of 0 marks an inner node.
In a first step, a pointer is read from a 2x2 subimage at position 0x0 using the provided uv coords.
Then that pointer is used to access another 2x2 portion of the same texture.
However, for each child of the root node there is an increasing offset error that results in the wrong color.
Here is my shader (for debugging purposes the loop is fixed at one iteration, so only 2 levels of the quad tree get accessed).
Also for debugging, I put a red 2x2 image at the location of the top-left child, a green image for the top-right, blue for the bottom-left, and yellow for the bottom-right child.
The resulting image is this:
I am completely clueless. Can one of you think of a reason why this is happening?
I checked all the coordinate conversions and calculations 3 times; they are all correct.
Here is the shader:
// virtual_image.fs
precision highp float;
uniform sampler2D t_atlas;
uniform sampler2D t_tree;
uniform vec2 gridpoolSize;
uniform vec2 atlasTileSize;
uniform vec2 atlasSize;
varying vec2 v_texcoord;
const float LEAF_MARKER = 1.0;
const float NODE_MARKER = 0.0;
const float CHANNEL_PERECISION = 255.0;
vec2 decode(const vec2 vec){
    return vec * CHANNEL_PERECISION;
}
void main ()
{
    vec4 oc = vec4(1); // output color
    vec4 tColor = texture2D(t_tree, v_texcoord); // only for debugging
    vec4 aColor = texture2D(t_atlas, v_texcoord); // only for debugging
    // oc = mix(tColor, aColor, 0.5);
    highp vec2 localCoord = v_texcoord;
    // by convention the root node starts at [0,0],
    // so we read the first pointer relative to that point;
    // we divide by gridpoolSize to convert the local coords into local coords of the first grid at [0,0]
    highp vec3 pointer = texture2D(t_tree, localCoord / gridpoolSize).rgb; // pointer is correct at this point!
    for(int i = 0; i < 1; i++) {
        // divide the local coords into 4 quadrants
        localCoord = fract(localCoord * 2.0); // localCoord is correct!
        // branch
        if(pointer.b <= NODE_MARKER + 0.1){
            highp vec2 index = decode(pointer.rg); // index is correct!
            highp vec2 absoluteCoord = (localCoord + index) / gridpoolSize; // absoluteCoord is correct!
            // we have an inner node: get the next pointer and continue
            pointer = texture2D(t_tree, absoluteCoord).rgb;
            oc.rgb = pointer.rgb; // this point in the code introduces a growing offset, I don't know where it comes from. BUG LOCATION
            //gl_FragColor = vec4(1,0,0,1);
        } else {
            if(pointer.b >= LEAF_MARKER - 0.1){
                // we have a leaf
                vec2 atlasCoord = ((decode(pointer.rg) * atlasTileSize) / atlasSize) + (localCoord * (atlasTileSize / atlasSize));
                vec4 atlasColor = texture2D(t_atlas, atlasCoord);
                //vec4 atlasCoordColor = vec4(atlasCoord,0,1);
                //gl_FragColor = mix(atlasColor, vec4(localCoord, 0, 1), 1.0);
                //gl_FragColor = vec4(localCoord, 0, 1);
                oc = vec4(1,0,1,1);
            } else {
                // we have an empty cell
                oc = vec4(1,0,1,1);
            }
        }
    }
    //oc.rgb = pointer;
    //oc.rgb = oc.rgb * (255.0 / 20.0);
    gl_FragColor = oc;
}
For details on how to serialize a quad tree as a texture take a look at this paper: Octree Textures on the GPU
It turns out that it's a rounding problem.
The code in the decode function has to be changed to:
vec2 decode(const vec2 vec){
    return floor(0.5 + (vec * CHANNEL_PERECISION));
}
The values returned should have been integer indices, but they were slightly too small, like 5.99 instead of 6.
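To spell out why the rounding matters (illustrative values, not measured ones): an index of 6 is stored in a texel channel as 6/255, and after sampling and precision loss it may come back slightly low, so multiplying by 255 gives something like 5.9976; a later int() conversion truncates that toward zero, yielding 5. Note that GLSL ES 1.00 (used here, given texture2D/varying) has no round() built-in, hence the floor(0.5 + x) idiom:

```glsl
// Failure mode, step by step (values hypothetical):
//   stored value    = 6.0 / 255.0            // 0.0235294...
//   sampled value   = 0.0235200              // slightly off after sampling
//   old decode      = 0.0235200 * 255.0      // 5.9976 -> int() truncates to 5, wrong cell
//   fixed decode    = floor(0.5 + 5.9976)    // 6.0    -> correct index
vec2 decode(const vec2 vec){
    return floor(0.5 + (vec * CHANNEL_PERECISION)); // round to the nearest integer index
}
```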
