Loss of precision in GLSL fragment shader - opengl-es

I am using OpenGL ES with the GL shading language. I am rendering to a texture, but I found a loss of precision: for example, when I write a float value of 0.5 to the texture, the actual value stored in the texture is approximately 0.498. What should I do to achieve higher precision?

If you only need to store one value per pixel/texel, you should consider packing and unpacking that value across the RGBA channels:
vec4 packFloat(const float value) {
    // Spread the fractional value across the four 8-bit channels.
    const vec4 bitSh = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
    const vec4 bitMsk = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
    vec4 res = fract(value * bitSh);
    // Remove the bits already stored in the more significant channels.
    res -= res.xxyz * bitMsk;
    return res;
}

float unpackFloat(const vec4 value) {
    // Recombine the channels using the inverse shifts.
    const vec4 bitSh = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return (dot(value, bitSh));
}
This can work well for storing values such as depth maps, and it effectively gives you a 32-bit range for each pixel/texel.
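For instance (a minimal sketch; the sampler and varying names are illustrative, not from the question), the pass that renders to the texture packs the value, and a later pass unpacks it:

// Writing pass: store a value in [0, 1) across the RGBA channels.
precision mediump float;
varying float vDepth; // illustrative: any value in [0, 1)

void main() {
    gl_FragColor = packFloat(vDepth); // packFloat() as defined above
}

// Reading pass: recover the float from the sampled texel.
precision mediump float;
uniform sampler2D uPackedTex; // illustrative name
varying vec2 vUV;

void main() {
    float value = unpackFloat(texture2D(uPackedTex, vUV));
    gl_FragColor = vec4(vec3(value), 1.0); // e.g. visualize as grayscale
}

Note that the render target must use nearest-neighbour filtering; any interpolation between packed texels corrupts the encoded bytes.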

Try adding the highp precision qualifier in front of your variables.
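In an OpenGL ES 2.0 fragment shader that could look like this (a sketch; highp support in fragment shaders is optional, and GL_FRAGMENT_PRECISION_HIGH is the standard macro that reports it):

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;   // roughly single precision where supported
#else
precision mediump float; // fall back on hardware without highp fragments
#endif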

Render to a texture that uses more than 8 bits per component. If you don't have the appropriate OpenGL ES extensions for that, then generally there's not much you can do.

Even the next higher precision might not be enough, because the final stage of the rendering pipeline scales the pixel values to the range 0..1, end points inclusive. Thus 1.0 is represented as 255, which implies a factor of 1/255 rather than 1/256.
The same applies at every precision: 0.5 cannot be represented exactly. Here, 0.5 × 255 = 127.5 is stored as 127, and 127/255 ≈ 0.498, which is exactly the value observed in the question.

Related

Showing Point Cloud Structure using Lighting in Three.js

I am generating a point cloud representing a rock using Three.js, but I am facing a problem with visualizing its structure clearly. In the second screenshot below I would like to be able to denote the topography of the rock, such as the corner of the structure (shown better in the third screenshot), in a more explicit way, since I want to be able to maneuver around the rock and select different points. I have rocks that are more sparse (the structure is harder to see because the points are very far apart) and more dense (the structure is harder to see from afar because the points are all mashed together, as in the first screenshot, even when closer to the rock), and finding a generalized way to approach this problem has been difficult.
I posted about this problem before here, thinking that representing the 'depth' of the rock into the screen would suffice, but after attempting the proposed solution I still could not find a good way to represent the topography. Is there a way to add a light source that my shaders can pick up on? I want to see whether I can color the points differently based on their orientation to the source. Using different software, a friend was able to produce the image below: is there a way to simulate this in Three.js?
For context, I am using Points with a BufferGeometry and ShaderMaterial. Below is the shader code I currently have:
Vertex:
precision mediump float;
varying vec3 vColor;
attribute float alpha;
varying float vAlpha;
uniform float scale;

void main() {
    vAlpha = alpha;
    vColor = color;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
#ifdef USE_SIZEATTENUATION
    //bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );
    //if ( isPerspective ) gl_PointSize *= ( scale / -mvPosition.z );
#endif
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
and
Fragment:
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
precision mediump float;
varying vec3 vColor;
varying float vAlpha;
uniform vec2 u_depthRange;

float LinearizeDepth(float depth, float near, float far)
{
    float z = depth * 2.0 - 1.0; // back to NDC
    return (2.0 * near * far / (far + near - z * (far - near)) - near) / (far - near);
}

void main() {
    float r = 0.0, delta = 0.0, alpha = 1.0;
    vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
    r = dot(cxy, cxy);
    float lineardepth = LinearizeDepth(gl_FragCoord.z, u_depthRange[0], u_depthRange[1]);
    if (r > 1.0) {
        discard;
    }
    // Reset back to 1.0 instead of using the lineardepth method above
    gl_FragColor = vec4(vColor, 1.0);
}
Thank you so much for your help!
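One common approach, sketched here under stated assumptions rather than tested against this setup: give each point a per-point normal attribute (estimated offline, e.g. from nearest neighbours in the cloud) plus a light-direction uniform, and apply simple Lambertian shading in the vertex shader. The uLightDir uniform below is hypothetical and assumed to be a normalized view-space direction; Three.js supplies the normal attribute and normalMatrix uniform to ShaderMaterial automatically.

precision mediump float;
attribute float alpha;
varying vec3 vColor;
varying float vAlpha;
uniform vec3 uLightDir; // hypothetical: normalized light direction in view space

void main() {
    vAlpha = alpha;
    // Lambertian term from the per-point normal.
    vec3 n = normalize(normalMatrix * normal);
    float diffuse = max(dot(n, uLightDir), 0.0);
    vColor = color * (0.3 + 0.7 * diffuse); // small ambient term plus diffuse
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}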

Shader - Unexpected behaviour when dividing with a high value

I have this line:
gl_FragColor = vec4(worldPos.x / maxX, worldPos.z / maxZ, 1.0, 1.0);
Where worldPos.x and worldPos.z range from 0 to 19900, and maxX and maxZ are float uniforms. It works as expected when maxX and maxZ are set to 5000.0 (a gradient to white, all white above 5000), but when maxX and maxZ are set to 19900.0 everything turns blue. Why is that, and how do I get around it? Hardcoding the values makes no difference, i.e.:
gl_FragColor = vec4(worldPos.x / 5000.0, worldPos.z / 5000.0, 1.0, 1.0);
works as expected while:
gl_FragColor = vec4(worldPos.x / 19900.0, worldPos.z / 19900.0, 1.0, 1.0);
makes it all blue. This only happens on some devices and not on others.
Update:
Adding the highp modifier (as suggested by Michael below) solved it for one device, but when testing on another it made no difference. Then I tried doing the division on the CPU (also suggested by Michael), like this:
In Java, before passing the values as uniforms:
float maxX = 1.0f / 19900.0f;
float maxZ = 1.0f / 19900.0f;
program.setUniformf(maxXUniform, maxX);
program.setUniformf(maxZUniform, maxZ);
In the shader:
uniform float maxX;
uniform float maxZ;
...
gl_FragColor = vec4(worldPos.x * maxX, worldPos.z * maxZ, 1.0, 1.0);
...
Final solution:
This still didn't cut it. The values were now too small, so when passed to the shader they became 0 due to the low float precision. I then tried multiplying by 100 before passing the value in, and multiplying by 0.01 inside the shader.
In Java:
float maxX = 100.0f / 19900.0f;
float maxZ = 100.0f / 19900.0f;
program.setUniformf(maxXUniform, maxX);
program.setUniformf(maxZUniform, maxZ);
In the shader:
uniform float maxX;
uniform float maxZ;
...
gl_FragColor = vec4(worldPos.x * 0.01 * maxX, worldPos.z * 0.01 * maxZ, 1.0, 1.0);
...
And that solved the problem. The highp modifier isn't even needed now. Maybe it isn't the prettiest solution, but it's efficient and robust.
I guess you're running OpenGL ES? Floating-point precision is poor on many, usually quite old, devices. I ran into similar issues on several occasions when implementing cascaded shadow mapping in shaders for mobile hardware.
Make sure you use the highp qualifier for those variables. (Note: that might not solve the issue, but it's worth a try.)
Another possible solution: don't perform the division in the shader. Division is quite a heavy operation on many old and weak implementations anyway. Try to avoid division, sqrt() and pow(); run a shader profiler and you will be surprised how heavy those operations are! (The iOS simulator on the Mac has a nice shader profiler.) Try to pass the results in directly as uniforms. I'm not sure this is the bottleneck in your case, though, as I can't see any of these variables bound to per-fragment execution.
And if it still doesn't help, then usually there is nothing you can do about it; it's an old-hardware/GLSL-implementation issue. But I am sure that if you calculate the value on the CPU and upload the result as a uniform, it will solve the issue.

Incorrect reading from DataTextures in shader

I have a program that works great when my POT DataTextures are 1:1 (width:height) in their texel dimensions, but when they are 2:1 or 1:2 the texels appear to be read and applied incorrectly. I'm using continuous indexes (1, 2, 3, 4, 5, ...) to access the texels using the two functions below.
I'm wondering if there is something wrong with how I am accessing the texel data, or perhaps if my use of a Float32Array for the integer indexes needs to be switched to a Uint8Array or something else? Thanks in advance!
This function finds the uv for textures that have one texel per particle cloud in my visualization:
float texelSizeX = 1.0 / uPerCloudBufferWidth;
float texelSizeY = 1.0 / uPerCloudBufferHeight;
vec2 perMotifUV = vec2(
    mod(cellIndex, uPerCloudBufferWidth) * texelSizeX,
    floor(cellIndex / uPerCloudBufferHeight) * texelSizeY );
perMotifUV += vec2(0.5 * texelSizeX, 0.5 * texelSizeY);
This function finds the uv for textures that contain one texel for each particle contained in all of the clouds:
float pTexelSizeX = 1.0 / uPerParticleBufferWidth;
float pTexelSizeY = 1.0 / uPerParticleBufferHeight;
vec2 perParticleUV = vec2(
    mod(aParticleIndex, uPerParticleBufferWidth) * pTexelSizeX,
    floor(aParticleIndex / uPerParticleBufferHeight) * pTexelSizeY );
perParticleUV += vec2(0.5 * pTexelSizeX, 0.5 * pTexelSizeY);
Shouldn't this
vec2 perMotifUV = vec2(
    mod(cellIndex, uPerCloudBufferWidth) * texelSizeX,
    floor(cellIndex / uPerCloudBufferHeight) * texelSizeY );
be this?
vec2 perMotifUV = vec2(
    mod(cellIndex, uPerCloudBufferWidth) * texelSizeX,
    floor(cellIndex / uPerCloudBufferWidth) * texelSizeY ); // <=- use width
And the same for the other: divide by the width, not the height.
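To see why, take a hypothetical 4×2 texture (width 4, height 2) and texel index 5: the correct cell is column mod(5, 4) = 1, row floor(5 / 4) = 1, but dividing by the height gives row floor(5 / 2) = 2, which is off the bottom of the texture. When width and height are equal the two divisions agree, which is why the 1:1 textures worked.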

Encode floating point data in a RGBA texture

I wrote some WebGL code that is based on floating-point textures. But while testing it on a few more devices I found that support for the OES_texture_float extension isn't as widespread as I had thought, so I'm looking for a fallback.
I currently have a luminance floating-point texture with values between -1.0 and 1.0. I'd like to encode this data in a texture format that is available in WebGL without any extensions, probably a simple RGBA unsigned-byte texture.
I'm a bit worried about the potential performance overhead, because the devices that need this fallback are older smartphones and tablets, which already have much weaker GPUs than a modern desktop computer.
How can I emulate floating-point textures on a device that doesn't support them in WebGL?
If you know your range is -1 to +1, the simplest way is to convert that to some integer range and then convert back, using the code from this answer, which packs a value that goes from 0 to 1 into a 32-bit color:
const vec4 bitSh = vec4(256. * 256. * 256., 256. * 256., 256., 1.);
const vec4 bitMsk = vec4(0., vec3(1. / 256.0));
const vec4 bitShifts = vec4(1.) / bitSh;

vec4 pack(float value) {
    vec4 comp = fract(value * bitSh);
    comp -= comp.xxyz * bitMsk;
    return comp;
}

float unpack(vec4 color) {
    return dot(color, bitShifts);
}
Then
const float rangeMin = -1.;
const float rangeMax = 1.;

vec4 convertFromRangeToColor(float value) {
    // Remap [rangeMin, rangeMax] to [0, 1], then pack.
    float zeroToOne = (value - rangeMin) / (rangeMax - rangeMin);
    return pack(zeroToOne);
}

float convertFromColorToRange(vec4 color) {
    // Unpack to [0, 1], then remap back to [rangeMin, rangeMax].
    float zeroToOne = unpack(color);
    return rangeMin + zeroToOne * (rangeMax - rangeMin);
}
This should be a good starting point: http://aras-p.info/blog/2009/07/30/encoding-floats-to-rgba-the-final/
It's intended for encoding values from 0.0 to 1.0, but it should be straightforward to remap to your required range.
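For the -1.0 to 1.0 range in the question, the remap is just a scale and offset applied before encoding and after decoding, e.g. (a trivial sketch):

float toZeroOne(float v)   { return v * 0.5 + 0.5; } // [-1, 1] -> [0, 1]
float fromZeroOne(float v) { return v * 2.0 - 1.0; } // [0, 1] -> [-1, 1]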

Implementing a 32-bit heightmap vertex shader in threejs

I am attempting to repurpose the heightmap shader example found here into one that works with 32 bits of precision instead of 8. The work-in-progress code is on GitHub: https://github.com/bgourlie/three_heightmap
The height map is generated in .NET. The heights are within 0f...200f and are converted into a 32-bit color value (Unity's Color struct) using the following method:
private static Color DepthToColor(float height)
{
    var depthBytes = BitConverter.GetBytes(height);
    int enc = BitConverter.ToInt32(depthBytes, 0);
    return new Color((enc >> 24 & 255) / 255f, (enc >> 16 & 255) / 255f,
                     (enc >> 8 & 255) / 255f, (enc & 255) / 255f);
}
The color data is encoded as a PNG, with the result looking like this:
The vertex shader takes this image data and converts the RGBA values back into the original height value (using the technique answered in my question here):
uniform sampler2D bumpTexture; // defined in heightmap.js
varying float vAmount;
varying vec2 vUV;

void main()
{
    vUV = uv;
    vec4 bumpData = texture2D( bumpTexture, uv );
    vAmount = dot(bumpData, vec4(1.0, 255.0, 65025.0, 16581375.0));
    // Uncomment to see a "flatter" version
    //vAmount = dot(bumpData, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/160581375.0));

    // move the position along the normal
    vec3 newPosition = position + normal * vAmount;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
}
The result is definitely messed up:
I can make it flatter by changing this line:
vAmount = dot(bumpData, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
This will give me a much flatter image, which at least shows a nice outline of the generated terrain, but with an almost entirely flat plane (there is slight, albeit unnoticeable variation):
I assume I'm doing a few things wrong; I just don't know what. I'm not sure if I'm encoding the original float correctly, and I'm not sure if I'm decoding it correctly in the vertex shader (the value I'm getting is certainly outside the range 0...200). I'm also not very experienced with 3D graphics in general, so any pointers as to what I'm doing wrong, or how to achieve this in general, would be greatly appreciated.
Again, the self contained work-in-progress code can be found here: https://github.com/bgourlie/three_heightmap
Your colour:
return new Color((enc >> 24 & 255)/255f, (enc >> 16 & 255)/255f, (enc >> 8 & 255)/255f,
                 (enc & 255)/255f);
... contains the most significant byte of enc in r, the second most significant in g, and so on.
This:
vAmount = dot(bumpData, vec4(1.0, 255.0, 65025.0, 160581375.0));
builds vAmount with r in the least significant byte, g in the next least significant, and so on (though the multiplicands should be 256, 65536, etc.*). So the bytes are in the wrong order. The flatter version:
vAmount = dot(bumpData, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/160581375.0));
gets the bytes in the correct order but scales the output values into the range [0.0, 1.0], which is probably why it looks essentially flat.
So switch the order of either the encoding or the decoding of the bytes, and pick an appropriate scale.
(*) Think about it this way: the smallest number that can go in any channel is 1.0/255.0. The least significant channel covers the range [0.0, 1.0], from 0/255.0 to 255.0/255.0. You want to scale the next channel so that its smallest value is the next step on that scale, i.e. 256.0/255.0. So you need to turn 1.0/255.0 into 256.0/255.0, which you achieve by multiplying by 256, not 255.
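One way to follow that advice is to drop the raw IEEE bit pattern entirely and store a big-endian fixed-point fraction instead, which the shader can decode with a single dot product. A sketch, assuming (this is not the poster's current encoder) the C# side writes round(height / 200 * 4294967295) into the four bytes, most significant byte in r; since 0xFFFFFFFF = 255 × 16843009, the implicit 1/255 channel scaling cancels out:

// Decode a big-endian 32-bit fixed-point fraction from (r, g, b, a),
// then rescale to the original 0..200 height range. Requires highp floats
// (the default in vertex shaders); effective precision is bounded by the
// GPU's float mantissa rather than 8 bits.
float zeroToOne = dot(bumpData, vec4(16777216.0, 65536.0, 256.0, 1.0)) / 16843009.0;
vAmount = zeroToOne * 200.0;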
If you encode a wide integer into the components of an RGBA vector, it is essential that you turn off filtering so that no interpolation happens between the values. Also, OpenGL may internally convert to a different format, but that should only reduce your sample depth.
