How do I convert a vec4 rgba value to a float? - opengl-es

I packed some float data into a texture as unsigned bytes, my only option in WebGL. Now I would like to unpack it in the vertex shader. When I sample a pixel I get a vec4 that is really one of my floats. How do I convert from the vec4 to a float?

The following code is specifically for the iPhone 4 GPU using OpenGL ES 2.0. I have no experience with WebGL, so I can't claim to know how the code will work in that context. Furthermore, the main problem here is that highp float is not 32 bits but 24 bits.
My solution is for fragment shaders - I didn't try it in the vertex shader, but it shouldn't be any different. In order to use the code you will need to get the RGBA texel from a sampler2D uniform and make sure that the values of the R, G, B and A channels are each between 0.0 and 255.0. This is easy to achieve, as follows:
highp vec4 rgba = texture2D(textureSamplerUniform, texcoordVarying)*255.0;
You should be aware, though, that the endianness of your machine dictates the correct order of your bytes. The above code assumes that floats are stored in big-endian order. If your results are wrong, just swap the order of the data by writing
rgba.rgba=rgba.abgr;
immediately after the line where you set it. Alternatively, swap the indices on rgba. I think the line above is more intuitive, though, and less prone to careless errors.
I am not sure if it works for all given input. I tested a large range of numbers and found that decode32 and encode32 are NOT exact inverses. I've left out the code I used to test it.
#pragma STDGL invariant(all)

highp vec4 encode32(highp float f) {
    highp float F = abs(f);
    highp float Sign = step(0.0, -f);
    highp float Exponent = floor(log2(F));
    highp float Mantissa = exp2(-Exponent) * F;
    Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa));
    highp vec4 rgba;
    rgba[0] = 128.0 * Sign + floor(Exponent * exp2(-1.0));
    rgba[1] = 128.0 * mod(Exponent, 2.0) + mod(floor(Mantissa * 128.0), 128.0);
    rgba[2] = floor(mod(floor(Mantissa * exp2(23.0 - 8.0)), exp2(8.0)));
    rgba[3] = floor(exp2(23.0) * mod(Mantissa, exp2(-15.0)));
    return rgba;
}

highp float decode32(highp vec4 rgba) {
    highp float Sign = 1.0 - step(128.0, rgba[0]) * 2.0;
    highp float Exponent = 2.0 * mod(rgba[0], 128.0) + step(128.0, rgba[1]) - 127.0;
    highp float Mantissa = mod(rgba[1], 128.0) * 65536.0 + rgba[2] * 256.0 + rgba[3] + float(0x800000);
    highp float Result = Sign * exp2(Exponent) * (Mantissa * exp2(-23.0));
    return Result;
}

void main()
{
    highp float result;
    highp vec4 rgba = encode32(-10.01);
    result = decode32(rgba);
}
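Note that the test main() above just computes the round trip and discards it. To actually store the encoded value from a fragment shader into an 8-bit RGBA target, the byte values have to be scaled back to [0, 1] before writing; a minimal sketch (replacing the main() above):

void main()
{
    highp vec4 bytes = encode32(-10.01); // byte values in [0, 255]
    gl_FragColor = bytes / 255.0;        // normalized for an RGBA8 render target
}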

Twerdster posted some excellent code in his answer, so all credit goes to him. I'm posting this new answer since comments don't allow for nicely syntax-colored code blocks and I wanted to share some code. But if you like the code, please upvote Twerdster's original answer.
In his previous post, Twerdster mentioned that the decode and encode might not work for all values.
To further test this and validate the results, I made a Java program. While porting the code I tried to stay as close as possible to the shader code (therefore I implemented some helper functions).
Note: I also use a store/load function to simulate what happens when you write to and read from a texture.
I found out that:
You need a special case for zero.
You might also need a special case for infinity, but I did not implement that to keep the shader simple (i.e. fast).
Because of rounding errors the result was sometimes wrong, therefore:
Subtract 1 from the exponent when rounding leaves the mantissa improperly normalised (i.e. mantissa < 1).
Change float Mantissa = (exp2(-Exponent) * F); to float Mantissa = F / exp2(Exponent); to reduce precision errors.
Use float Exponent = floor(log2(F)); to calculate the exponent (simplified by the new mantissa check).
With these small modifications I got equal output on almost all inputs, and only small errors between the original and encoded/decoded value when things do go wrong, whereas in Twerdster's original implementation rounding errors often resulted in the wrong exponent (making the result off by a factor of two).
Please note that this is a Java test application which I wrote to test the algorithm. I hope it will also work when ported to the GPU. If anybody tries to run it on a GPU, please leave a comment with your experience.
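Back-ported to GLSL, the modified encode32 would look roughly like this (a sketch of the changes listed above; I have only verified the Java version below):

highp vec4 encode32(highp float f) {
    highp float F = abs(f);
    if (F == 0.0) {
        return vec4(0.0); // special case for zero
    }
    highp float Sign = step(0.0, -f);
    highp float Exponent = floor(log2(F));
    highp float Mantissa = F / exp2(Exponent); // fewer precision errors than exp2(-Exponent) * F
    if (Mantissa < 1.0)
        Exponent -= 1.0; // re-normalise when rounding left the mantissa below 1
    Exponent += 127.0;   // bias
    highp vec4 rgba;
    rgba[0] = 128.0 * Sign + floor(Exponent * exp2(-1.0));
    rgba[1] = 128.0 * mod(Exponent, 2.0) + mod(floor(Mantissa * 128.0), 128.0);
    rgba[2] = floor(mod(floor(Mantissa * exp2(23.0 - 8.0)), exp2(8.0)));
    rgba[3] = floor(exp2(23.0) * mod(Mantissa, exp2(-15.0)));
    return rgba;
}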
And here is the Java code, with a simple test that tries different numbers until it fails.
import java.io.PrintStream;
import java.util.Random;

public class BitPacking {

    public static float decode32(float[] v) {
        float[] rgba = mult(255, v);
        float sign = 1.0f - step(128.0f, rgba[0]) * 2.0f;
        float exponent = 2.0f * mod(rgba[0], 128.0f) + step(128.0f, rgba[1]) - 127.0f;
        if (exponent == -127)
            return 0;
        float mantissa = mod(rgba[1], 128.0f) * 65536.0f + rgba[2] * 256.0f + rgba[3] + ((float) 0x800000);
        return sign * exp2(exponent - 23.0f) * mantissa;
    }

    public static float[] encode32(float f) {
        float F = abs(f);
        if (F == 0) {
            return new float[]{0, 0, 0, 0};
        }
        float Sign = step(0.0f, -f);
        float Exponent = floor(log2(F));
        float Mantissa = F / exp2(Exponent);
        if (Mantissa < 1)
            Exponent -= 1;
        Exponent += 127;
        float[] rgba = new float[4];
        rgba[0] = 128.0f * Sign + floor(Exponent * exp2(-1.0f));
        rgba[1] = 128.0f * mod(Exponent, 2.0f) + mod(floor(Mantissa * 128.0f), 128.0f);
        rgba[2] = floor(mod(floor(Mantissa * exp2(23.0f - 8.0f)), exp2(8.0f)));
        rgba[3] = floor(exp2(23.0f) * mod(Mantissa, exp2(-15.0f)));
        return mult(1 / 255.0f, rgba);
    }

    // shader built-ins
    public static float exp2(float x) {
        return (float) Math.pow(2, x);
    }

    public static float[] step(float edge, float[] x) {
        float[] result = new float[x.length];
        for (int i = 0; i < x.length; i++)
            result[i] = x[i] < edge ? 0.0f : 1.0f;
        return result;
    }

    public static float step(float edge, float x) {
        return x < edge ? 0.0f : 1.0f;
    }

    public static float mod(float x, float y) {
        return x - y * floor(x / y);
    }

    public static float floor(float x) {
        return (float) Math.floor(x);
    }

    public static float pow(float x, float y) {
        return (float) Math.pow(x, y);
    }

    public static float log2(float x) {
        return (float) (Math.log(x) / Math.log(2));
    }

    public static float log10(float x) {
        return (float) (Math.log(x) / Math.log(10));
    }

    public static float abs(float x) {
        return (float) Math.abs(x);
    }

    public static float log(float x) {
        return (float) Math.log(x);
    }

    public static float exponent(float x) {
        return floor((float) (Math.log(x) / Math.log(10)));
    }

    public static float mantissa(float x) {
        return x / pow(10, exponent(x));
    }

    // shorter scalar-times-vector multiplication
    private static float[] mult(float scalar, float[] w) {
        float[] result = new float[4];
        for (int i = 0; i < 4; i++)
            result[i] = scalar * w[i];
        return result;
    }

    // simulate storage and retrieval in a 4-channel/8-bit texture
    private static float[] load(int[] v) {
        return new float[]{v[0] / 255f, v[1] / 255f, v[2] / 255f, v[3] / 255f};
    }

    private static int[] store(float[] v) {
        return new int[]{((int) (v[0] * 255)) & 0xff, ((int) (v[1] * 255)) & 0xff,
                         ((int) (v[2] * 255)) & 0xff, ((int) (v[3] * 255)) & 0xff};
    }

    // testing until failure, plus some specific hard cases separately
    public static void main(String[] args) {
        //for (float v : new float[]{-2097151.0f}) { // small error here
        for (float v : new float[]{3.4028233e+37f, 8191.9844f, 1.0f, 0.0f, 0.5f, 1.0f / 3,
                0.1234567890f, 2.1234567890f, -0.1234567890f, 1234.567f}) {
            float output = decode32(load(store(encode32(v))));
            PrintStream stream = (v == output) ? System.out : System.err;
            stream.println(v + " ?= " + output);
        }
        //System.exit(0);
        Random r = new Random();
        float max = 3200000f;
        float min = -max;
        boolean error = false;
        int trials = 0;
        while (!error) {
            float fin = min + r.nextFloat() * ((max - min) + 1);
            float fout = decode32(load(store(encode32(fin))));
            if (trials % 10000 == 0)
                System.out.print('.');
            if (trials % 1000000 == 0)
                System.out.println();
            if (fin != fout) {
                System.out.println();
                System.out.println("correct trials = " + trials);
                System.out.println(fin + " vs " + fout);
                error = true;
            }
            trials++;
        }
    }
}

I tried Arjan's solution, but it returned invalid values for 0, 1, 2 and 4. There was a bug in the packing of the exponent, which I changed so that the exponent takes one full 8-bit channel and the sign is packed with the mantissa:
// unpack a 32-bit float from 4 8-bit, [0;1]-clamped floats
float unpackFloat4(vec4 _packed)
{
    vec4 rgba = 255.0 * _packed;
    float sign = step(-128.0, -rgba[1]) * 2.0 - 1.0;
    float exponent = rgba[0] - 127.0;
    if (abs(exponent + 127.0) < 0.001)
        return 0.0;
    float mantissa = mod(rgba[1], 128.0) * 65536.0 + rgba[2] * 256.0 + rgba[3] + float(0x800000);
    return sign * exp2(exponent - 23.0) * mantissa;
}

// pack a 32-bit float into 4 8-bit, [0;1]-clamped floats
vec4 packFloat(float f)
{
    float F = abs(f);
    if (F == 0.0)
    {
        return vec4(0, 0, 0, 0);
    }
    float Sign = step(0.0, -f);
    float Exponent = floor(log2(F));
    float Mantissa = F / exp2(Exponent);
    // denormalized values if all exponent bits are zero
    if (Mantissa < 1.0)
        Exponent -= 1.0;
    Exponent += 127.0;
    vec4 rgba;
    rgba[0] = Exponent;
    rgba[1] = 128.0 * Sign + mod(floor(Mantissa * 128.0), 128.0);
    rgba[2] = floor(mod(floor(Mantissa * exp2(23.0 - 8.0)), exp2(8.0)));
    rgba[3] = floor(exp2(23.0) * mod(Mantissa, exp2(-15.0)));
    return (1.0 / 255.0) * rgba;
}
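A quick round-trip sanity check might look like this (hypothetical usage; the two halves would normally live in different passes):

vec4 encoded = packFloat(-10.01);      // write this to an RGBA8 render target
float decoded = unpackFloat4(encoded); // read back later; decoded is approximately -10.01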

Since you didn't deign to give us the exact code you used to create and upload your texture, I can only guess at what you're doing.
You seem to be creating a JavaScript array of floating-point numbers. You then create a Uint8Array, passing that array to the constructor.
According to the WebGL spec (or rather, the spec that the WebGL spec refers to when ostensibly specifying this behavior), the conversion from floats to unsigned bytes happens in one of two ways, based on the destination. If the destination is considered "clamped", then it clamps the number to the destination range, namely [0, 255] for your case. If the destination is not considered "clamped", then it is taken modulo 2^8. The WebGL "specification" is sufficiently poor that it is not entirely clear whether the construction of Uint8Array is considered clamped or not. Whether clamped or taken modulo 2^8, the decimal point is chopped off and the integer value stored.
However, when you give this data to WebGL, you told it to interpret the bytes as normalized unsigned integer values. This means that the input values on the range [0, 255] will be accessed by users of the texture as [0, 1] floating-point values.
So if your input array had the value 183.45, the value in the Uint8Array would be 183. The value in the texture would be 183/255, or 0.718. If your input value was 0.45, the Uint8Array would hold 0, and the texture result would be 0.0.
Now, because you passed the data as GL_RGBA, that means that every 4 unsigned bytes will be taken as a single texel. So every call to texture will fetch those particular four values (at the given texture coordinate, using the given filtering parameters), thus returning a vec4.
It is not clear what you intend to do with this floating-point data, so it is hard to make suggestions as to how best to pass float data to a shader. However, a general solution would be to use the OES_texture_float extension and actually create a texture that stores floating-point data. Of course, if it isn't available, you'll still have to find a way to do what you want.
BTW, Khronos really should be ashamed of themselves for even calling WebGL a specification. It barely specifies anything; it's just a bunch of references to other specifications, which makes finding the effects of anything exceedingly difficult.

You won't be able to just interpret the 4 unsigned bytes as the bits of a float value (which I assume you want) in a shader (at least not in GLES or WebGL, I think). What you can do is store not the float's bit representation in the 4 ubytes, but the bits of the mantissa (or a fixed-point representation). For this you need to know the approximate range of the floats (I'll assume [0,1] here for simplicity; otherwise you have to scale differently, of course):
r = clamp(int(2^8 * f), 0, 255);
g = clamp(int(2^16 * f), 0, 255);
b = clamp(int(2^24 * f), 0, 255); //only have 24 bits of precision anyway
Of course you can also work directly with the mantissa bits. And then in the shader you can just reconstruct it that way, using the fact that the components of the vec4 are all in [0,1]:
f = (v.r) + (v.g / 2^8) + (v.b / 2^16);
Although I'm not sure if this will result in the exact same value, the powers of two should help a bit there.
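For reference, a GLSL sketch of this fixed-point scheme (assuming f is in [0, 1]; this mirrors the idea above and is only approximate, as noted):

// Pack: each channel holds the next 8 bits of a fixed-point representation of f.
vec3 packFixed(float f) {
    float r = clamp(floor(f * 256.0), 0.0, 255.0);
    float g = clamp(floor(fract(f * 256.0) * 256.0), 0.0, 255.0);
    float b = clamp(floor(fract(f * 65536.0) * 256.0), 0.0, 255.0);
    return vec3(r, g, b) / 255.0; // stored as normalized unsigned bytes
}
// Unpack: undo the shifts; approximate because of the 255-vs-256 scaling.
float unpackFixed(vec3 v) {
    return v.r + v.g / 256.0 + v.b / 65536.0;
}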

Related

How to create a 3D random gradient out of 3 passed values in a fragment shader?

I need to create an animated smoke-like texture. I can achieve this with 3D Perlin noise using gradients passed from the CPU side, which I did:
But on the current project I cannot pass an array from the normal C++ code. I'm limited to only writing HLSL shaders (although all the following stuff is written in GLSL, as it's easier to set up). So I thought I needed to generate some sort of random values for my gradients inside the fragment shader. While investigating how to tackle this problem, I figured out that I can actually use hash functions as my pseudo-random values. I'm following these articles (the first and the second), so I chose to use the PCG hash for my purposes. I managed to generate decently looking value noise with the following code.
#version 420 core

#define MAX_TABLE_SIZE 256
#define MASK (MAX_TABLE_SIZE - 1)

in vec2 TexCoord;
out vec4 FragColor;

uniform float Time;

// Note: 'input' is a reserved word in GLSL, so the parameter is named 'v' here.
uint pcg_hash(uint v)
{
    uint state = v * 747796405u + 2891336453u;
    uint word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}

// This is taken from here
// https://stackoverflow.com/a/17479300/9778826
float ConvertToFloat(uint n)
{
    uint ieeeMantissa = 0x007FFFFFu; // binary32 mantissa bitmask
    uint ieeeOne = 0x3F800000u;      // 1.0 in IEEE binary32
    n &= ieeeMantissa;
    n |= ieeeOne;
    float f = uintBitsToFloat(n);
    return f - 1.0;
}

float Random1(uint x, uint y)
{
    uint hash = pcg_hash(y ^ pcg_hash(x));
    return ConvertToFloat(hash);
}

float ValueNoise(vec2 p)
{
    int xi = int(p.x);
    uint rx0 = uint(xi & MASK);
    uint rx1 = uint((xi + 1) & MASK);
    int yi = int(p.y);
    uint ry0 = uint(yi & MASK);
    uint ry1 = uint((yi + 1) & MASK);
    float tx = p.x - float(xi);
    float ty = p.y - float(yi);
    float r00 = Random1(rx0, ry0);
    float r10 = Random1(rx1, ry0);
    float r01 = Random1(rx0, ry1);
    float r11 = Random1(rx1, ry1);
    float sx = smoothstep(0.0, 1.0, tx);
    float sy = smoothstep(0.0, 1.0, ty);
    float lerp0 = mix(r00, r10, sx);
    float lerp1 = mix(r01, r11, sx);
    return mix(lerp0, lerp1, sy);
}

float FractalNoise(vec2 point)
{
    float sum = 0.0;
    float frequency = 0.01;
    float amplitude = 1.0;
    int nLayers = 5;
    for (int i = 0; i < nLayers; i++)
    {
        float noise = ValueNoise(point * frequency) * amplitude * 0.5;
        sum += noise;
        amplitude *= 0.5;
        frequency *= 2.0;
    }
    return sum;
}

void main()
{
    // Coordinates go from 0.0 to 1.0 both horizontally and vertically
    vec2 Point = TexCoord * 2000.0;
    float noise = FractalNoise(Point);
    FragColor = vec4(noise, noise, noise, 1.0);
}
What I want, however, is to generate a 3D random gradient (which is really just a 3D random vector) from three arguments that I pass, to then feed into the Perlin noise function. But I don't know how to do this properly. To clarify the three arguments: I need animated Perlin noise, which means I need a three-component gradient at every joint of the 3D lattice. The arguments are x and y plus the time variable, in that strict order. Say a point (1, 4, 5) produces a gradient (0.1, 0.03, 0.78); then a point (4, 1, 5) should produce a completely different gradient, say (0.22, 0.95, 0.43). So again, the order matters.
What I came up with (and what I could understand from the articles in question) is that I can hash the arguments sequentially and then use the resulting value as a seed for the same hash function, which will now work as a random number generator. So I wrote this function:
vec3 RandomGradient3(int x, int y, int z)
{
    uint seed = pcg_hash(z ^ pcg_hash(y ^ pcg_hash(x)));
    uint s1 = seed ^ pcg_hash(seed);
    uint s2 = s1 ^ pcg_hash(s1);
    uint s3 = s2 ^ pcg_hash(s2);
    float g1 = ConvertToFloat(s1);
    float g2 = ConvertToFloat(s2);
    float g3 = ConvertToFloat(s3);
    return vec3(g1, g2, g3);
}
And the gradient I then feed into the 3D Perlin noise function:
float CalculatePerlin3D(vec2 p)
{
    float z = Time; // a uniform variable passed from the CPU side

    int xi0 = int(floor(p.x)) & MASK;
    int yi0 = int(floor(p.y)) & MASK;
    int zi0 = int(floor(z)) & MASK;
    int xi1 = (xi0 + 1) & MASK;
    int yi1 = (yi0 + 1) & MASK;
    int zi1 = (zi0 + 1) & MASK;

    float tx = p.x - floor(p.x);
    float ty = p.y - floor(p.y);
    float tz = z - floor(z);

    float u = smoothstep(0.0, 1.0, tx);
    float v = smoothstep(0.0, 1.0, ty);
    float w = smoothstep(0.0, 1.0, tz);

    vec3 c000 = RandomGradient3(xi0, yi0, zi0);
    vec3 c100 = RandomGradient3(xi1, yi0, zi0);
    vec3 c010 = RandomGradient3(xi0, yi1, zi0);
    vec3 c110 = RandomGradient3(xi1, yi1, zi0);
    vec3 c001 = RandomGradient3(xi0, yi0, zi1);
    vec3 c101 = RandomGradient3(xi1, yi0, zi1);
    vec3 c011 = RandomGradient3(xi0, yi1, zi1);
    vec3 c111 = RandomGradient3(xi1, yi1, zi1);

    float x0 = tx, x1 = tx - 1.0;
    float y0 = ty, y1 = ty - 1.0;
    float z0 = tz, z1 = tz - 1.0;

    vec3 p000 = vec3(x0, y0, z0);
    vec3 p100 = vec3(x1, y0, z0);
    vec3 p010 = vec3(x0, y1, z0);
    vec3 p110 = vec3(x1, y1, z0);
    vec3 p001 = vec3(x0, y0, z1);
    vec3 p101 = vec3(x1, y0, z1);
    vec3 p011 = vec3(x0, y1, z1);
    vec3 p111 = vec3(x1, y1, z1);

    float a = mix(dot(c000, p000), dot(c100, p100), u);
    float b = mix(dot(c010, p010), dot(c110, p110), u);
    float c = mix(dot(c001, p001), dot(c101, p101), u);
    float d = mix(dot(c011, p011), dot(c111, p111), u);

    float e = mix(a, b, v);
    float f = mix(c, d, v);
    float noise = mix(e, f, w);

    float unsignedNoise = (noise + 1.0) / 2.0;
    return unsignedNoise;
}
With this RandomGradient3 function, the following noise texture is produced:
So the gradients seem to be correlated, and hence the noise is not really random. The question is: how can I properly randomize these s1, s2 and s3 in RandomGradient3? I'm a real beginner in all this random-number-generation stuff and certainly not a math guy.
The 3D Perlin noise function itself seems to be fine, because if I feed it predefined gradients from the CPU it produces the expected result.
Oh, well. After I posted the question, I realized that I hadn't scaled the generated gradients properly! The function produced gradients in the range [0.0, 1.0], but we actually need [-1.0, 1.0] to make it work. So I rewrote this piece of code
vec3 RandomGradient3(int x, int y, int z)
{
    uint seed = pcg_hash(z ^ pcg_hash(y ^ pcg_hash(x)));
    uint s1 = seed ^ pcg_hash(seed);
    uint s2 = s1 ^ pcg_hash(s1);
    uint s3 = s2 ^ pcg_hash(s2);
    float g1 = ConvertToFloat(s1);
    float g2 = ConvertToFloat(s2);
    float g3 = ConvertToFloat(s3);
    return vec3(g1, g2, g3);
}
To this:
vec3 RandomGradient3(int x, int y, int z)
{
    uint seed = pcg_hash(z ^ pcg_hash(y ^ pcg_hash(x)));
    uint s1 = seed ^ pcg_hash(seed);
    uint s2 = s1 ^ pcg_hash(s1);
    uint s3 = s2 ^ pcg_hash(s2);
    float g1 = (ConvertToFloat(s1) - 0.5) * 2.0;
    float g2 = (ConvertToFloat(s2) - 0.5) * 2.0;
    float g3 = (ConvertToFloat(s3) - 0.5) * 2.0;
    return vec3(g1, g2, g3);
}
The animation now looks as expected:
I've got another question, though. Do these computations produce really good pseudo-random numbers that we can rely on to generate random textures? Or is there a better way to do this? Obviously they produce a good enough result, as the GIF above shows, but still. Sure, I could dive into the statistics, but maybe somebody has a quick answer.

GLES Encode/Decode 32bits float to 2x16bits

I'm trying to optimize texture memory, and all that stops me from converting a GL_RGBA32F LUT to GL_RGBA16F is one index that (might) exceed the limit. Is there any way that I could, in C, take a float and split it into 2 values, and then in GLSL reconstruct that float from the 2 values stored in the LUT?
What I mean is something like this:
[ C ]
float v0,v1, *pixel_array;
magic_function_in_c( my_big_value, &v0, &v1 );
pixel_array[ index++ ] = pos.x; // R
pixel_array[ index++ ] = pos.y; // G
pixel_array[ index++ ] = v0; // B
pixel_array[ index++ ] = v1; // A
[ GLSL ]
vec4 lookup = texture2D( sampler0, texcoord );
float v = magic_function_in_glsl( lookup.b, lookup.a );
PS: I'm using GLES 2.0 (to also be compatible with WebGL)
If you just need more range than float16 provides, and only in one direction (larger or smaller), you can multiply by a fixed scaling factor.
For instance, if you need to store some number N greater than 65504 (the float16 maximum), you can 'encode' by dividing N by 2 and 'decode' by multiplying by 2. This shifts the effective range up, sacrificing range near 1/N while expanding the range maximum for +/-N. You can swap the multiply and divide if you need more range in 1/N than in +/-N. You can use the second value to store what the scaling factor is, if you need it to change based on the data.
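As a concrete sketch of the fixed-scale idea (assuming a scale factor of 2.0 agreed on by both sides):

// C side: pixel_array[index++] = N / 2.0;  // 'encode' by dividing
// GLSL side: 'decode' by multiplying the scale back in.
float N = texture2D(sampler0, texcoord).b * 2.0;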
You can also experiment with exp2 and log2, something like:
#include <math.h>
#include <stdint.h>

/* f32_to_f16 / f16_to_f32 are assumed to be your own float <-> half conversion helpers. */
void
magic_function_in_c(float fVal, uint16_t* hExponent, uint16_t* hMult)
{
    float fExponent = log2f(fVal);
    *hExponent = f32_to_f16(fExponent);

    // Compensate for f32->f16 precision loss
    float fActualExponent = f16_to_f32(*hExponent);
    float fValFromExponent = exp2f(fActualExponent);

    float fMult;
    if (fValFromExponent != 0.0f) {
        fMult = fVal / fValFromExponent;
    } else if (fVal < 0.0f) {
        fMult = -1.0f;
    } else {
        fMult = 1.0f;
    }
    *hMult = f32_to_f16(fMult);
}

highp float
magic_function_in_glsl(highp float hExponent, highp float hMult)
{
    return exp2(hExponent) * hMult;
}
Note that none of this will work if you don't have highp floats in your GLSL shader.

Encode floating point data in a RGBA texture

I wrote some WebGL code that is based on floating point textures. But while testing it on a few more devices I found that support for the OES_texture_float extension isn't as widespread as I had thought. So I'm looking for a fallback.
I currently have a luminance floating-point texture with values between -1.0 and 1.0. I'd like to encode this data in a texture format that is available in WebGL without any extensions, so probably a simple RGBA unsigned-byte texture.
I'm a bit worried about the potential performance overhead because the cases where this fallback is needed are older smartphones or tablets which already have much weaker GPUs than a modern desktop computer.
How can I emulate floating point textures on a device that doesn't support them in WebGL?
If you know your range is -1 to +1, the simplest way is just to convert that to some integer range and then convert back. Using the code from this answer, which packs a value that goes from 0 to 1 into a 32-bit color:
const vec4 bitSh = vec4(256. * 256. * 256., 256. * 256., 256., 1.);
const vec4 bitMsk = vec4(0., vec3(1. / 256.0));
const vec4 bitShifts = vec4(1.) / bitSh;

vec4 pack(float value) {
    vec4 comp = fract(value * bitSh);
    comp -= comp.xxyz * bitMsk;
    return comp;
}

float unpack(vec4 color) {
    return dot(color, bitShifts);
}
Then
const float rangeMin = -1.;
const float rangeMax = 1.;

vec4 convertFromRangeToColor(float value) {
    float zeroToOne = (value - rangeMin) / (rangeMax - rangeMin);
    return pack(zeroToOne);
}

float convertFromColorToRange(vec4 color) {
    float zeroToOne = unpack(color);
    return rangeMin + zeroToOne * (rangeMax - rangeMin);
}
This should be a good starting point: http://aras-p.info/blog/2009/07/30/encoding-floats-to-rgba-the-final/
It's intended for encoding to 0.0 to 1.0, but should be straightforward to remap to your required range.

Extended floating point precision on mobile GPU

I'm trying to compute the gradient vector field of an image on the GPU using OpenGL ES 2.0. I found a CPU implementation for it, which I use as a reference to compare against my GPU implementation. The challenge here is that the CPU implementation relies on the Java float type (32 bits), whereas my GPU implementation uses lowp float (8 bits). I know I could use mediump or highp to get better results, but I would still like to keep using lowp float to make sure my code will run on the poorest possible hardware.
The first few steps for calculating the gradient vector field are very simple:
compute a normalised greyscale: (red + green + blue) / 3.0
compute the edge map: (right pixel - left pixel) / 2.0 and (up pixel - down pixel) / 2.0 (see the sketch after this list)
compute the Laplacian (a bit more complex, but there is no need to get into the details of this now)
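Here is a sketch of step 2 as a GLSL ES 2.0 edge-map function (assuming the greyscale value sits in the red channel, and a hypothetical uniform u_texelSize = vec2(1.0 / textureWidth, 1.0 / textureHeight)):

uniform sampler2D u_image;
uniform vec2 u_texelSize;

vec2 edgeMap(vec2 uv) {
    float l = texture2D(u_image, uv - vec2(u_texelSize.x, 0.0)).r; // left pixel
    float r = texture2D(u_image, uv + vec2(u_texelSize.x, 0.0)).r; // right pixel
    float d = texture2D(u_image, uv - vec2(0.0, u_texelSize.y)).r; // down pixel
    float u = texture2D(u_image, uv + vec2(0.0, u_texelSize.y)).r; // up pixel
    return vec2(r - l, u - d) * 0.5; // central differences
}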
Currently, without doing anything fancy, I'm able to mimic step 1 exactly, such that the resulting image from the CPU implementation is the same as the one from the GPU.
Unfortunately, I'm already stuck on step 2, because my edge map calculation is not accurate enough on the GPU.
So I've tried to implement extended-precision floating point, inspired by http://andrewthall.org/papers/df64_qf128.pdf .
I'm fairly new to opengl-es, so I'm not even sure I did things correctly here, but below are the operations I intended to code in order to work around the precision loss I'm currently suffering from.
vec2 split(float a)
{
    float t = a * (2e-8 + 1.0);
    float aHi = t - (t - a);
    float aLo = a - aHi;
    return vec2(aHi, aLo);
}

vec2 twoProd(float a, float b)
{
    float p = a * b;
    vec2 aS = split(a);
    vec2 bS = split(b);
    float err = (((aS.x * bS.x) - p) + (aS.x * bS.y) + (aS.y * bS.x)) + (aS.y * bS.y);
    return vec2(p, err);
}

vec2 FMAtwoProd(float a, float b)
{
    float x = a * b;
    float y = a * b - x;
    return vec2(x, y);
}

vec2 div(vec2 a, vec2 b)
{
    float q = a.x / b.x;
    vec2 res = twoProd(q, b.x);
    float r = (a.x - res.x) - res.y;
    return vec2(q, r);
}

vec2 div(vec2 a, float b)
{
    return div(a, split(b));
}

vec2 quickTwoSum(float a, float b)
{
    float s = a + b;
    float e = b - (s - a);
    return vec2(s, e);
}

vec2 twoSum(float a, float b)
{
    float s = a + b;
    float v = s - a;
    float e = (a - (s - v)) + (b - v);
    return vec2(s, e);
}

vec2 add(vec2 a, vec2 b)
{
    vec2 s = twoSum(a.x, b.x);
    vec2 t = twoSum(a.y, b.y);
    s.y += t.x;
    s = quickTwoSum(s.x, s.y);
    s.y += t.y;
    s = quickTwoSum(s.x, s.y);
    return s;
}

vec2 add(vec2 a, float b)
{
    return add(a, split(b));
}

vec2 mult2(vec2 a, vec2 b)
{
    vec2 p = twoProd(a.x, b.x);
    p.y += a.x * b.y;
    p.y += a.y * b.x;
    p = quickTwoSum(p.x, p.y);
    return p;
}

vec2 mult(vec2 a, float b)
{
    return mult2(a, split(b));
}
Obviously, I must be doing something wrong here, or I am missing some quite fundamental concepts, as I'm getting the same results whether I use simple operations or my extended floating-point operations...
The challenge here is that the cpu implementation relies on java type float (32 bits) whereas my gpu implementation is using lowp float (8 bits).
lowp does not actually imply the number of bits used for floating-point arithmetic. It is more to do with the range of values that must be expressible and the minimum distinguishable value (precision) - you can use this to figure out a minimum number of bits, but GLSL never discusses it as such.
Currently, without doing anything fancy, i'm able to mimic exactly step 1 such that the image result from the cpu implementation is the same as the one from the gpu.
That is lucky, because an immediate problem in your description comes from the fact that lowp is only guaranteed to represent values in the range [-2.0, 2.0]. If you try to normalize a low-precision floating-point value by dividing it by 3 (as shown in step 1), that may or may not work. In the worst case this will not work, because the floating-point value will never reach 3.0. However, on some GPUs it may work, because there may be no difference between lowp and mediump, or a GPU's lowp may exceed the minimum requirements outlined in section 4.5.2 "Precision Qualifiers" of the GLSL ES 1.00 specification.
... still I would like to keep on using lowp float to make sure my code will be able to run on the poorest possible hardware.
If you are targeting the lowest-end hardware possible, keep in mind that ES 2.0 requires mediump support in all shader stages. The only thing lowp might get you is improved performance on some GPUs, but any GPU that can host ES 2.0 is one that supports medium precision floating-point and your algorithm is one that requires a range greater than lowp guarantees.
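In practice that means requesting mediump (or highp where available) explicitly in the fragment shader instead of relying on lowp, for example:

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;   // use high precision where the GPU offers it
#else
precision mediump float; // mediump support is mandatory in ES 2.0 fragment shaders
#endif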

Random / noise functions for GLSL

As the GPU driver vendors don't usually bother to implement noiseX in GLSL, I'm looking for a "graphics randomization swiss army knife" utility function set, preferably optimised for use within GPU shaders. I prefer GLSL, but code in any language will do for me; I'm OK with translating it to GLSL on my own.
Specifically, I'd expect:
a) Pseudo-random functions - N-dimensional, uniform distribution over [-1,1] or over [0,1], calculated from M-dimensional seed (ideally being any value, but I'm OK with having the seed restrained to, say, 0..1 for uniform result distribution). Something like:
float random (T seed);
vec2 random2 (T seed);
vec3 random3 (T seed);
vec4 random4 (T seed);
// T being either float, vec2, vec3, vec4 - ideally.
b) Continuous noise like Perlin noise - again, N-dimensional, +- uniform distribution, with a constrained set of values and, well, looking good (some options to configure the appearance, like Perlin levels, could be useful too). I'd expect signatures like:
float noise (T coord, TT seed);
vec2 noise2 (T coord, TT seed);
// ...
I'm not very much into random number generation theory, so I'd most eagerly go for a pre-made solution, but I'd also appreciate answers like "here's a very good, efficient 1D rand(), and let me explain you how to make a good N-dimensional rand() on top of it..." .
For very simple pseudorandom-looking stuff, I use this oneliner that I found on the internet somewhere:
float rand(vec2 co){
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}
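Typical usage is to feed it something that varies per fragment (and per frame, if the noise should animate), for example:

// u_time here is a hypothetical uniform updated every frame.
float n = rand(gl_FragCoord.xy + vec2(u_time));
gl_FragColor = vec4(vec3(n), 1.0);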
You can also generate a noise texture using whatever PRNG you like, then upload this in the normal fashion and sample the values in your shader; I can dig up a code sample later if you'd like.
Also, check out this file for GLSL implementations of Perlin and Simplex noise, by Stefan Gustavson.
It occurs to me that you could use a simple integer hash function and insert the result into a float's mantissa. IIRC the GLSL spec guarantees 32-bit unsigned integers and IEEE binary32 float representation so it should be perfectly portable.
I gave this a try just now. The results are very good: it looks exactly like static with every input I tried, no visible patterns at all. In contrast the popular sin/fract snippet has fairly pronounced diagonal lines on my GPU given the same inputs.
One disadvantage is that it requires GLSL v3.30. And although it seems fast enough, I haven't empirically quantified its performance. AMD's Shader Analyzer claims 13.33 pixels per clock for the vec2 version on a HD5870. Contrast with 16 pixels per clock for the sin/fract snippet. So it is certainly a little slower.
Here's my implementation. I left it in various permutations of the idea to make it easier to derive your own functions from.
/*
static.frag
by Spatial
05 July 2013
*/
#version 330 core
uniform float time;
out vec4 fragment;
// A single iteration of Bob Jenkins' One-At-A-Time hashing algorithm.
uint hash( uint x ) {
    x += ( x << 10u );
    x ^= ( x >>  6u );
    x += ( x <<  3u );
    x ^= ( x >> 11u );
    x += ( x << 15u );
    return x;
}
// Compound versions of the hashing algorithm I whipped together.
uint hash( uvec2 v ) { return hash( v.x ^ hash(v.y) ); }
uint hash( uvec3 v ) { return hash( v.x ^ hash(v.y) ^ hash(v.z) ); }
uint hash( uvec4 v ) { return hash( v.x ^ hash(v.y) ^ hash(v.z) ^ hash(v.w) ); }
// Construct a float with half-open range [0:1] using low 23 bits.
// All zeroes yields 0.0, all ones yields the next smallest representable value below 1.0.
float floatConstruct( uint m ) {
    const uint ieeeMantissa = 0x007FFFFFu; // binary32 mantissa bitmask
    const uint ieeeOne      = 0x3F800000u; // 1.0 in IEEE binary32

    m &= ieeeMantissa;                     // Keep only mantissa bits (fractional part)
    m |= ieeeOne;                          // Add fractional part to 1.0

    float f = uintBitsToFloat( m );        // Range [1:2]
    return f - 1.0;                        // Range [0:1]
}
// Pseudo-random value in half-open range [0:1].
float random( float x ) { return floatConstruct(hash(floatBitsToUint(x))); }
float random( vec2 v ) { return floatConstruct(hash(floatBitsToUint(v))); }
float random( vec3 v ) { return floatConstruct(hash(floatBitsToUint(v))); }
float random( vec4 v ) { return floatConstruct(hash(floatBitsToUint(v))); }
void main()
{
    vec3  inputs = vec3( gl_FragCoord.xy, time ); // Spatial and temporal inputs
    float rand   = random( inputs );              // Random per-pixel value
    vec3  luma   = vec3( rand );                  // Expand to RGB
    fragment = vec4( luma, 1.0 );
}
Screenshot:
I inspected the screenshot in an image editing program. There are 256 colours and the average value is 127, meaning the distribution is uniform and covers the expected range.
Gustavson's implementation uses a 1D texture
No it doesn't, not since 2005. It's just that people insist on downloading the old version. The version that is on the link you supplied uses only 8-bit 2D textures.
The new version by Ian McEwan of Ashima and myself does not use a texture, but runs at around half the speed on typical desktop platforms with lots of texture bandwidth. On mobile platforms, the textureless version might be faster because texturing is often a significant bottleneck.
Our actively maintained source repository is:
https://github.com/ashima/webgl-noise
A collection of both the textureless and texture-using versions of noise is here (using only 2D textures):
http://www.itn.liu.se/~stegu/simplexnoise/GLSL-noise-vs-noise.zip
If you have any specific questions, feel free to e-mail me directly (my email address can be found in the classicnoise*.glsl sources.)
Gold Noise
// Gold Noise ©2015 dcerisano#standard3d.com
// - based on the Golden Ratio
// - uniform normalized distribution
// - fastest static noise generator function (also runs at low precision)
// - use with indicated fractional seeding method.
float PHI = 1.61803398874989484820459; // Φ = Golden Ratio

float gold_noise(in vec2 xy, in float seed){
    return fract(tan(distance(xy * PHI, xy) * seed) * xy.x);
}
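Hypothetical usage with the fractional seeding the comments refer to (a new fractional, non-zero seed per frame):

// u_time is assumed to be a per-frame uniform; fract() keeps the seed fractional.
float n = gold_noise(gl_FragCoord.xy, fract(u_time) + 0.1);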
See Gold Noise in your browser right now!
This function has improved random distribution over the function in @appas' answer (as of Sept 9, 2017):
The @appas function is also incomplete, given there is no seed supplied (uv is not a seed - it is the same for every frame), and it does not work with low-precision chipsets. Gold Noise runs at low precision by default (much faster).
There is also a nice implementation described here by McEwan and @StefanGustavson that looks like Perlin noise, but "does not require any setup, i.e. not textures nor uniform arrays. Just add it to your shader source code and call it wherever you want".
That's very handy, especially given that Gustavson's earlier implementation, which @dep linked to, uses a 1D texture, which is not supported in GLSL ES (the shader language of WebGL).
After the initial posting of this question in 2010, a lot has changed in the realm of good random functions and hardware support for them.
Looking at the accepted answer from today's perspective, this algorithm is very bad in terms of the uniformity of the random numbers drawn from it. The uniformity suffers a lot depending on the magnitude of the input values, and visible artifacts/patterns become apparent when sampling from it for e.g. ray/path-tracing applications.
There have been many different functions (most of them integer hashing) being devised for this task, for different input and output dimensionality, most of which are being evaluated in the 2020 JCGT paper Hash Functions for GPU Rendering. Depending on your needs you could select a function from the list of proposed functions in that paper and simply from the accompanying Shadertoy.
One that isn't covered in this paper, but that has served me very well without any noticeable patterns at any input magnitude, is also one that I want to highlight.
Other classes of algorithms use low-discrepancy sequences to draw pseudo-random numbers from, such as the Sobol sequence with Owen-Nayar scrambling. Eric Heitz has done some amazing research in this area, as well with his A Low-Discrepancy Sampler that Distributes Monte Carlo Errors as a Blue Noise in Screen Space paper.
Another example of this is the (so far latest) JCGT paper Practical Hash-based Owen Scrambling, which applies Owen scrambling to a different hash function (namely Laine-Karras).
Yet other classes use algorithms that produce noise patterns with desirable frequency spectrums, such as blue noise, that is particularly "pleasing" to the eyes.
(I realize that good StackOverflow answers should provide the algorithms as source code and not as links because those can break, but there are way too many different algorithms nowadays and I intend for this answer to be a summary of known-good algorithms today)
Do use this:
highp float rand(vec2 co)
{
    highp float a  = 12.9898;
    highp float b  = 78.233;
    highp float c  = 43758.5453;
    highp float dt = dot(co.xy, vec2(a, b));
    highp float sn = mod(dt, 3.14);
    return fract(sin(sn) * c);
}
Don't use this:
float rand(vec2 co){
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}
You can find the explanation in Improvements to the canonical one-liner GLSL rand() for OpenGL ES 2.0
hash:
Nowadays WebGL 2.0 is there, so integers are available in (w)GLSL.
-> for quality portable hashes (at similar cost to the ugly float hashes), we can now use "serious" hashing techniques.
IQ implemented some in https://www.shadertoy.com/view/XlXcW4 (and more)
E.g.:
const uint k = 1103515245U;  // GLIB C
//const uint k = 134775813U;   // Delphi and Turbo Pascal
//const uint k = 20170906U;    // Today's date (use three days ago's date if you want a prime)
//const uint k = 1664525U;     // Numerical Recipes

vec3 hash( uvec3 x )
{
    x = ((x >> 8U) ^ x.yzx) * k;
    x = ((x >> 8U) ^ x.yzx) * k;
    x = ((x >> 8U) ^ x.yzx) * k;
    return vec3(x) * (1.0 / float(0xffffffffU));
}
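Hypothetical per-pixel usage, seeding with the fragment coordinate and a frame counter:

// u_frame is an assumed frame-counter uniform.
vec3 rnd = hash(uvec3(uvec2(gl_FragCoord.xy), u_frame)); // three values in [0, 1]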
I just found this version of 3D noise for the GPU; allegedly it is the fastest one available:
#ifndef __noise_hlsl_
#define __noise_hlsl_

// hash based 3d value noise
// function taken from https://www.shadertoy.com/view/XslGRr
// Created by inigo quilez - iq/2013
// License Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
// ported from GLSL to HLSL

float hash( float n )
{
    return frac(sin(n) * 43758.5453);
}

float noise( float3 x )
{
    // The noise function returns a value in the range -1.0f -> 1.0f
    float3 p = floor(x);
    float3 f = frac(x);

    f = f * f * (3.0 - 2.0 * f);
    float n = p.x + p.y * 57.0 + 113.0 * p.z;

    return lerp(lerp(lerp( hash(n +   0.0), hash(n +   1.0), f.x),
                     lerp( hash(n +  57.0), hash(n +  58.0), f.x), f.y),
                lerp(lerp( hash(n + 113.0), hash(n + 114.0), f.x),
                     lerp( hash(n + 170.0), hash(n + 171.0), f.x), f.y), f.z);
}

#endif
A straight, jagged version of 1D Perlin, essentially a random LFO zigzag.
half rn(float xx){
    half x0 = floor(xx);
    half x1 = x0 + 1;
    half v0 = frac(sin(x0 * .014686) * 31718.927 + x0);
    half v1 = frac(sin(x1 * .014686) * 31718.927 + x1);
    return (v0 * (1 - frac(xx)) + v1 * frac(xx)) * 2 - 1 * sin(xx);
}
I have also found 1-2-3-4D Perlin noise on Shadertoy owner Inigo Quilez's Perlin tutorial website, along with Voronoi and so forth; he has full, fast implementations and code for them.
I have translated one of Ken Perlin's Java implementations into GLSL and used it in a couple projects on ShaderToy.
Below is the GLSL interpretation I did:
int b(int N, int B) { return N >> B & 1; }

int T[] = int[](0x15, 0x38, 0x32, 0x2c, 0x0d, 0x13, 0x07, 0x2a);
int A[] = int[](0, 0, 0);

int b(int i, int j, int k, int B) { return T[b(i,B)<<2 | b(j,B)<<1 | b(k,B)]; }

int shuffle(int i, int j, int k) {
    return b(i,j,k,0) + b(j,k,i,1) + b(k,i,j,2) + b(i,j,k,3) +
           b(j,k,i,4) + b(k,i,j,5) + b(i,j,k,6) + b(j,k,i,7);
}

float K(int a, vec3 uvw, vec3 ijk)
{
    float s = float(A[0] + A[1] + A[2]) / 6.0;
    float x = uvw.x - float(A[0]) + s,
          y = uvw.y - float(A[1]) + s,
          z = uvw.z - float(A[2]) + s,
          t = 0.6 - x * x - y * y - z * z;
    int h = shuffle(int(ijk.x) + A[0], int(ijk.y) + A[1], int(ijk.z) + A[2]);
    A[a]++;
    if (t < 0.0)
        return 0.0;
    int b5 = h>>5 & 1, b4 = h>>4 & 1, b3 = h>>3 & 1, b2 = h>>2 & 1, b = h & 3;
    float p = b==1?x:b==2?y:z, q = b==1?y:b==2?z:x, r = b==1?z:b==2?x:y;
    p = (b5==b3 ? -p : p); q = (b5==b4 ? -q : q); r = (b5!=(b4^b3) ? -r : r);
    t *= t;
    return 8.0 * t * t * (p + (b==0 ? q+r : b2==0 ? q : r));
}

float noise(float x, float y, float z)
{
    float s = (x + y + z) / 3.0;
    vec3 ijk = vec3(int(floor(x+s)), int(floor(y+s)), int(floor(z+s)));
    s = float(ijk.x + ijk.y + ijk.z) / 6.0;
    vec3 uvw = vec3(x - float(ijk.x) + s, y - float(ijk.y) + s, z - float(ijk.z) + s);
    A[0] = A[1] = A[2] = 0;
    int hi = uvw.x >= uvw.z ? uvw.x >= uvw.y ? 0 : 1 : uvw.y >= uvw.z ? 1 : 2;
    int lo = uvw.x <  uvw.z ? uvw.x <  uvw.y ? 0 : 1 : uvw.y <  uvw.z ? 1 : 2;
    return K(hi, uvw, ijk) + K(3 - hi - lo, uvw, ijk) + K(lo, uvw, ijk) + K(0, uvw, ijk);
}
I translated it from Appendix B from Chapter 2 of Ken Perlin's Noise Hardware at this source:
https://www.csee.umbc.edu/~olano/s2002c36/ch02.pdf
Here is a public shader I did on Shadertoy that uses the posted noise function:
https://www.shadertoy.com/view/3slXzM
Some other good sources I found on the subject of noise during my research include:
https://thebookofshaders.com/11/
https://mzucker.github.io/html/perlin-noise-math-faq.html
https://rmarcus.info/blog/2018/03/04/perlin-noise.html
http://flafla2.github.io/2014/08/09/perlinnoise.html
https://mrl.nyu.edu/~perlin/noise/
https://rmarcus.info/blog/assets/perlin/perlin_paper.pdf
https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch05.html
I highly recommend the book of shaders as it not only provides a great interactive explanation of noise, but other shader concepts as well.
EDIT:
Might be able to optimize the translated code by using some of the hardware-accelerated functions available in GLSL. Will update this post if I end up doing this.
lygia, a multi-language shader library
If you don't want to copy/paste the functions into your shader, you can also use lygia, a multi-language shader library. It contains a few generative functions like cnoise, fbm, noised, pnoise, random and snoise, in both GLSL and HLSL. And many other awesome functions as well. For this to work it:
Relies on #include "file", which is defined by the Khronos GLSL standard and supported by most engines and environments (like glslViewer, the glsl-canvas VS Code plugin, Unity, etc.).
Example: cnoise
Using cnoise.glsl with #include:
#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform float u_time;

#include "lygia/generative/cnoise.glsl"

void main (void) {
    vec2 st = gl_FragCoord.xy / u_resolution.xy;
    vec3 color = vec3(cnoise(vec3(st * 5.0, u_time)));
    gl_FragColor = vec4(color, 1.0);
}
To run this example I used glslViewer.
Please see below an example of how to add white noise to the rendered texture.
The solution is to use two textures: the original one and one of pure white noise, like this one: wiki white noise
private static final String VERTEX_SHADER =
        "uniform mat4 uMVPMatrix;\n" +
        "uniform mat4 uMVMatrix;\n" +
        "uniform mat4 uSTMatrix;\n" +
        "attribute vec4 aPosition;\n" +
        "attribute vec4 aTextureCoord;\n" +
        "varying vec2 vTextureCoord;\n" +
        "varying vec4 vInCamPosition;\n" +
        "void main() {\n" +
        "    vTextureCoord = (uSTMatrix * aTextureCoord).xy;\n" +
        "    gl_Position = uMVPMatrix * aPosition;\n" +
        "}\n";

private static final String FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D sTextureUnit;\n" +
        "uniform sampler2D sNoiseTextureUnit;\n" +
        "uniform float uNoiseFactor;\n" +
        "varying vec2 vTextureCoord;\n" +
        "varying vec4 vInCamPosition;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTextureUnit, vTextureCoord);\n" +
        "    vec4 vRandChosenColor = texture2D(sNoiseTextureUnit, fract(vTextureCoord + uNoiseFactor));\n" +
        "    gl_FragColor.r += (0.05 * vRandChosenColor.r);\n" +
        "    gl_FragColor.g += (0.05 * vRandChosenColor.g);\n" +
        "    gl_FragColor.b += (0.05 * vRandChosenColor.b);\n" +
        "}\n";
The fragment shader contains the parameter uNoiseFactor, which is updated on every frame by the main application:
float noiseFactor = (float) (mRand.nextInt() % 1000) / 1000;
int noiseFactorUniformHandle = GLES20.glGetUniformLocation(mProgram, "uNoiseFactor");
GLES20.glUniform1f(noiseFactorUniformHandle, noiseFactor);
FWIW, I had the same questions, and I needed it to be implemented in WebGL 1.0, so I couldn't use a few of the examples given in previous answers. I tried the Gold Noise mentioned before, but the use of PHI doesn't really click for me: distance(xy * PHI, xy) * seed just equals length(xy) * (PHI - 1.0) * seed, so I don't see how the magic of PHI should be put to work when it gets directly multiplied by seed.
Anyway, I did something similar, just without PHI, and instead added some variation at another place. Basically, I take the tan of the distance between xy and some random point lying outside of the frame to the top right, and then multiply it by the distance between xy and another such random point lying in the bottom left (so there is no accidental match between the points). It looks pretty decent as far as I can see. Click to generate new frames.
(function main() {
    const dim = [512, 512];
    twgl.setDefaults({ attribPrefix: "a_" });
    const gl = twgl.getContext(document.querySelector("canvas"));
    gl.canvas.width = dim[0];
    gl.canvas.height = dim[1];
    const bfi = twgl.primitives.createXYQuadBufferInfo(gl);
    const pgi = twgl.createProgramInfo(gl, ["vs", "fs"]);
    gl.canvas.onclick = (() => {
        twgl.bindFramebufferInfo(gl, null);
        gl.useProgram(pgi.program);
        twgl.setUniforms(pgi, {
            u_resolution: dim,
            u_seed: Array(4).fill().map(Math.random)
        });
        twgl.setBuffersAndAttributes(gl, pgi, bfi);
        twgl.drawBufferInfo(gl, bfi);
    });
})();
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<script id="vs" type="x-shader/x-vertex">
    attribute vec4 a_position;
    attribute vec2 a_texcoord;
    void main() {
        gl_Position = a_position;
    }
</script>
<script id="fs" type="x-shader/x-fragment">
    precision highp float;
    uniform vec2 u_resolution;
    uniform vec2 u_seed[2];
    void main() {
        float uni = fract(
            tan(distance(
                gl_FragCoord.xy,
                u_resolution * (u_seed[0] + 1.0)
            )) * distance(
                gl_FragCoord.xy,
                u_resolution * (u_seed[1] - 2.0)
            )
        );
        gl_FragColor = vec4(uni, uni, uni, 1.0);
    }
</script>
<canvas></canvas>
