Convert floating-point numbers to decimal digits in GLSL - algorithm

As others have discussed, GLSL lacks any kind of printf debugging.
But sometimes I really want to examine numeric values while debugging my shaders.
I've been trying to create a visual debugging tool.
I found that it's possible to render an arbitrary series of digits fairly easily in a shader, if you work with a sampler2D in which the digits 0123456789 have been rendered in monospace. Basically, you just juggle your x coordinate.
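For context, here is a minimal sketch of the kind of x-coordinate juggling I mean (the sampler name and the assumption that the ten digits occupy equal tenths of the atlas width are just illustrative, not my actual setup):
uniform sampler2D uDigitsAtlas; // hypothetical: '0123456789' rendered as ten equal-width cells
// digit is 0..9, cellUV is the position inside one digit cell, both components in [0,1]
vec4 SampleDigit( float digit, vec2 cellUV )
{
    float u = ( digit + cellUV.x ) / 10.0; // shift into that digit's tenth of the atlas width
    return texture2D( uDigitsAtlas, vec2( u, cellUV.y ) );
}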
Now, to use this to examine a floating-point number, I need an algorithm for converting a float to a sequence of decimal digits, such as you might find in any printf implementation.
Unfortunately, as far as I understand the topic, these algorithms seem to need to represent the
floating-point number in a higher-precision format, and I don't see how this is going to be
possible in GLSL where I seem to have only 32-bit floats available.
For this reason, I think this question is not a duplicate of any general "how does printf work" question, but rather specifically about how such algorithms can be made to work under the constraints of GLSL. I've seen this question and answer, but have no idea what's going on there.
The algorithms I've tried aren't very good.
My first try, marked Version A (commented out), seemed pretty bad:
to take three random examples, RenderDecimal(1.0) rendered as 1.099999702, RenderDecimal(2.5) gave me
2.599999246 and RenderDecimal(2.6) came out as 2.699999280.
My second try, marked Version B, seemed
slightly better: 1.0 and 2.6 both come out fine, but RenderDecimal(2.5) still mismatches an apparent
rounding-up of the 5 with the fact that the residual is 0.099.... The result appears as 2.599000022.
My minimal/complete/verifiable example, below, starts with some shortish GLSL 1.20 code, and then
I happen to have chosen Python 2.x for the rest, just to get the shaders compiled and the textures loaded and rendered. It requires the pygame, NumPy, PyOpenGL and PIL third-party packages. Note that the Python is really just boilerplate and could be trivially (though tediously) re-written in C or anything else. Only the GLSL code at the top is critical for this question, and for this reason I don't think the python or python 2.x tags would be helpful.
It requires the following image to be saved as digits.png:
vertexShaderSource = """\
varying vec2 vFragCoordinate;
void main(void)
{
vFragCoordinate = gl_Vertex.xy;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""
fragmentShaderSource = """\
varying vec2 vFragCoordinate;
uniform vec2 uTextureSize;
uniform sampler2D uTextureSlotNumber;
float OrderOfMagnitude( float x )
{
return x == 0.0 ? 0.0 : floor( log( abs( x ) ) / log( 10.0 ) );
}
void RenderDecimal( float value )
{
// Assume that the texture to which uTextureSlotNumber refers contains
// a rendering of the digits '0123456789' packed together, such that
const vec2 startOfDigitsInTexture = vec2( 0, 0 ); // the lower-left corner of the first digit starts here and
const vec2 sizeOfDigit = vec2( 100, 125 ); // each digit spans this many pixels
const float nSpaces = 10.0; // assume we have this many digits' worth of space to render in
value = abs( value );
vec2 pos = vFragCoordinate - startOfDigitsInTexture;
float dpstart = max( 0.0, OrderOfMagnitude( value ) );
float decimal_position = dpstart - floor( pos.x / sizeOfDigit.x );
float remainder = mod( pos.x, sizeOfDigit.x );
if( pos.x >= 0 && pos.x < sizeOfDigit.x * nSpaces && pos.y >= 0 && pos.y < sizeOfDigit.y )
{
float digit_value;
// Version B
float dp, running_value = value;
for( dp = dpstart; dp >= decimal_position; dp -= 1.0 )
{
float base = pow( 10.0, dp );
digit_value = mod( floor( running_value / base ), 10.0 );
running_value -= digit_value * base;
}
// Version A
//digit_value = mod( floor( value * pow( 10.0, -decimal_position ) ), 10.0 );
vec2 textureSourcePosition = vec2( startOfDigitsInTexture.x + remainder + digit_value * sizeOfDigit.x, startOfDigitsInTexture.y + pos.y );
gl_FragColor = texture2D( uTextureSlotNumber, textureSourcePosition / uTextureSize );
}
// Render the decimal point
if( ( decimal_position == -1.0 && remainder / sizeOfDigit.x < 0.1 && abs( pos.y ) / sizeOfDigit.y < 0.1 ) ||
( decimal_position == 0.0 && remainder / sizeOfDigit.x > 0.9 && abs( pos.y ) / sizeOfDigit.y < 0.1 ) )
{
gl_FragColor = texture2D( uTextureSlotNumber, ( startOfDigitsInTexture + sizeOfDigit * vec2( 1.5, 0.5 ) ) / uTextureSize );
}
}
void main(void)
{
gl_FragColor = texture2D( uTextureSlotNumber, vFragCoordinate / uTextureSize );
RenderDecimal( 2.5 ); // for current demonstration purposes, just a constant
}
"""
# Python (PyOpenGL) code to demonstrate the above
# (Note: the same OpenGL calls could be made from any language)
import os, sys, time
import OpenGL
from OpenGL.GL import *
from OpenGL.GLU import *
import pygame, pygame.locals # just for getting a canvas to draw on
try: from PIL import Image # PIL.Image module for loading image from disk
except ImportError: import Image # old PIL didn't package its submodules on the path
import numpy # for manipulating pixel values on the Python side
def CompileShader( type, source ):
shader = glCreateShader( type )
glShaderSource( shader, source )
glCompileShader( shader )
result = glGetShaderiv( shader, GL_COMPILE_STATUS )
if result != 1:
raise Exception( "Shader compilation failed:\n" + glGetShaderInfoLog( shader ) )
return shader
class World:
def __init__( self, width, height ):
self.window = pygame.display.set_mode( ( width, height ), pygame.OPENGL | pygame.DOUBLEBUF )
# compile shaders
vertexShader = CompileShader( GL_VERTEX_SHADER, vertexShaderSource )
fragmentShader = CompileShader( GL_FRAGMENT_SHADER, fragmentShaderSource )
# build shader program
self.program = glCreateProgram()
glAttachShader( self.program, vertexShader )
glAttachShader( self.program, fragmentShader )
glLinkProgram( self.program )
# try to activate/enable shader program, handling errors wisely
try:
glUseProgram( self.program )
except OpenGL.error.GLError:
print( glGetProgramInfoLog( self.program ) )
raise
# enable alpha blending
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE )
glEnable( GL_DEPTH_TEST )
glEnable( GL_BLEND )
glBlendEquation( GL_FUNC_ADD )
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA )
# set projection and background color
gluOrtho2D( 0, width, 0, height )
glClearColor( 0.0, 0.0, 0.0, 1.0 )
self.uTextureSlotNumber_addr = glGetUniformLocation( self.program, 'uTextureSlotNumber' )
self.uTextureSize_addr = glGetUniformLocation( self.program, 'uTextureSize' )
def RenderFrame( self, *textures ):
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT )
for t in textures: t.Draw( world=self )
pygame.display.flip()
def Close( self ):
pygame.display.quit()
def Capture( self ):
w, h = self.window.get_size()
rawRGB = glReadPixels( 0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE )
return Image.frombuffer( 'RGB', ( w, h ), rawRGB, 'raw', 'RGB', 0, 1 ).transpose( Image.FLIP_TOP_BOTTOM )
class Texture:
def __init__( self, source, slot=0, position=(0,0,0) ):
# wrangle array
source = numpy.array( source )
if source.dtype.type not in [ numpy.float32, numpy.float64 ]: source = source.astype( float ) / 255.0
while source.ndim < 3: source = numpy.expand_dims( source, -1 )
if source.shape[ 2 ] == 1: source = source[ :, :, [ 0, 0, 0 ] ] # LUMINANCE -> RGB
if source.shape[ 2 ] == 2: source = source[ :, :, [ 0, 0, 0, 1 ] ] # LUMINANCE_ALPHA -> RGBA
if source.shape[ 2 ] == 3: source = source[ :, :, [ 0, 1, 2, 2 ] ]; source[ :, :, 3 ] = 1.0 # RGB -> RGBA
# now it can be transferred as GL_RGBA and GL_FLOAT
# housekeeping
self.textureSize = [ source.shape[ 1 ], source.shape[ 0 ] ]
self.textureSlotNumber = slot
self.textureSlotCode = getattr( OpenGL.GL, 'GL_TEXTURE%d' % slot )
self.listNumber = slot + 1
self.position = list( position )
# transfer texture content
glActiveTexture( self.textureSlotCode )
self.textureID = glGenTextures( 1 )
glBindTexture( GL_TEXTURE_2D, self.textureID )
glEnable( GL_TEXTURE_2D )
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, self.textureSize[ 0 ], self.textureSize[ 1 ], 0, GL_RGBA, GL_FLOAT, source[ ::-1 ] )
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST )
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST )
# define surface
w, h = self.textureSize
glNewList( self.listNumber, GL_COMPILE )
glBegin( GL_QUADS )
glColor3f( 1, 1, 1 )
glNormal3f( 0, 0, 1 )
glVertex3f( 0, h, 0 )
glVertex3f( w, h, 0 )
glVertex3f( w, 0, 0 )
glVertex3f( 0, 0, 0 )
glEnd()
glEndList()
def Draw( self, world ):
glPushMatrix()
glTranslate( *self.position )
glUniform1i( world.uTextureSlotNumber_addr, self.textureSlotNumber )
glUniform2f( world.uTextureSize_addr, *self.textureSize )
glCallList( self.listNumber )
glPopMatrix()
world = World( 1000, 800 )
digits = Texture( Image.open( 'digits.png' ) )
done = False
while not done:
world.RenderFrame( digits )
for event in pygame.event.get():
# Press 'q' to quit or 's' to save a timestamped snapshot
if event.type == pygame.locals.QUIT: done = True
elif event.type == pygame.locals.KEYUP and event.key in [ ord( 'q' ), 27 ]: done = True
elif event.type == pygame.locals.KEYUP and event.key in [ ord( 's' ) ]:
world.Capture().save( time.strftime( 'snapshot-%Y%m%d-%H%M%S.png' ) )
world.Close()

+1 for an interesting problem. I was curious, so I tried to code this. I need arrays, so I chose #version 420 core. My app renders a single quad covering the screen with coordinates <-1,+1>. I am using a full-ASCII font texture of 8x8-pixel characters arranged 32x8, which I created some years ago:
The vertex shader is simple:
//---------------------------------------------------------------------------
// Vertex
//---------------------------------------------------------------------------
#version 420 core
//---------------------------------------------------------------------------
layout(location=0) in vec4 vertex;
out vec2 pos; // screen position <-1,+1>
void main()
{
pos=vertex.xy;
gl_Position=vertex;
}
//---------------------------------------------------------------------------
The fragment shader is a bit more complicated:
//---------------------------------------------------------------------------
// Fragment
//---------------------------------------------------------------------------
#version 420 core
//---------------------------------------------------------------------------
in vec2 pos; // screen position <-1,+1>
out vec4 gl_FragColor; // fragment output color
uniform sampler2D txr_font; // ASCII 32x8 characters font texture unit
uniform float fxs,fys; // font/screen resolution ratio
//---------------------------------------------------------------------------
const int _txtsiz=32; // text buffer size
int txt[_txtsiz],txtsiz; // text buffer and its actual size
vec4 col; // color interface for txt_print()
//---------------------------------------------------------------------------
void txt_decimal(float x) // print float x into txt
{
int i,j,c; // l is size of string
float y,a;
const float base=10;
// handle sign
if (x<0.0) { txt[txtsiz]='-'; txtsiz++; x=-x; }
else { txt[txtsiz]='+'; txtsiz++; }
// divide to int(x).fract(y) parts of number
y=x; x=floor(x); y-=x;
// handle integer part
i=txtsiz; // start of integer part
for (;txtsiz<_txtsiz;)
{
a=x;
x=floor(x/base);
a-=base*x;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (x<=0.0) break;
}
j=txtsiz-1; // end of integer part
for (;i<j;i++,j--) // reverse integer digits
{
c=txt[i]; txt[i]=txt[j]; txt[j]=c;
}
// handle fractional part
for (txt[txtsiz]='.',txtsiz++;txtsiz<_txtsiz;)
{
y*=base;
a=floor(y);
y-=a;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (y<=0.0) break;
}
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_print(float x0,float y0) // print txt at x0,y0 [chars]
{
int i;
float x,y;
// fragment position [chars] relative to x0,y0
x=0.5*(1.0+pos.x)/fxs; x-=x0;
y=0.5*(1.0-pos.y)/fys; y-=y0;
// inside bbox?
if ((x<0.0)||(x>float(txtsiz))||(y<0.0)||(y>1.0)) return;
// get font texture position for target ASCII
i=int(x); // char index in txt
x-=float(i);
i=txt[i];
x+=float(int(i&31));
y+=float(int(i>>5));
x/=32.0; y/=8.0; // offset in char texture
col=texture2D(txr_font,vec2(x,y));
}
//---------------------------------------------------------------------------
void main()
{
col=vec4(0.0,1.0,0.0,1.0); // background color
txtsiz=0;
txt[txtsiz]='F'; txtsiz++;
txt[txtsiz]='l'; txtsiz++;
txt[txtsiz]='o'; txtsiz++;
txt[txtsiz]='a'; txtsiz++;
txt[txtsiz]='t'; txtsiz++;
txt[txtsiz]=':'; txtsiz++;
txt[txtsiz]=' '; txtsiz++;
txt_decimal(12.345);
txt_print(1.0,1.0);
gl_FragColor=col;
}
//---------------------------------------------------------------------------
Here are my CPU-side uniforms:
glUniform1i(glGetUniformLocation(prog_id,"txr_font"),0);
glUniform1f(glGetUniformLocation(prog_id,"fxs"),(8.0)/float(xs));
glUniform1f(glGetUniformLocation(prog_id,"fys"),(8.0)/float(ys));
where xs,ys is my screen resolution. The font is 8x8 pixels per character and bound to texture unit 0.
Here is the output of the test fragment code:
If your floating-point accuracy is reduced by the HW implementation, then you should consider printing in hex, where no accuracy is lost (using binary access). That could be converted to decimal on the integer side later ...
see:
string hex2dec conversion on integer math
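For illustration, here is a hedged sketch of such a hex print in the style of txt_decimal above (it needs floatBitsToUint, i.e. #version 330 or newer, and reuses the txt/txtsiz buffer):
void txt_hex(float x) // print raw IEEE-754 bits of x as 8 hex digits into txt
{
    uint bits=floatBitsToUint(x); // reinterpret the float's bit pattern
    for (int i=7;i>=0;i--) // highest nibble first
    {
        uint nib=(bits>>uint(i*4))&0xFu; // extract one 4-bit nibble
        txt[txtsiz]=(nib<10u)?int(nib)+'0':int(nib)-10+'A';
        txtsiz++;
    }
    txt[txtsiz]=0; // string terminator
}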
[Edit2] old-style GLSL shaders
I tried to port this to old-style GLSL and suddenly it works (before, it would not compile with arrays present, but thinking back, I was trying char[], which was the real reason).
//---------------------------------------------------------------------------
// Vertex
//---------------------------------------------------------------------------
varying vec2 pos; // screen position <-1,+1>
void main()
{
pos=gl_Vertex.xy;
gl_Position=gl_Vertex;
}
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
// Fragment
//---------------------------------------------------------------------------
varying vec2 pos; // screen position <-1,+1>
uniform sampler2D txr_font; // ASCII 32x8 characters font texture unit
uniform float fxs,fys; // font/screen resolution ratio
//---------------------------------------------------------------------------
const int _txtsiz=32; // text buffer size
int txt[_txtsiz],txtsiz; // text buffer and its actual size
vec4 col; // color interface for txt_print()
//---------------------------------------------------------------------------
void txt_decimal(float x) // print float x into txt
{
int i,j,c; // l is size of string
float y,a;
const float base=10.0;
// handle sign
if (x<0.0) { txt[txtsiz]='-'; txtsiz++; x=-x; }
else { txt[txtsiz]='+'; txtsiz++; }
// divide to int(x).fract(y) parts of number
y=x; x=floor(x); y-=x;
// handle integer part
i=txtsiz; // start of integer part
for (;txtsiz<_txtsiz;)
{
a=x;
x=floor(x/base);
a-=base*x;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (x<=0.0) break;
}
j=txtsiz-1; // end of integer part
for (;i<j;i++,j--) // reverse integer digits
{
c=txt[i]; txt[i]=txt[j]; txt[j]=c;
}
// handle fractional part
for (txt[txtsiz]='.',txtsiz++;txtsiz<_txtsiz;)
{
y*=base;
a=floor(y);
y-=a;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (y<=0.0) break;
}
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_print(float x0,float y0) // print txt at x0,y0 [chars]
{
int i;
float x,y;
// fragment position [chars] relative to x0,y0
x=0.5*(1.0+pos.x)/fxs; x-=x0;
y=0.5*(1.0-pos.y)/fys; y-=y0;
// inside bbox?
if ((x<0.0)||(x>float(txtsiz))||(y<0.0)||(y>1.0)) return;
// get font texture position for target ASCII
i=int(x); // char index in txt
x-=float(i);
i=txt[i];
x+=float(int(i-((i/32)*32)));
y+=float(int(i/32));
x/=32.0; y/=8.0; // offset in char texture
col=texture2D(txr_font,vec2(x,y));
}
//---------------------------------------------------------------------------
void main()
{
col=vec4(0.0,1.0,0.0,1.0); // background color
txtsiz=0;
txt[txtsiz]='F'; txtsiz++;
txt[txtsiz]='l'; txtsiz++;
txt[txtsiz]='o'; txtsiz++;
txt[txtsiz]='a'; txtsiz++;
txt[txtsiz]='t'; txtsiz++;
txt[txtsiz]=':'; txtsiz++;
txt[txtsiz]=' '; txtsiz++;
txt_decimal(12.345);
txt_print(1.0,1.0);
gl_FragColor=col;
}
//---------------------------------------------------------------------------

First of all, I want to mention that Spektre's amazing solution is almost perfect and, beyond that, a general solution for text output. I gave his answer an upvote.
As an alternative, I present a minimally invasive solution which improves the code of the question.
I do not want to conceal the fact that I studied Spektre's solution and integrated it into mine.
// Assume that the texture to which uTextureSlotNumber refers contains
// a rendering of the digits '0123456789' packed together, such that
const vec2 startOfDigitsInTexture = vec2( 0, 0 ); // the lower-left corner of the first digit starts here and
const vec2 sizeOfDigit = vec2( 100, 125 ); // each digit spans this many pixels
const float nSpaces = 10.0; // assume we have this many digits' worth of space to render in
void RenderDigit( int strPos, int digit, vec2 pos )
{
float testStrPos = pos.x / sizeOfDigit.x;
if ( testStrPos >= float(strPos) && testStrPos < float(strPos+1) )
{
float start = sizeOfDigit.x * float(digit);
vec2 textureSourcePosition = vec2( startOfDigitsInTexture.x + start + mod( pos.x, sizeOfDigit.x ), startOfDigitsInTexture.y + pos.y );
gl_FragColor = texture2D( uTextureSlotNumber, textureSourcePosition / uTextureSize );
}
}
The function ValueToDigits interprets a floating-point number and fills up an array with the digits.
Each number in the array is in the range [0, 9].
const int MAX_DIGITS = 32;
int digits[MAX_DIGITS];
int noOfDigits = 0;
int posOfComma = 0;
void Reverse( int start, int end )
{
for ( ; start < end; ++ start, -- end )
{
int digit = digits[start];
digits[start] = digits[end];
digits[end] = digit;
}
}
void ValueToDigits( float value )
{
const float base = 10.0;
int start = noOfDigits;
value = abs( value );
float frac = value; value = floor(value); frac -= value;
// integral digits
for ( ; value > 0.0 && noOfDigits < MAX_DIGITS; ++ noOfDigits )
{
float newValue = floor( value / base );
digits[noOfDigits] = int( value - base * newValue );
value = newValue;
}
Reverse( start, noOfDigits-1 );
posOfComma = noOfDigits;
// fractional digits
for ( ; frac > 0.0 && noOfDigits < MAX_DIGITS; ++ noOfDigits )
{
frac *= base;
float digit = floor( frac );
frac -= digit;
digits[noOfDigits] = int( digit );
}
}
Call ValueToDigits in your original function and find the digit and texture coordinates for the current fragment.
void RenderDecimal( float value )
{
// fill the array of digits with the floating point value
ValueToDigits( value );
// Render the digits
vec2 pos = vFragCoordinate.xy - startOfDigitsInTexture;
if( pos.x >= 0 && pos.x < sizeOfDigit.x * nSpaces && pos.y >= 0 && pos.y < sizeOfDigit.y )
{
// render the digits
for ( int strPos = 0; strPos < noOfDigits; ++ strPos )
RenderDigit( strPos, digits[strPos], pos );
}
// Render the decimal point
float testStrPos = pos.x / sizeOfDigit.x;
float remainder = mod( pos.x, sizeOfDigit.x );
if( ( testStrPos >= float(posOfComma) && testStrPos < float(posOfComma+1) && remainder / sizeOfDigit.x < 0.1 && abs( pos.y ) / sizeOfDigit.y < 0.1 ) ||
( testStrPos >= float(posOfComma-1) && testStrPos < float(posOfComma) && remainder / sizeOfDigit.x > 0.9 && abs( pos.y ) / sizeOfDigit.y < 0.1 ) )
{
gl_FragColor = texture2D( uTextureSlotNumber, ( startOfDigitsInTexture + sizeOfDigit * vec2( 1.5, 0.5 ) ) / uTextureSize );
}
}

Here's my updated fragment shader, which can be dropped into the listing in my original question. It implements the decimal-digit-finding algorithm Spektre proposed, in a way that is even compatible with the legacy GLSL 1.20 dialect I'm using. Without that constraint, Spektre's solution is, of course, much more elegant and powerful.
varying vec2 vFragCoordinate;
uniform vec2 uTextureSize;
uniform sampler2D uTextureSlotNumber;
float Digit( float x, int position, float base )
{
int i;
float digit;
if( position < 0 )
{
x = fract( x );
for( i = -1; i >= position; i-- )
{
if( x <= 0.0 ) { digit = 0.0; break; }
x *= base;
digit = floor( x );
x -= digit;
}
}
else
{
x = floor( x );
float prevx;
for( i = 0; i <= position; i++ )
{
if( x <= 0.0 ) { digit = 0.0; break; }
prevx = x;
x = floor( x / base );
digit = prevx - base * x;
}
}
return digit;
}
float OrderOfMagnitude( float x )
{
return x == 0.0 ? 0.0 : floor( log( abs( x ) ) / log( 10.0 ) );
}
void RenderDecimal( float value )
{
// Assume that the texture to which uTextureSlotNumber refers contains
// a rendering of the digits '0123456789' packed together, such that
const vec2 startOfDigitsInTexture = vec2( 0, 0 ); // the lower-left corner of the first digit starts here and
const vec2 sizeOfDigit = vec2( 100, 125 ); // each digit spans this many pixels
const float nSpaces = 10.0; // assume we have this many digits' worth of space to render in
value = abs( value );
vec2 pos = vFragCoordinate - startOfDigitsInTexture;
float dpstart = max( 0.0, OrderOfMagnitude( value ) );
int decimal_position = int( dpstart - floor( pos.x / sizeOfDigit.x ) );
float remainder = mod( pos.x, sizeOfDigit.x );
if( pos.x >= 0.0 && pos.x < sizeOfDigit.x * nSpaces && pos.y >= 0.0 && pos.y < sizeOfDigit.y )
{
float digit_value = Digit( value, decimal_position, 10.0 );
vec2 textureSourcePosition = vec2( startOfDigitsInTexture.x + remainder + digit_value * sizeOfDigit.x, startOfDigitsInTexture.y + pos.y );
gl_FragColor = texture2D( uTextureSlotNumber, textureSourcePosition / uTextureSize );
}
// Render the decimal point
if( ( decimal_position == -1 && remainder / sizeOfDigit.x < 0.1 && abs( pos.y ) / sizeOfDigit.y < 0.1 ) ||
( decimal_position == 0 && remainder / sizeOfDigit.x > 0.9 && abs( pos.y ) / sizeOfDigit.y < 0.1 ) )
{
gl_FragColor = texture2D( uTextureSlotNumber, ( startOfDigitsInTexture + sizeOfDigit * vec2( 1.5, 0.5 ) ) / uTextureSize );
}
}
void main(void)
{
gl_FragColor = texture2D( uTextureSlotNumber, vFragCoordinate / uTextureSize );
RenderDecimal( 2.5 ); // for current demonstration purposes, just a constant
}
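For actual debugging, the hard-coded 2.5 in main() would simply be replaced by whatever value you want to examine, for example fed in via a uniform (uDebugValue is a hypothetical name, not part of the listing above):
uniform float uDebugValue;
void main(void)
{
    gl_FragColor = texture2D( uTextureSlotNumber, vFragCoordinate / uTextureSize );
    RenderDecimal( uDebugValue ); // examine a live value instead of a constant
}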

Related

What is this called and how to achieve it? Visuals in Processing

Hey, does anyone know how to achieve this effect using Processing, or what this is called?
I have been trying to use the wave gradient example in the Processing library and implementing Perlin noise, but I cannot get close to the GIF's quality.
I know the artist used Processing but cannot figure out how!
Link to gif:
https://giphy.com/gifs/processing-jodeus-QInYLzY33wMwM
The effect is reminiscent of Op Art (optical illusion art): I recommend reading/learning more about this fascinating genre and artists like:
Bridget Riley
(Bridget Riley, Intake, 1964)
(Bridget Riley, Hesitate, 1964,
Copyright: (c) Bridget Riley 2018. All rights reserved. / Photo (c) Tate)
Victor Vasarely
(Victor Vasarely, Zebra Couple)
(Victor Vasarely, VegaII)
Frank Stella
(Frank Stella, Untitled 1965, Image courtesy of Art Gallery NSW)
and more
You'll notice these waves are reminiscent of/heavily inspired by Bridget Riley's work.
I also recommend checking out San Charoenchai's album visualiser for Beach House - 7
As mentioned in my comment: you should post your attempt.
Waves and perlin noise could work for sure.
There are many ways to achieve a similar look.
Here's a tweaked version of Daniel Shiffman's Noise Wave example:
int numWaves = 24;
float[] yoff = new float[numWaves]; // 2nd dimension of perlin noise
float[] yoffIncrements = new float[numWaves];
void setup() {
size(640, 360);
noStroke();
for(int i = 0 ; i < numWaves; i++){
yoffIncrements[i] = map(i, 0, numWaves - 1, 0.01, 0.03);
}
}
void draw() {
background(0);
float waveHeight = height / numWaves;
for(int i = 0 ; i < numWaves; i++){
float waveY = i * waveHeight;
fill(i % 2 == 0 ? color(255) : color(0));
// We are going to draw a polygon out of the wave points
beginShape();
float xoff = 0; // Option #1: 2D Noise
// float xoff = yoff; // Option #2: 1D Noise
// Iterate over horizontal pixels
for (float x = 0; x <= width + 30; x += 20) {
// Calculate a y value according to noise, map to
float y = map(noise(xoff, yoff[i]), 0, 1, waveY , waveY + (waveHeight * 3)); // Option #1: 2D Noise
// float y = map(noise(xoff), 0, 1, 200,300); // Option #2: 1D Noise
// Set the vertex
vertex(x, y);
// Increment x dimension for noise
xoff += 0.05;
}
// increment y dimension for noise
yoff[i] += yoffIncrements[i];
vertex(width, height);
vertex(0, height);
endShape(CLOSE);
}
}
Notice the quality of the noise wave in comparison to the image you're trying to emulate: there is a constant rhythm to it. To me that is a hint that it's using cycling sine waves changing phase and amplitude (potentially even adding waves together).
I've written an extensive answer on animating sine waves here
(Reuben Margolin's kinetic sculpture system demo)
From your question it sounds like you would be comfortable implementing a sine wave animation. If it helps, here's an example of adding two waves together:
void setup(){
size(600,600);
noStroke();
}
void draw(){
background(0);
// how many waves per sketch height
int heightDivisions = 30;
// split the sketch height into equal height sections
float heightDivisionSize = (float)height / heightDivisions;
// for each height division
for(int j = 0 ; j < heightDivisions; j++){
// use % 2 to alternate between black and white
// see https://processing.org/reference/modulo.html and
// https://processing.org/reference/conditional.html for more
fill(j % 2 == 0 ? color(255) : color(0));
// offset drawing on Y axis
translate(0,(j * heightDivisionSize));
// start a wave shape
beginShape();
// first vertex is at the top left corner
vertex(0,height);
// how many horizontal (per wave) divisions ?
int widthDivisions = 12;
// equally space the points on the wave horizontally
float widthDivsionSize = (float)width / widthDivisions;
// for each point on the wave
for(int i = 0; i <= widthDivisions; i++){
// calculate different phases
// play with arithmetic operators to make interesting wave additions
float phase1 = (frameCount * 0.01) + ((i * j) * 0.025);
float phase2 = (frameCount * 0.05) + ((i + j) * 0.25);
// calculate vertex x position
float x = widthDivsionSize * i;
// multiple sine waves
// (can use cos() and use other ratios too
// 150 in this case is the wave amplitude (e.g. from -150 to + 150)
float y = ((sin(phase1) * sin(phase2) * 150));
// draw calculated vertex
vertex(x,y);
}
// last vertex is at bottom right corner
vertex(width,height);
// finish the shape
endShape();
}
}
The result:
Minor note on performance: this could be implemented more efficiently using PShape, however I recommend playing with the maths/geometry to find the form you're after, then as a last step think of optimizing it.
My intention is not to show you how to create an exact replica, but to show there's more to Op Art than an effect and hopefully inspire you to explore other methods of achieving something similar in the hope that you will discover your own methods and outcomes: something new and of your own through fun happy accidents.
In terms of other techniques/avenues to explore:
displacement maps:
Using an alternating black/white straight bars texture on wavy 3D geometry
using shaders:
Shaders are a huge topic on their own, but it's worth noting:
There's a very good Processing Shader Tutorial
You might be able to explore fragment shaders on Shadertoy, tweak the code in the browser, then make slight changes so you can run them in Processing.
Here are a few quick examples:
https://www.shadertoy.com/view/Wts3DB
tweaked for black/white waves in Processing as shader-Wts3DB.frag
// https://www.shadertoy.com/view/Wts3DB
uniform vec2 iResolution;
uniform float iTime;
#define COUNT 6.
#define COL_BLACK vec3(23,32,38) / 255.0
#define SF 1./min(iResolution.x,iResolution.y)
#define SS(l,s) smoothstep(SF,-SF,l-s)
#define hue(h) clamp( abs( fract(h + vec4(3,2,1,0)/3.) * 6. - 3.) -1. , 0., 1.)
// Original noise code from https://www.shadertoy.com/view/4sc3z2
#define MOD3 vec3(.1031,.11369,.13787)
vec3 hash33(vec3 p3)
{
p3 = fract(p3 * MOD3);
p3 += dot(p3, p3.yxz+19.19);
return -1.0 + 2.0 * fract(vec3((p3.x + p3.y)*p3.z, (p3.x+p3.z)*p3.y, (p3.y+p3.z)*p3.x));
}
float simplex_noise(vec3 p)
{
const float K1 = 0.333333333;
const float K2 = 0.166666667;
vec3 i = floor(p + (p.x + p.y + p.z) * K1);
vec3 d0 = p - (i - (i.x + i.y + i.z) * K2);
vec3 e = step(vec3(0.0), d0 - d0.yzx);
vec3 i1 = e * (1.0 - e.zxy);
vec3 i2 = 1.0 - e.zxy * (1.0 - e);
vec3 d1 = d0 - (i1 - 1.0 * K2);
vec3 d2 = d0 - (i2 - 2.0 * K2);
vec3 d3 = d0 - (1.0 - 3.0 * K2);
vec4 h = max(0.6 - vec4(dot(d0, d0), dot(d1, d1), dot(d2, d2), dot(d3, d3)), 0.0);
vec4 n = h * h * h * h * vec4(dot(d0, hash33(i)), dot(d1, hash33(i + i1)), dot(d2, hash33(i + i2)), dot(d3, hash33(i + 1.0)));
return dot(vec4(31.316), n);
}
void mainImage( vec4 fragColor, vec2 fragCoord )
{
}
void main(void) {
//vec2 uv = vec2(gl_FragColor.x / iResolution.y, gl_FragColor.y / iResolution.y);
vec2 uv = gl_FragCoord.xy / iResolution.y;
float m = 0.;
float t = iTime *.5;
vec3 col;
for(float i=COUNT; i>=0.; i-=1.){
float edge = simplex_noise(vec3(uv * vec2(2., 0.) + vec2(0, t + i*.15), 3.))*.2 + (.95/COUNT)*i;
float mi = SS(edge, uv.y) - SS(edge + .095, uv.y);
m += mi;
if(mi > 0.){
col = vec3(1.0);
}
}
col = mix(COL_BLACK, col, m);
gl_FragColor = vec4(col,1.0);
// mainImage(gl_FragColor,gl_FragCoord);
}
loaded in Processing as:
PShader shader;
void setup(){
size(300,300,P2D);
noStroke();
shader = loadShader("shader-Wts3DB.frag");
shader.set("iResolution",(float)width, float(height));
}
void draw(){
background(0);
shader.set("iTime",frameCount * 0.05);
shader(shader);
rect(0,0,width,height);
}
https://www.shadertoy.com/view/MtsXzl
tweaked as shader-MtsXzl.frag
//https://www.shadertoy.com/view/MtsXzl
#define SHOW_GRID 1
const float c_scale = 0.5;
const float c_rate = 2.0;
#define FLT_MAX 3.402823466e+38
uniform vec3 iMouse;
uniform vec2 iResolution;
uniform float iTime;
//=======================================================================================
float CubicHermite (float A, float B, float C, float D, float t)
{
float t2 = t*t;
float t3 = t*t*t;
float a = -A/2.0 + (3.0*B)/2.0 - (3.0*C)/2.0 + D/2.0;
float b = A - (5.0*B)/2.0 + 2.0*C - D / 2.0;
float c = -A/2.0 + C/2.0;
float d = B;
return a*t3 + b*t2 + c*t + d;
}
//=======================================================================================
float hash(float n) {
return fract(sin(n) * 43758.5453123);
}
//=======================================================================================
float GetHeightAtTile(vec2 T)
{
float rate = hash(hash(T.x) * hash(T.y))*0.5+0.5;
return (sin(iTime*rate*c_rate) * 0.5 + 0.5) * c_scale;
}
//=======================================================================================
float HeightAtPos(vec2 P)
{
vec2 tile = floor(P);
P = fract(P);
float CP0X = CubicHermite(
GetHeightAtTile(tile + vec2(-1.0,-1.0)),
GetHeightAtTile(tile + vec2(-1.0, 0.0)),
GetHeightAtTile(tile + vec2(-1.0, 1.0)),
GetHeightAtTile(tile + vec2(-1.0, 2.0)),
P.y
);
float CP1X = CubicHermite(
GetHeightAtTile(tile + vec2( 0.0,-1.0)),
GetHeightAtTile(tile + vec2( 0.0, 0.0)),
GetHeightAtTile(tile + vec2( 0.0, 1.0)),
GetHeightAtTile(tile + vec2( 0.0, 2.0)),
P.y
);
float CP2X = CubicHermite(
GetHeightAtTile(tile + vec2( 1.0,-1.0)),
GetHeightAtTile(tile + vec2( 1.0, 0.0)),
GetHeightAtTile(tile + vec2( 1.0, 1.0)),
GetHeightAtTile(tile + vec2( 1.0, 2.0)),
P.y
);
float CP3X = CubicHermite(
GetHeightAtTile(tile + vec2( 2.0,-1.0)),
GetHeightAtTile(tile + vec2( 2.0, 0.0)),
GetHeightAtTile(tile + vec2( 2.0, 1.0)),
GetHeightAtTile(tile + vec2( 2.0, 2.0)),
P.y
);
return CubicHermite(CP0X, CP1X, CP2X, CP3X, P.x);
}
//=======================================================================================
vec3 NormalAtPos( vec2 p )
{
float eps = 0.01;
vec3 n = vec3( HeightAtPos(vec2(p.x-eps,p.y)) - HeightAtPos(vec2(p.x+eps,p.y)),
2.0*eps,
HeightAtPos(vec2(p.x,p.y-eps)) - HeightAtPos(vec2(p.x,p.y+eps)));
return normalize( n );
}
//=======================================================================================
float RayIntersectSphere (vec4 sphere, in vec3 rayPos, in vec3 rayDir)
{
//get the vector from the center of this circle to where the ray begins.
vec3 m = rayPos - sphere.xyz;
//get the dot product of the above vector and the ray's vector
float b = dot(m, rayDir);
float c = dot(m, m) - sphere.w * sphere.w;
//exit if r's origin outside s (c > 0) and r pointing away from s (b > 0)
if(c > 0.0 && b > 0.0)
return -1.0;
//calculate discriminant
float discr = b * b - c;
//a negative discriminant corresponds to ray missing sphere
if(discr < 0.0)
return -1.0;
//ray now found to intersect sphere, compute smallest t value of intersection
float collisionTime = -b - sqrt(discr);
//if t is negative, ray started inside sphere so clamp t to zero and remember that we hit from the inside
if(collisionTime < 0.0)
collisionTime = -b + sqrt(discr);
return collisionTime;
}
//=======================================================================================
vec3 DiffuseColor (in vec3 pos)
{
#if SHOW_GRID
pos = mod(floor(pos),2.0);
return vec3(mod(pos.x, 2.0) < 1.0 ? 1.0 : 0.0);
#else
return vec3(0.1, 0.8, 0.9);
#endif
}
//=======================================================================================
vec3 ShadePoint (in vec3 pos, in vec3 rayDir, float time, bool fromUnderneath)
{
vec3 diffuseColor = DiffuseColor(pos);
vec3 reverseLightDir = normalize(vec3(1.0,1.0,-1.0));
vec3 lightColor = vec3(1.0);
vec3 ambientColor = vec3(0.05);
vec3 normal = NormalAtPos(pos.xz);
normal *= fromUnderneath ? -1.0 : 1.0;
// diffuse
vec3 color = diffuseColor;
float dp = dot(normal, reverseLightDir);
if(dp > 0.0)
color += (diffuseColor * lightColor);
return color;
}
//=======================================================================================
vec3 HandleRay (in vec3 rayPos, in vec3 rayDir, in vec3 pixelColor, out float hitTime)
{
float time = 0.0;
float lastHeight = 0.0;
float lastY = 0.0;
float height;
bool hitFound = false;
hitTime = FLT_MAX;
bool fromUnderneath = false;
vec2 timeMinMax = vec2(0.0, 20.0);
time = timeMinMax.x;
const int c_numIters = 100;
float deltaT = (timeMinMax.y - timeMinMax.x) / float(c_numIters);
vec3 pos = rayPos + rayDir * time;
float firstSign = sign(pos.y - HeightAtPos(pos.xz));
for (int index = 0; index < c_numIters; ++index)
{
pos = rayPos + rayDir * time;
height = HeightAtPos(pos.xz);
if (sign(pos.y - height) * firstSign < 0.0)
{
fromUnderneath = firstSign < 0.0;
hitFound = true;
break;
}
time += deltaT;
lastHeight = height;
lastY = pos.y;
}
if (hitFound) {
time = time - deltaT + deltaT*(lastHeight-lastY)/(pos.y-lastY-height+lastHeight);
pos = rayPos + rayDir * time;
pixelColor = ShadePoint(pos, rayDir, time, fromUnderneath);
hitTime = time;
}
return pixelColor;
}
//=======================================================================================
void main()
{
// scrolling camera
vec3 cameraOffset = vec3(iTime, 0.5, iTime);
//----- camera
vec2 mouse = iMouse.xy / iResolution.xy;
vec3 cameraAt = vec3(0.5,0.5,0.5) + cameraOffset;
float angleX = iMouse.z > 0.0 ? 6.28 * mouse.x : 3.14 + iTime * 0.25;
float angleY = iMouse.z > 0.0 ? (mouse.y * 6.28) - 0.4 : 0.5;
vec3 cameraPos = (vec3(sin(angleX)*cos(angleY), sin(angleY), cos(angleX)*cos(angleY))) * 5.0;
// float angleX = 0.8;
// float angleY = 0.8;
// vec3 cameraPos = vec3(0.0,0.0,0.0);
cameraPos += vec3(0.5,0.5,0.5) + cameraOffset;
vec3 cameraFwd = normalize(cameraAt - cameraPos);
vec3 cameraLeft = normalize(cross(normalize(cameraAt - cameraPos), vec3(0.0,sign(cos(angleY)),0.0)));
vec3 cameraUp = normalize(cross(cameraLeft, cameraFwd));
float cameraViewWidth = 6.0;
float cameraViewHeight = cameraViewWidth * iResolution.y / iResolution.x;
float cameraDistance = 6.0; // intuitively backwards!
// Objects
vec2 rawPercent = (gl_FragCoord.xy / iResolution.xy);
vec2 percent = rawPercent - vec2(0.5,0.5);
vec3 rayTarget = (cameraFwd * vec3(cameraDistance,cameraDistance,cameraDistance))
- (cameraLeft * percent.x * cameraViewWidth)
+ (cameraUp * percent.y * cameraViewHeight);
vec3 rayDir = normalize(rayTarget);
float hitTime = FLT_MAX;
vec3 pixelColor = vec3(1.0, 1.0, 1.0);
pixelColor = HandleRay(cameraPos, rayDir, pixelColor, hitTime);
gl_FragColor = vec4(clamp(pixelColor,0.0,1.0), 1.0);
}
and the mouse interactive Processing sketch:
PShader shader;
void setup(){
size(300,300,P2D);
noStroke();
shader = loadShader("shader-MtsXzl.frag");
shader.set("iResolution",(float)width, float(height));
}
void draw(){
background(0);
shader.set("iTime",frameCount * 0.05);
shader.set("iMouse",(float)mouseX , (float)mouseY, mousePressed ? 1.0 : 0.0);
shader(shader);
rect(0,0,width,height);
}
Shadertoy is great way to play/learn: have fun !
Update
Here's a quick test tweaking Daniel Shiffman's 3D Terrain Generation example to add a striped texture and basic sine waves instead of Perlin noise:
// Daniel Shiffman
// http://codingtra.in
// http://patreon.com/codingtrain
// Code for: https://youtu.be/IKB1hWWedMk
int cols, rows;
int scl = 20;
int w = 2000;
int h = 1600;
float flying = 0;
float[][] terrain;
PImage texture;
void setup() {
size(600, 600, P3D);
textureMode(NORMAL);
noStroke();
cols = w / scl;
rows = h/ scl;
terrain = new float[cols][rows];
texture = getBarsTexture(512,512,96);
}
void draw() {
flying -= 0.1;
float yoff = flying;
for (int y = 0; y < rows; y++) {
float xoff = 0;
for (int x = 0; x < cols; x++) {
//terrain[x][y] = map(noise(xoff, yoff), 0, 1, -100, 100);
terrain[x][y] = map(sin(xoff) * sin(yoff), 0, 1, -60, 60);
xoff += 0.2;
}
yoff += 0.2;
}
background(0);
translate(width/2, height/2+50);
rotateX(PI/9);
translate(-w/2, -h/2);
for (int y = 0; y < rows-1; y++) {
beginShape(TRIANGLE_STRIP);
texture(texture);
for (int x = 0; x < cols; x++) {
float u0 = map(x,0,cols-1,0.0,1.0);
float u1 = map(x+1,0,cols-1,0.0,1.0);
float v0 = map(y,0,rows-1,0.0,1.0);
float v1 = map(y+1,0,rows-1,0.0,1.0);
vertex(x*scl, y*scl, terrain[x][y], u0, v0);
vertex(x*scl, (y+1)*scl, terrain[x][y+1], u1, v1);
}
endShape();
}
}
PGraphics getBarsTexture(int textureWidth, int textureHeight, int numBars){
PGraphics texture = createGraphics(textureWidth, textureHeight);
int moduleSide = textureWidth / numBars;
texture.beginDraw();
texture.background(0);
texture.noStroke();
for(int i = 0; i < numBars; i+= 2){
texture.rect(0, i * moduleSide, textureWidth, moduleSide);
}
texture.endDraw();
return texture;
}

Optimize WebGL shader?

I wrote the following shader to render a pattern with a bunch of concentric circles. Eventually I want to have each rotating sphere be a light emitter to create something along these lines.
Of course right now I'm just doing the most basic part to render the different objects.
Unfortunately the shader is incredibly slow (16fps full screen on a high-end macbook). I'm pretty sure this is due to the numerous for loops and branching that I have in the shader. I'm wondering how I can pull off the geometry I'm trying to achieve in a more performance optimized way:
EDIT: you can run the shader here: https://www.shadertoy.com/view/lssyRH
One obvious optimization I am missing is that currently all the fragments are checked against the entire 24 surrounding circles. It would be pretty quick and easy to just discard these checks entirely by checking if the fragment intersects the outer bounds of the diagram. I guess I'm just trying to get a handle on how the best practice is of doing something like this.
#define N 10
#define M 5
#define K 24
#define M_PI 3.1415926535897932384626433832795
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
float aspectRatio = iResolution.x / iResolution.y;
float h = 1.0;
float w = aspectRatio;
vec2 uv = vec2(fragCoord.x / iResolution.x * aspectRatio, fragCoord.y / iResolution.y);
float radius = 0.01;
float orbitR = 0.02;
float orbiterRadius = 0.005;
float centerRadius = 0.002;
float encloseR = 2.0 * orbitR;
float encloserRadius = 0.002;
float spacingX = (w / (float(N) + 1.0));
float spacingY = h / (float(M) + 1.0);
float x = 0.0;
float y = 0.0;
vec4 totalLight = vec4(0.0, 0.0, 0.0, 1.0);
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
// compute the center of the diagram
vec2 center = vec2(spacingX * (float(i) + 1.0), spacingY * (float(j) + 1.0));
x = center.x + orbitR * cos(iGlobalTime);
y = center.y + orbitR * sin(iGlobalTime);
vec2 bulb = vec2(x,y);
if (length(uv - center) < centerRadius) {
// frag intersects white center marker
fragColor = vec4(1.0);
return;
} else if (length(uv - bulb) < radius) {
// intersects rotating "light"
fragColor = vec4(uv,0.5+0.5*sin(iGlobalTime),1.0);
return;
} else {
// intersects one of the enclosing 24 cylinders
for(int k = 0; k < K; k++) {
float theta = M_PI * 2.0 * float(k)/ float(K);
x = center.x + cos(theta) * encloseR;
y = center.y + sin(theta) * encloseR;
vec2 encloser = vec2(x,y);
if (length(uv - encloser) < encloserRadius) {
fragColor = vec4(uv,0.5+0.5*sin(iGlobalTime),1.0);
return;
}
}
}
}
}
}
Keeping in mind that you want to optimize the fragment shader, and only the fragment shader:
Move the sin(iGlobalTime) and cos(iGlobalTime) out of the loops, these remain static over the whole draw call so no need to recalculate them every loop iteration.
GPUs employ vectorized instruction sets (SIMD) where possible, take advantage of that. You're wasting lots of cycles by doing multiple scalar ops where you could use a single vector instruction(see annotated code)
[Three years wiser me here: I'm not really sure if this statement is true in regards to how modern GPUs process the instructions, however it certainly does help readability and maybe even give a hint or two to the compiler]
Do your radius checks squared, save that sqrt(length) for when you really need it
Replace float casts of constants(your loop limits) with a float constant(intelligent shader compilers will already do this, not something to count on though)
Don't have undefined behavior in your shader(not writing to gl_FragColor)
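For that last point, a minimal sketch of one possible fix (assuming a black background is acceptable) is to write a defined fallback at the very top of mainImage, before any of the early returns:
fragColor = vec4(0.0, 0.0, 0.0, 1.0); // fallback, so every fragment writes something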
Here is an optimized and annotated version of your shader(still containing that undefined behavior, just like the one you provided). Annotation is in the form of:
// annotation
// old code, if any
new code
#define N 10
// define float constant N
#define fN 10.
#define M 5
// define float constant M
#define fM 5.
#define K 24
// define float constant K
#define fK 24.
#define M_PI 3.1415926535897932384626433832795
// predefine 2 times PI
#define M_PI2 6.28318531
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
float aspectRatio = iResolution.x / iResolution.y;
// we dont need these separate
// float h = 1.0;
// float w = aspectRatio;
// use vector ops(2 divs 1 mul => 1 div 1 mul)
// vec2 uv = vec2(fragCoord.x / iResolution.x * aspectRatio, fragCoord.y / iResolution.y);
vec2 uv = fragCoord.xy / iResolution.xy;
uv.x *= aspectRatio;
// most of the following declarations should be predefined or marked as "const"...
float radius = 0.01;
// precalc squared radius
float radius2 = radius*radius;
float orbitR = 0.02;
float orbiterRadius = 0.005;
float centerRadius = 0.002;
// precalc squared center radius
float centerRadius2 = centerRadius * centerRadius;
float encloseR = 2.0 * orbitR;
float encloserRadius = 0.002;
// precalc squared encloser radius
float encloserRadius2 = encloserRadius * encloserRadius;
// Use float constants and vector ops here(2 casts 2 adds 2 divs => 1 add 1 div)
// float spacingX = w / (float(N) + 1.0);
// float spacingY = h / (float(M) + 1.0);
vec2 spacing = vec2(aspectRatio, 1.0) / (vec2(fN, fM)+1.);
// calc sin and cos of global time
// saves N*M(sin,cos,2 muls)
vec2 stct = vec2(sin(iGlobalTime), cos(iGlobalTime));
vec2 orbit = orbitR * stct;
// not needed anymore
// float x = 0.0;
// float y = 0.0;
// was never used
// vec4 totalLight = vec4(0.0, 0.0, 0.0, 1.0);
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
// compute the center of the diagram
// Use vector ops
// vec2 center = vec2(spacingX * (float(i) + 1.0), spacingY * (float(j) + 1.0));
vec2 center = spacing * (vec2(i,j)+1.0);
// Again use vector opts, use precalced time trig(orbit = orbitR * stct)
// x = center.x + orbitR * cos(iGlobalTime);
// y = center.y + orbitR * sin(iGlobalTime);
// vec2 bulb = vec2(x,y);
vec2 bulb = center + orbit;
// calculate offsets
vec2 centerOffset = uv - center;
vec2 bulbOffset = uv - bulb;
// use squared length check
// if (length(uv - center) < centerRadius) {
if (dot(centerOffset, centerOffset) < centerRadius2) {
// frag intersects white center marker
fragColor = vec4(1.0);
return;
// use squared length check
// } else if (length(uv - bulb) < radius) {
} else if (dot(bulbOffset, bulbOffset) < radius2) {
// Use precalced sin global time in stct.x
// intersects rotating "light"
fragColor = vec4(uv,0.5+0.5*stct.x,1.0);
return;
} else {
// intersects one of the enclosing 24 cylinders
for(int k = 0; k < K; k++) {
// use predefined 2*PI and float K
float theta = M_PI2 * float(k) / fK;
// Use vector ops(2 muls 2 adds => 1 mul 1 add)
// x = center.x + cos(theta) * encloseR;
// y = center.y + sin(theta) * encloseR;
// vec2 encloser = vec2(x,y);
vec2 encloseOffset = uv - (center + vec2(cos(theta),sin(theta)) * encloseR);
if (dot(encloseOffset,encloseOffset) < encloserRadius2) {
fragColor = vec4(uv,0.5+0.5*stct.x,1.0);
return;
}
}
}
}
}
}
I did a little more thinking ... I realized the best way to optimize it is to actually change the logic so that before doing intersection tests on the small circles it checks the bounds of the group of circles. This got it to run at 60fps:
Example here:
https://www.shadertoy.com/view/lssyRH
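As a hedged sketch of that change, written against the original shader (it goes inside the i/j loops right after center is computed; groupRadius is just a name I am using here):
// radius that bounds one whole diagram: the enclosing ring plus its small circles
float groupRadius = encloseR + encloserRadius;
vec2 toCenter = uv - center;
// if the fragment cannot possibly hit this diagram, skip all its per-circle tests
if (dot(toCenter, toCenter) > groupRadius * groupRadius) {
    continue;
}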

How do I convert between float and vec4,vec3,vec2?

This question is very related to the question here (How do I convert a vec4 rgba value to a float?).
There are already some articles and questions related to this, but I notice that most of them do not identify which type of floating-point value is being packed.
As far as I can come up with, the floating-value packing/unpacking formulas fall into the categories below.
unsigned normalized float
signed normalized float
signed ranged float (the floating value I can find range limitation)
unsigned ranged float
unsigned float
signed float
However, these are actually just 2 cases. The other packing/unpacking variants can be handled by these 2 methods:
unsigned ranged float (I can pack/unpack by simple bit shifting)
signed float
I want to pack and unpack signed floating-point values into vec3 or vec2 as well.
In my case, the floating-point value is not guaranteed to be normalized, so I cannot use the simple bit-shifting approach.
If you know the max range of values you want to store, say -5 to +5, then the easiest way is just to convert that range to a value from 0 to 1. Expand that to the number of bits you have and then break it into parts.
vec2 packFloatInto8BitVec2(float v, float min, float max) {
float zeroToOne = (v - min) / (max - min);
float zeroTo16Bit = zeroToOne * 256.0 * 255.0;
return vec2(mod(zeroTo16Bit, 256.0), zeroTo16Bit / 256.0);
}
To put it back you do the opposite. Assemble the parts, divide to get back to a zeroToOne value, then expand by the range.
float unpack8BitVec2IntoFloat(vec2 v, float min, float max) {
float zeroTo16Bit = v.x + v.y * 256.0;
float zeroToOne = zeroTo16Bit / 256.0 / 255.0;
return zeroToOne * (max - min) + min;
}
For vec3 just expand it
vec3 packFloatInto8BitVec3(float v, float min, float max) {
float zeroToOne = (v - min) / (max - min);
float zeroTo24Bit = zeroToOne * 256.0 * 256.0 * 255.0;
return vec3(mod(zeroTo24Bit, 256.0), mod(zeroTo24Bit / 256.0, 256.0), zeroTo24Bit / 256.0 / 256.0);
}
float unpack8BitVec3IntoFloat(vec3 v, float min, float max) {
float zeroTo24Bit = v.x + v.y * 256.0 + v.z * 256.0 * 256.0;
float zeroToOne = zeroTo24Bit / 256.0 / 256.0 / 256.0;
return zeroToOne * (max - min) + min;
}
I wrote a small example a few days ago on Shadertoy:
https://www.shadertoy.com/view/XdK3Dh
It stores a float as RGB or loads a float from a pixel. There is also a test that the functions are exact inverses of each other (a lot of other functions I have seen have bugs in some ranges because of bad precision).
The entire example assumes you want to save values in a buffer and read them back in the next draw. Having only 256 values per channel limits you to 16777216 different values. Most of the time I don't need a larger scale. I also shifted it to get a signed float in the interval <-8388608;8388608> instead.
float color2float(in vec3 c) {
c *= 255.;
c = floor(c); // without this value could be shifted for some intervals
return c.r*256.*256. + c.g*256. + c.b - 8388608.;
}
// values out of <-8388608;8388608> are stored as min/max values
vec3 float2color(in float val) {
val += 8388608.; // this makes values signed
if(val < 0.) {
return vec3(0.);
}
if(val > 16777216.) {
return vec3(1.);
}
vec3 c = vec3(0.);
c.b = mod(val, 256.);
val = floor(val/256.);
c.g = mod(val, 256.);
val = floor(val/256.);
c.r = mod(val, 256.);
return c/255.;
}
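For instance, the round trip I have in mind looks roughly like this (uPrevFrame and vUv are placeholder names for the feedback texture and its coordinate, not part of the code above):
// write pass: store a (possibly large) value in this pixel's RGB
gl_FragColor = vec4( float2color( 12345.0 ), 1.0 );
// read pass on the next draw: recover it from the previous frame's buffer
float restored = color2float( texture2D( uPrevFrame, vUv ).rgb );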
One more thing: values that overflow will be clamped to the min/max value.
In order to pack a floating-point value into a vec2, vec3 or vec4, either the range of the source values has to be restricted and well specified, or the exponent has to be stored somehow too. In general, if the significant digits of a floating-point number are to be packed into bytes, consecutive 8-bit packages have to be extracted from the significant digits and each stored in a byte.
Encode a floating point number in a restricted and predefined range
A value range [minVal, maxVal] must be defined which includes all values that are to be encoded, and this range must be mapped to [0.0, 1.0].
Encoding of a floating point number in the range [minVal, maxVal] to vec2, vec3 and vec4:
vec2 EncodeRangeV2( in float value, in float minVal, in float maxVal )
{
value = clamp( (value-minVal) / (maxVal-minVal), 0.0, 1.0 );
value *= (256.0*256.0 - 1.0) / (256.0*256.0);
vec3 encode = fract( value * vec3(1.0, 256.0, 256.0*256.0) );
return encode.xy - encode.yz / 256.0 + 1.0/512.0;
}
vec3 EncodeRangeV3( in float value, in float minVal, in float maxVal )
{
value = clamp( (value-minVal) / (maxVal-minVal), 0.0, 1.0 );
value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
vec4 EncodeRangeV4( in float value, in float minVal, in float maxVal )
{
value = clamp( (value-minVal) / (maxVal-minVal), 0.0, 1.0 );
value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
return vec4( encode.xyz - encode.yzw / 256.0, encode.w ) + 1.0/512.0;
}
Decoding of a vec2, vec3 and vec4 to a floating point number in the range [minVal, maxVal]:
float DecodeRangeV2( in vec2 pack, in float minVal, in float maxVal )
{
float value = dot( pack, 1.0 / vec2(1.0, 256.0) );
value *= (256.0*256.0) / (256.0*256.0 - 1.0);
return mix( minVal, maxVal, value );
}
float DecodeRangeV3( in vec3 pack, in float minVal, in float maxVal )
{
float value = dot( pack, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
value *= (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
return mix( minVal, maxVal, value );
}
float DecodeRangeV4( in vec4 pack, in float minVal, in float maxVal )
{
float value = dot( pack, 1.0 / vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
value *= (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
return mix( minVal, maxVal, value );
}
Note: since a standard 32-bit IEEE 754 number has only 24 significant bits, it is completely sufficient to encode the number in 3 bytes.
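As a quick check of that note: 3 bytes hold 3 * 8 = 24 bits, i.e. 2^24 = 16777216 distinct values, which matches the value count mentioned in the answer above.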
Encode the significant digits and the exponent of a floating point number
Encoding of the significant digits of a floating point number and its exponent to vec2, vec3 and vec4:
vec2 EncodeExpV2( in float value )
{
int exponent = int( log2( abs( value ) ) + 1.0 );
value /= exp2( float( exponent ) );
value = (value + 1.0) * 255.0 / (2.0*256.0);
vec2 encode = fract( value * vec2(1.0, 256.0) );
return vec2( encode.x - encode.y / 256.0 + 1.0/512.0, (float(exponent) + 127.5) / 256.0 );
}
vec3 EncodeExpV3( in float value )
{
int exponent = int( log2( abs( value ) ) + 1.0 );
value /= exp2( float( exponent ) );
value = (value + 1.0) * (256.0*256.0 - 1.0) / (2.0*256.0*256.0);
vec3 encode = fract( value * vec3(1.0, 256.0, 256.0*256.0) );
return vec3( encode.xy - encode.yz / 256.0 + 1.0/512.0, (float(exponent) + 127.5) / 256.0 );
}
vec4 EncodeExpV4( in float value )
{
int exponent = int( log2( abs( value ) ) + 1.0 );
value /= exp2( float( exponent ) );
value = (value + 1.0) * (256.0*256.0*256.0 - 1.0) / (2.0*256.0*256.0*256.0);
vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
return vec4( encode.xyz - encode.yzw / 256.0 + 1.0/512.0, (float(exponent) + 127.5) / 256.0 );
}
Decoding of a vec2, vec3 and vec4 to the significant digits of a floating point number and its exponent:
float DecodeExpV2( in vec2 pack )
{
int exponent = int( pack.y * 256.0 - 127.0 );
float value = pack.x * (2.0*256.0) / 255.0 - 1.0;
return value * exp2( float(exponent) );
}
float DecodeExpV3( in vec3 pack )
{
int exponent = int( pack.z * 256.0 - 127.0 );
float value = dot( pack.xy, 1.0 / vec2(1.0, 256.0) );
value = value * (2.0*256.0*256.0) / (256.0*256.0 - 1.0) - 1.0;
return value * exp2( float(exponent) );
}
float DecodeExpV4( in vec4 pack )
{
int exponent = int( pack.w * 256.0 - 127.0 );
float value = dot( pack.xyz, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
value = value * (2.0*256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0) - 1.0;
return value * exp2( float(exponent) );
}
See also the answer to the following question:
How do you pack one 32bit int Into 4, 8bit ints in glsl / webgl?
I tested gman's solution and found that the scale factor was incorrect and produced roundoff errors, and that there needs to be an additional division by 255.0 if you want to store the result in an RGB texture. So this is my revised solution:
#define SCALE_FACTOR (256.0 * 256.0 * 256.0 - 1.0)
vec3 packFloatInto8BitVec3(float v, float min, float max) {
float zeroToOne = (v - min) / (max - min);
float zeroTo24Bit = zeroToOne * SCALE_FACTOR;
return floor(
vec3(
mod(zeroTo24Bit, 256.0),
mod(zeroTo24Bit / 256.0, 256.0),
zeroTo24Bit / 256.0 / 256.0
)
) / 255.0;
}
float unpack8BitVec3IntoFloat(vec3 v, float min, float max) {
vec3 scaleVector = vec3(1.0, 256.0, 256.0 * 256.0) / SCALE_FACTOR * 255.0;
float zeroToOne = dot(v, scaleVector);
return zeroToOne * (max - min) + min;
}
Example:
If you pack 0.25 using min=0 and max=1, you will get (1.0, 1.0, 0.247059)
If you unpack that vector, you will get 0.249999970197678
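Working that example through: 0.25 * SCALE_FACTOR = 4194303.75, which the floor() quantizes to the 24-bit integer 4194303 = 255 + 255*256 + 63*256*256, i.e. channels (255, 255, 63), giving the vector above after the division by 255.0. Unpacking yields 4194303 / 16777215 ≈ 0.24999999; the remaining drift to 0.249999970197678 is ordinary 32-bit float rounding.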

Radial reveal of image in OpenGL shader

I'm playing with a shader concept to radially reveal an image using a shader in OpenGL ES. The end goal is to create a circular progress bar by discarding fragments in a fragment shader that renders a full circular progress texture.
I have coded my idea here in ShaderToy so you can play with it. I can't seem to get it to work, and since there's no way to debug I'm having a hard time figuring out why.
Here's my glsl code for the fragment shader:
float magnitude(vec2 vec)
{
return sqrt((vec.x * vec.x) + (vec.y * vec.y));
}
float angleBetween(vec2 v1, vec2 v2)
{
return acos(dot(v1, v2) / (magnitude(v1) * magnitude(v2)));
}
float getTargetAngle()
{
return clamp(iGlobalTime, 0.0, 360.0);
}
// OpenGL uses upper left as origin by default
bool shouldDrawFragment(vec2 fragCoord)
{
float targetAngle = getTargetAngle();
float centerX = iResolution.x / 2.0;
float centerY = iResolution.y / 2.0;
vec2 center = vec2(centerX, centerY);
vec2 up = vec2(centerX, 0.0) - center;
vec2 v2 = fragCoord - center;
float angleBetween = angleBetween(up, v2);
return (angleBetween >= 0.0) && (angleBetween <= targetAngle);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 uv = fragCoord.xy / iResolution.xy;
if (shouldDrawFragment(fragCoord)) {
fragColor = texture2D(iChannel0, vec2(uv.x, -uv.y));
} else {
fragColor = texture2D(iChannel1, vec2(uv.x, -uv.y));
}
}
It sweeps out revealing from the bottom on both sides. I just want it to sweep out from a vector pointing straight up, and moving in a clockwise motion.
Try this code:
const float PI = 3.1415926;
const float TWO_PI = 6.2831852;
float magnitude(vec2 vec)
{
return sqrt((vec.x * vec.x) + (vec.y * vec.y));
}
float angleBetween(vec2 v1, vec2 v2)
{
return atan( v1.x - v2.x, v1.y - v2.y ) + PI;
}
float getTargetAngle()
{
return clamp( iGlobalTime, 0.0, TWO_PI );
}
// OpenGL uses upper left as origin by default
bool shouldDrawFragment(vec2 fragCoord)
{
float targetAngle = getTargetAngle();
float centerX = iResolution.x / 2.0;
float centerY = iResolution.y / 2.0;
vec2 center = vec2(centerX, centerY);
float a = angleBetween(center, fragCoord );
return a <= targetAngle;
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 uv = fragCoord.xy / iResolution.xy;
if (shouldDrawFragment(fragCoord)) {
fragColor = texture2D(iChannel0, vec2(uv.x, -uv.y));
} else {
fragColor = texture2D(iChannel1, vec2(uv.x, -uv.y));
}
}
Explanation:
The main change I made was the way the angle between two vectors is calculated:
return atan( v1.x - v2.x, v1.y - v2.y ) + PI;
This is the angle of the difference vector between v1 and v2. If you swap the x and y values it will change the direction of where the 0 angle is, i.e. if you try this:
return atan( v1.y - v2.y, v1.x - v2.x ) + PI;
the circle begins from the right rather than upwards. You can also invert the value of atan to change the direction of the animation.
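For example, this hedged variant simply negates atan, which reverses the direction of the sweep:
return -atan( v1.x - v2.x, v1.y - v2.y ) + PI;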
You also don't need to worry about the up vector when calculating the angle between, notice the code just takes the angle between the center and the current frag co-ordinates:
float a = angleBetween(center, fragCoord );
Other Notes:
Remember calculations are in radians, not degrees so I changed the clamp on time (although this doesn't really affect the output):
return clamp( iGlobalTime, 0.0, TWO_PI );
You have a variable with the same name as one of your functions:
float angleBetween = angleBetween(up, v2);
which should be avoided since not all implementations are happy with this; I couldn't compile your shader on my current machine until I changed it.
Change only the two functions below:
float getTargetAngle()
{
return clamp(iGlobalTime, 0.0, 6.14);
}
bool shouldDrawFragment(vec2 fragCoord)
{
float targetAngle = getTargetAngle();
float centerX = iResolution.x / 2.0;
float centerY = iResolution.y / 2.0;
vec2 center = vec2(centerX, centerY);
vec2 up = vec2(centerX, 0.0) - center;
vec2 v2 = fragCoord - center;
if(fragCoord.x>320.0)// a half width
{
up += 2.0*vec2(up.x,-up.y);
targetAngle *= 2.;
}
else
{
up -= 2.0*vec2(up.x,-up.y);
targetAngle -= 1.57;
}
float angleBetween = angleBetween(up, v2);
return (angleBetween >= 0.0) && (angleBetween <= targetAngle);
}

Using a texture for data

I asked this question before about how to pass a data array to a fragment shader for coloring a terrain, and it was suggested I could use a texture's RGBA values.
I'm now stuck trying to work out how I would also use the yzw values. This is my fragment shader code:
vec4 data = texture2D(texture, vec2(verpos.x / 32.0, verpos.z / 32.0));
float blockID = data.x;
vec4 color;
if (blockID == 1.0) {
color = vec4(0.28, 0.52, 0.05, 1.0);
}
else if (blockID == 2.0) {
color = vec4(0.25, 0.46, 0.05, 1.0);
}
else if (blockID == 3.0) {
color = vec4(0.27, 0.49, 0.05, 1.0);
}
gl_FragColor = color;
This works fine, however as you can see it's only using the value from the x component. If it also used the y, z and w components, the texture size could be reduced to 16x16 instead of 32x32 (four times smaller).
The aim of this is to create a voxel-type terrain, where each 'block' is 1x1 in space coordinates and is colored based on the blockID. Looks like this:
Outside of GLSL this would be simple, however with no ability to store which blocks have been computed I'm finding this difficult. No doubt, I'm over thinking things and it can be done with some simple math.
EDIT:
Code based on Wagner Patriota's answer:
vec2 pixel_of_target = vec2( verpos.xz * 32.0 - 0.5 ); // Assuming verpos.xz == uv_of_target ?
// For some reason mod() doesn't support integers so I have to convert it using int()
int X = int(mod(pixel_of_target.y, 2.0) * 2.0 + mod(pixel_of_target.x, 2.0));
// Gives the error "Index expression must be constant"
float blockID = data[ X ];
About the error, I asked a question about that before which actually led to me asking this one. :P
Any ideas? Thanks! :)
The idea is to replace:
float blockID = data.x;
By
float blockID = data[ X ];
where X is an integer that allows you to pick the R, G, B or A channel from your 16x16 data image.
The thing is how to calculate X as a function of your UV.
Ok, you have a target image (32x32) and the data image (16x16). So let's do:
ivec2 pixel_of_target = ivec2( uv_of_target * 32.0 - vec2( 0.5 ) ); // a trick!
Multiplying your UV by the texture dimensions (32 in this case), you find the exact pixel. The -0.5 is necessary because you are trying "to find a pixel from a texture". And of course the texture has interpolated values between the "centers of the pixels". You need the exact center of the pixel...
Your pixel_of_target is an ivec2 (integers) and you can identify exactly where you are drawing! So the thing now is to identify (based on the pixel you are drawing) which channel you should get from the 16x16 texture.
int X = ( pixel_of_target.y % 2 ) * 2 + pixel_of_target.x % 2;
float blockID = data[ X ]; // party on!
The expression above allows you to pick the correct index X based on the target pixel! On your 16x16 "data texture", this maps (R,G,B,A) to the (top-left, top-right, bottom-left, bottom-right) of every group of 4 pixels on your target (or maybe upside-down if you prefer... you can figure it out).
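A quick worked example of that mapping: for target pixel (5, 7), X = (7 % 2) * 2 + (5 % 2) = 3, so the value comes from the A channel of the data texel at (5 / 2, 7 / 2) = (2, 3) in the 16x16 texture (integer division).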
UPDATE:
Because you are using WebGL, some details should be changed. I did this and it worked.
vec2 pixel_of_target = vTextureCoord * 32.0 + vec2( 0.5 ); // the signal changed!
int _x = int( pixel_of_target.x );
int _y = int( pixel_of_target.y );
int X = mod( _y, 2 ) * 2 + mod( _x, 2 );
I used this for my test:
if ( X == 0 )
gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
else if ( X == 1 )
gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
else if ( X == 2 )
gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
else if ( X == 3 )
gl_FragColor = vec4( 1.0, 0.0, 1.0, 1.0 );
My image worked perfectly fine:
Here I zoomed in with Photoshop to see the details of the pixels.
PS1: Because I am not familiar with WebGL, I could not run WebGL in Chrome; I tried with Firefox, and I didn't find the mod() function for integers either... So I did:
int mod( int a, int b )
{
return a - int( floor( float( a ) / float( b ) ) * float( b ) );
}
PS2: I don't know why I had to add vec2( 0.5 ) instead of subtracting it. WebGL is a little bit different; it probably has this shift. I don't know... It just works.
