Why am I seeing these blending artifacts in my GLSL shader?

I'm attempting to create a shader that additively blends colored "blobs" (kind of like particles) on top of one another. This seems like it should be a straightforward task but I'm getting strange "banding"-like artifacts when the blobs blend.
First off, here's the behavior I'm after (replicated using Photoshop layers):
Note that the three color layers are all set to blendmode "Linear Dodge (Add)" which as far as I understand is Photoshop's "additive" blend mode.
If I merge the color layers and leave the resulting layer set to "Normal" blending, I'm then free to change the background color as I please.
Obviously additive blending will not work on top of a non-black background, so in the end I will also want/need the shader to support this pre-merging of colors before finally blending into a background that could have any color. However, I'm content for now to only focus on getting the additive-on-top-of-black blending working correctly, because it's not.
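For reference, the pre-merge-then-composite step described above can be sketched on the CPU (Python here rather than GLSL, and the helper names are my own): blobs are summed additively while a coverage alpha accumulates, and the merged result is treated as a premultiplied layer composited over an arbitrary background with `out = src + (1 - alpha) * bg`.

```python
# CPU sketch of the compositing math (not the shader itself): blobs are
# summed additively with an accumulated coverage alpha, then the merged
# layer is composited over an arbitrary background using the
# premultiplied-alpha "over" operator: out = src + (1 - alpha) * bg.

def merge_additive(blobs):
    """blobs: list of (r, g, b, coverage) tuples. Returns premultiplied RGBA."""
    r = sum(blob[0] * blob[3] for blob in blobs)
    g = sum(blob[1] * blob[3] for blob in blobs)
    b = sum(blob[2] * blob[3] for blob in blobs)
    a = min(sum(blob[3] for blob in blobs), 1.0)  # clamp coverage to [0, 1]
    return (r, g, b, a)

def over(src_premul, bg):
    """Composite a premultiplied RGBA layer over an opaque RGB background."""
    r, g, b, a = src_premul
    return tuple(c + (1.0 - a) * k for c, k in zip((r, g, b), bg))

# A red and a green blob at half coverage over a grey background:
merged = merge_additive([(1, 0, 0, 0.5), (0, 1, 0, 0.5)])
print(over(merged, (0.25, 0.25, 0.25)))  # coverage sums to 1: background ignored
```

With a black background the result is identical to plain additive blending, which is why the black-only case is a reasonable first milestone.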
Here's my shader code in its current state.
const int MAX_SHAPES = 10;

vec2 spread = vec2(0.3, 0.3);
vec2 offset = vec2(0.0, 0.0);
float shapeSize = 0.3;

const float s = 1.0;
float shapeColors[MAX_SHAPES * 3] = float[MAX_SHAPES * 3] (
    s, 0.0, 0.0,
    0.0, s, 0.0,
    0.0, 0.0, s,
    s, 0.0, 0.0,
    s, 0.0, 0.0,
    s, 0.0, 0.0,
    s, 0.0, 0.0,
    s, 0.0, 0.0,
    s, 0.0, 0.0,
    s, 0.0, 0.0
);
vec2 motionFunction (float i) {
    float t = iTime;
    return vec2(
        (cos(t * 0.31 + i * 3.0) + cos(t * 0.11 + i * 14.0) + cos(t * 0.78 + i * 30.0) + cos(t * 0.55 + i * 10.0)) / 4.0,
        (cos(t * 0.13 + i * 33.0) + cos(t * 0.66 + i * 38.0) + cos(t * 0.42 + i * 83.0) + cos(t * 0.9 + i * 29.0)) / 4.0
    );
}

float blend (float src, float dst, float alpha) {
    return alpha * src + (1.0 - alpha) * dst;
}
void mainImage (out vec4 fragColor, in vec2 fragCoord) {
    float aspect = iResolution.x / iResolution.y;
    float x = (fragCoord.x / iResolution.x) - 0.5;
    float y = (fragCoord.y / iResolution.y) - 0.5;
    vec2 pixel = vec2(x, y / aspect);

    vec4 totalColor = vec4(0.0, 0.0, 0.0, 0.0);

    for (int i = 0; i < MAX_SHAPES; i++) {
        if (i >= 3) {
            break;
        }

        vec2 shapeCenter = motionFunction(float(i));
        shapeCenter *= spread;
        shapeCenter += offset;

        float dx = shapeCenter.x - pixel.x;
        float dy = shapeCenter.y - pixel.y;
        float d = sqrt(dx * dx + dy * dy);
        float ratio = d / shapeSize;
        float intensity = 1.0 - clamp(ratio, 0.0, 1.0);

        totalColor.x = totalColor.x + shapeColors[i * 3 + 0] * intensity;
        totalColor.y = totalColor.y + shapeColors[i * 3 + 1] * intensity;
        totalColor.z = totalColor.z + shapeColors[i * 3 + 2] * intensity;
        totalColor.w = totalColor.w + intensity;
    }

    float alpha = clamp(totalColor.w, 0.0, 1.0);
    float background = 0.0;

    fragColor = vec4(
        blend(totalColor.x, background, alpha),
        blend(totalColor.y, background, alpha),
        blend(totalColor.z, background, alpha),
        1.0
    );
}
And here's a ShaderToy version where you can view it live — https://www.shadertoy.com/view/wlf3RM
Or as a video — https://streamable.com/un25t
The visual artifacts should be pretty obvious, but here's a video that points them out: https://streamable.com/kxaps
(I think they are way more prevalent in the video linked before this one, though. The motion really makes them pop out.)
Also as a static image for comparison:
Basically, there are "edges" that appear at certain magical thresholds. I have no idea how they got there or how to get rid of them. Your help would be highly appreciated.

The inside lines are where totalColor.w reaches 1 and so alpha is clamped to 1 inside them. The outside ones that you've traced in white are the edges of the circles.
I modified your ShaderToy link by changing float alpha = clamp(totalColor.w, 0.0, 1.0); to float alpha = 1.0; and float intensity = 1.0 - clamp(ratio, 0.0, 1.0); to float intensity = smoothstep(1.0, 0.0, ratio); (to smooth out the edges of the circles) and now it looks like the first picture.
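To see why the smoothstep substitution removes the outer rings, compare the two falloff curves on the CPU (a Python sketch of the same math, with GLSL's smoothstep reimplemented by hand):

```python
# CPU reimplementation of the two falloff curves from the answer. The
# linear ramp 1 - clamp(d/size, 0, 1) has a slope discontinuity at the
# circle's edge (d/size == 1), which the eye picks up as a ring;
# smoothstep(1, 0, d/size) has zero slope at both ends, so the blob
# fades out with no visible crease.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def linear_falloff(ratio):
    return 1.0 - clamp(ratio, 0.0, 1.0)

def smoothstep(edge0, edge1, x):
    # GLSL smoothstep: cubic Hermite interpolation between the two edges.
    t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

# Near the edge the linear ramp still has slope -1, while smoothstep
# flattens out:
eps = 1e-3
print(linear_falloff(1.0 - eps))        # ~1e-3: approaches 0 linearly
print(smoothstep(1.0, 0.0, 1.0 - eps))  # ~3e-6: approaches 0 quadratically
```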

Related

Threejs: Sphere geometry not being shaded properly GLSL

I'm having a problem with a GLSL shader that interpolates color in 3D space, and assigns it based on the 3D coordinates of the bounding box and I can't seem to fix it:
The stamen in this codepen: https://codepen.io/ricky1280/pen/BaxyaZY
this is the code that I feel like probably has the problem, the geometry of the sphere:
const stamenEndCap = new THREE.SphereGeometry( sinCurveScale/120, 20, 20 );
// stamenEndCap.scale(1,1.5,1)
stamenEndCap.scale(4,1,1) //find a way to rotate geometry relative to the sin curve at the end
stamenEndCap.toNonIndexed();
stamenEndCap.computeBoundingSphere();
stamenEndCap.computeBoundingBox();
stamenEndCap.normalizeNormals();
stamenEndCap.computeTangents();
console.log(stamenEndCap.attributes.position.array)
for (var i = 0; i < stamenEndCap.attributes.position.array.length; i = i + 3){
    stamenEndCap.attributes.position.array[i] = stamenEndCap.attributes.position.array[i] + centerEnd.x //offset
    stamenEndCap.attributes.position.array[i+1] = stamenEndCap.attributes.position.array[i+1] + centerEnd.y
    stamenEndCap.attributes.position.array[i+2] = stamenEndCap.attributes.position.array[i+2] + centerEnd.z //height?
}
stamenEndCap.computeVertexNormals();
// let positionVector = new THREE.Vector3(spherePoint.x,spherePoint.y,spherePoint.z)
// console.log(positionVector)
stamenEndCap.attributes.position.needsUpdate = true;
console.log(stamenEndCap.attributes.position.array)
let merge = THREE.BufferGeometryUtils.mergeBufferGeometries([geometry2,stamenEndCap])
merge.attributes.position.needsUpdate = true;
It is shaded improperly, it looks like this:
The color harshly changes from white to that light blue color on the vertical axis, even though the stamen end cap (line 364 of the codepen) is merged with the tube geometry and the shader is calculated across the 3D space of the entire merged object. The geometry becomes "merge" on line 394, and then "stamenGeom" on line 400. Its bounding box is then used in the vertex and fragment shaders on lines 422-552.
I'm not sure how to shade this properly so that it transitions smoothly, without the line denoting the change in color from white-blue. It doesn't seem to respond to normals, unfortunately.
Viewing the stamen from plan (top-down?) shows that the color is transitioning properly, but viewed from the side it appears as the image.
If anyone has any advice or solutions please let me know, and thank you for reading all of this.
Figured it out: the colors weren't being blended properly in the shader code.
previous fragment shader code:
`vec4 diffuseColor = vec4( diffuse, opacity );`,
`
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
vec4 blue = vec4(0.0, 0.0, 1.0, 1.0);
vec4 green = vec4(0.0, 1.0, 0.0, 1.0);
float f = clamp((vPos.z - bbMin.z) / (bbMax.z - bbMin.z)+vertOffset, 0., 1.);
// + is slider for vertical color position, -1 to 1
float linear_modifier = (1.00 * abs(1.) * f);
//vertical gradient position!!
//moves from 0-10?
vec3 col = mix(color1, color2, linear_modifier);
//float f2 = clamp((vPos.x - bbMin.x) / (bbMax.x - bbMin.x), 0., 1.);
float f2 = clamp(vUv.x, 0., 1.);
vec2 pos_ndc = vPos.xy*centerSize2;
float dist = length(pos_ndc*centerSize);
//controls central gradient position!
//the lower the larger?
//0-20
// float linear_modifier2 = (1.00 * abs(sin(1.0)) * dist);
//col = mix(color3, col, dist);
//NOT USING DIST REMOVES VERTICAL CENTRAL GRADIENT
// vec4 diffuseColor = vec4( col, opacity );
float f3 = clamp(vUv.x+f3Offset, 0., 1.);
// ^ THIS controls brightness of lowlights. lower the more intense.
col = mix(color3, col, f3);
//not using this removes LOWLIGHTS
//f3 is subtle fade
//col = mix(color3, col, f3);
//col = mix(color3, col, f2);
//f2 is default
vec4 diffuseColor = vec4( col, opacity );`
fixed shader code:
`vec4 diffuseColor = vec4( diffuse, opacity );`,
`
vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
vec4 blue = vec4(0.0, 0.0, 1.0, 1.0);
vec4 green = vec4(0.0, 1.0, 0.0, 1.0);
float f = clamp((vPos.z - bbMin.z) / (bbMax.z - bbMin.z)+vertOffset, 0., 1.);
// + is slider for vertical color position, -1 to 1
float linear_modifier = (1.00 * abs(1.) * f);
//vertical gradient position!!
//moves from 0-10?
vec3 col = mix(color1, color2, linear_modifier);
float f2 = clamp((vPos.x - bbMin.x) / (bbMax.x - bbMin.x), 0., 1.);
//float f2 = clamp(vUv.x, 0., 1.);
vec2 pos_ndc = vPos.xy*centerSize2;
float dist = length(pos_ndc*centerSize);
//controls central gradient position!
//the lower the larger?
//0-20
// float linear_modifier2 = (1.00 * abs(sin(1.0)) * dist);
//col = mix(color3, col, dist);
//NOT USING DIST REMOVES VERTICAL CENTRAL GRADIENT
// vec4 diffuseColor = vec4( col, opacity );
float f3 = clamp(vUv.x+f3Offset, 0., 1.);
// ^ THIS controls brightness of lowlights. lower the more intense.
//col = mix(color3, col, f3);
//not using this removes LOWLIGHTS
//f3 is subtle fade
//col = mix(color3, col, f3);
//col = mix(color3, col, f2);
//f2 is default
vec4 diffuseColor = vec4( col, opacity );
`

GLSL sparking vertex shader

I am trying to tweak this ShaderToy example for vertices to create 'sparks'
out of them. I have tried to play with gl_PointCoord and gl_FragCoord without any results. Maybe someone here could help me?
I need effect similar to this animated gif:
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;

#define M_PI 3.1415926535897932384626433832795

float rand(vec2 co)
{
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main( ) {
    float size = 30.0;
    float prob = 0.95;
    vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + sin(time + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(gl_FragCoord.xy, center) / (0.5 * size);
        color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
    }
    else if (rand(gl_FragCoord.xy / resolution.xy) > 0.996)
    {
        float r = rand(gl_FragCoord.xy);
        color = r * ( 0.25 * sin(time * (r * 5.0) + 720.0 * r) + 0.75);
    }
    gl_FragColor = vec4(vec3(color), 1.0);
}
As I understand it, I have to play with vec2 pos, setting it to a vertex position.
You don't need to play with pos. Since the vertex shader runs once per vertex, there is no way to process pixel values there. You can, however, do per-pixel processing in the fragment shader using gl_PointCoord.
I can think of only two ways of changing the scale of a texture:
1. gl_PointSize in the vertex shader (OpenGL ES)
2. In the fragment shader, changing the texture UV value, for example:
vec4 color = texture(texture0, ((gl_PointCoord - 0.5) * factor) + vec2(0.5));
If you don't want to use a texture at all and only do per-pixel processing in the fragment shader, you can still set the UV to ((gl_PointCoord - 0.5) * factor) + vec2(0.5) instead of the uv that Shadertoy normally computes as fragCoord.xy / iResolution.xy.
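As a plain-numbers sketch of that UV remap (Python; the function name is mine): scaling gl_PointCoord about the sprite center (0.5, 0.5) keeps the center fixed and pulls or pushes everything else.

```python
# CPU sketch of the UV remap suggested above: scaling gl_PointCoord
# about the sprite's center (0.5, 0.5). factor > 1 samples a wider
# region (texture shrinks inside the sprite), factor < 1 zooms in.

def remap_uv(u, v, factor):
    return ((u - 0.5) * factor + 0.5, (v - 0.5) * factor + 0.5)

print(remap_uv(0.5, 0.5, 2.0))  # center is a fixed point: (0.5, 0.5)
print(remap_uv(1.0, 1.0, 0.5))  # corner pulled toward the center: (0.75, 0.75)
```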

cocos2dx shader rotate a shape in fragment shader

This problem is cocos2d-x related since I am using cocos2d-x as the game engine, but I think it can be solved with basic OpenGL shader knowledge.
Part 1:
- I have a canvas size of 800 * 600
- I try to draw a simple colored square in size of 96 * 96 which is placed in the middle of the canvas
It is quite simple, the draw part code :
var boundingBox = this.getBoundingBox();
var squareVertexPositionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
var vertices = [
    boundingBox.width, boundingBox.height,
    0, boundingBox.height,
    boundingBox.width, 0,
    0, 0
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.enableVertexAttribArray(cc.VERTEX_ATTRIB_POSITION);
gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
gl.vertexAttribPointer(cc.VERTEX_ATTRIB_POSITION, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
And the vert shader:
attribute vec4 a_position;

void main()
{
    gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
}
And the frag shader:
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 center;
uniform vec2 resolution;
uniform float rotation;
void main()
{
    vec4 RED = vec4(1.0, 0.0, 0.0, 1.0);
    vec4 GREEN = vec4(0.0, 1.0, 0.0, 1.0);
    gl_FragColor = GREEN;
}
And everything works fine :
The grid line is size of 32 * 32, and the black dot indicates the center of the canvas.
Part 2:
- I try to separate the square into half (vertically)
- The left part is green and the right part is red
I changed the frag shader to get it done :
void main()
{
    vec4 RED = vec4(1.0, 0.0, 0.0, 1.0);
    vec4 GREEN = vec4(0.0, 1.0, 0.0, 1.0);
    /*
        x => [0, 1]
        y => [0, 1]
    */
    vec2 UV = (gl_FragCoord.xy - center.xy + resolution.xy / 2.0) / resolution.xy;
    /*
        x => [-1, 1]
        y => [-1, 1]
    */
    vec2 POS = -1.0 + 2.0 * UV;
    if (POS.x <= 0.0) {
        gl_FragColor = GREEN;
    }
    else {
        gl_FragColor = RED;
    }
}
The uniform 'center' is the position of the square so it is 400, 300 in this case.
The uniform 'resolution' is the content size of the square so the value is 96, 96.
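Those two mappings can be checked numerically (a Python sketch using the question's values; function names are mine):

```python
# CPU version of the two coordinate mappings in the Part 2 shader,
# using the question's values: a 96x96 node centered at (400, 300).

CENTER = (400.0, 300.0)
RESOLUTION = (96.0, 96.0)

def to_uv(frag_x, frag_y):
    # maps the square's pixels to [0, 1] on each axis
    return ((frag_x - CENTER[0] + RESOLUTION[0] / 2.0) / RESOLUTION[0],
            (frag_y - CENTER[1] + RESOLUTION[1] / 2.0) / RESOLUTION[1])

def to_pos(uv):
    # maps [0, 1] to [-1, 1] on each axis
    return (-1.0 + 2.0 * uv[0], -1.0 + 2.0 * uv[1])

print(to_pos(to_uv(400.0, 300.0)))  # center of the square: (0.0, 0.0)
print(to_pos(to_uv(352.0, 252.0)))  # bottom-left corner: (-1.0, -1.0)
```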
The result is fine :
Part 3:
. I try to change the rotation in cocos2dx style
myShaderNode.setRotation(45);
And the square is rotated but the content is not :
So I tried to rotate the content according to the rotation angle of the node.
I changed the frag shader again:
void main()
{
    vec4 RED = vec4(1.0, 0.0, 0.0, 1.0);
    vec4 GREEN = vec4(0.0, 1.0, 0.0, 1.0);

    vec2 rotatedFragCoord = gl_FragCoord.xy - center.xy;
    float cosa = cos(rotation);
    float sina = sin(rotation);
    float t = rotatedFragCoord.x;
    rotatedFragCoord.x = t * cosa - rotatedFragCoord.y * sina + center.x;
    rotatedFragCoord.y = t * sina + rotatedFragCoord.y * cosa + center.y;
    /*
        x => [0, 1]
        y => [0, 1]
    */
    vec2 UV = (rotatedFragCoord.xy - center.xy + resolution.xy / 2.0) / resolution.xy;
    /*
        x => [-1, 1]
        y => [-1, 1]
    */
    vec2 POS = -1.0 + 2.0 * UV;
    if (POS.x <= 0.0) {
        gl_FragColor = GREEN;
    }
    else {
        gl_FragColor = RED;
    }
}
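Two things are worth double-checking here, sketched on the CPU below (Python; this is a sanity check of the rotation math, not a drop-in fix): cocos2d-x's setRotation() takes degrees while GLSL's sin()/cos() take radians, and to keep the content upright inside a rotated node you generally sample with the inverse of the node's rotation angle.

```python
import math

# CPU sketch of rotating a fragment coordinate about the node's center.
# Note the degrees-to-radians conversion, which the shader above skips
# when it feeds the raw cocos2d-x angle (45) into sin()/cos().

def rotate_about(p, center, degrees):
    a = math.radians(degrees)
    x, y = p[0] - center[0], p[1] - center[1]
    return (x * math.cos(a) - y * math.sin(a) + center[0],
            x * math.sin(a) + y * math.cos(a) + center[1])

# Rotating (448, 300) by 90 degrees about (400, 300) lands on
# approximately (400, 348):
print(rotate_about((448.0, 300.0), (400.0, 300.0), 90.0))
# Applying the inverse angle brings it back to approximately (448, 300):
print(rotate_about((400.0, 348.0), (400.0, 300.0), -90.0))
```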
The uniform rotation is the angle the node rotated so in this case it is 45.
The result is close to what I want but still not right:
I tried hard but just cannot figure out what is wrong in my code, and whether there is an easier way to get this done.
I am quite new to shader programming and any advice will be appreciated, thanks :)

edge detection on depth buffer [cel shading]

I am currently writing a cel shading shader, but I'm having issues with edge detection. I am currently using the following code utilizing laplacian edge detection on non-linear depth buffer values:
uniform sampler2D depth_tex;

void main() {
    vec4 color_out;
    float znear = 1.0;
    float zfar = 50000.0;

    float depthm = texture2D(depth_tex, gl_TexCoord[0].xy).r;
    float lineAmp = mix( 0.001, 0.0, clamp( (500.0 / (zfar + znear - ( 2.0 * depthm - 1.0 ) * (zfar - znear) )/2.0), 0.0, 1.0 ) ); // make the lines thicker at close range
    float depthn = texture2D(depth_tex, gl_TexCoord[0].xy + vec2( (0.002 + lineAmp)*0.625 , 0.0) ).r;
    depthn = depthn / depthm;
    float depths = texture2D(depth_tex, gl_TexCoord[0].xy - vec2( (0.002 + lineAmp)*0.625 , 0.0) ).r;
    depths = depths / depthm;
    float depthw = texture2D(depth_tex, gl_TexCoord[0].xy + vec2(0.0 , 0.002 + lineAmp) ).r;
    depthw = depthw / depthm;
    float depthe = texture2D(depth_tex, gl_TexCoord[0].xy - vec2(0.0 , 0.002 + lineAmp) ).r;
    depthe = depthe / depthm;

    float Contour = -4.0 + depthn + depths + depthw + depthe;

    float lineAmp2 = 100.0 * clamp( depthm - 0.99, 0.0, 1.0);
    lineAmp2 = lineAmp2 * lineAmp2;
    Contour = (512.0 + lineAmp2 * 204800.0 ) * Contour;

    if (Contour > 0.15) {
        Contour = (0.15 - Contour) / 1.5 + 0.5;
    } else {
        Contour = 1.0;
    }

    color_out.rgb = color_out.rgb * Contour;
    color_out.a = 1.0;
    gl_FragColor = color_out;
}
but it is hackish (note lineAmp2), and details at large distances are lost. So I made up another algorithm:
(Note that Laplacian edge detection is in use.)
1. Get 5 samples from the depth buffer: depthm, depthn, depths, depthw, depthe, where depthm is exactly where the processed fragment is, depthn is slightly to the top, depths is slightly to the bottom, etc.
2. Calculate their real coordinates in camera space (and convert them to linear depth).
3. Compare the side samples to the middle sample by subtracting, normalize each difference by dividing by the distance between the two camera-space points, and add all four results. In theory this should help in the situation where, at large distances from the camera, two fragments are very close on the screen but very far apart in camera space, which is fatal for linear depth testing.
where:
2.a convert the non-linear depth to linear using the algorithm from http://stackoverflow.com/questions/6652253/getting-the-true-z-value-from-the-depth-buffer
exact code:
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x;
    float z_n = 2.0 * z_b - 1.0;
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
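That linearization can be sanity-checked on the CPU (a Python sketch of the same formula, with names of my choosing):

```python
# CPU check of the linearization formula quoted above: z_b is the raw
# [0, 1] depth-buffer value, z_e the resulting eye-space distance.

def linearize(z_b, z_near, z_far):
    z_n = 2.0 * z_b - 1.0  # back to NDC [-1, 1]
    return 2.0 * z_near * z_far / (z_far + z_near - z_n * (z_far - z_near))

# Sanity checks: the ends of the depth range map back to the planes,
# and halfway through the buffer is still very close to the camera,
# which is why distant edges lose precision.
print(linearize(0.0, 1.0, 50000.0))  # 1.0 (near plane)
print(linearize(1.0, 1.0, 50000.0))  # 50000.0 (far plane)
print(linearize(0.5, 1.0, 50000.0))  # ~2.0: half the buffer covers only ~2 units
```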
2.b convert the screen coordinates to [tan a, tan b], where a is the horizontal angle and b is the vertical one. There is probably better terminology involving spherical coordinates, but I don't know it yet.
2.c create a 3d vector ( converted screen coordinates, 1.0 ) and scale it by linear depth. I assume this is estimated camera space coordinates of the fragment. It looks like it.
3.a each difference is as follows: (depthm - sidedepth) / length(positionm - sideposition)
And I may have messed up something at any point. Code looks fine, but the algorithm may not be, as I made it up myself.
My code:
uniform sampler2D depth_tex;

// note: 'distort' is used below but not declared in this snippet

void main() {
    vec4 color_out;
    float znear = 1.0;
    float zfar = 10000000000.0;

    float depthm = texture2D(depth_tex, gl_TexCoord[0].xy + distort ).r;
    depthm = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthm - 1.0 ) * (zfar - znear) ); // convert to linear
    vec2 scorm = (gl_TexCoord[0].xy + distort) - 0.5; // conversion to the desired coordinate space; returns values in (-0.5, 0.5)
    scorm = scorm * 2.0 * 0.5; // normalize to (-1, 1) and multiply by tan(FOV/2); the default FOV is IIRC 60 degrees
    scorm.x = scorm.x * 1.6; // 1.6 is the aspect ratio 16/10
    vec3 posm = vec3( scorm, 1.0 );
    posm = posm * depthm; // scale by linearized depth

    float depthn = texture2D(depth_tex, gl_TexCoord[0].xy + distort + vec2( 0.002*0.625 , 0.0) ).r; // 0.625 is the aspect ratio 10/16
    depthn = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthn - 1.0 ) * (zfar - znear) );
    vec2 scorn = (gl_TexCoord[0].xy + distort + vec2( 0.002*0.625, 0.0) ) - 0.5;
    scorn = scorn * 2.0 * 0.5;
    scorn.x = scorn.x * 1.6;
    vec3 posn = vec3( scorn, 1.0 );
    posn = posn * depthn;

    float depths = texture2D(depth_tex, gl_TexCoord[0].xy + distort - vec2( 0.002*0.625 , 0.0) ).r;
    depths = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depths - 1.0 ) * (zfar - znear) );
    vec2 scors = (gl_TexCoord[0].xy + distort - vec2( 0.002*0.625, 0.0) ) - 0.5;
    scors = scors * 2.0 * 0.5;
    scors.x = scors.x * 1.6;
    vec3 poss = vec3( scors, 1.0 );
    poss = poss * depths;

    float depthw = texture2D(depth_tex, gl_TexCoord[0].xy + distort + vec2(0.0 , 0.002) ).r;
    depthw = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthw - 1.0 ) * (zfar - znear) );
    vec2 scorw = ( gl_TexCoord[0].xy + distort + vec2( 0.0 , 0.002) ) - 0.5;
    scorw = scorw * 2.0 * 0.5;
    scorw.x = scorw.x * 1.6;
    vec3 posw = vec3( scorw, 1.0 );
    posw = posw * depthw;

    float depthe = texture2D(depth_tex, gl_TexCoord[0].xy + distort - vec2(0.0 , 0.002) ).r;
    depthe = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthe - 1.0 ) * (zfar - znear) );
    vec2 score = ( gl_TexCoord[0].xy + distort - vec2( 0.0 , 0.002) ) - 0.5;
    score = score * 2.0 * 0.5;
    score.x = score.x * 1.6;
    vec3 pose = vec3( score, 1.0 );
    pose = pose * depthe;

    float Contour = ( depthn - depthm )/length(posm - posn) + ( depths - depthm )/length(posm - poss) + ( depthw - depthm )/length(posm - posw) + ( depthe - depthm )/length(posm - pose);
    Contour = 0.25 * Contour;

    color_out.rgb = vec3( Contour, Contour, Contour );
    color_out.a = 1.0;
    gl_FragColor = color_out;
}
The exact issue with the second code is that it exhibits some awful artifacts at larger distances.
My goal is to make either of them work properly. Are there any tricks I could use to improve precision/quality in both linearized and non-linearized depth buffer? Is anything wrong with my algorithm for linearized depth buffer?

Using a texture for data

I asked this question before about how to pass a data array to a fragment shader for coloring a terrain, and it was suggested I could use a texture's RGBA values.
I'm now stuck trying to work out how I would also use the yzw values. This is my fragment shader code:
vec4 data = texture2D(texture, vec2(verpos.x / 32.0, verpos.z / 32.0));
float blockID = data.x;

vec4 color;
if (blockID == 1.0) {
    color = vec4(0.28, 0.52, 0.05, 1.0);
}
else if (blockID == 2.0) {
    color = vec4(0.25, 0.46, 0.05, 1.0);
}
else if (blockID == 3.0) {
    color = vec4(0.27, 0.49, 0.05, 1.0);
}

gl_FragColor = color;
This works fine, however as you can see it's only using the float from the x-coordinate. If it was also using the yzw coordinates the texture size could be reduced to 16x16 instead of 32x32 (four times smaller).
The aim of this is to create a voxel-type terrain, where each 'block' is 1x1 in space coordinates and is colored based on the blockID. Looks like this:
Outside of GLSL this would be simple, however with no ability to store which blocks have been computed I'm finding this difficult. No doubt I'm overthinking things and it can be done with some simple math.
EDIT:
Code based on Wagner Patriota's answer:
vec2 pixel_of_target = vec2( verpos.xz * 32.0 - 0.5 ); // Assuming verpos.xz == uv_of_target ?
// For some reason mod() doesn't support integers so I have to convert it using int()
int X = int(mod(pixel_of_target.y, 2.0) * 2.0 + mod(pixel_of_target.x, 2.0));
// Gives the error "Index expression must be constant"
float blockID = data[ X ];
About the error, I asked a question about that before which actually led to me asking this one. :P
Any ideas? Thanks! :)
The idea is to replace:
float blockID = data.x;
By
float blockID = data[ X ];
Where X is an integer that allows you to pick R, G, B or A from your 16x16 data image.
The thing is how to calculate X as a function of your UV.
Ok, you have a target image (32x32) and the data image (16x16). So let's do:
ivec2 pixel_of_target = ivec2( uv_of_target * 32.0 - vec2( 0.5 ) ); // a trick!
Multiplying your UV by the texture dimensions (32 in this case), you find the exact pixel. The -0.5 is necessary because you are trying "to find a pixel from a texture". And of course the texture has interpolated values between the "centers of the pixels". You need the exact center of the pixel...
Your pixel_of_target is an ivec (integers) and you can identify exactly where you are drawing! So the thing now is to identify (based on the pixel you are drawing) which channel you should get from the 16x16 texture.
int X = ( pixel_of_target.y % 2 ) * 2 + pixel_of_target.x % 2;
float blockID = data[ X ]; // party on!
This expression above allows you to pick up the correct index X based on the target pixel! On your "data texture" 16x16 map your (R,G,B,A) to (top-left, top-right, bottom-left, bottom-right) of every group of 4 pixels on your target (or maybe upside-down if you prefer... you can figure it out)
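The index arithmetic can be verified on the CPU (a Python sketch; the function name is mine):

```python
# CPU sketch of the channel-picking scheme above: each 2x2 block of
# target pixels maps to one data texel, and the position inside the
# block selects R, G, B or A (index 0..3).

def channel_index(px, py):
    return (py % 2) * 2 + (px % 2)

# One 2x2 block: (0,0)->R, (1,0)->G, (0,1)->B, (1,1)->A
for py in range(2):
    for px in range(2):
        print((px, py), "RGBA"[channel_index(px, py)])
```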
UPDATE:
Because you are using WebGL, some details should be changed. I did this and it worked.
vec2 pixel_of_target = vTextureCoord * 32.0 + vec2( 0.5 ); // the sign changed!
int _x = int( pixel_of_target.x );
int _y = int( pixel_of_target.y );
int X = mod( _y, 2 ) * 2 + mod( _x, 2 );
I used this for my test:
if ( X == 0 )
    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
else if ( X == 1 )
    gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
else if ( X == 2 )
    gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
else if ( X == 3 )
    gl_FragColor = vec4( 1.0, 0.0, 1.0, 1.0 );
My image worked perfectly fine:
Here I zoomed in with Photoshop to see the details of the pixels.
PS1: Because I am not familiar with WebGL, I could not run WebGL in Chrome; I tried with Firefox, and I didn't find the mod() function there either... So I did:
int mod( int a, int b )
{
    return a - int( floor( float( a ) / float( b ) ) * float( b ) );
}
PS2: I don't know why I had to add vec2( 0.5 ) instead of subtracting it. WebGL is a little bit different; it probably has this shift. I don't know... it just works.
