Using a texture for data - opengl-es

I asked this question before about how to pass a data array to a fragment shader for coloring a terrain, and it was suggested I could use a texture's RGBA values.
I'm now stuck trying to work out how I would also use the yzw values. This is my fragment shader code:
vec4 data = texture2D(texture, vec2(verpos.x / 32.0, verpos.z / 32.0));
float blockID = data.x;
vec4 color = vec4(0.0); // default so color is always initialized
if (blockID == 1.0) {
color = vec4(0.28, 0.52, 0.05, 1.0);
}
else if (blockID == 2.0) {
color = vec4(0.25, 0.46, 0.05, 1.0);
}
else if (blockID == 3.0) {
color = vec4(0.27, 0.49, 0.05, 1.0);
}
gl_FragColor = color;
This works fine; however, as you can see, it's only using the float from the x component. If it also used the y, z and w components, the texture size could be reduced to 16x16 instead of 32x32 (four times fewer texels).
The aim of this is to create a voxel-type terrain, where each 'block' is 1x1 in space coordinates and is colored based on the blockID. Looks like this:
Outside of GLSL this would be simple, but with no way to store which blocks have already been computed I'm finding this difficult. No doubt I'm overthinking things and it can be done with some simple math.
EDIT:
Code based on Wagner Patriota's answer:
vec2 pixel_of_target = vec2( verpos.xz * 32.0 - 0.5 ); // Assuming verpos.xz == uv_of_target ?
// For some reason mod() doesn't support integers so I have to convert it using int()
int X = int(mod(pixel_of_target.y, 2.0) * 2.0 + mod(pixel_of_target.x, 2.0));
// Gives the error "Index expression must be constant"
float blockID = data[ X ];
About the error, I asked a question about that before which actually led to me asking this one. :P
Any ideas? Thanks! :)

The idea is to replace:
float blockID = data.x;
with:
float blockID = data[ X ];
where X is an integer that lets you pick the R, G, B or A channel from your 16x16 data image.
The question is how to calculate X as a function of your UV.
Ok, you have a target image (32x32) and the data image (16x16). So let's do:
ivec2 pixel_of_target = ivec2( uv_of_target * 32.0 - vec2( 0.5 ) ); // a trick!
Multiplying your UV by the texture dimensions (32 in this case) finds the exact pixel. The -0.5 is necessary because you are trying "to find a pixel from a texture", and of course the texture has interpolated values between the centers of the pixels. You need the exact center of the pixel...
Your pixel_of_target is an ivec2 (integers), so you can identify exactly which pixel you are drawing! The thing now is to identify, based on that pixel, which channel you should get from the 16x16 texture.
int X = ( pixel_of_target.y % 2 ) * 2 + pixel_of_target.x % 2;
float blockID = data[ X ]; // party on!
The expression above picks the correct index X based on the target pixel! On your 16x16 "data texture", map (R,G,B,A) to the (top-left, top-right, bottom-left, bottom-right) of every 2x2 group of pixels on your target (or flipped vertically if you prefer... you can figure it out).
UPDATE:
Because you are using WebGL, some details should be changed. I did this and it worked.
vec2 pixel_of_target = vTextureCoord * 32.0 + vec2( 0.5 ); // the sign changed!
int _x = int( pixel_of_target.x );
int _y = int( pixel_of_target.y );
int X = mod( _y, 2 ) * 2 + mod( _x, 2 ); // uses the integer mod() defined in PS1 below
I used this for my test:
if ( X == 0 )
gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
else if ( X == 1 )
gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
else if ( X == 2 )
gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
else if ( X == 3 )
gl_FragColor = vec4( 1.0, 0.0, 1.0, 1.0 );
My image worked perfectly fine:
Here I zoomed in with Photoshop to see the details of the pixels.
PS1: I am not that familiar with WebGL; I could not run it in Chrome, so I tried Firefox. GLSL ES has no integer mod() either, so I wrote one:
int mod( int a, int b )
{
return a - int( floor( float( a ) / float( b ) ) * float( b ) );
}
PS2: I don't know why I had to add vec2( 0.5 ) instead of subtracting it. WebGL is a little bit different; it probably has this shift. It just works.
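Putting the pieces together, a complete WebGL (GLSL ES 1.0) fragment shader for the 16x16 data texture might look like the sketch below. It assumes the question's texture and verpos names, uses the if-chain in place of data[ X ] (GLSL ES 1.0 only allows constant indices into a vector, hence the "Index expression must be constant" error), and renames the integer mod() helper to imod so it doesn't clash with the built-in:
precision mediump float;
uniform sampler2D texture;
varying vec3 verpos;
// Integer modulo, since the built-in mod() is float-only in GLSL ES.
int imod( int a, int b ) {
return a - int( floor( float( a ) / float( b ) ) * float( b ) );
}
void main() {
// Each texel of the 16x16 texture packs a 2x2 block of target pixels into RGBA.
vec2 uv = verpos.xz / 32.0; // the terrain spans 32 units in x and z
vec4 data = texture2D( texture, uv );
// Which of the four packed pixels is this fragment? (+0.5 as in the update above.)
vec2 pixel_of_target = uv * 32.0 + vec2( 0.5 );
int X = imod( int( pixel_of_target.y ), 2 ) * 2 + imod( int( pixel_of_target.x ), 2 );
float blockID;
if ( X == 0 ) blockID = data.x;
else if ( X == 1 ) blockID = data.y;
else if ( X == 2 ) blockID = data.z;
else blockID = data.w;
vec4 color = vec4( 0.0 ); // default so color is always written
if ( blockID == 1.0 ) color = vec4( 0.28, 0.52, 0.05, 1.0 );
else if ( blockID == 2.0 ) color = vec4( 0.25, 0.46, 0.05, 1.0 );
else if ( blockID == 3.0 ) color = vec4( 0.27, 0.49, 0.05, 1.0 );
gl_FragColor = color;
}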

Related

Why am I seeing these blending artifacts in my GLSL shader?

I'm attempting to create a shader that additively blends colored "blobs" (kind of like particles) on top of one another. This seems like it should be a straightforward task but I'm getting strange "banding"-like artifacts when the blobs blend.
First off, here's the behavior I'm after (replicated using Photoshop layers):
Note that the three color layers are all set to blendmode "Linear Dodge (Add)" which as far as I understand is Photoshop's "additive" blend mode.
If I merge the color layers and leave the resulting layer set to "Normal" blending, I'm then free to change the background color as I please.
Obviously additive blending will not work on top of a non-black background, so eventually I will also need the shader to pre-merge the colors before blending them into a background of any color. For now, though, I'm content to focus on getting the additive-on-top-of-black blending working correctly, because it's not.
Here's my shader code in its current state.
const int MAX_SHAPES = 10;
vec2 spread = vec2(0.3, 0.3);
vec2 offset = vec2(0.0, 0.0);
float shapeSize = 0.3;
const float s = 1.0;
float shapeColors[MAX_SHAPES * 3] = float[MAX_SHAPES * 3] (
s, 0.0, 0.0,
0.0, s, 0.0,
0.0, 0.0, s,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0,
s, 0.0, 0.0
);
vec2 motionFunction (float i) {
float t = iTime;
return vec2(
(cos(t * 0.31 + i * 3.0) + cos(t * 0.11 + i * 14.0) + cos(t * 0.78 + i * 30.0) + cos(t * 0.55 + i * 10.0)) / 4.0,
(cos(t * 0.13 + i * 33.0) + cos(t * 0.66 + i * 38.0) + cos(t * 0.42 + i * 83.0) + cos(t * 0.9 + i * 29.0)) / 4.0
);
}
float blend (float src, float dst, float alpha) {
return alpha * src + (1.0 - alpha) * dst;
}
void mainImage (out vec4 fragColor, in vec2 fragCoord) {
float aspect = iResolution.x / iResolution.y;
float x = (fragCoord.x / iResolution.x) - 0.5;
float y = (fragCoord.y / iResolution.y) - 0.5;
vec2 pixel = vec2(x, y / aspect);
vec4 totalColor = vec4(0.0, 0.0, 0.0, 0.0);
for (int i = 0; i < MAX_SHAPES; i++) {
if (i >= 3) {
break;
}
vec2 shapeCenter = motionFunction(float(i));
shapeCenter *= spread;
shapeCenter += offset;
float dx = shapeCenter.x - pixel.x;
float dy = shapeCenter.y - pixel.y;
float d = sqrt(dx * dx + dy * dy);
float ratio = d / shapeSize;
float intensity = 1.0 - clamp(ratio, 0.0, 1.0);
totalColor.x = totalColor.x + shapeColors[i * 3 + 0] * intensity;
totalColor.y = totalColor.y + shapeColors[i * 3 + 1] * intensity;
totalColor.z = totalColor.z + shapeColors[i * 3 + 2] * intensity;
totalColor.w = totalColor.w + intensity;
}
float alpha = clamp(totalColor.w, 0.0, 1.0);
float background = 0.0;
fragColor = vec4(
blend(totalColor.x, background, alpha),
blend(totalColor.y, background, alpha),
blend(totalColor.z, background, alpha),
1.0
);
}
And here's a ShaderToy version where you can view it live — https://www.shadertoy.com/view/wlf3RM
Or as a video — https://streamable.com/un25t
The visual artifacts should be pretty obvious, but here's a video that points them out: https://streamable.com/kxaps
(I think they are way more prevalent in the video linked before this one, though. The motion really makes them pop out.)
Also as a static image for comparison:
Basically, there are "edges" that appear on certain magical thresholds. I have no idea how they got there or how to get rid of them. Your help would be highly appreciated.
The inside lines are where totalColor.w reaches 1 and so alpha is clamped to 1 inside them. The outside ones that you've traced in white are the edges of the circles.
I modified your ShaderToy link by changing float alpha = clamp(totalColor.w, 0.0, 1.0); to float alpha = 1.0; and float intensity = 1.0 - clamp(ratio, 0.0, 1.0); to float intensity = smoothstep(1.0, 0.0, ratio); (to smooth out the edges of the circles) and now it looks like the first picture.
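In shader terms, the two edits look like this (a sketch, with context names taken from the question's mainImage):
// Inside the per-shape loop: soften the circle's rim instead of using a hard linear ramp.
float intensity = smoothstep(1.0, 0.0, ratio); // was: 1.0 - clamp(ratio, 0.0, 1.0)
// After the loop: keep the additive sum as-is rather than re-weighting it by coverage.
float alpha = 1.0; // was: clamp(totalColor.w, 0.0, 1.0)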

How to build my funny timeline?

I am building my responsive website and would like to create my funny timeline, but I cannot come up with a solution.
It would be a sprite such as a rocket or flying saucer taking off from the bottom middle of the page, trailing smoke.
The smoke would linger and gradually reveal my timeline.
Sketch
Does anyone have an idea how to make that possible?
To simulate smoke, you have to use a particle system.
As you maybe know, WebGL is able to draw triangles, lines and points.
This last one is what we need. The smoke is made of hundreds of semi-transparent white disks of slightly different sizes. Each point is defined by 7 attributes:
x, y: starting position.
vx, vy: direction.
radius: maximal radius.
life: number of milliseconds before it disappears.
delay: number of milliseconds to wait before its birth.
One trick is to create the points along a vertical, centered axis: the higher you go, the more the delay increases. The other trick is to make the point more and more transparent as it reaches its end of life.
Here is how you create such vertices (rnd( a, b ) is assumed to be a helper returning a random number between a and b, or between 0 and a when given one argument):
function createVertices() {
var x, y, vx, vy, radius, life, delay;
var vertices = [];
for( delay=0; delay<1; delay+=0.01 ) {
for( var loops=0; loops<5; loops++ ) {
// Going left.
x = rnd(0.01);
y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
vx = -rnd(0, 1.5) * 0.0001;
vy = -rnd(0.001);
radius = rnd(0.1, 0.25) / 1000;
life = rnd(2000, 5000);
vertices.push( x, y, vx, vy, radius, life, delay );
// Going right.
x = -rnd(0.01);
y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
vx = rnd(0, 1.5) * 0.0001;
vy = -rnd(0.001);
radius = rnd(0.1, 0.25) / 1000;
life = rnd(2000, 5000);
vertices.push( x, y, vx, vy, radius, life, delay );
}
}
var buff = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, buff );
gl.bufferData( gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW );
return Math.floor( vertices.length / 7 );
}
As you can see, I created points going right and points going left to get a growing fuzzy triangle.
Then you need a vertex shader controlling the position and size of the points.
WebGL provides the output variable gl_PointSize, which is the size (in pixels) of the square to draw for the current point.
uniform float uniWidth;
uniform float uniHeight;
uniform float uniTime;
attribute vec2 attCoords;
attribute vec2 attDirection;
attribute float attRadius;
attribute float attLife;
attribute float attDelay;
varying float varAlpha;
const float PERIOD = 10000.0;
const float TRAVEL_TIME = 2000.0;
void main() {
float time = mod( uniTime, PERIOD );
time -= TRAVEL_TIME * attDelay;
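// Not yet born, or already dead: skip this particle. (Returning without writing
// gl_Position is technically undefined behavior; most WebGL implementations happen
// to discard the point, but setting gl_PointSize = 0.0 first would be safer.)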
if( time < 0.0 || time > attLife) return;
vec2 pos = attCoords + time * attDirection;
gl_Position = vec4( pos.xy, 0, 1 );
gl_PointSize = time * attRadius * min(uniWidth, uniHeight);
varAlpha = 1.0 - (time / attLife);
}
Finally, the fragment shader displays the point in white, but the farther a fragment is from the center, the more transparent it becomes.
To know where you are within the square drawn for the current point, you can read the built-in variable gl_PointCoord.
precision mediump float;
varying float varAlpha;
void main() {
float x = gl_PointCoord.x - 0.5;
float y = gl_PointCoord.y - 0.5;
float radius = x * x + y * y;
if( radius > 0.25 ) discard;
float alpha = varAlpha * 0.8 * (0.25 - radius);
gl_FragColor = vec4(1, 1, 1, alpha);
}
Here is a live example : https://jsfiddle.net/m1a9qry6/1/

Weird behavior if DataTextures are not square (1:1)

I have a pair of shader programs where everything works great if my DataTextures are square (1:1), but if one or both are 2:1 (width:height) ratio the behavior gets messed up. I can extend each of the buffers with unused filler to make sure they are always square, but this seems unnecessarily costly (memory-wise) in the long run, as one of the two buffer sizes is quite large to start. Is there a way to handle a 2:1 buffer in this scenario?
I have a pair of shader programs:
The first is a single frag shader used to calculate the physics for my program (it writes out a texture tPositions to be read by the second set of shaders). It is driven by Three.js's GPUComputeRenderer script (resolution set at the size of my largest buffer.)
The second pair of shaders (vert and frag) use the data texture tPositions produced by the first shader program to then render out the visualization (resolution set at the window size).
The visualization is a grid of variously shaped particle clouds. In the shader programs, there are textures of two different sizes: The smaller sized textures contain information for each of the particle clouds (one texel per cloud), larger sized textures contain information for each particle in all of the clouds (one texel per particle). Both have a certain amount of unused filler tacked on the end to fill them out to a power of 2.
Texel-per-particle sized textures (large): tPositions, tOffsets
Texel-per-cloud sized textures (small): tGridPositionsAndSeeds, tSelectionFactors
As I said before, the problem is that when these two buffer sizes (the large and the small) are at a 1:1 (width: height) ratio, the programs work just fine; however, when one or both are at a 2:1 (width:height) ratio the behavior is a mess. What accounts for this, and how can I address it? Thanks in advance!
UPDATE: Could the problem be related to my storing the texel coords for reading the tPositions texture in the position attribute of the second shader program? If so, perhaps this GitHub issue regarding texel coords in the position attribute may be related, though I can't find a corresponding question/answer here on SO.
UPDATE 2:
I'm also looking into whether this could be an unpack alignment issue. Thoughts?
Here's the set up in Three.js for the first shader program:
function initComputeRenderer() {
textureData = MotifGrid.getBufferData();
gpuCompute = new GPUComputationRenderer( textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, renderer );
dtPositions = gpuCompute.createTexture();
dtPositions.image.data = textureData.tPositions;
offsetsTexture = new THREE.DataTexture( textureData.tOffsets, textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, THREE.RGBAFormat, THREE.FloatType );
offsetsTexture.needsUpdate = true;
gridPositionsAndSeedsTexture = new THREE.DataTexture( textureData.tGridPositionsAndSeeds, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
gridPositionsAndSeedsTexture.needsUpdate = true;
selectionFactorsTexture = new THREE.DataTexture( textureData.tSelectionFactors, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
positionVariable = gpuCompute.addVariable( "tPositions", document.getElementById( 'position_fragment_shader' ).textContent, dtPositions );
positionVariable.wrapS = THREE.RepeatWrapping; // repeat wrapping only works with power-of-two sizes: 8x8, 16x16, etc.
positionVariable.wrapT = THREE.RepeatWrapping;
gpuCompute.setVariableDependencies( positionVariable, [ positionVariable ] );
positionUniforms = positionVariable.material.uniforms;
positionUniforms.tOffsets = { type: "t", value: offsetsTexture };
positionUniforms.tGridPositionsAndSeeds = { type: "t", value: gridPositionsAndSeedsTexture };
positionUniforms.tSelectionFactors = { type: "t", value: selectionFactorsTexture };
positionUniforms.uPerMotifBufferWidth = { type : "f", value : textureData.uPerMotifBufferWidth };
positionUniforms.uPerMotifBufferHeight = { type : "f", value : textureData.uPerMotifBufferHeight };
positionUniforms.uTime = { type: "f", value: 0.0 };
positionUniforms.uXOffW = { type: "f", value: 0.5 };
}
Here is the first shader program (only a frag for physics calculations):
// tPositions is handled by the GPUCompute script
uniform sampler2D tOffsets;
uniform sampler2D tGridPositionsAndSeeds;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform float uTime;
uniform float uXOffW;
[...skipping a noise function for brevity...]
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec4 offsets = texture2D( tOffsets, uv ).xyzw;
float alphaMass = offsets.z;
float cellIndex = offsets.w;
if (cellIndex >= 0.0) {
float damping = 0.98;
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferWidth)*texelSizeX, floor(cellIndex / uPerMotifBufferHeight)*texelSizeY );
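// NOTE: floor(cellIndex / uPerMotifBufferHeight) is the bug diagnosed in the
// answer at the end of this question; the row index should be computed with
// uPerMotifBufferWidth instead.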
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float swapState = selectionFactors.x;
vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw;
vec2 noiseSeed = gridPosition.zw;
vec4 nowPos;
vec2 velocity;
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
if ( swapState == 0.0 ) {
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
} else { // if swapState == 1
//nowPos = vec4( -(uTime) + gridPosition.x + offsets.x, gridPosition.y + offsets.y, 0.0, 0.0 );
nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 );
velocity = vec2(0.0, 0.0);
}
[...skipping the physics for brevity...]
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
// Write new position out
gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
}
Here is the setup for the second shader program:
Note: The renderer for this section is a WebGLRenderer at window size
function makePerParticleReferencePositions() {
var positions = new Float32Array( perParticleBufferSize * 3 );
var texelSizeX = 1 / perParticleBufferDimensions.width;
var texelSizeY = 1 / perParticleBufferDimensions.height;
for ( var j = 0, j3 = 0; j < perParticleBufferSize; j ++, j3 += 3 ) {
positions[ j3 + 0 ] = ( ( j % perParticleBufferDimensions.width ) / perParticleBufferDimensions.width ) + ( 0.5 * texelSizeX );
positions[ j3 + 1 ] = ( Math.floor( j / perParticleBufferDimensions.height ) / perParticleBufferDimensions.height ) + ( 0.5 * texelSizeY );
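// NOTE: as in the shaders, this row computation divides j by the buffer height;
// for non-square buffers the row is floor( j / width ), which is then divided
// by the height to normalize (see the accepted fix at the end).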
positions[ j3 + 2 ] = j * 0.0001; // this is the real z value for the particle display
}
return positions;
}
var positions = makePerParticleReferencePositions();
...
// Add attributes to the BufferGeometry:
gridOfMotifs.geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
gridOfMotifs.geometry.addAttribute( 'aTextureIndex', new THREE.BufferAttribute( motifGridAttributes.aTextureIndex, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aAlpha', new THREE.BufferAttribute( motifGridAttributes.aAlpha, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aCellIndex', new THREE.BufferAttribute(
motifGridAttributes.aCellIndex, 1 ) );
uniformValues = {};
uniformValues.tSelectionFactors = motifGridAttributes.tSelectionFactors;
uniformValues.uPerMotifBufferWidth = motifGridAttributes.uPerMotifBufferWidth;
uniformValues.uPerMotifBufferHeight = motifGridAttributes.uPerMotifBufferHeight;
gridOfMotifs.geometry.computeBoundingSphere();
...
function makeCustomUniforms( uniformValues ) {
selectionFactorsTexture = new THREE.DataTexture( uniformValues.tSelectionFactors, uniformValues.uPerMotifBufferWidth, uniformValues.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
var customUniforms = {
tPositions : { type : "t", value : null },
tSelectionFactors : { type : "t", value : selectionFactorsTexture },
uPerMotifBufferWidth : { type : "f", value : uniformValues.uPerMotifBufferWidth },
uPerMotifBufferHeight : { type : "f", value : uniformValues.uPerMotifBufferHeight },
uTextureSheet : { type : "t", value : texture }, // this is a sprite sheet of all 10 strokes
uPointSize : { type : "f", value : 18.0 }, // the radius of a point in WebGL units, e.g. 30.0
// Coords for the hatch textures:
uTextureCoordSizeX : { type : "f", value : 1.0 / numTexturesInSheet },
uTextureCoordSizeY : { type : "f", value : 1.0 }, // the size of a texture in the texture map ( they're square, thus only one value )
};
return customUniforms;
}
And here is the corresponding shader program (vert & frag):
Vertex shader:
uniform sampler2D tPositions;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform sampler2D uTextureSheet;
uniform float uPointSize; // the radius size of the point in WebGL units, e.g. "30.0"
uniform float uTextureCoordSizeX; // horizontal size of each texture in the sheet, where the full sheet = 1
uniform float uTextureCoordSizeY; // vertical size of each texture in the sheet, where the full sheet = 1
attribute float aTextureIndex;
attribute float aAlpha;
attribute float aCellIndex;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
[...skipping noise function for brevity...]
void main() {
vec4 tmpPos = texture2D( tPositions, position.xy );
vec2 pos = tmpPos.xy;
vec2 vel = tmpPos.zw;
vCellIndex = aCellIndex;
if (aCellIndex >= 0.0) { // buffer filler cell indexes are -1
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX, floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
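// NOTE: this is the line the answer below corrects; the row term should be
// floor(aCellIndex / uPerMotifBufferWidth), not uPerMotifBufferHeight.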
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float aSelectedMotif = selectionFactors.x;
float aColor = selectionFactors.y;
float fadeFactor = selectionFactors.z;
vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 );
vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY );
vAlpha = aAlpha * fadeFactor;
vDensity = vel.x + vel.y;
vAlpha *= abs( vDensity * 3.0 );
vColor = vec3( 1.0, aColor, 1.0 ); // set RGB color associated to vertex; use later in fragment shader.
gl_PointSize = uPointSize;
} else { // if this is a filler cell index (-1)
vAlpha = 0.0;
vDensity = 0.0;
vColor = vec3(0.0, 0.0, 0.0);
gl_PointSize = 0.0;
}
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 ); // position holds the real z value. The z value of "color" is a component of velocity
}
Fragment shader:
uniform sampler2D tPositions;
uniform sampler2D uTextureSheet;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
void main() {
gl_FragColor = vec4( vColor, vAlpha );
if (vCellIndex >= 0.0) { // only render out the texture if this point is not a buffer filler
vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize );
gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord );
}
}
Expected Behavior: I can achieve this by forcing all the DataTextures to be 1:1
Weird Behavior: When the smaller DataTextures are 2:1, the perfectly horizontal clouds in the top right of the picture below form and have messed-up physics. When the larger DataTextures are 2:1, the grid is skewed and the clouds appear to be missing parts (as seen below). When both the small and large textures are 2:1, both odd behaviors happen (this is the case in the image below).
Thanks to an answer to my related question here, I now know what was going wrong. The problem was in the way I was using the arrays of indexes (1,2,3,4,5...) to access the DataTextures' texels in the shader.
In this function (and the one for the larger DataTextures)...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
...I assumed that in order to create the y value for my custom uv, perMotifUV, I would need to divide the aCellIndex by the height of the buffer, uPerMotifBufferHeight (its "vertical" dimension). However, as explained in the SO Q&A here, the indices should of course be divided by the buffer's width, which then tells you how many rows down you are!
Thus, the function should be revised to...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferWidth)*texelSizeY ); // note the change to uPerMotifBufferWidth here
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
The reason my program worked on square DataTextures (1:1) is that in those cases the height and width were equal, so the incorrect line was effectively dividing by the width anyway (height = width)!
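The same conversion can be written once as a small helper, sketched here (the function name and signature are mine, not from the original code):
// Convert a linear cell index into the UV of that texel's center
// inside a width x height data texture.
vec2 indexToUV( float index, float width, float height ) {
float col = mod( index, width );
float row = floor( index / width ); // divide by the WIDTH to count full rows
return vec2( ( col + 0.5 ) / width, ( row + 0.5 ) / height );
}
// Usage in either shader:
// vec2 perMotifUV = indexToUV( aCellIndex, uPerMotifBufferWidth, uPerMotifBufferHeight );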

Depth Map is white - webgl

I am using shaders to draw the depth map of my image.
Here is my shader code :
Vertex shader:
void main(void) {
gl_PointSize = aPointSize;
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
visdepth = aisdepth;
vHasTexture = aHasTexture;
if (aHasTexture > 0.5)
vTextureCoord = aTextureCoord;
}
Fragment shader:
void main(void) {
if (vHasTexture < 0.5 && visdepth < 0.5)
gl_FragColor = vColor;
if (vHasTexture > 0.5) {
vec4 textureColor = texture2D(uTexture, vec2(vTextureCoord.s, vTextureCoord.t));
gl_FragColor = vec4(textureColor.rgb, textureColor.a * uTextureAlpha);
}
if (visdepth > 0.5){
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
gl_FragColor = vec4((clipDepth*0.5)+0.5);
}
}
I used the following link as reference for my calculations : draw the depth value in opengl using shaders
All my values come out white, as shown below:
The two images above clearly show that the points to the far right of the image are further back. This is not reflected in the image I downloaded. After calling drawArrays, I use toDataURL to download the canvas data; the images are the result of that download. Does anyone know of any possible reasons for this?
For anyone who seeks an answer to that question, here's a little hint:
if you want to view the depth map, you have to linearize it...
float linearize_Z( float depth, float zNear, float zFar ) {
return ( 2.0 * zNear ) / ( zFar + zNear - depth * ( zFar - zNear ) );
}
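For example, the depth branch of the question's fragment shader could output the linearized value instead. This is only a sketch: uZNear and uZFar are hypothetical uniforms mirroring the projection's clip planes, which the original code does not show.
uniform float uZNear; // hypothetical: near clip plane distance
uniform float uZFar; // hypothetical: far clip plane distance
// ...inside main():
if (visdepth > 0.5) {
float d = linearize_Z( gl_FragCoord.z, uZNear, uZFar ); // ~0 near, ~1 far
gl_FragColor = vec4( vec3( d ), 1.0 );
}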

Calculating the correct texture coordinates for a 3D terrain

Using a texture, I'm trying to pass data to my shader so it knows what color each fragment should be. I'm attempting to create a voxel-type terrain (Minecraft style voxels) using 8-bit ints, with each RGBA value being a different color specified on the shader. The value 1 might be green and 2 might be brown for example.
If my math is correct, a 2048 x 2048 sized texture is the exact size needed for the voxel terrain data:
2048 x 2048 sized texture = 4194304 pixels.
8 x 8 = 64 "chunks" loaded at once.
32 x 32 x 256 = 262144 voxels in a chunk.
64 x 262144 = 16777216 voxels.
For each pixel in the texture I can use the R, G, B and A channels as individual values, so divide by 4 (each voxel is therefore 1 byte, which is fine since values will be less than 200):
16777216 / 4 = 4194304 pixels.
That said, I'm having trouble getting the correct texture coordinates to represent the 3D terrain. This is my code at the moment which works fine for a flat plane:
Fragment shader:
int modint( int a, int b )
{
return a - int( floor( float( a ) / float( b ) ) * float( b ) );
}
void main() {
// divide by 4096 because we're using the same pixel twice in each axis
vec4 data = texture2D(uSampler, vec2(verpos.x / 4096.0, verpos.z / 4096.0));
vec2 pixel_of_target = verpos.xz;
int _x = int( pixel_of_target.x );
int _y = int( pixel_of_target.y );
int X = modint( _y, 2 ) * 2 + modint( _x, 2 );
vec4 colorthing;
float blockID;
if (X == 0) blockID = data.x;
else if (X == 1) blockID = data.y;
else if (X == 2) blockID = data.z;
else if (X == 3) blockID = data.w;
if (blockID == 1.0) gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
else if (blockID == 2.0) gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
else if (blockID == 3.0) gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
else if (blockID == 4.0) gl_FragColor = vec4( 1.0, 0.0, 1.0, 1.0 );
}
So basically my texture is a 2D map containing slices of my 3D data, and I need to modify this code so it calculates the correct coordinates. Anyone know how I would do this?
Assuming that your texture will contain 64 slices of your terrain in an 8x8 grid, the following lookup should work:
vec2 texCoord = vec2((verpos.x / 8.0 + mod(verpos.y, 8.0)) / 4096.0,
(verpos.z / 8.0 + floor(verpos.y / 8.0)) / 4096.0);
vec4 data = texture2D(uSampler, texCoord);
... Rest of your shader as above
Your texture then should contain a full 2D slice of the terrain at height 0, then next to it a slice at height 1, until height 7 at the rightmost position. In the next row are heights 8 - 15 and so on.
Having said that, you should normally try to avoid ifs in shader code, because they slow down shader processing quite a bit. WebGL supports arrays, so you can store all your colors in an array and do an array lookup instead of the if-chain, as sketched below.
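For example (a sketch: uBlockColors is a hypothetical palette uniform filled from JavaScript, and note that GLSL ES 1.0 only guarantees array indexing by constant-index-expressions in fragment shaders, which loop indices satisfy):
uniform vec4 uBlockColors[4]; // hypothetical palette: green, brown, ...
// ...instead of the if-chain:
int id = int( blockID ) - 1; // blockIDs start at 1
vec4 color = vec4( 0.0 );
for ( int i = 0; i < 4; i++ ) {
if ( i == id ) color = uBlockColors[i]; // loop index counts as a constant-index-expression
}
gl_FragColor = color;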
The texture coordinate should be:
vec4 data = texture2D(uSampler, vec2(verpos.x / (4 * 2048.0), verpos.z / 2048.0));
and then the byte to read would be given by
int index = (verpos.x/2048) % 4;
If index == 0, pick data.r; if 1, data.g; if 2, data.b; and so on...
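Made concrete with the modint() helper from the question, that scheme might read as follows. This is a sketch assuming four voxels are packed side by side along x, so the channel index is the voxel's x position mod 4; adjust if your packing differs.
vec4 data = texture2D(uSampler, vec2(verpos.x / (4.0 * 2048.0), verpos.z / 2048.0));
int index = modint( int( verpos.x ), 4 ); // which of the 4 packed voxels along x
float blockID;
if ( index == 0 ) blockID = data.r;
else if ( index == 1 ) blockID = data.g;
else if ( index == 2 ) blockID = data.b;
else blockID = data.a;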
