Weird behavior if DataTextures are not square (1:1) - three.js
I have a pair of shader programs where everything works great if my DataTextures are square (1:1), but if one or both are 2:1 (width:height) ratio the behavior gets messed up. I can extend each of the buffers with unused filler to make sure they are always square, but this seems unnecessarily costly (memory-wise) in the long run, as one of the two buffer sizes is quite large to start. Is there a way to handle a 2:1 buffer in this scenario?
I have a pair of shader programs:
The first is a single frag shader used to calculate the physics for my program (it writes out a texture tPositions to be read by the second set of shaders). It is driven by Three.js's GPUComputationRenderer script (its resolution set to the size of my largest buffer).
The second pair of shaders (vert and frag) use the data texture tPositions produced by the first shader program to then render out the visualization (resolution set at the window size).
The visualization is a grid of variously shaped particle clouds. In the shader programs there are textures of two different sizes: the smaller textures contain information for each of the particle clouds (one texel per cloud), and the larger textures contain information for each particle in all of the clouds (one texel per particle). Both have a certain amount of unused filler tacked on the end to fill them out to a power of 2.
Texel-per-particle sized textures (large): tPositions, tOffsets
Texel-per-cloud sized textures (small): tGridPositionsAndSeeds, tSelectionFactors
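For concreteness, here is a plain-JS sketch of how sizes like these can end up at a 2:1 ratio (this assumes a pad-to-power-of-two scheme like mine; the helper names and particle counts here are made up for illustration):

```javascript
// Round n up to the next power of two.
function nextPow2(n) {
  return Math.pow(2, Math.ceil(Math.log2(n)));
}

// Pick power-of-two texture dimensions that hold at least texelCount texels:
// width is the next power of two above sqrt(n), height covers the remainder.
function bufferDimensions(texelCount) {
  const width = nextPow2(Math.ceil(Math.sqrt(texelCount)));
  const height = nextPow2(Math.ceil(texelCount / width));
  return { width, height };
}

console.log(bufferDimensions(600));  // { width: 32, height: 32 } -- square
console.log(bufferDimensions(2000)); // { width: 64, height: 32 } -- the 2:1 case
```

So depending on the particle count, the same scheme naturally produces both the square (1:1) textures that work and the 2:1 textures that misbehave.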
As I said before, the problem is that when these two buffer sizes (the large and the small) are at a 1:1 (width:height) ratio, the programs work just fine; however, when one or both are at a 2:1 (width:height) ratio the behavior is a mess. What accounts for this, and how can I address it? Thanks in advance!
UPDATE: Could the problem be related to my housing the texel coords for reading the tPositions texture in the position attribute of the second shader program? If so, perhaps this GitHub issue regarding texel coords in the position attribute is related, though I can't find a corresponding question/answer here on SO.
UPDATE 2:
I'm also looking into whether this could be an unpack alignment issue. Thoughts?
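A quick sanity check suggests alignment may be a dead end here (this assumes my data really is RGBA + FloatType, as in the setup below): every row is width * 4 channels * 4 bytes, which is a multiple of every legal UNPACK_ALIGNMENT value no matter what the width is.

```javascript
// Assumption: RGBA format with 4-byte floats per channel.
// Each row is then width * 4 * 4 bytes, always divisible by 8,
// so no UNPACK_ALIGNMENT setting (1, 2, 4, or 8) can shift the rows.
function rowByteLength(width) {
  return width * 4 * 4;
}

for (const w of [1, 3, 7, 64, 128]) {
  console.log(w, rowByteLength(w) % 8 === 0); // true for every width
}
```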
Here's the set up in Three.js for the first shader program:
function initComputeRenderer() {
textureData = MotifGrid.getBufferData();
gpuCompute = new GPUComputationRenderer( textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, renderer );
dtPositions = gpuCompute.createTexture();
dtPositions.image.data = textureData.tPositions;
offsetsTexture = new THREE.DataTexture( textureData.tOffsets, textureData.uPerParticleBufferWidth, textureData.uPerParticleBufferHeight, THREE.RGBAFormat, THREE.FloatType );
offsetsTexture.needsUpdate = true;
gridPositionsAndSeedsTexture = new THREE.DataTexture( textureData.tGridPositionsAndSeeds, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
gridPositionsAndSeedsTexture.needsUpdate = true;
selectionFactorsTexture = new THREE.DataTexture( textureData.tSelectionFactors, textureData.uPerMotifBufferWidth, textureData.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
positionVariable = gpuCompute.addVariable( "tPositions", document.getElementById( 'position_fragment_shader' ).textContent, dtPositions );
positionVariable.wrapS = THREE.RepeatWrapping; // repeat wrapping for use only with power-of-two sizes: 8x8, 16x16, etc.
positionVariable.wrapT = THREE.RepeatWrapping;
gpuCompute.setVariableDependencies( positionVariable, [ positionVariable ] );
positionUniforms = positionVariable.material.uniforms;
positionUniforms.tOffsets = { type: "t", value: offsetsTexture };
positionUniforms.tGridPositionsAndSeeds = { type: "t", value: gridPositionsAndSeedsTexture };
positionUniforms.tSelectionFactors = { type: "t", value: selectionFactorsTexture };
positionUniforms.uPerMotifBufferWidth = { type : "f", value : textureData.uPerMotifBufferWidth };
positionUniforms.uPerMotifBufferHeight = { type : "f", value : textureData.uPerMotifBufferHeight };
positionUniforms.uTime = { type: "f", value: 0.0 };
positionUniforms.uXOffW = { type: "f", value: 0.5 };
}
Here is the first shader program (only a frag for physics calculations):
// tPositions is handled by the GPUCompute script
uniform sampler2D tOffsets;
uniform sampler2D tGridPositionsAndSeeds;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform float uTime;
uniform float uXOffW;
[...skipping a noise function for brevity...]
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec4 offsets = texture2D( tOffsets, uv ).xyzw;
float alphaMass = offsets.z;
float cellIndex = offsets.w;
if (cellIndex >= 0.0) {
float damping = 0.98;
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferWidth)*texelSizeX, floor(cellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float swapState = selectionFactors.x;
vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw;
vec2 noiseSeed = gridPosition.zw;
vec4 nowPos;
vec2 velocity;
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
if ( swapState == 0.0 ) {
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
} else { // if swapState == 1
//nowPos = vec4( -(uTime) + gridPosition.x + offsets.x, gridPosition.y + offsets.y, 0.0, 0.0 );
nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 );
velocity = vec2(0.0, 0.0);
}
[...skipping the physics for brevity...]
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
// Write new position out
gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
}
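To see concretely how that per-motif lookup can misbehave, here is the same math reproduced in plain JS for a hypothetical 4x2 (i.e. 2:1) texture holding 8 cells (cell 5 should sit at column 1, row 1):

```javascript
// The question's mapping: the row is computed by dividing the cell
// index by the texture HEIGHT.
function questionUV(cellIndex, width, height) {
  const texelSizeX = 1 / width;
  const texelSizeY = 1 / height;
  return {
    u: (cellIndex % width) * texelSizeX + 0.5 * texelSizeX,
    v: Math.floor(cellIndex / height) * texelSizeY + 0.5 * texelSizeY,
  };
}

console.log(questionUV(5, 4, 2)); // { u: 0.375, v: 1.25 }
```

For a square texture this lands on the right texel, but here v = 1.25 falls outside [0, 1], and with RepeatWrapping it silently wraps around to the wrong row.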
Here is the setup for the second shader program:
Note: The renderer for this section is a WebGLRenderer at window size
function makePerParticleReferencePositions() {
var positions = new Float32Array( perParticleBufferSize * 3 );
var texelSizeX = 1 / perParticleBufferDimensions.width;
var texelSizeY = 1 / perParticleBufferDimensions.height;
for ( var j = 0, j3 = 0; j < perParticleBufferSize; j ++, j3 += 3 ) {
positions[ j3 + 0 ] = ( ( j % perParticleBufferDimensions.width ) / perParticleBufferDimensions.width ) + ( 0.5 * texelSizeX );
positions[ j3 + 1 ] = ( Math.floor( j / perParticleBufferDimensions.height ) / perParticleBufferDimensions.height ) + ( 0.5 * texelSizeY );
positions[ j3 + 2 ] = j * 0.0001; // this is the real z value for the particle display
}
return positions;
}
var positions = makePerParticleReferencePositions();
...
// Add attributes to the BufferGeometry:
gridOfMotifs.geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
gridOfMotifs.geometry.addAttribute( 'aTextureIndex', new THREE.BufferAttribute( motifGridAttributes.aTextureIndex, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aAlpha', new THREE.BufferAttribute( motifGridAttributes.aAlpha, 1 ) );
gridOfMotifs.geometry.addAttribute( 'aCellIndex', new THREE.BufferAttribute(
motifGridAttributes.aCellIndex, 1 ) );
uniformValues = {};
uniformValues.tSelectionFactors = motifGridAttributes.tSelectionFactors;
uniformValues.uPerMotifBufferWidth = motifGridAttributes.uPerMotifBufferWidth;
uniformValues.uPerMotifBufferHeight = motifGridAttributes.uPerMotifBufferHeight;
gridOfMotifs.geometry.computeBoundingSphere();
...
function makeCustomUniforms( uniformValues ) {
selectionFactorsTexture = new THREE.DataTexture( uniformValues.tSelectionFactors, uniformValues.uPerMotifBufferWidth, uniformValues.uPerMotifBufferHeight, THREE.RGBAFormat, THREE.FloatType );
selectionFactorsTexture.needsUpdate = true;
var customUniforms = {
tPositions : { type : "t", value : null },
tSelectionFactors : { type : "t", value : selectionFactorsTexture },
uPerMotifBufferWidth : { type : "f", value : uniformValues.uPerMotifBufferWidth },
uPerMotifBufferHeight : { type : "f", value : uniformValues.uPerMotifBufferHeight },
uTextureSheet : { type : "t", value : texture }, // this is a sprite sheet of all 10 strokes
uPointSize : { type : "f", value : 18.0 }, // the radius of a point in WebGL units, e.g. 30.0
// Coords for the hatch textures:
uTextureCoordSizeX : { type : "f", value : 1.0 / numTexturesInSheet },
uTextureCoordSizeY : { type : "f", value : 1.0 }, // the size of a texture in the texture map ( they're square, thus only one value )
};
return customUniforms;
}
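For reference, the sprite-sheet coordinate math those last two uniforms encode can be sketched in plain JS (assuming, as in my case, equal-width square strokes laid out in a single row; the helper name is made up):

```javascript
// Each stroke occupies a 1/numTexturesInSheet-wide column of the sheet,
// spanning the full height; textureIndex selects the column.
function spriteCoords(textureIndex, numTexturesInSheet) {
  const sizeX = 1 / numTexturesInSheet;
  return { x: textureIndex * sizeX, y: 0, sizeX, sizeY: 1 };
}

console.log(spriteCoords(3, 10)); // { x: 0.3, y: 0, sizeX: 0.1, sizeY: 1 }
```

The vertex shader then passes x/y through as vTextureCoords and sizeX/sizeY as vTextureSize, and the fragment shader offsets into that region with gl_PointCoord.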
And here is the corresponding shader program (vert & frag):
Vertex shader:
uniform sampler2D tPositions;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferWidth;
uniform float uPerMotifBufferHeight;
uniform sampler2D uTextureSheet;
uniform float uPointSize; // the radius size of the point in WebGL units, e.g. "30.0"
uniform float uTextureCoordSizeX; // horizontal dimension of each texture given the full side = 1
uniform float uTextureCoordSizeY; // vertical dimension of each texture given the full side = 1
attribute float aTextureIndex;
attribute float aAlpha;
attribute float aCellIndex;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
[...skipping noise function for brevity...]
void main() {
vec4 tmpPos = texture2D( tPositions, position.xy );
vec2 pos = tmpPos.xy;
vec2 vel = tmpPos.zw;
vCellIndex = aCellIndex;
if (aCellIndex >= 0.0) { // buffer filler cell indexes are -1
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX, floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float aSelectedMotif = selectionFactors.x;
float aColor = selectionFactors.y;
float fadeFactor = selectionFactors.z;
vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 );
vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY );
vAlpha = aAlpha * fadeFactor;
vDensity = vel.x + vel.y;
vAlpha *= abs( vDensity * 3.0 );
vColor = vec3( 1.0, aColor, 1.0 ); // set RGB color associated to vertex; use later in fragment shader.
gl_PointSize = uPointSize;
} else { // if this is a filler cell index (-1)
vAlpha = 0.0;
vDensity = 0.0;
vColor = vec3(0.0, 0.0, 0.0);
gl_PointSize = 0.0;
}
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 ); // the position attribute's z holds the real z value for the particle
}
Fragment shader:
uniform sampler2D tPositions;
uniform sampler2D uTextureSheet;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
varying float vDensity;
void main() {
gl_FragColor = vec4( vColor, vAlpha );
if (vCellIndex >= 0.0) { // only render out the texture if this point is not a buffer filler
vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize );
gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord );
}
}
Expected Behavior: I can achieve this by forcing all the DataTextures to be 1:1
Weird Behavior: When the smaller DataTextures are 2:1 those perfectly horizontal clouds in the top right of the picture below form and have messed up physics. When the larger DataTextures are 2:1, the grid is skewed, and the clouds appear to be missing parts (as seen below). When both the small and large textures are 2:1, both odd behaviors happen (this is the case in the image below).
Thanks to an answer to my related question here, I now know what was going wrong. The problem was in the way I was using the arrays of indexes (1,2,3,4,5...) to access the DataTextures' texels in the shader.
In this function (and the one for the larger DataTextures)...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
...I assumed that in order to create the y value for my custom uv, perMotifUV, I needed to divide the aCellIndex by the height of the buffer, uPerMotifBufferHeight (its "vertical" dimension). However, as explained in the SO Q&A here, the index should of course be divided by the buffer's width, which tells you how many rows down you are!
Thus, the function should be revised to...
float texelSizeX = 1.0 / uPerMotifBufferWidth;
float texelSizeY = 1.0 / uPerMotifBufferHeight;
vec2 perMotifUV = vec2(
mod(aCellIndex, uPerMotifBufferWidth)*texelSizeX,
floor(aCellIndex / uPerMotifBufferWidth)*texelSizeY ); // note the change to uPerMotifBufferWidth here
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
The reason my program worked on square (1:1) DataTextures is that in those cases the height and width were equal, so the incorrect line was effectively dividing by the width anyway (height == width)!
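As a quick plain-JS check of the corrected mapping (for a hypothetical 4x2, i.e. 2:1, texture holding cells 0-7):

```javascript
// Corrected mapping: the row is the cell index divided by the WIDTH,
// and both coordinates are nudged to the texel center.
function indexToTexelCenterUV(cellIndex, width, height) {
  const texelSizeX = 1 / width;
  const texelSizeY = 1 / height;
  return {
    u: (cellIndex % width) * texelSizeX + 0.5 * texelSizeX,
    v: Math.floor(cellIndex / width) * texelSizeY + 0.5 * texelSizeY,
  };
}

console.log(indexToTexelCenterUV(5, 4, 2)); // { u: 0.375, v: 0.75 }
```

Cell 5 now lands at column 1, row 1 (u = 0.375, v = 0.75), squarely inside [0, 1] for any width:height ratio.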
Related
Changing fresnel falloff on ThreeJS shader
I've been making use of this shader inside of my ThreeJS project, except I've more or less copied the code verbatim because I have no idea how to write a shader function. Basically I want to edit the rate of falloff on the fresnel effect so that it's only really the edges that are using the colour with a slight glow coming inside var material = THREE.extendMaterial(THREE.MeshStandardMaterial, { // Will be prepended to vertex and fragment code header: 'varying vec3 vNN; varying vec3 vEye;', fragmentHeader: 'uniform vec3 fresnelColor;', // Insert code lines by hinting at a existing vertex: { // Inserts the line after #include <fog_vertex> '#include <fog_vertex>': ` mat4 LM = modelMatrix; LM[2][3] = 0.0; LM[3][0] = 0.0; LM[3][1] = 0.0; LM[3][2] = 0.0; vec4 GN = LM * vec4(objectNormal.xyz, 1.0); vNN = normalize(GN.xyz); vEye = normalize(GN.xyz-cameraPosition);` }, fragment: { 'gl_FragColor = vec4( outgoingLight, diffuseColor.a );' : `gl_FragColor.rgb += ( 1.0 - -min(dot(vEye, normalize(vNN) ), 0.0) ) * fresnelColor;` }, // Uniforms (will be applied to existing or added) uniforms: { diffuse: new THREE.Color( 'black' ), fresnelColor: new THREE.Color( 'blue' ) } }); I've tried changing the number in this line gl_FragColor.rgb += ( **1.0** - -min(dot(vEye, normalize(vNN) ), 0.0) ) * fresnelColor; and whilst that did stop the gradient of the fresnel, it was a hard stop, as though it was limiting levels instead of the rate of gradient. I just need help with how I can make the fall off not as far into my models so that it's only really the edges that have it
Maybe this will help: fragment: { 'gl_FragColor = vec4( outgoingLight, diffuseColor.a );' : ` float m = ( 1.0 - -min(dot(vEye, normalize(vNN)), 0.0) ); m = pow(m, 8.); // the greater the second parameter, the thinner effect you get gl_FragColor.rgb += m * fresnelColor; `
Implement antialiasing logic for line segments and triangles in GLSL shaders
I'm building 2D Graph structure based on Three.js, all elements of the graph (nodes, edges, triangles for arrows) calculated in shaders. I was able to reach a good level of antialiasing for nodes (circles) but stuck with same task for lines and triangles. I was able to reach a good antialiasing results for nodes (circles) with and without stroke following this question: How can I add a uniform width outline to WebGL shader drawn circles/ellipses (drawn using edge/distance antialiasing) , my code, responsible for antialiasing alpha: `float strokeWidth = 0.09; float outerEdgeCenter = 0.5 - strokeWidth; float d = distance(vUV, vec2(.5, .5)); float delta = fwidth(d); float alpha = 1.0 - smoothstep(0.45 - delta, 0.45, d); float stroke = 1.0 - smoothstep(outerEdgeCenter - delta, outerEdgeCenter + delta, d);` But now I'm completely stack with edges and triangles to do same stuff. Here is an example of shapes images that I have now (on non retina displays): To reduce under-sampling artifacts I want to do similar algorithms (as for circles) directly in shaders by manipulating alpha and already find some materials related to this topic: https://thebookofshaders.com/glossary/?search=smoothstep - seems to be the closest solution but unfortunately I wasn't able to implement it properly and figure out how to set up y equation for segmented lines. https://discourse.threejs.org/t/shader-to-create-an-offset-inward-growing-stroke/6060/12 - last answer, looks promising but not give me proper result. https://www.shadertoy.com/view/4dcfW8 - also do not give proper result. 
Here is an examples of my shaders for lines and triangles: Line VertexShader (is a slightly adapted version of WestLangley's LineMaterial shader): `precision highp float; #include <common> #include <color_pars_vertex> #include <fog_pars_vertex> #include <logdepthbuf_pars_vertex> #include <clipping_planes_pars_vertex> uniform float linewidth; uniform vec2 resolution; attribute vec3 instanceStart; attribute vec3 instanceEnd; attribute vec3 instanceColorStart; attribute vec3 instanceColorEnd; attribute float alphaStart; attribute float alphaEnd; attribute float widthStart; attribute float widthEnd; varying vec2 vUv; varying float alphaTest; void trimSegment( const in vec4 start, inout vec4 end ) { // trim end segment so it terminates between the camera plane and the near plane // conservative estimate of the near plane float a = projectionMatrix[ 2 ][ 2 ]; // 3nd entry in 3th column float b = projectionMatrix[ 3 ][ 2 ]; // 3nd entry in 4th column float nearEstimate = - 0.5 * b / a; float alpha = ( nearEstimate - start.z ) / ( end.z - start.z ); end.xyz = mix( start.xyz, end.xyz, alpha ); } void main() { #ifdef USE_COLOR vColor.xyz = ( position.y < 0.5 ) ? instanceColorStart : instanceColorEnd; alphaTest = ( position.y < 0.5 ) ? 
alphaStart : alphaEnd; #endif float aspect = resolution.x / resolution.y; vUv = uv; // camera space vec4 start = modelViewMatrix * vec4( instanceStart, 1.0 ); vec4 end = modelViewMatrix * vec4( instanceEnd, 1.0 ); // special case for perspective projection, and segments that terminate either in, or behind, the camera plane // clearly the gpu firmware has a way of addressing this issue when projecting into ndc space // but we need to perform ndc-space calculations in the shader, so we must address this issue directly // perhaps there is a more elegant solution -- WestLangley bool perspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 ); // 4th entry in the 3rd column if (perspective) { if (start.z < 0.0 && end.z >= 0.0) { trimSegment( start, end ); } else if (end.z < 0.0 && start.z >= 0.0) { trimSegment( end, start ); } } // clip space vec4 clipStart = projectionMatrix * start; vec4 clipEnd = projectionMatrix * end; // ndc space vec2 ndcStart = clipStart.xy / clipStart.w; vec2 ndcEnd = clipEnd.xy / clipEnd.w; // direction vec2 dir = ndcEnd - ndcStart; // account for clip-space aspect ratio dir.x *= aspect; dir = normalize( dir ); // perpendicular to dir vec2 offset = vec2( dir.y, - dir.x ); // undo aspect ratio adjustment dir.x /= aspect; offset.x /= aspect; // sign flip if ( position.x < 0.0 ) offset *= - 1.0; // endcaps, to round line corners if ( position.y < 0.0 ) { // offset += - dir; } else if ( position.y > 1.0 ) { // offset += dir; } // adjust for linewidth offset *= (linewidth * widthStart); // adjust for clip-space to screen-space conversion // maybe resolution should be based on viewport ... offset /= resolution.y; // select end vec4 clip = ( position.y < 0.5 ) ? clipStart : clipEnd; // back to clip space offset *= clip.w; clip.xy += offset; gl_Position = clip; vec4 mvPosition = ( position.y < 0.5 ) ? 
start : end; // this is an approximation #include <logdepthbuf_vertex> #include <clipping_planes_vertex> #include <fog_vertex> }` Line FragmentShader: `precision highp float; #include <common> #include <color_pars_fragment> #include <fog_pars_fragment> #include <logdepthbuf_pars_fragment> #include <clipping_planes_pars_fragment> uniform vec3 diffuse; uniform float opacity; varying vec2 vUv; varying float alphaTest; void main() { if ( abs( vUv.y ) > 1.0 ) { float a = vUv.x; float b = ( vUv.y > 0.0 ) ? vUv.y - 1.0 : vUv.y + 1.0; float len2 = a * a + b * b; if ( len2 > 1.0 ) discard; } vec4 diffuseColor = vec4( diffuse, alphaTest ); #include <logdepthbuf_fragment> #include <color_fragment> gl_FragColor = vec4( diffuseColor.rgb, diffuseColor.a ); #include <premultiplied_alpha_fragment> #include <tonemapping_fragment> #include <encodings_fragment> #include <fog_fragment> }` Triangle vertex shader: `precision highp float; uniform mat4 modelViewMatrix; uniform mat4 projectionMatrix; uniform float zoomLevel; attribute vec3 position; attribute vec3 vertexPos; attribute vec3 color; attribute float alpha; attribute float xAngle; attribute float yAngle; attribute float xScale; attribute float yScale; varying vec4 vColor; // transforms the 'positions' geometry with instance attributes vec3 transform( inout vec3 position, vec3 T) { position.x *= xScale; position.y *= yScale; // Rotate the position vec3 rotatedPosition = vec3( position.x * yAngle + position.y * xAngle, position.y * yAngle - position.x * xAngle, 0); position = rotatedPosition + T; // return the transformed position return position; } void main() { vec3 pos = position; vColor = vec4(color, alpha); // transform it transform(pos, vertexPos); gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 ); }` Triangle FragmentShader: `precision highp float; varying vec4 vColor; void main() { gl_FragColor = vColor; }` Will really appreciate any help on how to do it or suggestion of right direction for further 
investigations. Thank you!
Shader Z space perspective ShaderMaterial BufferGeometry
I'm changing the z coordinate vertices on my geometry but find that the Mesh Stays the same size, and I'm expecting it to get smaller. Tweening between vertex positions works as expected in X,Y space however. This is how I'm calculating my gl_Position by tweening the amplitude uniform in my render function: <script type="x-shader/x-vertex" id="vertexshader"> uniform float amplitude; uniform float direction; uniform vec3 cameraPos; uniform float time; attribute vec3 tweenPosition; varying vec2 vUv; void main() { vec3 pos = position; vec3 morphed = vec3( 0.0, 0.0, 0.0 ); morphed += ( tweenPosition - position ) * amplitude; morphed += pos; vec4 mvPosition = modelViewMatrix * vec4( morphed * vec3(1, -1, 0), 1.0 ); vUv = uv; gl_Position = projectionMatrix * mvPosition; } </script> I also tried something like this from calculating perspective on webglfundamentals: vec4 newPos = projectionMatrix * mvPosition; float zToDivideBy = 1.0 + newPos.z * 1.0; gl_Position = vec4(newPos.xyz, zToDivideBy); This is my loop to calculate another vertex set that I'm tweening between: for (var i = 0; i < positions.length; i++) { if ((i+1) % 3 === 0) { // subtracting from z coord of each vertex tweenPositions[i] = positions[i]- (Math.random() * 2000); } else { tweenPositions[i] = positions[i] } } I get the same results with this -- objects further away in Z-Space do not scale / attenuate / do anything different. What gives?
morphed * vec3(1, -1, 0) z is always zero in your code. [x,y,z] * [1,-1,0] = [x,-y,0]
Slow memory climb until crash in the GPU
I'm displaying a grid of particle clouds using shaders. Every time a user clicks a cloud, that cloud disappears and a new one takes its place. The curious thing is that the memory usage in the GPU climbs every time a new cloud replaces an old one - regardless of whether that new cloud is larger or smaller (and the buffer sizes always stay the same - the unused points are simply displayed offscreen with no color). After less than 10 clicks the GPU maxes out and crashes. Here is my physics shader where the new positions are updated - I pass in the new position values for the new cloud by updating certain values in the the tOffsets texture. After that are my two (vert and frag) visual effects shaders. Can you see my efficiency issue? Or could this be a garbage collection matter? - Thanks in advance! Physics Shader (frag only): // Physics shader: This shader handles the calculations to move the various points. The position values are rendered out to at texture that is passed to the next pair of shaders that add the sprites and opacity. // the tPositions sampler is added to this shader by Three.js's GPUCompute script uniform sampler2D tOffsets; uniform sampler2D tGridPositionsAndSeeds; uniform sampler2D tSelectionFactors; uniform float uPerMotifBufferDimension; uniform float uTime; uniform float uXOffW; ...noise functions omitted for brevity... 
void main() { vec2 uv = gl_FragCoord.xy / resolution.xy; vec4 offsets = texture2D( tOffsets, uv ).xyzw; float alphaMass = offsets.z; float cellIndex = offsets.w; if (cellIndex >= 0.0) { // this point will be rendered on screen float damping = 0.98; float texelSize = 1.0 / uPerMotifBufferDimension; vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferDimension)*texelSize, floor(cellIndex / uPerMotifBufferDimension)*texelSize ); perMotifUV += vec2(0.5*texelSize); vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw; float swapState = selectionFactors.x; vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw; vec2 noiseSeed = gridPosition.zw; vec4 nowPos; vec2 velocity; nowPos = texture2D( tPositions, uv ).xyzw; velocity = vec2(nowPos.z, nowPos.w); if ( swapState == 0.0 ) { // if no new position values are ready to be swapped in for this point nowPos = texture2D( tPositions, uv ).xyzw; velocity = vec2(nowPos.z, nowPos.w); } else { // if swapState == 1, this means new position values are ready to be swapped in for this point nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 ); velocity = vec2(0.0, 0.0); } ...physics calculations omitted for brevity... 
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y); // Write new position out to a texture for processing in the visual effects shader gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y); } else { // this point will not be rendered on screen // Write new position out off screen (all -1 cellIndexes have off-screen offset values) gl_FragColor = vec4( offsets.x, offsets.y, 0.0, 0.0); } From the physics shader the tPositions texture with the points' new movements is rendered out and passed to the visual effects shaders: Visual Effects Shader (vert): uniform sampler2D tPositions; // passed in from the Physics Shader uniform sampler2D tSelectionFactors; uniform float uPerMotifBufferDimension; uniform sampler2D uTextureSheet; uniform float uPointSize; uniform float uTextureCoordSizeX; uniform float uTextureCoordSizeY; attribute float aTextureIndex; attribute float aAlpha; attribute float aCellIndex; varying float vCellIndex; varying vec2 vTextureCoords; varying vec2 vTextureSize; varying float vAlpha; varying vec3 vColor; ...omitted noise functions for brevity... 
void main() { vec4 tmpPos = texture2D( tPositions, position.xy ); vec2 pos = tmpPos.xy; vec2 vel = tmpPos.zw; vCellIndex = aCellIndex; if (vCellIndex >= 0.0) { // this point will be rendered onscreen float texelSize = 1.0 / uPerMotifBufferDimension; vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferDimension)*texelSize, floor(aCellIndex / uPerMotifBufferDimension)*texelSize ); perMotifUV += vec2(0.5*texelSize); vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw; float aSelectedMotif = selectionFactors.x; float aColor = selectionFactors.y; float fadeFactor = selectionFactors.z; vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 ); vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY ); vAlpha = aAlpha * fadeFactor; vColor = vec3( 1.0, aColor, 1.0 ); gl_PointSize = uPointSize; } else { // this point will not be rendered onscreen vAlpha = 0.0; vColor = vec3(0.0, 0.0, 0.0); gl_PointSize = 0.0; } gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 ); } Visual Effects Shader (frag): uniform sampler2D tPositions; uniform sampler2D uTextureSheet; varying float vCellIndex; varying vec2 vTextureCoords; varying vec2 vTextureSize; varying float vAlpha; varying vec3 vColor; void main() { gl_FragColor = vec4( vColor, vAlpha ); if (vCellIndex >= 0.0) { // this point will be rendered onscreen, so add the texture vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize ); gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord ); } }
Thanks to #Blindman67's comment above, I sorted out the problem. It had nothing to do with the shaders. In the Javascript (Three.js) I needed to signal the GPU to delete old textures before adding the updated ones. Everytime I update a texture (most of mine are DataTextures) I need to call dispose() on the existing texture before creating and updating the new one, like so: var textureHandle; // holds a reference to the current texture uniform value textureHandle.dispose(); // ** deallocates GPU memory ** textureHandle = new THREE.DataTexture( textureData, dimension, dimension, THREE.RGBAFormat, THREE.FloatType ); textureHandle.needsUpdate = true; uniforms.textureHandle.value = textureHandle;
Artifacts from linear filtering a floating point texture in the fragment shader
I'm using the following code taken from this tutorial to perform linear filtering on a floating point texture in my fragment shader in WebGL: float fHeight = 512.0; float fWidth = 1024.0; float texelSizeX = 1.0/fWidth; float texelSizeY = 1.0/fHeight; float tex2DBiLinear( sampler2D textureSampler_i, vec2 texCoord_i ) { float p0q0 = texture2D(textureSampler_i, texCoord_i)[0]; float p1q0 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, 0))[0]; float p0q1 = texture2D(textureSampler_i, texCoord_i + vec2(0, texelSizeY))[0]; float p1q1 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX , texelSizeY))[0]; float a = fract( texCoord_i.x * fWidth ); // Get Interpolation factor for X direction. // Fraction near to valid data. float pInterp_q0 = mix( p0q0, p1q0, a ); // Interpolates top row in X direction. float pInterp_q1 = mix( p0q1, p1q1, a ); // Interpolates bottom row in X direction. float b = fract( texCoord_i.y * fHeight );// Get Interpolation factor for Y direction. return mix( pInterp_q0, pInterp_q1, b ); // Interpolate in Y direction. } On an Nvidia GPU this looks fine, but on two other computers with an Intel integrated GPU it looks like this: There are lighter or darker lines appearing that shouldn't be there. They become visible if you zoom in, and tend to get more frequent the more you zoom. When zooming in very closely, they appear at the edge of every texel of the texture I'm filtering. I tried changing the precision statement in the fragment shader, but this didn't fix it. The built-in linear filtering works on both GPUs, but I still need the manual filtering as a fallback for GPUs that don't support linear filtering on floating point textures with WebGL. The Intel GPUs are from a desktop Core i5-4460 and a notebook with an Intel HD 5500 GPU. For all precisions of floating point values I get a rangeMin and rangeMax of 127 and a precision of 23 from getShaderPrecisionFormat. Any idea on what causes these artifacts and how I can work around it? 
Edit: By experimenting a bit more I found that reducing the texel size variable in the fragment shader removes these artifacts: float texelSizeX = 1.0/fWidth*0.998; float texelSizeY = 1.0/fHeight*0.998; Multiplying by 0.999 isn't enough, but multiplying the texel size by 0.998 removes the artifacts. This is obviously not a satisfying fix, I still don't know what causes it and I probably caused artifacts on other GPUs or drivers now. So I'm still interested in figuring out what the actual issue is here.
It's not clear to me what the code is trying to do. It's not reproducing the GPU's bilinear filtering, because that uses pixels centered around the texcoord. In other words, as implemented,

```glsl
vec4 c = tex2DBiLinear(someSampler, someTexcoord);
```

is NOT equivalent to LINEAR

```glsl
vec4 c = texture2D(someSampler, someTexcoord);
```

`texture2D` looks at pixels `someTexcoord +/- texelSize * .5`, whereas `tex2DBiLinear` is looking at pixels `someTexcoord` and `someTexcoord + texelSize`.

You haven't given enough code to repro your issue. I'm guessing the size of the source texture is 512x1024, but since you didn't post that code I have no idea if your source texture matches the defined size. You also didn't post what size your target is. The top image you posted is 471x488; was that your target size? You also didn't post the texture coordinates you're using or the code that manipulates them.

Guessing that your source is 512x1024 and your target is 471x488, I can't repro your issue:

```javascript
const fs = `
precision highp float;

uniform sampler2D tex;
varying vec2 v_texcoord;

float tex2DBiLinear( sampler2D textureSampler_i, vec2 texCoord_i )
{
  float fHeight = 1024.0;
  float fWidth = 512.0;
  float texelSizeX = 1.0/fWidth;
  float texelSizeY = 1.0/fHeight;

  float p0q0 = texture2D(textureSampler_i, texCoord_i)[0];
  float p1q0 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, 0))[0];
  float p0q1 = texture2D(textureSampler_i, texCoord_i + vec2(0, texelSizeY))[0];
  float p1q1 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX , texelSizeY))[0];

  float a = fract( texCoord_i.x * fWidth ); // Interpolation factor for X direction.
  float pInterp_q0 = mix( p0q0, p1q0, a );  // Interpolates top row in X direction.
  float pInterp_q1 = mix( p0q1, p1q1, a );  // Interpolates bottom row in X direction.

  float b = fract( texCoord_i.y * fHeight ); // Interpolation factor for Y direction.
  return mix( pInterp_q0, pInterp_q1, b );   // Interpolate in Y direction.
}

void main() {
  gl_FragColor = vec4(tex2DBiLinear(tex, v_texcoord), 0, 0, 1);
}
`;
const vs = `
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texcoord;
void main() {
  gl_Position = position;
  v_texcoord = texcoord;
}
`;
const gl = document.querySelector('canvas').getContext('webgl');
// compile shaders, link programs, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for each array
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  position: {
    numComponents: 2,
    data: [
      -1, -1,
       1, -1,
      -1,  1,
       1,  1,
    ],
  },
  texcoord: [
    0, 0,
    1, 0,
    0, 1,
    1, 1,
  ],
  indices: [
    0, 1, 2,
    2, 1, 3,
  ],
});
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = 512;
ctx.canvas.height = 1024;
const gradient = ctx.createRadialGradient(256, 512, 0, 256, 512, 700);
gradient.addColorStop(0, 'red');
gradient.addColorStop(1, 'cyan');
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, 512, 1024);
const tex = twgl.createTexture(gl, {
  src: ctx.canvas,
  minMag: gl.NEAREST,
  wrap: gl.CLAMP_TO_EDGE,
  auto: false,
});
gl.useProgram(programInfo.program);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
```

```html
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas width="471" height="488"></canvas>
```

If you think the issue is related to floating point textures, I can't repro there either. The snippet is the same except it requires `OES_texture_float`, uploads the texture with `type: gl.FLOAT`, and logs the renderer:

```javascript
const gl = document.querySelector('canvas').getContext('webgl');
const ext = gl.getExtension('OES_texture_float');
if (!ext) {
  alert('need OES_texture_float');
}

// ... same program, buffer, and canvas setup as above ...

const tex = twgl.createTexture(gl, {
  src: ctx.canvas,
  type: gl.FLOAT,
  minMag: gl.NEAREST,
  wrap: gl.CLAMP_TO_EDGE,
  auto: false,
});

gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
twgl.drawBufferInfo(gl, bufferInfo);

const e = gl.getExtension('WEBGL_debug_renderer_info');
if (e) {
  console.log(gl.getParameter(e.UNMASKED_VENDOR_WEBGL));
  console.log(gl.getParameter(e.UNMASKED_RENDERER_WEBGL));
}
```

```html
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas width="471" height="488"></canvas>
```

If any of the values are off (if your source texture size doesn't match fWidth and fHeight, or your texture coordinates are different or adjusted in some way), then of course maybe I could repro. If any of those are different, I can imagine issues.

Tested on Intel Iris Pro and Intel HD Graphics 630, and also on an iPhone 6+. Note that you need to make sure your fragment shader runs with `precision highp float`, but that setting would likely only affect mobile GPUs.
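To make the half-texel difference concrete, here is a 1D CPU sketch (my own illustration, the names are invented) comparing GL-style LINEAR sampling with the question's scheme on the same data:

```javascript
const data = [10, 20, 30, 40]; // 1D "texture", one channel
const width = data.length;

const clamp = (i) => Math.min(width - 1, Math.max(0, i)); // CLAMP_TO_EDGE
const nearest = (u) => data[clamp(Math.floor(u * width))]; // NEAREST fetch

// What GL LINEAR does: shift by half a texel so the two texels whose
// *centers* straddle the sample point are blended.
function glLinear(u) {
  const t = u * width - 0.5;
  const i = Math.floor(t);
  const f = t - i;
  return data[clamp(i)] * (1 - f) + data[clamp(i + 1)] * f;
}

// 1D analogue of the question's tex2DBiLinear: NEAREST at u and at
// u + texelSize, blended by fract(u * width).
function tex1DBiLinear(u) {
  const f = (u * width) % 1;
  return nearest(u) * (1 - f) + nearest(u + 1 / width) * f;
}

// At the center of texel 1 (u = 0.375), LINEAR returns that texel's value
// exactly, while the question's version blends texels 1 and 2.
console.log(glLinear(0.375), tex1DBiLinear(0.375)); // 20 25
```

The two schemes agree only at texel edges; everywhere else they sample a window shifted by half a texel, which is why the manual version can never match LINEAR exactly.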
We had an almost identical issue that occurred at specific zoom levels of the texture. We found that the positions where artifacts appear can be detected with this condition:

```glsl
vec2 imagePosCenterity = fract(uv * imageSize);
if (abs(imagePosCenterity.x-0.5) < 0.001 || abs(imagePosCenterity.y-0.5) < 0.001) {}
```

where `imageSize` is the width and height of the texture. Our solution looks like this:

```glsl
vec4 texture2DLinear( sampler2D texSampler, vec2 uv) {
  vec2 pixelOff = vec2(0.5,0.5)/imageSize;
  vec2 imagePosCenterity = fract(uv * imageSize);
  if (abs(imagePosCenterity.x-0.5) < 0.001 || abs(imagePosCenterity.y-0.5) < 0.001) {
    pixelOff = pixelOff-vec2(0.00001,0.00001);
  }
  vec4 tl = texture2D(texSampler, uv + vec2(-pixelOff.x,-pixelOff.y));
  vec4 tr = texture2D(texSampler, uv + vec2( pixelOff.x,-pixelOff.y));
  vec4 bl = texture2D(texSampler, uv + vec2(-pixelOff.x, pixelOff.y));
  vec4 br = texture2D(texSampler, uv + vec2( pixelOff.x, pixelOff.y));
  vec2 f = fract( (uv.xy-pixelOff) * imageSize );
  vec4 tA = mix( tl, tr, f.x );
  vec4 tB = mix( bl, br, f.x );
  return mix( tA, tB, f.y );
}
```

It is a dirty solution, but it works. Changing texelSize as suggested above only moves the artifacts to other positions; we shrink the texel offset slightly only at the problematic positions.

Why use linear texture interpolation in a GLSL shader at all? Because we need a 1-sample-per-pixel, 16-bit-per-sample texture across a broad set of compatible devices, and with built-in filtering that is only possible with the OES_texture_half_float_linear extension. With this approach we can do it without the extension.
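For illustration, the answer's detection condition can be checked on the CPU (a JS port I wrote, assuming the same `imageSize` uniform): the nudge fires exactly where `uv` sits on a texel center, because there `uv +/- pixelOff` lands precisely on texel edges, the fragile case.

```javascript
const imageSize = { x: 1024, y: 1024 };

// JS port of: fract(uv * imageSize) within 0.001 of 0.5 on either axis.
function needsNudge(u, v) {
  const cx = (u * imageSize.x) % 1;
  const cy = (v * imageSize.y) % 1;
  return Math.abs(cx - 0.5) < 0.001 || Math.abs(cy - 0.5) < 0.001;
}

// A sample at a texel center in x (512.5 texels) triggers the nudge;
// a quarter-texel offset (512.25 texels) does not.
console.log(needsNudge(512.5 / 1024, 0.25));  // true
console.log(needsNudge(512.25 / 1024, 0.25)); // false
```

Note the 0.001 tolerance is itself a tuning constant; how wide the fragile band really is presumably depends on the GPU's float rounding, which is why the answer calls this a dirty fix.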