Flickering of THREE.Points based on camera position and texture coordinates, but only on Nvidia cards - three.js

I have a problem with flickering of THREE.Points depending on their UV coordinates, as seen in the following codepen: http://codepen.io/anon/pen/qrdQeY?editors=0010
The code in the codepen is condensed down as much as possible (171 lines),
but to summarize what I'm doing:
Rendering sprites using THREE.Points
BufferGeometry contains spritesheet index and position for each sprite
RawShaderMaterial with a custom vertex and fragment shader to look up the UV coordinates of the sprite for the given index
a 128x128px spritesheet with 4x4 cells contains the sprites
Here's the code:
/// FRAGMENT SHADER ===========================================================
const fragmentShader = `
precision highp float;
uniform sampler2D spritesheet;
// number of spritesheet subdivisions both vertically and horizontally
// e.g. for a 4x4 spritesheet this number is 4
uniform float spritesheetSubdivisions;
// vParams.x = sprite index
// vParams.z = sprite alpha
varying vec3 vParams;
/**
* Maps regular UV coordinates spanning the entire spritesheet
* to a specific sprite within the spritesheet based on the given index,
which points into a spritesheet cell (depending on spritesheetSubdivisions
* and assuming that the spritesheet is regular and square).
*/
vec2 spriteIndexToUV(float idx, vec2 uv) {
float cols = spritesheetSubdivisions;
float rows = spritesheetSubdivisions;
float x = mod(idx, cols);
float y = floor(idx / cols);
return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + (uv.y) / rows));
}
void main() {
vec2 uv = spriteIndexToUV(vParams.x, gl_PointCoord);
vec4 diffuse = texture2D(spritesheet, uv);
float alpha = diffuse.a * vParams.z;
if (alpha < 0.5) discard;
gl_FragColor = vec4(diffuse.xyz, alpha);
}
`
// VERTEX SHADER ==============================================================
const vertexShader = `
precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform float size;
uniform float scale;
attribute vec3 position;
attribute vec3 params; // x = sprite index, y = unused, z = sprite alpha
attribute vec3 color;
varying vec3 vParams;
void main() {
vParams = params;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_Position = projectionMatrix * mvPosition;
gl_PointSize = size * ( scale / - mvPosition.z );
}
`
// THREEJS CODE ===============================================================
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("#mycanvas")});
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setClearColor(0xf0f0f0)
const pointGeometry = new THREE.BufferGeometry()
pointGeometry.addAttribute("position", new THREE.BufferAttribute(new Float32Array([
-1.5, -1.5, 0,
-0.5, -1.5, 0,
0.5, -1.5, 0,
1.5, -1.5, 0,
-1.5, -0.5, 0,
-0.5, -0.5, 0,
0.5, -0.5, 0,
1.5, -0.5, 0,
-1.5, 0.5, 0,
-0.5, 0.5, 0,
0.5, 0.5, 0,
1.5, 0.5, 0,
-1.5, 1.5, 0,
-0.5, 1.5, 0,
0.5, 1.5, 0,
1.5, 1.5, 0,
]), 3))
pointGeometry.addAttribute("params", new THREE.BufferAttribute(new Float32Array([
0, 0, 1, // sprite index 0 (row 0, column 0)
1, 0, 1, // sprite index 1 (row 0, column 1)
2, 0, 1, // sprite index 2 (row 0, column 2)
3, 0, 1, // sprite index 3 (row 0, column 3)
4, 0, 1, // sprite index 4 (row 1, column 0)
5, 0, 1, // sprite index 5 (row 1, column 1)
6, 0, 1, // ...
7, 0, 1,
8, 0, 1,
9, 0, 1,
10, 0, 1,
11, 0, 1,
12, 0, 1,
13, 0, 1,
14, 0, 1,
15, 0, 1
]), 3))
const img = document.querySelector("img")
const texture = new THREE.TextureLoader().load(img.src);
const pointMaterial = new THREE.RawShaderMaterial({
transparent: true,
vertexShader: vertexShader,
fragmentShader: fragmentShader,
uniforms: {
spritesheet: {
type: "t",
value: texture
},
spritesheetSubdivisions: {
type: "f",
value: 4
},
size: {
type: "f",
value: 1
},
scale: {
type: "f",
value: window.innerHeight / 2
}
}
})
const points = new THREE.Points(pointGeometry, pointMaterial)
scene.add(points)
const render = function (timestamp) {
requestAnimationFrame(render);
camera.position.z = 5 + Math.sin(timestamp / 1000.0)
renderer.render(scene, camera);
};
render();
// resize viewport
window.addEventListener( 'resize', onWindowResize, false );
function onWindowResize(){
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize( window.innerWidth, window.innerHeight );
}
If you have an Nvidia card you will see three sprites flicker while the camera
is moving back and forth along the Z axis. On integrated Intel graphics chips
the problem does not occur.
I'm not sure how to solve this problem. The affected UV coordinates seem essentially random. I'd be grateful for any pointers.

The mod()/floor() calculations inside your spriteIndexToUV() function are causing problems in certain situations, namely when the sprite index is a multiple of spritesheetSubdivisions.
I could fix it by tweaking the cols variable with a small epsilon:
vec2 spriteIndexToUV(float idx, vec2 uv)
{
float cols = spritesheetSubdivisions - 1e-6; // subtract epsilon
float rows = spritesheetSubdivisions;
float x = mod(idx, cols);
float y = floor(idx / cols);
return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + (uv.y) / rows));
}
PS: That codepen stuff is really cool, didn't know that this existed :-)
edit: It might be even better/clearer to write it like this:
float cols = spritesheetSubdivisions;
float rows = spritesheetSubdivisions;
float y = floor ((idx+0.5) / cols);
float x = idx - cols * y;
That way, we stay totally clear of any critical values in the floor() operation, and we get rid of the mod() call as well.
As to why floor(idx / 4.0) sometimes produces 0 instead of 1 when idx should be exactly 4.0, I can only speculate that the varying vec3 vParams is subjected to some interpolation on its way from the vertex-shader to the fragment-shader stage, leading the fragment shader to see e.g. 3.999999 instead of exactly 4.0.
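If that interpolation theory is right, another defensive option (a sketch of mine, not from the codepen) is to snap the incoming index to the nearest integer in the fragment shader before doing any cell math:
vec2 spriteIndexToUV(float idx, vec2 uv)
{
float cols = spritesheetSubdivisions;
float rows = spritesheetSubdivisions;
// round e.g. 3.999999 back to 4.0 before indexing
float i = floor(idx + 0.5);
float y = floor((i + 0.5) / cols);
float x = i - cols * y;
return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + uv.y / rows));
}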

Related

Use 2 meshes + shader materials with each a different fragment shader in 1 scene (three.js)

I have 2 meshes, each with a ShaderMaterial and each with a different fragment shader. When I add both meshes to my scene, only one will show up. Below you can find my 2 fragment shaders (see both images to see what they look like). They're basically the same.
What I want to achieve: Use mesh1 as a mask and put the other one, mesh2 (purple blob) on top of the mask.
Purple blob:
// three.js code
const geometry1 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material1 = new THREE.ShaderMaterial({
uniforms: this.uniforms,
vertexShader,
fragmentShader,
defines: {
PR: window.devicePixelRatio.toFixed(1)
}
});
const mesh1 = new THREE.Mesh(geometry1, material1);
this.scene.add(mesh1);
// fragment shader
void main() {
vec2 res = u_res * PR;
vec2 st = gl_FragCoord.xy / res.xy - 0.5;
st.y *= u_res.y / u_res.x * 0.8;
vec2 circlePos = st;
float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
float offx = v_uv.x + sin(v_uv.y + u_time * .1);
float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
vec4 bg2 = vec4(0., 0., 0., 0.);
gl_FragColor = mix(bg, bg2, finalMask);
}
Blue mask
// three.js code
const geometry2 = new THREE.PlaneBufferGeometry(1, 1, 1, 1);
const material2 = new THREE.ShaderMaterial({
uniforms,
vertexShader,
fragmentShader,
defines: {
PR: window.devicePixelRatio.toFixed(1)
}
});
const mesh2 = new THREE.Mesh(geometry2, material2);
this.scene.add(mesh2);
// fragment shader
void main() {
vec2 res = u_res * PR;
vec2 st = gl_FragCoord.xy / res.xy - 0.5;
st.y *= u_res.y / u_res.x * 0.8;
vec2 circlePos = st;
float c = circle(circlePos, 0.2 + 0. * 0.1, 1.) * 2.5;
float offx = v_uv.x + sin(v_uv.y + u_time * .1);
float offy = v_uv.y * .1 - u_time * 0.005 - cos(u_time * .001) * .01;
float n = snoise3(vec3(offx, offy, .9) * 2.5) - 2.1;
float finalMask = smoothstep(1., 0.99, n + pow(c, 1.5));
vec4 bg = vec4(0.12, 0.07, 0.28, 1.0);
vec4 bg2 = vec4(0., 0., 0., 0.);
gl_FragColor = mix(bg, bg2, finalMask);
}
Render Target code
this.rtWidth = window.innerWidth;
this.rtHeight = window.innerHeight;
this.renderTarget = new THREE.WebGLRenderTarget(this.rtWidth, this.rtHeight);
this.rtCamera = new THREE.PerspectiveCamera(
this.camera.settings.fov,
this.camera.settings.aspect,
this.camera.settings.near,
this.camera.settings.far
);
this.rtCamera.position.set(0, 0, this.camera.settings.perspective);
this.rtScene = new THREE.Scene();
this.rtScene.add(this.purpleBlob);
const geometry = new THREE.PlaneGeometry(window.innerWidth, window.innerHeight, 1);
const material = new THREE.MeshPhongMaterial({
map: this.renderTarget.texture,
});
this.mesh = new THREE.Mesh(geometry, material);
this.scene.add(this.mesh);
I'm still new to shaders so please be patient. :-)
There are probably infinite ways to mask in three.js. Here are a few:
Use the stencil buffer
The stencil buffer is similar to the depth buffer in that, for every pixel in the canvas or render target, there is a corresponding stencil pixel. You need to tell three.js you want a stencil buffer, and then you can tell it what to do with the stencil buffer when you're drawing things.
You set the stencil settings on Material.
You tell three.js
what to do if the pixel you're drawing fails the stencil test
what to do if the pixel you're drawing fails the depth test
what to do if the pixel you're drawing passes the depth test.
The things you can tell it to do for each of those conditions are keep (do nothing), increment, decrement, increment wraparound, decrement wraparound, set to a specific value.
You can also specify what the stencil test is by setting Material.stencilFunc
So, for example, you can clear the stencil buffer to 0 (the default?), set the stencil test so it always passes, and set the conditions so that if the depth test passes, the stencil is set to 1. You then draw a bunch of things. Everywhere they are drawn there will now be a 1 in the stencil buffer.
Now you change the stencil test so it only passes if the stencil equals 1 (or 0) and then draw more stuff; things will only be drawn where the stencil equals the value you set.
This example uses the stencil buffer.
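A rough sketch of those settings using the stencil properties on three.js's Material (available in newer three.js releases; maskMesh and contentMesh are placeholder meshes, not from the linked example):
// the mask: writes 1 into the stencil wherever it renders, draws no color
const maskMaterial = new THREE.MeshBasicMaterial({ colorWrite: false });
maskMaterial.stencilWrite = true;
maskMaterial.stencilRef = 1;
maskMaterial.stencilFunc = THREE.AlwaysStencilFunc; // stencil test always passes
maskMaterial.stencilZPass = THREE.ReplaceStencilOp; // write ref where depth passes
// the masked content: only renders where the stencil equals 1
const contentMaterial = new THREE.MeshBasicMaterial({ color: 0x7700ff });
contentMaterial.stencilWrite = true;
contentMaterial.stencilRef = 1;
contentMaterial.stencilFunc = THREE.EqualStencilFunc;
// the mask must render first
maskMesh.renderOrder = 0;
contentMesh.renderOrder = 1;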
Mask with an alpha mask
In this case you need 2 color textures and an alpha texture. How you get those is up to you. For example you could load all 3 from images. Or you could generate all 3 using 3 render targets. Finally you pass all 3 to a shader that mixes them as in
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, valueFromAlphaTexture);
This example uses this alpha mixing method
Note that if one of your 2 color textures has an alpha channel you could use just 2 textures. You'd just pass one of the color textures as your mask.
Or of course you could calculate a mask based on the colors in one image or the other or both. For example
// assume you have function that converts from rgb to hue,saturation,value
vec3 hsv = rgb2hsv(colorFromTexture1.rgb);
float hue = hsv.x;
// pick one or the other if color1 is close to green
float mixAmount = step(abs(hue - 0.33), 0.05);
gl_FragColor = mix(colorFromTexture1, colorFromTexture2, mixAmount);
The point here is not that exact code, it's that you can make any formula you want for the mask, based on whatever you want, color, position, random math, sine waves based on time, some formula that generates a blob, whatever. The most common is some code that just looks up a mixAmount from a texture which is what the linked example above does.
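For that common texture-lookup case, a minimal fragment shader sketch (the uniform and varying names here are made up):
precision mediump float;
uniform sampler2D colorTex1; // first color texture
uniform sampler2D colorTex2; // second color texture
uniform sampler2D maskTex; // mask texture; red channel used as the mix amount
varying vec2 vUv;
void main() {
vec4 color1 = texture2D(colorTex1, vUv);
vec4 color2 = texture2D(colorTex2, vUv);
float mixAmount = texture2D(maskTex, vUv).r;
gl_FragColor = mix(color1, color2, mixAmount);
}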
ShaderToy style
Your code above appears to be a shadertoy style shader which is drawing a fullscreen quad. Instead of drawing 2 separate things you can just draw them in the same shader
vec4 computeBlueBlob() {
...
return blueBlobColor;
}
vec4 computeWhiteBlob() {
...
return whiteBlobColor;
}
void main() {
vec4 color1 = computeBlueBlob();
vec4 color2 = computeWhiteBlob();
float mixAmount = color2.a; // note: color2.a could be any
// formula to decide which colors
// to draw
gl_FragColor = mix(color1, color2, mixAmount);
}
Note, just like above, that how you compute mixAmount is up to you. Base it on anything: color1.r, color2.r, some formula, some hue, some other blob-generation function, whatever.

cocos2dx shader rotate a shape in fragment shader

This problem is cocos2d-x related since I am using cocos2d-x as my game engine, but I think it can be solved with basic OpenGL shader knowledge.
Part 1:
I have a canvas size of 800 * 600
I try to draw a simple colored square, 96 * 96 in size, placed in the middle of the canvas
It is quite simple, the draw part code :
var boundingBox = this.getBoundingBox();
var squareVertexPositionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
var vertices = [
boundingBox.width, boundingBox.height,
0, boundingBox.height,
boundingBox.width, 0,
0, 0
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.enableVertexAttribArray(cc.VERTEX_ATTRIB_POSITION);
gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
gl.vertexAttribPointer(cc.VERTEX_ATTRIB_POSITION, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
And the vert shader:
attribute vec4 a_position;
void main()
{
gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
}
And the frag shader:
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 center;
uniform vec2 resolution;
uniform float rotation;
void main()
{
vec4 RED = vec4(1.0, 0.0, 0.0, 1.0);
vec4 GREEN = vec4(0.0, 1.0, 0.0, 1.0);
gl_FragColor = GREEN;
}
And everything works fine :
The grid lines are 32 * 32, and the black dot indicates the center of the canvas.
Part 2:
I try to split the square in half vertically
The left half is green and the right half is red
I changed the frag shader to get it done :
void main()
{
vec4 RED = vec4(1.0, 0.0, 0.0, 1.0);
vec4 GREEN = vec4(0.0, 1.0, 0.0, 1.0);
/*
x => [0, 1]
y => [0, 1]
*/
vec2 UV = (gl_FragCoord.xy - center.xy + resolution.xy / 2.0) / resolution.xy;
/*
x => [-1, 1]
y => [-1, 1]
*/
vec2 POS = -1.0 + 2.0 * UV;
if (POS.x <= 0.0) {
gl_FragColor = GREEN;
}
else {
gl_FragColor = RED;
}
}
The uniform 'center' is the position of the square so it is 400, 300 in this case.
The uniform 'resolution' is the content size of the square so the value is 96, 96.
The result is fine :
Part 3:
I try to change the rotation in cocos2d-x style
myShaderNode.setRotation(45);
And the square is rotated but the content is not :
So I tried to rotate the content according to the rotation angle of the node.
I changed the frag shader again:
void main()
{
vec4 RED = vec4(1.0, 0.0, 0.0, 1.0);
vec4 GREEN = vec4(0.0, 1.0, 0.0, 1.0);
vec2 rotatedFragCoord = gl_FragCoord.xy - center.xy;
float cosa = cos(rotation);
float sina = sin(rotation);
float t = rotatedFragCoord.x;
rotatedFragCoord.x = t * cosa - rotatedFragCoord.y * sina + center.x;
rotatedFragCoord.y = t * sina + rotatedFragCoord.y * cosa + center.y;
/*
x => [0, 1]
y => [0, 1]
*/
vec2 UV = (rotatedFragCoord.xy - center.xy + resolution.xy / 2.0) / resolution.xy;
/*
x => [-1, 1]
y => [-1, 1]
*/
vec2 POS = -1.0 + 2.0 * UV;
if (POS.x <= 0.0) {
gl_FragColor = GREEN;
}
else {
gl_FragColor = RED;
}
}
The uniform rotation is the angle the node rotated so in this case it is 45.
The result is close to what I want but still not right:
I tried hard but just cannot figure out what is wrong with my code, or whether there is an easier way to get this done.
I am quite new to shader programming and any advice will be appreciated, thanks :)
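One detail worth checking in a setup like this (an observation, not a confirmed fix): cocos2d-x's setRotation() takes degrees, while GLSL's sin() and cos() expect radians, so the rotation uniform would need converting before the trig calls, roughly:
// sketch: convert the degree-based node rotation to radians before using it;
// the sign may also need flipping, since a positive cocos2d-x rotation is clockwise
float angleRad = rotation * 3.14159265 / 180.0;
float cosa = cos(angleRad);
float sina = sin(angleRad);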

Odd results from shaders used to pre-process spring physics simulation

I'm doing a spring physics simulation using 2D samplers to house and pre-process some position data in a fragment shader, and getting very odd results. If I start with 16 individually located springs (a point at the end of an invisible spring originating from an invisible anchor), the visualization ends up with eight pairs, each pair hanging from the same spring anchor point. However, if I simply run the visualization to place the points using only the tOffsets values, all the information to calculate each of the anchor points is there and displays correctly (though with no physics, of course). It's once I add back in the spring physics that I end up with pairs again. Also, from watching the visualization, I can tell that the pairs' anchor-point values are not any of the original 16 anchor-point values. Any idea what's going on here? (See both the fiddle and the starred inline comments below.)
(three.js v 80)
See the fiddle using v79 here.
uniform sampler2D tPositions;
uniform sampler2D tOffsets;
varying vec2 vUv;
void main() {
float damping = 0.98;
vec4 nowPos = texture2D( tPositions, vUv ).xyzw;
vec4 offsets = texture2D( tOffsets, vUv ).xyzw;
vec2 velocity = vec2(nowPos.z, nowPos.w);
vec2 anchor = vec2( offsets.x, 130.0 );
// Newton's law: F = M * A
float mass = 24.0;
vec2 acceleration = vec2(0.0, 0.0);
// 1. apply gravity's force: **this works fine
vec2 gravity = vec2(0.0, 2.0);
gravity /= mass;
acceleration += gravity;
// 2. apply the spring force ** something goes wrong once I add the spring physics - the springs display in pairs
float restLength = length(anchor.y - offsets.y);
float springConstant = 0.2;
// Vector pointing from anchor to point position
vec2 springForce = vec2(nowPos.x - anchor.x, nowPos.y - anchor.y);
// length of the vector
float distance = length( springForce );
// stretch is the difference between the current distance and restLength
float stretch = distance - restLength;
// Calculate springForce according to Hooke's Law
springForce = normalize( springForce );
springForce *= (1.0 * springConstant * stretch);
springForce /= mass;
acceleration += springForce; // ** If I comment out this line, all points display where expected, and fall according to gravity. If I add it it back in the springs work properly but display in 8 pairs as opposed to 16 independent locations
velocity += acceleration;
velocity *= damping;
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
// Write new position out to texture for the next shader
gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y); // **the pair problem shows up with this line active
// sanity checks with comments:
// gl_FragColor = vec4(newPosition.x, newPosition.y, 0.0, 0.0); // **the pair problem also shows up in this case
// gl_FragColor = vec4( offsets.x, offsets.y, velocity ); // **all points display in the correct position, though no physics
// gl_FragColor = vec4(nowPos.x, nowPos.y, 0.0, 0.0); // **all points display in the correct position, though no physics
UPDATE 1:
Could the problem be with the number of values (rgba, xyzw) agreeing between all of the pieces of my program? I've specified rgba values wherever I can think to, but perhaps I've missed somewhere. Here is a snippet from my javascript:
if ( ! renderer.context.getExtension( 'OES_texture_float' ) ) {
alert( 'OES_texture_float is not supported :(' );
}
var width = 4, height = 4;
particles = width * height;
// Start creation of DataTexture
var positions = new Float32Array( particles * 4 );
var offsets = new Float32Array( particles * 4 );
// hardcoded dummy values for the sake of debugging:
var somePositions = [10.885510444641113, -6.274578094482422, 0, 0, -10.12020206451416, 0.8196354508399963, 0, 0, 35.518341064453125, -5.810637474060059, 0, 0, 3.7696402072906494, -3.118760347366333, 0, 0, 9.090447425842285, -7.851400375366211, 0, 0, -32.53229522705078, -26.4628849029541, 0, 0, 32.3623046875, 22.746187210083008, 0, 0, 7.844726085662842, -15.305091857910156, 0, 0, -32.65345001220703, 22.251712799072266, 0, 0, -25.811357498168945, 32.4153938293457, 0, 0, -28.263731002807617, -31.015430450439453, 0, 0, 2.0903847217559814, 1.7632032632827759, 0, 0, -4.471604347229004, 8.995194435119629, 0, 0, -12.317420959472656, 12.19576358795166, 0, 0, 36.77312469482422, -14.580523490905762, 0, 0, 36.447078704833984, -16.085195541381836, 0, 0];
for ( var i = 0, i4 = 0; i < particles; i ++, i4 +=4 ) {
positions[ i4 + 0 ] = somePositions[ i4 + 0 ]; // x
positions[ i4 + 1 ] = somePositions[ i4 + 1 ]; // y
positions[ i4 + 2 ] = 0.0; // velocity
positions[ i4 + 3 ] = 0.0; // velocity
offsets[ i4 + 0 ] = positions[ i4 + 0 ];// - gridPositions[ i4 + 0 ]; // width offset
offsets[ i4 + 1 ] = positions[ i4 + 1 ];// - gridPositions[ i4 + 1 ]; // height offset
offsets[ i4 + 2 ] = 0; // not used
offsets[ i4 + 3 ] = 0; // not used
}
positionsTexture = new THREE.DataTexture( positions, width, height, THREE.RGBAFormat, THREE.FloatType );
positionsTexture.minFilter = THREE.NearestFilter;
positionsTexture.magFilter = THREE.NearestFilter;
positionsTexture.needsUpdate = true;
offsetsTexture = new THREE.DataTexture( offsets, width, height, THREE.RGBAFormat, THREE.FloatType );
offsetsTexture.minFilter = THREE.NearestFilter;
offsetsTexture.magFilter = THREE.NearestFilter;
offsetsTexture.needsUpdate = true;
rtTexturePos = new THREE.WebGLRenderTarget(width, height, {
wrapS:THREE.RepeatWrapping,
wrapT:THREE.RepeatWrapping,
minFilter: THREE.NearestFilter,
magFilter: THREE.NearestFilter,
format: THREE.RGBAFormat,
type:THREE.FloatType,
stencilBuffer: false
});
rtTexturePos2 = rtTexturePos.clone();
simulationShader = new THREE.ShaderMaterial({
uniforms: {
tPositions: { type: "t", value: positionsTexture },
tOffsets: { type: "t", value: offsetsTexture },
},
vertexShader: document.getElementById('texture_vertex_simulation_shader').textContent,
fragmentShader: document.getElementById('texture_fragment_simulation_shader').textContent
});
fboParticles = new THREE.FBOUtils( width, renderer, simulationShader );
fboParticles.renderToTexture(rtTexturePos, rtTexturePos2);
fboParticles.in = rtTexturePos;
fboParticles.out = rtTexturePos2;
UPDATE 2:
Perhaps the problem has to do with how the texels are being read from these textures? Somehow it may be reading between two texels, and so coming up with an averaged position shared by two springs? Is this possible? If so, where would I look to fix it?
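If between-texel reads are the suspicion, a standard GPGPU guard (a sketch, assuming a vec2 uniform holding the simulation texture size, here called resolution) is to snap the lookup coordinate to the exact texel center:
// hypothetical guard: force every lookup to land exactly on a texel center
vec2 centeredUv = (floor(vUv * resolution) + 0.5) / resolution;
vec4 nowPos = texture2D( tPositions, centeredUv ).xyzw;
vec4 offsets = texture2D( tOffsets, centeredUv ).xyzw;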
I never discovered the problem with the fiddle in my question above; however, I did eventually find the newer version of the THREE.FBOUtils script I was using above - it is now called THREE.GPUComputationRenderer. After implementing it, my script finally worked!
For those who find themselves trying to solve a similar problem, here is the new and improved fiddle using the GPUComputationRenderer in place of the old FBOUtils.
Here, from the script documentation, is a basic use case of GPUComputationRenderer:
//Initialization...
// Create computation renderer
var gpuCompute = new GPUComputationRenderer( 1024, 1024, renderer );
// Create initial state float textures
var pos0 = gpuCompute.createTexture();
var vel0 = gpuCompute.createTexture();
// and fill in here the texture data...
// Add texture variables
var velVar = gpuCompute.addVariable( "textureVelocity", fragmentShaderVel, pos0 );
var posVar = gpuCompute.addVariable( "texturePosition", fragmentShaderPos, vel0 );
// Add variable dependencies
gpuCompute.setVariableDependencies( velVar, [ velVar, posVar ] );
gpuCompute.setVariableDependencies( posVar, [ velVar, posVar ] );
// Add custom uniforms
velVar.material.uniforms.time = { value: 0.0 };
// Check for completeness
var error = gpuCompute.init();
if ( error !== null ) {
console.error( error );
}
// In each frame...
// Compute!
gpuCompute.compute();
// Update texture uniforms in your visualization materials with the gpu renderer output
myMaterial.uniforms.myTexture.value = gpuCompute.getCurrentRenderTarget( posVar ).texture;
// Do your rendering
renderer.render( myScene, myCamera );

Artifacts from linear filtering a floating point texture in the fragment shader

I'm using the following code taken from this tutorial to perform linear filtering on a floating point texture in my fragment shader in WebGL:
float fHeight = 512.0;
float fWidth = 1024.0;
float texelSizeX = 1.0/fWidth;
float texelSizeY = 1.0/fHeight;
float tex2DBiLinear( sampler2D textureSampler_i, vec2 texCoord_i )
{
float p0q0 = texture2D(textureSampler_i, texCoord_i)[0];
float p1q0 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, 0))[0];
float p0q1 = texture2D(textureSampler_i, texCoord_i + vec2(0, texelSizeY))[0];
float p1q1 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX , texelSizeY))[0];
float a = fract( texCoord_i.x * fWidth ); // Get Interpolation factor for X direction.
// Fraction near to valid data.
float pInterp_q0 = mix( p0q0, p1q0, a ); // Interpolates top row in X direction.
float pInterp_q1 = mix( p0q1, p1q1, a ); // Interpolates bottom row in X direction.
float b = fract( texCoord_i.y * fHeight );// Get Interpolation factor for Y direction.
return mix( pInterp_q0, pInterp_q1, b ); // Interpolate in Y direction.
}
On an Nvidia GPU this looks fine, but on two other computers with an Intel integrated GPU it looks like this:
There are lighter or darker lines appearing that shouldn't be there. They become visible if you zoom in, and tend to get more frequent the more you zoom. When zooming in very closely, they appear at the edge of every texel of the texture I'm filtering. I tried changing the precision statement in the fragment shader, but this didn't fix it.
The built-in linear filtering works on both GPUs, but I still need the manual filtering as a fallback for GPUs that don't support linear filtering on floating point textures with WebGL.
The Intel GPUs are from a desktop Core i5-4460 and a notebook with an Intel HD 5500 GPU. For all precisions of floating point values I get a rangeMin and rangeMax of 127 and a precision of 23 from getShaderPrecisionFormat.
Any idea on what causes these artifacts and how I can work around it?
Edit:
By experimenting a bit more I found that reducing the texel size variable in the fragment shader removes these artifacts:
float texelSizeX = 1.0/fWidth*0.998;
float texelSizeY = 1.0/fHeight*0.998;
Multiplying by 0.999 isn't enough, but multiplying the texel size by 0.998 removes the artifacts.
This is obviously not a satisfying fix; I still don't know what causes the artifacts, and I have probably introduced artifacts on other GPUs or drivers now. So I'm still interested in figuring out what the actual issue is here.
It's not clear to me what the code is trying to do. It's not reproducing the GPU's bilinear because that would be using pixels centered around the texcoord.
In other words, as implemented
vec4 c = tex2DBiLinear(someSampler, someTexcoord);
is NOT equivalent to LINEAR
vec4 c = texture2D(someSampler, someTexcoord);
texture2D looks at pixels someTexcoord +/- texelSize * .5 whereas tex2DBiLinear is looking at pixels someTexcoord and someTexcoord + texelSize
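For reference, a sketch of a texel-center version that samples where LINEAR samples (reusing the fWidth, fHeight, and texelSize definitions from the question):
float tex2DBiLinearCentered( sampler2D textureSampler_i, vec2 texCoord_i )
{
vec2 texSize = vec2(fWidth, fHeight);
// shift into texel space by half a texel so the four taps surround texCoord_i
vec2 st = texCoord_i * texSize - 0.5;
vec2 base = (floor(st) + 0.5) / texSize; // center of the lower-left texel
vec2 f = fract(st); // interpolation factors
float p0q0 = texture2D(textureSampler_i, base)[0];
float p1q0 = texture2D(textureSampler_i, base + vec2(texelSizeX, 0.0))[0];
float p0q1 = texture2D(textureSampler_i, base + vec2(0.0, texelSizeY))[0];
float p1q1 = texture2D(textureSampler_i, base + vec2(texelSizeX, texelSizeY))[0];
return mix( mix(p0q0, p1q0, f.x), mix(p0q1, p1q1, f.x), f.y );
}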
You haven't given enough code to repro your issue. I'm guessing the size of the source texture is 512x1024, but since you didn't post that code I have no idea if your source texture matches the defined size. You also didn't post your target size. The top image you posted is 471x488; was that your target size? You also didn't post the texture coordinates you're using or the code that manipulates them.
Guessing that your source is 512x1024 and your target is 471x488, I can't repro your issue.
const fs = `
precision highp float;
uniform sampler2D tex;
varying vec2 v_texcoord;
float tex2DBiLinear( sampler2D textureSampler_i, vec2 texCoord_i )
{
float fHeight = 1024.0;
float fWidth = 512.0;
float texelSizeX = 1.0/fWidth;
float texelSizeY = 1.0/fHeight;
float p0q0 = texture2D(textureSampler_i, texCoord_i)[0];
float p1q0 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, 0))[0];
float p0q1 = texture2D(textureSampler_i, texCoord_i + vec2(0, texelSizeY))[0];
float p1q1 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX , texelSizeY))[0];
float a = fract( texCoord_i.x * fWidth ); // Get Interpolation factor for X direction.
// Fraction near to valid data.
float pInterp_q0 = mix( p0q0, p1q0, a ); // Interpolates top row in X direction.
float pInterp_q1 = mix( p0q1, p1q1, a ); // Interpolates bottom row in X direction.
float b = fract( texCoord_i.y * fHeight );// Get Interpolation factor for Y direction.
return mix( pInterp_q0, pInterp_q1, b ); // Interpolate in Y direction.
}
void main() {
gl_FragColor = vec4(tex2DBiLinear(tex, v_texcoord), 0, 0, 1);
}
`;
const vs = `
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texcoord;
void main() {
gl_Position = position;
v_texcoord = texcoord;
}
`;
const gl = document.querySelector('canvas').getContext('webgl');
// compile shaders, link programs, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for each array
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: [
-1, -1,
1, -1,
-1, 1,
1, 1,
],
},
texcoord: [
0, 0,
1, 0,
0, 1,
1, 1,
],
indices: [
0, 1, 2,
2, 1, 3,
],
});
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = 512;
ctx.canvas.height = 1024;
const gradient = ctx.createRadialGradient(256, 512, 0, 256, 512, 700);
gradient.addColorStop(0, 'red');
gradient.addColorStop(1, 'cyan');
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, 512, 1024);
const tex = twgl.createTexture(gl, {
src: ctx.canvas,
minMag: gl.NEAREST,
wrap: gl.CLAMP_TO_EDGE,
auto: false,
});
gl.useProgram(programInfo.program);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas width="471" height="488"></canvas>
If you think the issue is related to floating point textures, I can't repro there either:
const fs = `
precision highp float;
uniform sampler2D tex;
varying vec2 v_texcoord;
float tex2DBiLinear( sampler2D textureSampler_i, vec2 texCoord_i )
{
float fHeight = 1024.0;
float fWidth = 512.0;
float texelSizeX = 1.0/fWidth;
float texelSizeY = 1.0/fHeight;
float p0q0 = texture2D(textureSampler_i, texCoord_i)[0];
float p1q0 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX, 0))[0];
float p0q1 = texture2D(textureSampler_i, texCoord_i + vec2(0, texelSizeY))[0];
float p1q1 = texture2D(textureSampler_i, texCoord_i + vec2(texelSizeX , texelSizeY))[0];
float a = fract( texCoord_i.x * fWidth ); // Get Interpolation factor for X direction.
// Fraction near to valid data.
float pInterp_q0 = mix( p0q0, p1q0, a ); // Interpolates top row in X direction.
float pInterp_q1 = mix( p0q1, p1q1, a ); // Interpolates bottom row in X direction.
float b = fract( texCoord_i.y * fHeight );// Get Interpolation factor for Y direction.
return mix( pInterp_q0, pInterp_q1, b ); // Interpolate in Y direction.
}
void main() {
gl_FragColor = vec4(tex2DBiLinear(tex, v_texcoord), 0, 0, 1);
}
`;
const vs = `
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texcoord;
void main() {
gl_Position = position;
v_texcoord = texcoord;
}
`;
const gl = document.querySelector('canvas').getContext('webgl');
const ext = gl.getExtension('OES_texture_float');
if (!ext) { alert('need OES_texture_float'); }
// compile shaders, link programs, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for each array
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: [
-1, -1,
1, -1,
-1, 1,
1, 1,
],
},
texcoord: [
0, 0,
1, 0,
0, 1,
1, 1,
],
indices: [
0, 1, 2,
2, 1, 3,
],
});
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = 512;
ctx.canvas.height = 1024;
const gradient = ctx.createRadialGradient(256, 512, 0, 256, 512, 700);
gradient.addColorStop(0, 'red');
gradient.addColorStop(1, 'cyan');
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, 512, 1024);
const tex = twgl.createTexture(gl, {
src: ctx.canvas,
type: gl.FLOAT,
minMag: gl.NEAREST,
wrap: gl.CLAMP_TO_EDGE,
auto: false,
});
gl.useProgram(programInfo.program);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
const e = gl.getExtension('WEBGL_debug_renderer_info');
if (e) {
console.log(gl.getParameter(e.UNMASKED_VENDOR_WEBGL));
console.log(gl.getParameter(e.UNMASKED_RENDERER_WEBGL));
}
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas width="471" height="488"></canvas>
If any of the values are off, if your source texture size doesn't match fWidth and fHeight, or if your texture coordinates are different or adjusted in some way, then of course maybe I could repro it. If any of those are different then I can imagine issues.
Tested on Intel Iris Pro and Intel HD Graphics 630, and also on an iPhone 6+. Note that you need to make sure your fragment shader is running with precision highp float, though that setting would likely only matter on mobile GPUs.
We had an almost identical issue that occurred at a specific zoom of the texture. We found out that the positions where artifacts appear can be detected with these conditions:
vec2 imagePosCenterity = fract(uv * imageSize);
if (abs(imagePosCenterity.x-0.5) < 0.001 || abs(imagePosCenterity.y-0.5) < 0.001) {}
where imageSize is the width and height of the texture.
Our solution looks like this:
vec4 texture2DLinear( sampler2D texSampler, vec2 uv) {
vec2 pixelOff = vec2(0.5,0.5)/imageSize;
vec2 imagePosCenterity = fract(uv * imageSize);
if (abs(imagePosCenterity.x-0.5) < 0.001 || abs(imagePosCenterity.y-0.5) < 0.001) {
pixelOff = pixelOff-vec2(0.00001,0.00001);
}
vec4 tl = texture2D(texSampler, uv + vec2(-pixelOff.x,-pixelOff.y));
vec4 tr = texture2D(texSampler, uv + vec2(pixelOff.x,-pixelOff.y));
vec4 bl = texture2D(texSampler, uv + vec2(-pixelOff.x,pixelOff.y));
vec4 br = texture2D(texSampler, uv + vec2(pixelOff.x,pixelOff.y));
vec2 f = fract( (uv.xy-pixelOff) * imageSize );
vec4 tA = mix( tl, tr, f.x );
vec4 tB = mix( bl, br, f.x );
return mix( tA, tB, f.y );
}
It is a really dirty solution but it works. Changing texelSize as suggested above only moves the artifacts to other positions; we change texelSize a little bit only at the problematic positions.
Why are we using linear texture interpolation in a GLSL shader at all? Because we need a one-sample-per-pixel, 16-bits-per-sample texture that works with a broad set of compatible devices, and natively that is only possible with the OES_texture_half_float_linear extension. With this approach it is possible to do it without the extension.

Working around gl_PointSize limitations in three.js / webGL

I'm using three.js to create an interactive data visualisation. This visualisation involves rendering 68000 nodes, where each node has a different size and color.
Initially I tried to do this by rendering meshes, but that proved to be very expensive. My current attempt is to use a three.js particle system, with each point being a node in the visualisation.
I can control the color and size of each point, but only up to a limit. On my card, the maximum size for a GL point seems to be 63 pixels. As I zoom in to the visualisation, the points get larger, up to a point, and then remain at 63 pixels.
I'm using a vertex & fragment shader currently:
vertex shader:
attribute float size;
attribute vec3 ca;
varying vec3 vColor;
void main() {
vColor = ca;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_PointSize = size * ( 300.0 / length( mvPosition.xyz ) );
gl_Position = projectionMatrix * mvPosition;
}
Fragment shader:
uniform vec3 color;
uniform sampler2D texture;
varying vec3 vColor;
void main() {
gl_FragColor = vec4( color * vColor, 1.0 );
gl_FragColor = gl_FragColor * texture2D( texture, gl_PointCoord );
}
These are copied almost verbatim from one of the three.js examples.
I'm totally new to GLSL, but I'm looking for a way to draw points larger than 63 pixels. Can I do something like draw a mesh for any points larger than a certain size, but use a gl_point otherwise? Are there any other work-arounds I can use to draw points larger than 63 pixels?
You can make your own point system by making arrays of unit quads plus the center point, then expanding by size in GLSL.
So, you'd have 2 buffers. One buffer is just a 2D unitQuad repeated for however many points you want to draw.
var unitQuads = new Float32Array([
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
-0.5, 0.5, 0.5, 0.5, -0.5, -0.5, 0.5, -0.5,
]);
The second one is your points, except each position needs to be repeated 4 times:
var points = new Float32Array([
p1.x, p1.y, p1.z, p1.x, p1.y, p1.z, p1.x, p1.y, p1.z, p1.x, p1.y, p1.z,
p2.x, p2.y, p2.z, p2.x, p2.y, p2.z, p2.x, p2.y, p2.z, p2.x, p2.y, p2.z,
p3.x, p3.y, p3.z, p3.x, p3.y, p3.z, p3.x, p3.y, p3.z, p3.x, p3.y, p3.z,
p4.x, p4.y, p4.z, p4.x, p4.y, p4.z, p4.x, p4.y, p4.z, p4.x, p4.y, p4.z,
p5.x, p5.y, p5.z, p5.x, p5.y, p5.z, p5.x, p5.y, p5.z, p5.x, p5.y, p5.z,
]);
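A small helper sketch (not from the answer) that expands an array of point objects into that repeated layout:
function expandPoints(points) {
// 4 quad vertices per point, 3 components per vertex
var out = new Float32Array(points.length * 4 * 3);
points.forEach(function(p, i) {
for (var v = 0; v < 4; ++v) {
out.set([p.x, p.y, p.z], (i * 4 + v) * 3);
}
});
return out;
}
var points = expandPoints([p1, p2, p3, p4, p5]);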
Set up your buffers and attributes:
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, unitQuads, gl.STATIC_DRAW);
gl.enableVertexAttribArray(unitQuadLoc);
gl.vertexAttribPointer(unitQuadLoc, 2, gl.FLOAT, false, 0, 0);
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);
gl.enableVertexAttribArray(pointLoc);
gl.vertexAttribPointer(pointLoc, 3, gl.FLOAT, false, 0, 0);
In your GLSL shader, compute the gl_PointSize you want, then multiply the unitQuad by that size in view space or screen space. Screen space would match what gl_PointSize does, but often people want their points to scale in 3D like normal objects, in which case view space is what you want.
attribute vec2 a_unitQuad;
attribute vec4 a_position;
uniform mat4 u_view;
uniform mat4 u_viewProjection;
void main() {
float fake_gl_pointsize = 150.0;
// Get the xAxis and yAxis in view space
// these are unit vectors so they represent moving perpendicular to the view.
vec3 x_axis = u_view[0].xyz;
vec3 y_axis = u_view[1].xyz;
// multiply them by the desired size
x_axis *= fake_gl_pointsize;
y_axis *= fake_gl_pointsize;
// multiply them by the unitQuad to make a quad around the origin
vec3 local_point = vec3(x_axis * a_unitQuad.x + y_axis * a_unitQuad.y);
// add in the position you actually want the quad.
local_point += a_position.xyz;
// now do the normal math you'd do in a shader.
gl_Position = u_viewProjection * vec4(local_point, 1.0);
}
I'm not sure that made any sense, but there's a more complicated yet working sample here.
Can I do something like draw a mesh for any points larger than a certain size, but use a gl_point otherwise?
Not in WebGL.
You can draw your particle system as a series of quads (i.e. two triangles), but that's about it.
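If you go the quad route, an index buffer turns each 4-vertex quad into two triangles; a hypothetical helper, assuming the 4-vertices-per-point layout from the first answer:
function quadIndices(numPoints) {
// Uint32 because 68000 points * 4 vertices exceeds the Uint16 range;
// WebGL1 needs the OES_element_index_uint extension for 32-bit indices
var indices = new Uint32Array(numPoints * 6);
for (var i = 0; i < numPoints; ++i) {
var o = i * 4;
// two triangles per quad: (0, 1, 2) and (2, 1, 3)
indices.set([o, o + 1, o + 2, o + 2, o + 1, o + 3], i * 6);
}
return indices;
}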
