Vertex Displacement Doesn't Work in Three.js - opengl-es

I've spent the last week experimenting with Three.js and WebRTC and feel like I've exhausted the documentation on this subject. I'm trying to feed a uniform sampler2D tDiffuse; into this vertex shader: the brightness of each pixel in tDiffuse should map to a vertex displacement on each vertex of the output. But I get the following error: ERROR: 0:80: 'constructor' : not enough data provided for construction
Right now, this shader sits in the effects pipeline after the model is rendered. Do I need to specify a width and height, or am I missing something else? Is there something wrong with my code, which I've cobbled together from a few different sources? Can I even do vertex displacement in this effects pipeline, or do I need to apply the shader differently, to the mesh in my scene? I understand the theory behind what I need to do, but GLSL, and this three.js pipeline in particular, are new to me, although I have lots of experience with similar graphical applications.
THREE.RuttEtraShader = {

    uniforms: {
        "tDiffuse": { type: "t", value: null },
        "opacity":  { type: "f", value: 1.0 }
    },

    vertexShader: [
        'uniform sampler2D tDiffuse;',
        'varying vec3 vColor;',
        'varying vec2 vUv;',
        'void main() {',
        '    vec4 newVertexPos;',
        '    vec4 dv;',
        '    float df;',
        '    vUv = uv;',
        '    dv = texture2D( tDiffuse, vUv.xy );',
        '    df = 0.30*dv.x + 0.59*dv.y + 0.11*dv.z;',
        '    newVertexPos = vec4( normalize( position ) * df * 10.0 ) + vec4( position, 1.0 );',
        '    vColor = vec3( dv.x, dv.y, dv.z );',
        '    gl_Position = projectionMatrix * modelViewMatrix * newVertexPos;',
        '}'
    ].join("\n"),

    fragmentShader: [
        'uniform float opacity;',
        'uniform sampler2D tDiffuse;',
        'varying vec2 vUv;',
        'void main() {',
        '    vec4 texel = texture2D( tDiffuse, vUv );',
        '    gl_FragColor = opacity * texel;',
        '}'
    ].join("\n")

};

Effectively, your GLSL compiler is brain-dead.
It is having trouble with the line normalize( position ), because you have not declared position. Instead of giving you a useful message that explains this, it complains that position does not have the proper dimensions.
This of course assumes you have actually pasted the proper vertex / fragment shaders. I am not convinced, as neither of those shaders has 80 lines of code.
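For what it's worth, the "not enough data provided for construction" message also matches the constructor on that line: normalize( position ) * df * 10.0 is a vec3, and GLSL will not build a vec4 from a lone vec3. A minimal sketch of that fix, assuming the displacement term is meant to carry w = 0.0:

```glsl
// vec4() needs all four components; append w explicitly.
// w = 0.0 so the displacement does not disturb the w = 1.0 of the position term.
newVertexPos = vec4( normalize( position ) * df * 10.0, 0.0 ) + vec4( position, 1.0 );
```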

Related

cast shadow from partly transparent plane

Is there a possibility to cast a shadow from a plane whose texture plays a video with a chromakey shader? My trials seem to answer no, but I guess my shader is not adapted. The object is a simple PlaneBufferGeometry, and the shader is:
vertexShader is:
varying vec2 vUv;
void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}
fragmentShader is:
uniform sampler2D vidtexture;
uniform vec3 color;
varying vec2 vUv;
void main() {
    vec3 tColor = texture2D( vidtexture, vUv ).rgb;
    float a = (length(tColor - color) - 0.5) * 7.0;
    gl_FragColor = vec4(tColor, a);
}
Have you tried using the discard keyword? If your fragment shader encounters that keyword, it won't render that fragment; it's as if it didn't exist. You could use this to create shadow outlines defined by your chromakey instead of always getting a square shadow.
void main(){
vec3 tColor = texture2D( vidtexture, vUv ).rgb;
float a = (length(tColor - color) - 0.5) * 7.0;
// Do not render pixels that are less than 10% opaque
if (a < 0.1) discard;
gl_FragColor = vec4(tColor, a);
}
This is the same approach Three.js uses for Material.alphaTest in all their built-in materials. You can see the GLSL source code for that command here.
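The rule the alphaTest comparison applies can be sketched on the CPU with plain JavaScript (a hypothetical helper, not the three.js source): fragments whose alpha falls below the threshold are dropped entirely rather than blended, which is exactly why the shadow silhouette follows the chromakey cutout.

```javascript
// CPU-side sketch of the discard rule: each fragment is { r, g, b, a },
// and discarded fragments simply vanish from the output.
function applyAlphaTest(fragments, threshold) {
  return fragments.filter((frag) => frag.a >= threshold);
}

const fragments = [
  { r: 1, g: 0, b: 0, a: 0.05 }, // nearly keyed out -> discarded
  { r: 0, g: 1, b: 0, a: 0.80 }, // opaque enough -> kept
];
console.log(applyAlphaTest(fragments, 0.1).length); // 1
```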

How to use RGB for tDiffuse in a custom fragment shader with THREE.js

I am writing a custom blur shader,
"#include <common>",
// blur samples
"const int SAMPLES = 10;",
"uniform float radius;",
"uniform sampler2D tDiffuse;",
"varying vec2 vUv;",
"void main() {",
// sample the source
" vec2 uv = vUv;",
" vec4 cTextureScreen = texture2D( tDiffuse, vUv );",
" vec3 res = vec3(0);",
" for(int i = 0; i < SAMPLES; ++i) {",
" res += cTextureScreen.a;",
" vec2 d = vec2(0.5) - uv;",
" uv += d * radius;",
" }",
" gl_FragColor = vec4(res/float(SAMPLES), 1.0);",
"}"
It is a straight port of https://www.shadertoy.com/view/ltVSRK, for use with the effect composer of THREE.
Unfortunately, it appears that tDiffuse is not packed as RGB, and I am pretty confused about how to achieve the desired effect or what conversion I need.
See also the following question for more details:
what do texture2D().r and texture2D().a mean?
I ended up solving my problems (and working around them):
THREE uses a linear colorspace for textures inside (effect composer) shaders.
It provides conversion utilities, such as LinearTosRGB.
My code above contains an error. The effect of the line vec4 cTextureScreen = texture2D( tDiffuse, vUv ); should be repeated inside the for loop in order to properly resample fragments while zooming outward.
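A sketch of that correction, assuming the intent is to accumulate .rgb at each shifted uv, as in the original Shadertoy loop:

```glsl
vec2 uv = vUv;
vec3 res = vec3(0.0);
for (int i = 0; i < SAMPLES; ++i) {
    res += texture2D( tDiffuse, uv ).rgb; // re-sample at the shifted uv each iteration
    vec2 d = vec2(0.5) - uv;              // step toward the screen centre
    uv += d * radius;
}
gl_FragColor = vec4(res / float(SAMPLES), 1.0);
```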
I ended up using a more elaborate shader, drawn from the Wagner effects library: https://github.com/spite/Wagner/blob/master/fragment-shaders/zoom-blur-fs.glsl#L1
uniform sampler2D tInput;
uniform vec2 center;
uniform float strength;
uniform vec2 resolution;
varying vec2 vUv;
float random(vec3 scale,float seed){return fract(sin(dot(gl_FragCoord.xyz+seed,scale))*43758.5453+seed);}
void main(){
vec4 color=vec4(0.0);
float total=0.0;
vec2 toCenter=center-vUv*resolution;
float offset=random(vec3(12.9898,78.233,151.7182),0.0);
for(float t=0.0;t<=40.0;t++){
float percent=(t+offset)/40.0;
float weight=4.0*(percent-percent*percent);
vec4 sample=texture2D(tInput,vUv+toCenter*percent*strength/resolution);
sample.rgb*=sample.a;
color+=sample*weight;
total+=weight;
}
gl_FragColor=color/total;
gl_FragColor.rgb/=gl_FragColor.a+0.00001;
}
And I am also looking at fast and efficient implementations of Gaussian Blur (https://github.com/Jam3/glsl-fast-gaussian-blur) to further soften my current results.
All in all, my original issue was the result of both a bug in my own code, and of THREE using a linear colorspace for textures.
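For reference, the linear-to-sRGB conversion follows the standard sRGB transfer curve; here is a plain JavaScript sketch of that curve (my own helper, not the three.js LinearTosRGB implementation):

```javascript
// Standard sRGB opto-electronic transfer function for one channel in [0, 1].
function linearToSRGB(c) {
  return c <= 0.0031308
    ? 12.92 * c
    : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

console.log(linearToSRGB(0));   // 0
console.log(linearToSRGB(1));   // ~1 (endpoints are preserved)
console.log(linearToSRGB(0.5)); // ~0.735 -- mid grey brightens noticeably
```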

Material shader smooth gradient between two colors

I have the following shaders used in a custom material shader in threeJS and applied to metaballs. The positions of the metaballs (2 in this example) are passed in an array allPos, and each is compared in distance with the vertex normal to find the closest one. Its index is used to assign a color blobColor, which is then passed to the fragment shader.
vertex shader
uniform vec3 colChoice[2];
varying vec3 vNormal;
varying vec3 blobColor;
varying vec3 otherColor;
uniform vec3 allPos[2];
varying float mixScale;
void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vNormal = normalize( normalMatrix * normal );
    float prevdist = 1000000000000000000000000000000.0;
    for(int i = 0; i < 2; i++){
        float distV = distance(allPos[i], normal.xyz);
        if(distV < prevdist){
            prevdist = distV;
            mixScale = distV;
            blobColor = colChoice[i];
            otherColor = colChoice[i-1];
        }
    }
    gl_Position = projectionMatrix * mvPosition;
}
fragment shader
"varying vec3 blobColor;",
"varying vec3 otherColor;",
"varying vec3 vNormal;",
"varying float mixScale;",
void main() {
finalColor = (0.45*vNormal) + mix(otherColor,blobColor,mixScale);
gl_FragColor = vec4( finalColor, 1.0 );
}
which gives me something like this:
It is a bit rough for now, and I would like to apply a smooth gradient between the two colors. Any suggestion?
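One thing worth checking first: GLSL's mix(a, b, t) is plain linear interpolation, a * (1.0 - t) + b * t, and it happily extrapolates when t falls outside [0, 1], while mixScale here is a raw distance rather than a normalised value. A quick JavaScript sketch of that behaviour:

```javascript
// mix(a, b, t) as GLSL defines it: linear interpolation, applied per component.
function mix(a, b, t) {
  return a * (1 - t) + b * t;
}

console.log(mix(0, 1, 0.5)); // 0.5 -- halfway between the two colours
console.log(mix(0, 1, 3.0)); // 3 -- t > 1 extrapolates past the second colour

// Clamping (or normalising the distance) keeps the blend inside the two colours:
const clamped = Math.min(Math.max(3.0, 0), 1);
console.log(mix(0, 1, clamped)); // 1
```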

GLSL Fragment Shader: control color by time passed

I am trying to write a simple shader (with the help of THREE.js), where the colour will update as time passes (from black to white).
Using examples, I am calculating the time passed and then using it to set my gl_FragColor. However, it is not working: the particles stay black, and then suddenly pop to 100% white around 10 seconds in.
Here is my fragment Shader:
precision highp float;
uniform float uTime;
uniform float uStartTime;
void main() {
float timePassed = (uTime - uStartTime) / 1000.0 * 0.1;
gl_FragColor = vec4(fract(timePassed), fract(timePassed), fract(timePassed), 1.0);
}
Here is how I set up my material:
const simulationMaterial = new THREE.ShaderMaterial({
uniforms: {
tPositions: { type: 't', value: positionsTexture },
tOrigins: { type: 't', value: originsTexture },
tPerlin: { type: 't', value: perlinTexture },
uTime: { type: 'f', value: 0.0 },
uStartTime: { type: 'f', value: Date.now() },
},
vertexShader: vertexSimulationShader,
fragmentShader: fragmentSimulationShader,
side: THREE.DoubleSide,
transparent: true,
});
And here is how I'm updating the uniforms (in a loop)
simulationMaterial.needsUpdate = true;
simulationMaterial.uniforms.uTime.value = Date.now();
My vertex shader is working fine:
precision highp float;
uniform vec3 color;
uniform sampler2D tPositions;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec2 uv;
attribute vec3 position;
attribute vec3 offset;
attribute vec3 particlePosition;
attribute vec4 orientationStart;
attribute vec4 orientationEnd;
varying vec3 vPosition;
varying vec3 vColor;
void main(){
vPosition = position;
vec4 orientation = normalize( orientationStart );
vec3 vcV = cross( orientation.xyz, vPosition );
vPosition = vcV * ( 2.0 * orientation.w ) + ( cross( orientation.xyz, vcV ) * 2.0 + vPosition );
vec4 data = texture2D( tPositions, uv );
vec3 particlePosition = (data.xyz - 0.5) * 1000.0;
vColor = data.xyz;
gl_Position = projectionMatrix * modelViewMatrix * vec4( vPosition + particlePosition, 1.0 );
}
Really can't see what I'm doing wrong.
The shader's highp float type, which is 32-bit, is not large enough to represent values as big as Date.now() accurately. In fact, the first integer that cannot be represented exactly as a 32-bit float is 16,777,217, which is 5 orders of magnitude smaller than Date.now() today. That is to say, this type is not large enough to meaningfully calculate (Date.now() + 10) - Date.now(). JavaScript engines represent numbers as 64-bit floats, which have the necessary range for the arithmetic to work sufficiently precisely.
You have found the correct solution yourself - do the really big arithmetic on the CPU in big enough types. Calculate the elapsed time on the CPU and pass it as a uniform to the shader.
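You can verify the precision loss directly in JavaScript with Math.fround, which rounds a number to the nearest 32-bit float, the same width a highp uniform gets at best:

```javascript
// A sample Date.now()-scale timestamp (ms since the epoch, ~1.7e12).
// A 32-bit float has a 24-bit mantissa, so at this magnitude consecutive
// representable values are roughly 131,072 ms apart.
const now = 1700000000000;

console.log(Math.fround(now) === Math.fround(now + 10)); // true: both collapse onto one value
console.log(Math.fround(16777216) === 16777216); // true  -- 2^24 is exactly representable
console.log(Math.fround(16777217) === 16777217); // false -- the first integer that is not
```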

Cube map distorts when you translate mesh

Trying to create a skybox using the cubemap shader (like in the examples), I noticed a distortion when you translate the mesh.
Say you create a cube of 1 unit in width, height, and depth, set side to THREE.BackSide and depthWrite to false, then scale the mesh to 1000 units on the x, y, and z axes.
When the mesh is positioned in the center of the world everything is fine. But as soon as you translate the mesh the cube map starts to distort badly.
You would want to move the mesh to be the same position as the camera thereby never allowing the skybox to reach its limits if the user walks around.
The shader code I'm using is this:
'cube': {

    uniforms: {
        "tCube": { type: "t", value: null },
        "tFlip": { type: "f", value: -1 }
    },

    vertexShader: [
        "varying vec3 vWorldPosition;",
        "void main() {",
        "    vec4 worldPosition = modelMatrix * vec4( position, 1.0 );",
        "    vWorldPosition = worldPosition.xyz;",
        "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
        "}"
    ].join("\n"),

    fragmentShader: [
        "uniform samplerCube tCube;",
        "uniform float tFlip;",
        "varying vec3 vWorldPosition;",
        "void main() {",
        "    gl_FragColor = textureCube( tCube, vec3( tFlip * vWorldPosition.x, vWorldPosition.yz ) );",
        "}"
    ].join("\n")

}
Does anyone know if the shader can be modified to prevent this distortion?
Many thanks!
After doing some research, I found that the cube map shaders for skyboxes rely on the camera being in the center of the world. So to get this working in the scenario I described above, instead of setting the position of the skybox to the camera's, I simply set the camera's world position to 0.
Just before rendering the skybox you need to do this:
// Get the current position
this._prevCamPos.getPositionFromMatrix( camera.matrixWorldInverse );
// Now set the position of the camera to be 0,0,0
camera.matrixWorldInverse.elements[12] = 0;
camera.matrixWorldInverse.elements[13] = 0;
camera.matrixWorldInverse.elements[14] = 0;
Then just after its rendered it needs to go back:
// Now revert the camera back
camera.matrixWorldInverse.elements[12] = this._prevCamPos.x;
camera.matrixWorldInverse.elements[13] = this._prevCamPos.y;
camera.matrixWorldInverse.elements[14] = this._prevCamPos.z;
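The same save-and-restore can be sketched with a plain array, since a three.js Matrix4 stores its elements column-major with the translation in elements 12-14:

```javascript
// Column-major 4x4 with a translation of (5, 2, -8) in elements 12-14.
const matrix = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  5, 2, -8, 1,
];

// Save the translation, then zero it for the skybox pass...
const saved = matrix.slice(12, 15);
matrix[12] = matrix[13] = matrix[14] = 0;
console.log(matrix.slice(12, 15)); // [ 0, 0, 0 ]

// ...and restore it before rendering the rest of the scene.
[matrix[12], matrix[13], matrix[14]] = saved;
console.log(matrix.slice(12, 15)); // [ 5, 2, -8 ]
```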
