Blurring (two-pass shader) an object with a transparent background?
I have an object I'm trying to blur.
1. Render it to a transparent FBO (glClear with 1, 1, 1, 0).
2. Render that FBO's texture to a second transparent FBO with a vertical blur shader.
3. Render the result to the screen with a horizontal blur shader.
Here is what an example looks like unblurred, and then blurred with this technique:
Obviously the issue is that white glow around the blurred object.
I think I grasp the basic concept of why this is happening: while the pixels around the object in the FBO are transparent, they still hold the color (1, 1, 1), and as a result that color gets mixed into the blur. I just don't know what to do to remedy it.
Here is my horizontal blur shader; the vertical one is much the same:
hBlur.vert
uniform mat4 u_projTrans;
uniform float u_blurPixels;
uniform float u_texelWidth;
attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec4 a_color;
varying vec2 v_texCoord;
varying vec2 v_blurTexCoords[14];
void main()
{
v_texCoord = a_texCoord0;
gl_Position = u_projTrans * a_position;
float blurDistance6 = u_blurPixels * u_texelWidth;
float blurDistance5 = blurDistance6 * 0.84;
float blurDistance4 = blurDistance6 * 0.70;
float blurDistance3 = blurDistance6 * 0.56;
float blurDistance2 = blurDistance6 * 0.42;
float blurDistance1 = blurDistance6 * 0.28;
float blurDistance0 = blurDistance6 * 0.14;
v_blurTexCoords[ 0] = v_texCoord + vec2(-blurDistance6, 0.0);
v_blurTexCoords[ 1] = v_texCoord + vec2(-blurDistance5, 0.0);
v_blurTexCoords[ 2] = v_texCoord + vec2(-blurDistance4, 0.0);
v_blurTexCoords[ 3] = v_texCoord + vec2(-blurDistance3, 0.0);
v_blurTexCoords[ 4] = v_texCoord + vec2(-blurDistance2, 0.0);
v_blurTexCoords[ 5] = v_texCoord + vec2(-blurDistance1, 0.0);
v_blurTexCoords[ 6] = v_texCoord + vec2(-blurDistance0, 0.0);
v_blurTexCoords[ 7] = v_texCoord + vec2( blurDistance0, 0.0);
v_blurTexCoords[ 8] = v_texCoord + vec2( blurDistance1, 0.0);
v_blurTexCoords[ 9] = v_texCoord + vec2( blurDistance2, 0.0);
v_blurTexCoords[10] = v_texCoord + vec2( blurDistance3, 0.0);
v_blurTexCoords[11] = v_texCoord + vec2( blurDistance4, 0.0);
v_blurTexCoords[12] = v_texCoord + vec2( blurDistance5, 0.0);
v_blurTexCoords[13] = v_texCoord + vec2( blurDistance6, 0.0);
}
blur.frag
uniform sampler2D u_texture;
varying vec2 v_texCoord;
varying vec2 v_blurTexCoords[14];
void main()
{
gl_FragColor = vec4(0.0);
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 0]) * 0.0044299121055113265;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 1]) * 0.00895781211794;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 2]) * 0.0215963866053;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 3]) * 0.0443683338718;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 4]) * 0.0776744219933;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 5]) * 0.115876621105;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 6]) * 0.147308056121;
gl_FragColor += texture2D(u_texture, v_texCoord ) * 0.159576912161;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 7]) * 0.147308056121;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 8]) * 0.115876621105;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[ 9]) * 0.0776744219933;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[10]) * 0.0443683338718;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[11]) * 0.0215963866053;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[12]) * 0.00895781211794;
gl_FragColor += texture2D(u_texture, v_blurTexCoords[13]) * 0.0044299121055113265;
}
I'd be lying if I said I was completely certain what this code does. But in summary, it samples pixels within a radius of u_blurPixels and sums the results into gl_FragColor using pre-determined Gaussian weights.
How would I modify this to prevent the white glow due to a transparent background?
This blur procedure is really not meant for transparent images, so some adjustment is needed.
Let's look at what your code is doing: it weights the surrounding pixels according to their distance and then normalizes them, averages them, whatever you want to call it. What I mean is that your factors 0.0044299121055113265, 0.00895781211794, ... are normalized so that their sum is always 1. More naturally these values might be, for instance (using only 3 pixels), scales = [1, 5, 1], where the result is then (pix[0]*scales[0] + pix[1]*scales[1] + pix[2]*scales[2]) / (scales[0] + scales[1] + scales[2]).
So if we take a step back, your code can be transformed into:
int offset = 7; // maximum sample offset on each side of the center
// Pseudocode: GLSL ES 1.00 has no array initializers, so in a real shader
// these would be assigned element by element. The weights run from the
// center outward, using the same values as in your shader.
float offsetScales[offset+1] = { // +1 is for the zero offset
0.159576912161,
0.147308056121,
...
};
float sumOfScales = offsetScales[0];
for(int i=0; i<offset; i++) sumOfScales += offsetScales[i+1]*2.0;
gl_FragColor = texture2D(u_texture, v_texCoord)*offsetScales[0];
for(int i=0; i<offset; i++) {
gl_FragColor += texture2D(u_texture, v_blurTexCoords[6-i]) * offsetScales[i+1]; // left of center
gl_FragColor += texture2D(u_texture, v_blurTexCoords[7+i]) * offsetScales[i+1]; // right of center
}
gl_FragColor /= sumOfScales; // sumOfScales in current case is always 1.0
Unless I made some mistakes, this code should do exactly the same as yours. It is a bit more flexible, though: if you added another pixel to the range (offset = 8), you could simply append its weight (say 0.0022) and your color would never overflow, whereas in your version you would need to re-adjust all of the scales so that their sum equals 1.0. But never mind that; your way is closer to optimal, so keep using it. I am explaining this to take a step back and find the solution to your problem...
OK, so now that the code is a bit more maintainable, let's see what happens when the alpha channel needs to take effect. When a pixel is transparent or semi-transparent, it should have less effect (or none) in computing the overall color, but it should still take effect on the final alpha. That means that next to those scales we also need an alpha-weighted scale, and in doing so we need to adjust the sum of applied colors:
int offset = 7; // Maximum range taking effect
float offsetScales[offset+1] = { // +1 is for the zero offset
0.159576912161,
0.147308056121,
...
};
highp vec4 summedColor = vec4(0.0); // best to use high precision for the accumulators
highp float overallAlpha = 0.0; // the actual final alpha (previously sumOfScales)
highp float overallScale = 0.0; // needed so alpha doesn't overflow; if the sum of the original scales is 1.0, this factor is 1.0 and not needed at all
vec4 fetchedColor = texture2D(u_texture, v_texCoord);
float scaleWithAlpha = fetchedColor.a * offsetScales[0];
overallScale += offsetScales[0];
summedColor += fetchedColor*scaleWithAlpha;
overallAlpha += scaleWithAlpha;
for(int i=0; i<offset; i++) {
fetchedColor = texture2D(u_texture, v_blurTexCoords[6-i]); // left of center
scaleWithAlpha = fetchedColor.a * offsetScales[i+1];
overallScale += offsetScales[i+1];
summedColor += fetchedColor*scaleWithAlpha;
overallAlpha += scaleWithAlpha;
fetchedColor = texture2D(u_texture, v_blurTexCoords[7+i]); // right of center
scaleWithAlpha = fetchedColor.a * offsetScales[i+1];
overallScale += offsetScales[i+1];
summedColor += fetchedColor*scaleWithAlpha;
overallAlpha += scaleWithAlpha;
}
overallAlpha /= overallScale;
summedColor /= overallAlpha; // TODO: if overallAlpha is 0.0 then discard or use clear color
gl_FragColor = vec4(summedColor.xyz, overallAlpha);
Some adjustment may still be needed to this code, but I hope it gets you on the right track. Once you have it working, I suggest you again lose all the loops and unroll it the way you originally did (with the new logic). It would also be nice if you posted the code you ended up with.
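For illustration, here is a minimal, untested sketch of the whole fragment shader with the alpha-weighted logic (keeping the loop for brevity; it assumes your existing varyings and weights, and GLSL ES 1.00, which has no array initializers):
precision mediump float; // your framework may already prepend this

uniform sampler2D u_texture;
varying vec2 v_texCoord;
varying vec2 v_blurTexCoords[14];

void main()
{
    // Weights from your shader, center first, then outward.
    float w[8];
    w[0] = 0.159576912161;
    w[1] = 0.147308056121;
    w[2] = 0.115876621105;
    w[3] = 0.0776744219933;
    w[4] = 0.0443683338718;
    w[5] = 0.0215963866053;
    w[6] = 0.00895781211794;
    w[7] = 0.0044299121055113265;

    vec4 c = texture2D(u_texture, v_texCoord);
    vec4 summedColor = c * (c.a * w[0]); // color weighted by scale and alpha
    float overallAlpha = c.a * w[0];     // alpha weighted by scale only

    for (int i = 0; i < 7; i++) {
        c = texture2D(u_texture, v_blurTexCoords[6 - i]); // left of center
        summedColor += c * (c.a * w[i + 1]);
        overallAlpha += c.a * w[i + 1];

        c = texture2D(u_texture, v_blurTexCoords[7 + i]); // right of center
        summedColor += c * (c.a * w[i + 1]);
        overallAlpha += c.a * w[i + 1];
    }

    // The weights already sum to 1.0, so overallScale is 1.0 and drops out.
    if (overallAlpha <= 0.0) {
        gl_FragColor = vec4(0.0); // nothing visible was sampled
    } else {
        gl_FragColor = vec4(summedColor.rgb / overallAlpha, overallAlpha);
    }
}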
Feel free to ask any questions...
The problem is that OpenGL ES uses post-multiplied alpha (it's cheaper in hardware), whereas doing this "properly" needs premultiplied alpha.
You can do the pre-multiplication maths in the shader for each sample you blur:
premult.rgb = source.rgb * source.a;
... but then you incur a run-time cost for every texture sample you blend. It's generally better to premultiply your input art assets offline, during texture creation/compression.
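For example, here is an untested sketch of the horizontal pass with per-sample premultiplication (same uniforms, varyings and weights as the blur.frag above):
precision mediump float; // your framework may already prepend this

uniform sampler2D u_texture;
varying vec2 v_texCoord;
varying vec2 v_blurTexCoords[14];

// Fetch a texel and convert it to premultiplied alpha.
vec4 premultSample(vec2 uv)
{
    vec4 c = texture2D(u_texture, uv);
    c.rgb *= c.a;
    return c;
}

void main()
{
    gl_FragColor  = premultSample(v_blurTexCoords[ 0]) * 0.0044299121055113265;
    gl_FragColor += premultSample(v_blurTexCoords[ 1]) * 0.00895781211794;
    gl_FragColor += premultSample(v_blurTexCoords[ 2]) * 0.0215963866053;
    gl_FragColor += premultSample(v_blurTexCoords[ 3]) * 0.0443683338718;
    gl_FragColor += premultSample(v_blurTexCoords[ 4]) * 0.0776744219933;
    gl_FragColor += premultSample(v_blurTexCoords[ 5]) * 0.115876621105;
    gl_FragColor += premultSample(v_blurTexCoords[ 6]) * 0.147308056121;
    gl_FragColor += premultSample(v_texCoord)          * 0.159576912161;
    gl_FragColor += premultSample(v_blurTexCoords[ 7]) * 0.147308056121;
    gl_FragColor += premultSample(v_blurTexCoords[ 8]) * 0.115876621105;
    gl_FragColor += premultSample(v_blurTexCoords[ 9]) * 0.0776744219933;
    gl_FragColor += premultSample(v_blurTexCoords[10]) * 0.0443683338718;
    gl_FragColor += premultSample(v_blurTexCoords[11]) * 0.0215963866053;
    gl_FragColor += premultSample(v_blurTexCoords[12]) * 0.00895781211794;
    gl_FragColor += premultSample(v_blurTexCoords[13]) * 0.0044299121055113265;
    // The result stays premultiplied; composite it with
    // glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
}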
If you need post-multiplied data for lighting computation, etc., you can make the error less visible by extruding the object color into the neighboring "transparent" pixels, i.e. dilating the color out into the transparent region.
Note that if your shaders emit premultiplied-alpha fragment colors, you'll need to fix your OpenGL blend function to match: use GL_ONE as the source factor rather than GL_SRC_ALPHA, i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
Related
GLSL sparking vertex shader
I am trying to tweak this ShaderToy example for vertices to create 'sparks' out of them. I have tried to play with gl_PointCoord and gl_FragCoord without any results. Maybe someone here could help me? I need an effect similar to this animated gif:

uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;

#define M_PI 3.1415926535897932384626433832795

float rand(vec2 co) {
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    float size = 30.0;
    float prob = 0.95;
    vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob) {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + sin(time + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(gl_FragCoord.xy, center) / (0.5 * size);
        color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
    } else if (rand(gl_FragCoord.xy / resolution.xy) > 0.996) {
        float r = rand(gl_FragCoord.xy);
        color = r * ( 0.25 * sin(time * (r * 5.0) + 720.0 * r) + 0.75);
    }
    gl_FragColor = vec4(vec3(color), 1.0);
}

As I understand it, I have to play with vec2 pos, setting it to a vertex position.
You don't need to play with pos. As the vertex shader is only run once per vertex, there is no way to process per-pixel values there using pos. However, you can do per-pixel processing using gl_PointCoord. I can think of only two ways of changing the scale of a texture:
- gl_PointSize in the vertex shader (OpenGL ES);
- in the fragment shader, changing the texture UV value, for example: vec4 color = texture(texture0, ((gl_PointCoord - 0.5) * factor) + vec2(0.5));
If you don't want to use any texture but only do pixel processing in the fragment shader, you can set the UV like ((gl_PointCoord - 0.5) * factor) + vec2(0.5) instead of the uv that is normally computed as fragCoord.xy / iResolution.xy in Shadertoy.
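For instance, a minimal point-sprite fragment shader along those lines might look like this sketch (texture0 and factor are illustrative names, not from the question):
precision mediump float;

uniform sampler2D texture0; // the sprite texture
uniform float factor;       // > 1.0 shrinks the image inside the sprite

void main()
{
    // Rescale the per-sprite coordinate around the sprite's center.
    vec2 uv = (gl_PointCoord - 0.5) * factor + vec2(0.5);
    gl_FragColor = texture2D(texture0, uv);
}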
Shader Z space perspective ShaderMaterial BufferGeometry
I'm changing the z coordinate of vertices on my geometry but find that the mesh stays the same size, and I'm expecting it to get smaller. Tweening between vertex positions works as expected in x, y space, however. This is how I'm calculating my gl_Position, by tweening the amplitude uniform in my render function:

<script type="x-shader/x-vertex" id="vertexshader">
uniform float amplitude;
uniform float direction;
uniform vec3 cameraPos;
uniform float time;
attribute vec3 tweenPosition;
varying vec2 vUv;
void main() {
    vec3 pos = position;
    vec3 morphed = vec3( 0.0, 0.0, 0.0 );
    morphed += ( tweenPosition - position ) * amplitude;
    morphed += pos;
    vec4 mvPosition = modelViewMatrix * vec4( morphed * vec3(1, -1, 0), 1.0 );
    vUv = uv;
    gl_Position = projectionMatrix * mvPosition;
}
</script>

I also tried something like this, from the perspective calculation on webglfundamentals:

vec4 newPos = projectionMatrix * mvPosition;
float zToDivideBy = 1.0 + newPos.z * 1.0;
gl_Position = vec4(newPos.xyz, zToDivideBy);

This is my loop to calculate another vertex set that I'm tweening between:

for (var i = 0; i < positions.length; i++) {
    if ((i + 1) % 3 === 0) {
        // subtracting from z coord of each vertex
        tweenPositions[i] = positions[i] - (Math.random() * 2000);
    } else {
        tweenPositions[i] = positions[i];
    }
}

I get the same results with this -- objects further away in z-space do not scale / attenuate / do anything different. What gives?
morphed * vec3(1, -1, 0) makes z always zero in your code: [x, y, z] * [1, -1, 0] = [x, -y, 0].
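Presumably the intent was to flip y while keeping z, i.e. a scale of vec3(1.0, -1.0, 1.0); a guess at the fix:

vec4 mvPosition = modelViewMatrix * vec4( morphed * vec3(1.0, -1.0, 1.0), 1.0 );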
Three.js/Webgl vertex.y does not update
In an effort to learn vertex/fragment shaders, I decided to create a simple rain effect by updating the y position of a point in the vertex shader and resetting it back to animate through again, using a Three.js PointCloud. I got it to animate across the screen once, but it gets stuck after resetting the y position.

uniform float size;
uniform float delta;
varying float vOpacity;
varying float vTexture;

void main() {
    vOpacity = opacity;
    vTexture = texture;
    gl_PointSize = 164.0;
    vec3 p = position;
    p.y -= delta * 50.0;
    vec4 mvPosition = modelViewMatrix * vec4( 1.0 * p, 1.0 );
    vec4 nPos = projectionMatrix * mvPosition;
    if (nPos.y < -200.0) {
        nPos.y = 100.0;
    }
    gl_Position = nPos;
}

Any ideas? Thanks
A shader does not change the vertex position permanently; gl_Position = nPos; will not propagate back to your position attribute in the geometry. A shader only runs on the graphics card and has no access to the memory of the browser. You can change your code to this:

nPos.y = mod(nPos.y, 300.0) - 200.0;

Now the y coordinate should change as you want it to (going from 100 to -200, then back to 100).
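Put together, the question's main() with this fix might look like the following sketch (position, modelViewMatrix and projectionMatrix are the usual Three.js built-ins; the unrelated varyings are omitted):

uniform float delta;

void main() {
    gl_PointSize = 164.0;
    vec3 p = position;
    p.y -= delta * 50.0;
    vec4 nPos = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    // Wrap instead of a one-shot reset: y cycles from 100 down to -200.
    nPos.y = mod(nPos.y, 300.0) - 200.0;
    gl_Position = nPos;
}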
WebGL heightmap using vertex shader, using 32 bits instead of 8 bits
I'm using the following vertex shader (courtesy http://stemkoski.github.io/Three.js/Shader-Heightmap-Textures.html) to generate terrain from a grayscale height map:

uniform sampler2D bumpTexture;
uniform float bumpScale;
varying float vAmount;
varying vec2 vUV;

void main() {
    vUV = uv;
    vec4 bumpData = texture2D( bumpTexture, uv );
    vAmount = bumpData.r; // assuming map is grayscale it doesn't matter if you use r, g, or b.
    // move the position along the normal
    vec3 newPosition = position + normal * bumpScale * vAmount;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
}

I'd like to have 32 bits of resolution, and have generated a heightmap that encodes heights as RGBA. I have no idea how to go about changing the shader code to accommodate this. Any direction or help?
bumpData.r, .g, .b and .a are all quantities in the range [0.0, 1.0], equivalent to the original byte values divided by 255.0. So, depending on your endianness, a naive conversion back to the original int might be:

(bumpData.r * 255.0) +
(bumpData.g * 255.0 * 256.0) +
(bumpData.b * 255.0 * 256.0 * 256.0) +
(bumpData.a * 255.0 * 256.0 * 256.0 * 256.0)

That's the same as a dot product with the vector (255.0, 65280.0, 16711680.0, 4278190080.0), which is likely to be the much more efficient way to implement it.
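Applied to the question's vertex shader, the decode might look like this sketch (it assumes R is the least significant byte and rescales the height back to [0, 1]; adjust that to match your encoding, and note that highp float precision will cap the effective resolution below a full 32 bits):

uniform sampler2D bumpTexture;
uniform float bumpScale;
varying float vAmount;
varying vec2 vUV;

void main() {
    vUV = uv;
    vec4 bumpData = texture2D( bumpTexture, uv );
    // One dot product reconstructs the original uint32 (0 .. 4294967295).
    float height = dot( bumpData, vec4(255.0, 65280.0, 16711680.0, 4278190080.0) );
    vAmount = height / 4294967295.0; // normalize back to [0, 1]
    vec3 newPosition = position + normal * bumpScale * vAmount;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
}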
With three.js:

const generateHeightTexture = (width) => {
    // let max_texture_width = RENDERER.capabilities.maxTextureSize;
    let pixels = new Float32Array(width * width);
    pixels.fill(0, 0, pixels.length);
    let texture = new THREE.DataTexture(pixels, width, width, THREE.AlphaFormat, THREE.FloatType);
    texture.magFilter = THREE.LinearFilter;
    texture.minFilter = THREE.NearestFilter;
    // texture.anisotropy = RENDERER.capabilities.getMaxAnisotropy();
    texture.needsUpdate = true;
    console.log('Built Physical Texture:', width, 'x', width);
    return texture;
}
GLSL Shadows with Perlin Noise
So I've recently gotten into using WebGL, and more specifically writing GLSL shaders, and I have run into a snag while writing the fragment shader for my "water" shader, which is derived from this tutorial. What I'm trying to achieve is a stepped shading (toon shading, cel shading...) effect on waves generated by my vertex shader, but the fragment shader seems to treat the waves as though they were still a flat plane, and the entire mesh is drawn as one solid color. What am I missing here? The sphere works perfectly, but flat surfaces are all shaded uniformly. I have the same problem if I use a cube: each face on the cube is shaded independently, but the entire face is given a solid color.
The Scene
This is how I have my test scene set up. I have two meshes using the same material, a sphere and a plane, and a light source.
The Problem
As you can see, the shader is working as expected on the sphere. I enabled wireframe for this shot to show that the vertex shader (Perlin noise) is working beautifully on the plane. But when I turn the wireframe off you can see that the fragment shader seems to be receiving the same level of light uniformly across the entire plane, creating this...
Rotating the plane to face the light source will change the color of the material, but again the color is applied uniformly over the entire surface of the plane.
The Fragment Shader
In all its script-kid glory, lol.

uniform vec3 uMaterialColor;
uniform vec3 uDirLightPos;
uniform vec3 uDirLightColor;
uniform float uKd;
uniform float uBorder;
varying vec3 vNormal;
varying vec3 vViewPosition;

void main() {
    vec4 color;

    // compute direction to light
    vec4 lDirection = viewMatrix * vec4( uDirLightPos, 0.0 );
    vec3 lVector = normalize( lDirection.xyz );

    // N * L. Normal must be normalized, since it's interpolated.
    vec3 normal = normalize( vNormal );

    // check the diffuse dot product against uBorder and adjust
    // this diffuse value accordingly.
    float diffuse = max( dot( normal, lVector ), 0.0);
    if (diffuse > 0.95)      color = vec4(1.0,0.0,0.0,1.0);
    else if (diffuse > 0.85) color = vec4(0.9,0.0,0.0,1.0);
    else if (diffuse > 0.75) color = vec4(0.8,0.0,0.0,1.0);
    else if (diffuse > 0.65) color = vec4(0.7,0.0,0.0,1.0);
    else if (diffuse > 0.55) color = vec4(0.6,0.0,0.0,1.0);
    else if (diffuse > 0.45) color = vec4(0.5,0.0,0.0,1.0);
    else if (diffuse > 0.35) color = vec4(0.4,0.0,0.0,1.0);
    else if (diffuse > 0.25) color = vec4(0.3,0.0,0.0,1.0);
    else if (diffuse > 0.15) color = vec4(0.2,0.0,0.0,1.0);
    else if (diffuse > 0.05) color = vec4(0.1,0.0,0.0,1.0);
    else                     color = vec4(0.05,0.0,0.0,1.0);
    gl_FragColor = color;
}

The Vertex Shader

vec3 mod289(vec3 x) { return x - floor(x * (1.0 / 289.0)) * 289.0; }
vec4 mod289(vec4 x) { return x - floor(x * (1.0 / 289.0)) * 289.0; }
vec4 permute(vec4 x) { return mod289(((x*34.0)+1.0)*x); }
vec4 taylorInvSqrt(vec4 r) { return 1.79284291400159 - 0.85373472095314 * r; }
vec3 fade(vec3 t) { return t*t*t*(t*(t*6.0-15.0)+10.0); }

// Classic Perlin noise
float cnoise(vec3 P) {
    vec3 Pi0 = floor(P);        // Integer part for indexing
    vec3 Pi1 = Pi0 + vec3(1.0); // Integer part + 1
    Pi0 = mod289(Pi0);
    Pi1 = mod289(Pi1);
    vec3 Pf0 = fract(P);        // Fractional part for interpolation
    vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
    vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
    vec4 iy = vec4(Pi0.yy, Pi1.yy);
    vec4 iz0 = Pi0.zzzz;
    vec4 iz1 = Pi1.zzzz;
    vec4 ixy = permute(permute(ix) + iy);
    vec4 ixy0 = permute(ixy + iz0);
    vec4 ixy1 = permute(ixy + iz1);
    vec4 gx0 = ixy0 * (1.0 / 7.0);
    vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
    gx0 = fract(gx0);
    vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
    vec4 sz0 = step(gz0, vec4(0.0));
    gx0 -= sz0 * (step(0.0, gx0) - 0.5);
    gy0 -= sz0 * (step(0.0, gy0) - 0.5);
    vec4 gx1 = ixy1 * (1.0 / 7.0);
    vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
    gx1 = fract(gx1);
    vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
    vec4 sz1 = step(gz1, vec4(0.0));
    gx1 -= sz1 * (step(0.0, gx1) - 0.5);
    gy1 -= sz1 * (step(0.0, gy1) - 0.5);
    vec3 g000 = vec3(gx0.x, gy0.x, gz0.x);
    vec3 g100 = vec3(gx0.y, gy0.y, gz0.y);
    vec3 g010 = vec3(gx0.z, gy0.z, gz0.z);
    vec3 g110 = vec3(gx0.w, gy0.w, gz0.w);
    vec3 g001 = vec3(gx1.x, gy1.x, gz1.x);
    vec3 g101 = vec3(gx1.y, gy1.y, gz1.y);
    vec3 g011 = vec3(gx1.z, gy1.z, gz1.z);
    vec3 g111 = vec3(gx1.w, gy1.w, gz1.w);
    vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
    g000 *= norm0.x; g010 *= norm0.y; g100 *= norm0.z; g110 *= norm0.w;
    vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
    g001 *= norm1.x; g011 *= norm1.y; g101 *= norm1.z; g111 *= norm1.w;
    float n000 = dot(g000, Pf0);
    float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
    float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
    float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
    float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
    float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
    float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
    float n111 = dot(g111, Pf1);
    vec3 fade_xyz = fade(Pf0);
    vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
    vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
    float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
    return 2.2 * n_xyz;
}

// Classic Perlin noise, periodic variant
float pnoise(vec3 P, vec3 rep) {
    vec3 Pi0 = mod(floor(P), rep);        // Integer part, modulo period
    vec3 Pi1 = mod(Pi0 + vec3(1.0), rep); // Integer part + 1, mod period
    Pi0 = mod289(Pi0);
    Pi1 = mod289(Pi1);
    vec3 Pf0 = fract(P);        // Fractional part for interpolation
    vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
    vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
    vec4 iy = vec4(Pi0.yy, Pi1.yy);
    vec4 iz0 = Pi0.zzzz;
    vec4 iz1 = Pi1.zzzz;
    vec4 ixy = permute(permute(ix) + iy);
    vec4 ixy0 = permute(ixy + iz0);
    vec4 ixy1 = permute(ixy + iz1);
    vec4 gx0 = ixy0 * (1.0 / 7.0);
    vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
    gx0 = fract(gx0);
    vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
    vec4 sz0 = step(gz0, vec4(0.0));
    gx0 -= sz0 * (step(0.0, gx0) - 0.5);
    gy0 -= sz0 * (step(0.0, gy0) - 0.5);
    vec4 gx1 = ixy1 * (1.0 / 7.0);
    vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
    gx1 = fract(gx1);
    vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
    vec4 sz1 = step(gz1, vec4(0.0));
    gx1 -= sz1 * (step(0.0, gx1) - 0.5);
    gy1 -= sz1 * (step(0.0, gy1) - 0.5);
    vec3 g000 = vec3(gx0.x, gy0.x, gz0.x);
    vec3 g100 = vec3(gx0.y, gy0.y, gz0.y);
    vec3 g010 = vec3(gx0.z, gy0.z, gz0.z);
    vec3 g110 = vec3(gx0.w, gy0.w, gz0.w);
    vec3 g001 = vec3(gx1.x, gy1.x, gz1.x);
    vec3 g101 = vec3(gx1.y, gy1.y, gz1.y);
    vec3 g011 = vec3(gx1.z, gy1.z, gz1.z);
    vec3 g111 = vec3(gx1.w, gy1.w, gz1.w);
    vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
    g000 *= norm0.x; g010 *= norm0.y; g100 *= norm0.z; g110 *= norm0.w;
    vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
    g001 *= norm1.x; g011 *= norm1.y; g101 *= norm1.z; g111 *= norm1.w;
    float n000 = dot(g000, Pf0);
    float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
    float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
    float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
    float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
    float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
    float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
    float n111 = dot(g111, Pf1);
    vec3 fade_xyz = fade(Pf0);
    vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
    vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
    float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
    return 2.2 * n_xyz;
}

varying vec2 vUv;
varying float noise;
uniform float time;

// for the cell shader
varying vec3 vNormal;
varying vec3 vViewPosition;

float turbulence( vec3 p ) {
    float w = 100.0;
    float t = -.5;
    for (float f = 1.0 ; f <= 10.0 ; f++ ) {
        float power = pow( 2.0, f );
        t += abs( pnoise( vec3( power * p ), vec3( 10.0, 10.0, 10.0 ) ) / power );
    }
    return t;
}

varying vec3 vertexWorldPos;

void main() {
    vUv = uv;
    // add time to the noise parameters so it's animated
    noise = 10.0 * -.10 * turbulence( .5 * normal + time );
    float b = 25.0 * pnoise( 0.05 * position + vec3( 2.0 * time ), vec3( 100.0 ) );
    float displacement = - 10. - noise + b;
    vec3 newPosition = position + normal * displacement;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
    // for the cell shader effect
    vNormal = normalize( normalMatrix * normal );
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vViewPosition = -mvPosition.xyz;
}

Worth Mention
I am using the Three.js library. My light source is an instance of THREE.SpotLight.
First of all, shadows are completely different. Your problem here is a lack of change in the per-vertex normal after displacement. Correcting this is not going to get you shadows, but your lighting will at least vary across your displaced geometry.
If you have access to partial derivatives, you can do this in the fragment shader. Otherwise, you are kind of out of luck in GL ES, due to a lack of vertex adjacency information. You could also compute per-face normals with a geometry shader, but that is not an option in WebGL.
These should be all of the necessary changes to implement this; note that it requires partial derivative support (an optional extension in OpenGL ES 2.0).
Vertex Shader:

varying vec3 vertexViewPos; // NEW

void main() {
    ...
    vec3 newPosition = position + normal * displacement;
    vertexViewPos = (modelViewMatrix * vec4 (newPosition, 1.0)).xyz; // NEW
    ...
}

Fragment Shader:

#extension GL_OES_standard_derivatives : require

uniform vec3 uMaterialColor;
uniform vec3 uDirLightPos;
uniform vec3 uDirLightColor;
uniform float uKd;
uniform float uBorder;
varying vec3 vNormal;
varying vec3 vViewPosition;
varying vec3 vertexViewPos; // NEW

void main() {
    vec4 color;

    // compute direction to light
    vec4 lDirection = viewMatrix * vec4( uDirLightPos, 0.0 );
    vec3 lVector = normalize( lDirection.xyz );

    // N * L. Normal must be normalized, since it's interpolated.
    vec3 normal = normalize(cross(dFdx(vertexViewPos), dFdy(vertexViewPos))); // UPDATED
    ...
}

To enable partial derivative support in WebGL you need to check for the extension like this:

var ext = gl.getExtension("OES_standard_derivatives");
if (!ext) {
    alert("OES_standard_derivatives does not exist on this machine");
    return;
}
// proceed with the shaders above