I'm changing the z coordinate of the vertices on my geometry, but I find that the mesh stays the same size; I'm expecting it to get smaller. Tweening between vertex positions works as expected in X,Y space, however.
This is how I'm calculating my gl_Position by tweening the amplitude uniform in my render function:
<script type="x-shader/x-vertex" id="vertexshader">
uniform float amplitude;
uniform float direction;
uniform vec3 cameraPos;
uniform float time;
attribute vec3 tweenPosition;
varying vec2 vUv;

void main() {
    vec3 pos = position;
    vec3 morphed = vec3( 0.0, 0.0, 0.0 );
    morphed += ( tweenPosition - position ) * amplitude;
    morphed += pos;
    vec4 mvPosition = modelViewMatrix * vec4( morphed * vec3(1, -1, 0), 1.0 );
    vUv = uv;
    gl_Position = projectionMatrix * mvPosition;
}
</script>
I also tried something like this, based on the perspective calculation from webglfundamentals:
vec4 newPos = projectionMatrix * mvPosition;
float zToDivideBy = 1.0 + newPos.z * 1.0;
gl_Position = vec4(newPos.xyz, zToDivideBy);
This is my loop to calculate another vertex set that I'm tweening between:
for (var i = 0; i < positions.length; i++) {
    if ((i + 1) % 3 === 0) {
        // subtracting from z coord of each vertex
        tweenPositions[i] = positions[i] - (Math.random() * 2000);
    } else {
        tweenPositions[i] = positions[i];
    }
}
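For completeness, the tween targets also have to reach the shader as the tweenPosition attribute. A hypothetical Three.js wiring sketch, assuming a BufferGeometry (not necessarily the original setup):

// Hypothetical sketch: upload the tween targets as the 'tweenPosition'
// attribute declared in the vertex shader above.
var attr = new THREE.BufferAttribute(new Float32Array(tweenPositions), 3);
geometry.addAttribute('tweenPosition', attr); // setAttribute() in newer Three.js
attr.needsUpdate = true; // re-flag whenever the array is rewritten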
I get the same results with this -- objects further away in Z-space do not scale / attenuate / do anything different. What gives?
morphed * vec3(1, -1, 0)

z is always zero in your code: this component-wise multiply zeroes out the z component.

[x, y, z] * [1, -1, 0] = [x, -y, 0]
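A minimal fix is to let z through instead of zeroing it, keeping the y-flip:

// pass z through unchanged so depth actually varies
vec4 mvPosition = modelViewMatrix * vec4( morphed * vec3(1.0, -1.0, 1.0), 1.0 );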
I am a complete noob when it comes to creating shaders; or, better said, I just learned about them yesterday.
I am trying to create a really simple circle. I thought I had finally figured it out, but it turns out to be too large. It should match the size of the DisplayObject the filter is applied to.
The fragment shader:
precision mediump float;
varying vec2 vTextureCoord;
vec2 resolution = vec2(1.0, 1.0);

void main() {
    vec2 uv = vTextureCoord.xy / resolution.xy;
    uv -= 0.5;
    uv.x *= resolution.x / resolution.y;
    float r = 0.5;
    float d = length(uv);
    float c = smoothstep(d, d + 0.003, r);
    gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
}
Example using Pixi.js:
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
    attribute vec2 aVertexPosition;
    attribute vec2 aTextureCoord;
    uniform mat3 projectionMatrix;
    varying vec2 vTextureCoord;

    void main(void)
    {
        gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
        vTextureCoord = aTextureCoord;
    }
`;

var fragShader = `
    precision mediump float;
    varying vec2 vTextureCoord;
    vec2 resolution = vec2(1.0, 1.0);

    void main() {
        vec2 uv = vTextureCoord.xy / resolution.xy;
        uv -= 0.5;
        uv.x *= resolution.x / resolution.y;
        float r = 0.5;
        float d = length(uv);
        float c = smoothstep(d, d + 0.003, r);
        gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
    }
`;
var filter = new PIXI.Filter(vertexShader, fragShader);
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
Pixi.js's vTextureCoord does not go from 0 to 1.
From the docs:
V4 filters differ from V3. You can't just add in the shader and assume that texture coordinates are in the [0,1] range.
...
Note: vTextureCoord multiplied by filterArea.xy is the real size of bounding box.
If you want to get the pixel coordinates, use uniform filterArea, it will be passed to the filter automatically.
uniform vec4 filterArea;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
They are in pixels. That won't work if we want something like "fill the ellipse into a bounding box". So, let's pass the dimensions too! PIXI doesn't do it automatically; we need a manual fix:
filter.apply = function(filterManager, input, output)
{
    this.uniforms.dimensions[0] = input.sourceFrame.width;
    this.uniforms.dimensions[1] = input.sourceFrame.height;
    // draw the filter...
    filterManager.applyFilter(this, input, output);
};
Let's combine it in the shader!
uniform vec4 filterArea;
uniform vec2 dimensions;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
vec2 normalizedCoord = pixelCoord / dimensions;
Here's your snippet, updated:
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
    attribute vec2 aVertexPosition;
    attribute vec2 aTextureCoord;
    uniform mat3 projectionMatrix;
    varying vec2 vTextureCoord;

    void main(void)
    {
        gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
        vTextureCoord = aTextureCoord;
    }
`;

var fragShader = `
    precision mediump float;
    varying vec2 vTextureCoord;
    uniform vec2 dimensions;
    uniform vec4 filterArea;

    void main() {
        vec2 pixelCoord = vTextureCoord * filterArea.xy;
        vec2 uv = pixelCoord / dimensions;
        uv -= 0.5;
        float r = 0.5;
        float d = length(uv);
        float c = smoothstep(d, d + 0.003, r);
        gl_FragColor = vec4(vec3(c, 0.5, 0.0), 1.0);
    }
`;

var filter = new PIXI.Filter(vertexShader, fragShader);

filter.apply = function(filterManager, input, output)
{
    this.uniforms.dimensions[0] = input.sourceFrame.width;
    this.uniforms.dimensions[1] = input.sourceFrame.height;
    // draw the filter...
    filterManager.applyFilter(this, input, output);
};
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
It seems you've stumbled upon a weird floating-point precision problem: the texture coordinates (vTextureCoord) in your fragment shader aren't strictly in the (0, 1) range. Here's what I got when I added the line gl_FragColor = vec4(vTextureCoord, 0, 1):
It seems good, but if we inspect it closely, the lower-right pixel should be (1, 1, 0), but it isn't:
The problem goes away if, instead of setting the size to 500 by 500, we use a power-of-two size (say, 512 by 512):
The other possible way to mitigate the problem would be to circumvent Pixi's code that computes the projection matrix and provide your own that transforms a smaller quad into the desired screen position.
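If resizing to a power of two isn't an option, one cheap guard (a minimal sketch, not Pixi-specific) is to clamp the coordinates before using them:

// clamp away the precision drift above 1.0
vec2 uv = clamp(vTextureCoord, 0.0, 1.0);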
I'm playing with a shader concept to radially reveal an image using a shader in OpenGL ES. The end goal is to create a circular progress bar by discarding fragments in a fragment shader that renders a full circular progress texture.
I have coded my idea here in ShaderToy so you can play with it. I can't seem to get it to work, and since there's no way to debug I'm having a hard time figuring out why.
Here's my glsl code for the fragment shader:
float magnitude(vec2 vec)
{
    return sqrt((vec.x * vec.x) + (vec.y * vec.y));
}

float angleBetween(vec2 v1, vec2 v2)
{
    return acos(dot(v1, v2) / (magnitude(v1) * magnitude(v2)));
}

float getTargetAngle()
{
    return clamp(iGlobalTime, 0.0, 360.0);
}

// OpenGL uses upper left as origin by default
bool shouldDrawFragment(vec2 fragCoord)
{
    float targetAngle = getTargetAngle();
    float centerX = iResolution.x / 2.0;
    float centerY = iResolution.y / 2.0;
    vec2 center = vec2(centerX, centerY);
    vec2 up = vec2(centerX, 0.0) - center;
    vec2 v2 = fragCoord - center;
    float angleBetween = angleBetween(up, v2);
    return (angleBetween >= 0.0) && (angleBetween <= targetAngle);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    if (shouldDrawFragment(fragCoord)) {
        fragColor = texture2D(iChannel0, vec2(uv.x, -uv.y));
    } else {
        fragColor = texture2D(iChannel1, vec2(uv.x, -uv.y));
    }
}
It sweeps out revealing from the bottom on both sides. I just want it to sweep out from a vector pointing straight up, and moving in a clockwise motion.
Try this code:
const float PI = 3.1415926;
const float TWO_PI = 6.2831852;

float magnitude(vec2 vec)
{
    return sqrt((vec.x * vec.x) + (vec.y * vec.y));
}

float angleBetween(vec2 v1, vec2 v2)
{
    return atan( v1.x - v2.x, v1.y - v2.y ) + PI;
}

float getTargetAngle()
{
    return clamp( iGlobalTime, 0.0, TWO_PI );
}

// OpenGL uses upper left as origin by default
bool shouldDrawFragment(vec2 fragCoord)
{
    float targetAngle = getTargetAngle();
    float centerX = iResolution.x / 2.0;
    float centerY = iResolution.y / 2.0;
    vec2 center = vec2(centerX, centerY);
    float a = angleBetween(center, fragCoord);
    return a <= targetAngle;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    if (shouldDrawFragment(fragCoord)) {
        fragColor = texture2D(iChannel0, vec2(uv.x, -uv.y));
    } else {
        fragColor = texture2D(iChannel1, vec2(uv.x, -uv.y));
    }
}
Explanation:
The main change I made was the way the angle between two vectors is calculated:
return atan( v1.x - v2.x, v1.y - v2.y ) + PI;
This is the angle of the difference vector between v1 and v2. If you swap the x and y values, it changes where the 0 angle is; i.e., if you try this:
return atan( v1.y - v2.y, v1.x - v2.x ) + PI;
the circle begins from the right rather than from the top. You can also negate the value of atan to reverse the direction of the animation.
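For example, negating the first argument mirrors the angle and makes the sweep run counter-clockwise (a small variation on the function above, same assumptions):

float angleBetween(vec2 v1, vec2 v2)
{
    // mirrored x reverses the sweep direction
    return atan( -(v1.x - v2.x), v1.y - v2.y ) + PI;
}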
You also don't need to worry about the up vector when calculating the angle; notice the code just takes the angle between the center and the current fragment coordinates:
float a = angleBetween(center, fragCoord );
Other Notes:
Remember, calculations are in radians, not degrees, so I changed the clamp on time (although this doesn't really affect the output):
return clamp( iGlobalTime, 0.0, TWO_PI );
You have a variable with the same name as one of your functions:
float angleBetween = angleBetween(up, v2);
which should be avoided, since not all implementations are happy with it; I couldn't compile your shader on my machine until I changed this.
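For example, renaming the local variable is enough (hypothetical rename):

float a = angleBetween(up, v2); // was: float angleBetween = angleBetween(up, v2);
return (a >= 0.0) && (a <= targetAngle);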
Change only the two functions below:
float getTargetAngle()
{
    return clamp(iGlobalTime, 0.0, 6.14);
}

bool shouldDrawFragment(vec2 fragCoord)
{
    float targetAngle = getTargetAngle();
    float centerX = iResolution.x / 2.0;
    float centerY = iResolution.y / 2.0;
    vec2 center = vec2(centerX, centerY);
    vec2 up = vec2(centerX, 0.0) - center;
    vec2 v2 = fragCoord - center;
    if (fragCoord.x > 320.0) // a half width
    {
        up += 2.0 * vec2(up.x, -up.y);
        targetAngle *= 2.;
    }
    else
    {
        up -= 2.0 * vec2(up.x, -up.y);
        targetAngle -= 1.57;
    }
    float angleBetween = angleBetween(up, v2);
    return (angleBetween >= 0.0) && (angleBetween <= targetAngle);
}
Please point me in the right direction: how do I make the Earth's atmosphere visible both from space and from the ground (as shown in the image)?
A model of the Earth:
Earth = new THREE.Mesh(new THREE.SphereGeometry(6700,32,32),ShaderMaterialEarth);
A model of the cosmos:
cosmos= new THREE.Mesh(new THREE.SphereGeometry(50000,32,32),ShaderMaterialCosmos);
and a light source:
sun = new THREE.DirectionalLight();
I just do not know where to start. Perhaps ShaderMaterialCosmos should do this: pass in the camera position and calculate how each pixel should be shaded. But how?
I tried using the following, but I get zero vectors at the input of the fragment shader:
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html
vertexShader:
#define M_PI 3.1415926535897932384626433832795

const float ESun = 1.0;
const float Kr = 0.0025;
const float Km = 0.0015;
const int nSamples = 2;
const float fSamples = 1.0;
const float fScaleDepth = 0.25;

varying vec2 vUv;
varying vec3 wPosition;
varying vec4 c0;
varying vec4 c1;
varying vec3 t0;

uniform vec3 v3CameraPos;     // The camera's current position
uniform vec3 v3LightDir;      // Direction vector to the light source
uniform vec3 v3InvWavelength; // 1 / pow(wavelength, 4) for RGB
uniform float fCameraHeight;  // The camera's current height

const float fOuterRadius = 6500.0;                        // The outer (atmosphere) radius
const float fInnerRadius = 6371.0;                        // The inner (planetary) radius
const float fKrESun = Kr * ESun;                          // Kr * ESun
const float fKmESun = Km * ESun;                          // Km * ESun
const float fKr4PI = Kr * 4.0 * M_PI;                     // Kr * 4 * PI
const float fKm4PI = Km * 4.0 * M_PI;                     // Km * 4 * PI
const float fScale = 1.0 / (fOuterRadius - fInnerRadius); // 1 / (fOuterRadius - fInnerRadius)
const float fScaleOverScaleDepth = fScale / fScaleDepth;  // fScale / fScaleDepth
const float fInvScaleDepth = 1.0 / 0.25;

float getNearIntersection(vec3 v3Pos, vec3 v3Ray, float fDistance2, float fRadius2)
{
    float B = 2.0 * dot(v3Pos, v3Ray);
    float C = fDistance2 - fRadius2;
    float fDet = max(0.0, B * B - 4.0 * C);
    return 0.5 * (-B - sqrt(fDet));
}

float scale(float fCos)
{
    float x = 1.0 - fCos;
    return fScaleDepth * exp(-0.00287 + x * (0.459 + x * (3.83 + x * (-6.80 + x * 5.25))));
}

void main() {
    // Get the ray from the camera to the vertex and its length (which
    // is the far point of the ray passing through the atmosphere)
    vec3 v3Pos = position.xyz;
    vec3 v3Ray = v3Pos - v3CameraPos;
    float fFar = length(v3Ray);
    v3Ray /= fFar;

    // Calculate the closest intersection of the ray with
    // the outer atmosphere (point A in Figure 16-3)
    float fNear = getNearIntersection(v3CameraPos, v3Ray, fCameraHeight * fCameraHeight, fOuterRadius * fOuterRadius);

    // Calculate the ray's start and end positions in the atmosphere,
    // then calculate its scattering offset
    vec3 v3Start = v3CameraPos + v3Ray * fNear;
    fFar -= fNear;
    float fStartAngle = dot(v3Ray, v3Start) / fOuterRadius;
    float fStartDepth = exp(-fInvScaleDepth);
    float fStartOffset = fStartDepth * scale(fStartAngle);

    // Initialize the scattering loop variables
    float fSampleLength = fFar / fSamples;
    float fScaledLength = fSampleLength * fScale;
    vec3 v3SampleRay = v3Ray * fSampleLength;
    vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

    // Now loop through the sample points
    vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
    for (int i = 0; i < nSamples; i++) {
        float fHeight = length(v3SamplePoint);
        float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
        float fLightAngle = dot(v3LightDir, v3SamplePoint) / fHeight;
        float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;
        float fScatter = (fStartOffset + fDepth * (scale(fLightAngle) * scale(fCameraAngle)));
        vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
        v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
        v3SamplePoint += v3SampleRay;
    }

    wPosition = (modelMatrix * vec4(position, 1.0)).xyz;
    c0.rgb = v3FrontColor * (v3InvWavelength * fKrESun);
    c1.rgb = v3FrontColor * fKmESun;
    t0 = v3CameraPos - v3Pos;
    vUv = uv;
}
fragmentShader:
float getMiePhase(float fCos, float fCos2, float g, float g2) {
    return 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos2) / pow(1.0 + g2 - 2.0 * g * fCos, 1.5);
}

// Rayleigh phase function
float getRayleighPhase(float fCos2) {
    //return 0.75 + 0.75 * fCos2;
    return 0.75 * (2.0 + 0.5 * fCos2);
}

varying vec2 vUv;
varying vec3 wPosition;
varying vec4 c0;
varying vec4 c1;
varying vec3 t0;

uniform vec3 v3LightDir;
uniform float g;
uniform float g2;

void main() {
    float fCos = dot(v3LightDir, t0) / length(t0);
    float fCos2 = fCos * fCos;
    gl_FragColor = getRayleighPhase(fCos2) * c0 + getMiePhase(fCos, fCos2, g, g2) * c1;
    gl_FragColor = c1; // note: this overwrites the line above
}
Chapter 16 of GPU Gems 2 has a nice explanation and illustrations for achieving your goal in real time.
Basically, you need to perform ray casting through the atmosphere layer and evaluate the light scattering.
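As a side note on the zero vectors: uniforms that are never assigned stay at their zero defaults, so make sure the scattering uniforms are fed every frame. A minimal Three.js sketch (uniform names taken from the shader above; the wavelength constants follow the GPU Gems chapter):

// update the scattering uniforms every frame, e.g. in the render loop
function updateAtmosphereUniforms(uniforms, camera, sun) {
    uniforms.v3CameraPos.value.copy(camera.position);
    uniforms.fCameraHeight.value = camera.position.length();
    uniforms.v3LightDir.value.copy(sun.position).normalize();
    uniforms.v3InvWavelength.value.set(
        1.0 / Math.pow(0.650, 4.0), // red
        1.0 / Math.pow(0.570, 4.0), // green
        1.0 / Math.pow(0.475, 4.0)  // blue
    );
}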
So I've recently gotten into using WebGL, and more specifically writing GLSL shaders, and I have run into a snag while writing the fragment shader for my "water" shader, which is derived from this tutorial.
What I'm trying to achieve is a stepped shading (toon shading, cel shading...) effect on waves generated by my vertex shader, but the fragment shader seems to treat the waves as though they were still a flat plane, and the entire mesh is drawn as one solid color.
What am I missing here? The sphere works perfectly, but flat surfaces are all shaded uniformly. I have the same problem if I use a cube: each face of the cube is shaded independently, but the entire face is given a solid color.
The Scene
This is how I have my test scene set up: two meshes using the same material (a sphere and a plane) plus a light source.
The Problem
As you can see, the shader is working as expected on the sphere.
I enabled wireframe for this shot to show that the vertex shader (Perlin noise) is working beautifully on the plane.
But when I turn the wireframe off, you can see that the fragment shader seems to receive the same level of light uniformly across the entire plane, creating this...
Rotating the plane to face the light source changes the color of the material, but again the color is applied uniformly over the entire surface of the plane.
The Fragment Shader
In all its script-kiddie glory, lol.
uniform vec3 uMaterialColor;
uniform vec3 uDirLightPos;
uniform vec3 uDirLightColor;
uniform float uKd;
uniform float uBorder;

varying vec3 vNormal;
varying vec3 vViewPosition;

void main() {
    vec4 color;

    // compute direction to light
    vec4 lDirection = viewMatrix * vec4( uDirLightPos, 0.0 );
    vec3 lVector = normalize( lDirection.xyz );

    // N * L. Normal must be normalized, since it's interpolated.
    vec3 normal = normalize( vNormal );

    // check the diffuse dot product against uBorder and adjust
    // this diffuse value accordingly.
    float diffuse = max( dot( normal, lVector ), 0.0 );
    if (diffuse > 0.95)
        color = vec4(1.0, 0.0, 0.0, 1.0);
    else if (diffuse > 0.85)
        color = vec4(0.9, 0.0, 0.0, 1.0);
    else if (diffuse > 0.75)
        color = vec4(0.8, 0.0, 0.0, 1.0);
    else if (diffuse > 0.65)
        color = vec4(0.7, 0.0, 0.0, 1.0);
    else if (diffuse > 0.55)
        color = vec4(0.6, 0.0, 0.0, 1.0);
    else if (diffuse > 0.45)
        color = vec4(0.5, 0.0, 0.0, 1.0);
    else if (diffuse > 0.35)
        color = vec4(0.4, 0.0, 0.0, 1.0);
    else if (diffuse > 0.25)
        color = vec4(0.3, 0.0, 0.0, 1.0);
    else if (diffuse > 0.15)
        color = vec4(0.2, 0.0, 0.0, 1.0);
    else if (diffuse > 0.05)
        color = vec4(0.1, 0.0, 0.0, 1.0);
    else
        color = vec4(0.05, 0.0, 0.0, 1.0);

    gl_FragColor = color;
}
The Vertex Shader
vec3 mod289(vec3 x)
{
    return x - floor(x * (1.0 / 289.0)) * 289.0;
}

vec4 mod289(vec4 x)
{
    return x - floor(x * (1.0 / 289.0)) * 289.0;
}

vec4 permute(vec4 x)
{
    return mod289(((x * 34.0) + 1.0) * x);
}

vec4 taylorInvSqrt(vec4 r)
{
    return 1.79284291400159 - 0.85373472095314 * r;
}

vec3 fade(vec3 t) {
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0);
}
// Classic Perlin noise
float cnoise(vec3 P)
{
    vec3 Pi0 = floor(P);        // Integer part for indexing
    vec3 Pi1 = Pi0 + vec3(1.0); // Integer part + 1
    Pi0 = mod289(Pi0);
    Pi1 = mod289(Pi1);
    vec3 Pf0 = fract(P);        // Fractional part for interpolation
    vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
    vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
    vec4 iy = vec4(Pi0.yy, Pi1.yy);
    vec4 iz0 = Pi0.zzzz;
    vec4 iz1 = Pi1.zzzz;

    vec4 ixy = permute(permute(ix) + iy);
    vec4 ixy0 = permute(ixy + iz0);
    vec4 ixy1 = permute(ixy + iz1);

    vec4 gx0 = ixy0 * (1.0 / 7.0);
    vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
    gx0 = fract(gx0);
    vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
    vec4 sz0 = step(gz0, vec4(0.0));
    gx0 -= sz0 * (step(0.0, gx0) - 0.5);
    gy0 -= sz0 * (step(0.0, gy0) - 0.5);

    vec4 gx1 = ixy1 * (1.0 / 7.0);
    vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
    gx1 = fract(gx1);
    vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
    vec4 sz1 = step(gz1, vec4(0.0));
    gx1 -= sz1 * (step(0.0, gx1) - 0.5);
    gy1 -= sz1 * (step(0.0, gy1) - 0.5);

    vec3 g000 = vec3(gx0.x, gy0.x, gz0.x);
    vec3 g100 = vec3(gx0.y, gy0.y, gz0.y);
    vec3 g010 = vec3(gx0.z, gy0.z, gz0.z);
    vec3 g110 = vec3(gx0.w, gy0.w, gz0.w);
    vec3 g001 = vec3(gx1.x, gy1.x, gz1.x);
    vec3 g101 = vec3(gx1.y, gy1.y, gz1.y);
    vec3 g011 = vec3(gx1.z, gy1.z, gz1.z);
    vec3 g111 = vec3(gx1.w, gy1.w, gz1.w);

    vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
    g000 *= norm0.x;
    g010 *= norm0.y;
    g100 *= norm0.z;
    g110 *= norm0.w;
    vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
    g001 *= norm1.x;
    g011 *= norm1.y;
    g101 *= norm1.z;
    g111 *= norm1.w;

    float n000 = dot(g000, Pf0);
    float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
    float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
    float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
    float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
    float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
    float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
    float n111 = dot(g111, Pf1);

    vec3 fade_xyz = fade(Pf0);
    vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
    vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
    float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
    return 2.2 * n_xyz;
}
// Classic Perlin noise, periodic variant
float pnoise(vec3 P, vec3 rep)
{
    vec3 Pi0 = mod(floor(P), rep);        // Integer part, modulo period
    vec3 Pi1 = mod(Pi0 + vec3(1.0), rep); // Integer part + 1, mod period
    Pi0 = mod289(Pi0);
    Pi1 = mod289(Pi1);
    vec3 Pf0 = fract(P);        // Fractional part for interpolation
    vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
    vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
    vec4 iy = vec4(Pi0.yy, Pi1.yy);
    vec4 iz0 = Pi0.zzzz;
    vec4 iz1 = Pi1.zzzz;

    vec4 ixy = permute(permute(ix) + iy);
    vec4 ixy0 = permute(ixy + iz0);
    vec4 ixy1 = permute(ixy + iz1);

    vec4 gx0 = ixy0 * (1.0 / 7.0);
    vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
    gx0 = fract(gx0);
    vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
    vec4 sz0 = step(gz0, vec4(0.0));
    gx0 -= sz0 * (step(0.0, gx0) - 0.5);
    gy0 -= sz0 * (step(0.0, gy0) - 0.5);

    vec4 gx1 = ixy1 * (1.0 / 7.0);
    vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
    gx1 = fract(gx1);
    vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
    vec4 sz1 = step(gz1, vec4(0.0));
    gx1 -= sz1 * (step(0.0, gx1) - 0.5);
    gy1 -= sz1 * (step(0.0, gy1) - 0.5);

    vec3 g000 = vec3(gx0.x, gy0.x, gz0.x);
    vec3 g100 = vec3(gx0.y, gy0.y, gz0.y);
    vec3 g010 = vec3(gx0.z, gy0.z, gz0.z);
    vec3 g110 = vec3(gx0.w, gy0.w, gz0.w);
    vec3 g001 = vec3(gx1.x, gy1.x, gz1.x);
    vec3 g101 = vec3(gx1.y, gy1.y, gz1.y);
    vec3 g011 = vec3(gx1.z, gy1.z, gz1.z);
    vec3 g111 = vec3(gx1.w, gy1.w, gz1.w);

    vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
    g000 *= norm0.x;
    g010 *= norm0.y;
    g100 *= norm0.z;
    g110 *= norm0.w;
    vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
    g001 *= norm1.x;
    g011 *= norm1.y;
    g101 *= norm1.z;
    g111 *= norm1.w;

    float n000 = dot(g000, Pf0);
    float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
    float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
    float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
    float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
    float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
    float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
    float n111 = dot(g111, Pf1);

    vec3 fade_xyz = fade(Pf0);
    vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
    vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
    float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
    return 2.2 * n_xyz;
}
varying vec2 vUv;
varying float noise;
uniform float time;

// for the cell shader
varying vec3 vNormal;
varying vec3 vViewPosition;

float turbulence( vec3 p ) {
    float w = 100.0;
    float t = -0.5;
    for (float f = 1.0; f <= 10.0; f++) {
        float power = pow( 2.0, f );
        t += abs( pnoise( vec3( power * p ), vec3( 10.0, 10.0, 10.0 ) ) / power );
    }
    return t;
}

varying vec3 vertexWorldPos;

void main() {
    vUv = uv;

    // add time to the noise parameters so it's animated
    noise = 10.0 * -0.10 * turbulence( 0.5 * normal + time );
    float b = 25.0 * pnoise( 0.05 * position + vec3( 2.0 * time ), vec3( 100.0 ) );
    float displacement = -10.0 - noise + b;

    vec3 newPosition = position + normal * displacement;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );

    // for the cell shader effect
    vNormal = normalize( normalMatrix * normal );
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vViewPosition = -mvPosition.xyz;
}
Worth mentioning:
- I am using the Three.js library
- My light source is an instance of THREE.SpotLight
First of all, shadows are completely different. Your problem here is a lack of change in the per-vertex normal after displacement. Correcting this is not going to get you shadows, but your lighting will at least vary across your displaced geometry.
If you have access to partial derivatives, you can do this in the fragment shader. Otherwise, you are kind of out of luck in GL ES, due to a lack of vertex adjacency information. You could also compute per-face normals with a Geometry Shader, but that is not an option in WebGL.
These should be all of the necessary changes to implement this; note that it requires partial-derivative support (an optional extension in OpenGL ES 2.0).
Vertex Shader:
varying vec3 vertexViewPos; // NEW

void main() {
    ...
    vec3 newPosition = position + normal * displacement;
    vertexViewPos = (modelViewMatrix * vec4(newPosition, 1.0)).xyz; // NEW
    ...
}
Fragment Shader:
#extension GL_OES_standard_derivatives : require

uniform vec3 uMaterialColor;
uniform vec3 uDirLightPos;
uniform vec3 uDirLightColor;
uniform float uKd;
uniform float uBorder;

varying vec3 vNormal;
varying vec3 vViewPosition;
varying vec3 vertexViewPos; // NEW

void main() {
    vec4 color;

    // compute direction to light
    vec4 lDirection = viewMatrix * vec4( uDirLightPos, 0.0 );
    vec3 lVector = normalize( lDirection.xyz );

    // N * L. Normal must be normalized, since it's interpolated.
    vec3 normal = normalize(cross(dFdx(vertexViewPos), dFdy(vertexViewPos))); // UPDATED
    ...
}
To enable partial-derivative support in WebGL, you need to check for the extension like this:
var ext = gl.getExtension("OES_standard_derivatives");
if (!ext) {
    alert("OES_standard_derivatives does not exist on this machine");
    return;
}
// proceed with the shaders above.
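Since the question uses Three.js: a ShaderMaterial can also request the extension for you via its extensions field (assuming a Three.js version that supports it), instead of raw gl calls:

var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: vertexShader,
    fragmentShader: fragmentShader,
    extensions: { derivatives: true } // injects the #extension directive for you
});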