OpenGL problem with vertex shader on Android emulator, "typename expected" - filter

I have very little understanding of shaders, and this one is not mine. It just can't be loaded on my emulator, and it gives me an error in the form of a log file. To elaborate on the problem: I'm using the DraStic emulator for DS games; this is just a filter or shader for that emulator, but it doesn't seem to load.
The log file says:
0:18: L0001: Typename expected, found '\'
So I'm asking here to see where the problem is.
I tried very little, since I know very little:
First I changed the code a bit. Before, the log said it found "if", so I moved that line down to see if it would change anything; now it reports finding "\" instead, and I don't know what else to try.
The code is in a file with the extension .dsd and is the following:
<vertex>
attribute vec2 a_vertex_coordinate;
attribute vec2 a_texture_coordinate;
uniform vec4 u_texture_size;
varying vec4 v_texture_coordinate;
varying vec4 v_texture_coordinate_1;
void main()
{
vec2 ps = vec2(1.0 / u_texture_size.z, 1.0 / u_texture_size.w);
float dx = ps.x;
float dy = ps.y;
gl_Position = vec4(a_vertex_coordinate.xy, 0.0, 1.0);
v_texture_coordinate = a_texture_coordinate.xyxy;
v_texture_coordinate_1.xy = vec2(0.0,-dy); // B
v_texture_coordinate_1.zw = vec2(-dx,0.0); // D
}
</vertex>
<fragment>
const vec3 dtt = vec3(65536.0, 255.0, 1.0);
float reduce(vec3 color) {
return dot(color, dtt);
}
uniform sampler2D u_texture;
uniform vec4 u_texture_size;
varying vec4 v_texture_coordinate;
varying vec4 v_texture_coordinate_1;
#define FILTRO(PE, PI, PH, PF, PG, PC, PD, PB, PA, G5, C4, G0, C1, I4, I5, N15, N14, N11, F, H)
\
if ( PE!=PH && ((PH==PF && ( (PE!=PI && (PE!=PB || PE!=PD || PB==C1 && PD==G0 || PF!=PB && PF!=PC || PH!=PD && PH!=PG)) \
|| (PE==PG && (PI==PH || PE==PD || PH!=PD)) \
|| (PE==PC && (PI==PH || PE==PB || PF!=PB)) ))\
|| (PE!=PF && (PE==PC && (PF!=PI && (PH==PI && PF!=PB || PE!=PI && PF==C4) || PE!=PI && PE==PG)))) ) \
{\
N11 = (N11+F)*0.5;\
N14 = (N14+H)*0.5;\
N15 = F;\
}\
else if (PE!=PH && PE!=PF && (PH!=PI && PE==PG && (PF==PI && PH!=PD || PE!=PI && PH==G5)))\
{\
N11 = (N11+H)*0.5;\
N14 = N11;\
N15 = H;\
}\
void main()
{
vec2 fp = fract(v_texture_coordinate.xy * u_texture_size.zw);
vec2 g1 = v_texture_coordinate_1.xy * (step(0.5,fp.x) + step(0.5, fp.y) - 1.0) + v_texture_coordinate_1.zw * (step(0.5,fp.x) - step(0.5, fp.y));
vec2 g2 = v_texture_coordinate_1.xy * (step(0.5,fp.y) - step(0.5, fp.x)) + v_texture_coordinate_1.zw * (step(0.5,fp.x) + step(0.5, fp.y) - 1.0);
vec3 A = texture2D(u_texture, v_texture_coordinate.xy + g1 + g2 ).xyz;
vec3 B = texture2D(u_texture, v_texture_coordinate.xy + g1 ).xyz;
vec3 C = texture2D(u_texture, v_texture_coordinate.xy + g1 - g2 ).xyz;
vec3 D = texture2D(u_texture, v_texture_coordinate.xy + g2 ).xyz;
vec3 E = texture2D(u_texture, v_texture_coordinate.xy ).xyz;
vec3 F = texture2D(u_texture, v_texture_coordinate.xy - g2 ).xyz;
vec3 G = texture2D(u_texture, v_texture_coordinate.xy - g1 + g2 ).xyz;
vec3 H = texture2D(u_texture, v_texture_coordinate.xy - g1 ).xyz;
vec3 I = texture2D(u_texture, v_texture_coordinate.xy - g1 - g2 ).xyz;
vec3 A1 = texture2D(u_texture, v_texture_coordinate.xy + 2.0 * g1 + g2 ).xyz;
vec3 C1 = texture2D(u_texture, v_texture_coordinate.xy + 2.0 * g1 - g2 ).xyz;
vec3 A0 = texture2D(u_texture, v_texture_coordinate.xy + g1 + 2.0 * g2 ).xyz;
vec3 G0 = texture2D(u_texture, v_texture_coordinate.xy - g1 + 2.0 * g2 ).xyz;
vec3 C4 = texture2D(u_texture, v_texture_coordinate.xy + g1 - 2.0 * g2 ).xyz;
vec3 I4 = texture2D(u_texture, v_texture_coordinate.xy - g1 - 2.0 * g2 ).xyz;
vec3 G5 = texture2D(u_texture, v_texture_coordinate.xy - 2.0 * g1 + g2 ).xyz;
vec3 I5 = texture2D(u_texture, v_texture_coordinate.xy - 2.0 * g1 - g2 ).xyz;
vec3 E11 = E;
vec3 E14 = E;
vec3 E15 = E;
float a = reduce(A);
float b = reduce(B);
float c = reduce(C);
float d = reduce(D);
float e = reduce(E);
float f = reduce(F);
float g = reduce(G);
float h = reduce(H);
float i = reduce(I);
float a1 = reduce( A1);
float c1 = reduce( C1);
float a0 = reduce( A0);
float g0 = reduce( G0);
float c4 = reduce( C4);
float i4 = reduce( I4);
float g5 = reduce( G5);
float i5 = reduce( I5);
FILTRO(e, i, h, f, g, c, d, b, a, g5, c4, g0, c1, i4, i5, E15, E14, E11, F, H);
gl_FragColor.rgb = (fp.x < 0.50) ? ((fp.x < 0.25) ? ((fp.y < 0.25) ? E15: (fp.y < 0.50) ? E11: (fp.y < 0.75) ? E14: E15) : ((fp.y < 0.25) ? E14: (fp.y < 0.50) ? E : (fp.y < 0.75) ? E : E11)) : ((fp.x < 0.75) ? ((fp.y < 0.25) ? E11: (fp.y < 0.50) ? E : (fp.y < 0.75) ? E : E14) : ((fp.y < 0.25) ? E15: (fp.y < 0.50) ? E14: (fp.y < 0.75) ? E11 : E15));
}
</fragment>
This code is used from the .dfx file, if I'm not mistaken, so here's the .dfx as well:
<options>
name=4XBR v1.1 Low configuration
textures=1
</options>
<fheader>
#if GL_ES
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
#endif
</fheader>
<texture:0>
input=framebuffer
min_filter=GL_NEAREST
mag_filter=GL_NEAREST
</texture>
<pass>
shader=4XBR_v1.1_Low configuration.dsd
sampler:u_texture=0
</pass>
That should be everything I have gone through. If anyone can help me, it would be immensely appreciated. Thanks!

Based on the naming used, this shader is written in the OpenGL ES 2.0 shading language (#version 100). Line continuation isn't supported in ESSL 1.00, so this shader is relying on a vendor extension or out-of-spec behavior.
Line continuation support was added in the OpenGL ES 3.0 shading language (#version 300 es), but if you switch to that you need to update the rest of the shader's input and output declarations to the new style (in/out rather than attribute/varying, and an explicit out variable rather than gl_FragColor), and you need an ES 3.0 context.
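For reference, a sketch of what those ESSL 3.00 declaration changes look like for the vertex inputs and outputs above (whether the emulator can create an ES 3.0 context at all is a separate question):

```glsl
#version 300 es
// ESSL 3.00 replaces attribute/varying with in/out
in vec2 a_vertex_coordinate;
in vec2 a_texture_coordinate;
uniform vec4 u_texture_size;
out vec4 v_texture_coordinate;
out vec4 v_texture_coordinate_1;
// ...and the fragment stage declares its own output variable
// instead of writing to the built-in gl_FragColor:
// out vec4 fragColor;
```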
The workaround is to flatten the FILTRO() macro onto a single line to avoid the line continuations (tested locally here on the Mali Offline Compiler and it compiles OK).
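Flattening means joining every continued line of the macro body into one physical line. The real FILTRO() body is very long, so here is the pattern on a shortened, hypothetical macro (the actual macro flattens the same way):

```glsl
// Before: relies on '\' line continuations (not in ESSL 1.00)
// #define BLEND(N, F, H) \
//     N = (N + F) * 0.5;  \
//     N = (N + H) * 0.5;

// After: the whole body on one physical line
#define BLEND(N, F, H) N = (N + F) * 0.5; N = (N + H) * 0.5;
```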


Three.js ShaderMaterial Post Processing and Transparent Background

I'm trying to work with this shader, but I need a transparent background and it renders a black background.
I realized that this is done within the fragmentShader, but I haven't figured out how to change it, and I don't even know if it's possible.
Could anyone with shader experience tell me how?
var myEffect = {
uniforms: {
"tDiffuse": { value: null },
"distort": { value: 0 },
"resolution": { value: new THREE.Vector2(1., innerHeight / innerWidth) },
"uMouse": { value: new THREE.Vector2(-10, -10) },
"uVelo": { value: 0 },
"time": { value: 0 }
},
vertexShader: `uniform float time;
uniform float progress;
uniform vec2 resolution;
varying vec2 vUv;
uniform sampler2D texture1;
const float pi = 3.1415926;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0 );
}`,
fragmentShader: `uniform float time;
uniform float progress;
uniform sampler2D tDiffuse;
uniform vec2 resolution;
varying vec2 vUv;
uniform vec2 uMouse;
uniform float uVelo;
float circle(vec2 uv, vec2 disc_center, float disc_radius, float border_size) {
uv -= disc_center;
uv*=resolution;
float dist = sqrt(dot(uv, uv));
return smoothstep(disc_radius+border_size, disc_radius-border_size, dist);
}
float map(float value, float min1, float max1, float min2, float max2) {
return min2 + (value - min1) * (max2 - min2) / (max1 - min1);
}
float remap(float value, float inMin, float inMax, float outMin, float outMax) {
return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}
float hash12(vec2 p) {
float h = dot(p,vec2(127.1,311.7));
return fract(sin(h)*43758.5453123);
}
// #define HASHSCALE3 vec3(.1031, .1030, .0973)
vec2 hash2d(vec2 p)
{
vec3 p3 = fract(vec3(p.xyx) * vec3(.1031, .1030, .0973));
p3 += dot(p3, p3.yzx+19.19);
return fract((p3.xx+p3.yz)*p3.zy);
}
void main() {
vec2 newUV = vUv;
vec4 color = vec4(1.,0.,0.,1.);
float c = circle(newUV, uMouse, 0.0, 0.2);
float r = texture2D(tDiffuse, newUV.xy += c * (uVelo * .5)).x;
float g = texture2D(tDiffuse, newUV.xy += c * (uVelo * .525)).y;
float b = texture2D(tDiffuse, newUV.xy += c * (uVelo * .55)).z;
color = vec4(r, g, b, 1.);
gl_FragColor = color;
}`
}
Well, assuming you set up three.js for transparency, my guess is it's the last part:
void main() {
vec2 newUV = vUv;
vec4 color = vec4(1.,0.,0.,1.);
float c = circle(newUV, uMouse, 0.0, 0.2);
float r = texture2D(tDiffuse, newUV.xy += c * (uVelo * .5)).x;
float g = texture2D(tDiffuse, newUV.xy += c * (uVelo * .525)).y;
float b = texture2D(tDiffuse, newUV.xy += c * (uVelo * .55)).z;
float a = texture2D(tDiffuse, newUV.xy += c * (uVelo * .525)).w; // added
color = vec4(r, g, b, a); // changed
gl_FragColor = color;
}
This might also work better
vec4 c1 = texture2D(tDiffuse, newUV.xy += c * (0.1 * .5));
vec4 c2 = texture2D(tDiffuse, newUV.xy += c * (0.1 * .525));
vec4 c3 = texture2D(tDiffuse, newUV.xy += c * (0.1 * .55));
float a = min(min(c1.a, c2.a), c3.a);
vec4 color = vec4(c1.r, c2.g, c3.b, a);
gl_FragColor = color;
You may also need to premultiply the alpha:
gl_FragColor = color;
gl_FragColor.rgb *= gl_FragColor.a;
Thanks @gman, you helped me understand the algorithm. I solved it as follows.
void main() {
vec2 newUV = vUv;
float c = circle(newUV, uMouse, 0.0, 0.2);
float a = texture2D(tDiffuse, newUV.xy+c*(uVelo)).w; //added
float r = texture2D(tDiffuse, newUV.xy += c * (uVelo * .5)).x;
float g = texture2D(tDiffuse, newUV.xy += c * (uVelo * .525)).y;
float b = texture2D(tDiffuse, newUV.xy += c * (uVelo * .55)).z;
vec4 color = vec4(r, g, b, r+g+b+a); //changed
gl_FragColor = color;
}

Glow on shader seems misplaced

I want to get a gradient circle with a glow in the middle, using the method in my code, but something is going wrong: the glowing part in the middle isn't exactly centered. The bottom part under the glow is bigger than the top part. I checked the pixels in Paint just to be sure it wasn't an optical illusion.
See image:
Why is this happening?
code:
// Author #patriciogv - 2015
// http://patriciogonzalezvivo.com
#ifdef GL_ES
precision mediump float;
#endif
#define PI 3.14159265359
#define TWO_PI 6.28318530718
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
void main(){
vec2 c = gl_FragCoord.xy/u_resolution.xy;
c.x *= u_resolution.x/u_resolution.y;
c.y = c.y;
c = vec2(0.5, 0.5) - c;
float d = smoothstep(0.0, 1.572, 0.336 - length(c.xy));
float glowsize = 30.712;
gl_FragColor = vec4(0., .0, d, 1.) * glowsize ;
}
You have to do the translation first, so that the origin is in the center of the viewport. After that you have to apply the aspect ratio:
precision mediump float;
varying vec2 vertPos;
varying vec4 vertColor;
uniform vec2 u_resolution;
void main()
{
vec2 c = gl_FragCoord.xy/u_resolution.xy;
c = vec2(0.5, 0.5) - c;
c.x *= u_resolution.x/u_resolution.y;
float d = smoothstep(0.0, 1.572, 0.336 - length(c.xy));
float glowsize = 30.712;
gl_FragColor = vec4( vec3(/*0., .0,*/ d), 1.) * glowsize ;
}
Note, if the aspect ratio were 1/2 and vec2 c = vec2(0.5, 0.5) (this is the fragment at the center of the viewport), then your result is:
c = (0.5, 0.5)
c' = (0.5, 0.5) - c * (1.0/2.0, 1.0)
c' = (0.25, 0.0)
If you first translate and do the multiplication by the aspect ratio after that, then the result is:
c = (0.5, 0.5)
c' = [(0.5, 0.5) - c] * (1.0/2.0, 1.0)
c' = (0.0, 0.0)
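The translate-then-scale order can be wrapped in a small helper (a sketch; the function name is mine):

```glsl
// Translate to the viewport center first, then correct for aspect:
vec2 centeredUV(vec2 fragCoord, vec2 resolution)
{
    vec2 c = fragCoord / resolution;     // normalize to [0,1]
    c = vec2(0.5) - c;                   // origin at the center
    c.x *= resolution.x / resolution.y;  // aspect correction last
    return c;
}
```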
Update:
With the following shader you can see that the result is perfectly centered:
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
void main() {
vec2 c = gl_FragCoord.xy/u_resolution.xy;
c = vec2(0.5, 0.5) - c;
c.x *= u_resolution.x/u_resolution.y;
float d = smoothstep(0.0, 1.572, 0.336 - length(c.xy));
float glowsize = 30.712;
float dia = step(abs(abs(c.x)-abs(c.y)),0.005);
gl_FragColor = vec4( mix( vec3(0.0, 0.0, d) * glowsize, vec3(1.0,1.0,0.0), dia), 1.0);
}
Preview:

Pure Depth SSAO flickering

I try to implement Pure Depth SSAO, using this tutorial, into an OpenGL ES 2.0 engine.
Now I experience flickering, which looks as if I am reading from somewhere where I have no data.
Can you see where I made a mistake, or do you have an idea how to solve the flickering problem? I need this to run on mobile and HTML5 with forward rendering; that's why I use the depth-only version of SSAO.
Many thanks
Video: Youtube
GLSL Code:
uniform sampler2D texture0;
uniform sampler2D texture1;
varying vec2 uvVarying;
vec3 GetNormalFromDepth(float depth, vec2 uv);
uniform mediump vec2 agk_resolution;
uniform float ssaoStrength;
uniform float ssaoBase;
uniform float ssaoArea;
uniform float ssaoFalloff;
uniform float ssaoRadius;
const int samples = 16;
vec3 sampleSphere[samples];
void main()
{
highp float depth = texture2D(texture0, uvVarying).r;
vec3 random = normalize( texture2D(texture1, uvVarying * agk_resolution / 64.0).rgb );
vec3 position = vec3(uvVarying, depth);
vec3 normal = GetNormalFromDepth(depth, uvVarying);
sampleSphere[0] = vec3( 0.5381, 0.1856,-0.4319);
sampleSphere[1] = vec3( 0.1379, 0.2486, 0.4430);
sampleSphere[2] = vec3( 0.3371, 0.5679,-0.0057);
sampleSphere[3] = vec3(-0.6999,-0.0451,-0.0019);
sampleSphere[4] = vec3( 0.0689,-0.1598,-0.8547);
sampleSphere[5] = vec3( 0.0560, 0.0069,-0.1843);
sampleSphere[6] = vec3(-0.0146, 0.1402, 0.0762);
sampleSphere[7] = vec3( 0.0100,-0.1924,-0.0344);
sampleSphere[8] = vec3(-0.3577,-0.5301,-0.4358);
sampleSphere[9] = vec3(-0.3169, 0.1063, 0.0158);
sampleSphere[10] = vec3( 0.0103,-0.5869, 0.0046);
sampleSphere[11] = vec3(-0.0897,-0.4940, 0.3287);
sampleSphere[12] = vec3( 0.7119,-0.0154,-0.0918);
sampleSphere[13] = vec3(-0.0533, 0.0596,-0.5411);
sampleSphere[14] = vec3( 0.0352,-0.0631, 0.5460);
sampleSphere[15] = vec3(-0.4776, 0.2847,-0.0271);
float radiusDepth = ssaoRadius/depth;
float occlusion = 0.0;
for(int i=0; i < samples; i++)
{
vec3 ray = radiusDepth * reflect(sampleSphere[i], random);
vec3 hemiRay = position + sign(dot(ray, normal)) * ray;
float occDepth = texture2D(texture0, clamp(hemiRay.xy, 0.0, 1.0)).r;
float difference = depth - occDepth;
occlusion += step(ssaoFalloff, difference) * (1.0 - smoothstep(ssaoFalloff, ssaoArea, difference));
// float rangeCheck = abs(difference) < radiusDepth ? 1.0 : 0.0;
// occlusion += (occDepth <= position.z ? 1.0 : 0.0) * rangeCheck;
}
float ao = 1.0 - ssaoStrength * occlusion * (1.0 / float(samples));
gl_FragColor = vec4(clamp(ao + ssaoBase, 0.0, 1.0));
}
vec3 GetNormalFromDepth(float depth, vec2 uv)
{
vec2 offset1 = vec2(0.0,1.0/agk_resolution.y);
vec2 offset2 = vec2(1.0/agk_resolution.x,0.0);
float depth1 = texture2D(texture0, uv + offset1).r;
float depth2 = texture2D(texture0, uv + offset2).r;
vec3 p1 = vec3(offset1, depth1 - depth);
vec3 p2 = vec3(offset2, depth2 - depth);
vec3 normal = cross(p1, p2);
normal.z = -normal.z;
return normalize(normal);
}
I carefully checked my code against the code you (Rabbid76) created for the JSFiddle and came across the if (depth > 0.0) statement, which solved the problem. So you effectively answered my question; I would like to thank you and mark your answer for that.
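For anyone hitting the same flicker, a sketch of where such a guard could go, assuming (as the depth > 0.0 test implies) that the depth texture reads 0.0 wherever nothing was rendered:

```glsl
// Sketch: only accumulate occlusion where the depth texture holds
// actual scene data; background fragments are assumed to read 0.0.
highp float depth = texture2D(texture0, uvVarying).r;
float ao = 1.0;
if (depth > 0.0)
{
    float occlusion = 0.0;
    // ... the sampling loop from the shader above ...
    ao = 1.0 - ssaoStrength * occlusion * (1.0 / float(samples));
}
gl_FragColor = vec4(clamp(ao + ssaoBase, 0.0, 1.0));
```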

Atmosphere Scattering for Earth from space and on the ground

Please give me a pointer on how to render the atmosphere of the Earth so that it is visible both from space and from the ground (as shown in the image).
a model of the earth:
Earth = new THREE.Mesh(new THREE.SphereGeometry(6700,32,32),ShaderMaterialEarth);
model of the cosmos:
cosmos= new THREE.Mesh(new THREE.SphereGeometry(50000,32,32),ShaderMaterialCosmos);
and a light source:
sun = new THREE.DirectionalLight();
I just don't know where to start. Perhaps ShaderMaterialCosmos should do this: pass it the position of the camera and calculate how each pixel should be shaded. But how?
I tried the following approach, but I get zero vectors as inputs to the fragment shader:
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html
vertexShader:
#define M_PI 3.1415926535897932384626433832795
const float ESun=1.0;
const float Kr = 0.0025;
const float Km = 0.0015;
const int nSamples = 2;
const float fSamples = 1.0;
const float fScaleDepth = 0.25;
varying vec2 vUv;
varying vec3 wPosition;
varying vec4 c0;
varying vec4 c1;
varying vec3 t0;
uniform vec3 v3CameraPos; // The camera's current position
uniform vec3 v3LightDir; // Direction vector to the light source
uniform vec3 v3InvWavelength; // 1 / pow(wavelength, 4) for RGB
uniform float fCameraHeight; // The camera's current height
const float fOuterRadius=6500.0; // The outer (atmosphere) radius
const float fInnerRadius=6371.0; // The inner (planetary) radius
const float fKrESun=Kr*ESun; // Kr * ESun
const float fKmESun=Km*ESun; // Km * ESun
const float fKr4PI=Kr*4.0*M_PI; // Kr * 4 * PI
const float fKm4PI=Km*4.0*M_PI; // Km * 4 * PI
const float fScale=1.0/(fOuterRadius-fInnerRadius); // 1 / (fOuterRadius - fInnerRadius)
const float fScaleOverScaleDepth= fScale / fScaleDepth; // fScale / fScaleDepth
const float fInvScaleDepth=1.0/0.25;
float getNearIntersection(vec3 v3Pos, vec3 v3Ray, float fDistance2, float fRadius2)
{
float B = 2.0 * dot(v3Pos, v3Ray);
float C = fDistance2 - fRadius2;
float fDet = max(0.0, B*B - 4.0 * C);
return 0.5 * (-B - sqrt(fDet));
}
float scale(float fCos)
{
float x = 1.0 - fCos;
return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}
void main() {
// Get the ray from the camera to the vertex and its length (which
// is the far point of the ray passing through the atmosphere)
vec3 v3Pos = position.xyz;
vec3 v3Ray = v3Pos - v3CameraPos;
float fFar = length(v3Ray);
v3Ray /= fFar;
// Calculate the closest intersection of the ray with
// the outer atmosphere (point A in Figure 16-3)
float fNear = getNearIntersection(v3CameraPos, v3Ray, fCameraHeight*fCameraHeight, fOuterRadius*fOuterRadius);
// Calculate the ray's start and end positions in the atmosphere,
// then calculate its scattering offset
vec3 v3Start = v3CameraPos + v3Ray * fNear;
fFar -= fNear;
float fStartAngle = dot(v3Ray, v3Start) / fOuterRadius;
float fStartDepth = exp(-fInvScaleDepth);
float fStartOffset = fStartDepth * scale(fStartAngle);
// Initialize the scattering loop variables
float fSampleLength = fFar / fSamples;
float fScaledLength = fSampleLength * fScale;
vec3 v3SampleRay = v3Ray * fSampleLength;
vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;
// Now loop through the sample points
vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
for(int i=0; i<nSamples; i++) {
float fHeight = length(v3SamplePoint);
float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
float fLightAngle = dot(v3LightDir, v3SamplePoint) / fHeight;
float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;
float fScatter = (fStartOffset + fDepth * (scale(fLightAngle) * scale(fCameraAngle)));
vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
v3SamplePoint += v3SampleRay;
}
wPosition = (modelMatrix * vec4(position,1.0)).xyz;
c0.rgb = v3FrontColor * (v3InvWavelength * fKrESun);
c1.rgb = v3FrontColor * fKmESun;
t0 = v3CameraPos - v3Pos;
vUv = uv;
}
fragmentShader:
float getMiePhase(float fCos, float fCos2, float g, float g2){
return 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos2) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
}
// Rayleigh phase function
float getRayleighPhase(float fCos2){
//return 0.75 + 0.75 * fCos2;
return 0.75 * (2.0 + 0.5 * fCos2);
}
varying vec2 vUv;
varying vec3 wPosition;
varying vec4 c0;
varying vec4 c1;
varying vec3 t0;
uniform vec3 v3LightDir;
uniform float g;
uniform float g2;
void main() {
float fCos = dot(v3LightDir, t0) / length(t0);
float fCos2 = fCos * fCos;
gl_FragColor = getRayleighPhase(fCos2) * c0 + getMiePhase(fCos, fCos2, g, g2) * c1;
gl_FragColor = c1;
}
Chapter 16 of GPU Gems 2 has a nice explanation and illustrations for achieving your goal in real time.
Basically you need to ray-cast through the atmosphere layer and evaluate the light scattering along the ray.
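Note that the GPU Gems 2 sample actually uses two variants of the shader pair ("sky from space" and "sky from atmosphere") and picks one per frame by camera height. A fragment-level sketch of that distinction, using the uniforms and the v3Ray computed in the vertex shader above:

```glsl
// Sketch: choose the code path based on whether the camera is
// inside the atmosphere (fCameraHeight, fOuterRadius as above).
float fNear;
if (fCameraHeight > fOuterRadius) {
    // Viewed from space: the ray enters the atmosphere at the
    // near intersection with the outer sphere.
    fNear = getNearIntersection(v3CameraPos, v3Ray,
                                fCameraHeight * fCameraHeight,
                                fOuterRadius * fOuterRadius);
} else {
    // Viewed from inside the atmosphere: the ray starts at the
    // camera itself, so there is no near intersection to subtract.
    fNear = 0.0;
}
```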

GLSL Shadows with Perlin Noise

So I've recently gotten into using WebGL and more specifically writing GLSL Shaders and I have run into a snag while writing the fragment shader for my "water" shader which is derived from this tutorial.
What I'm trying to achieve is a stepped shading (toon shading, cel shading...) effect on waves generated by my vertex shader, but the fragment shader seems to treat the waves as though they were still a flat plane, and the entire mesh is drawn as one solid color.
What am I missing here? The sphere works perfectly but flat surfaces are all shaded uniformly. I have the same problem if I use a cube. Each face on the cube is shaded independently but the entire face is given a solid color.
The Scene
This is how I have my test scene set up. I have two meshes using the same material - a sphere and a plane and a light source.
The Problem
As you can see the shader is working as expected on the sphere.
I enabled wireframe for this shot to show that the vertex shader (perlin noise) is working beautifully on the plane.
But when I turn the wireframe off you can see that the fragment shader seems to be receiving the same level of light uniformly across the entire plane creating this...
Rotating the plane to face the light source will change the color of the material but again the color is applied uniformly over the entire surface of the plane.
The Fragment Shader
In all its script-kid glory, lol.
uniform vec3 uMaterialColor;
uniform vec3 uDirLightPos;
uniform vec3 uDirLightColor;
uniform float uKd;
uniform float uBorder;
varying vec3 vNormal;
varying vec3 vViewPosition;
void main() {
vec4 color;
// compute direction to light
vec4 lDirection = viewMatrix * vec4( uDirLightPos, 0.0 );
vec3 lVector = normalize( lDirection.xyz );
// N * L. Normal must be normalized, since it's interpolated.
vec3 normal = normalize( vNormal );
// check the diffuse dot product against uBorder and adjust
// this diffuse value accordingly.
float diffuse = max( dot( normal, lVector ), 0.0);
if (diffuse > 0.95)
color = vec4(1.0,0.0,0.0,1.0);
else if (diffuse > 0.85)
color = vec4(0.9,0.0,0.0,1.0);
else if (diffuse > 0.75)
color = vec4(0.8,0.0,0.0,1.0);
else if (diffuse > 0.65)
color = vec4(0.7,0.0,0.0,1.0);
else if (diffuse > 0.55)
color = vec4(0.6,0.0,0.0,1.0);
else if (diffuse > 0.45)
color = vec4(0.5,0.0,0.0,1.0);
else if (diffuse > 0.35)
color = vec4(0.4,0.0,0.0,1.0);
else if (diffuse > 0.25)
color = vec4(0.3,0.0,0.0,1.0);
else if (diffuse > 0.15)
color = vec4(0.2,0.0,0.0,1.0);
else if (diffuse > 0.05)
color = vec4(0.1,0.0,0.0,1.0);
else
color = vec4(0.05,0.0,0.0,1.0);
gl_FragColor = color;
}
The Vertex Shader
vec3 mod289(vec3 x)
{
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 mod289(vec4 x)
{
return x - floor(x * (1.0 / 289.0)) * 289.0;
}
vec4 permute(vec4 x)
{
return mod289(((x*34.0)+1.0)*x);
}
vec4 taylorInvSqrt(vec4 r)
{
return 1.79284291400159 - 0.85373472095314 * r;
}
vec3 fade(vec3 t) {
return t*t*t*(t*(t*6.0-15.0)+10.0);
}
// Classic Perlin noise
float cnoise(vec3 P)
{
vec3 Pi0 = floor(P); // Integer part for indexing
vec3 Pi1 = Pi0 + vec3(1.0); // Integer part + 1
Pi0 = mod289(Pi0);
Pi1 = mod289(Pi1);
vec3 Pf0 = fract(P); // Fractional part for interpolation
vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
vec4 iy = vec4(Pi0.yy, Pi1.yy);
vec4 iz0 = Pi0.zzzz;
vec4 iz1 = Pi1.zzzz;
vec4 ixy = permute(permute(ix) + iy);
vec4 ixy0 = permute(ixy + iz0);
vec4 ixy1 = permute(ixy + iz1);
vec4 gx0 = ixy0 * (1.0 / 7.0);
vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
gx0 = fract(gx0);
vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
vec4 sz0 = step(gz0, vec4(0.0));
gx0 -= sz0 * (step(0.0, gx0) - 0.5);
gy0 -= sz0 * (step(0.0, gy0) - 0.5);
vec4 gx1 = ixy1 * (1.0 / 7.0);
vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
gx1 = fract(gx1);
vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
vec4 sz1 = step(gz1, vec4(0.0));
gx1 -= sz1 * (step(0.0, gx1) - 0.5);
gy1 -= sz1 * (step(0.0, gy1) - 0.5);
vec3 g000 = vec3(gx0.x,gy0.x,gz0.x);
vec3 g100 = vec3(gx0.y,gy0.y,gz0.y);
vec3 g010 = vec3(gx0.z,gy0.z,gz0.z);
vec3 g110 = vec3(gx0.w,gy0.w,gz0.w);
vec3 g001 = vec3(gx1.x,gy1.x,gz1.x);
vec3 g101 = vec3(gx1.y,gy1.y,gz1.y);
vec3 g011 = vec3(gx1.z,gy1.z,gz1.z);
vec3 g111 = vec3(gx1.w,gy1.w,gz1.w);
vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
g000 *= norm0.x;
g010 *= norm0.y;
g100 *= norm0.z;
g110 *= norm0.w;
vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
g001 *= norm1.x;
g011 *= norm1.y;
g101 *= norm1.z;
g111 *= norm1.w;
float n000 = dot(g000, Pf0);
float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
float n111 = dot(g111, Pf1);
vec3 fade_xyz = fade(Pf0);
vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
return 2.2 * n_xyz;
}
// Classic Perlin noise, periodic variant
float pnoise(vec3 P, vec3 rep)
{
vec3 Pi0 = mod(floor(P), rep); // Integer part, modulo period
vec3 Pi1 = mod(Pi0 + vec3(1.0), rep); // Integer part + 1, mod period
Pi0 = mod289(Pi0);
Pi1 = mod289(Pi1);
vec3 Pf0 = fract(P); // Fractional part for interpolation
vec3 Pf1 = Pf0 - vec3(1.0); // Fractional part - 1.0
vec4 ix = vec4(Pi0.x, Pi1.x, Pi0.x, Pi1.x);
vec4 iy = vec4(Pi0.yy, Pi1.yy);
vec4 iz0 = Pi0.zzzz;
vec4 iz1 = Pi1.zzzz;
vec4 ixy = permute(permute(ix) + iy);
vec4 ixy0 = permute(ixy + iz0);
vec4 ixy1 = permute(ixy + iz1);
vec4 gx0 = ixy0 * (1.0 / 7.0);
vec4 gy0 = fract(floor(gx0) * (1.0 / 7.0)) - 0.5;
gx0 = fract(gx0);
vec4 gz0 = vec4(0.5) - abs(gx0) - abs(gy0);
vec4 sz0 = step(gz0, vec4(0.0));
gx0 -= sz0 * (step(0.0, gx0) - 0.5);
gy0 -= sz0 * (step(0.0, gy0) - 0.5);
vec4 gx1 = ixy1 * (1.0 / 7.0);
vec4 gy1 = fract(floor(gx1) * (1.0 / 7.0)) - 0.5;
gx1 = fract(gx1);
vec4 gz1 = vec4(0.5) - abs(gx1) - abs(gy1);
vec4 sz1 = step(gz1, vec4(0.0));
gx1 -= sz1 * (step(0.0, gx1) - 0.5);
gy1 -= sz1 * (step(0.0, gy1) - 0.5);
vec3 g000 = vec3(gx0.x,gy0.x,gz0.x);
vec3 g100 = vec3(gx0.y,gy0.y,gz0.y);
vec3 g010 = vec3(gx0.z,gy0.z,gz0.z);
vec3 g110 = vec3(gx0.w,gy0.w,gz0.w);
vec3 g001 = vec3(gx1.x,gy1.x,gz1.x);
vec3 g101 = vec3(gx1.y,gy1.y,gz1.y);
vec3 g011 = vec3(gx1.z,gy1.z,gz1.z);
vec3 g111 = vec3(gx1.w,gy1.w,gz1.w);
vec4 norm0 = taylorInvSqrt(vec4(dot(g000, g000), dot(g010, g010), dot(g100, g100), dot(g110, g110)));
g000 *= norm0.x;
g010 *= norm0.y;
g100 *= norm0.z;
g110 *= norm0.w;
vec4 norm1 = taylorInvSqrt(vec4(dot(g001, g001), dot(g011, g011), dot(g101, g101), dot(g111, g111)));
g001 *= norm1.x;
g011 *= norm1.y;
g101 *= norm1.z;
g111 *= norm1.w;
float n000 = dot(g000, Pf0);
float n100 = dot(g100, vec3(Pf1.x, Pf0.yz));
float n010 = dot(g010, vec3(Pf0.x, Pf1.y, Pf0.z));
float n110 = dot(g110, vec3(Pf1.xy, Pf0.z));
float n001 = dot(g001, vec3(Pf0.xy, Pf1.z));
float n101 = dot(g101, vec3(Pf1.x, Pf0.y, Pf1.z));
float n011 = dot(g011, vec3(Pf0.x, Pf1.yz));
float n111 = dot(g111, Pf1);
vec3 fade_xyz = fade(Pf0);
vec4 n_z = mix(vec4(n000, n100, n010, n110), vec4(n001, n101, n011, n111), fade_xyz.z);
vec2 n_yz = mix(n_z.xy, n_z.zw, fade_xyz.y);
float n_xyz = mix(n_yz.x, n_yz.y, fade_xyz.x);
return 2.2 * n_xyz;
}
varying vec2 vUv;
varying float noise;
uniform float time;
// for the cell shader
varying vec3 vNormal;
varying vec3 vViewPosition;
float turbulence( vec3 p ) {
float w = 100.0;
float t = -.5;
for (float f = 1.0 ; f <= 10.0 ; f++ ){
float power = pow( 2.0, f );
t += abs( pnoise( vec3( power * p ), vec3( 10.0, 10.0, 10.0 ) ) / power );
}
return t;
}
varying vec3 vertexWorldPos;
void main() {
vUv = uv;
// add time to the noise parameters so it's animated
noise = 10.0 * -.10 * turbulence( .5 * normal + time );
float b = 25.0 * pnoise( 0.05 * position + vec3( 2.0 * time ), vec3( 100.0 ) );
float displacement = - 10. - noise + b;
vec3 newPosition = position + normal * displacement;
gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
// for the cell shader effect
vNormal = normalize( normalMatrix * normal );
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
vViewPosition = -mvPosition.xyz;
}
Worth Mention
I am using the Three.js library
My light source is an instance of THREE.SpotLight
First of all, shadows are a completely different problem. Your problem here is a lack of change in the per-vertex normal after displacement. Correcting this is not going to get you shadows, but your lighting will at least vary across your displaced geometry.
If you have access to partial derivatives, you can do this in the fragment shader. Otherwise, you are kind of out of luck in GL ES, due to a lack of vertex adjacency information. You could also compute per-face normals with a Geometry Shader, but that is not an option in WebGL.
This should be all of the necessary changes to implement this, note that it requires partial derivative support (optional extension in OpenGL ES 2.0).
Vertex Shader:
varying vec3 vertexViewPos; // NEW
void main() {
...
vec3 newPosition = position + normal * displacement;
vertexViewPos = (modelViewMatrix * vec4 (newPosition, 1.0)).xyz; // NEW
...
}
Fragment Shader:
#extension GL_OES_standard_derivatives : require
uniform vec3 uMaterialColor;
uniform vec3 uDirLightPos;
uniform vec3 uDirLightColor;
uniform float uKd;
uniform float uBorder;
varying vec3 vNormal;
varying vec3 vViewPosition;
varying vec3 vertexViewPos; // NEW
void main() {
vec4 color;
// compute direction to light
vec4 lDirection = viewMatrix * vec4( uDirLightPos, 0.0 );
vec3 lVector = normalize( lDirection.xyz );
// N * L. Normal must be normalized, since it's interpolated.
vec3 normal = normalize(cross (dFdx (vertexViewPos), dFdy (vertexViewPos))); // UPDATED
...
}
To enable partial derivative support in WebGL you need to check the extension like this:
var ext = gl.getExtension("OES_standard_derivatives");
if (!ext) {
alert("OES_standard_derivatives does not exist on this machine");
return;
}
// proceed with the shaders above.
