OpenGL ES diffuse shader not working - opengl-es

I'm implementing simple ray tracing for spheres in a fragment shader and I'm currently working on the function that computes color for a diffusely shaded sphere. Here is the code for the function:
vec3 shadeSphere(vec3 point, vec4 sphere, vec3 material) {
vec3 color = vec3(1.,2.,3.);
vec3 N = (point - sphere.xyz) / sphere.w;
vec3 diffuse = max(dot(Ldir, N), 0.0);
vec3 ambient = material/5;
color = ambient + Lrgb * diffuse * max(0.0, N * Ldir);
return color;
}
I'm getting errors on the two lines where I'm using the max function. I took the line assigning max(dot(Ldir, N), 0.0) from the WebGL cheat sheet, which uses the call max(dot(ec_light_dir, ec_normal), 0.0);
For some reason, my implementation is not working as I'm getting the error:
ERROR: 0:38: 'max' : no matching overloaded function found
What could be the problem with either of these max calls I'm using?

There are two max calls in your shader. It's the second one that's the problem.
max(0.0, N * Ldir) makes no sense: N is a vec3, and there's no overload of max that takes max(float, vec3). There is an overload max(vec3, float), so swap the arguments to
`max(N * Ldir, 0.0)`
and it might work. As written, your shader is not a valid ES 2.0 shader. If some driver accepts it, that driver is not spec compliant (i.e. the driver has a bug); WebGL tries to follow the spec 100%.

The dot product is a scalar value, not a vec3; you need to either store it in a float
float diffuse = max(dot(Ldir, N),0.0);
or initialize a vec3 with it
vec3 diffuse = vec3(max(dot(Ldir, N),0.0));
Same goes for the ambient term. Usually both diffuse and ambient terms are just scalars.
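Putting the two fixes together, a corrected version of the function might look like the sketch below (it assumes Ldir and Lrgb are the light direction and light color declared elsewhere in the shader, as in the question; the redundant second max factor from the original is dropped):
vec3 shadeSphere(vec3 point, vec4 sphere, vec3 material) {
    vec3 N = (point - sphere.xyz) / sphere.w;   // unit surface normal
    float diffuse = max(dot(Ldir, N), 0.0);     // scalar Lambert term
    vec3 ambient = material / 5.0;              // float literal: ES 2.0 has no implicit int-to-float conversion
    return ambient + Lrgb * diffuse;
}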

Related

When does interpolation happen between the vertex and fragment shaders in this WebGL program?

Background
I'm looking at this example code from the WebGL2 library PicoGL.js.
It describes a single triangle (three vertices: (-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)), each of which is assigned a color (red, green, blue) by the vertex shader:
#version 300 es
layout(location=0) in vec4 position;
layout(location=1) in vec3 color;
out vec3 vColor;
void main() {
vColor = color;
gl_Position = position;
}
The vColor output is passed to the fragment shader:
#version 300 es
precision highp float;
in vec3 vColor;
out vec4 fragColor;
void main() {
fragColor = vec4(vColor, 1.0);
}
and together they render the following image:
Question(s)
My understanding is that the vertex shader is called once per vertex, whereas the fragment shader is called once per pixel.
However, the fragment shader references the vColor variable, which is only assigned once per call to each vertex, but there are many more pixels than vertices!
The resulting image clearly shows a color gradient - why?
Does WebGL automatically interpolate values of vColor for pixels in between vertices? If so, how is the interpolation done?
Yes, WebGL automatically interpolates between the values supplied to the 3 vertices.
Copied from this site
A linear interpolation from one value to another would be this formula:
result = (1 - t) * a + t * b
Where t is a value from 0 to 1 representing some position between a and b. 0 at a and 1 at b.
For varyings though WebGL uses this formula
result = ((1 - t) * a / aW + t * b / bW) / ((1 - t) / aW + t / bW)
Where aW is the W that was set on gl_Position.w when the varying was set to a, and bW is the W that was set on gl_Position.w when the varying was set to b.
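Purely as an illustration of the arithmetic (the GPU does this for you during rasterization; this is not code you would write in a shader), the formula could be expressed as a small helper, where a and b are the values the varying was set to at the two vertices, aW and bW are the corresponding gl_Position.w values, and t runs from 0 to 1:
float interpolateVarying(float a, float aW, float b, float bW, float t) {
    // numerator and denominator of the perspective-correct formula above
    float numerator = (1.0 - t) * a / aW + t * b / bW;
    float denominator = (1.0 - t) / aW + t / bW;
    return numerator / denominator;
}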
The site linked above shows how that formula generates perspective-correct texture mapping coordinates when interpolating varyings. It also shows an animation of the varyings changing.
The Khronos OpenGL wiki - Fragment Shader has the answer. Namely:
Each fragment has a Window Space position, a few other values, and it contains all of the interpolated per-vertex output values from the last Vertex Processing stage.
(Emphasis mine)

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false-color information that I render via a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture, and the fine detail is lost: fine lines in the texture can't be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure about is how to decide whether (and how much) to do this. When the user is zoomed into a triangle, I want the standard lookup. When they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I don't have a solid enough grasp of the transforms and spaces involved to work that out.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;
void main() {
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
gl_Position = projectionMatrix * mvPosition;
vec3 transformedNormal = normalMatrix * vec3( normal );
vNormal = normalize( transformedNormal );
vViewDirection = normalize(mvPosition.xyz);
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example of one image, where the user has zoomed perspective in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely-perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering all pixels with the reddest color of the corresponding array of texels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture; that is, I'm doing something like this:
gl_FragColor = texture2D(mappingtexture, texture2D(inputtexture, vUv).gr);
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in realtime. I think it's actually a very elegant solution to my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is technically defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one above.
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = (dU*(uInputTextureResolution));
This allows me to do things like this:
float x = ... the u coordinate in pixels in the input texture
float y = ... the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x,y);
// Extremum mapping:
if(pixel_ratio>2.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x+1.0, y));
}
if(pixel_ratio>3.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x-1.0, y));
}
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, since it runs before rasterization and is therefore agnostic of the output resolution. In the fragment shader, however, you can use dFdx, dFdy and fwidth via the GL_OES_standard_derivatives extension (which is available pretty much everywhere) to estimate the sampling footprint.
If you're not updating the texture in realtime a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.
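For reference, a minimal fragment-shader sketch of that footprint estimate might look like this (vUv and uInputTextureResolution are the names from the question's solution; the final visualization line is purely illustrative):
#extension GL_OES_standard_derivatives : enable
precision highp float;

varying vec2 vUv;
uniform float uInputTextureResolution;

void main() {
    // fwidth(vUv) approximates how much the UV coordinate changes across one screen pixel
    vec2 uvPerPixel = fwidth(vUv);
    // roughly how many input-texture texels fall under a single screen pixel
    float texelsPerPixel = max(uvPerPixel.x, uvPerPixel.y) * uInputTextureResolution;
    gl_FragColor = vec4(vec3(clamp(texelsPerPixel / 4.0, 0.0, 1.0)), 1.0); // visualize the footprint
}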

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position simply from the depth stored in the normal texture: you unproject the x and y and multiply by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly)
Here is my new method. I pass the normal texture that stores the normal x, y and z in the RGB channels and the depth in the w. In the SSAO shader I need to get the position, and this is how I'm doing it:
//viewport is a vec2 of the viewport width and height
//invProj is a mat4 using camera.projectionMatrixInverse (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
vec2 frag_coord = gl_FragCoord.xy/viewport;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However when I render out the positions it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera's far - near distance (I was using the normalized depth value, but you need the world depth value).
I created a function to extract the position from the normal/depth texture like so:
First in the depth capture pass (fragment shader)
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
/// <summary>
/// This function will get the 3d world position from the Normal texture containing depth in its w component
/// </summary>
vec3 get_world_pos( vec2 uv )
{
vec2 frag_coord = uv;
float depth = texture2D(tNormals, frag_coord).w;
float unprojDepth = depth * linearDepth - 1.0;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
vec3 eye_ray = normalize((invProj * device_normal).xyz);
vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
return pos;
}
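With that helper in place, the earlier lookup in the SSAO shader reduces to a single call (using the same vUv as before):
vec3 srcPosition = get_world_pos(vUv);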

Get the view coordinate in a point sprite

If you pass a varying view-space position from the vertex shader to a fragment shader then the fragment shader can know the fragment's position relative to the camera (0,0,0 in view-space). This is useful for per-pixel lighting etc. E.g.:
precision mediump float;
attribute vec3 vertex;
uniform mat4 pMatrix, mvMatrix;
varying vec4 pos;
void main() {
pos = (mvMatrix * vec4(vertex,1.0));
gl_Position = pMatrix * pos;
}
However, if you are rendering GL_POINTS and setting gl_PointSize in the vertex shader, how can the fragment shader determine each fragment's position (the pos passed in the example above will be for the sprite's centre-point)?
Simple answer: stop using point sprites. Really, they're terrible.
Less simple answer: stop passing the view-space position to the fragment shader entirely. Instead, use gl_FragCoord to compute the view-space position, based on viewport data and so forth. There's even sample GLSL code for it:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
You'll need to give your fragment shader the viewport and invPersMatrix values. gl_DepthRange is built-in. eyePos is what you're looking for.
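Wired into a minimal ES 2.0 fragment shader, the snippet might look like the sketch below (the uniform names viewport and invPersMatrix are simply the ones used above; viewport holds x, y, width, height as passed to gl.viewport):
precision mediump float;

uniform vec4 viewport;       // x, y, width, height
uniform mat4 invPersMatrix;  // inverse of the projection matrix

void main() {
    vec4 ndcPos;
    ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / viewport.zw - 1.0;
    ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
               (gl_DepthRange.far - gl_DepthRange.near);
    ndcPos.w = 1.0;
    vec4 clipPos = ndcPos / gl_FragCoord.w;
    vec4 eyePos = invPersMatrix * clipPos;
    // eyePos.xyz is this fragment's view-space position; use it for lighting, etc.
    gl_FragColor = vec4(eyePos.xyz, 1.0);
}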
There's probably a faster way to do it that takes advantage of the fact that you're drawing a screen-aligned quad. It would involve the point size and using gl_PointCoord.

How can a fragment shader use the color values of the previously rendered frame?

I am learning to use shaders in OpenGL ES.
As an example: Here's my playground fragment shader which takes the current video frame and makes it grayscale:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
highp float avrg = (theColor[0] + theColor[1] + theColor[2]) / 3.0;
theColor[0] = avrg; // r
theColor[1] = avrg; // g
theColor[2] = avrg; // b
gl_FragColor = theColor;
}
theColor represents the current pixel. It would be cool to also get access to the previous pixel at this same coordinate.
For the sake of curiosity, I would like to add or multiply the color of the current pixel with the color of the pixel in the previously rendered frame.
How could I keep the previous pixels around and pass them in to my fragment shader in order to do something with them?
Note: It's OpenGL ES 2.0 on the iPhone.
You need to render the previous frame to a texture, using a Framebuffer Object (FBO), then you can read this texture in your fragment shader.
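A minimal sketch of that idea on the shader side, assuming the previous frame has already been rendered into an FBO-attached texture bound to a second sampler (previousFrame is a name invented here, not from the question):
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;     // current video frame
uniform sampler2D previousFrame;  // texture the last frame was rendered into

void main() {
    highp vec4 current = texture2D(videoFrame, textureCoordinate);
    highp vec4 previous = texture2D(previousFrame, textureCoordinate);
    gl_FragColor = current * previous; // or current + previous
}
In practice you ping-pong between two textures/FBOs, since a texture can't be read and written in the same draw call.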
The dot intrinsic function that Damon refers to is a code implementation of the mathematical dot product. I'm not supremely familiar with OpenGL so I'm not sure what the exact function call is, but mathematically a dot product goes like this :
Given a vector a and a vector b, the 'dot' product a 'dot' b produces a scalar result c:
c = a.x * b.x + a.y * b.y + a.z * b.z
Most modern graphics hardware (and most CPUs, for that matter) can perform this kind of operation in one pass. In your particular case, you could compute your average with a dot product like so:
highp vec4 weights = vec4(1.0/3.0, 1.0/3.0, 1.0/3.0, 0.0);
I always get the 4th component in homogeneous vectors and matrices mixed up for some reason.
highp float avrg = dot(theColor, weights);
This will multiply each component of theColor by 1/3 (and the 4th component by 0), and then add them together.
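Put back into the question's shader, that would look roughly like this (same varying and uniform names as above; the alpha channel is left out of the average):
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;

void main() {
    highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
    highp float avrg = dot(theColor.rgb, vec3(1.0 / 3.0));
    gl_FragColor = vec4(vec3(avrg), theColor.a);
}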
