How can a fragment shader use the color values of the previously rendered frame? - opengl-es

I am learning to use shaders in OpenGL ES.
As an example: Here's my playground fragment shader which takes the current video frame and makes it grayscale:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
    highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
    highp float avrg = (theColor[0] + theColor[1] + theColor[2]) / 3.0;
    theColor[0] = avrg; // r
    theColor[1] = avrg; // g
    theColor[2] = avrg; // b
    gl_FragColor = theColor;
}
theColor represents the current pixel. It would be cool to also get access to the previous pixel at this same coordinate.
For the sake of curiosity, I would like to add or multiply the color of the current pixel with the color of the same pixel in the previously rendered frame.
How could I keep the previous pixels around and pass them in to my fragment shader in order to do something with them?
Note: It's OpenGL ES 2.0 on the iPhone.

You need to render the previous frame to a texture using a Framebuffer Object (FBO); then you can sample that texture in your fragment shader.
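A minimal sketch of the host-side setup in GL ES 2.0 C code (hedged: width, height, previousFrameUniformLocation and the previousFrame sampler name are placeholders, and in practice you ping-pong between two such textures, because you cannot sample the texture you are currently rendering into):
// Create a texture to hold the previous frame and attach it to an FBO.
GLuint prevTexture, prevFBO;
glGenTextures(1, &prevTexture);
glBindTexture(GL_TEXTURE_2D, prevTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &prevFBO);
glBindFramebuffer(GL_FRAMEBUFFER, prevFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, prevTexture, 0);

// Each frame: render into prevFBO first, then bind the default framebuffer again,
// bind prevTexture on a spare unit and point a second sampler2D uniform at it.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, prevTexture);
glUniform1i(previousFrameUniformLocation, 1); // matches "uniform sampler2D previousFrame;"
In the shader you can then sample both videoFrame and previousFrame at textureCoordinate and add or multiply them.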

The dot intrinsic function that Damon refers to is a code implementation of the mathematical dot product. I'm not supremely familiar with OpenGL (in GLSL the call is simply dot()), but mathematically a dot product goes like this:
Given a vector a and a vector b, the 'dot' product a 'dot' b produces a scalar result c:
c = a.x * b.x + a.y * b.y + a.z * b.z
Most modern graphics hardware (and CPUs, for that matter) is capable of performing this kind of operation in one pass. In your particular case, you could compute your average easily with a dot product like so:
highp vec4 weights = vec4(1.0/3.0, 1.0/3.0, 1.0/3.0, 0.0);
I always get the 4th component in homogeneous vectors and matrices mixed up for some reason.
highp float avg = dot(theColor, weights);
This will multiply each component of theColor by 1/3 (and the fourth component by 0), and then add them together.
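Plugging that into the question's shader would look something like this (a sketch, reusing the question's names):
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;

void main() {
    highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
    // Equal weights of 1/3 on r, g, b (and 0 on alpha) reproduce the manual average.
    highp float avg = dot(theColor, vec4(1.0/3.0, 1.0/3.0, 1.0/3.0, 0.0));
    gl_FragColor = vec4(avg, avg, avg, theColor.a);
}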

Related

When does interpolation happen between the vertex and fragment shaders in this WebGL program?

Background
I'm looking at this example code from the WebGL2 library PicoGL.js.
It describes a single triangle (three vertices: (-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)), each of which is assigned a color (red, green, blue) by the vertex shader:
#version 300 es
layout(location=0) in vec4 position;
layout(location=1) in vec3 color;
out vec3 vColor;
void main() {
    vColor = color;
    gl_Position = position;
}
The vColor output is passed to the fragment shader:
#version 300 es
precision highp float;
in vec3 vColor;
out vec4 fragColor;
void main() {
    fragColor = vec4(vColor, 1.0);
}
and together they render the following image:
Question(s)
My understanding is that the vertex shader is called once per vertex, whereas the fragment shader is called once per pixel.
However, the fragment shader references the vColor variable, which is only assigned once per call to each vertex, but there are many more pixels than vertices!
The resulting image clearly shows a color gradient - why?
Does WebGL automatically interpolate values of vColor for pixels in between vertices? If so, how is the interpolation done?
Yes, WebGL automatically interpolates between the values supplied to the 3 vertices.
Copied from this site
A linear interpolation from one value to another would be this formula:
result = (1 - t) * a + t * b
Where t is a value from 0 to 1 representing some position between a and b. 0 at a and 1 at b.
For varyings, though, WebGL uses this formula:
result = ((1 - t) * a / aW + t * b / bW) / ((1 - t) / aW + t / bW)
Where aW is the W that was set on gl_Position.w when the varying was set to a, and bW is the W that was set on gl_Position.w when the varying was set to b.
The site linked above shows how that formula generates perspective-correct texture-mapping coordinates when interpolating varyings. It also shows an animation of the varyings changing.
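Written out as a GLSL-style helper for a single varying component (purely illustrative; the rasterizer does this for you):
// a, b: the varying's values at the two vertices; aW, bW: their gl_Position.w values;
// t: the 0..1 position between them.
float interpolateVarying(float a, float aW, float b, float bW, float t) {
    float numerator   = (1.0 - t) * a / aW + t * b / bW;
    float denominator = (1.0 - t) / aW + t / bW;
    return numerator / denominator;
}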
The Khronos OpenGL wiki - Fragment Shader page has the answer. Namely:
Each fragment has a Window Space position, a few other values, and it contains all of the interpolated per-vertex output values from the last Vertex Processing stage.
(Emphasis mine)

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false-color information that I render via a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture and the fine detail is lost: thin lines in the texture can no longer be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure how to do is to determine whether (and by how much) to do this. When the user is zoomed into a triangle, I want the standard lookup. When they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I still don't have a solid enough grasp of the transforms and spaces involved to work out the details.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;
void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vPosition = (modelMatrix * vec4( position, 1.0 )).xyz;
    gl_Position = projectionMatrix * mvPosition;
    vec3 transformedNormal = normalMatrix * vec3( normal );
    vNormal = normalize( transformedNormal );
    vViewDirection = normalize(mvPosition.xyz);
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example image, where the user has zoomed the perspective camera in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering every pixel with the reddest color of the corresponding array of texels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture; that is, I'm doing something like this:
gl_FragColor = texture2D(mappingtexture, vec2(inputtexture.g, inputtexture.r));
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in realtime. I think it's actually a very elegant solution to my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is technically defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one above.
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = (dU*(uInputTextureResolution));
This allows me to do things like this:
float x = ... the u coordinate in pixels in the input texture
float y = ... the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x,y);
// Extremum mapping:
if(pixel_ratio>2.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x+1.0, y));
}
if(pixel_ratio>3.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x-1.0, y));
}
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, since that is a pre-rasterization stage and therefore resolution-agnostic, but in the fragment shader you can use dFdx, dFdy and fwidth (via the GL_OES_standard_derivatives extension, which is available pretty much everywhere) to estimate the sampling footprint.
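For reference, fwidth collapses the dUdx/dUdy estimate from the solution above into one line (a sketch; it uses the sum of absolute derivatives rather than their Euclidean length, so the value differs slightly):
// fwidth(x) is abs(dFdx(x)) + abs(dFdy(x)): a cheap bound on how much U changes per screen pixel.
float pixel_ratio = fwidth(vUv.x) * uInputTextureResolution;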
If you're not updating the texture in realtime, a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.

OpenGL - trouble passing ALL data into shader at once

I'm trying to display textures on quads (2 triangles) using OpenGL 3.3.
Drawing a texture on one quad works great; however, when I have ONE texture (a sprite atlas) and use 2 quads (objects) to display different parts of that atlas, the quads end up switching back and forth in the draw loop (one disappears, then appears again, etc.) at their individual translated locations.
The way I'm drawing this is not the standard DrawElements call for each quad (or object); instead I package all quads, UVs, translations, etc. and send them up to the shader as one big chunk (as "in" variables). Vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
in vec3 vertexPosition_modelspace;
in vec3 vertexColor;
in vec2 vertexUV;
in vec3 translation;
in vec4 rotation;
in vec3 scale;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
// Output data ; will be interpolated for each fragment.
out vec3 fragmentColor;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
...
void main(){
    mat4 Model = mat4(1.0);
    mat4 t = translationMatrix(translation);
    mat4 s = scaleMatrix(scale);
    mat4 r = rotationMatrix(vec3(rotation), rotation[3]);
    Model *= t * r * s;
    gl_Position = MVP * Model * vec4 (vertexPosition_modelspace,1); //* MVP;
    // The color of each vertex will be interpolated
    // to produce the color of each fragment
    fragmentColor = vertexColor;
    // UV of the vertex. No special space for this one.
    UV = vertexUV;
}
Is the vertex shader working the way I think it should with a large chunk of data, i.e. drawing each segment that is passed up individually? It does not seem like it. Is my train of thought correct on this?
For completeness this is my fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec3 fragmentColor;
// Interpolated values from the vertex shaders
in vec2 UV;
// Ouput data
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main()
{
    // Output color = color of the texture at the specified UV
    color = texture2D( myTextureSampler, UV ).rgba;
}
A request for more information was made, so here is how I bind this data for the vertex shader. The following code is just the part I use for my translations; I have more for color, rotation, scale, UV, etc.:
gl.BindBuffer(gl.ARRAY_BUFFER, tvbo)
gl.BufferData(gl.ARRAY_BUFFER, len(data.Translations)*4, gl.Ptr(data.Translations), gl.DYNAMIC_DRAW)
tAttrib := uint32(gl.GetAttribLocation(program, gl.Str("translation\x00")))
gl.EnableVertexAttribArray(tAttrib)
gl.VertexAttribPointer(tAttrib, 3, gl.FLOAT, false, 0, nil)
...
gl.DrawElements(gl.TRIANGLES, int32(len(elements)), gl.UNSIGNED_INT, nil)
You have just a single sampler2D, which means you have just a single texture at your disposal, regardless of how many textures you bind.
If you really need to pass the data as a single block, then you should add one sampler per texture you have. I'm not sure how many objects/textures you have, but with this way of passing data you are limited by the hardware's texture-unit limit.
You also need to add another value to your data telling which primitive uses which texture unit, and then select the right texture sampler inside the fragment shader.
You should add stuff like this:
// vertex
in int usedtexture;
flat out int txr; // integer varyings must be flat-qualified in GLSL 3.30

void main()
{
    txr = usedtexture;
}

// fragment
uniform sampler2D myTextureSampler0;
uniform sampler2D myTextureSampler1;
uniform sampler2D myTextureSampler2;
uniform sampler2D myTextureSampler3;

in vec2 UV;
flat in int txr;
out vec4 color;

void main()
{
    if      (txr==0) color = texture2D( myTextureSampler0, UV ).rgba;
    else if (txr==1) color = texture2D( myTextureSampler1, UV ).rgba;
    else if (txr==2) color = texture2D( myTextureSampler2, UV ).rgba;
    else if (txr==3) color = texture2D( myTextureSampler3, UV ).rgba;
    else             color = vec4(0.0, 0.0, 0.0, 0.0);
}
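On the CPU side you then bind each texture to its own texture unit and point the samplers at those units, roughly like this (a sketch in the question's go-gl style; texture0, texture1, ... are placeholder texture ids):
// Bind each texture to its own texture unit.
gl.ActiveTexture(gl.TEXTURE0)
gl.BindTexture(gl.TEXTURE_2D, texture0)
gl.ActiveTexture(gl.TEXTURE1)
gl.BindTexture(gl.TEXTURE_2D, texture1)
// ... and so on for texture2, texture3

// Tell each sampler uniform which unit to read from.
gl.Uniform1i(gl.GetUniformLocation(program, gl.Str("myTextureSampler0\x00")), 0)
gl.Uniform1i(gl.GetUniformLocation(program, gl.Str("myTextureSampler1\x00")), 1)
// ... and so on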
This way of passing is not good, for these reasons:
The number of usable textures is limited by the hardware texture-unit limit.
If your rendering needs additional textures like normal/shininess/light maps, then you need more than one texture per object type, and your limit is suddenly divided by 2, 3, 4...
You need if/switch statements inside the fragment shader, which can slow things down considerably. Yes, you can do it branchless, but then you would need to access all the textures all the time, putting load on the GPU for no reason...
This kind of passing is suitable when all the textures are inside a single image (the texture atlas you mentioned), which can be faster this way, and is reasonable for scenes with a small number of object types (or materials) but a large object count...
Since I needed more input on this matter, I linked this page on reddit, and someone was able to help me with one response! Anyway, the reddit link is here:
https://www.reddit.com/r/opengl/comments/3gyvlt/opengl_passing_all_scene_data_into_shader_each/
The issue of the two textures/quads flickering after passing all vertices to the vertex shader as one data structure was that my element indices were off. I needed to determine the correct indices for each set of four vertices making up a two-triangle quad. I simply had to do something like this:
vertexInfo.Elements = append(vertexInfo.Elements, uint32(idx*4), uint32(idx*4+1), uint32(idx*4+2), uint32(idx*4), uint32(idx*4+2), uint32(idx*4+3))
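Spelled out as a loop, the pattern is (a sketch; quadCount and elements stand in for my actual variables):
// Each quad idx owns vertex slots idx*4 .. idx*4+3; its two triangles
// share the first and third corners.
for idx := 0; idx < quadCount; idx++ {
    base := uint32(idx * 4)
    elements = append(elements,
        base, base+1, base+2, // first triangle
        base, base+2, base+3, // second triangle
    )
}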

OpenGL shader: what is gl_FragColor's alpha component?

I think this will have a fairly simple answer, but I can't find it by googling.
It's an OpenGL ES shader question; I am using the cocos2d-x engine.
This is my fragment shader code.
precision lowp float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec4 u_lightPosition;
void main()
{
    vec4 col=texture2D(u_texture,v_texCoord);
    mediump float lightDistance = distance(gl_FragCoord, u_lightPosition);
    mediump float alpha = 100.0/lightDistance;
    alpha = min(alpha, 1.0);
    alpha = max(alpha, 0.0);
    col.w = alpha;
    //col.a = alpha;
    gl_FragColor=col;
}
I just want to apply opacity within a circular area, so I change the color's w value because I thought it was the alpha value of the texel. But the result was very odd.
I'm afraid it is not the alpha value. Even when I set the value to 1.0 for testing, the whole sprite becomes bright and white.
The vertex shader is very ordinary; there is nothing special worth attaching. Any ideas, please?
Updated: for reference, I attached some result images: case 1) col.w = alpha; case 2) col.w = 1.0; plus the normal texture before applying the shader.
The GL ES 2.0 reference card defines:
Variable: mediump vec4 gl_FragColor
Description: fragment color
Units or coordinate system: RGBA color
It further states:
Vector Components In addition to array numeric subscript syntax,
names of vector components are denoted by a single letter. Components
can be swizzled and replicated, e.g.: pos.xx, pos.zy
{x, y, z, w}: use when accessing vectors that represent points or normals
{r, g, b, a}: use when accessing vectors that represent colors
{s, t, p, q}: use when accessing vectors that represent texture coordinates
So, sure, using .a would be more idiomatic but it's explicitly the case that what you store to .w is the output alpha for gl_FragColor.
To answer the question you've set as a title rather than the question in the body, the value returned by texture2D will be whatever is correct for that texture. Either an actual stored value if the texture is GL_RGBA or GL_LUMINANCE_ALPHA or else 1.0.
So you're outputting alpha correctly.
If your output alpha isn't having the mixing effect that you expect then you must have glBlendFunc set to something unexpected, possibly involving GL_CONSTANT_COLOR.
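For reference, the two usual setups look like this on the CPU side (a hedged sketch; cocos2d-x normally manages blend state itself, and its sprite textures are often premultiplied-alpha, which changes which function you want):
// Straight (non-premultiplied) alpha:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Premultiplied alpha (common for cocos2d-x sprite textures):
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// With premultiplied alpha, writing only col.a without also scaling col.rgb
// does not darken the sprite, which can give unexpectedly bright results.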

get the view coordinate in a point sprite

If you pass a varying view-space position from the vertex shader to a fragment shader then the fragment shader can know the fragment's position relative to the camera (0,0,0 in view-space). This is useful for per-pixel lighting etc. E.g.:
precision mediump float;
attribute vec3 vertex;
uniform mat4 pMatrix, mvMatrix;
varying vec4 pos;
void main() {
    pos = (mvMatrix * vec4(vertex,1.0));
    gl_Position = pMatrix * pos;
}
However, if you are rendering GL_POINTS and setting gl_PointSize in the vertex shader, how can the fragment shader determine each fragment's position (since the pos passed in the example above will be that of the sprite's centre point)?
Simple answer: stop using point sprites. Really, they're terrible.
Less simple answer: stop passing the view-space position to the fragment shader entirely. Instead, use gl_FragCoord to compute the view-space position, based on viewport data and so forth. There's even sample GLSL code for it:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
           (gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
You'll need to give your fragment shader the viewport and invPersMatrix values. gl_DepthRange is built-in. eyePos is what you're looking for.
There's probably a faster way to do it that takes advantage of the fact that you're drawing a screen-aligned quad. It would involve the point size and using gl_PointCoord.
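One possible sketch of that idea (hedged: it assumes the vertex shader also passes the sprite's half-size in view-space units as a hypothetical varying called viewRadius, and that the sprite is screen-aligned at the centre's depth):
precision mediump float;
varying vec4 pos;          // view-space centre, from the vertex shader above
varying float viewRadius;  // hypothetical: half the point size converted to view-space units

void main() {
    // gl_PointCoord runs from 0 to 1 across the sprite, with y increasing downwards.
    vec2 local = (gl_PointCoord - 0.5) * 2.0;
    local.y = -local.y;
    vec3 fragViewPos = pos.xyz + vec3(local * viewRadius, 0.0);
    // ... use fragViewPos for per-pixel lighting etc.
    gl_FragColor = vec4(1.0);
}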
