How to make a fragment shader replace white with alpha - opengl-es

I am trying to come up with an OpenGL ES fragment shader that will replace white pixels with alpha. The image with the checkered background is what I want; the checkered background represents the image after alpha conversion. Any tips? Normally I'd hate asking this here, but I can't find anything on it.

Getting the "white pixels" as in the image you posted amounts to computing a grayscale component, i.e. summing the RGB values and dividing by 3. The output RGB channels are then all 0.0 in your case, and the alpha equals the grayscale value...
lowp vec4 textureSample = texture2D(uniformTexture, textureCoordinate);
lowp float grayscaleComponent = textureSample.x * (1.0 / 3.0) + textureSample.y * (1.0 / 3.0) + textureSample.z * (1.0 / 3.0);
gl_FragColor = vec4(0.0, 0.0, 0.0, grayscaleComponent);

Properly speaking, the grayscale value is 0.2126 * R + 0.7152 * G + 0.0722 * B:
http://en.wikipedia.org/wiki/Grayscale
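For example, a minimal sketch using the built-in dot() with those weights (reusing the uniformTexture and textureCoordinate names from above) could be:
// Sketch: weighted (BT.709) luminance used as the alpha; RGB output stays black.
lowp vec4 textureSample = texture2D(uniformTexture, textureCoordinate);
lowp float luminance = dot(textureSample.rgb, vec3(0.2126, 0.7152, 0.0722));
gl_FragColor = vec4(0.0, 0.0, 0.0, luminance);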

Related

GLSL - Blur only the red channel

So, I have these two functions in GLSL: one that splits a texture into its RGB channels and then displaces them individually, and another that just blurs a texture. I want to combine them, but I want to be able to blur only the channel I'm displacing. So, for instance, I might want to blur the red channel in the rgbShift function.
The problem is that the red channel is a single float, and the blur function expects a full sampler2D image so it can apply UVs and so on. I guess I need a way to blur just a single float? I'm not very experienced with GLSL and I've been trying to figure this out for a few days now. I'll be very thankful for any pointers or suggestions at all.
The GLSL functions can be viewed below.
vec4 blur5(sampler2D image, vec2 uv, vec2 resolution, vec2 direction) {
    vec4 color = vec4(0.0);
    vec2 offset = (vec2(1.3333333333333333) * direction) / resolution;
    color += texture2D(image, uv) * 0.29411764705882354;
    color += texture2D(image, uv + offset) * 0.35294117647058826;
    color += texture2D(image, uv - offset) * 0.35294117647058826;
    return color;
}

vec3 rgbShift(sampler2D textureimage, vec2 uv, float offset) {
    float displace = sin(PI * vUv.y) * offset;
    float r = texture2D(textureimage, uv + displace).r;
    float g = texture2D(textureimage, uv).g;
    float b = texture2D(textureimage, uv - displace).b;
    return vec3(r, g, b);
}
Here's me thinking out loud:
I guess I want to do something like this:
vec4 blurredTexture = blur5(textureImage);
float red = texture2D(blurredTexture, uv + displace).r;
Or this:
float redChannel = texture2D(blurredTexture, uv + displace).r;
vec4 blurredRedChannel = blur5(redChannel);
But neither will work because I can't figure out how to convert the types. I either need to convert the blurred vec4 into a sampler2D for the rgbShift function, or the red channel float into a sampler2D for the blur function. Is it even possible to convert a value into a sampler2D one way or another?
Maybe I need some other solution where I don't need to convert to a sampler2D at all.
Is it even possible to convert a value into a sampler2D one way or another?
Sort of. You'd need to write that value to a temporary texture, then bind that texture and run a second pass that samples from it. That's probably overkill for the simple filtering you're trying to do.
Maybe I need some other solution where I don't need to convert to a sampler2D at all.
A simpler solution is to combine those two functions into one:
vec3 shiftAndBlur(sampler2D image, vec2 uv, float offset, vec2 resolution, vec2 direction) {
    // Renamed the local blur offset so it does not redeclare the 'offset' parameter.
    // PI is assumed to be defined elsewhere, as in the original rgbShift.
    vec2 blurOffset = (vec2(1.3333333333333333) * direction) / resolution;
    float displace = sin(PI * uv.y) * offset;
    float r = texture2D(image, uv + displace).r * 0.29411764705882354
            + texture2D(image, uv + displace + blurOffset).r * 0.35294117647058826
            + texture2D(image, uv + displace - blurOffset).r * 0.35294117647058826;
    float g = texture2D(image, uv).g;
    float b = texture2D(image, uv - displace).b;
    return vec3(r, g, b);
}
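As a usage sketch (the uniform and varying names below are assumptions, not from the question), it could be called from main like this:
// Hypothetical usage; uTexture, uResolution and vUv are assumed names.
precision mediump float;
uniform sampler2D uTexture;
uniform vec2 uResolution;
varying vec2 vUv;

void main() {
    // Shift/blur horizontally with a small displacement amount.
    vec3 shifted = shiftAndBlur(uTexture, vUv, 0.01, uResolution, vec2(1.0, 0.0));
    gl_FragColor = vec4(shifted, 1.0);
}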

WebGL: Change color saturation or luminosity in fragment shader

I found this great HSL picker page, and I'm wondering if it's possible to achieve a similar effect in WebGL. I'm passing some color to my fragment shader, for example #FF7400. What is the easiest way to convert it to HSL and change its luminosity, or to have a smooth transition to black (luminosity equal to 0)? I want to make clouds on my page whose color (luminosity) depends on how far they are from the sun. Thanks in advance for any help.
Thanks for the great links, but I think I found a much simpler way to make an easy color transition: all I need is the GLSL built-in T mix(T x, T y, float a), a linear blend of x and y.
This is the code I use in the Shadertoy editor:
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 orange = vec4(0.533, 0.25, 0.145, 1.0);
    vec4 blue = vec4(0.18, 0.23, 0.27, 1.0);
    vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 white = vec4(1.0, 1.0, 1.0, 1.0);
    float ratio = iResolution.x / iResolution.y;
    float PI = 3.14159265359;
    vec4 mixC = mix(orange, blue, sin(ratio * uv.y));
    mixC = mix(mixC, black, cos(2.0 * PI * uv.x) / ratio);
    mixC = mix(mixC, black, cos(2.0 * PI * uv.y) / ratio);
    mixC = mix(mixC, white, 0.1);
    fragColor = mixC;
}
As you can see, I've made a transition between four colors with just a couple of lines of code, and the result looks like this:
I think of the fragment shader as a little Photoshop. Every Photoshop operation should be possible with WebGL.
If we are talking about a 2D image where the sun position is relative and you just want some basic image processing, you can use the rgb2hsv and hsv2rgb functions from this answer. I think they should work in GLSL 1.
Then you can multiply the saturation, luminosity, or hue and convert back to RGB.
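For example, here is a minimal sketch of scaling the luminosity (here the HSV value channel), using the widely circulated rgb2hsv/hsv2rgb helpers; the uColor and uLuminosity names are illustrative, not from the question:
precision mediump float;
// Illustrative names: uColor (e.g. #FF7400 passed in as normalized RGB) and
// uLuminosity (0.0 = black, 1.0 = unchanged) are assumptions, not from the question.
uniform vec3 uColor;
uniform float uLuminosity;

vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.x) / (6.0 * d + e)), d / (q.x + e), q.x);
}

vec3 hsv2rgb(vec3 c) {
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

void main() {
    vec3 hsv = rgb2hsv(uColor);
    hsv.z *= uLuminosity; // scale the value/luminosity channel toward black
    gl_FragColor = vec4(hsv2rgb(hsv), 1.0);
}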
If that doesn't work, you might have to reimplement them from the common formula; use the wiki or this link: http://www.rapidtables.com/convert/color/rgb-to-hsl.htm
If you want to do some more image processing, where neighbouring pixels are needed, I suggest this great tutorial where you can easily do edge sharpening, blur, etc.: http://webglfundamentals.org/webgl/lessons/webgl-image-processing.html
original image
Hi guys, I think I found an unorthodox but easy way to desaturate an RGB image: we just need to find the average color for the pixel,
average_color = (R + G + B) / 3, keeping the alpha...
vec4(average_color, average_color, average_color, alpha);
varying mediump vec2 var_texcoord0;
uniform lowp sampler2D texture_sampler;

void main()
{
    // write the color of the current fragment to a variable
    lowp vec4 color_of_pixel = texture2D(texture_sampler, var_texcoord0.xy);
    // find the average of red, green and blue; it keeps the "force" of the color...
    lowp float average_color = (color_of_pixel.r + color_of_pixel.g + color_of_pixel.b) / 3.0;
    lowp vec4 color_of_pixel_final = vec4(average_color, average_color, average_color, color_of_pixel.a);
    gl_FragColor = color_of_pixel_final; // write color_of_pixel_final to the output gl_FragColor
}
Desaturated image

Colored Vignette Shader (the outer part) - LIBGDX

I've seen lots of tutorials on vignette shaders, just like these, but none of them say how to change the color of the vignette; they only talk about applying sepia or grey shaders to the whole composite image.
For example, the video above gives the code below for the vignette shader. How do I change the color of the vignette, so it's not black but red or orange, while the part of the image in the interior of the vignette remains unmodified?
const float outerRadius = .65, innerRadius = .4, intensity = .6;
void main() {
    vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
    vec2 relativePosition = gl_FragCoord.xy / u_resolution - .5;
    relativePosition.y *= u_resolution.x / u_resolution.y;
    float len = length(relativePosition);
    float vignette = smoothstep(outerRadius, innerRadius, len);
    color.rgb = mix(color.rgb, color.rgb * vignette, intensity);
    gl_FragColor = color;
}
In the shader you posted, it looks like the vignette value is basically a darkness value that's blended over the image, so in the line with the mix function, it's just multiplied by the texture color.
So to modify this to work with arbitrary color, you need to change it to an opacity value (invert it). And now that it's opacity, you can multiply it by intensity to simplify the next calculation. And finally, you can blend to the vignette color you choose.
So first declare the color you want before the main function.
const vec3 vignetteColor = vec3(1.0, 0.0, 0.0); //red
//this could be a uniform if you want to dynamically change it.
Then the two lines before the final gl_FragColor assignment change to the following.
float vignetteOpacity = smoothstep(innerRadius, outerRadius, len) * intensity; //note inner and outer swapped to switch darkness to opacity
color.rgb = mix(color.rgb, vignetteColor, vignetteOpacity);
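Put together, the modified shader would look roughly like this (same uniforms and constants as the original; the vignette color here is just an example):
// Sketch of the full modified shader, reusing the original uniforms/varyings.
const float outerRadius = .65, innerRadius = .4, intensity = .6;
const vec3 vignetteColor = vec3(1.0, 0.0, 0.0); // example color (red)
void main() {
    vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
    vec2 relativePosition = gl_FragCoord.xy / u_resolution - .5;
    relativePosition.y *= u_resolution.x / u_resolution.y;
    float len = length(relativePosition);
    float vignetteOpacity = smoothstep(innerRadius, outerRadius, len) * intensity;
    color.rgb = mix(color.rgb, vignetteColor, vignetteOpacity);
    gl_FragColor = color;
}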
By the way, "intensity = .6f" will cause the shader not to compile on mobile, so remove the f. (Unless you target OpenGL ES 3.0, but that's not fully supported by libgdx yet.)

OpenGL Shader, what's the gl_FragColor's alpha component?

I think this will have a fairly simple answer.
But I can't find the answer by googling.
It's an OpenGL ES shader question; I am using the cocos2d-x engine.
This is my fragment shader code.
precision lowp float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec4 u_lightPosition;
void main()
{
    vec4 col = texture2D(u_texture, v_texCoord);
    mediump float lightDistance = distance(gl_FragCoord, u_lightPosition);
    mediump float alpha = 100.0 / lightDistance;
    alpha = min(alpha, 1.0);
    alpha = max(alpha, 0.0);
    col.w = alpha;
    //col.a = alpha;
    gl_FragColor = col;
}
I just want to apply opacity in a circular area, so I changed the color's w value because I thought it was the alpha value of the texel. But the result was very odd.
I'm afraid it's not the alpha value.
Even if I set the value to 1.0 for testing, the whole sprite becomes bright and white.
The vertex shader is very ordinary; there is nothing special to attach.
Any ideas, please?
Updated: for reference, I attach some result images.
Case 1)
col.w = alpha;
Case 2)
col.w = 1.0
(and the normal texture before applying the shader.)
The GL ES 2.0 reference card defines:
Variable: mediump vec4 gl_FragColor;
Description: fragment color
Units or coordinate system: RGBA color
It further states:
Vector Components: In addition to array numeric subscript syntax, names of vector components are denoted by a single letter. Components can be swizzled and replicated, e.g.: pos.xx, pos.zy
{x, y, z, w} Use when accessing vectors that represent points or normals
{r, g, b, a} Use when accessing vectors that represent colors
{s, t, p, q} Use when accessing vectors that represent texture coordinates
So, sure, using .a would be more idiomatic but it's explicitly the case that what you store to .w is the output alpha for gl_FragColor.
To answer the question you've set as the title rather than the question in the body: the value returned by texture2D will be whatever is correct for that texture, either an actual stored alpha value if the texture format is GL_RGBA or GL_LUMINANCE_ALPHA, or else 1.0.
So you're outputting alpha correctly.
If your output alpha isn't having the mixing effect that you expect then you must have glBlendFunc set to something unexpected, possibly involving GL_CONSTANT_COLOR.

How can a fragment shader use the color values of the previously rendered frame?

I am learning to use shaders in OpenGL ES.
As an example: Here's my playground fragment shader which takes the current video frame and makes it grayscale:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
    highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
    highp float avrg = (theColor[0] + theColor[1] + theColor[2]) / 3.0;
    theColor[0] = avrg; // r
    theColor[1] = avrg; // g
    theColor[2] = avrg; // b
    gl_FragColor = theColor;
}
theColor represents the current pixel. It would be cool to also get access to the previous pixel at this same coordinate.
For the sake of curiosity, I would like to add or multiply the color of the current pixel with the color of the pixel at the same coordinate in the previously rendered frame.
How could I keep the previous pixels around and pass them in to my fragment shader in order to do something with them?
Note: It's OpenGL ES 2.0 on the iPhone.
You need to render the previous frame to a texture, using a Framebuffer Object (FBO), then you can read this texture in your fragment shader.
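On the shader side, consuming that texture is straightforward; here is a minimal sketch where previousFrame is an assumed uniform name bound to the texture the last frame was rendered into via the FBO:
// Sketch of the shader side only; 'previousFrame' is an assumed uniform name.
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D previousFrame;

void main() {
    highp vec4 current = texture2D(videoFrame, textureCoordinate);
    highp vec4 previous = texture2D(previousFrame, textureCoordinate);
    // e.g. blend the current pixel with the same pixel from the previous frame
    gl_FragColor = mix(current, previous, 0.5);
}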
The dot intrinsic function that Damon refers to is a code implementation of the mathematical dot product. I'm not supremely familiar with OpenGL so I'm not sure what the exact function call is, but mathematically a dot product goes like this :
Given a vector a and a vector b, the dot product a · b produces a scalar result c:
c = a.x * b.x + a.y * b.y + a.z * b.z
Most modern graphics hardware (and CPUs, for that matter) can perform this kind of operation in one pass. In your particular case, you could compute your average easily with a dot product like so:
highp vec4 weights = vec4(1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0, 0.0);
I always get the 4th component in homogeneous vectors and matrices mixed up for some reason.
highp float avg = dot(theColor, weights);
This will multiply each component of theColor by 1/3 (and the 4th component by 0), and then add them together.
