Different OpenGL ES vertex color interpolation modes?

I'm trying to implement a simple linear gradient like in Photoshop. The color interpolation between vertices seems to go by (additive?) numerical value rather than what you would expect from "paint blending." Here is a visual example with green and red:
The one on the left is roughly what I get, and I want to achieve the one on the right.
Is there any easy way to achieve this?

As @Andon commented, using the texture system is a good way to do this. Here's what you need:
Assign one (or more, but you only need one for this trick) texture coordinate(s) to each vertex when you set up attributes in your vertex buffer.
In your vertex shader, read that attribute and write it to a varying so it gets interpolated for use in the fragment shader.
In your fragment shader, read the varying -- this tells you how far along the gradient ramp you should be in the current fragment; i.e. a blending factor.
At this point, you have two choices:
Use a 1D texture image that looks like the gradient you want, and look it up with the texture2D shader function and the varying texture coordinate you got. This fetches the corresponding texel color, which you can output to gl_FragColor.
Calculate the color blend in the fragment shader. If you pass the endpoint colors into your shader as uniforms, you can combine them according to the blending factor using whatever math you can do in GLSL (including things like Photoshop blend modes); a sketch follows below.
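A minimal sketch of the second option, assuming hypothetical names u_colorA/u_colorB for the endpoint uniforms and v_ramp for the interpolated blending factor:

precision mediump float;

uniform vec3 u_colorA;  // gradient start color
uniform vec3 u_colorB;  // gradient end color
varying float v_ramp;   // 0.0 .. 1.0, interpolated from the vertex shader

void main() {
    // mix() does a linear blend; swap in any other GLSL math here
    // (e.g. a Photoshop-style blend mode) for a different look.
    gl_FragColor = vec4(mix(u_colorA, u_colorB, v_ramp), 1.0);
}

For the first option you would instead sample the ramp texture, e.g. gl_FragColor = texture2D(u_ramp, vec2(v_ramp, 0.5));.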

Related

WEBGL: How to Color a Fractal Design within the Fragment Shader

I am looking to color a fractal within my fragment shader in a WebGL project.
Within my fragment shader, I have a vec3 called Color that contains the RGB values from 0.0-1.0.
To make a fractal design similar to the one pictured, but in black and white, what would I need to set the Color vector to? This color is then multiplied by the weighted lighting for the gl_FragColor.
Mandelbrot shaders typically have a loop that evaluates an escape condition. To make a monochrome image, simply set Color to vec3(0.0) if the number of iterations needed to escape is less than some threshold T, and set it to vec3(1.0) otherwise. For an example, see this Shadertoy.
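A hedged sketch of such a loop, where v_pos is an assumed varying carrying the complex-plane coordinate c for the fragment:

precision mediump float;

varying vec2 v_pos;  // assumed: complex coordinate c for this fragment

void main() {
    const int T = 32;  // escape threshold; tune to taste
    vec2 z = vec2(0.0);
    bool escaped = false;
    for (int i = 0; i < T; i++) {
        // z = z^2 + c in complex arithmetic
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + v_pos;
        if (dot(z, z) > 4.0) { escaped = true; break; }
    }
    // black where the point escapes in fewer than T iterations, white otherwise
    vec3 Color = escaped ? vec3(0.0) : vec3(1.0);
    gl_FragColor = vec4(Color, 1.0);
}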

Webgl texture atlas

I would like to ask for help with a WebGL engine I am making. I am stuck on texture atlases. I have a texture containing a 2x2 grid of pictures, and I map its upper-left quarter onto a surface (the texture coordinates are 0-0.5, 0-0.5).
This works properly, but when I view the surface from afar everything blurs together and gives strange-looking colours. I think this is caused by the automatically generated mipmaps: from a distance the texture unit uses the 1x1 mipmap level, where the four sub-textures are blurred together into one pixel.
It was suggested that I generate the mipmaps myself and cap the maximum level (GL_TEXTURE_MAX_LEVEL), but WebGL does not support that. It was also suggested that I use the textureLod function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only remaining option seems to be the bias value that can be passed as the third parameter of the fragment shader's texture2D function, but with it I can only offset the mipmap LOD, not set its actual value.
My idea is to use the depth value (the distance from the camera) to drive the bias (making it more and more negative with distance), ensuring that the last mipmap levels are never used at greater distances and that samples always come from a higher-resolution mipmap level. The issue is that I must also calculate the angle of the surface to the camera, because the LOD value depends on it.
So bias = depth + some combination of the angle; I would like to ask for help calculating this. If anyone has other ideas concerning WebGL texture atlases, I would gladly use them.
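For what it's worth, a rough sketch of the bias idea described above; v_dist and the log2 falloff are untested assumptions to tune, and the angle term is exactly the part left open:

precision mediump float;

uniform sampler2D u_atlas;
varying vec2 v_uv;    // coordinates inside one atlas tile (e.g. 0-0.5)
varying float v_dist; // camera distance, forwarded by the vertex shader

void main() {
    // push the bias more negative as distance grows, so sampling never
    // reaches the smallest mip levels where atlas tiles bleed together
    float bias = -log2(max(v_dist, 1.0));
    gl_FragColor = texture2D(u_atlas, v_uv, bias);
}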

What is "uniform samplerXX iChannel0..3" in this shader?

I happened to see this shader on Shadertoy:
https://www.shadertoy.com/view/ldf3W8
I wanted to know what:
uniform samplerXX iChannel0..3;
is?
I tried to look at the vertex shader, but I didn't find anything there.
Also how can I convert audio waves to a texture? (which is being done here)
I wanted to know what uniform samplerXX iChannel0..3; is?
uniforms are externally set variables that have the same value across all invocations of the shader during a primitive draw (a vertex shader is called for each vertex a primitive consists of, and a fragment shader for each fragment, which roughly translates to pixels, drawn by the primitive).
samplers are OpenGL's way of binding texture units to a shader. In the actual OpenGL program you load the texture using glGenTextures, glActiveTexture, glBindTexture and glTexImage (and a bunch of other functions, but those are the important ones), and then bind the texture unit selected with glActiveTexture to a sampler uniform.
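The samplerXX in the question is not literal GLSL; it stands in for whichever sampler type matches the input bound to that channel (e.g. sampler2D for an image or audio input, samplerCube for a cubemap). A sketch of how such a uniform looks in a plain fragment shader, with u_resolution as a hypothetical uniform the host supplies:

precision mediump float;

uniform sampler2D iChannel0; // host binds a texture unit here, e.g.
                             // glUniform1i(glGetUniformLocation(prog, "iChannel0"), 0)
uniform vec2 u_resolution;   // hypothetical: viewport size in pixels

void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution;
    gl_FragColor = texture2D(iChannel0, uv);
}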
Also how can I convert audio waves to a texture?
Textures are just interpolated lookup tables, and you can place any data you like in a LUT. Most of the time a texture is used for image data, but you can load PCM samples into it just as well. So you simply fetch PCM data from the audio API and pass it into a texture as data.
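On the shader side it then reads like any other texture. Shadertoy-style audio inputs are commonly packed as a small two-row texture (one row FFT spectrum, one row raw waveform); treat that layout as an assumption and check the current docs:

precision mediump float;

uniform sampler2D iChannel0; // audio data uploaded by the host page

// t in [0,1] across the current buffer; the second of two rows has its
// texel centers at v = 0.75
float waveform(float t) {
    return texture2D(iChannel0, vec2(t, 0.75)).x;
}

void main() {
    vec2 uv = gl_FragCoord.xy / vec2(512.0, 288.0); // assumed resolution
    float w = waveform(uv.x);
    // draw a simple oscilloscope trace as a usage example
    float line = 1.0 - smoothstep(0.0, 0.02, abs(uv.y - w));
    gl_FragColor = vec4(vec3(line), 1.0);
}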

How to access the depth value of the neighbor pixels in WebGL?

I want to make a toon border effect. For it, I'll use the depth values of the pixels neighbouring each pixel to determine whether or not it should be blacked out. How can I access that information inside the fragment shader?
When you render your scene the normal way (vertex shader, then fragment shader, in a single pass), there is no way in the fragment shader to access the depth values of other pixels.
But:
You can render the scene twice and apply a postprocessing effect: in the first pass you store depth values and other data (like normals) in a render target (a texture), then you read those textures in the second pass.
Here is the effect in XNA, which can be ported to GLSL quickly: http://xnameetingpoint.weebly.com/shader7f31.html
Here is a link about render-to-texture: http://learningwebgl.com/blog/?p=1786
Hint: depth values alone will not be enough for border detection; you have to use normals as well, but that is covered in the XNA tutorial above.
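A minimal sketch of the second-pass lookup, assuming the first pass wrote depth into a texture u_depth and that u_texelSize holds 1.0 / resolution (both names hypothetical):

precision mediump float;

uniform sampler2D u_depth; // depth stored by the first pass
uniform vec2 u_texelSize;  // 1.0 / render-target resolution
varying vec2 v_uv;

void main() {
    float d  = texture2D(u_depth, v_uv).r;
    float dl = texture2D(u_depth, v_uv - vec2(u_texelSize.x, 0.0)).r;
    float dr = texture2D(u_depth, v_uv + vec2(u_texelSize.x, 0.0)).r;
    float db = texture2D(u_depth, v_uv - vec2(0.0, u_texelSize.y)).r;
    float dt = texture2D(u_depth, v_uv + vec2(0.0, u_texelSize.y)).r;
    // a large depth discontinuity suggests a silhouette edge
    float edge = abs(dl - d) + abs(dr - d) + abs(db - d) + abs(dt - d);
    float isBorder = step(0.02, edge); // threshold is an assumption; tune per scene
    gl_FragColor = vec4(vec3(1.0 - isBorder), 1.0); // black outline, white elsewhere
}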

Is a varying a pixel?

I am writing a fragment shader for an OpenGL ES application, and I'm trying to clarify the difference between a pixel and a varying.
A varying in OpenGL ES is an optional, user-defined output passed from the vertex shader to the fragment shader (e.g. a surface normal when using per-pixel lighting), interpolated across the primitive along the way. It is used within the fragment shader to calculate the final fragment color (gl_FragColor). While a final color can be output from the vertex shader as a varying (e.g. when using per-vertex lighting), this is not the norm and depends on your desired shader behaviour.
A pixel is simply the smallest addressable unit of an image or screen. The OpenGL ES pipeline produces fragments (raw data) which are then converted, or not, into pixels, depending on their visibility, depth, stencil, colour, etc.
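A tiny illustration of a varying crossing the two stages (a_color and v_color are arbitrary names):

// vertex shader
attribute vec4 a_position;
attribute vec3 a_color;
varying vec3 v_color;

void main() {
    v_color = a_color;        // written once per vertex...
    gl_Position = a_position;
}

// fragment shader
precision mediump float;
varying vec3 v_color;         // ...read per fragment, after interpolation

void main() {
    gl_FragColor = vec4(v_color, 1.0);
}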
