How can I best implement a weight-normalizing blend operation in OpenGL? - opengl-es

Suppose I have a source color in RGBA format (sr, sg, sb, sa), and similarly a destination color (dr, dg, db, da), all components assumed to be in [0.0, 1.0].
Let p = sa/(sa+da) and q = da/(sa+da). Note that p+q = 1.0. Do anything you want if sa and da are both 0.0.
I would like to implement blending in OpenGL so that the blend result is
(p*sr + q*dr, p*sg + q*dg, p*sb + q*db, sa+da).
(Or to be a smidge more rigorous, following https://www.opengl.org/sdk/docs/man/html/glBlendFunc.xhtml, I'd like f_R, f_G, and f_B to be either p for src or q for dst; and f_A = 1.)
For instance, in the special case where (sa+da) == 1.0, I could use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); but I'm specifically attempting to deal with alpha values that do not sum to 1.0. (That's why I call it 'weight-normalizing' - I want to treat the src and dst alphas as weights that need to be normalized into linear combination coefficients).
You can assume that I have full control over the data being passed to OpenGL, the rendering code, and the vertex and fragment shaders. I'm targeting WebGL, but I'm also just curious in general.
The best I could think of was to blend with GL_ONE, GL_ONE, premultiply all src RGB values by alpha, and do a second pass at the end that divides by alpha. But I'm afraid I'd sacrifice a lot of color depth this way, especially if the various alpha values are small.
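Concretely, that two-pass fallback would look something like the following sketch (shader sources only; u_accum, v_color, and v_uv are placeholder names). The color-depth worry stands unless the accumulation target is a float texture (e.g. via OES_texture_float in WebGL):

/* Pass 1: render every layer with glEnable(GL_BLEND) and
 * glBlendFunc(GL_ONE, GL_ONE), outputting premultiplied color so the
 * framebuffer accumulates sum(rgb*a) in RGB and sum(a) in alpha. */
static const char *accum_fs =
    "precision mediump float;\n"
    "varying vec4 v_color;\n"
    "void main() {\n"
    "    gl_FragColor = vec4(v_color.rgb * v_color.a, v_color.a);\n"
    "}\n";

/* Pass 2: full-screen pass over the accumulation texture that divides
 * by the summed alpha, yielding the weight-normalized color. */
static const char *resolve_fs =
    "precision mediump float;\n"
    "uniform sampler2D u_accum;\n"
    "varying vec2 v_uv;\n"
    "void main() {\n"
    "    vec4 acc = texture2D(u_accum, v_uv);\n"
    "    float a = max(acc.a, 1e-5); // avoid divide-by-zero when sa+da == 0\n"
    "    gl_FragColor = vec4(acc.rgb / a, acc.a);\n"
    "}\n";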

I don't believe the standard blend equation can do this. At least I can't think of a way.
However, this is fairly easy to do with OpenGL. Blending might just be the wrong tool for the job. I would make what you currently describe as "source" and "destination" both input textures to the fragment shader. Then you can mix and combine them any way your heart desires.
Say you have two textures you want to combine in the way you describe. Right now you might have something like this:
Bind texture 1.
Render to default framebuffer, sampling the currently bound texture.
Set up fancy blending.
Bind texture 2.
Render to default framebuffer, sampling the currently bound texture.
What you can do instead:
Bind texture 1 to texture unit 0.
Bind texture 2 to texture unit 1.
Render to default framebuffer, sampling both bound textures.
Now you have the values from both textures available in your shader code, and can apply any kind of logic and math to calculate the combined color.
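For instance, a fragment shader along these lines does the weight-normalized mix directly (a sketch; u_tex0, u_tex1, and v_uv are names I've picked):

/* Combining fragment shader: source on unit 0, destination on unit 1. */
static const char *combine_fs =
    "precision mediump float;\n"
    "uniform sampler2D u_tex0; // the source\n"
    "uniform sampler2D u_tex1; // the destination\n"
    "varying vec2 v_uv;\n"
    "void main() {\n"
    "    vec4 s = texture2D(u_tex0, v_uv);\n"
    "    vec4 d = texture2D(u_tex1, v_uv);\n"
    "    float sum = s.a + d.a;\n"
    "    float p = (sum > 0.0) ? s.a / sum : 0.0;\n"
    "    // mix(d, s, p) == q*d + p*s, and the output alpha is sa+da\n"
    "    gl_FragColor = vec4(mix(d.rgb, s.rgb, p), sum);\n"
    "}\n";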
The same thing works if your original data does not come from a texture but is the result of rendering. Let's say that you have two parts in your rendering process, which you want to combine in the way you describe (a code sketch follows the steps):
Attach texture 1 as render target to FBO.
Render first part of content.
Attach texture 2 as render target to FBO.
Render second part of content.
Bind texture 1 to texture unit 0.
Bind texture 2 to texture unit 1.
Render to default framebuffer, sampling both bound textures.
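In rough code, that might look like this (shader setup and error checking omitted; fbo, tex1, tex2, combineProgram, and the draw helpers are assumed to exist elsewhere):

/* Render both parts into textures via a single FBO. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);
drawFirstPart();   /* placeholder for your own rendering */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);
drawSecondPart();  /* placeholder for your own rendering */

/* Final pass: sample both textures in the combining shader. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(combineProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex2);
glUniform1i(glGetUniformLocation(combineProgram, "u_tex0"), 0);
glUniform1i(glGetUniformLocation(combineProgram, "u_tex1"), 1);
drawFullScreenQuad(); /* placeholder */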

Related

GLES fragment shader, get 0-1 range when using TextureRegion - libGDX

I have a fragment shader in which I use v_texCoords as a base for some effects. This works fine if I use a single Texture, as v_texCoords always ranges from 0 to 1, so the center point is always (0.5, 0.5), for example. If I am drawing from part of a TextureRegion, though, my shader messes up because v_texCoords no longer ranges from 0 to 1. Are there any methods or variables I can use to get a consistent 0-1 range in my fragment shader? I want to avoid setting uniforms, as this would mean I need to flush the batch for every draw.
Thanks!
Nothing like this exists at the shader level; TextureRegions are entirely a libGDX construct that doesn't exist at all at the OpenGL ES API level.
Honestly, for what you are trying to do, I'd simply suggest not overloading the texture coordinate for two orthogonal purposes, and instead adding a separate vertex attribute that provides the 0-to-1 number, as sketched below.
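At the raw GL ES level that amounts to something like this (a sketch: a_local is an invented attribute name, stride and localOffset depend on your vertex layout, and in libGDX you would declare the extra attribute through its VertexAttribute machinery rather than calling GL directly):

/* Vertex shader: pass the extra 0-1 attribute through to a varying,
 * leaving the atlas texture coordinates untouched. */
static const char *vs_src =
    "attribute vec4 a_position;\n"
    "attribute vec2 a_texCoord; // atlas coordinates for sampling\n"
    "attribute vec2 a_local;    // always spans 0-1 across the region\n"
    "varying vec2 v_texCoord;\n"
    "varying vec2 v_local;      // use this for the effects math\n"
    "void main() {\n"
    "    v_texCoord = a_texCoord;\n"
    "    v_local = a_local;\n"
    "    gl_Position = a_position;\n"
    "}\n";

/* In your setup code: point the attribute at the extra two floats
 * stored per vertex. */
GLint loc = glGetAttribLocation(program, "a_local");
glEnableVertexAttribArray((GLuint)loc);
glVertexAttribPointer((GLuint)loc, 2, GL_FLOAT, GL_FALSE, stride, localOffset);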

GLES2: glTexImage2D with GL_LUMINANCE gives me a black screen/texture

I'm trying to render video from a Bayer buffer.
So I create a texture using GL_LUMINANCE/GL_UNSIGNED_BYTE and apply some shaders to this texture to generate RGBA output.
The following call works fine on my PC but does NOT on the target board (iMX6/GLES2):
glTexImage2D(GL_TEXTURE_2D, 0, textureFormat, m_texture_size.width(), m_texture_size.height(), 0, bufferFormat, GL_UNSIGNED_BYTE, imageData);
On the target board, I have a black texture.
bufferFormat is GL_LUMINANCE.
textureFormat is GL_LUMINANCE.
GLES2 implements a smaller subset of the OpenGL API:
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexImage2D.xml
bufferFormat should be equal to textureFormat. If I try other format combinations, they work on the PC; on the target board, I get a black screen and some errors reported by glGetError().
Other failing tests:
If I try GL_ALPHA, it seems the texture is filled with (0,0,0,1).
If I try GL_RGBA/GL_RGBA (this makes no sense for the application, but it checks the HW/API capabilities), I get a non-black texture on the board. Obviously, the image is not what I would expect.
Why does GL_LUMINANCE give me a black texture? How can I make this work?
Guesses:
the texture is not a power of two in dimensions and you have not set a compatible wrapping mode;
you have not set an appropriate mip mapping mode and the shader is therefore sampling a level other than the one you uploaded.
Does setting GL_CLAMP_TO_EDGE* and GL_LINEAR or GL_NEAREST rather than GL_LINEAR_MIPMAP_... resolve the problem?
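Concretely, the fix for both guesses would look something like this after creating and uploading the texture (a sketch; tex is your texture object). Note that the default minification filter is GL_NEAREST_MIPMAP_LINEAR, which requires a complete mipmap chain:

/* NPOT-safe, mipmap-free sampling state for ES 2. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);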
Per section 3.8.2 of the ES 2 spec (warning: PDF):
Calling a sampler from a fragment shader will return (R, G, B, A) = (0,0,0,1) if any of the following conditions are true:
• A two-dimensional sampler is called, the minification filter is one that requires a mipmap (neither NEAREST nor LINEAR), and the sampler’s associated texture object is not complete, as defined in sections 3.7.1 and 3.7.10,
• A two-dimensional sampler is called, the minification filter is not one that requires a mipmap (either NEAREST or LINEAR), and either dimension of the level zero array of the associated texture object is not positive.
• A two-dimensional sampler is called, the corresponding texture image is a non-power-of-two image (as described in the Mipmapping discussion of section 3.7.7), and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, any of the corresponding texture images are non-power-of-two images, and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, and either the corresponding cube map texture image is not cube complete, or TEXTURE_MIN_FILTER is one that requires a mipmap and the texture is not mipmap cube complete.
... so my guesses are to check the first and third bullet points.

Webgl texture atlas

I would like to ask for help with building a WebGL engine. I am stuck on texture atlases. I have a texture containing a 2x2 grid of images, and I draw its upper-left quarter onto a quad (texture coordinates run from 0 to 0.5 on both axes).
This works properly, but when I view the quad from afar, the sub-images all blur together and give strange-looking colours. I think this is because I use automatically generated mipmaps: viewed from far enough away, the texture unit samples the 1x1 mipmap level, where the 4 sub-images are blurred together into one pixel.
It was suggested that I generate the mipmaps myself and cap the maximum level (GL_TEXTURE_MAX_LEVEL), but WebGL does not support that. I was also advised to use the textureLod function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only remaining tool seems to be the bias value that can be passed as the third parameter of the fragment shader's texture2D function, but with that I can only offset the mipmap LOD, not set its actual value.
My idea is to use the depth value (the distance from the camera) to drive the bias, making it more and more negative with distance, so that at greater distances the sampler never reaches the last mipmap levels and always samples from a higher-resolution one. The complication is that I must also account for the angle the surface makes with the camera, because the LOD value depends on that too.
So the bias would be the depth plus some combination of the angle, and I would like help calculating it. If anyone has other ideas concerning WebGL texture atlases, I would gladly use them.
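To make the idea concrete: the bias goes in as the third argument of texture2D. Here is a rough sketch of the distance part only (v_dist is an invented varying carrying camera distance from the vertex shader, the falloff constants are pure guesswork to be tuned, and the angle term remains the open question):

/* Fragment shader: push the mipmap LOD more negative with distance. */
static const char *atlas_fs =
    "precision mediump float;\n"
    "uniform sampler2D u_atlas;\n"
    "varying vec2 v_uv;\n"
    "varying float v_dist; // camera distance, from the vertex shader\n"
    "void main() {\n"
    "    float bias = -0.5 * log2(1.0 + v_dist); // placeholder falloff\n"
    "    gl_FragColor = texture2D(u_atlas, v_uv, bias);\n"
    "}\n";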

different OpenGL ES vertex color interpolation modes?

I'm trying to implement a simple linear gradient like in Photoshop. The color interpolation between vertices seems to go by (additive?) numerical value rather than what you would expect from "paint blending." Here is a visual example with green and red:
The one on the left is roughly what I get, and I want to achieve the one on the right.
Is there any easy way to achieve this?
As @Andon commented, using the texture system is a good way to do this. Here's what you need:
Assign one (or more, but you only need one for this trick) texture coordinate(s) to each vertex when you set up attributes in your vertex buffer.
In your vertex shader, read that attribute and write it to a varying so it gets interpolated for use in the fragment shader.
In your fragment shader, read the varying -- this tells you how far along the gradient ramp you should be in the current fragment; i.e. a blending factor.
At this point, you have two choices:
Use a 1D texture image (in ES 2 / WebGL, an Nx1 2D texture) that looks like the gradient you want, and look it up with the texture2D shader function and the varying texture coordinate you got. This will fetch the corresponding texel color so you can output it to gl_FragColor.
Calculate the color blend in the fragment shader. If you pass the endpoint colors into your shader as uniforms, you can combine them according to the blending factor using whatever math you can express in GLSL (including things like Photoshop blend modes), as in the sketch below.
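For the second option, the fragment shader can be as small as this (a sketch; u_color0, u_color1, and v_t are names I've chosen):

/* Gradient fragment shader: blend two uniform colors by the varying. */
static const char *gradient_fs =
    "precision mediump float;\n"
    "uniform vec4 u_color0; // gradient start color\n"
    "uniform vec4 u_color1; // gradient end color\n"
    "varying float v_t;     // interpolated blend factor, 0 at the start\n"
    "void main() {\n"
    "    // plain linear blend; swap in any blend-mode math you like\n"
    "    gl_FragColor = mix(u_color0, u_color1, clamp(v_t, 0.0, 1.0));\n"
    "}\n";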

How to draw "glowing" line in OpenGL ES

Could you please share some code (any language) showing how to draw a textured line (one that is smooth or has a glow-like effect; a blue line through four points) consisting of many points, like on the attached image, using OpenGL ES 1.0?
What I was trying was texturing a GL_LINE_STRIP with a 16x16 or 1x16 pixel texture, but without any success.
In ES 1.0 you can use render-to-texture creatively to achieve the effect that you want, but it's likely to be costly in terms of fill rate. Gamasutra has an (old) article on how glow was achieved in the Tron 2.0 game — you'll want to pay particular attention to the DirectX 7.0 comments since that was, like ES 1.0, a fixed pipeline. In your case you probably want just to display the Gaussian image rather than mixing it with an original since the glow is all you're interested in.
My summary of the article is:
render all lines to a texture as normal, solid hairline lines. Call this texture the source texture.
apply a linear horizontal blur to that by taking the source texture you just rendered and drawing it, say, five times to another texture, which I'll call the horizontal blur texture. Draw one copy at an offset of x = 0 with opacity 1.0, draw two further copies — one at x = +1 and one at x = -1 — with opacity 0.63 and a final two copies — one at x = +2 and one at x = -2 with an opacity of 0.17. Use additive blending.
apply a linear vertical blur to that by taking the horizontal blur texture and doing essentially the same steps but with y offsets instead of x offsets.
Those opacity numbers were derived from the 2d Gaussian kernel on this page. Play around with them to affect the fall off towards the outside of your lines.
Note the extra costs involved here: you're ostensibly adding ten full-screen textured draws plus some framebuffer swapping. You can probably get away with fewer draws by using multitexturing. A shader approach would likely do the horizontal and vertical steps in a single pass.
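As a fixed-pipeline sketch of the horizontal pass (drawTexturedQuad is a placeholder that draws the source texture shifted by the given pixel offset, and the render-to-texture plumbing is omitted):

/* Additive blending so the five offset copies sum together;
 * glColor4f supplies the per-copy opacity via the default GL_MODULATE. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
glBindTexture(GL_TEXTURE_2D, sourceTexture);

glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   /* center tap */
drawTexturedQuad(0, 0);
glColor4f(1.0f, 1.0f, 1.0f, 0.63f);  /* +/- 1 pixel taps */
drawTexturedQuad(+1, 0);
drawTexturedQuad(-1, 0);
glColor4f(1.0f, 1.0f, 1.0f, 0.17f);  /* +/- 2 pixel taps */
drawTexturedQuad(+2, 0);
drawTexturedQuad(-2, 0);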
