GLES2: glTexImage2D with GL_LUMINANCE gives me a black screen/texture

I'm trying to render a video from a bayer buffer.
So I create a texture using GL_LUMINANCE/GL_UNSIGNED_BYTE and apply some shaders to this texture to generate an RGBA output.
The following call works fine on my PC but does NOT work on the target board (iMX6/GLES2):
glTexImage2D(GL_TEXTURE_2D, 0, textureFormat, m_texture_size.width(), m_texture_size.height(), 0, bufferFormat, GL_UNSIGNED_BYTE, imageData);
On the target board, I have a black texture.
bufferFormat is GL_LUMINANCE.
textureFormat is GL_LUMINANCE.
GLES2 implements a smaller subset of the OpenGL API:
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexImage2D.xml
bufferFormat should be equal to textureFormat. If I try other format combinations, it works on the PC; on the target board, I get a black screen and some errors reported by glGetError().
Other failing tests
If I try GL_ALPHA, the texture seems to be filled with (0,0,0,1).
If I try GL_RGBA/GL_RGBA (this makes no sense for the application, but it checks the HW/API capabilities), I get a non-black texture on the board. Obviously, the image is not what I should expect.
Why does GL_LUMINANCE give me a black texture? How can I make this work?

Guesses:
the texture is not a power of two in dimensions and you have not set a compatible wrapping mode;
you have not set an appropriate mip mapping mode and the shader is therefore sampling a level other than the one you uploaded.
Does setting GL_CLAMP_TO_EDGE* and GL_LINEAR or GL_NEAREST rather than GL_LINEAR_MIPMAP_... resolve the problem?
Per section 3.8.2 of the ES 2 spec (warning: PDF):
Calling a sampler from a fragment shader will return (R, G, B, A) = (0,0,0,1) if any of the following conditions are true:
• A two-dimensional sampler is called, the minification filter is one that requires a mipmap (neither NEAREST nor LINEAR), and the sampler's associated texture object is not complete, as defined in sections 3.7.1 and 3.7.10.
• A two-dimensional sampler is called, the minification filter is not one that requires a mipmap (either NEAREST or LINEAR), and either dimension of the level zero array of the associated texture object is not positive.
• A two-dimensional sampler is called, the corresponding texture image is a non-power-of-two image (as described in the Mipmapping discussion of section 3.7.7), and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, any of the corresponding texture images are non-power-of-two images, and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR.
• A cube map sampler is called, and either the corresponding cube map texture image is not cube complete, or TEXTURE_MIN_FILTER is one that requires a mipmap and the texture is not mipmap cube complete.
... so my guesses are to check the first and third bullet points.
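To rule both of those out, the texture setup could look like this (a minimal sketch in C; the NEAREST/CLAMP_TO_EDGE choices are the ones suggested above, and width/height stand in for the question's m_texture_size):
glBindTexture(GL_TEXTURE_2D, texture);
// No mipmaps: pick a minification filter that does not require them
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// ES 2.0 requires CLAMP_TO_EDGE for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, imageData);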

Related

How can I best implement a weight-normalizing blend operation in opengl?

Suppose I have a source color in RGBA format (sr, sg, sb, sa), and similarly a destination color (dr, dg, db, da), all components assumed to be in [0.0, 1.0].
Let p = (sa)/(sa+da), and q = da/(sa+da). Note that p+q = 1.0. Do anything you want if sa and da are both 0.0.
I would like to implement blending in opengl so that the blend result =
(p*sr + q*dr, p*sg + q*dg, p*sb + q*db, sa+da).
(Or to be a smidge more rigorous, following https://www.opengl.org/sdk/docs/man/html/glBlendFunc.xhtml, I'd like f_R, f_G, and f_B to be either p for src or q for dst; and f_A = 1.)
For instance, in the special case where (sa+da) == 1.0, I could use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); but I'm specifically attempting to deal with alpha values that do not sum to 1.0. (That's why I call it 'weight-normalizing' - I want to treat the src and dst alphas as weights that need to be normalized into linear combination coefficients).
You can assume that I have full control over the data being passed to opengl, the code rendering, and the vertex and fragment shaders. I'm targeting WebGL, but I'm also just curious in general.
The best I could think of was to blend with ONE, ONE, premultiply all src rgb values by alpha, and do a second pass in the end that divides by alpha. But I'm afraid I sacrifice a lot of color depth this way, especially if the various alpha values are small.
I don't believe the standard blend equation can do this; at least I can't think of a way.
However, this is fairly easy to do with OpenGL. Blending might just be the wrong tool for the job. I would make what you currently describe as "source" and "destination" both input textures to the fragment shader. Then you can mix and combine them any way your heart desires.
Say you have two textures you want to combine in the way you describe. Right now you might have something like this:
Bind texture 1.
Render to default framebuffer, sampling the currently bound texture.
Set up fancy blending.
Bind texture 2.
Render to default framebuffer, sampling the currently bound texture.
What you can do instead:
Bind texture 1 to texture unit 0.
Bind texture 2 to texture unit 1.
Render to default framebuffer, sampling both bound textures.
Now you have the values from both textures available in your shader code, and can apply any kind of logic and math to calculate the combined color.
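For example, a fragment shader along these lines (a sketch; the sampler and varying names are assumptions, not taken from the question):
varying vec2 vTexCoord;
uniform sampler2D uTex0;   // "destination" input
uniform sampler2D uTex1;   // "source" input
void main()
{
    vec4 d = texture2D(uTex0, vTexCoord);
    vec4 s = texture2D(uTex1, vTexCoord);
    float wsum = s.a + d.a;
    // The question allows any behavior when both alphas are 0.0
    float p = (wsum > 0.0) ? (s.a / wsum) : 0.0;
    // Weight-normalized mix; alpha is sa+da as requested (it may be
    // clamped to 1.0 when written to a fixed-point framebuffer)
    gl_FragColor = vec4(p * s.rgb + (1.0 - p) * d.rgb, wsum);
}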
The same thing works if your original data does not come from a texture, but is the result of rendering. Let's say that you have two parts in your rendering process, which you want to combine in the way you describe:
Attach texture 1 as render target to FBO.
Render first part of content.
Attach texture 2 as render target to FBO.
Render second part of content.
Bind texture 1 to texture unit 0.
Bind texture 2 to texture unit 1.
Render to default framebuffer, sampling both bound textures.
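On the API side, the two-texture setup might look like this (a sketch; the uniform names match the shader above, and program/tex1/tex2 are assumed handles):
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex2);
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "uTex0"), 0);  // texture unit 0
glUniform1i(glGetUniformLocation(program, "uTex1"), 1);  // texture unit 1
// draw the fullscreen quad here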

Webgl texture atlas

I would like to ask for help concerning the making of a WebGL engine. I am stuck on texture atlases. There is a texture containing a 2x2 grid of pictures, and I draw its upper-left quarter onto the geometry (texture coordinates: 0-0.5, 0-0.5).
This works properly, although when I view the geometry from afar, all of the pictures blur together and give strange-looking colours. I think this is caused by the automatically generated mipmaps: viewed from afar, the texture unit uses the 1x1 mipmap level, where the 4 pictures are blurred together into one pixel.
It was suggested that I generate the mipmaps myself with a maximum level setting (GL_TEXTURE_MAX_LEVEL), but this is not supported by WebGL. It was also suggested that I use the textureLod function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only solution seems to be the bias value that can be given as the 3rd parameter of the fragment shader's texture2D function, but with this I can only offset the mipmap LOD, not set its actual value.
My idea is to use the depth value (the distance from the camera) to drive the bias (making it more and more negative with distance), which ensures that the last mipmap levels are not used at greater distances and a higher-resolution mipmap level is always sampled instead. The issue with this is that I must also calculate the angle of the given surface to the camera, because the LOD value depends on it.
So Bias = Depth + some combination of the Angle. I would like to ask for help calculating this. If someone has any other ideas concerning WebGL texture atlases, I would gladly use them.
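For reference, the bias mechanism being described looks like this in a WebGL fragment shader (a sketch; the distance-to-bias mapping is exactly the open question, so the formula below is only a placeholder):
varying vec2 vUv;
varying float vDist;        // distance from the camera, passed from the vertex shader
uniform sampler2D uAtlas;
void main()
{
    // A negative bias forces sampling from a higher-resolution mip level
    float bias = -clamp(vDist * 0.25, 0.0, 3.0);  // placeholder mapping
    gl_FragColor = texture2D(uAtlas, vUv, bias);
}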

OpenGL ES 2.0 Vertex Shader Texture Reads not possible from FBO?

I'm currently working on a GPGPU project that uses OpenGL ES 2.0. I have a rendering pipeline that uses framebuffer objects (FBOs) as targets, i.e. the result of each rendering pass is saved in a texture which is attached to an FBO. So far, this works when using fragment shaders. For example, I have the following rendering pipeline:
Preprocessing (downscaling, grayscale conversion)
-> Adaptive Thresholding Pass 1 -> Adapt. Thresh. Pass 2
-> Copy back to CPU
However, I wanted to extend this pipeline by adding a grayscale histogram calculation after the preprocessing step. With OpenGL ES 2.0 this only works with texture reads in the vertex shader, as far as I know [1]. I can confirm that my shaders work in a different program where the input is a "real" image, not a rendered texture that is attached to an FBO. Hence I think it is not possible to read texture data in a vertex shader if it comes from an FBO. Can anyone confirm this assumption, or am I missing something? I'm using a Nexus 10 for my experiments.
[1]: It basically works by reading each pixel value from the texture in the vertex shader, calculating the histogram bin from it, and "adding" to that bin in the fragment shader by using alpha blending.
Texture reads within a vertex shader are not a required element in OpenGL ES 2.0, so you'll find some manufacturers supporting them and some not. In fact, there was a weird situation where iOS supported it on some devices for one version of iOS, but not the next (and it's now officially supported in iOS 7). That might be the source of the inconsistency you see here.
To work around this, I implemented a histogram calculation by instead feeding the colors from the FBO (or its attached texture) in as vertices and using a scattering operation similar to what you describe. This doesn't require a texture read of any kind in the vertex shader, but it does involve a round-trip from and to the GPU and potentially a lot of vertices. It works on all OpenGL ES 2.0 hardware, but it can be costly.
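One way that scatter pass might look (a sketch of the approach described above, not the exact code; the colors are assumed to have been read back and re-submitted as a per-vertex attribute):
attribute vec4 aColor;      // one input pixel's color, fed in as a vertex
void main()
{
    // Map the luminance bin to an x position in a 256x1 histogram framebuffer
    float luminance = dot(aColor.rgb, vec3(0.2125, 0.7154, 0.0721));
    gl_Position = vec4(luminance * 2.0 - 1.0, 0.0, 0.0, 1.0);
    gl_PointSize = 1.0;
}
// The fragment shader then writes a small constant (e.g. 1.0/255.0) and the
// pass is drawn as GL_POINTS with glBlendFunc(GL_ONE, GL_ONE) so each point
// accumulates into its bin.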

performance - drawing many 2d circles in opengl

I am trying to draw large numbers of 2d circles for my 2d games in opengl. They are all the same size and have the same texture. Many of the sprites overlap. What would be the fastest way to do this?
An example of the kind of effect I'm making: http://img805.imageshack.us/img805/6379/circles.png
(It should be noted that the black edges are just due to the expanding explosion of circles; the area was filled in a moment after this screenshot was taken.)
At the moment I am using a pair of textured triangles to make each circle. I have transparency around the edges of the texture to make it look like a circle. Using blending for this proved to be very slow (and z culling was not possible, as the circles were rendered as squares to the depth buffer). Instead of blending, I have my fragment shader discard any fragments with an alpha of 0. This works; however, it means that early z is not possible (as fragments are discarded).
The speed is limited by the large amounts of overdraw and the gpu's fillrate. The order that the circles are drawn in doesn't really matter (provided it doesn't change between frames creating flicker) so I have been trying to ensure each pixel on the screen can only be written to once.
I attempted this by using the depth buffer. At the start of each frame it is cleared to 1.0f. Then, when a circle is drawn, it changes that part of the depth buffer to 0.0f. When another circle would normally be drawn there, it is not, as the new circle also has a z of 0.0f; this is not less than the 0.0f currently in the depth buffer, so it is not drawn. This works and should reduce the number of pixels which have to be drawn. However, strangely, it isn't any faster. I have already asked a question about this behavior (opengl depth buffer slow when points have same depth) and the suggestion was that z culling was not being accelerated when using equal z values.
Instead I have to give all of my circles separate fake z-values, counting up from 0. Then, when I render using glDrawArrays and the default of GL_LESS, we correctly get a speed boost due to z culling (although early z is still not possible, as fragments are discarded to make the circles possible). However, this is not ideal, as I've had to add large amounts of z-related code to a 2d game which simply shouldn't require it (and not passing z values, if possible, would be faster). This is, however, the fastest way I have found so far.
Finally I have tried using the stencil buffer, here I used
glStencilFunc(GL_EQUAL, 0, 1);
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
Where the stencil buffer is reset to 0 each frame. The idea is that after a pixel is drawn to for the first time, it is changed to non-zero in the stencil buffer; that pixel should then not be drawn to again, reducing the amount of overdraw. However, this has proved to be no faster than just drawing everything without the stencil buffer or a depth buffer.
What is the fastest way people have found to do what I am trying to do?
The fundamental problem is that you're fill limited, which is the GPU's inability to shade all the fragments you ask it to draw in the time you're expecting. The reason that your depth buffering trick isn't effective is that the most time-consuming part of processing is shading the fragments (either through your own fragment shader, or through the fixed-function shading engine), which occurs before the depth test. The same issue occurs with stencil; shading the pixel occurs before stenciling.
There are a few things that may help, but they depend on your hardware:
render your sprites from front to back with depth buffering. Modern GPUs often try to determine if a collection of fragments will be visible before sending them off to be shaded. Roughly speaking, the depth buffer (or a representation of it) is checked to see if the fragment that's about to be shaded will be visible, and if not, its processing is terminated at that point. This should help reduce the number of pixels that need to be written to the framebuffer (see the state-setup sketch after the shader below).
Use a fragment shader that immediately checks your texel's alpha value, and discards the fragment before any additional processing, as in:
varying vec2 texCoord;
uniform sampler2D tex;
void main()
{
    // texture2D matches the varying-style GLSL used here
    vec4 texel = texture2D( tex, texCoord );
    if ( texel.a < 0.01 )
        discard;   // reject nearly transparent fragments before any further work
    // rest of your color computations
}
(you can also use alpha test in fixed-function fragment processing, but it's impossible to say if the test will be applied before the completion of fragment shading).
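For the first suggestion, the state setup might look like this (a sketch; the nearest-first sorting and the draw call are assumed to be handled elsewhere in your code):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw sprites sorted nearest-first so the hardware's early depth test
// can reject covered fragments before they are shaded
drawSpritesFrontToBack();   // hypothetical helper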

image smoothing in opengl?

Does opengl provide any facilities to help with image smoothing?
My project converts scientific data to textures, each of which is a single line of colored pixels which is then mapped onto the appropriate area of the image. Lines are mapped next to each other.
I'd like to do simple image smoothing of this, but am wondering if OpenGL can do any of it for me.
By smoothing, I mean applying a two-dimensional averaging filter to the image - effectively increasing the number of pixels but filling them with averages of nearby actual colors - basically normal image smoothing.
You can do it through a custom shader if you want. Essentially you just bind your input texture, draw it as a fullscreen quad, and in the shader just take multiple samples around each fragment, average them together, and write it out to a new texture. The new texture can be an arbitrary higher resolution than the input texture if you desire that as well.
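A 3x3 box filter done that way might look like this (a sketch; the uniform and varying names are assumptions, and uTexelSize is 1.0 divided by the input texture's dimensions):
varying vec2 vUv;
uniform sampler2D uInput;
uniform vec2 uTexelSize;
void main()
{
    vec4 sum = vec4(0.0);
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            sum += texture2D(uInput, vUv + vec2(float(dx), float(dy)) * uTexelSize);
        }
    }
    gl_FragColor = sum / 9.0;   // average of the 3x3 neighborhood
}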
