Read data from texture with any internal format on ES

I am trying to read back the data of a texture on OpenGL ES. The problem with my approach is that framebuffers do not accept textures with GL_ALPHA as their format; if the texture has GL_RGBA as its format, everything works fine. I do not want to change the texture format to RGBA, so is there another way to read the texture data in GL_RGBA format even if the texture has the GL_ALPHA format?
I have created a texture with this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, null);
I am trying to read the data back through a framebuffer with the texture attached, using glReadPixels:
ByteBuffer pixels = memAlloc(4 * width * height); // Java (LWJGL) specific; works like a plain byte array
glBindTexture(GL_TEXTURE_2D, texture);
int fbo = glGenFramebuffers();
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
// glCheckFramebufferStatus(GL_FRAMEBUFFER) returns GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT after this line
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(fbo);
glBindTexture(GL_TEXTURE_2D, 0);
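In core OpenGL ES 2.0, GL_ALPHA is not a color-renderable format, which is why the attachment comes back as GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. A commonly suggested workaround is not to attach the alpha texture itself, but to draw it into a framebuffer whose color attachment is renderable (e.g. GL_RGBA) and read the pixels from there. A minimal C sketch, assuming a hypothetical shader program blitProgram that samples the alpha texture and writes the value to its output color, and a quad-drawing helper drawFullScreenQuad (neither is shown here):
// Hypothetical sketch: read back a GL_ALPHA texture by first drawing it
// into an RGBA-renderable FBO. blitProgram and drawFullScreenQuad are
// assumptions, not standard API; alphaTexture is the GL_ALPHA texture.
GLuint fbo, rgbaTex;
glGenTextures(1, &rgbaTex);
glBindTexture(GL_TEXTURE_2D, rgbaTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);        // color-renderable target
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, rgbaTex, 0);    // complete: RGBA is renderable
glViewport(0, 0, width, height);
glUseProgram(blitProgram);                  // samples the GL_ALPHA texture
glBindTexture(GL_TEXTURE_2D, alphaTexture);
drawFullScreenQuad();                       // your own geometry helper
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
glDeleteTextures(1, &rgbaTex);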

Related

Scale OpenGL texture and return bitmap in CPU memory

I have a texture on the GPU, identified by an OpenGL texture ID and target.
For further processing I need a bitmap in CPU memory that is 300 pixels wide, with the height scaled proportionally to the source aspect ratio.
The pixel format should be RGBA, ARGB or BGRA with float components.
How can this be done? Thanks for your reply.
I tried the following, but I get only white pixels back:
glEnable(GL_TEXTURE_2D);
// create render texture
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, (GLsizei)analyzeWidth, (GLsizei)analyzeHeight, 0, GL_RGBA, GL_FLOAT, 0);
unsigned int fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedTexture, 0);
// draw texture
glBindTexture(GL_TEXTURE_2D, inTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Draw a textured quad
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(0, 0, 0);
glTexCoord2f(0, 1); glVertex3f(0, 1, 0);
glTexCoord2f(1, 1); glVertex3f(1, 1, 0);
glTexCoord2f(1, 0); glVertex3f(1, 0, 0);
glEnd();
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status == GL_FRAMEBUFFER_COMPLETE)
{
}
unsigned char *buffer = CGBitmapContextGetData(mainCtx);
glReadPixels(0, 0, (GLsizei)analyzeWidth, (GLsizei)analyzeHeight, GL_RGBA, GL_FLOAT, buffer);
glDisable(GL_TEXTURE_2D);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
1. Create a second texture with the target size: 300 pixels wide, 300 * height / width pixels high.
2. Create a framebuffer object and attach the new texture as its color buffer.
3. Set appropriate texture filters for the (unscaled) source texture. You have the choice between point sampling (GL_NEAREST) and bilinear filtering (GL_LINEAR). If you are downscaling by more than a factor of 2, you might also consider mipmapping: call glGenerateMipmap on the source texture first and use one of the GL_..._MIPMAP_... minification filters. However, the availability of mipmapping depends on how the source texture was created; if it is an immutable texture object without the mipmap pyramid, this won't work.
4. Render a textured object (with the original source texture) to the new texture. The most intuitive geometry would be a viewport-filling rectangle; the most efficient would be a single triangle.
5. Read back the scaled texture with glReadPixels (via the FBO) or glGetTexImage (directly from the texture). For improved performance, you might consider asynchronous readbacks via pixel buffer objects. A sketch of these steps follows below.
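A minimal C sketch of these steps, assuming a hypothetical shader program scaleProgram that samples the source texture and a drawFullScreenTriangle helper (neither is standard API); it reads bytes rather than floats, since core ES 2.0 does not guarantee float readback:
// Hypothetical sketch of the downscale-and-read-back recipe above.
// srcTexture, srcWidth and srcHeight are assumed to be given.
GLsizei dstW = 300;
GLsizei dstH = (GLsizei)(300.0f * srcHeight / srcWidth);
GLuint dstTex, fbo;
glGenTextures(1, &dstTex);
glBindTexture(GL_TEXTURE_2D, dstTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dstW, dstH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);       // scaled render target
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, dstTex, 0);
// Filtering on the *source* texture controls the downscale quality.
glBindTexture(GL_TEXTURE_2D, srcTexture);
glGenerateMipmap(GL_TEXTURE_2D);                     // only if the texture allows it
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glViewport(0, 0, dstW, dstH);
glUseProgram(scaleProgram);                          // samples srcTexture
drawFullScreenTriangle();                            // your own geometry helper
unsigned char *pixels = malloc(4 * dstW * dstH);
glReadPixels(0, 0, dstW, dstH, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Convert to float components on the CPU if required.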

OpenGLES2: How to load and access a big float array

I have a large WxH float array:
float floatArray[W][H];
I want to access it in a fragment shader and I need to load/access it through a texture due to its size:
vec4 v4 = texture2D(tex, v_texCoord);
//Getting v4.x as floatArray[v_texCoord.x * W][v_texCoord.y * H]
I load the texture like this:
int texturenames[1];
glGenTextures(1, texturenames);
glActiveTexture(GL_TEXTURE0 + texturenames[0]);
glBindTexture(GL_TEXTURE_2D, texturenames[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0, GL_LUMINANCE, GL_FLOAT, floatArray);
glUniform1i(glGetUniformLocation(program_, "tex"), texturenames[0]);
I don't get the right values. Note that the third (internalformat) and seventh (format) parameters of glTexImage2D are GL_LUMINANCE.
void glTexImage2D(GLenum target,
                  GLint level,
                  GLint internalformat,
                  GLsizei width,
                  GLsizei height,
                  GLint border,
                  GLenum format,
                  GLenum type,
                  const GLvoid *data);
How can I load and access a big float array in OpenGLES2?
Short answer: you can't. Core OpenGL ES 2.0 doesn't support floating-point texturing (the OES_texture_float extension adds it, but you cannot count on it being available).
Given that you only want a single channel, you could encode the value in an RGBA unorm texture and recover it algorithmically in the shader, as sketched below, but it sounds horribly expensive on a mobile GPU.
OpenGL ES 3.0 does support float texturing, so that might provide more luck.
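A hedged sketch of that encoding idea: split each float (assumed to lie in [0, 1)) into four base-256 digits on the CPU, upload them as an ordinary GL_RGBA / GL_UNSIGNED_BYTE texture, and reassemble the value in the fragment shader with a dot product. The function name packFloatRGBA is illustrative, not a standard API:
// Hypothetical packing: a value in [0, 1) becomes 4 bytes of base-256
// fixed point, suitable for one GL_RGBA/GL_UNSIGNED_BYTE texel.
void packFloatRGBA(float v, unsigned char out[4]) {
    float r = v;
    for (int i = 0; i < 4; ++i) {
        r *= 256.0f;
        unsigned char digit = (unsigned char)r;  // truncate to 0..255
        out[i] = digit;
        r -= (float)digit;                       // keep the remainder
    }
}
// Matching reconstruction in a GLSL ES 1.00 fragment shader, as a C string.
// Sampling must use GL_NEAREST: interpolating encoded bytes produces garbage.
static const char *decodeShaderSrc =
    "precision highp float;\n"
    "uniform sampler2D tex;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "  vec4 t = texture2D(tex, v_texCoord);\n"
    "  float v = dot(t, vec4(255.0/256.0, 255.0/65536.0,\n"
    "                        255.0/16777216.0, 255.0/4294967296.0));\n"
    "  gl_FragColor = vec4(v);\n"
    "}\n";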

Generating and updating 8-bit gray-scale texture in OpenGL ES 2.0

For an OpenGL texture cache I need to initialize a large (≥ 2048x2048) texture and then frequently update little sections of it.
The following (pseudo-)code works:
// Setup texture
int[] buffer = new int[2048*2048 / 4]; // Generate dummy buffer with 1 byte per pixel
int id = glGenTexture();
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 2048, 2048, 0, GL_ALPHA, GL_UNSIGNED_BYTE, buffer);
// Perform update
glBindTexture(GL_TEXTURE_2D, id);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height, GL_ALPHA, GL_UNSIGNED_BYTE, data);
But the entirely unnecessary creation of a 4 MB buffer is undesirable, to say the least. So I tried the following instead:
// Setup texture
int id = glGenTexture();
glBindTexture(GL_TEXTURE_2D, id);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 0, 0, 2048, 2048, 0);
This gave me a GL_INVALID_OPERATION error, which I believe is caused by the fact that the framebuffer does not contain an alpha value; rather than just setting alpha to 1, the call fails.
Next attempt:
// Setup texture
int id = glGenTexture();
glBindTexture(GL_TEXTURE_2D, id);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 0, 0, 2048, 2048, 0);
This works, but now my glTexSubImage2D call fails with GL_INVALID_OPERATION because it specifies GL_ALPHA instead of GL_LUMINANCE. So, I changed that as well to get:
// Perform update
glBindTexture(GL_TEXTURE_2D, id);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
And I changed my shader to read the value from the r rather than the a component.
This works on some devices, but on the iPhone 3GS, I still get the GL_INVALID_OPERATION error in the glTexSubImage2D call. Why? And how can I fix this? Is there some way, for example, to change the internal texture format? Or can I create some other framebuffer that does have an alpha component that I can use as the source for glCopyTexImage2D?
data can be NULL in your glTexImage2D() call if you just want to allocate an empty texture:
data may be a null pointer. In this case, texture memory is allocated to accommodate a texture of width and height. You can then download subtextures to initialize this texture memory. The image is undefined if the user tries to apply an uninitialized portion of the texture image to a primitive.
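So the setup from the question can drop the dummy 4 MB buffer entirely; a minimal C sketch of that allocate-then-update pattern:
// Allocate an empty 2048x2048 GL_ALPHA texture: no CPU-side buffer needed.
GLuint id;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 2048, 2048, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, NULL);      // NULL: just allocate
// Later, update only the dirty region. The format/type must match the
// ones used at allocation time (GL_ALPHA / GL_UNSIGNED_BYTE here).
glBindTexture(GL_TEXTURE_2D, id);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height,
                GL_ALPHA, GL_UNSIGNED_BYTE, data);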

Render OpenGL ES 2.0 to image

I am trying to do some OpenGL ES 2.0 rendering to an image file, independent of the rendering being shown on the screen to the user. The image I'm rendering to is a different size than the user's screen. I just need a byte array of GL_RGB data. I'm familiar with glReadPixels, but I don't think it would do the trick in this case since I'm not pulling from an already-rendered user screen.
Pseudocode:
// Switch rendering to another buffer (framebuffer? renderbuffer?)
// Draw code here
// Save byte array of rendered data GL_RGB to file
// Switch rendering back to user's screen.
How can I do this without interrupting the user's display? I'd rather not have the user's screen flicker by drawing my desired information for a single frame, calling glReadPixels, and then having it disappear.
Again, I don't want to show anything to the user. Here's my code; it doesn't work. Am I missing something?
unsigned int canvasFrameBuffer;
bglGenFramebuffers(1, &canvasFrameBuffer);
bglBindFramebuffer(BGL_RENDERBUFFER, canvasFrameBuffer);
unsigned int canvasRenderBuffer;
bglGenRenderbuffers(1, &canvasRenderBuffer);
bglBindRenderbuffer(BGL_RENDERBUFFER, canvasRenderBuffer);
bglRenderbufferStorage(BGL_RENDERBUFFER, BGL_RGBA4, width, height);
bglFramebufferRenderbuffer(BGL_FRAMEBUFFER, BGL_COLOR_ATTACHMENT0, BGL_RENDERBUFFER, canvasRenderBuffer);
unsigned int canvasTexture;
bglGenTextures(1, &canvasTexture);
bglBindTexture(BGL_TEXTURE_2D, canvasTexture);
bglTexImage2D(BGL_TEXTURE_2D, 0, BGL_RGB, width, height, 0, BGL_RGB, BGL_UNSIGNED_BYTE, 0);
bglFramebufferTexture2D(BGL_FRAMEBUFFER, BGL_COLOR_ATTACHMENT0, BGL_TEXTURE_2D, canvasTexture, 0);
Matrix::matrix_t identity;
Matrix::LoadIdentity(&identity);
bglClearColor(1.0f, 1.0f, 1.0f, 1.0f);
bglClear(BGL_COLOR_BUFFER_BIT);
Draw(&identity, &identity, this);
bglFlush();
bglFinish();
byte *buffer = (byte*)Z_Malloc(width * height * 4, ZT_STATIC);
bglReadPixels(0, 0, width, height, BGL_RGB, BGL_UNSIGNED_BYTE, buffer);
SaveTGA("canvas.tga", buffer, width, height);
Z_Free(buffer);
// unbind frame buffer
bglBindRenderbuffer(BGL_RENDERBUFFER, 0);
bglBindFramebuffer(BGL_FRAMEBUFFER, 0);
bglDeleteTextures(1, &canvasTexture);
bglDeleteRenderbuffers(1, &canvasRenderBuffer);
bglDeleteFramebuffers(1, &canvasFrameBuffer);
Here's the solution, for anybody who needs it:
// Create framebuffer
unsigned int canvasFrameBuffer;
glGenFramebuffers(1, &canvasFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, canvasFrameBuffer);
// Attach renderbuffer
unsigned int canvasRenderBuffer;
glGenRenderbuffers(1, &canvasRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, canvasRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, canvasRenderBuffer);
// Clear the target (optional)
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Draw whatever you want here
char *buffer = (char*)malloc(width * height * 3);
// Note: GL_RGBA/GL_UNSIGNED_BYTE is the only combination ES 2.0 guarantees
// for glReadPixels; GL_RGB relies on the implementation-defined read format.
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
SaveTGA("canvas.tga", buffer, width, height); // Your own function to save the image data to a file (in this case, a TGA)
free(buffer);
// unbind frame buffer
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteRenderbuffers(1, &canvasRenderBuffer);
glDeleteFramebuffers(1, &canvasFrameBuffer);
You can render to a texture, read the pixels back, and then draw a quad with that texture (if you want to show the result to the user). It should not flicker, but it obviously degrades performance.
On iOS, for example:
OpenGL ES Render to Texture
Reading an OpenGL ES texture to a raw array

Render to texture problem with alpha

When I render to a texture and then draw that texture, everything seems to get darker. The result is this image:
http://img24.imageshack.us/img24/8061/87993367.png
I'm rendering the upper-left square with color (1, 1, 1, 0.8) to a texture, then rendering that texture plus the middle square (same color) to another texture, and finally that texture plus the lower-right square (same color) to the screen.
As you can see, each time I render to texture, everything gets a little darker.
My render-to-texture code looks like this (I'm using OpenGL ES on the iPhone):
// gen framebuffer
GLuint framebuffer;
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
// gen texture
GLuint texture;
glGenTextures(1, &texture);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);
// hook it up
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, texture, 0);
if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
return false;
// set up drawing
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glViewport(0, 0, Screen::Width, Screen::Height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, Screen::Width, 0, Screen::Height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor4f(1, 1, 1, 1);
// do whatever drawing we'll do here
Draw();
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
Is there anything that I'm doing wrong here? Do you need more code to figure it out? What might be going on here?
I'm only guessing:
Drawing the first square gives you 204 (0.8 * 255) in the RGB and alpha channels of the texture. When you draw that texture in the second pass (with GL_BLEND enabled, I presume), you're drawing light-gray 204 RGB at 80% alpha, which blends down to a medium gray. Each pass multiplies the alpha in again, so everything keeps getting darker.
Solution: use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and premultiply your colors, as in the sketch below.
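A minimal sketch of the premultiplied-alpha setup, in the same fixed-function ES 1.x style as the question's code (the color is the question's (1, 1, 1, 0.8)):
// Premultiplied alpha: scale RGB by A up front, then blend with
// (GL_ONE, GL_ONE_MINUS_SRC_ALPHA). This composites correctly through
// multiple render-to-texture passes instead of darkening each time.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// Instead of glColor4f(1, 1, 1, 0.8f), premultiply the color:
const GLfloat a = 0.8f;
glColor4f(1.0f * a, 1.0f * a, 1.0f * a, a);
// ... draw the squares / textured quads here ...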
