glReadPixels always returns a black image

Some time ago I wrote code to draw an OpenGL scene to a bitmap in Delphi RAD Studio XE7, and it worked well. The code draws and finalizes a scene, then reads the pixels back using the glReadPixels function. I recently tried to compile exactly the same code in Lazarus, but I get only a black image.
Here is the code:
// create main render buffer
glGenFramebuffers(1, @m_OverlayFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_OverlayFrameBuffer);
// create and link color buffer to render to
glGenRenderbuffers(1, @m_OverlayRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                          GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER,
                          m_OverlayRenderBuffer);
// create and link depth buffer to use
glGenRenderbuffers(1, @m_OverlayDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                          GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER,
                          m_OverlayDepthBuffer);
// check if render buffers were created correctly and return result
Result := (glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE);
...
// flush OpenGL
glFinish;
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
// create pixels buffer
SetLength(pixels, (m_pOwner.ClientWidth * m_Factor) * (m_pOwner.ClientHeight * m_Factor) * 4);
// is alpha blending or antialiasing enabled?
if (m_Transparent or (m_Factor <> 1)) then
    // notify that pixels will be read from color buffer
    glReadBuffer(GL_COLOR_ATTACHMENT0);
// copy scene from OpenGL to pixels buffer
glReadPixels(0,
             0,
             m_pOwner.ClientWidth * m_Factor,
             m_pOwner.ClientHeight * m_Factor,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             pixels);
I have already verified that something is really drawn on my scene (on the Lazarus side too), that GLext is well initialized, and that the framebuffer is correctly built (the condition glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE does return true). I would be very grateful if someone could point out what I'm doing wrong in my code, knowing once more that on the same computer it works well in Delphi RAD Studio XE7 but not in Lazarus.
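For reference, here is the kind of generic sanity check that can narrow this down (a minimal C-style sketch; overlayFbo, width, height and pixels are placeholder names, not identifiers from the code above): bind the framebuffer explicitly for reading, select the read buffer unconditionally, and drain glGetError before the read-back.
GLenum err;
glBindFramebuffer(GL_READ_FRAMEBUFFER, overlayFbo); /* read from the FBO, not the window */
glReadBuffer(GL_COLOR_ATTACHMENT0);                 /* always, not only when blending */
while ((err = glGetError()) != GL_NO_ERROR)
    printf("GL error before readback: 0x%04x\n", err);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);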
Regards

Related

Anti-aliasing/smoothing/supersampling 2d images with opengl

I'm playing with 2D graphics in OpenGL - fractals and other fun stuff ^_^. My basic setup is rendering a couple triangles to fill the screen and using a fragment shader to draw cool stuff on them. I'd like to smooth things out a bit, so I started looking into supersampling. It's not obvious to me how to go about this. Here's what I've tried so far...
First, I looked at the Apple docs on anti-aliasing. I updated my pixel format initialization:
NSOpenGLPixelFormatAttribute attrs[] =
{
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFADepthSize, 24,
    NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion4_1Core,
    NSOpenGLPFASupersample,
    NSOpenGLPFASampleBuffers, 1,
    NSOpenGLPFASamples, 4,
    0
};
I also added the glEnable(GL_MULTISAMPLE); line. GL_MULTISAMPLE_FILTER_HINT_NV doesn't seem to be defined (docs appear to be out of date), so I wasn't sure what to do there.
That made my renders slower but doesn't seem to be doing any anti-aliasing, so I tried the "Render-to-FBO" approach described on the OpenGL Wiki on Multisampling. I've tried a bunch of variations, with a variety of outcomes: successful renders (which don't appear to be anti-aliased), rendering garbage to the screen (fun!), crashes (the app evaporates and I get a system dialog about graphics issues), and making my laptop unresponsive aside from the cursor (got the system dialog about graphics issues after a hard reboot).
I am checking my framebuffer's status before drawing, so I know that's not the issue. And I'm sure I'm rendering with hardware, not software - saw that suggestion on other posts.
I've spent a fair amount of time on it and still don't quite understand how to approach this. One thing I'd love some help on is how to query GL to see if supersampling is enabled properly, or how to tell how many times my fragment shader is called, etc. I'm also a bit confused about where some of the calls go - most examples I find just say which methods to call, but don't specify which ones need to go in the draw callback. Anybody have a simple example of SSAA with OpenGL 3 or 4 and OSX... or other things to try?
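For the "is multisampling actually enabled?" part, one standard way is to query the context state directly; a minimal sketch using the ordinary glGet queries (plain C, any GL 3+ context):
GLint sampleBuffers = 0, samples = 0;
glGetIntegerv(GL_SAMPLE_BUFFERS, &sampleBuffers); // 1 if the current framebuffer is multisampled
glGetIntegerv(GL_SAMPLES, &samples);              // number of coverage samples per pixel
printf("sample buffers: %d, samples: %d\n", sampleBuffers, samples);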
Edit: drawing code - super broken (don't judge me), but for reference:
- (void)draw
{
    glBindVertexArray(_vao); // todo: is this necessary? (also in init)
    glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(GLfloat), points, GL_STATIC_DRAW);

    glGenTextures(1, &_tex);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, _tex);
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, _width * 2, _height * 2, false);

    glGenFramebuffers(1, &_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, _fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, _tex, 0);

    GLint status;
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"incomplete buffer 0x%x", status);
        return;
    }

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
    glDrawBuffer(GL_BACK);
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height, GL_COLOR_BUFFER_BIT, GL_LINEAR);

    glDeleteTextures(1, &_tex);
    glDeleteFramebuffers(1, &_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Update:
I changed my code per Reto's suggestion below:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
This caused the program to render garbage to the screen. I then got rid of the * 2 multiplier, and it still drew garbage to the screen. I then turned off the NSOpenGLPFA options related to multi/super-sampling, and it rendered normally, with no anti-aliasing.
I also tried using a non-multisample texture, but kept getting incomplete attachment errors. I'm not sure if this is due to the NVidia issue mentioned on the OpenGL wiki (will post in a comment since I don't have enough rep to post more than 2 links) or something else. If someone could suggest a way to find out why the attachment is incomplete, that would be very, very helpful.
Finally, I tried using a renderbuffer instead of a texture, and found that specifying width and height greater than the viewport size in glRenderbufferStorage doesn't seem to work as expected.
GLuint rb;
glGenRenderbuffers(1, &rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, _width * 2, _height * 2);
// ...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
... renders in the bottom left hand 1/4 of the screen. It doesn't appear to be smoother though...
Update 2: doubled the viewport size, it's no smoother. Turning NSOpenGLPFASupersample back on still causes it to draw garbage to the screen. >.<
Update 3: I'm an idiot, it's totally smoother. It just doesn't look good because I'm using an ugly color scheme. And I have to double all my coordinates because the viewport is 2x. Oh well. I'd still love some help understanding why NSOpenGLPFASupersample is causing such crazy behavior...
Your sequence of calls here looks like it wouldn't do what you intended:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glDrawBuffer(GL_BACK);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
When you call glClear() and glDrawArrays(), your current draw framebuffer, which is determined by the last call to glBindFramebuffer(GL_DRAW_FRAMEBUFFER, ...), is the default framebuffer. So you never render to the FBO. Let me annotate the above:
// Set draw framebuffer to default (0) framebuffer. This is where the rendering will go.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
// Set read framebuffer to the FBO.
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
// This is redundant, GL_BACK is the default draw buffer for the default framebuffer.
glDrawBuffer(GL_BACK);
// Clear the current draw framebuffer, which is the default framebuffer.
glClear(GL_COLOR_BUFFER_BIT);
// Draw to the current draw framebuffer, which is the default framebuffer.
glDrawArrays(GL_TRIANGLES, 0, 6);
// Copy from read framebuffer (which is the FBO) to the draw framebuffer (which is the
// default framebuffer). Since no rendering was done to the FBO, this will copy garbage
// into the default framebuffer, wiping out what was previously rendered.
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
To get this working, you need to set the draw framebuffer to the FBO while rendering, and then set the read framebuffer to the FBO and the draw framebuffer to the default for the copy:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
Recap:
Draw commands write to GL_DRAW_FRAMEBUFFER.
glBlitFramebuffer() copies from GL_READ_FRAMEBUFFER to GL_DRAW_FRAMEBUFFER.
A couple more remarks on the code:
Since you're creating a multisample texture of twice the size, you're using both multisampling and supersampling at the same time:
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, _tex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
_width * 2, _height * 2, false);
Which is entirely legal. But it's... a lot of sampling. If you just want supersampling, you can use a regular texture.
You could use a renderbuffer instead of a texture for the FBO color target. No huge advantage, but it's simpler, and potentially more efficient. You only need to use textures as attachments if you want to sample the result later, which is not the case here.
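For the pure-supersampling route mentioned above, a hedged sketch (GL 3+ core; this reuses _fbo, _tex, _width, _height from the question but is not the author's code) might look like:
// Allocate a regular 2x texture and attach it to the FBO
glBindTexture(GL_TEXTURE_2D, _tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, _width * 2, _height * 2, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, _tex, 0);
// Render at the supersampled size...
glViewport(0, 0, _width * 2, _height * 2);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
// ...then downsample into the default framebuffer with a linear blit
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glViewport(0, 0, _width, _height); // restore the window-sized viewport for later drawing
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);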

Save image from opengl (pyqt) in high resolution

I have a PyQt application in which an QtOpenGL.QGLWidget shows a wind turbine.
I would like to save the turbine as an image in high resolution, but so far I am only able to save the rendered screen image, i.e. in my case 1680x1050 minus borders, toolbars, etc.
glPixelStorei(GL_PACK_ALIGNMENT, 1)
data = glReadPixels(0, 0, self.width, self.height, GL_RGBA, GL_UNSIGNED_BYTE)
How can I get around this limitation?
EDIT
I have tried using a framebuffer,
from __future__ import division
import OpenGL
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from PIL import Image
import time, sys
import numpy as np

WIDTH = 400
HEIGHT = 300

def InitGL():
    glMatrixMode(GL_PROJECTION)
    gluPerspective(45.0, float(WIDTH) / float(HEIGHT), 0.1, 100.0)
    glMatrixMode(GL_MODELVIEW)

def DrawGLScene():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    glTranslatef(0, 0., -3)
    glutWireTeapot(1)
    glFlush()

def capture_screen():
    DrawGLScene()
    glPixelStorei(GL_PACK_ALIGNMENT, 1)
    data = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)
    image = Image.fromstring("RGBA", (WIDTH, HEIGHT), data)
    image.transpose(Image.FLIP_TOP_BOTTOM).show()

def capture_fbo(width=800, height=600):
    fbo = glGenFramebuffers(1)
    render_buf = glGenRenderbuffers(1)
    glBindRenderbuffer(GL_RENDERBUFFER, render_buf)
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, width, height)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf)
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    glTranslatef(1, 1., -3)
    glScale(width / WIDTH, height / HEIGHT, 1)
    glutWireTeapot(1.0)
    glFlush()
    glReadBuffer(GL_COLOR_ATTACHMENT0)
    data = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
    image = Image.fromstring("RGBA", (width, height), data)
    image.transpose(Image.FLIP_TOP_BOTTOM).show()
    glDeleteFramebuffers(1, [fbo])
    glDeleteRenderbuffers(1, [render_buf])

glutInit(sys.argv)
glutInitDisplayMode(GLUT_RGBA | GLUT_DEPTH)
glutInitWindowSize(WIDTH, HEIGHT)
window = glutCreateWindow("")
glutDisplayFunc(DrawGLScene)
InitGL()
DrawGLScene()
capture_screen()
capture_fbo()
glutMainLoop()
but nothing is drawn in the areas outside the normal screen window area.
The way glReadPixels works is it reads the pixels from the currently selected framebuffer.
Not sure what a framebuffer is? Here's the wiki page: http://www.opengl.org/wiki/Framebuffer_Object
A quick explanation: you can think of a framebuffer as a "canvas" that your graphics card draws on. You can have as many framebuffers as you want (sort of... most drivers define a limit somewhere), but only one is displayed: the default framebuffer, which can only be as large as the window you are displaying on screen.
Here's a tutorial on making a framebuffer: http://www.lighthouse3d.com/tutorials/opengl-short-tutorials/opengl_framebuffer_objects/
Modify it so that the framebuffer you are creating is the resolution that you want to take a screenshot from.
Once you have the framebuffer set up, you can draw to it like so:
glBindFramebuffer(GL_FRAMEBUFFER, buffer); // binds the FBO for both drawing and reading
/* drawing code */
glReadPixels(...); // reads from the currently bound (read) framebuffer
If you want to be able to still see something on screen, you will need to draw to the default frame buffer after drawing to the new framebuffer:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); //"0" means return to default framebuffer
/* same drawing code */
Note that this means you will be drawing your entire scene twice! Make sure your framerate can handle it.
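One detail worth adding, since it matches the symptom in the question: the viewport defaults to the window size, so rendering into a larger FBO also needs a matching glViewport call. A minimal C-style sketch (big_width, big_height and pixels are placeholder names, not from the question's code):
GLuint fbo, rb;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, big_width, big_height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rb);
glViewport(0, 0, big_width, big_height); // without this, drawing stays clipped to the window-sized viewport
/* draw the scene here, then: */
glReadPixels(0, 0, big_width, big_height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);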

Image Rotation by using Opengl ES

I'm working with OpenGL ES 2.0 using an OMAP3530 development board on Windows CE 7.
My task is to load a 24-bit image file, rotate it by an angle about the z-axis, and export the image (buffer).
For this task I've created an FBO for off-screen rendering, loaded the image file as a texture using glTexImage2D(), applied that texture to a quad, rotated the quad using the PVRTMat4::RotationZ() API, and read it back using glReadPixels(). Since it is a single-frame process, the loop runs only once.
Here are the problems I'm facing now:
1) Every API call takes a different amount of processing time on each run, i.e. when I run my application repeatedly I get different timings for the same calls.
2) glDrawArrays() is taking too much time (~50 ms - 80 ms).
3) glReadPixels() is also taking too much time, ~95 ms for an 800x600 image.
4) Loading a 32-bit image is much faster than loading a 24-bit image, so a conversion is needed.
If anybody has faced or solved a similar problem, I'd be grateful for any suggestions.
Here is the code snippet of my application:
void BindTexture(){
    glGenTextures(1, &m_uiTexture);
    glBindTexture(GL_TEXTURE_2D, m_uiTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ImageWidth, ImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pTexData);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, TCHAR *lpCmdLine, int nCmdShow)
{
    // Fragment and vertex shaders code
    char* pszFragShader = "Same as in RenderToTexture sample";
    char* pszVertShader = "Same as in RenderToTexture sample";
    CreateWindow(ImageWidth, ImageHeight); // for this I've referred to the OGLES2HelloTriangle_Windows.cpp example
    LoadImageBuffers();
    BindTexture();
    // Generate & bind frame and render buffers
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_auiFbo, 0);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ImageWidth, ImageHeight);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_auiDepthBuffer);
    BindTexture();
    GLfloat Angle = 0.02f;
    GLfloat afVertices[] = { /* vertices to draw a quad */ };
    glGenBuffers(1, &ui32Vbo);
    LoadVBOs(); // APIs to load the VBOs
    // Draws a triangle for 1 frame
    while(g_bDemoDone == false)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, m_auiFbo);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        PVRTMat4 mRot, mTrans, mMVP;
        mTrans = PVRTMat4::Translation(0, 0, 0);
        mRot = PVRTMat4::RotationZ(Angle);
        glBindBuffer(GL_ARRAY_BUFFER, ui32Vbo);
        glDisable(GL_CULL_FACE);
        int i32Location = glGetUniformLocation(uiProgramObject, "myPMVMatrix");
        mMVP = mTrans * mRot;
        glUniformMatrix4fv(i32Location, 1, GL_FALSE, mMVP.ptr());
        // Pass the vertex data
        glEnableVertexAttribArray(VERTEX_ARRAY);
        glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, m_ui32VertexStride, 0);
        // Pass the texture coordinates data
        glEnableVertexAttribArray(TEXCOORD_ARRAY);
        glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, m_ui32VertexStride, (void*) (3 * sizeof(GLfloat)));
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glReadPixels(0, 0, ImageWidth, ImageHeight, GL_RGBA, GL_UNSIGNED_BYTE, pOutTexData);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        eglSwapBuffers(eglDisplay, eglSurface);
    }
    DeInitAll();
The PowerVR architecture cannot render a single frame and let the ARM read it back quickly. It is just not designed to work that way - it is a tile-based deferred rendering architecture. The execution times you are seeing are to be expected, and using an FBO is not going to make it faster either. Also, beware that the OpenGL ES drivers on OMAP for Windows CE are really poor quality. Consider yourself lucky if they work at all.
A better design would be to display the OpenGL ES rendering directly to the DSS and avoid using glReadPixels() and the FBO completely.
I got improved performance for rotating an image buffer by using multiple FBOs & PBOs.
Here is a pseudocode snippet of my application:
InitGL()
    GenerateShaders();
    Generate3Textures();  // generate 3 null textures
    Generate3FBO();       // generate 3 FBOs & attach each texture to one FBO
    Generate3PBO();       // generate 3 PBOs to read back from the FBOs

DrawGL()
{
    BindFBO1; BindTexture1; UploadtoTexture1;
    Do some processing & draw it in FBO1;

    BindFBO2; BindTexture2; UploadtoTexture2;
    Do some processing & draw it in FBO2;

    BindFBO3; BindTexture3; UploadtoTexture3;
    Do some processing & draw it in FBO3;

    BindFBO1; ReadPixelfromFBO1; UnpackToPBO1;
    BindFBO2; ReadPixelfromFBO2; UnpackToPBO2;
    BindFBO3; ReadPixelfromFBO3; UnpackToPBO3;
}

DeinitGL();
DeallocateALL();
This way I achieved a 50% performance increase for the overall processing.
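For reference, the read-back side of such an FBO + PBO pipeline might look like the hedged sketch below. Note this needs desktop GL or OpenGL ES 3.0; ES 2.0 itself has no GL_PIXEL_PACK_BUFFER, so on ES 2.0 hardware it depends on extensions. width and height are placeholder names.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
// With a pack PBO bound, glReadPixels returns without waiting for the copy;
// the last argument is a byte offset into the PBO instead of a client pointer.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
/* ... render the next frame here to hide the transfer latency ... */
void *ptr = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GL_MAP_READ_BIT);
if (ptr) {
    /* process or copy the pixel data */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);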

Render OpenGL ES 2.0 to image

I am trying to do some OpenGL ES 2.0 rendering to an image file, independent of the rendering being shown on the screen to the user. The image I'm rendering to is a different size than the user's screen. I just need a byte array of GL_RGB data. I'm familiar with glReadPixels, but I don't think it would do the trick in this case since I'm not pulling from an already-rendered user screen.
Pseudocode:
// Switch rendering to another buffer (framebuffer? renderbuffer?)
// Draw code here
// Save byte array of rendered data GL_RGB to file
// Switch rendering back to user's screen.
How can I do this without interrupting the user's display? I'd rather not have to flicker the user's screen, drawing my desired information for a single frame, glReadPixel-ing and then having it disappear.
Again, I don't want it to show anything to the user. Here's my code. It doesn't work... am I missing something?
unsigned int canvasFrameBuffer;
bglGenFramebuffers(1, &canvasFrameBuffer);
bglBindFramebuffer(BGL_RENDERBUFFER, canvasFrameBuffer);
unsigned int canvasRenderBuffer;
bglGenRenderbuffers(1, &canvasRenderBuffer);
bglBindRenderbuffer(BGL_RENDERBUFFER, canvasRenderBuffer);
bglRenderbufferStorage(BGL_RENDERBUFFER, BGL_RGBA4, width, height);
bglFramebufferRenderbuffer(BGL_FRAMEBUFFER, BGL_COLOR_ATTACHMENT0, BGL_RENDERBUFFER, canvasRenderBuffer);
unsigned int canvasTexture;
bglGenTextures(1, &canvasTexture);
bglBindTexture(BGL_TEXTURE_2D, canvasTexture);
bglTexImage2D(BGL_TEXTURE_2D, 0, BGL_RGB, width, height, 0, BGL_RGB, BGL_UNSIGNED_BYTE, 0);
bglFramebufferTexture2D(BGL_FRAMEBUFFER, BGL_COLOR_ATTACHMENT0, BGL_TEXTURE_2D, canvasTexture, 0);
Matrix::matrix_t identity;
Matrix::LoadIdentity(&identity);
bglClearColor(1.0f, 1.0f, 1.0f, 1.0f);
bglClear(BGL_COLOR_BUFFER_BIT);
Draw(&identity, &identity, this);
bglFlush();
bglFinish();
byte *buffer = (byte*)Z_Malloc(width * height * 4, ZT_STATIC);
bglReadPixels(0, 0, width, height, BGL_RGB, BGL_UNSIGNED_BYTE, buffer);
SaveTGA("canvas.tga", buffer, width, height);
Z_Free(buffer);
// unbind frame buffer
bglBindRenderbuffer(BGL_RENDERBUFFER, 0);
bglBindFramebuffer(BGL_FRAMEBUFFER, 0);
bglDeleteTextures(1, &canvasTexture);
bglDeleteRenderbuffers(1, &canvasRenderBuffer);
bglDeleteFramebuffers(1, &canvasFrameBuffer);
Here's the solution, for anybody who needs it:
// Create framebuffer
unsigned int canvasFrameBuffer;
glGenFramebuffers(1, &canvasFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, canvasFrameBuffer); // note: GL_FRAMEBUFFER, not GL_RENDERBUFFER
// Attach renderbuffer
unsigned int canvasRenderBuffer;
glGenRenderbuffers(1, &canvasRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, canvasRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, canvasRenderBuffer);
// Clear the target (optional)
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Draw whatever you want here
glPixelStorei(GL_PACK_ALIGNMENT, 1); // width * 3 bytes per row is not always 4-byte aligned
char *buffer = (char*)malloc(width * height * 3);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
SaveTGA("canvas.tga", buffer, width, height); // Your own function to save the image data to a file (in this case, a TGA)
free(buffer);
// unbind frame buffer
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteRenderbuffers(1, &canvasRenderBuffer);
glDeleteFramebuffers(1, &canvasFrameBuffer);
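A caveat on the GL_RGB read above: in OpenGL ES 2.0, GL_RGBA + GL_UNSIGNED_BYTE is the only format/type pair glReadPixels is guaranteed to accept; one additional pair is implementation-defined and can be queried, as in this hedged sketch:
GLint readFormat, readType;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);
// Only use GL_RGB / GL_UNSIGNED_BYTE if it matches this queried pair.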
You can render to a texture, read the pixels and draw a quad with that texture (if you want to show it to the user). It should not flicker, but it obviously degrades performance.
On iOS for example:
OpenGL ES Render to Texture
Reading a openGL ES texture to a raw array

Is it possible to copy data from one framebuffer to another in OpenGL?

I guess it is somehow possible since this:
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
glResolveMultisampleFramebufferAPPLE();
does exactly that, and on top resolves the multisampling. However, it's an Apple extension and I was wondering if there is something similar that copies all the logical buffers from one framebuffer to another and doesn't do the multisampling part in the vanilla implementation. GL_READ_FRAMEBUFFER doesn't seem to be a valid target, so I'm guessing there is no direct way? How about workarounds?
EDIT: Seems it's possible to use glCopyImageSubData in OpenGL 4, unfortunately not in my case since I'm using OpenGL ES 2.0 on iPhone, which seems to be lacking that function. Any other way?
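(For completeness: plain OpenGL ES 2.0 does have glCopyTexSubImage2D, which copies from the currently bound framebuffer into a texture and covers many FBO-to-texture copies without glBlitFramebuffer. A hedged sketch, with sourceFbo and destTexture as placeholder names; destTexture must already be allocated with glTexImage2D:)
glBindFramebuffer(GL_FRAMEBUFFER, sourceFbo);  // copy source: the bound framebuffer
glBindTexture(GL_TEXTURE_2D, destTexture);     // copy destination: the bound texture
// Copies a width x height region starting at (0,0) in the framebuffer
// into the texture at offset (0,0), mip level 0.
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);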
glBlitFramebuffer accomplishes what you are looking for. Additionally, you can blit one texture onto another without requiring two framebuffers. I'm not sure using one FBO is possible with OpenGL ES 2.0, but the following code could easily be modified to use two FBOs. You just need to attach different textures to different framebuffer attachments. The glBlitFramebuffer function will even manage downsampling/upsampling for anti-aliasing applications! Here is an example of its usage:
// bind fbo as read / draw fbo
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,m_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
// bind source texture to color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle0);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_textureHandle0, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
// bind destination texture to another color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle1);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_textureHandle1, 0);
glReadBuffer(GL_COLOR_ATTACHMENT1);
// specify source, destination drawing (sub)rectangles.
glBlitFramebuffer(from.left(),from.top(), from.width(), from.height(),
to.left(),to.top(), to.width(), to.height(), GL_COLOR_BUFFER_BIT, GL_NEAREST);
// release state
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
Tested in OpenGL 4; glBlitFramebuffer is not supported in OpenGL ES 2.0.
I've fixed errors in the previous answer and generalized into a function that can support two framebuffers:
// Assumes the two textures are the same dimensions
void copyFrameBufferTexture(int width, int height, int fboIn, int textureIn, int fboOut, int textureOut)
{
    // Bind input FBO + texture to a color attachment
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboIn);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureIn, 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);

    // Bind destination FBO + texture to another color attachment
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboOut);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, textureOut, 0);
    glDrawBuffer(GL_COLOR_ATTACHMENT1);

    // Specify source and destination drawing (sub)rectangles
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    // Unbind the color attachments (using the matching read/draw targets)
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, 0, 0);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
}
You can do it directly with OpenGL ES 2.0, and it seems that no extension is needed either.
I am not really sure what you are trying to achieve, but in a general way: simply remove the attachments of the FBO in which you have accomplished your off-screen rendering. Then bind the default FBO to be able to draw on screen; here you can simply draw a quad with an orthographic camera that fills the screen, using a shader that takes your off-screen generated textures as input.
You will be able to do the resolve too if you are using multi-sampled textures.
glBindFramebuffer(GL_FRAMEBUFFER, off_screenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0); // detach the texture
glBindFramebuffer(GL_FRAMEBUFFER, 0); // default FBO; on iOS it is 1, if I am correct
// Set the viewport to the size of the screen
// Use your compositing shader (it doesn't have to manage any transform)
// Activate and bind your textures
// Send the texture uniforms
// Draw your quad
Here is an example of the shaders:
// Vertex
attribute vec2 in_position2D;
attribute vec2 in_texCoord0;
varying lowp vec2 v_texCoord0;
void main()
{
    v_texCoord0 = in_texCoord0;
    gl_Position = vec4(in_position2D, 0.0, 1.0);
}

// Fragment
uniform sampler2D u_texture0;
varying lowp vec2 v_texCoord0;
void main()
{
    gl_FragColor = texture2D(u_texture0, v_texCoord0);
}
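A possible client-side draw for that compositing quad (a hypothetical sketch, not from the answer: it assumes in_position2D and in_texCoord0 were bound to attribute locations 0 and 1 before linking, and uses ES 2.0 client-side vertex arrays; compositeProgram and offscreenTexture are placeholder names):
static const GLfloat quad[] = {
    // x,    y,   u,   v
    -1.f, -1.f, 0.f, 0.f,
     1.f, -1.f, 1.f, 0.f,
    -1.f,  1.f, 0.f, 1.f,
     1.f,  1.f, 1.f, 1.f,
};
glUseProgram(compositeProgram); // the program built from the shaders above
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, offscreenTexture);
glUniform1i(glGetUniformLocation(compositeProgram, "u_texture0"), 0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);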
