Is it possible to copy data from one framebuffer to another in OpenGL? - opengl-es

I guess it is somehow possible since this:
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
glResolveMultisampleFramebufferAPPLE();
does exactly that, and additionally resolves the multisampling. However, it's an Apple extension, and I was wondering whether there is something similar in the vanilla implementation that copies all the logical buffers from one framebuffer to another without the multisampling part. GL_READ_FRAMEBUFFER doesn't seem to be a valid target, so I'm guessing there is no direct way? How about workarounds?
EDIT: It seems it's possible to use glCopyImageSubData in OpenGL 4; unfortunately that doesn't help in my case, since I'm using OpenGL ES 2.0 on iPhone, which lacks that function. Any other way?
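(For reference, on desktop OpenGL 4.3+ that copy would look roughly like this; srcTex, dstTex, width and height are placeholders:)
// copies a width x height region of mip level 0 from srcTex to dstTex,
// no framebuffer involved (OpenGL 4.3+ / ARB_copy_image)
glCopyImageSubData(srcTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   dstTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   width, height, 1);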

glBlitFramebuffer accomplishes what you are looking for. Additionally, you can blit one texture onto another without requiring two framebuffers. I'm not sure whether using one FBO is possible with OpenGL ES 2.0, but the following code could easily be modified to use two FBOs; you just need to attach different textures to different framebuffer attachments. glBlitFramebuffer will even manage downsampling/upsampling for anti-aliasing applications. Here is an example of its usage:
// bind the fbo as both read and draw framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
// attach the source texture to one color attachment and read from it
glBindTexture(GL_TEXTURE_2D, m_textureHandle0);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_textureHandle0, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);
// attach the destination texture to another color attachment and draw to it
glBindTexture(GL_TEXTURE_2D, m_textureHandle1);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_textureHandle1, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
// specify source and destination (sub)rectangles;
// note the arguments are corner coordinates, not sizes
glBlitFramebuffer(from.left(), from.top(), from.left() + from.width(), from.top() + from.height(),
                  to.left(), to.top(), to.left() + to.width(), to.top() + to.height(),
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// release state
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

Tested in OpenGL 4; glBlitFramebuffer is not supported in OpenGL ES 2.0 (it was added in OpenGL ES 3.0).
I've fixed errors in the previous answer and generalized it into a function that supports two framebuffers:
// Assumes the two textures are the same dimensions
void copyFrameBufferTexture(int width, int height, int fboIn, int textureIn, int fboOut, int textureOut)
{
    // Bind input FBO + texture to a color attachment
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboIn);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureIn, 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    // Bind destination FBO + texture to another color attachment
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboOut);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, textureOut, 0);
    glDrawBuffer(GL_COLOR_ATTACHMENT1);
    // Specify source and destination (sub)rectangles
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    // Detach the textures again, using the same targets they were attached to
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, 0, 0);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
}
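For example, assuming you already have two FBOs and two same-sized textures (the names here are placeholders), a call would look like:
// copy the contents of m_texIn (on m_fboIn) into m_texOut (on m_fboOut)
copyFrameBufferTexture(1024, 768, m_fboIn, m_texIn, m_fboOut, m_texOut);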

You can't do it directly with OpenGL ES 2.0, and it seems there is no extension for it either.
I am not really sure what you are trying to achieve, but in general you can simply detach the attachments of the FBO in which you did your off-screen rendering. Then bind the default FBO so you can draw on screen; there you can simply draw a screen-filling quad with an orthographic camera and a shader that takes your off-screen generated textures as input.
You will also be able to do the resolve if you are using multi-sampled textures.
glBindFramebuffer(GL_FRAMEBUFFER, off_screenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Default FBO, on iOS it is 1 if I am correct
// Set the viewport at the size of screen
// Use your compositing shader (it doesn't have to manage any transform)
// Active and bind your textures
// Sent textures uniforms
// Draw your quad
Here is an example of the shader:
// Vertex
attribute vec2 in_position2D;
attribute vec2 in_texCoord0;
varying lowp vec2 v_texCoord0;
void main()
{
    v_texCoord0 = in_texCoord0;
    gl_Position = vec4(in_position2D, 0.0, 1.0);
}
// Fragment
uniform sampler2D u_texture0;
varying lowp vec2 v_texCoord0;
void main()
{
    gl_FragColor = texture2D(u_texture0, v_texCoord0);
}
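For completeness, here is a minimal sketch of the "Draw your quad" step that feeds this shader. It assumes program is the linked compositing program above, and uses client-side vertex arrays for brevity (valid in ES 2.0):
// Fullscreen quad as a triangle strip: x, y position and u, v texcoord per vertex
static const GLfloat quad[] = {
//    x      y     u     v
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};
GLint posLoc = glGetAttribLocation(program, "in_position2D");
GLint uvLoc  = glGetAttribLocation(program, "in_texCoord0");
glEnableVertexAttribArray(posLoc);
glEnableVertexAttribArray(uvLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);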

Related

Scale OpenGL texture and return bitmap in CPU memory

I have a texture on the GPU, defined by an OpenGL textureID and target.
For further processing I need a bitmap in CPU memory that is 300 pixels wide, with the height scaled proportionally to the source dimensions.
The pixel format should be RGBA, ARGB or BGRA with float components.
How can this be done?
Thanks for your reply.
I tried the following, but I get only white pixels back:
glEnable(GL_TEXTURE_2D);
// create render texture
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, (GLsizei)analyzeWidth, (GLsizei)analyzeHeight, 0,GL_RGBA, GL_FLOAT, 0);
unsigned int fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedTexture, 0);
// draw texture
glBindTexture(GL_TEXTURE_2D, inTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Draw a textured quad
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(0, 0, 0);
glTexCoord2f(0, 1); glVertex3f(0, 1, 0);
glTexCoord2f(1, 1); glVertex3f(1, 1, 0);
glTexCoord2f(1, 0); glVertex3f(1, 0, 0);
glEnd();
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status == GL_FRAMEBUFFER_COMPLETE)
{
}
unsigned char *buffer = CGBitmapContextGetData(mainCtx);
glReadPixels(0, 0, (GLsizei)analyzeWidth, (GLsizei)analyzeHeight, GL_RGBA, GL_FLOAT, buffer);
glDisable(GL_TEXTURE_2D);
glBindFramebuffer(GL_FRAMEBUFFER, 0); //
glDeleteFramebuffers(1, &fbo);
1. Create a second texture at the target size (e.g. 300 pixels wide, with the height scaled proportionally).
2. Create a framebuffer object and attach the new texture as its color buffer.
3. Set appropriate texture filters for the (unscaled) source texture. You have the choice between point sampling (GL_NEAREST) and bilinear filtering (GL_LINEAR). If you are downscaling by more than a factor of 2 you might also consider mipmapping: call glGenerateMipmap on the source texture first, and use one of the GL_..._MIPMAP_... minification filters. However, the availability of mipmapping depends on how the source texture was created; if it is an immutable texture object without the mipmap pyramid, this won't work.
4. Render a textured object (with the original source texture) to the new texture. The most intuitive geometry would be a viewport-filling rectangle; the most efficient would be a single triangle.
5. Read back the scaled texture with glReadPixels (via the FBO) or glGetTexImage (directly from the texture). For improved performance, you might consider asynchronous readbacks via pixel buffer objects. A sketch of these steps follows.
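A minimal sketch of steps 1, 2 and 5 (dstWidth, dstHeight and buffer are assumptions, and the float color format needs GL 3.0+ or ARB_texture_float):
// 1. Destination texture at the target size (float RGBA, as the question asks for)
GLuint dstTex, fbo;
glGenTextures(1, &dstTex);
glBindTexture(GL_TEXTURE_2D, dstTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, dstWidth, dstHeight, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// 2. FBO with the destination texture as its color buffer
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, dstTex, 0);
glViewport(0, 0, dstWidth, dstHeight); // easy to forget: without this the quad is mis-scaled
// 3./4. bind the source texture, set its filters, and draw the quad/triangle here
// 5. read the scaled result back to CPU memory
// (buffer must hold dstWidth * dstHeight * 4 floats)
glReadPixels(0, 0, dstWidth, dstHeight, GL_RGBA, GL_FLOAT, buffer);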

glReadPixels always returns a black image

Some time ago I wrote code to draw an OpenGL scene to a bitmap in Delphi RAD Studio XE7, and it worked well. This code draws and finalizes a scene, then gets the pixels using the glReadPixels function. I recently tried to compile exactly the same code in Lazarus; however, I get only a black image.
Here is the code
// create main render buffer
glGenFramebuffers(1, @m_OverlayFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_OverlayFrameBuffer);
// create and link color buffer to render to
glGenRenderbuffers(1, @m_OverlayRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
m_OverlayRenderBuffer);
// create and link depth buffer to use
glGenRenderbuffers(1, @m_OverlayDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER,
m_OverlayDepthBuffer);
// check if render buffers were created correctly and return result
Result := (glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE);
...
// flush OpenGL
glFinish;
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
// create pixels buffer
SetLength(pixels, (m_pOwner.ClientWidth * m_Factor) * (m_pOwner.ClientHeight * m_Factor) * 4);
// is alpha blending or antialiasing enabled?
if (m_Transparent or (m_Factor <> 1)) then
// notify that pixels will be read from color buffer
glReadBuffer(GL_COLOR_ATTACHMENT0);
// copy scene from OpenGL to pixels buffer
glReadPixels(0,
0,
m_pOwner.ClientWidth * m_Factor,
m_pOwner.ClientHeight * m_Factor,
GL_RGBA,
GL_UNSIGNED_BYTE,
pixels);
I have already verified, and I'm 100% sure, that something is really drawn on my scene (also on the Lazarus side), that GLext is well initialized, and that the framebuffer is correctly built (the condition glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE effectively returns true). I would be very grateful if someone could point out what I'm doing wrong in my code, knowing one more time that on the same computer the code works well in Delphi RAD Studio XE7 but not in Lazarus.
Regards

Image Rotation by using Opengl ES

I'm working with OpenGL ES 2.0 on an OMAP3530 development board running Windows CE 7.
My task is to load a 24-bit image file, rotate it by an angle about the z-axis, and export the image (buffer).
For this task I've created an FBO for off-screen rendering, loaded the image file as a texture using glTexImage2D(), applied the texture to a quad, rotated that quad using the PVRTMat4::RotationZ() API, and read the result back using glReadPixels(). Since it is a single-frame process, I run just one loop iteration.
Here are the problems I'm facing now:
1) All the API calls take a different amount of processing time on every run, i.e. when I run my application repeatedly I get different processing times for the same calls.
2) glDrawArrays() takes too much time (~50-80 ms).
3) glReadPixels() also takes too much time, ~95 ms for an 800x600 image.
4) Loading a 32-bit image is much faster than a 24-bit image, so a conversion is needed.
If anybody has faced/solved a similar problem, kindly suggest any improvements.
Here is the code snippet of my application.
void BindTexture(){
glGenTextures(1, &m_uiTexture);
glBindTexture(GL_TEXTURE_2D, m_uiTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ImageWidth, ImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pTexData);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_LINEAR );
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
}
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, TCHAR *lpCmdLine, int nCmdShow)
{
// Fragment and vertex shaders code
char* pszFragShader = "Same as in RenderToTexture sample";
char* pszVertShader = "Same as in RenderToTexture sample";
CreateWindow(ImageWidth, ImageHeight); // For this I've referred to the OGLES2HelloTriangle_Windows.cpp example
LoadImageBuffers();
BindTexture();
GenerateAndBindFrameAndRenderBuffer(); // pseudocode
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_auiFbo, 0);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ImageWidth, ImageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_auiDepthBuffer);
BindTexture();
GLfloat Angle = 0.02f;
GLfloat afVertices[] = {Vertices to Draw a QUAD};
glGenBuffers(1, &ui32Vbo);
LoadVBOs(); // pseudocode: APIs to load the VBOs
// Draws the quad for 1 frame
while(g_bDemoDone==false)
{
glBindFramebuffer(GL_FRAMEBUFFER, m_auiFbo);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
PVRTMat4 mRot,mTrans, mMVP;
mTrans = PVRTMat4::Translation(0,0,0);
mRot = PVRTMat4::RotationZ(Angle);
glBindBuffer(GL_ARRAY_BUFFER, ui32Vbo);
glDisable(GL_CULL_FACE);
int i32Location = glGetUniformLocation(uiProgramObject, "myPMVMatrix");
mMVP = mTrans * mRot ;
glUniformMatrix4fv(i32Location, 1, GL_FALSE, mMVP.ptr());
// Pass the vertex data
glEnableVertexAttribArray(VERTEX_ARRAY);
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, m_ui32VertexStride, 0);
// Pass the texture coordinates data
glEnableVertexAttribArray(TEXCOORD_ARRAY);
glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, m_ui32VertexStride, (void*) (3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);//
glReadPixels(0,0,ImageWidth ,ImageHeight,GL_RGBA,GL_UNSIGNED_BYTE,pOutTexData) ;
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
eglSwapBuffers(eglDisplay, eglSurface);
}
DeInitAll();
The PowerVR architecture cannot render a single frame and let the ARM read it back quickly. It is simply not designed to work that way; it is a tile-based deferred rendering architecture. The execution times you are seeing are to be expected, and using an FBO is not going to make it faster either. Also, beware that the OpenGL ES drivers on OMAP for Windows CE are of really poor quality. Consider yourself lucky if they work at all.
A better design would be to display the OpenGL ES rendering directly to the DSS and avoid using glReadPixels() and the FBO completely.
I got improved performance for rotating an image buffer by using multiple FBOs and PBOs.
Here is a pseudocode snippet of my application.
InitGL()
    GenerateShaders();
    Generate3Textures();  // generate 3 empty textures
    Generate3FBOs();      // generate 3 FBOs and attach one texture to each
    Generate3PBOs();      // generate 3 PBOs to read back from the FBOs
DrawGL()
{
    BindFBO1; BindTexture1; UploadToTexture1;
    Do some processing & draw it into FBO1;
    BindFBO2; BindTexture2; UploadToTexture2;
    Do some processing & draw it into FBO2;
    BindFBO3; BindTexture3; UploadToTexture3;
    Do some processing & draw it into FBO3;
    BindFBO1; ReadPixelsFromFBO1; UnpackToPBO1;
    BindFBO2; ReadPixelsFromFBO2; UnpackToPBO2;
    BindFBO3; ReadPixelsFromFBO3; UnpackToPBO3;
}
DeinitGL();
DeallocateAll();
This way I achieved a 50% performance improvement for the overall processing.
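For reference, the readback-into-PBO part could look roughly like this (PBOs are not in ES 2.0 core, so this assumes ES 3.0 / desktop GL or an equivalent extension; pbo, width and height are placeholders):
// start an asynchronous readback: with a pixel pack buffer bound,
// glReadPixels takes a byte offset into the buffer instead of a pointer
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// ... process the next FBO while the transfer completes ...
void *ptr = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GL_MAP_READ_BIT);
if (ptr) {
    // use the pixels here (e.g. copy them into your own image buffer)
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);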

I need to minimize the number of glDraw* calls

I'm working on a little 2D graphics/game library for personal use, and currently I'm trying to think of a way to improve performance when drawing tiled maps. Currently I am creating a static GL_QUADS VBO for each tile in the map and then drawing it to the screen. Each VBO references a texture loaded into memory, which is sub-imaged and mapped to the VBO.
Currently I have a 20 x 20 tile map that I am testing with. With my current implementation, since I have to draw each individual tile, that is 400 glDraw* calls every frame.
Is there any way to, for example, make each row of the tile map ONE VBO? That would reduce the glDraw* calls to 20 in this example. How would I map the sub-images? Individual tiles can be rotated.
I have seen some references to using a texture atlas. Would that be a good alternative? Any useful links on how to implement this in OpenGL?
CODE:
Current render method:
public void render() {
    texture.bind();
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    for (SpriteSheet spriteSheet : spriteSheets) {
        VBO vbo = spriteSheet.getVBO();
        float angle = spriteSheet.getAngle();
        vbo.bind();
        if (angle != 0) {
            glPushMatrix();
            Vector2f position = spriteSheet.getPosition();
            glTranslatef(position.x, position.y, 0);
            glRotatef(angle, 0.0f, 0.0f, 1);
            glTranslatef(-position.x, -position.y, 0);
            glVertexPointer(Vertex.positionElementCount, GL_FLOAT, Vertex.stride, Vertex.positionByteOffset);
            glColorPointer(Vertex.colorElementCount, GL_FLOAT, Vertex.stride, Vertex.colorByteOffset);
            glTexCoordPointer(Vertex.textureElementCount, GL_FLOAT, Vertex.stride, Vertex.textureByteOffset);
            glDrawArrays(vbo.getMode(), 0, Vertex.elementCount);
            glPopMatrix();
        } else {
            glVertexPointer(Vertex.positionElementCount, GL_FLOAT, Vertex.stride, Vertex.positionByteOffset);
            glColorPointer(Vertex.colorElementCount, GL_FLOAT, Vertex.stride, Vertex.colorByteOffset);
            glTexCoordPointer(Vertex.textureElementCount, GL_FLOAT, Vertex.stride, Vertex.textureByteOffset);
            glDrawArrays(vbo.getMode(), 0, Vertex.elementCount);
        }
        vbo.unbind();
    }
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    texture.unbind();
}
There are several things you can do.
A texture atlas is one option, but you could use a GL_TEXTURE_2D_ARRAY as well, using the third texture coordinate to select which layer to use.
The next thing to think about is instancing: have a single quad in the buffer and make OpenGL draw it several times, using an additional buffer to select the texture layer and rotation for each drawn instance, as in the sketch below.
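As an illustration, the instanced draw could look roughly like this (desktop GL 3.3+ / ES 3.0; INSTANCE_ATTRIB, instanceVbo, instanceData and numTiles are placeholders, and the vertex shader would read the per-instance vec4 to place, rotate and texture each tile):
// per-instance data: x/y offset, rotation angle, texture layer (one vec4 per tile)
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
glBufferData(GL_ARRAY_BUFFER, numTiles * 4 * sizeof(GLfloat), instanceData, GL_STATIC_DRAW);
glEnableVertexAttribArray(INSTANCE_ATTRIB);
glVertexAttribPointer(INSTANCE_ATTRIB, 4, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribDivisor(INSTANCE_ATTRIB, 1); // advance once per instance instead of per vertex
// one call draws the whole map: the quad is repeated numTiles times
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numTiles);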

How to efficiently copy depth buffer to texture on OpenGL ES

I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);
However, it appears that this use of glCopyTexImage2D is not supported on ES. Reading a related thread, it seems I can use the framebuffer and fragment shaders to extract the depth data. So I'm trying to write the depth component to the color buffer and then copy it:
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// turn on depth rendering
glUseProgram(m_BaseShader.uiId);
// this is a switch to cause the fragment shader to just dump out the depth component
glUniform1i(uiBaseShaderRenderDepth, true);
// and for this, the color buffer needs to be on
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
// and clear it to 1.0, like how the depth buffer starts
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw the scene
DrawScene();
// bind our texture
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
Here is the fragment shader:
uniform sampler2D sTexture;
uniform bool bRenderDepth;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;
void main()
{
    if (bRenderDepth) {
        gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
    } else {
        gl_FragColor = vec4(texture2D(sTexture, TexCoord).rgb * LightIntensity, 1.0);
    }
}
I have experimented with not having the 'bRenderDepth' branch, and it doesn't speed it up significantly.
Right now, pretty much just doing this step, it's at 14 fps, which obviously is not acceptable. If I pull the copy out, it's way above 30 fps. I'm getting two suggestions from the Xcode OpenGL ES analyzer on the copy command:
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: error:
Validation Error: glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
960, 640, 0) : Height<640> is not a power of two
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: warning:
GPU Wait on Texture: Your app updated a texture that is currently
used for rendering. This caused the CPU to wait for the GPU to
finish rendering.
I'll work to resolve the two issues above (perhaps they are the crux of it). In the meantime, can anyone suggest a more efficient way to pull that depth data into a texture?
Thanks in advance!
iOS devices generally support OES_depth_texture, so on devices where the extension is present, you can set up a framebuffer object with a depth texture as its only attachment:
GLuint g_uiDepthBuffer;
glGenTextures(1, &g_uiDepthBuffer);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// glTexParameteri calls omitted for brevity
GLuint g_uiDepthFramebuffer;
glGenFramebuffers(1, &g_uiDepthFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_uiDepthBuffer, 0);
Your texture then receives all the values being written to the depth buffer when you draw your scene (you can use a trivial fragment shader for this), and you can texture from it directly without needing to call glCopyTexImage2D.
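In the second pass you can then sample the depth texture directly; a minimal sketch, where program and the uDepthSampler uniform are assumptions about your own shader setup:
// second pass: no glCopyTexImage2D needed, just bind the depth texture for sampling
glBindFramebuffer(GL_FRAMEBUFFER, 0); // or whatever framebuffer you render the scene to
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer); // the depth texture filled above
glUniform1i(glGetUniformLocation(program, "uDepthSampler"), 0);
DrawScene();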
