Generating and updating an 8-bit grayscale texture in OpenGL ES 2.0

For an OpenGL texture cache I need to initialize a large (≥ 2048x2048) texture and then frequently update little sections of it.
The following (pseudo-)code works:
// Setup texture
int[] buffer = new int[2048*2048 / 4]; // Dummy buffer: 1 byte per pixel, 4 pixels packed per int (4 MB total)
int id = glGenTexture();
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 2048, 2048, 0, GL_ALPHA, GL_UNSIGNED_BYTE, buffer);
// Perform update
glBindTexture(GL_TEXTURE_2D, id);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height, GL_ALPHA, GL_UNSIGNED_BYTE, data);
But allocating a throwaway 4 MB buffer just to initialize the texture is undesirable, to say the least. So, I tried the following instead:
// Setup texture
int id = glGenTexture();
glBindTexture(GL_TEXTURE_2D, id);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 0, 0, 2048, 2048, 0);
This gave me a GL_INVALID_OPERATION error, which I believe is because the framebuffer has no alpha channel; instead of simply treating the missing alpha as 1, the call fails.
Next attempt:
// Setup texture
int id = glGenTexture();
glBindTexture(GL_TEXTURE_2D, id);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 0, 0, 2048, 2048, 0);
This works, but now my glTexSubImage2D call fails with GL_INVALID_OPERATION because it specifies GL_ALPHA instead of GL_LUMINANCE. So, I changed that as well to get:
// Perform update
glBindTexture(GL_TEXTURE_2D, id);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
And I changed my shader to read the value from the r rather than the a component.
This works on some devices, but on the iPhone 3GS, I still get the GL_INVALID_OPERATION error in the glTexSubImage2D call. Why? And how can I fix this? Is there some way, for example, to change the internal texture format? Or can I create some other framebuffer that does have an alpha component that I can use as the source for glCopyTexImage2D?

data can be NULL in your glTexImage2D() call if you just want to allocate an empty texture. From the reference page:
data may be a null pointer. In this case, texture memory is allocated to accommodate a texture of width and height. You can then download subtextures to initialize this texture memory. The image is undefined if the user tries to apply an uninitialized portion of the texture image to a primitive.
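For example, a minimal C sketch of the buffer-free setup (assuming an ES 2.0 context; x, y, width, height and data as in the question):

// Allocate an empty 2048x2048 8-bit alpha texture: passing NULL reserves
// storage without uploading anything, so no 4 MB dummy buffer is needed.
GLuint id;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 2048, 2048, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, NULL);

// Updates work exactly as before; only the regions you touch are defined.
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height,
                GL_ALPHA, GL_UNSIGNED_BYTE, data);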

Related

Read data from texture with any internal format on ES

I am trying to read the data of a texture on OpenGL ES. The problem with my method is that framebuffers do not accept textures with GL_ALPHA as format. If the texture has GL_RGBA as format, everything works fine. I do not want to change the texture format to RGBA, so is there another way to read the texture data as GL_RGBA format even if the texture has the GL_ALPHA format?
I have created a texture with this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, null);
I am trying to read the data with a framebuffer with a texture attachment and glReadPixels
ByteBuffer pixels = memAlloc(4 * width * height); // Java-specific: allocates a direct buffer that works like a byte array
glBindTexture(GL_TEXTURE_2D, texture);
int fbo = glGenFramebuffers();
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
// glCheckFramebufferStatus(GL_FRAMEBUFFER) returns GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT after this line
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(fbo);
glBindTexture(GL_TEXTURE_2D, 0);
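One common workaround, sketched here in C (not from this thread; copyProg and drawFullscreenQuad() are assumed helpers, with copyProg's fragment shader writing the sampled texel straight to gl_FragColor): since GL_ALPHA is not color-renderable, draw the alpha texture into an RGBA attachment first, then read that back.

// Create an RGBA texture (color-renderable in ES) as the FBO target.
GLuint rgbaTex, fbo;
glGenTextures(1, &rgbaTex);
glBindTexture(GL_TEXTURE_2D, rgbaTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, rgbaTex, 0);
glViewport(0, 0, width, height);

// Draw the GL_ALPHA texture into the RGBA target, then read it back;
// the alpha values land in the A channel of the read-back buffer.
glUseProgram(copyProg);                  // hypothetical pass-through program
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);   // the GL_ALPHA source texture
drawFullscreenQuad();                    // hypothetical helper
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);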

OpenGL ES depth framebuffer GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT

I've been trying to add shadow mapping to my OpenGL ES project and I've just found out that my framebuffer status returns GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT.
Here's my code to create the framebuffer:
// create fbo
int[] fboPtr = new int[1];
GLES30.glGenFramebuffers(1, fboPtr, 0);
fbo = fboPtr[0];
// use fbo
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, fbo);
// create depthMap
int[] depthMapPtr = new int[1];
GLES30.glGenTextures(1, depthMapPtr, 0);
depthMap = depthMapPtr[0];
// use depthMap
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, depthMap);
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_DEPTH_COMPONENT, size, size,
0, GLES30.GL_DEPTH_COMPONENT, GLES30.GL_FLOAT, null);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_NEAREST);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_REPEAT);
GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_REPEAT);
GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_DEPTH_ATTACHMENT, GLES30.GL_TEXTURE_2D, depthMap, 0);
// draw buffer
int[] buffer = {GLES30.GL_NONE};
GLES30.glDrawBuffers(1, buffer, 0);
GLES30.glReadBuffer(GLES30.GL_NONE);
int status = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
I've bound the texture, so I don't know what could be causing the error.
Your internalFormat parameter for glTexImage2D isn't legal.
https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml
You need to use a sized internal format for depth. So I think this should work:
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_DEPTH_COMPONENT32F,
size, size, 0, GLES30.GL_DEPTH_COMPONENT, GLES30.GL_FLOAT, null);
Off topic: Learn to use the KHR_debug extension if you can - it gives you readable error messages, often with a precise reason why something failed.
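A minimal sketch of that setup, written against the C headers (assumes the context advertises GL_KHR_debug and that glDebugMessageCallbackKHR is resolved, e.g. via eglGetProcAddress; on Android Java the equivalent lives in GLES32):

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdio.h>

// Print every message the driver emits: errors, warnings, perf hints.
static void GL_APIENTRY onGlDebug(GLenum source, GLenum type, GLuint id,
                                  GLenum severity, GLsizei length,
                                  const GLchar *message, const void *userParam)
{
    fprintf(stderr, "GL debug: %s\n", message);
}

void installGlDebugCallback(void)
{
    glEnable(GL_DEBUG_OUTPUT_KHR);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_KHR); // report at the offending call
    glDebugMessageCallbackKHR(onGlDebug, NULL);
}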

glReadPixels always returns a black image

Some time ago I wrote code to draw an OpenGL scene to a bitmap in Delphi RAD Studio XE7, and it worked well. The code draws and finalizes a scene, then gets the pixels using the glReadPixels function. I recently tried to compile exactly the same code in Lazarus, but I get only a black image.
Here is the code:
// create main render buffer
glGenFramebuffers(1, @m_OverlayFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_OverlayFrameBuffer);
// create and link color buffer to render to
glGenRenderbuffers(1, @m_OverlayRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
m_OverlayRenderBuffer);
// create and link depth buffer to use
glGenRenderbuffers(1, @m_OverlayDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_OverlayDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER,
m_OverlayDepthBuffer);
// check if render buffers were created correctly and return result
Result := (glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE);
...
// flush OpenGL
glFinish;
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
// create pixels buffer
SetLength(pixels, (m_pOwner.ClientWidth * m_Factor) * (m_pOwner.ClientHeight * m_Factor) * 4);
// is alpha blending or antialiasing enabled?
if (m_Transparent or (m_Factor <> 1)) then
// notify that pixels will be read from color buffer
glReadBuffer(GL_COLOR_ATTACHMENT0);
// copy scene from OpenGL to pixels buffer
glReadPixels(0,
0,
m_pOwner.ClientWidth * m_Factor,
m_pOwner.ClientHeight * m_Factor,
GL_RGBA,
GL_UNSIGNED_BYTE,
pixels);
I've verified that something really is drawn in my scene (on the Lazarus side too), that GLext is initialized correctly, and that the framebuffer is built correctly (glCheckFramebufferStatus(GL_FRAMEBUFFER) = GL_FRAMEBUFFER_COMPLETE does return true). I would be very grateful if someone could point out what I'm doing wrong, knowing that on the same computer the code works well in Delphi RAD Studio XE7 but not in Lazarus.
Regards
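For comparison, the canonical FBO read-back in C sets the read buffer unconditionally before glReadPixels; in the code above, glReadBuffer() only runs when transparency or supersampling is enabled. This is a sketch, not a confirmed fix (w, h and pixels stand in for the sizes and buffer above):

// Bind the overlay FBO for reading and select its color attachment
// explicitly, regardless of m_Transparent / m_Factor.
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_OverlayFrameBuffer);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glFinish();
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);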

Anti-aliasing/smoothing/supersampling 2d images with opengl

I'm playing with 2D graphics in OpenGL - fractals and other fun stuff ^_^. My basic setup is rendering a couple triangles to fill the screen and using a fragment shader to draw cool stuff on them. I'd like to smooth things out a bit, so I started looking into supersampling. It's not obvious to me how to go about this. Here's what I've tried so far...
First, I looked at the Apple docs on anti-aliasing. I updated my pixel format initialization:
NSOpenGLPixelFormatAttribute attrs[] =
{
NSOpenGLPFADoubleBuffer,
NSOpenGLPFADepthSize, 24,
NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion4_1Core,
NSOpenGLPFASupersample,
NSOpenGLPFASampleBuffers, 1,
NSOpenGLPFASamples, 4,
0
};
I also added the glEnable(GL_MULTISAMPLE); line. GL_MULTISAMPLE_FILTER_HINT_NV doesn't seem to be defined (docs appear to be out of date), so I wasn't sure what to do there.
That made my renders slower but doesn't seem to be doing anti-aliasing, so I tried the "Render-to-FBO" approach described on the OpenGL Wiki on Multisampling. I've tried a bunch of variations, with a variety of outcomes: successful renders (which don't appear to be anti-aliased), rendering garbage to the screen (fun!), crashes (app evaporates and I get a system dialog about graphics issues), and making my laptop unresponsive aside from the cursor (got the system dialog about graphics issues after a hard reboot).
I am checking my framebuffer's status before drawing, so I know that's not the issue. And I'm sure I'm rendering with hardware, not software - saw that suggestion on other posts.
I've spent a fair amount of time on it and still don't quite understand how to approach this. One thing I'd love some help on is how to query GL to see if supersampling is enabled properly, or how to tell how many times my fragment shader is called, etc. I'm also a bit confused about where some of the calls go - most examples I find just say which methods to call, but don't specify which ones need to go in the draw callback. Anybody have a simple example of SSAA with OpenGL 3 or 4 and OSX... or other things to try?
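On the "how do I query GL" point, one standard check (a sketch; nothing OSX-specific assumed) is to ask the currently bound framebuffer about its sample state:

GLint sampleBuffers = 0, samples = 0;
glGetIntegerv(GL_SAMPLE_BUFFERS, &sampleBuffers); // 1 if multisampled
glGetIntegerv(GL_SAMPLES, &samples);              // samples per pixel, e.g. 4
printf("sample buffers: %d, samples: %d\n", sampleBuffers, samples);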
Edit: drawing code - super broken (don't judge me), but for reference:
- (void)draw
{
glBindVertexArray(_vao); // todo: is this necessary? (also in init)
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(GLfloat), points, GL_STATIC_DRAW);
glGenTextures( 1, &_tex );
glBindTexture( GL_TEXTURE_2D_MULTISAMPLE, _tex );
glTexImage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, _width * 2, _height * 2, false );
glGenFramebuffers( 1, &_fbo );
glBindFramebuffer( GL_FRAMEBUFFER, _fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, _tex, 0 );
GLint status;
status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
if (status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(@"incomplete buffer 0x%x", status);
return;
}
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glDrawBuffer(GL_BACK);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glDeleteTextures(1, &_tex);
glDeleteFramebuffers(1, &_fbo);
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
}
Update:
I changed my code per Reto's suggestion below:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
This caused the program to render garbage to the screen. I then got rid of the * 2 multiplier, and it still drew garbage to the screen. I then turned off the NSOpenGLPFA options related to multi/super-sampling, and it rendered normally, with no anti-aliasing.
I also tried using a non-multisample texture, but kept getting incomplete attachment errors. I'm not sure if this is due to the NVidia issue mentioned on the OpenGL wiki (will post in a comment since I don't have enough rep to post more than 2 links) or something else. If someone could suggest a way to find out why the attachment is incomplete, that would be very, very helpful.
Finally, I tried using a renderbuffer instead of a texture, and found that specifying width and height greater than the viewport size in glRenderbufferStorage doesn't seem to work as expected.
GLuint rb;
glGenRenderbuffers(1, &rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, _width * 2, _height * 2);
// ...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
... renders in the bottom-left quarter of the screen. It doesn't appear to be smoother, though...
Update 2: doubled the viewport size; it's no smoother. Turning NSOpenGLPFASupersample back on still causes it to draw garbage to the screen. >.<
Update 3: I'm an idiot, it's totally smoother. It just doesn't look good because I'm using an ugly color scheme. And I have to double all my coordinates because the viewport is 2x. Oh well. I'd still love some help understanding why NSOpenGLPFASupersample is causing such crazy behavior...
Your sequence of calls here looks like it wouldn't do what you intended:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glDrawBuffer(GL_BACK);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
When you call glClear() and glDrawArrays(), your current draw framebuffer, which is determined by the last call to glBindFramebuffer(GL_DRAW_FRAMEBUFFER, ...), is the default framebuffer. So you never render to the FBO. Let me annotate the above:
// Set draw framebuffer to default (0) framebuffer. This is where the rendering will go.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
// Set read framebuffer to the FBO.
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
// This is redundant, GL_BACK is the default draw buffer for the default framebuffer.
glDrawBuffer(GL_BACK);
// Clear the current draw framebuffer, which is the default framebuffer.
glClear(GL_COLOR_BUFFER_BIT);
// Draw to the current draw framebuffer, which is the default framebuffer.
glDrawArrays(GL_TRIANGLES, 0, 6);
// Copy from read framebuffer (which is the FBO) to the draw framebuffer (which is the
// default framebuffer). Since no rendering was done to the FBO, this will copy garbage
// into the default framebuffer, wiping out what was previously rendered.
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
To get this working, you need to set the draw framebuffer to the FBO while rendering, and then set the read framebuffer to the FBO and the draw framebuffer to the default for the copy:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, _width * 2, _height * 2, 0, 0, _width, _height,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
Recap:
Draw commands write to GL_DRAW_FRAMEBUFFER.
glBlitFramebuffer() copies from GL_READ_FRAMEBUFFER to GL_DRAW_FRAMEBUFFER.
A couple more remarks on the code:
Since you're creating a multisample texture of twice the size, you're using both multisampling and supersampling at the same time:
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, _tex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
_width * 2, _height * 2, false);
Which is entirely legal. But it's... a lot of sampling. If you just want supersampling, you can use a regular texture.
You could use a renderbuffer instead of a texture for the FBO color target. No huge advantage, but it's simpler, and potentially more efficient. You only need to use textures as attachments if you want to sample the result later, which is not the case here.
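A sketch of that renderbuffer variant (an illustration, not code from the answer; _width and _height as in the question):

// Multisample renderbuffer as the FBO color target, replacing the
// multisample texture from the code above.
GLuint rbo, fbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, _width, _height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);
// Render to fbo, then resolve with glBlitFramebuffer() to the default
// framebuffer exactly as in the texture version.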

access to VBO from vertex shader with OpenGL ES 3.0

I have four VBOs (BufferA, BufferB, BufferC and BufferD) and two programs (program1 and program2).
The main steps of the logic are:
glUseProgram(program1);
glBindBuffer(GL_ARRAY_BUFFER, BufferA);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, BufferB);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, Vertex1Count);
glEndTransformFeedback();
swap(BufferA, BufferB);
glUseProgram(program2);
glBindBuffer(GL_ARRAY_BUFFER, BufferC);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, BufferD);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, Vertex2Count);
glEndTransformFeedback();
swap(BufferC, BufferD);
Questions: What do I need to do to gain access to BufferB from program2?
Can I bind BufferB as a texture somehow and read it with texelFetch?
I am using iOS 7 and OpenGL ES 3.0.
Yes, you can. You can use the buffer as a PBO and then create a texture from it.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BufferB);
GLuint someTex;
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &someTex);
glBindTexture(GL_TEXTURE_2D, someTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1, sizeOfYourBuffer, 0, GL_RGBA, GL_FLOAT, nullptr);
// nullptr is interpreted as an offset into the bound PBO, not a client pointer
With a PBO bound, TexImage* runs fast, since the CPU is not involved in initializing the texture.
A disadvantage of this approach is that the texture contents cannot be changed afterwards. But if you are implementing an iterative method, you can use the "ping-pong" strategy (keep different buffers for the previous and new state, and swap them after visualization).
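And a sketch of the consuming side (hypothetical names; the answer above doesn't show the shader): once the buffer is wrapped in a 1-texel-wide RGBA32F texture, program2 can read one record per vertex with texelFetch:

// GLSL ES 3.00 vertex-shader excerpt for program2, as a C string.
const char *fetchSrc =
    "#version 300 es\n"
    "uniform highp sampler2D u_prevState; // texture backed by BufferB\n"
    "out vec4 v_state;\n"
    "void main() {\n"
    "    // one RGBA32F texel per point, laid out down column 0\n"
    "    v_state = texelFetch(u_prevState, ivec2(0, gl_VertexID), 0);\n"
    "}\n";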
