Trying to understand OpenGL - opengl-es

I am reading the OpenGL ES guide provided by Apple, but I don't understand it in full detail and have a question. I would be very grateful if you could help me understand. On page 28, in the chapter about drawing, it says the following:
To correctly create a framebuffer:
1. Create a framebuffer object.
2. Create one or more targets (renderbuffers or textures), allocate storage for them, and attach each to an attachment point on the framebuffer object.
3. Test the framebuffer for completeness.
My question is: In point 2, shouldn't it say "create one or more sources..."? As I currently understand it, the frame buffer is what will be rendered to the screen in my draw method. Therefore it would make sense to me if we specify what images we want to have rendered by attaching them to the frame buffer. Clearly, I am misunderstanding something fundamental since in what I describe, the target is the screen and everything else is a source.

Your program renders into the framebuffer by executing actions that cause OpenGL to rasterize fragments into the framebuffer.
But the framebuffer isn't displayed anywhere unless you do as the documentation says and send it out to a target.
It's a bit like this (very very rough and off the top of my head):
+----------+   +--------+   +---------------------+   +----------------------+
|draw calls|---|pipeline|---|pixels in framebuffer|---|pixels in renderbuffer|
+----------+   +--------+   +---------------------+   +----------------------+

"Target" is correct. A framebuffer renders to a region of memory that will later either be composited to the screen (a renderbuffer) or be used as a texture in a second render pass.
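For concreteness, a minimal sketch of the three documented steps in ES 2.0 might look like this (width and height are placeholders; for an on-screen renderbuffer on iOS you would typically allocate its storage from the CAEAGLLayer via EAGLContext's -renderbufferStorage:fromDrawable: instead of glRenderbufferStorage):

// Hedged sketch of the documented steps (OpenGL ES 2.0).
GLuint framebuffer, colorRenderbuffer;

// 1. Create a framebuffer object.
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// 2. Create a target (here a renderbuffer), allocate storage for it,
//    and attach it to an attachment point.
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRenderbuffer);

// 3. Test the framebuffer for completeness.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer
}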

The framebuffer attachment is first the target of the writes, then the source of the reads. Note that when the spec says "target", it usually means the active object that is being processed or changed, regardless of the actual operation.

Clearly, I am misunderstanding something fundamental since in what I describe, the target is the screen and everything else is a source.
Yes, you have fundamentally misunderstood something. A framebuffer object (FBO) is used when you want to render things not to the screen but to some off-screen image: for example a texture, or some image that is later post-processed. If you want to render just to the screen, you don't need an FBO.
Therefore it would make sense to me if we specify what images we want to have rendered by attaching them to the frame buffer.
No. A framebuffer object is not a source of image content but a "canvas", a receiver you can draw on. Or, more precisely, it is the "frame" around the actual canvas, where the attached target is the canvas you draw on.

The framebuffer sources have nothing to do with the stuff you render. You can attach multiple render targets to each framebuffer and switch between them, but that is not something you actually need, and it has nothing to do with the rendering step itself until you use multiple passes or some other render-to-texture technique.
I would skip this part of your OpenGL learning for now, start directly with VBOs and shaders, and just use some template for the framebuffers at this point. You may need them later, but not now.

Related

Rendering to CAMetalLayer from dedicated render thread / loop

In the Windows world, a dedicated render thread would loop something similar to this:
void RenderThread()
{
    while (!quit)
    {
        UpdateStates();
        RenderToDirect3D();

        // Can either present with no synchronisation,
        // or synchronise after 1-4 vertical blanks.
        // See docs for IDXGISwapChain::Present
        PresentToSwapChain();
    }
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done in the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some non-specified interval (and ideally blocking for V-Sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen renderbuffer (i.e., a MTLTexture with dimensions matching your drawable size), and then blit from the most-recently-drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
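A rough sketch of that blit (offscreenTex, commandBuffer, layer, width and height are placeholders for your own objects) might look like:

// Hedged sketch: copy the latest completed off-screen frame into the drawable's
// texture and present it. Pixel formats of the two textures must match.
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:offscreenTex
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(width, height, 1)
            toTexture:drawable.texture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];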
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
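For example, a rough sketch of that idea on macOS (Core Video display link plus a GCD semaphore; UpdateStates, RenderWithMetal, PresentDrawable and the quit flag are placeholders for your own code) might be:

// Hedged sketch: pace a free-running render thread off a display link.
#import <CoreVideo/CoreVideo.h>
#import <dispatch/dispatch.h>

static dispatch_semaphore_t frameSemaphore;

static CVReturn DisplayLinkCallback(CVDisplayLinkRef link, const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime, CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut, void *context)
{
    // One signal per refresh keeps the render thread from running too far ahead.
    dispatch_semaphore_signal(frameSemaphore);
    return kCVReturnSuccess;
}

void StartPacing(void)
{
    frameSemaphore = dispatch_semaphore_create(0);
    CVDisplayLinkRef link;
    CVDisplayLinkCreateWithActiveCGDisplays(&link);
    CVDisplayLinkSetOutputCallback(link, DisplayLinkCallback, NULL);
    CVDisplayLinkStart(link);
}

void RenderThread(void)
{
    while (!quit)
    {
        // Wait for the next refresh tick before starting a frame.
        dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_FOREVER);
        UpdateStates();
        RenderWithMetal();
        PresentDrawable();
    }
}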
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
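Filling in that description, a hedged sketch (assuming commandQueue is an existing id<MTLCommandQueue>, and desc/drawable come from the snippet above) could look like:

// Create a command buffer, encode the pass, present the drawable, commit.
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:desc];
// ... set pipeline state, bind buffers, and encode draw calls here ...
[encoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];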
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.

DX11 add a simple black box on a texture

I want to add a simple black box (like this: effect) on a texture (ID3D11ShaderResourceView). Is there a simple way to do it in DX11? I don't want to write a shader to do it.
Well, what you're trying to do is actually "initializing a texture programmatically". From the D3D point of view, textures are nothing more than pieces of memory with a clearly defined layout. Normally, you create a texture resource, read data from a texture file (*.BMP for example), put the data in the texture, and then feed it to the pipeline for sampling.
In your case though, you need an additional step:
1. Create the texture resource using either the D3D11_USAGE_DEFAULT or D3D11_USAGE_DYNAMIC usage flag, so you can access it from the CPU.
2. Read the color map into your texture.
3. Depending on the chosen usage, either add your data to the initial data or Map/Unmap and add your data (by "your data" I mean: overwrite each edge of the image with black), as in the sketch below.
This approach can also be used to "generate" textures, for example a checkerboard or clouds.
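For illustration, here is a small, API-neutral C sketch of the edge-overwriting in step 3 (the function name and border width are made up); the resulting buffer can be passed as the texture's initial data or written through Map/Unmap:

// Overwrite the edge texels of an RGBA8 image with opaque black.
void DrawBlackBorder(unsigned char *pixels, int width, int height, int border)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (x < border || y < border ||
                x >= width - border || y >= height - border) {
                unsigned char *p = pixels + 4 * (y * width + x);
                p[0] = 0;    // R
                p[1] = 0;    // G
                p[2] = 0;    // B
                p[3] = 255;  // A (opaque)
            }
        }
    }
}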
All the information you need can be found here.

Using a FrameBufferObject with several Color Texture attachments

I'm implementing a Gaussian blur effect in my program. To do the job I need to render the first blur pass (the one on the Y axis) into a specific texture (let's call it tex_1) and use the information contained in tex_1 as the input for a second render pass (for the X axis) that fills another texture (let's call it tex_2) containing the final Gaussian blur result.
A good practice would be to create 2 framebuffers (FBOs), each with a texture attached to GL_COLOR_ATTACHMENT0 (for example). But I just wonder one thing:
Is it possible to fill these 2 textures using the same FBO?
So I would have to enable GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 and bind the desired texture for the correct render pass as follows:
Pseudo code:
FrameBuffer->Bind()
{
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT0)->Bind(); //tex_1
    {
        //BIND external texture to blur
        //DRAW code (Y axis blur pass) here...
        //-> Write the result into COLOR_ATTACHMENT0 (tex_1)
    }
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT1)->Bind(); //tex_2
    {
        //BIND here the first texture (tex_1) filled above in the first render pass
        //DRAW code (X axis blur pass) here...
        //-> Use this texture in the FS to compute the final result
        //   within COLOR_ATTACHMENT1 (tex_2) -> The final result
    }
}
FrameBuffer->Unbind()
But in my mind there is a problem, because for each render pass I need to bind an external texture as an input to my fragment shader. Consequently, the first binding of the texture (the color attachment) is lost!
So is there a way to solve my problem using one FBO, or do I need to use 2 separate FBOs?
I can think of at least 3 distinct options to do this. The 3rd one will not actually work in OpenGL ES, but I'll explain it anyway because you might otherwise be tempted to try it, and it is supported in desktop OpenGL.
I'm going to use pseudo-code as well to cut down on typing and improve readability.
2 FBOs, 1 attachment each
This is the most straightforward approach. You use a separate FBO for each texture. During setup, you would have:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo2, ATTACHMENT0, tex2)
Then for rendering:
bindFbo(fbo1)
render pass 1
bindFbo(fbo2)
bindTexture(tex1)
render pass 2
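In actual OpenGL ES 2.0 calls, this approach might look roughly like the following sketch (tex1, tex2 and sourceTex are assumed to be existing, allocated textures of matching size):

// Setup: one FBO per destination texture.
GLuint fbo1, fbo2;
glGenFramebuffers(1, &fbo1);
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);

glGenFramebuffers(1, &fbo2);
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);

// Per frame:
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glBindTexture(GL_TEXTURE_2D, sourceTex);   // external texture to blur
// ... draw the Y-axis blur pass into tex1 ...

glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glBindTexture(GL_TEXTURE_2D, tex1);        // result of pass 1
// ... draw the X-axis blur pass into tex2 ...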
1 FBO, 1 attachment
In this approach, you use one FBO, and attach the texture you want to render to each time. During setup, you only create the FBO, without attaching anything yet.
Then for rendering:
bindFbo(fbo1)
attach(fbo1, ATTACHMENT0, tex1)
render pass 1
attach(fbo1, ATTACHMENT0, tex2)
bindTexture(tex1)
render pass 2
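In real GL calls, the re-attach step of this approach would look roughly like this sketch (fbo1, tex1, tex2 and sourceTex are assumed to exist already):

// One FBO, re-attaching the destination texture between passes.
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);
glBindTexture(GL_TEXTURE_2D, sourceTex);   // external texture to blur
// ... render pass 1 (writes into tex1) ...

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);
glBindTexture(GL_TEXTURE_2D, tex1);        // result of pass 1
// ... render pass 2 (writes into tex2) ...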
1 FBO, 2 attachments
This seems to be what you had in mind. You have one FBO, and attach both textures to different attachment points of this FBO. During setup:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo1, ATTACHMENT1, tex2)
Then for rendering:
bindFbo(fbo1)
drawBuffer(ATTACHMENT0)
render pass 1
drawBuffer(ATTACHMENT1)
bindTexture(tex1)
render pass 2
This renders to tex2 in pass 2 because it is attached to ATTACHMENT1, and we set the draw buffer to ATTACHMENT1.
The major caveat is that this does not work with OpenGL ES. In ES 2.0 (without using extensions) it's a non-starter because it only supports a single color buffer.
In ES 3.0/3.1, there is a more subtle restriction: They do not have the glDrawBuffer() call from full OpenGL, only glDrawBuffers(). The call you would try is:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
This is totally valid in full OpenGL, but will produce an error in ES 3.0/3.1 because it violates the following constraint from the spec:
If the GL is bound to a draw framebuffer object, the ith buffer listed in bufs must be COLOR_ATTACHMENTi or NONE.
In other words, the only way to render to GL_COLOR_ATTACHMENT1 is to have at least two draw buffers. The following call is valid:
GLenum bufs[2] = {GL_NONE, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
But to make this actually work, you'll need a fragment shader that produces two outputs, where the first one will not be used. By now, you hopefully agree that this approach is not appealing for OpenGL ES.
Conclusion
For OpenGL ES, the first two approaches above will work, and are both absolutely fine to use. I don't think there's a very strong reason to choose one over the other. I would recommend the first approach, though.
You might think that using only one FBO would save resources. But keep in mind that FBOs are objects that contain only state, so they use very little memory. Creating an additional FBO is insignificant.
Most people would probably prefer the first approach. The thinking is that you can configure both FBOs during setup, and then only need glBindFramebuffer() calls to switch between them. Binding a different object is generally considered cheaper than modifying an existing object, which you need for the second approach.
Consequently, the first binding of the texture (the color_attachment) is lost!
No, it isn't. Maybe your framebuffer class works that way, but then, it would be a very bad abstraction. The GL won't detach a texture from an FBO just because you bind this texture to some texture unit. You might get some undefined results if you create a feedback loop (rendering to a texture you are reading from).
EDIT
However, as @Reto Koradi pointed out in his excellent answer (and his comment on this one), you can't simply render to an arbitrary single color attachment in unextended GLES 1/2, and you need some tricks in GLES 3. As a result, the fact I'm pointing out here is still true, but not really helpful for the ultimate goal you are trying to achieve.

How to redraw partially in OpenGL ES 2.0

For my needs, I want to redraw only the part of the scene that was updated for each frame, instead of redrawing the entire scene.
Is there a way to do that in OpenGL ES 2.0?
Any input on this will be really helpful.
OpenGL does not really support incremental rendering. You need to draw the entire frame every time you are asked to redraw.
The closest I can think of is that you render your static data to an offscreen framebuffer, using a FBO (Frame Buffer Object). You should be able to find plenty of examples online and in books if you look for keywords like "OpenGL FBO". You will be using calls like glGenFramebuffers(), glBindFramebuffer(), glFramebufferTexture2D(), etc.
Once you rendered the static content into an FBO, you can copy it to the default framebuffer at the start of each redraw, and then render the dynamic content on top of it. This can be a worthwhile method if rendering the static content is very expensive. Otherwise, doing the copy from FBO to default framebuffer can be more expensive than simply re-rendering the static content each time.
The above is pretty easy if the static content is in the background, and the dynamic content is completely in front of it. If static and dynamic content overlap, it gets trickier. You will then have to restore the depth buffer resulting from rendering the static content each time before starting to render the dynamic content. I can't think of a good way to do that in ES 2.0. The features to do this relatively smoothly (depth textures, glBlitFramebuffer) are only in ES 3.0 and later.
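For reference, in ES 3.0 that restore step could be sketched roughly like this (staticFbo, width and height are placeholders, and the color/depth formats of the two framebuffers must be compatible for the depth blit to work):

// Copy the pre-rendered static content, including depth, into the default
// framebuffer before drawing the dynamic content on top.
glBindFramebuffer(GL_READ_FRAMEBUFFER, staticFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);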
There is one other option that I don't think is very appealing, but I wanted to mention it for completeness' sake: EGL defines an EGL_SWAP_BEHAVIOR attribute that can be set to EGL_BUFFER_PRESERVED. One big caveat is that it's optional and not supported on all devices. It also only preserves the color buffer, not auxiliary buffers like the depth buffer. If you want to read up on it anyway, see eglSwapBuffers and eglSurfaceAttrib.
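If you do want to experiment with it, the request and a check of whether it was honored might look roughly like this (display and surface are assumed to be your existing EGL handles):

// Ask EGL to preserve the color buffer across eglSwapBuffers, then verify.
eglSurfaceAttrib(display, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);

EGLint behavior = 0;
eglQuerySurface(display, surface, EGL_SWAP_BEHAVIOR, &behavior);
if (behavior != EGL_BUFFER_PRESERVED) {
    // Not supported on this device/surface: fall back to full redraws.
}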

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES involves it updating every frame, even if the image itself is not moving in any way.
I did some tests and it seems to work quite fine to just render (using drawArrays etc.) and present the render buffer (these two actions, together) just once, and then not do either again until something changes onscreen.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without constant additional rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give you an example, when using Android's standard OpenGL helper classes (GLSurfaceView) there is an option to draw only when needed, not in a loop (RENDERMODE_WHEN_DIRTY).
