How to redraw partially in OpenGL ES 2.0

For each frame I want to redraw only the part of the scene that has been updated, instead of redrawing the entire scene.
Is there a way to do that in OpenGL ES 2.0?
Any input on this would be really helpful.

OpenGL does not really support incremental rendering. You need to draw the entire frame every time you are asked to redraw.
The closest I can think of is that you render your static data to an offscreen framebuffer, using a FBO (Frame Buffer Object). You should be able to find plenty of examples online and in books if you look for keywords like "OpenGL FBO". You will be using calls like glGenFramebuffers(), glBindFramebuffer(), glFramebufferTexture2D(), etc.
Once you have rendered the static content into an FBO, you can copy it to the default framebuffer at the start of each redraw, and then render the dynamic content on top of it. This can be a worthwhile method if rendering the static content is very expensive. Otherwise, copying from the FBO to the default framebuffer can be more expensive than simply re-rendering the static content each time.
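To make the FBO part more concrete, here is a minimal GL ES 2.0 sketch of the idea in C. The texture size and the drawStaticScene()/drawTexturedQuad() helpers are assumptions for illustration, not part of any real API:

GLuint fbo, staticTex;

/* Create a texture to hold the static content. */
glGenTextures(1, &staticTex);
glBindTexture(GL_TEXTURE_2D, staticTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Create the FBO and attach the texture as its color buffer. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, staticTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the error */
}
drawStaticScene();   /* hypothetical helper: the expensive part, done once */

/* Every frame: */
glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* back to the default framebuffer */
drawTexturedQuad(staticTex);            /* hypothetical helper: fullscreen quad sampling staticTex */
/* ... then draw the dynamic content on top ... */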
The above is pretty easy if the static content is in the background, and the dynamic content is completely in front of it. If static and dynamic content overlap, it gets trickier. You will then have to restore the depth buffer resulting from rendering the static content each time before starting to render the dynamic content. I can't think of a good way to do that in ES 2.0. The features to do this relatively smoothly (depth textures, glBlitFramebuffer) are only in ES 3.0 and later.
There is one other option that I don't think is very appealing, but I want to mention it for completeness' sake: EGL defines an EGL_SWAP_BEHAVIOR attribute that can be set to EGL_BUFFER_PRESERVED. One big caveat is that it's optional and not supported on all devices. It also only preserves the color buffer, not auxiliary buffers like the depth buffer. If you want to read up on it anyway, see eglSwapBuffers and eglSurfaceAttrib.
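If you do want to experiment with it, a hedged sketch of the EGL side (assuming display and surface are an already initialized EGLDisplay/EGLSurface) looks roughly like this:

EGLint behavior = 0;
/* Request that eglSwapBuffers() preserves the color buffer contents. */
eglSurfaceAttrib(display, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);
/* Check whether the request actually took effect on this device. */
eglQuerySurface(display, surface, EGL_SWAP_BEHAVIOR, &behavior);
if (behavior != EGL_BUFFER_PRESERVED) {
    /* Not supported here; fall back to redrawing the full frame. */
}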

Related

Cache scene in Three.js

PIXI.js has Container#cacheAsBitmap, which causes the container to "render" itself to an image, save that image, and render the image instead of its children; when a child is added, removed, or updated, the cache is refreshed.
What's the alternative for Three.js (but instead of an image it would be a mesh)?
I may not be understanding your question properly, but your reply to Sabee's answer was helpful. It sounds like you're looking to either merge multiple geometries into a single mesh or implement a form of model instancing, with the goal of reducing draw calls.
There is more than one way to accomplish this, depending on your requirements. You can merge multiple geometries into a single geometry object, and provide either one material or an array of materials (where each index corresponds to one of the merged geometries). You can also use GPU-accelerated instancing to achieve a similar effect with only a single copy of the geometry.
I'll refer to Dusan Bosnjak's excellent Medium series on instancing, which starts here: https://medium.com/@pailhead011/instancing-with-three-js-36b4b62bc127
As well, here are the three.js examples regarding instancing: https://threejs.org/examples/?q=instanc#webgl_buffergeometry_instancing_dynamic
Pixi.js is a 2D JavaScript library that uses WebGL to render images (frames) into an HTML5 canvas. Three.js allows the creation of GPU-accelerated 3D animations using WebGL.
The browser cannot store rendered 3D frames; that work is left to the GPU's render cache, which depends on the hardware it runs on. There are helpful posts explaining what's going on behind the scenes.
But you can cache your assets in the browser, such as images, JSON objects of 3D models, and so on.
In Three.js, the Cache class is a global object used by the asset loaders (TextureLoader, ImageLoader, AudioLoader, ...); it is disabled (false) by default. To enable it you can set THREE.Cache.enabled = true;
I think by default the browser should cache the textures for performance reasons, but if you want to be sure, simply enable the cache explicitly in Three.js. The creator of Three.js has also answered this question.

Using a FrameBufferObject with several Color Texture attachments

I'm implementing a Gaussian blur effect in my program. To do the job I need to render the first blur pass (the one along the Y axis) into a specific texture (let's call it tex_1) and use the information contained in tex_1 as the input for a second render pass (for the X axis) that fills another texture (let's call it tex_2) containing the final Gaussian blur result.
A good practice would be to create 2 framebuffers (FBOs), each with a texture attached to GL_COLOR_ATTACHMENT0 (for example). But I just wonder one thing:
Is it possible to fill these 2 textures using the same FBO?
So I would have to enable GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 and bind the desired texture to the correct render pass, as follows:
Pseudo code:
FrameBuffer->Bind()
{
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT0)->Bind(); //tex_1
    {
        //BIND external texture to blur
        //DRAW code (Y axis blur pass) here...
        //-> Write the result into COLOR_ATTACHMENT0 (tex_1)
    }
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT1)->Bind(); //tex_2
    {
        //BIND here the first texture (tex_1) filled above in the first render pass
        //DRAW code (X axis blur pass) here...
        //-> Use this texture in the FS to compute the final result
        //   within COLOR_ATTACHMENT1 (tex_2) -> the final result
    }
}
FrameBuffer->Unbind()
But in my mind there is a problem, because for each render pass I need to bind an external texture as an input in my fragment shader. Consequently, the first binding of the texture (the color attachment) is lost!
So is there a way to solve my problem using one FBO, or do I need to use 2 separate FBOs?
I can think of at least 3 distinct options to do this. The 3rd one will actually not work in OpenGL ES, but I'll explain it anyway because you might be tempted to try it otherwise, and it is supported in desktop OpenGL.
I'm going to use pseudo-code as well to cut down on typing and improve readability.
2 FBOs, 1 attachment each
This is the most straightforward approach. You use a separate FBO for each texture. During setup, you would have:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo2, ATTACHMENT0, tex2)
Then for rendering:
bindFbo(fbo1)
render pass 1
bindFbo(fbo2)
bindTexture(tex1)
render pass 2
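For reference, a rough C sketch of this first approach with real GL ES 2.0 calls (tex1 and tex2 are assumed to be already created textures; shader setup and draw calls are omitted):

GLuint fbo1, fbo2;

/* Setup: one FBO per target texture. */
glGenFramebuffers(1, &fbo1);
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);

glGenFramebuffers(1, &fbo2);
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);

/* Rendering: */
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
/* ... draw pass 1 (writes into tex1) ... */

glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glBindTexture(GL_TEXTURE_2D, tex1);     /* sample the result of pass 1 */
/* ... draw pass 2 (writes into tex2) ... */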
1 FBO, 1 attachment
In this approach, you use one FBO, and attach the texture you want to render to each time. During setup, you only create the FBO, without attaching anything yet.
Then for rendering:
bindFbo(fbo1)
attach(fbo1, ATTACHMENT0, tex1)
render pass 1
attach(fbo1, ATTACHMENT0, tex2)
bindTexture(tex1)
render pass 2
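In GL ES 2.0 calls, the only difference from the first approach is that the glFramebufferTexture2D() attachment happens between the passes (again a rough sketch, with tex1/tex2 assumed to exist):

glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);
/* ... draw pass 1 (writes into tex1) ... */

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);
glBindTexture(GL_TEXTURE_2D, tex1);     /* sample the result of pass 1 */
/* ... draw pass 2 (writes into tex2) ... */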
1 FBO, 2 attachments
This seems to be what you had in mind. You have one FBO, and attach both textures to different attachment points of this FBO. During setup:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo1, ATTACHMENT1, tex2)
Then for rendering:
bindFbo(fbo1)
drawBuffer(ATTACHMENT0)
render pass 1
drawBuffer(ATTACHMENT1)
bindTexture(tex1)
render pass 2
This renders to tex2 in pass 2 because it is attached to ATTACHMENT1, and we set the draw buffer to ATTACHMENT1.
The major caveat is that this does not work with OpenGL ES. In ES 2.0 (without using extensions) it's a non-starter because it only supports a single color buffer.
In ES 3.0/3.1, there is a more subtle restriction: They do not have the glDrawBuffer() call from full OpenGL, only glDrawBuffers(). The call you would try is:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
This is totally valid in full OpenGL, but will produce an error in ES 3.0/3.1 because it violates the following constraint from the spec:
If the GL is bound to a draw framebuffer object, the ith buffer listed in bufs must be COLOR_ATTACHMENTi or NONE.
In other words, the only way to render to GL_COLOR_ATTACHMENT1 is to have at least two draw buffers. The following call is valid:
GLenum bufs[2] = {GL_NONE, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
But to make this actually work, you'll need a fragment shader that produces two outputs, where the first one will not be used. By now, you hopefully agree that this approach is not appealing for OpenGL ES.
Conclusion
For OpenGL ES, the first two approaches above will work, and are both absolutely fine to use. I don't think there's a very strong reason to choose one over the other. I would recommend the first approach, though.
You might think that using only one FBO would save resources. But keep in mind that FBOs are objects that contain only state, so they use very little memory. Creating an additional FBO is insignificant.
Most people would probably prefer the first approach. The thinking is that you can configure both FBOs during setup, and then only need glBindFramebuffer() calls to switch between them. Binding a different object is generally considered cheaper than modifying an existing object, which you need for the second approach.
Consequently, the first binding of the texture (the color_attachment) is lost!
No, it isn't. Maybe your framebuffer class works that way, but then, it would be a very bad abstraction. The GL won't detach a texture from an FBO just because you bind this texture to some texture unit. You might get some undefined results if you create a feedback loop (rendering to a texture you are reading from).
EDIT
However, as @Reto Koradi pointed out in his excellent answer (and his comment to this one), you can't simply render to a single color attachment in unextended GLES1/2, and you need some tricks in GLES3. As a result, the fact I'm pointing out here is still true, but not really helpful for the ultimate goal you are trying to achieve.

Trying to understand OpenGL

I am reading the OpenGL ES guide provided by Apple, but I don't understand it in full detail and have a question. I would be very grateful if you could help me understand. On page 28, the chapter about drawing, it says the following:
To correctly create a framebuffer:
1. Create a framebuffer object.
2. Create one or more targets (renderbuffers or textures), allocate storage for them, and attach each to an attachment point on the framebuffer object.
3. Test the framebuffer for completeness.
My question is: In point 2, shouldn't it say "create one or more sources..."? As I currently understand it, the frame buffer is what will be rendered to the screen in my draw method. Therefore it would make sense to me if we specify what images we want to have rendered by attaching them to the frame buffer. Clearly, I am misunderstanding something fundamental since in what I describe, the target is the screen and everything else is a source.
Your program renders into the framebuffer, by executing actions that cause OpenGL to rasterize fragments into the framebuffer.
But, the framebuffer isn't displayed anywhere, unless you do as the documentation says and send it out to a target.
It's a bit like this (very very rough and off the top of my head):
+----------+ +--------+ +---------------------+ +----------------------+
|draw calls|---|pipeline|---|pixels in framebuffer|---|pixels in renderbuffer|
+----------+ +--------+ +---------------------+ +----------------------+
"Target" is correct. A framebuffer renders to a region of memory that will later either be composited to the screen (a renderbuffer) or used as a texture in a secondary render pass.
The framebuffer attachment is first the target of the writes, then the source of the reads. Note that when the spec says "target", it's usually the active object that's being processed or changed, regardless of actual operation.
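As an illustration of the three steps quoted in the question, a minimal GL ES 2.0 sketch (with an arbitrarily chosen renderbuffer format and size) could look like this:

GLuint fbo, colorRb;

/* 1. Create a framebuffer object. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* 2. Create a target, allocate storage for it, and attach it. */
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, 256, 256);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

/* 3. Test the framebuffer for completeness. */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* the framebuffer is not usable yet */
}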
Clearly, I am misunderstanding something fundamental since in what I describe, the target is the screen and everything else is a source.
Yes, you fundamentally misunderstood something. A Framebuffer Object is used when you want to render things not to the screen, but to some off-screen image: for example a texture, or some image that is later post-processed. If you just want to render to the screen, you don't need an FBO.
Therefore it would make sense to me if we specify what images we want to have rendered by attaching them to the frame buffer.
No. A framebuffer object is not a source of image content, but a "canvas", a receiver, that you can draw on. Or, more precisely, a "frame" for the actual canvas, where the attached target is the canvas you draw on.
The framebuffer attachments have nothing to do with the content you render. You can attach multiple render targets to a framebuffer and switch between them. However, this is something you don't actually need, and it has nothing to do with the rendering step itself unless you use multiple passes or some other render-to-texture technique.
I would skip this part of your OpenGL learning and start directly with VBOs and shaders, and just use some template for the framebuffer setup at this point. You may need them later, but not now.

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES involves updating every frame, even if the image itself is not moving in any way.
I did some tests and found that it works quite fine to just render (using drawArrays etc.) and then present the render buffer (these two actions, together) just once, and then not do either again until something on screen changes.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without constant additional rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give you an example, Android's standard OpenGL helper classes offer an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
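The same idea can be sketched without the Android helpers as an event-driven loop in C. wait_for_event(), scene_changed() and draw_scene() are hypothetical placeholders for your platform's event handling and your own drawing code:

for (;;) {
    wait_for_event();                     /* block instead of spinning every frame */
    if (scene_changed()) {
        draw_scene();                     /* glClear(), glDrawArrays(), ... */
        eglSwapBuffers(display, surface); /* present once, then go idle again */
    }
}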

How to convert an OpenGL ES texture into a CIImage

I know how to do it the other way around. But how can I create a CIImage from a texture, without having to copy into CPU memory? [CIImage imageWithData]? CVOpenGLESTextureCache?
Unfortunately, I don't think there's any way to avoid having to read back pixel data using glReadPixels(). All of the inputs for a CIImage (data, CGImageRef, CVPixelBufferRef) are CPU-side, so I don't see a fast path to deliver that to a CIImage. It looks like your best alternative would be to use glReadPixels() to pull in the raw RGBA data from your texture and send it into the CIImage using -initWithData:options: and a kCIFormatRGBA8 pixel format. (Update 3/14/2012: On iOS 5.0, there is now a faster way to grab OpenGL ES frame data, using the new texture caches. I describe this in detail in this answer.)
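Since glReadPixels() reads from the bound framebuffer rather than directly from a texture, reading a texture back in GL ES 2.0 usually means attaching it to an FBO first. A hedged C sketch (tex, width and height are assumed to describe an RGBA texture):

GLuint readFbo;
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);

glGenFramebuffers(1, &readFbo);
glBindFramebuffer(GL_FRAMEBUFFER, readFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &readFbo);
/* pixels now holds RGBA8 data that can be passed to -initWithData:options: */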
However, there might be another way to achieve what you want. If you simply want to apply filters on a texture for output to the screen, you might be able to use my GPUImage framework to do the processing. It already uses OpenGL ES 2.0 as the core of its rendering pipeline, with textures as the way that frames of images or video are passed from one filter to the next. It's also much faster than Core Image, in my benchmarks.
You can supply your texture as an input here, so that it never has to touch the CPU. I don't have a stock class for grabbing raw textures from OpenGL ES yet, but you can modify the code for one of the existing GPUImageOutput subclasses to use this as a source fairly easily. You can then chain filters on to that, and direct the output to the screen or to a still image. At some point, I'll add a class for this kind of data source, but the project's still fairly new.
As of iOS 6, you can use a built-in init method for this situation:
initWithTexture:size:flipped:colorSpace:
See the docs:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIImage_Class/Reference/Reference.html
You might find these helpful:
https://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
https://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Listings/GLCameraRipple_RippleViewController_m.html
In general I think the image data will need to be copied from the GPU to the CPU. However the iOS features mentioned above might make this easier and more efficient.
