Reusing parts of the previous frame when drawing 2D with WebGL - opengl-es

I'm using WebGL to do something very similar to image processing. I draw a single quad in an orthographic projection and perform some processing in the fragment shaders. There are two steps: in the first, the quad's original texture is processed in the fragment shader and written to a framebuffer; a second step processes that data into the final canvas.
Users can zoom and translate the image. For this to feel smooth I need to hit 60 fps, otherwise it gets noticeably sluggish. This is no issue on desktop GPUs, but on mobile devices with much weaker hardware and higher resolutions it becomes problematic.
The translation case is the most noticeable and problematic: the user drags the mouse pointer or their finger across the screen and the image lags behind. But translation is also a case where I could theoretically reuse a lot of data from the previous frame.
Ideally I'd copy the canvas from the last frame, translate it by x,y pixels, and then only do the whole image processing in fragment shaders on the parts of the canvas that aren't covered by the translated previous canvas.
Is there a way to do this in WebGL?

If you want to access the previous frame, you need to draw to a texture attached to a framebuffer. Then draw that texture into the canvas, translated, and only re-run the image processing for the strip that isn't covered by it.
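Roughly like this - shown as GLES2-style C for concreteness; WebGL 1 exposes the same entry points as methods on the rendering context (gl.createFramebuffer, gl.framebufferTexture2D, ...), and the two draw helpers are placeholders for your own quad-drawing code:

#include <GLES2/gl2.h>

/* Placeholders for the application's own drawing code. */
extern void drawProcessingPasses(void);                   /* the expensive two-pass image processing */
extern void drawTexturedQuad(GLuint tex, int dx, int dy); /* cheap pass-through draw, offset by (dx, dy) */

GLuint frameTex, frameFbo;

void createFrameTarget(int width, int height)
{
    /* Colour texture that will hold the processed frame. */
    glGenTextures(1, &frameTex);
    glBindTexture(GL_TEXTURE_2D, frameTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Framebuffer that renders into that texture. */
    glGenFramebuffers(1, &frameFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, frameFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, frameTex, 0);
}

void drawFrame(int dx, int dy)
{
    /* 1. Run the processing passes into the off-screen texture. To reuse the
          previous frame you would keep (or ping-pong) such textures across
          frames and only reprocess the newly exposed strip here. */
    glBindFramebuffer(GL_FRAMEBUFFER, frameFbo);
    drawProcessingPasses();

    /* 2. Draw the texture to the default framebuffer (the canvas), offset by
          the pan delta, using a cheap textured-quad shader. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    drawTexturedQuad(frameTex, dx, dy);
}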

Related

StretchBlt a bitmap without overwriting what's already drawn at the destination

I have a drawing application that draws many lines, polygons etc. on a device context. I'm also drawing background bitmaps that come from an external source and take a long time to load.
When drawing a new frame I first start threads that load the bitmaps, then draw my vector data, and at the end would like to draw the loaded bitmaps while preserving the vector data. I need the bitmaps "under" the vector data but I can't draw them first because they're not loaded, and waiting for them to load would slow things down a lot.
My idea was to apply the "bitmap with transparency" technique:
1. Copy the portion of my device context that would be covered by the bitmap into a monochrome image: anything drawn with the background color should be drawn on, everything else is off-limits.
2. Copy the image over the bitmap (with the proper ROP code) to mark what needs to be transparent in the bitmap.
3. StretchBlt the modified bitmap onto the device context.
The bitmap needs to be transformed to fit properly on my device context, so I use SetWorldTransform to apply an affine transformation to my device context. The transformation has both a rotation and a shear.
Unfortunately, this fails at step 1 because, as per the documentation of StretchBlt:
If the source transformation has a rotation or shear, an error occurs.
Now, I did try setting the inverse transformation on my monochrome DC, which would transform my sheared data into a proper rectangle, but the function still fails.
So I guess my question is: how do I bitblit a raster image without deleting my data (a function where I give a transparent color in the destination, not the source, would be perfect), OR is there an easy way to extract the color data from a device context that has a rotate and shear transform on it?

Copying a non-multisampled FBO to a multisampled one

I have been trying to implement a render to texture approach in our application that uses GLES 3 and I have got it working but I am a little disappointed with the frame rate drop.
So far we have been rendering directly to the main FBO, which has been a multisampled FBO created using EGL_SAMPLES=8.
What I basically want is to be able to get hold of the pixels that have already been drawn, while I'm still drawing. So I thought a render-to-texture approach should do it: I'd just read a section of the off-screen FBO's texture whenever I want, and when I'm done rendering to it I'll blit the whole thing to the main FBO.
Digging into this I found I had to implement a system with a multisampled FBO as well as a non-multisampled, texture-backed FBO into which the multisampled one is resolved, and then just blit the resolved FBO to the main FBO.
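For context, the resolve step in that system is essentially a single glBlitFramebuffer call. A rough sketch of what I mean (msaaFbo, resolveFbo, width and height are set up elsewhere):

#include <GLES3/gl3.h>

/* Resolve the multisampled FBO into the single-sampled, texture-backed FBO.
   GLES 3.0 performs the multisample resolve as part of this blit because the
   read framebuffer is multisampled and the draw framebuffer is not. */
void resolveMsaa(GLuint msaaFbo, GLuint resolveFbo, GLint width, GLint height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height,   /* source rectangle      */
                      0, 0, width, height,   /* destination rectangle */
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}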
This all works, but the problem is that with the above system and a non-multisampled main FBO (EGL_SAMPLES=0) I get quite a big frame rate drop compared to the frame rate I get when I just use the main FBO with EGL_SAMPLES=8.
Digging a bit more into this I found people reporting the same online, as well as a post here https://community.arm.com/thread/6925 which says that the fastest approach to multisampling is to use EGL_SAMPLES. And indeed that's what it looks like on the Jetson TK1 too, which is our target board.
Which finally leads me to the question, and apologies for the long introduction:
Is there any way that I can design this to use a non-multisampled off-screen fbo for all the rendering that eventually is blitted to a main multisampled FBO that uses EGL_SAMPLES?
The only point of MSAA is to anti-alias geometry edges. It only provides benefit if multiple triangle edges appear in the same pixel. For rendering pipelines which are doing multiple off-screen passes you want to enable multiple samples for the off-screen passes which contain your geometry (normally one of the early passes in the pipeline, before any post-processing effects).
Applying MSAA at the end of the pipeline on the final blit will provide zero benefit, and probably isn't free (it will be close to free on tile-based renderers like IMG Series 6 and Mali (the blog you linked), less free on immediate-mode renderers like the Nvidia GPU in your Jetson board).
Note that for off-screen anti-aliasing the "standard" approach is to render to an MSAA framebuffer and then resolve it as a second pass (e.g. using glBlitFramebuffer to blit into a single-sampled buffer). This bounce is inefficient on many architectures, so this extension exists to help:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_multisampled_render_to_texture.txt
Effectively this provides the same implicit resolve as the EGL window surface functionality.
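A rough sketch of using the extension (a hypothetical helper, assuming GL_EXT_multisampled_render_to_texture is reported in the extension string; the entry point names come from the extension spec, everything else is illustrative):

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <EGL/egl.h>

/* Attach an ordinary single-sampled texture to an FBO, but ask the driver for
   4x implicit multisampling. The sample count must not exceed GL_MAX_SAMPLES_EXT.
   The driver allocates the implicit multisample storage itself and resolves it
   into the texture implicitly (e.g. when the texture is next sampled), so the
   application never sees a multisampled surface. */
GLuint createMultisampledToTextureFbo(GLuint colorTex)
{
    PFNGLFRAMEBUFFERTEXTURE2DMULTISAMPLEEXTPROC glFramebufferTexture2DMultisampleEXT =
        (PFNGLFRAMEBUFFERTEXTURE2DMULTISAMPLEEXTPROC)
            eglGetProcAddress("glFramebufferTexture2DMultisampleEXT");

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                         GL_TEXTURE_2D, colorTex, 0,
                                         4 /* samples */);
    return fbo;
}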
Answers to your questions in the comments.
Is the resulting texture a multisampled texture in that case?
From the application point of view, no. The multisampled data is inside an implicitly allocated buffer, allocated by the driver. See this bit of the spec:
"The implementation allocates an implicit multisample buffer with TEXTURE_SAMPLES_EXT samples and the same internalformat, width, and height as the specified texture level."
This may require a real MSAA buffer allocation in main memory on some GPU architectures (and so be no faster than the manual glBlitFramebuffer approach without the extension), but is known to be effectively free on others (i.e. tile-based GPUs where the implicit "buffer" is a small RAM inside the GPU, and not in main memory at all).
The goal is to blur the background behind widgets
MSAA is not in any way a general-purpose blur - it only anti-aliases the pixels which are coincident with the edges of triangles. If you want to blur triangle faces you'd be better off using a separable Gaussian blur implemented as a pair of fragment shaders, applied as a 2D post-processing pass.
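For reference, one direction of such a blur can look roughly like this (a commonly used 5-tap, linear-sampling weight set; the uniform and varying names are illustrative, and the source texture must use GL_LINEAR filtering for the fractional offsets to work):

/* One direction of a separable Gaussian blur, as an ES 2.0-compatible fragment
   shader. Draw a full-screen quad twice: first with u_direction = (1.0/width, 0.0)
   into an intermediate FBO, then with u_direction = (0.0, 1.0/height) into the
   final target. */
static const char *kBlurFragmentShader =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "uniform vec2 u_direction;\n"          /* (1/w, 0) or (0, 1/h) */
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    vec4 c  = texture2D(u_texture, v_texCoord) * 0.2270270270;\n"
    "    c += texture2D(u_texture, v_texCoord + u_direction * 1.3846153846) * 0.3162162162;\n"
    "    c += texture2D(u_texture, v_texCoord - u_direction * 1.3846153846) * 0.3162162162;\n"
    "    c += texture2D(u_texture, v_texCoord + u_direction * 3.2307692308) * 0.0702702703;\n"
    "    c += texture2D(u_texture, v_texCoord - u_direction * 3.2307692308) * 0.0702702703;\n"
    "    gl_FragColor = c;\n"
    "}\n";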
Is there any way that I can design this to use a non-multisampled off-screen fbo for all the rendering that eventually is blitted to a main multisampled FBO that uses EGL_SAMPLES?
Not in any way which is genuinely useful.
Framebuffer blitting does allow blits from single-sampled buffers to multisample buffers. But all that does is give every sample within a pixel the same value as the one from the source.
Blitting cannot generate new information. So you won't get any actual antialiasing. All you will get is the same data stored in a much less efficient way.

Print a 3D object animation in 2D

What I need to do is print the 3D animation transitions in my animation clip in a 2D format. For example, printing a human walking animation on paper. Is there a way to do this?
There are several aspects to consider:
Splitting the animation up into frames
Rendering the frame
Retrieving the rendered frame
Splitting the animation up into frames
Before you render a frame, you have to decide which frame it should be. Depending on your requirements you may want to render only keyframes (frames that have been defined by the animation artist or the mocap data, but may not be evenly distributed over the duration of the animation), or to render frames that are evenly distributed over the duration of the animation (i.e. render one frame every 0.2 seconds).
Once you know the frame, you can tell the AnimationController to jump to that specific frame and update the 3D model accordingly.
Rendering the frame
Again, depending on your requirements, you may want to consider using a Camera with an orthographic projection. This will remove any perspective distortions, making it easier to visually analyze the animation. If of course the goal is a more visually appealing or "artistic" presentation, a perspective projection may be desired as well.
Retrieving the frame
As you want to print out the rendering, you need to retrieve the result of the rendering. You can do this either by letting Unity take a screenshot (Application.CaptureScreenshot), or by creating a RenderTexture and setting it as the render target of your Camera. You can then copy the contents of the RenderTexture to a Texture2D (Texture2D.ReadPixels()) and save that to disk (Texture2D.EncodeToPNG/JPG()).

use core-image in 3d

I have a working Core Video setup (a frame captured from a USB camera via QTKit) and the current frame is rendered as a texture on an arbitrary plane in 3D space in a subclassed NSOpenGLView. So far so good, but I would like to use some Core Image filters on this frame.
I now have the basic code set up and it renders my unprocessed video frame like before, but the final processed output CIImage is rendered as a screen-aligned quad into the view. It feels like an image blitted over my 3D rendering. This is what I do not want!
I am looking for a way to process my video frame (a CVOpenGLTextureRef) with Core Image and just render the resulting image on my plane in 3D.
Do I have to use offscreen rendering (store the viewport, set new viewport, modelview and perspective matrices, and render into an FBO), or is there an easier way?
Thanks in advance!
Try GPUImage! It's easy to use and faster than Core Image processing. It uses predefined or custom (GLSL) shaders.

What can we use instead of blending in OpenGL ES?

I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image as a texture in OpenGL. The .png image is 512 x 512 pixels, its background is transparent, and the image has a thick blue line in its center.
When I apply my image to a polygon in OpenGL, the texture appears as if the transparent part in the image is black and the thick blue line is seen as itself. In order to remove the black part, I used blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
That removes the black part of the texture on the polygon and only the blue band is seen, so that problem is solved.
But I want to add many such images and make many objects in OpenGL. I am able to do that, but the frame rate gets very low as I add more and more images to objects. When I comment out the blending, the frame rate is normal, but then the images are not seen.
Since I do not have a good fps, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What all steps need to be taken in order to implement my graphics properly?
If you want parts of an object to be transparent, the only way is to blend the pixel data for the triangle with what is currently in the buffer (which is what you are currently doing). Normally, when using solid textures, the new pixel data for a triangle just overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). But with transparency, the GPU has to start looking at the transparency of that part of the texture, look at what is behind it, all the way back to something solid, and then combine all of those overlapping layers of transparent fragments to get the final image.
If all you want the transparency for is something like a simple tree sprite, removing the 'stuff' from the sides of the trunk etc., then you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus not need to bother with transparency at all.
Sadly, I don't think there is much you can do to speed up your FPS other than cut down the amount of transparency you are calculating. Maybe add an optimization that checks each image to see whether alpha blending can be turned off for it or not. Depending on how much you are trying to push through, that may save time in the long run.
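Something along these lines is what I mean by checking the images (a rough sketch; the function and variable names are just illustrative):

#include <OpenGLES/ES1/gl.h>   /* iPhone; use <GLES/gl.h> on other platforms */
#include <stdbool.h>
#include <stddef.h>

/* Returns true if any pixel of an RGBA8 image is not fully opaque.
   Run this once when the texture is loaded and remember the result. */
bool imageNeedsBlending(const unsigned char *rgba, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        if (rgba[i * 4 + 3] != 255) {
            return true;
        }
    }
    return false;
}

/* At draw time, only pay for blending where it is actually needed. */
void drawTexturedObject(GLuint texture, bool needsBlending)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    if (needsBlending) {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  /* same blend setup as in the question */
    } else {
        glDisable(GL_BLEND);
    }
    /* ... submit the object's geometry here ... */
}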
