Copying a non-multisampled FBO to a multisampled one - opengl-es

I have been trying to implement a render-to-texture approach in our application, which uses GLES 3. I have it working, but I am a little disappointed with the frame rate drop.
So far we have been rendering directly to the main FBO, which has been a multisampled FBO created using EGL_SAMPLES=8.
What I basically want is to get hold of the pixels that have already been drawn while I'm still drawing, so I thought a render-to-texture approach should do it. Then I'd just read a section of the off-screen FBO's texture whenever I want, and when I'm done rendering to it I'd blit the whole thing to the main FBO.
Digging into this, I found I had to implement a system with a multisampled FBO as well as a non-multisampled textured FBO into which the multisampled one is resolved, and then blit the resolved FBO to the main FBO.
This all works, but the problem is that with the above system and a non-multisampled main FBO (EGL_SAMPLES=0) I get quite a big frame rate drop compared to what I get when I just use the main FBO with EGL_SAMPLES=8.
Digging a bit more, I found people reporting the same thing online, as well as a post here https://community.arm.com/thread/6925 which says that the fastest approach to multisampling is to use EGL_SAMPLES. And indeed that's what it looks like on the Jetson TK1 too, which is our target board.
Which finally leads me to the question, and apologies for the long introduction:
Is there any way I can design this to use a non-multisampled off-screen FBO for all the rendering, which is eventually blitted to a main multisampled FBO that uses EGL_SAMPLES?
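(For reference, the two-FBO round trip described above looks roughly like this in GLES 3. This is a minimal sketch only: handles, sizes, and sample counts are placeholders, and completeness checks and error handling are omitted.)

    GLuint msaaFbo, msaaColorRb, resolveFbo, resolveTex;
    const GLsizei w = 1280, h = 720, samples = 8;

    /* Multisampled off-screen FBO backed by a renderbuffer. */
    glGenFramebuffers(1, &msaaFbo);
    glGenRenderbuffers(1, &msaaColorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaColorRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, w, h);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, msaaColorRb);

    /* Single-sampled FBO with a texture attachment to resolve into. */
    glGenFramebuffers(1, &resolveFbo);
    glGenTextures(1, &resolveTex);
    glBindTexture(GL_TEXTURE_2D, resolveTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindFramebuffer(GL_FRAMEBUFFER, resolveFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, resolveTex, 0);

    /* Per frame: draw the scene into the MSAA FBO ... then resolve and present. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* main (window) framebuffer */
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);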

The only point of MSAA is to anti-alias geometry edges. It only provides benefit if multiple triangle edges appear in the same pixel. For rendering pipelines which are doing multiple off-screen passes you want to enable multiple samples for the off-screen passes which contain your geometry (normally one of the early passes in the pipeline, before any post-processing effects).
Applying MSAA at the end of the pipeline, on the final blit, will provide zero benefit, and probably isn't free (it will be close to free on tile-based renderers like IMG Series 6 and Mali (the blog you linked), less free on immediate-mode renderers like the NVIDIA GPU in your Jetson board).
Note that for off-screen anti-aliasing the "standard" approach is rendering to an MSAA framebuffer and then resolving as a second pass (e.g. using glBlitFramebuffer to blit into a single-sampled buffer). This bounce is inefficient on many architectures, so this extension exists to help:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_multisampled_render_to_texture.txt
Effectively this provides the same implicit resolve as the EGL window surface functionality.
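A hedged sketch of how the extension is typically used (check for EXT_multisampled_render_to_texture at runtime and load the entry points via eglGetProcAddress; names and sizes are placeholders):

    /* The colour texture itself stays single-sampled; the multisample storage is
     * implicit, allocated by the driver, and resolved automatically. */
    const GLsizei w = 1280, h = 720;
    GLuint fbo, tex, depthRb;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                         GL_TEXTURE_2D, tex, 0, 4 /* samples */);

    /* Optional multisampled depth attachment via the companion entry point. */
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, w, h);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    /* Render the pass as normal; once rendering to the FBO is finished the
     * implicit MSAA data is resolved into `tex`, which can then be sampled or
     * blitted to the main framebuffer without a manual resolve blit. */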
Answers to your questions in the comments.
Is the resulting texture a multisampled texture in that case?
From the application point of view, no. The multisampled data is inside an implicitly allocated buffer, allocated by the driver. See this bit of the spec:
"The implementation allocates an implicit multisample buffer with TEXTURE_SAMPLES_EXT samples and the same internalformat, width, and height as the specified texture level."
This may require a real MSAA buffer allocation in main memory on some GPU architectures (and so be no faster than the manual glBlitFramebuffer approach without the extension), but is known to be effectively free on others (i.e. tile-based GPUs where the implicit "buffer" is a small RAM inside the GPU, and not in main memory at all).
The goal is to blur the background behind widgets
MSAA is not in any way a general-purpose blur - it only anti-aliases the pixels which are coincident with triangle edges. If you want to blur triangle faces you'd be better off using a separable Gaussian blur implemented as a pair of fragment shader passes, applied as a 2D post-processing step.
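For illustration, a minimal sketch of the horizontal pass as a GLES 3 fragment shader, embedded here as a C string; the weights, uniform names, and varying name are illustrative, and the vertical pass is the same shader with the offset applied along y instead of x:

    /* Horizontal pass of a separable Gaussian blur (illustrative 9-tap weights).
     * Render it into an intermediate FBO, then run the vertical pass. */
    static const char *blurHorizontalFs =
        "#version 300 es\n"
        "precision mediump float;\n"
        "uniform sampler2D uImage;\n"
        "uniform vec2 uTexelSize;   // 1.0 / texture resolution\n"
        "in vec2 vUv;\n"
        "out vec4 fragColor;\n"
        "const float w[5] = float[5](0.227027, 0.1945946, 0.1216216,\n"
        "                            0.054054, 0.016216);\n"
        "void main() {\n"
        "    vec3 sum = texture(uImage, vUv).rgb * w[0];\n"
        "    for (int i = 1; i < 5; ++i) {\n"
        "        vec2 off = vec2(uTexelSize.x * float(i), 0.0);\n"
        "        sum += texture(uImage, vUv + off).rgb * w[i];\n"
        "        sum += texture(uImage, vUv - off).rgb * w[i];\n"
        "    }\n"
        "    fragColor = vec4(sum, 1.0);\n"
        "}\n";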

Is there any way that I can design this to use a non-multisampled off-screen fbo for all the rendering that eventually is blitted to a main multisampled FBO that uses EGL_SAMPLES?
Not in any way which is genuinely useful.
Framebuffer blitting does allow blits from single-sampled buffers to multisample buffers. But all that does is give every sample within a pixel the same value as the one from the source.
Blitting cannot generate new information. So you won't get any actual antialiasing. All you will get is the same data stored in a much less efficient way.

Related

Draw bitmap using Metal

The question is quite simple. On Windows you have BitBlt() to draw to the screen. On OS X you would normally use OpenGL, but how do I draw the bitmap to the screen using Apple's new Metal framework? I can't find anything valuable in Apple's Metal references.
I'm right now using Core Graphics for the drawing part but since my bitmap is updating all the time, I feel like I should move to Metal to reduce the overhead.
The very short answer is this: first, you need to set up a standard rendering pipeline using Metal, which is a bit of work if you don't know anything about the rendering pipeline (note: this can be simplified by avoiding the 3D stuff and rendering a quad made of two triangles, giving only xy coordinates for the vertices). There is a lot of sample code from Apple that shows how to set up a standard rendering pipeline; the simplest is maybe MetalImageProcessing, so you could strip that down (even though it uses a compute shader, which is overcomplicated for drawing; you'd want to substitute standard vertex and fragment shaders).
Second, you need to learn how vertex and fragment shaders work and how to draw things with them; see Shadertoy for this.
Just noticed the user's last comment: if you have an array of bytes that represents an image, you can just make an MTLTexture out of it and render it to the screen. See Apple's example above, but change the compute shader to standard vertex and fragment shaders for better performance.

Reusing parts of the previous frame when drawing 2D with WebGL

I'm using WebGL to do something very similar to image processing. I draw a single quad in orthographic projection and perform some processing in the fragment shaders. There are two steps: in the first, the original texture from the quad is processed in the fragment shader and written to a framebuffer; a second step processes that data to the final canvas.
The users can zoom and translate the image. For this to work smoothly, I need to hit 60 fps, or this gets noticeably sluggish. This is no issue on desktop GPUs, but on mobile devices with much weaker hardware and higher resolutions this gets problematic.
The translation case is the most noticeable and problematic: the user drags the mouse pointer or their finger across the screen and the image lags behind. But translation is also a case where I could theoretically reuse a lot of data from the previous frame.
Ideally I'd copy the canvas from the last frame, translate it by x,y pixels, and then only do the whole image processing in fragment shaders on the parts of the canvas that aren't covered by the translated previous canvas.
Is there a way to do this in WebGL?
If you want to access the previous frame you need to draw to a texture attached to a framebuffer. Then draw that texture into the canvas translated.
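A rough sketch of that setup, written here with GLES-style C calls (WebGL exposes the same entry points on its context object); all handles, sizes, and uniform names are placeholders:

    /* One-time setup: a texture the size of the canvas, attached to an FBO. */
    const GLsizei canvasWidth = 1024, canvasHeight = 768;
    GLuint prevFrameTex, prevFrameFbo;

    glGenTextures(1, &prevFrameTex);
    glBindTexture(GL_TEXTURE_2D, prevFrameTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, canvasWidth, canvasHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &prevFrameFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, prevFrameFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, prevFrameTex, 0);

    /* Each frame: render the processed image into the FBO instead of the canvas,
     * then draw prevFrameTex to the canvas with the pan offset applied (e.g. via
     * a uniform added to the quad position in the vertex shader). */
    glBindFramebuffer(GL_FRAMEBUFFER, prevFrameFbo);
    /* ... run the expensive processing passes here ... */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);            /* back to the canvas */
    glUniform2f(offsetUniformLocation, panX, panY);  /* hypothetical pan uniform */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);           /* the full-screen quad */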

How to draw on an image in openGL?

I load an image (biological image scans) and want to a) display it and b) draw markers on it. How would I program the shaders? I guess the vertex shaders are simple enough, since it is a 2D image. One idea I had was to overwrite the image data in the buffer, with the pixels under the markers set to specific values. My markers are boxes (so lines); is this the right way to go? I read that there are different primitives, lines too, so is there a way to draw my lines on my image without manipulating the data in the buffer, simply as an overlay, so to speak? My framework is vispy, but pseudocode would also help.
Draw a rectangle/square with your image as a texture on it. Then, draw the markers (probably as monotone quads/rectangles).
If you want the lines to be over the image, but under the markers, simply put the rendering code in between.
No shaders are required if older OpenGL is suitable for you (since OpenGL 3.3 most of the old functionality has been moved to the compatibility profile, while modern features are in the core profile; the latter requires self-written shaders, but they should be pretty simple for your case).
To sum up, the things you need an understanding of are primitives (lines, triangles) and basic texturing.
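For illustration, a rough sketch of that draw order in old-style (fixed-function, compatibility profile) OpenGL; the texture handle and all coordinates are placeholders:

    /* 1. Draw the image as a textured rectangle. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, imageTexture);   /* placeholder texture handle */
    glColor3f(1.0f, 1.0f, 1.0f);                  /* white, so the image is untinted */
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);

    /* 2. Draw a marker box on top as a line loop. */
    glColor3f(1.0f, 0.0f, 0.0f);
    glBegin(GL_LINE_LOOP);
    glVertex2f(-0.2f, -0.2f);
    glVertex2f( 0.2f, -0.2f);
    glVertex2f( 0.2f,  0.2f);
    glVertex2f(-0.2f,  0.2f);
    glEnd();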

OpenCL - Draw image and zoom into it

I'm trying to draw an image directly to the screen. The image is calculated by a kernel and will be stored in GPU memory, so it should not be copied back to CPU memory. It should also be possible to dynamically zoom into the image while calculating more data at the same time.
Can anyone point me to a method that does this efficiently? Everything I've found so far was incredibly slow (for example, using OpenGL to draw a cube with the image as a texture). What is the fastest way to achieve this? Like I said, while zooming into the image, it should be possible to calculate more data to refine the image dynamically. From what I've read, OpenGL might be the fastest way performance-wise, but it seems incredibly slow to me to put the image on a cube and run a ton of shaders over it if it's already the way I want it.
Find one of the many OpenCL / OpenGL interop examples, they all show what you want. They create an OpenCL image made to interop with an OpenGL texture, and after rendering OpenCL data into the image they display it using OpenGL -- no copy back to CPU memory over the PCIe bus. The zooming into the image can be done just by scaling up the OpenGL quad that you draw the texture onto. If you are talking lots of scaling, then you will need a high-resolution texture. If lots and lots of zooming you should rethink the problem and compute just what you need for the current display (since with lots of zoom, plenty of data will be off-screen).
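For orientation, a hedged sketch of that interop round trip (platform setup with cl_khr_gl_sharing, kernel source, and error checking are omitted; every handle below is a placeholder):

    /* Wrap an existing GL texture as a CL image -- no copy is made.
     * (clCreateFromGLTexture is OpenCL 1.2; older versions use
     * clCreateFromGLTexture2D.) */
    cl_int err;
    cl_mem clImage = clCreateFromGLTexture(clContext, CL_MEM_WRITE_ONLY,
                                           GL_TEXTURE_2D, 0, glTextureHandle, &err);

    /* Per frame: hand the texture to OpenCL, run the kernel, hand it back. */
    glFinish();                                       /* GL must be done with it */
    clEnqueueAcquireGLObjects(queue, 1, &clImage, 0, NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &clImage);
    size_t global[2] = { imageWidth, imageHeight };
    clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);
    clEnqueueReleaseGLObjects(queue, 1, &clImage, 0, NULL, NULL);
    clFinish(queue);                                  /* CL must finish before GL reads */

    /* Then draw a screen-aligned quad textured with glTextureHandle; zooming is
     * just scaling the quad or adjusting its texture coordinates. */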

Copy arbitrarily sized block of pixels into OpenGL ES texture... somehow?

I'm writing a drawing application, and the drawing canvas is an OpenGL texture. When you draw onto the canvas, it determines which region of the canvas texture has been changed, and copies that pixel data out (using glReadPixels) before applying the changes you made.
To undo, I want to simply revert to the previous texture state using that pixel data that was copied out. However, OpenGL ES doesn't provide a glDrawPixels command. What's the best way to do it?
I've considered two options, but I'm not sure either is that great:
Create a temporary texture using the pixels I copied out and draw that in. (However, the copied region is not a power of two!)
Unbind the large canvas texture completely, manually alter the bytes of the texture, and then put it back into OpenGL. I'm not using any sort of compression, so this might not be that bad. But it seems like a hack?
Anybody have any ideas? I'd really appreciate it!
In case anyone stumbles across this while trying to do something similar, I've come up with a solution that seems to work well.
Grab an image of the current texture by binding it to the framebuffer and then writing the framebuffer to a CGImageRef.
Create a new CGContext and draw the existing texture's CGImageRef into it. Then draw the old texture data into the portion that the user changed, effectively "undoing" that change to the image.
Destroy the old OpenGL texture and create a new texture from the CGContext.
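A rough C sketch of those three steps (CoreGraphics plus GLES; `currentImage` is the CGImageRef captured from the framebuffer, `undoImage`/`undoRect` are the saved region, and orientation flips and error handling are glossed over):

    size_t w = CGImageGetWidth(currentImage);
    size_t h = CGImageGetHeight(currentImage);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, w * 4, cs,
                                             kCGImageAlphaPremultipliedLast);

    /* Step 2: draw the current texture image, then stamp the saved pixels back
     * over the changed region, undoing the edit. */
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), currentImage);
    CGContextDrawImage(ctx, undoRect, undoImage);

    /* Step 3: upload the composited bitmap as the new canvas texture. */
    GLuint newTexture;
    glGenTextures(1, &newTexture);
    glBindTexture(GL_TEXTURE_2D, newTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)w, (GLsizei)h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));

    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    /* glDeleteTextures(1, &oldCanvasTexture);  -- release the previous texture */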
I think this is a pretty slow way of going about things, but I don't need huge performance - my real concern was limiting the amount of data being kept to represent the "old" texture.
If you need help with this (there's quite a bit of code) feel free to email me.
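As a hedged aside on the original option 2: standard OpenGL ES can overwrite just a sub-rectangle of an existing texture with glTexSubImage2D, and the sub-region has no power-of-two restriction, so if the pixels saved with glReadPixels are still in client memory in the texture's format, restoring them can be a single call (the names below are placeholders):

    /* Write the saved block straight back into the existing canvas texture. */
    glBindTexture(GL_TEXTURE_2D, canvasTexture);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    regionX, regionY,            /* where the edit happened */
                    regionWidth, regionHeight,
                    GL_RGBA, GL_UNSIGNED_BYTE, savedPixels);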
