How to implement multiple features in one OpenGL program? - opengl-es

I have to implement multiple features in one OpenGL program. For example, to process one whole image file, there are 3 features:
(1) YUV->RGB
(2) image filter
(3) RGB->YUV
so just one vertex shader and 3 fragment shaders should do. I have implemented these 3 shaders one by one, and each works on its own, but I don't know how to chain them together like a pipe. Can somebody help? Thanks.
From googling I found 2 ways that might work for my case:
1. Use glUseProgram() to switch between shaders, but then only the last fragment shader has any effect.
2. Write one complicated fragment shader that combines all these features, but I don't know how; it seems impossible.

Use FBOs (framebuffer objects) to ping-pong draw calls. For example, when applying image filters: draw the texture into an FBO using the first filter shader, then use a second shader (another filter) to draw the content of that FBO's texture to the framebuffer.
If you need more than two processing shaders, use two FBOs and ping-pong draw calls between them off-screen until processing is done.
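For the three stages in the question (YUV->RGB, filter, RGB->YUV) the ping-pong could look roughly like the sketch below. This is only a sketch, not a drop-in implementation: width, height, srcTex, the three program objects and drawFullscreenQuad() are assumed placeholders, and vertex-attribute setup and error checking are omitted.

GLuint fbo[2], colorTex[2];
glGenFramebuffers(2, fbo);
glGenTextures(2, colorTex);
for (int i = 0; i < 2; ++i) {
    // Each FBO gets a color texture the size of the image being processed.
    glBindTexture(GL_TEXTURE_2D, colorTex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex[i], 0);
}

glActiveTexture(GL_TEXTURE0);

// Pass 1: YUV -> RGB, reading the source texture, writing into fbo[0].
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glUseProgram(progYuvToRgb);
glBindTexture(GL_TEXTURE_2D, srcTex);
drawFullscreenQuad();

// Pass 2: image filter, reading fbo[0]'s texture, writing into fbo[1].
glBindFramebuffer(GL_FRAMEBUFFER, fbo[1]);
glUseProgram(progFilter);
glBindTexture(GL_TEXTURE_2D, colorTex[0]);
drawFullscreenQuad();

// Pass 3: RGB -> YUV, reading fbo[1]'s texture, writing to the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(progRgbToYuv);
glBindTexture(GL_TEXTURE_2D, colorTex[1]);
drawFullscreenQuad();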

Related

Copying a non-multisampled FBO to a multisampled one

I have been trying to implement a render-to-texture approach in our application, which uses GLES 3, and I have it working, but I am a little disappointed with the frame rate drop.
So far we have been rendering directly to the main FBO, which is a multisampled FBO created using EGL_SAMPLES=8.
What I basically want is to get hold of pixels that have already been drawn while I'm still drawing, so I thought a render-to-texture approach should do it. Then I'd just read a section of the off-screen FBO's texture whenever I want, and when I'm done rendering to it I'd blit the whole thing to the main FBO.
Digging into this, I found I had to set up both a multisampled FBO and a non-multisampled textured FBO into which the multisampled one is resolved, and then blit the resolved FBO to the main FBO.
This all works, but the problem is that with the above system and a non-multisampled main FBO (EGL_SAMPLES=0) I get quite a big frame rate drop compared to what I get when I use just the main FBO with EGL_SAMPLES=8.
Digging a bit more, I found people reporting online, as well as a post at https://community.arm.com/thread/6925, that the fastest approach to multisampling is to use EGL_SAMPLES. And indeed that's what it looks like on the Jetson TK1 too, which is our target board.
Which finally leads me to the question, and apologies for the long introduction:
Is there any way that I can design this to use a non-multisampled off-screen FBO for all the rendering that is eventually blitted to a main multisampled FBO that uses EGL_SAMPLES?
The only point of MSAA is to anti-alias geometry edges. It only provides benefit if multiple triangle edges appear in the same pixel. For rendering pipelines which are doing multiple off-screen passes you want to enable multiple samples for the off-screen passes which contain your geometry (normally one of the early passes in the pipeline, before any post-processing effects).
Applying MSAA at the end of the pipeline on the final blit will provide zero benefit, and probably isn't free (it will be close to free on tile-based renderers like IMG Series 6 and Mali (the blog you linked), less free on immediate-mode renderers like the NVIDIA GPU in your Jetson board).
Note that for off-screen anti-aliasing the "standard" approach is rendering to an MSAA framebuffer and then resolving as a second pass (e.g. using glBlitFramebuffer to blit into a single-sampled buffer). This bounce is inefficient on many architectures, so this extension exists to help:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_multisampled_render_to_texture.txt
Effectively this provides the same implicit resolve as the EGL window surface functionality.
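For illustration only (this is not code from the answer), setting the extension up typically looks something like the sketch below; the 4x sample count, width and height are placeholders, and GL_EXT_multisampled_render_to_texture must be present in the extension string before these entry points are used.

GLuint fbo, colorTex, depthRb;

// A plain single-sampled color texture; the multisampled storage is implicit.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Depth buffer with matching multisampled storage.
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, 4,
                                    GL_DEPTH_COMPONENT16, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// The driver keeps the 4x multisample data in an implicit buffer and
// resolves it into colorTex automatically.
glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     GL_TEXTURE_2D, colorTex, 0, 4);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

// Render the scene here; afterwards colorTex holds the resolved
// (anti-aliased) single-sampled image and can be sampled directly.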
Answers to your questions in the comments.
Is the resulting texture a multisampled texture in that case?
From the application point of view, no. The multisampled data is inside an implicitly allocated buffer, allocated by the driver. See this bit of the spec:
"The implementation allocates an implicit multisample buffer with TEXTURE_SAMPLES_EXT samples and the same internalformat, width, and height as the specified texture level."
This may require a real MSAA buffer allocation in main memory on some GPU architectures (and so be no faster than the manual glBlitFramebuffer approach without the extension), but is known to be effectively free on others (i.e. tile-based GPUs where the implicit "buffer" is a small RAM inside the GPU, and not in main memory at all).
The goal is to blur the background behind widgets
MSAA is not in any way a general-purpose blur; it only anti-aliases the pixels which are coincident with edges of triangles. If you want to blur triangle faces you'd be better off just using a separable Gaussian blur implemented as a pair of fragment shaders, run as a 2D post-processing pass.
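A rough sketch of how the two blur passes might be driven is shown below (not code from this answer): progBlur is assumed to be a program whose fragment shader sums weighted taps along a u_direction offset, and fboA, fboB, texA, sceneTex, width, height and drawFullscreenQuad() are placeholders.

GLint dirLoc = glGetUniformLocation(progBlur, "u_direction");
glUseProgram(progBlur);

// Horizontal pass: read the scene texture, write the result into fboA.
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glUniform2f(dirLoc, 1.0f / width, 0.0f);   // step one texel horizontally
drawFullscreenQuad();

// Vertical pass: read fboA's texture, write the final blur into fboB
// (or wherever the blurred background is composited).
glBindFramebuffer(GL_FRAMEBUFFER, fboB);
glBindTexture(GL_TEXTURE_2D, texA);
glUniform2f(dirLoc, 0.0f, 1.0f / height);  // step one texel vertically
drawFullscreenQuad();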
Is there any way that I can design this to use a non-multisampled off-screen FBO for all the rendering that is eventually blitted to a main multisampled FBO that uses EGL_SAMPLES?
Not in any way which is genuinely useful.
Framebuffer blitting does allow blits from single-sampled buffers to multisample buffers. But all that does is give every sample within a pixel the same value as the one from the source.
Blitting cannot generate new information. So you won't get any actual antialiasing. All you will get is the same data stored in a much less efficient way.

Draw bitmap using Metal

Question is quite simple. On Windows you have BitBlt() to draw to the screen. On OS X you could normally use OpenGL, but how do I draw a bitmap to the screen using Apple's new Metal framework? I can't find anything useful in Apple's Metal references.
I'm currently using Core Graphics for the drawing, but since my bitmap is updating all the time, I feel like I should move to Metal to reduce the overhead.
The very short answer is this: first, you need to set up a standard rendering pipeline in Metal, which is a bit of work if you don't know anything about the rendering pipeline (note: you can sidestep most of the 3D work by rendering a quad made of two triangles and giving only xy coordinates for the vertices). There is a lot of sample code from Apple that shows how to set up a standard rendering pipeline; the simplest is maybe MetalImageProcessing, which you could strip down (even though it uses a compute shader, which is overcomplicated for drawing, so you'd want to substitute standard vertex and fragment shaders).
Second, you need to learn how vertex and fragment shaders work and how to draw things with them; see Shadertoy for this.
Just noticed the user's last comment: if you have an array of bytes that represents an image, you can make an MTLTexture out of it and render it to the screen. See Apple's example above, but change the compute shader to standard vertex and fragment shaders for better performance.

How to draw on an image in openGL?

I load an image (biological image scans) and want to a) display it and b) draw markers on it. How would I program the shaders? I guess the vertex shaders are simple enough, since it is a 2D image. One idea I had was to overwrite the image data in the buffer, setting the pixels where markers go to specific values. My markers are boxes (so lines); is this the right way to go? I read that there are different primitives, lines among them, so is there a way to draw my lines on my image without manipulating the data in the buffer, simply as an overlay, so to speak? My framework is vispy, but pseudocode would also help.
Draw a rectangle/square with your image as a texture on it. Then, draw the markers (probably as monotone quads/rectangles).
If you want the lines to be over the image, but under the markers, simply put the rendering code in between.
No shaders are required if older OpenGL is suitable for you (since OpenGL 3.2 most of the old functionality lives in the compatibility profile, while modern features are core profile; the latter requires self-written shaders, but they should be pretty simple for your case).
To sum up, the things that you need understanding of are primitives (lines, triangles) and basic texturing.
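A minimal sketch of that draw order, assuming an OpenGL-ES-style setup (progTexture, progSolidColor, imageTex and the vertex-attribute setup for the quad and the marker corners are placeholders, not part of the answer):

// 1. Draw the image as a textured quad; the image data itself is never modified.
glUseProgram(progTexture);
glBindTexture(GL_TEXTURE_2D, imageTex);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // 4 vertices = one quad

// 2. Draw each marker box as a line loop on top of the image.
glUseProgram(progSolidColor);
glDrawArrays(GL_LINE_LOOP, 0, 4);        // 4 corners of one box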

Shader Management for Lighting?

I am currently learning how to apply materials with lighting in my application, but I got confused about how to scale it. I'm using WebGL and learning from learningwebgl.com (which is said to match the NeHe OpenGL tutorials), and it only shows simple shader programs where every sample has one program with the lighting embedded in it.
Say I have a lighting setup with several point lights/spot lights, and I have multiple meshes with different materials, but every mesh needs to react to those lights. What should I do? Make individual shader programs that apply colors/textures to the meshes and then switch to a lighting program? Or always keep the lighting code (as functions) in my application as shader strings, append it to the loaded shaders, and simply pass variables to enable them?
Also, I am focusing on per-fragment lighting, so presumably everything happens in the fragment shaders.
There are generally 2 approaches:
Have an uber shader
In this case you make one big shader with every possible option and lots of branching, or ways to effectively nullify parts of the shader (like multiplying by 0).
A simple example might be to have an array of light uniforms in the shader. For lights you don't want to have an effect, you just set their color to 0,0,0,0 or their power to 0, so they are still calculated but contribute nothing to the final scene (see the sketch at the end of this answer).
Generate shaders on the fly
In this case for each model you figure out what options you need for that shader and generate the appropriate shader with the exact features you need.
A variation of #2 is the same but all the various shaders needed are generated offline.
Most game engines use technique #2, as it's far more efficient to use the smallest shader possible for each situation than to run an uber shader, but many smaller projects and even game prototypes use an uber shader because it's easier than generating shaders, especially if you don't know all the options you'll need yet.
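As a concrete illustration of the uber-shader idea (my own sketch, not code from the answer), here is a GLSL ES 1.00 fragment shader shown as a C string; MAX_LIGHTS, the uniform names and the simple diffuse lighting model are all illustrative. The same GLSL also works in WebGL 1. A light is disabled by setting its color uniform to (0,0,0).

static const char *uberFragmentSrc =
    "precision mediump float;\n"
    "#define MAX_LIGHTS 4\n"
    "uniform vec3 u_lightColor[MAX_LIGHTS];  // (0,0,0) disables the light\n"
    "uniform vec3 u_lightPos[MAX_LIGHTS];\n"
    "uniform sampler2D u_diffuse;\n"
    "varying vec3 v_normal;\n"
    "varying vec3 v_worldPos;\n"
    "varying vec2 v_uv;\n"
    "void main() {\n"
    "    vec3 n = normalize(v_normal);\n"
    "    vec3 lit = vec3(0.0);\n"
    "    for (int i = 0; i < MAX_LIGHTS; ++i) {\n"
    "        vec3 l = normalize(u_lightPos[i] - v_worldPos);\n"
    "        lit += u_lightColor[i] * max(dot(n, l), 0.0);\n"
    "    }\n"
    "    gl_FragColor = texture2D(u_diffuse, v_uv) * vec4(lit, 1.0);\n"
    "}\n";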

Multi-pass shaders in OpenGL ES 2.0

First: do subroutines require GLSL 4.0+? So are they unavailable in the GLSL version used by OpenGL ES 2.0?
I don't quite understand what multi-pass shaders are.
Here is my picture of it:
Draw a group of something (e.g. sprites) to an FBO using some shader.
Treat the FBO as a big texture for a big screen-sized quad and use another shader which, for example, turns the texture colors to grayscale.
Draw the FBO-textured quad to the screen with the grayscaled colors.
Or is this called something else?
So multi-pass = use one shader's output as another shader's input? So we render one object twice or more? How does one shader's output get to another shader's input?
For example
glUseProgram(shader_prog_1);//Just plain sprite draw
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, /*some texture_id*/);
//Setting input for shader_prog_1
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//Disabling arrays, buffers
glUseProgram(shader_prog_2);//Uses same vertex, but different fragment shader program
//Setting input for shader_prog_2
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can anyone provide a simple example of this in a basic way?
In general, the term "multi-pass rendering" refers to rendering the same object multiple times with different shaders, and accumulating the results in the framebuffer. The accumulation is generally done via blending, not with shaders. That is, the second shader doesn't take the output of the first. They each perform part of the computation, and the blend stage combines them into the final value.
Nowadays, this is primarily done for lighting in forward-rendering scenarios. You render each object once for each light, passing different lighting parameters and possibly using different shaders each time you render a light. The blend mode used to accumulate the results is additive, since light reflectance is an additive property.
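A minimal sketch of what that accumulation might look like (progLightPass, setLightUniforms(), drawMesh() and numLights are placeholders, not part of this answer):

// First light: draw normally; this also lays down the depth buffer.
glDisable(GL_BLEND);
glUseProgram(progLightPass);
setLightUniforms(0);
drawMesh();

// Remaining lights: redraw the same geometry and let blending add the
// contributions.  GL_EQUAL depth testing skips fragments that were
// rejected in the first pass.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // dst = dst + src, i.e. accumulate light
glDepthFunc(GL_EQUAL);
for (int i = 1; i < numLights; ++i) {
    setLightUniforms(i);
    drawMesh();
}
glDepthFunc(GL_LESS);          // restore the default depth test
glDisable(GL_BLEND);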
Does subroutines require GLSL 4.0+? So it unavailable in GLSL version of OpenGL ES 2.0?
This is a completely different question from the entire rest of your post, but the answer is yes and no.
No, in the sense that ARB_shader_subroutine is an OpenGL extension, and it therefore could be implemented by any OpenGL implementation. Yes, in the practical sense that any hardware that actually could implement shader_subroutine could also implement the rest of GL 4.x and therefore would already be advertising 4.x functionality.
In practice, you won't find shader_subroutine supported by non-4.x OpenGL implementations.
It is unavailable in GLSL ES 2.0 because it's GLSL ES. Do not confuse desktop OpenGL with OpenGL ES. They are two different things, with different GLSL versions and different feature sets. They don't even share extensions (except for a very few recent ones).
