Layered rendering with OpenGL ES

I am trying to implement layered rendering and I have some doubts.
1) Can we do layered rendering without a geometry shader? I mean passing the layer number from the vertex shader to the fragment shader and setting gl_Layer in the fragment shader.
2) I tried hardcoding the gl_Layer value in the fragment shader. First I set it to zero, and in the resulting image I saw my triangle rendered at the bottom-left corner. Next I changed the gl_Layer value to 1, but I see the triangle in the same place as before.
How does layered rendering work? And say I have 4 layers: will all the layers appear side by side in a single image after drawing to the 4 layers?
Can someone please explain this? Thank you.

Can we do layered rendering without a Geometry shader?
Both OpenGL ES and desktop OpenGL support layered rendering through the geometry shader. However, only desktop GL has the ARB_shader_viewport_layer_array extension, which allows a vertex shader to set the layer being rendered to.
I tried setting the gl_Layer value hardcoded in the fragment shader.
In a fragment shader, gl_Layer is an input; modifying it will accomplish nothing. The layer to be rendered to is a property of the primitive (which is why the GS is the one that outputs it). A primitive is rasterized for a particular layer. By the time a fragment is generated, the layer that the fragment targets has already been set.
Layered rendering allows rendering to layered images. This means that each primitive is assigned a layer by the vertex processing step. When the rasterizer rasterizes the primitive, the fragments generated will be written to the specific layer to which the primitive was assigned.
The images are not "side by side"; they are part of layered images like array textures.
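For illustration, here is a minimal pass-through geometry shader sketch (GLSL ES 3.20, where geometry shaders are core) that assigns each triangle to a layer chosen by the application. The uLayer uniform and the pass-through structure are assumptions made for this example, not something taken from the question:

#version 320 es
// Pass each incoming triangle through unchanged and assign it to one
// layer of a layered framebuffer attachment.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform int uLayer;  // hypothetical uniform: the layer this draw should target

void main()
{
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gl_Layer = uLayer;   // per-primitive output: selects the target layer
        EmitVertex();
    }
    EndPrimitive();
}

The framebuffer must have a layered attachment (for example, a 2D array texture attached with glFramebufferTexture) for gl_Layer to have any effect.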

Related

How to implement multiple features in one opengl program?

I have to implement multiple features in one OpenGL program. For example, to process one whole image file, we have 3 features:
(1). YUV->RGB
(2). image filter
(3). RGB->YUV
so one vertex shader and 3 fragment shaders should be enough. I have implemented these 3 shaders one by one, and each works on its own, but I don't know how to chain them together like a pipe. Can somebody help? Thanks.
I have found 2 ways that may work for my case:
1. Use glUseProgram() to switch between shaders, but then only the last fragment shader takes effect.
2. Write one complicated fragment shader that combines all these features, but I don't know how; it seems impossible.
Use FBOs (framebuffer objects) and ping-pong your draw calls. For example, when applying image filters: draw the source texture into an FBO using the first filter shader, then use the second shader (another filter) to draw the content of that FBO (its texture) to the framebuffer.
If you need more than two processing shaders, use two FBOs and ping-pong draw calls between them off screen until the processing is done.
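A minimal C-style sketch of that ping-pong loop, assuming two FBOs fbo[2] with color textures tex[2], three compiled programs prog[3] (YUV->RGB, filter, RGB->YUV), a source texture srcTex, and a hypothetical drawFullscreenQuad() helper:

GLuint readTex = srcTex;
for (int pass = 0; pass < 3; ++pass) {
    int write = pass % 2;                        /* which FBO this pass renders into */
    int lastPass = (pass == 2);
    /* Last pass renders to the default framebuffer; earlier passes ping-pong. */
    glBindFramebuffer(GL_FRAMEBUFFER, lastPass ? 0 : fbo[write]);
    glUseProgram(prog[pass]);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, readTex);       /* output of the previous pass */
    drawFullscreenQuad();                        /* hypothetical helper */
    readTex = tex[write];                        /* next pass reads what was just written */
}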

How to store and access per fragment attributes in WebGL

I am doing a particle system in WebGL using Three.js, and I want to do all the computation of the particles in the shaders. To achieve that, the positions (for example) of the particles are stored in a texture which is sampled by the vertex shader of each particle (POINT primitive).
The position texture is in fact two render targets which are swapped each frame after being updated off screen. Each pixel of this texture represents a particle.
To update a position, I read one of the render targets (texture2D), do some computation, and write to the other render target (fragment output).
To perform the "do some computation" step, I need some per particle attributes, like its velocity (and a lot of others). Since this step is done in the fragment shader, I can't use the vertex attributes buffers, so I have to store these properties in separate textures and sample each of them in the fragment shader.
It works, but sampling textures is slow as far as I know, and I wonder if there are better ways to do this, like having one vertex per particle, each rendering a single fragment of the position texture.
I know that OpenGL 4 has some alternative ways to deal with this, like UBOs or SSBOs, but I'm not sure what is available in WebGL.
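Although the question asks about alternatives, a minimal GLSL (WebGL 1 / ES 2.0 style) sketch of the "do some computation" fragment shader may make the described setup clearer; all names (uPositions, uVelocities, uDeltaTime, vParticleUV) are illustrative assumptions:

precision highp float;

// Ping-pong update step: one texel = one particle.
uniform sampler2D uPositions;   // previous frame's positions (read target)
uniform sampler2D uVelocities;  // per-particle velocity, stored in its own texture
uniform float uDeltaTime;
varying vec2 vParticleUV;       // this fragment's texel / particle coordinate

void main() {
    vec3 pos = texture2D(uPositions,  vParticleUV).xyz;
    vec3 vel = texture2D(uVelocities, vParticleUV).xyz;
    pos += vel * uDeltaTime;             // "do some computation"
    gl_FragColor = vec4(pos, 1.0);       // written to the other render target
}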

OpenGL ES 2.0 Vertex Shader Texture Reads not possible from FBO?

I'm currently working on a GPGPU project that uses OpenGL ES 2.0. I have a rendering pipeline that uses framebuffer objects (FBOs) as targets, i.e. the result of each rendering pass is saved in a texture which is attached to an FBO. So far, this works when using fragment shaders. For example, I have the following rendering pipeline:
Preprocessing (downscaling, grayscale conversion)
-> Adaptive Thresholding Pass 1 -> Adapt. Thresh. Pass 2
-> Copy back to CPU
However, I wanted to extend this pipeline by adding a grayscale histogram calculation after the preprocessing step. With OpenGL ES 2.0 this only works with texture reads in the vertex shader, as far as I know [1]. I can confirm that my shaders work in a different program where the input is a "real" image, not a rendered texture that is attached to an FBO. Hence I think it is not possible to read texture data in a vertex shader if it comes from an FBO. Can anyone confirm this assumption, or am I missing something? I'm using a Nexus 10 for my experiments.
[1]: It basically works by reading each pixel value from the texture in the vertex shader, calculating the histogram bin from it, and "adding" it in the fragment shader by using alpha blending.
Texture reads within a vertex shader are not a required element in OpenGL ES 2.0, so you'll find some manufacturers supporting them and some not. In fact, there was a weird situation where iOS supported it on some devices for one version of iOS, but not the next (and it's now officially supported in iOS 7). That might be the source of the inconsistency you see here.
To work around this, I implemented a histogram calculation by instead feeding the colors from the FBO (or its attached texture) in as vertices and using a scattering operation similar to what you describe. This doesn't require a texture read of any kind in the vertex shader, but it does involve a round-trip from and to the GPU and potentially a lot of vertices. It works on all OpenGL ES 2.0 hardware, but it can be costly.
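A minimal GLSL ES 2.0 sketch of that scattering approach, assuming the pixel colors read back from the FBO are uploaded as a per-vertex attribute, the histogram is a 256x1 render target drawn with additive blending (glBlendFunc(GL_ONE, GL_ONE)), and half-texel offsets are omitted for brevity; the names are illustrative:

// Vertex shader: one vertex per input pixel, aColor carries that pixel's RGB.
attribute vec3 aColor;

void main() {
    // Luminance -> histogram bin in [0, 1], mapped to clip space [-1, 1].
    float luma = dot(aColor, vec3(0.299, 0.587, 0.114));
    gl_Position = vec4(luma * 2.0 - 1.0, 0.0, 0.0, 1.0);
    gl_PointSize = 1.0;
}

// Fragment shader: each point adds a small increment to its bin;
// additive blending accumulates the counts.
precision mediump float;
void main() {
    gl_FragColor = vec4(1.0 / 255.0, 0.0, 0.0, 0.0);
}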

OpenGL ES. Hide layers in 2D?

For example, I have 2 layers: background and image. In my case I must show or hide the image when the zoom value changes (a simple float variable).
The only solution I know is to keep 2 separate framebuffers for the background and the image, and not to draw the image when it is not necessary.
But is it possible to do this in an easier way?
Just don't pass the geometry to glDrawArrays() for the layer you want to hide when the zoom occurs. OpenGL ES completely re-renders everything every frame. You should have a glClear() call at the start of your frame render loop. So, removing something is done by just not sending its triangles. You might need to divide your geometry into separate lists for each layer.
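In C-style pseudocode the frame loop then looks something like this (all names here, such as drawBackground(), drawImage() and the zoom threshold, are hypothetical):

/* Each frame: clear, then draw only the layers that should be visible. */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawBackground();                        /* background layer */
if (zoom < IMAGE_VISIBLE_MAX_ZOOM) {     /* assumed visibility rule */
    drawImage();                         /* skipping this call "hides" the layer */
}
eglSwapBuffers(display, surface);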

What's the difference between glClient*** and gl***?

I'm learning GLES. There are many paired functions like glClientActiveTexture/glActiveTexture.
What's the difference between them? (Especially the case of glClientActiveTexture)
From the OpenGL documentation:
glActiveTexture selects which texture unit subsequent texture state calls will affect.
glClientActiveTexture selects the vertex array client-state parameters to be modified by glTexCoordPointer.
In other words, glClientActiveTexture controls which texture unit subsequent glTexCoordPointer calls (with vertex arrays) apply to, while glActiveTexture affects subsequent texture state calls such as glBindTexture. Texture coordinates in display lists or immediate mode (which does not exist in OpenGL ES, AFAIK) are instead set through glMultiTexCoord.
Adding to Kenji's answer, here's a bit more detail (and bearing in mind that not everything in OpenGL is available in ES).
Terminology
But first, some quick terminology to help prevent confusion between all the texture-related things (oversimplified for brevity).
Texture Unit: A set of texture targets (e.g. GL_TEXTURE_2D). It is activated via glActiveTexture
Texture Target: A binding point for a texture. It specifies a type of texture (e.g. 2D). It is set via glBindTexture.
Texture: A thing containing texels (image data, e.g. pixels) and parameters (e.g. wrap and filter mode) that comprise a texture image. It is identified by a key/name/ID, a new one of which you get via glGenTextures, and it is actually instantiated via (first call to) glBindTexture with its name.
Summary: A texture unit has texture targets that are bound to textures.
Texture Unit Activation
In order to have multitexturing with 2D textures (for example), wherein more than one 2D texture must be bound simultaneously for the draw, you use different texture units - bind one texture on one unit's 2D target, and bind the next on another unit's 2D target. glActiveTexture is used to specify which texture unit the subsequent call to glBindTexture will apply to, and glBindTexture is used to specify which texture is bound to which target on the active texture unit.
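For example, binding two 2D textures for a single multitextured draw might look like this (a sketch; texA and texB are assumed to be existing texture names):

glActiveTexture(GL_TEXTURE0);        /* subsequent binds affect unit 0 */
glBindTexture(GL_TEXTURE_2D, texA);  /* unit 0's 2D target now holds texA */
glActiveTexture(GL_TEXTURE1);        /* switch the active unit */
glBindTexture(GL_TEXTURE_2D, texB);  /* unit 1's 2D target now holds texB */
/* The shader's sampler uniforms are then set to 0 and 1 respectively. */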
Since you have multiple textures, you're likely to also have multiple sets of texcoords. Therefore you need some way to tell OpenGL/GLSL that one set of texcoords is for one texture/unit and another set is for another. Usually you can do this by just having multiple texcoord attributes in your vertex data and corresponding shader. If you're using older rendering techniques that rely on pre-defined vertex attributes in the shader, then you have to do something different, which is where glClientActiveTexture comes in. The confusion happens because it takes a texture unit as its argument, yet it does not activate a texture unit in the state machine the way glActiveTexture does; rather, it designates which pre-defined gl_MultiTexCoord attribute (in the shader) will source the texcoords described by the subsequent call to glTexCoordPointer.
In a nutshell, glActiveTexture is for texture changes, and glClientActiveTexture is for texcoord changes. Which API calls you use depends on which rendering technique you use:
glActiveTexture sets which texture unit will be affected by subsequent context state calls, such as glBindTexture. It does not affect glMultiTexCoord, glTexCoord, or glTexCoordPointer, because those have to do with texcoords, not textures.
glClientActiveTexture sets which texture unit subsequent vertex array state calls apply to, such as glTexCoordPointer. It does not affect glBindTexture, because that has to do with textures, not texcoords.
It also does not affect glMultiTexCoord which uses DSA to select a texture unit via its target parameter, nor does it affect glTexCoord, which targets texture unit 0 (GL_TEXTURE0). This pretty much just leaves glTexCoordPointer as the thing it affects, and glTexCoordPointer is deprecated in favor of glVertexAttribPointer, which doesn't need glClientActiveTexture. As you can see, glClientActiveTexture isn't the most useful, and is thusly deprecated along with the related glWhateverPointer calls.
You'll use glActiveTexture + glBindTexture with all techniques to bind each texture on a specific texture unit. You'll additionally use glClientActiveTexture + glTexCoordPointer when you're using DrawArrays/DrawElements without glVertexAttribPointer.
Multitexturing Examples
As OpenGL has evolved over the years, there are many ways in which to implement multitextured draws. Here are some of the most common:
Use immediate mode (glBegin/glEnd pairs) with GLSL shaders targeted for version 130 or older, and use glMultiTexCoord calls to specify the texcoords.
Use glBindVertexArray and glBindBuffer + glDrawArrays/glDrawElements to specify vertex data, and calls to glVertexPointer, glNormalPointer, glTexCoordPointer, etc., to specify vertex attributes during the draw routine.
Use glBindVertexArray + glDrawArrays/glDrawElements to specify vertex data, with calls to glVertexAttribPointer to specify vertex attributes during the init routine.
Here are some examples of the API flow for multitexturing with two 2D textures using each of these techniques (trimmed for brevity):
Using immediate mode:
init
draw
shutdown
vertex shader
fragment shader
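Since the original listing was trimmed, here is a hedged sketch of just the draw portion of the immediate-mode path (texA/texB are assumed texture names; init, shutdown and the shaders are omitted):

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texB);

glBegin(GL_TRIANGLES);
    /* one texcoord per unit, then the vertex */
    glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    /* ...remaining vertices... */
glEnd();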
Using DrawElements but not VertexAttribPointer:
init
draw
shutdown
vertex shader (same as immediate mode)
fragment shader (same as immediate mode)
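A hedged sketch of the draw portion for this path (the client-side arrays vertices/texcoordsA/texcoordsB/indices and the textures texA/texB are assumed):

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texB);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

glClientActiveTexture(GL_TEXTURE0);            /* texcoord set feeding unit 0 */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoordsA);

glClientActiveTexture(GL_TEXTURE1);            /* texcoord set feeding unit 1 */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoordsB);

glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);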
Using DrawElements and VertexAttribPointer:
init
draw
shutdown
vertex shader
fragment shader
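A hedged sketch of the draw portion for this path (the VAO, VBO layout, attribute locations and sampler uniform locations are assumed to have been set up during init):

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texB);

glBindVertexArray(vao);            /* attribute pointers are recorded in the VAO */
glUseProgram(program);
glUniform1i(samplerALoc, 0);       /* sampler uniforms select the texture units */
glUniform1i(samplerBLoc, 1);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);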
Misc. Notes
glMultiTexCoord populates unused coords as 0,0,0,1.
e.g. glMultiTexCoord2f == s,t,0,1
Always use GL_TEXTURE0 + i, not just i.
When using GL_TEXTURE0 + i, i must be in the range 0 to GL_MAX_TEXTURE_COORDS - 1.
GL_MAX_TEXTURE_COORDS is implementation-dependent, but must be at least two, and is at least 80 as of OpenGL 4.0.
glMultiTexCoord is supported on OpenGL 1.3+, or with ARB_multitexture
Do not call glActiveTexture or glBindTexture between glBegin and glEnd. Rather, bind all needed textures to all needed texture units before the draw call.
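As a tiny illustration of the GL_TEXTURE0 + i convention (textures[] and textureCount are assumed names):

/* Bind a texture to each of the first N units; always offset from GL_TEXTURE0. */
for (int i = 0; i < textureCount; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);    /* not glActiveTexture(i) */
    glBindTexture(GL_TEXTURE_2D, textures[i]);
}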
