Omit list of vertices in a buffer - opengl-es

I have a buffer which contains vertex information which I use for glDrawArrays. The triangles in the buffer are spaced around the screen as sprites. I would like somehow to omit drawing some of those items without having to update the entire buffer.
Is there some way I can modify the vertices so that nothing is drawn when they are encountered? I don't want to remove them completely, since that would mean updating the entire buffer again.
I'm targeting some devices that only have OpenGL ES 2.0 support.

You can use glDrawElements and provide an index buffer.
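For example, a minimal sketch (the sprite data, buffer handles and the 6-vertices-per-sprite layout are assumptions, not from the question): rebuild only a small index buffer that lists the sprites you still want drawn, and leave the big vertex buffer untouched.

    /* Hypothetical application data: sprites[], spriteCount, MAX_SPRITES,
     * spriteIndexBuffer. Assumes 6 vertices (two triangles) per sprite,
     * stored consecutively in the existing VBO. */
    GLushort indices[MAX_SPRITES * 6];
    GLsizei count = 0;
    for (int i = 0; i < spriteCount; ++i) {
        if (!sprites[i].visible)
            continue;                          /* omitted sprites get no indices */
        for (int v = 0; v < 6; ++v)
            indices[count++] = (GLushort)(i * 6 + v);
    }
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, spriteIndexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, count * sizeof(GLushort),
                 indices, GL_DYNAMIC_DRAW);    /* small upload, vertex data untouched */
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0);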

glDrawArrays has first (offset) and count parameters. You can use these to draw only the elements within the buffer that are visible, which results in multiple glDrawArrays calls for a single buffer.
Another alternative is to skip triangles using the discard command in the fragment shader. In that case you have to pass information about which triangles need to be rendered to the shader (e.g. via uniforms).
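A sketch of the first approach, under the same assumptions as above (6 consecutive vertices per sprite, a hypothetical per-sprite visibility flag): issue one glDrawArrays call per contiguous run of visible sprites.

    /* Walk the sprite list and draw each contiguous visible range. */
    int runStart = -1;
    for (int i = 0; i <= spriteCount; ++i) {
        int visible = (i < spriteCount) && sprites[i].visible;
        if (visible && runStart < 0) {
            runStart = i;                                   /* run begins */
        } else if (!visible && runStart >= 0) {
            glDrawArrays(GL_TRIANGLES, runStart * 6, (i - runStart) * 6);
            runStart = -1;                                  /* run ended, drawn */
        }
    }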

Related

OpenGL ES 3.x How to (performantly) render blended triangles front-to-back with alpha-blending and early-reject occluded fragments?

I recently found out that one can render alpha-blended primitives correctly not just back-to-front but also front-to-back (http://hacksoflife.blogspot.com/2010/02/alpha-blending-back-to-front-front-to.html) by using GL_ONE_MINUS_DST_ALPHA, GL_ONE, premultiplying the fragment's alpha in the fragment shader and clearing destination alpha to black before rendering.
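For reference, the state setup for that technique looks roughly like this (a sketch, not from the linked article verbatim; it assumes the fragment shader outputs premultiplied alpha and the framebuffer has an alpha channel):

    /* Clear destination alpha to 0 ("black") before rendering. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    /* Front-to-back "under" blending: dst = src * (1 - dst.a) + dst */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);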
It occurred to me that it would then be great if one could combine this with EITHER early-z rejection OR some kind of early "destination-alpha testing" in order to discard fragments that won't contribute to the final pixel color.
When rendering with front-to-back alpha-blending, a fragment can be skipped if the destination-alpha at this location already contains the value 1.0.
I did prototype-implement that by using GL_EXT_shader_framebuffer_fetch to test the destination alpha at the start of the pixel shader and then manually discard the fragment if the value is above a certain threshold. That works but it made things actually slower on my test hardware (Snapdragon XR2) - so I wonder:
whether it's somehow possible to not even have the fragment shader execute if destination alpha is already above a certain threshold?
alternatively, if it would be possible to only write to the depth buffer for fragments that are completely opaque and leave the current depth buffer value unchanged for all fragments that have an alpha value of less than 1 (but still depth-test every fragment), that should allow the hardware to use early-z rejection for occluded fragments. So,
Is this possible somehow (i.e. use depth testing, but update the depth buffer value only for opaque fragments and leave it unchanged for others)?
Bottom line: this would allow reducing overdraw of alpha-blended sprites to only those fragments that contribute to the final pixel color, and I wonder whether there is a performant way of doing this.
For number 2, I think you could modify gl_FragDepth in the fragment shader to achieve something close, but doing so would disable early-z rejection so wouldn't really help.
I think one viable way to reduce overdraw would be to create a tool to generate a mesh for each sprite which aims to cover a decent proportion of the opaque part of the sprite without using too many verts. I imagine for a typical sprite, even just a well placed quad could cover 80%+.
You'd render the generated opaque geometry of your sprites with depth write enabled, and do a second pass the ordinary way with depth testing enabled to cover the transparent parts.
You would massively reduce overdraw, but significantly increase the complexity of your code and the number of verts rendered. You would double your draw calls, but if you're atlasing and using texture arrays, you might only be going from 1 to 2 draw calls, which is fine. I've never tried it, so I can't say whether it's worth all the effort involved.
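As a rough sketch of the two passes (the draw helpers are hypothetical placeholders):

    /* Pass 1: generated opaque "inner" meshes, depth test + depth write, no blending. */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    drawOpaqueSpriteMeshes();      /* hypothetical: geometry covering opaque pixels */

    /* Pass 2: full sprite quads with blending, depth-tested against pass 1.
     * No depth writes, and drawn back-to-front as usual for correct blending. */
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);   /* premultiplied alpha */
    drawFullSpriteQuads();                         /* hypothetical */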

Minimum steps to implement depth-only pass

I have an existing OpenGL ES 3.1 application that renders a scene to an FBO with color and depth/stencil attachments. It uses the usual methods for drawing (glBindBuffer, glDrawArrays, glBlend*, glStencil*, etc.). My task is now to create a depth-only pass that fills the depth attachment with the same values as the main pass.
My question is: what is the minimum number of steps necessary to achieve this and avoid the GPU doing superfluous work (unnecessary shader invocations etc.)? Is deactivating the color attachment enough, or do I also have to set null shaders, disable blending, etc.?
I assume you need this before the main pass runs, otherwise you would just keep the main pass depth.
Preflight
Create specialized buffers which contain only the mesh data needed to compute position (which are deinterleaved from all non-position data).
Create specialized vertex shaders (which compute only the output position).
Link programs with the simplest valid fragment shader.
Rendering
1. Render the depth-only pass using the specialized buffers and shaders, masking out all color writes.
2. Render the main pass with the full buffers and shaders.
Options
At rendering step (2) above it might be beneficial to load the depth-only pass results as the starting depth for the main pass. This gives you better early ZS (depth/stencil) test accuracy, at the expense of reading back the depth values. Most mobile GPUs have hidden surface removal, so this isn't always going to be a net gain - it depends on your content, target GPU, and how good your front-to-back draw order is.
You probably want to use the specialized buffers (position data interleaved in one buffer region, non-position interleaved in a second) for the main draw, as many GPUs will optimize out the non-position calculations if the primitive is culled.
The specialized buffers and optimized shaders can also be used for shadow mapping, and other such depth-only techniques.
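Put together, the depth-only pass state might look roughly like this (a sketch; the FBO, program and buffer names are placeholders, not part of the question):

    /* Depth-only pass: color writes masked out, depth test and write on,
     * minimal position-only program bound. */
    glBindFramebuffer(GL_FRAMEBUFFER, depthOnlyFbo);   /* depth attachment only */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    glUseProgram(depthOnlyProgram);     /* position-only VS, simplest valid FS */
    glBindBuffer(GL_ARRAY_BUFFER, positionOnlyVbo);
    /* ... set the position attribute and issue the same draw calls as the main pass ... */

    /* Restore color writes before the main pass. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);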

OpenGL batching and disabling objects

I'm combining vertex data that has the same format into a single VBO, assigning vertex attributes based on the material these objects use, and rendering them with a single glDrawArrays() call.
It is all working out great until I have to disable some objects (say object1) from being rendered at runtime. Is this even possible, assuming I've already set up all the vertex attributes and stuff? Would it be better not to use batching at all, and have vbo/vao per object (then, if an object's disabled, just don't call glDraw*() on it) ?
Batching requires putting all of your data in one buffer, but batching is not limited to that. Batching is about reducing the number of draw calls. Putting your data in one buffer is necessary for that, but not sufficient.
Putting all of your vertex data in one buffer alone has performance advantages, relative to having to switch buffers and vertex formats. You don't need to go all the way to batching everything into a single draw call to improve performance over using a buffer for each individual object.
In OpenGL, as discussed in this video, the primary cost of multiple draw calls isn't the draw call itself. It's the state changes you usually do between draw calls.
You've put your vertex data in the same buffer, and you must have managed to eliminate state changes between objects if you could render everything with one draw call. At that point, you've already gained most of the performance you're going to. Accept that and move on to other lower-hanging fruit.

WebGL: Framebuffers and textures with one, one-byte channel?

I'm generating blurred drop shadows in WebGL by drawing the object to be blurred onto an off-screen framebuffer/texture, then applying a few passes of a filter to it (back and forth between two off-screen framebuffers), then copying the result to the final output.
However, I'm just dropping the RGB channels, overwriting them with the desired color of the drop shadow (usually black) while maintaining the alpha channel. It seems like I could probably get better performance by just having my off-screen framebuffers be a single (alpha) channel.
Is there a way to do that, and would it actually help?
Also, is there a better way to apply multiple passes of a filter than just alternating between two frame buffers and using the previous frame buffer's bound texture as the input?
Assuming WebGL follows GLES, then per the spec (page 91):
The name of the color buffer of an application-created framebuffer object
is COLOR_ATTACHMENT0 ... Color buffers consist of R, G, B, and,
optionally, A unsigned integer values.
So you can't attach only to A, or only to any single colour channel.
Options to explore:
Use colorMask to disable writing to R, G and B (see the sketch after this list). Depending on the data layout your GPU uses internally, that could effectively achieve exactly what you want, or it could have no effect whatsoever.
Is there a way you could render to the depth channel instead of to the alpha channel?
Reducing memory bandwidth is often helpful but if it's not a bottleneck then you could end up prematurely optimising.
To avoid excessive per-frame ping-ponging you'd normally try to rework your shader so that it applies the effect of all the stages in one pass. Otherwise, consider whether there's a better-than-linear way to combine multiple passes: instead of knowing only how to get from stage n to stage n+1, can you go from stage n to stage 2n? Or even just n+2?
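A sketch of the colorMask idea, written in GLES-style C (WebGL's colorMask call takes the same four booleans); the framebuffer handle and pass functions are placeholders:

    /* Keep an RGBA framebuffer but mask out the RGB channels while running
     * the blur passes, so only alpha is written. Whether this saves anything
     * depends on the GPU's internal data layout. */
    glBindFramebuffer(GL_FRAMEBUFFER, blurFbo);            /* hypothetical FBO */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    runBlurPasses();                                        /* hypothetical */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);        /* restore for final draw */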

Modify Buffer in OpenGL

I have modifiable terrain which is stored in a vertex buffer. Because of its large number of vertices, I do not want to upload all vertices again every time the terrain is modified. What I do right now is split the terrain into smaller chunks, so that I only have to recreate the buffer for the area containing the modification.
But how can I just add or remove some vertices of an existing buffer?
You can either use glBufferSubData as datenwolf said, or if you are planning on making a lot of modifications and accessing randomly the data, you may want to map the buffer into client memory using glMapBuffer and later unmap it with glUnmapBuffer. (Then, based on the access specifiers you chose, you can edit the data as a C array)
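A sketch of the map/unmap approach using glMapBufferRange (ES 3.0+ or desktop GL); the buffer handle, byte offsets and float layout are placeholders for your own data:

    glBindBuffer(GL_ARRAY_BUFFER, terrainVbo);              /* hypothetical handle */
    float *verts = (float *)glMapBufferRange(GL_ARRAY_BUFFER,
                                             chunkOffsetBytes, chunkSizeBytes,
                                             GL_MAP_WRITE_BIT);
    if (verts) {
        verts[1] += 1.0f;              /* e.g. raise the y component of a vertex */
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }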
You can change data in an existing buffer using glBufferSubData
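For example (a sketch; the buffer handle, offset and source array are placeholders):

    /* Re-upload only the modified chunk's vertex data in place. */
    glBindBuffer(GL_ARRAY_BUFFER, terrainVbo);
    glBufferSubData(GL_ARRAY_BUFFER, chunkOffsetBytes,
                    chunkFloatCount * sizeof(float), chunkVertices);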
