OpenGL ES and overlapping triangles with VBO - opengl-es

Some background:
I am very new to OGL. My application concerns itself with 2D only. All objects are normal to the viewing direction, and I use orthographic projection. I find that the performance of the system is limited by the number of draw* calls, which indicates that I need to batch more.
There is only one object that I need to draw, but it consists of thousands of triangles that potentially overlap. In my particular application I have the ability to pre-compute the geometry, so I order the triangles back to front since they have varying degrees of transparency. The only per-vertex attribute is the color (including alpha), which is used in the fragment program.
What I've done:
All the primitives are triangles, and I assign the 3 vertices of each triangle the same color, since the color is constant across a face. I put all of the vertices and colors, for all triangles, into a single VBO, and the index buffer (16-bit indices; there aren't that many vertices) orders the triangles back to front. I issue a single draw call and use alpha blending (SRC_ALPHA, ONE_MINUS_SRC_ALPHA).
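For illustration, a minimal WebGL sketch of this setup (WebGL mirrors OpenGL ES 2.0; the buffer and count names here are hypothetical):

    // Depth and stencil tests are off; back-to-front ordering plus
    // blending does all of the compositing work.
    gl.disable(gl.DEPTH_TEST);
    gl.disable(gl.STENCIL_TEST);
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

    gl.bindBuffer(gl.ARRAY_BUFFER, vertexColorVbo);           // positions + RGBA per vertex
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, backToFrontIbo);   // 16-bit indices, back to front
    gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);  // one draw call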
Result:
I see that the result is correctly blended and rendered on the only machine that I possess and test on; I have not tried it on others. I've searched for quite some time, but in vain, for a definitive answer. BTW, the only reference I found is in the VBO extension spec, where there is a mention of a "sequence of primitives", but it does not address what happens when the primitives overlap.
Question:
Is this the guaranteed behavior? That is, will the result be the same as issuing multiple calls within glBegin(...) and glEnd(...) in immediate mode (where ordering is guaranteed by the standard)?
Note: Depth buffer and stencil buffer are turned off.

It is guaranteed by the OpenGL specification that primitives will be rendered in the order provided. Each primitive pulled from a glDraw* command will be rendered in the order specified by its component vertices.
So yes: if you put the triangles in an order, that's the order you'll get them out when you render them.

Related

What is the most efficient way to create a reservoir in Three.js?

I am creating a 3D reservoir model which looks like this.
It's made of hundreds of thousands of cells with outlines. The outlines are needed for all cells underneath, because an IJK filter is used to hide cells on any level and thus show the rest. Once the model is rendered, it shouldn't need to be updated in terms of position or scale.
That's enough about the background. The approach I'm using is to create one large geometry that stores all vertices across the reservoir in one triangle strip. It also stores an IJK index for each cell, so the IJK filter works at the shader level. This creates the mesh part. Then I create another object to draw all outlines using one THREE.LineSegments.
The approach works pretty well for a small number of cells, but for large data sets the frame rate drops.
I'm proposing another way of doing this with barycentric outlines and instanced drawing. Barycentric outline drawing removes the extra LineSegments object, since it draws the outline in the fragment shader. However, it comes with drawbacks. Because WebGL lacks geometry shaders, I have to use full triangles rather than a triangle strip to store barycentric coordinates for each vertex. I'm OK with this extra memory usage if instanced drawing can boost the performance. That is to say, I draw a cube with an outline, and I create as many instances as I need and put them in the right positions.
I am wondering whether this approach is indeed going to increase the performance, at least in theory. Any thoughts are welcome!
OK, I think I am going to answer this question myself.
I implemented the change based on the above ideas, and it works pretty well compared to the original version.
Let's put the result first: this approach has no problem rendering hundreds of thousands of cells at a reasonable frame rate. My demo contains 400,000 cells, with the frame rate at 50 fps in the worst case, running on my Nvidia 1050 Ti card and a 4K monitor. For comparison, if I draw 400,000 cells in the previous version, the frame rate could drop to 10 fps.
This means using instanced drawing for a large object is faster than composing a single large geometry. There is a rendering win, too: the instanced cube is rendered single-sided, while the triangle-stripped cube is two-sided. Once I can draw a single unit cube with the ideal outline, I can transform it to any place, in "any" shape, in the vertex shader. But of course instanced drawing comes with its restrictions: each cell doesn't have to be the same shape, but all cells must have the same number of vertices, faces, etc.; and I lose the ability to change individual vertex colors.
As for memory usage, the new approach actually uses less. I provide positions for 8 vertices, instead of 14, for each cell. Even though the unit cube has 36 vertices, I can use each vertex's unit position (0/1, 0/1, 0/1) as an index into the cell's corners, so I only need to provide 8 real positions per instance.
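A rough sketch of this kind of instanced setup (hypothetical names; assumes a recent Three.js, with cellCount, the offset data, and scene coming from elsewhere, and with the per-cell transform reduced to a simple offset for brevity):

    // One unit cube, drawn cellCount times; a per-instance attribute
    // moves each copy to its cell in the vertex shader.
    const base = new THREE.BoxGeometry(1, 1, 1);
    const geom = new THREE.InstancedBufferGeometry();
    geom.index = base.index;
    geom.attributes.position = base.attributes.position;

    const offsets = new Float32Array(cellCount * 3);  // filled from the cell centres
    geom.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));

    const material = new THREE.ShaderMaterial({
      vertexShader: `
        attribute vec3 offset;
        void main() {
          // "position" is the unit-cube vertex; "offset" places the instance.
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
        }`,
      fragmentShader: `
        void main() { gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0); }`,
    });
    scene.add(new THREE.Mesh(geom, material));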
Hope this helps people who want to implement the same optimization.

Using multiple primitives in WebGL at the same time

I am trying to develop a web application that visualizes a Finite Element mesh. In order to do so, I am using WebGL. Right now I have a page with all the code necessary to draw the mesh in the viewport using triangles as primitives (each quad element of the mesh was split into two triangles for drawing). The problem is that, when using triangles, the whole piece looks "continuous" and you can't see the separation between triangles. What I would like to achieve is to add lines between the nodes so that, around each quad element (formed by two triangles), we have these lines in black, and so the mesh can actually be seen.
I was able to define the lines in my page, but since a draw call can use only one type of primitive, if I add the code for the line buffers and bind them, it just shows the lines, not the elements (as they were the last buffers bound).
The closest solution I have found is using multiple shaders, managed through multiple programs, but that would only let me either plot the geometry with triangles or draw just the lines, depending on which program is currently selected.
Could any of you help me with how to approach this issue? I have seen a Windows application that shows FE meshes using OpenGL, and it is able to mix triangles with points and lines, apart from using different layers, illumination, etc. So I am aware that this may be complicated, but I assume that if it is somehow possible with OpenGL, it should be possible with WebGL as well.
If you provide a solution, I would appreciate it a lot if it contained some code as an example, for instance drawing a single triangle but including three black lines at its borders and maybe three points at the vertices.
setup()
{
    <your current code here>

    // Additional step: unbind the previous textures, then upload and bind
    // a 1x1 black pixel as a texture. Let this texture object be borderID.
}

drawLoop()
{
    // 1. Unbind the previous textures, bind your normal textures, and draw
    //    the mesh as in your current setup. This fills the entire area with
    //    the different colors, without borders (the current case).

    // 2. Bind the borderID texture and draw the same vertices again, except
    //    this time use LINE_LOOP instead of TRIANGLES. The lines pick up the
    //    black texture and appear as borders on top of the previously drawn
    //    colors of each triangle. You can have something like below:

    if (currDrawMode == 0)
        context3dStore.bindTexture(context3dStore.TEXTURE_2D, meshTextureObj[bindId]);
    else
        context3dStore.bindTexture(context3dStore.TEXTURE_2D, borderTexture1pixObj[bindId]);

    context3dStore.drawElements((currDrawMode == 0) ? context3dStore.TRIANGLES : context3dStore.LINE_LOOP,
                                indicesCount[bindId], context3dStore.UNSIGNED_SHORT, 0);

    // where currDrawMode toggles between drawing the border and drawing the mesh fill.
}

Since the line texture appears as a border over the flat colors you had earlier, this should solve your need.
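For completeness, here is a minimal self-contained sketch along these lines, as requested in the question (hypothetical names throughout; it assumes a canvas element with id "c", and it swaps the 1x1 black texture for a color uniform, which achieves the same one-program switching):

    // Minimal example: one triangle filled, then outlined in black, then
    // its three vertices drawn as black points, all with a single program.
    const gl = document.getElementById('c').getContext('webgl');

    const vsSrc =
      'attribute vec2 a_position;' +
      'void main() { gl_Position = vec4(a_position, 0.0, 1.0); gl_PointSize = 6.0; }';
    const fsSrc =
      'precision mediump float;' +
      'uniform vec4 u_color;' +
      'void main() { gl_FragColor = u_color; }';

    function compile(type, src) {
      const s = gl.createShader(type);
      gl.shaderSource(s, src);
      gl.compileShader(s);
      return s;
    }

    const prog = gl.createProgram();
    gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSrc));
    gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSrc));
    gl.linkProgram(prog);
    gl.useProgram(prog);

    // The same three vertices feed all three draw calls.
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER,
      new Float32Array([-0.5, -0.5, 0.5, -0.5, 0.0, 0.5]), gl.STATIC_DRAW);

    const posLoc = gl.getAttribLocation(prog, 'a_position');
    gl.enableVertexAttribArray(posLoc);
    gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

    const uColor = gl.getUniformLocation(prog, 'u_color');

    gl.uniform4f(uColor, 0.2, 0.6, 0.9, 1.0);  // fill
    gl.drawArrays(gl.TRIANGLES, 0, 3);

    gl.uniform4f(uColor, 0.0, 0.0, 0.0, 1.0);  // black border
    gl.drawArrays(gl.LINE_LOOP, 0, 3);

    gl.drawArrays(gl.POINTS, 0, 3);            // black vertex points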

Is drawing outside the viewport in OpenGL expensive?

I have several thousand quads to draw, some of which might fall entirely outside the viewport. I could write code to detect which quads fall wholly outside the viewport and ask OpenGL to draw only those that will be at least partially visible. Alternatively, I could simply have OpenGL draw all of the quads, regardless of whether they intersect the viewport.
I don't have enough experience with OpenGL to know which of these is obviously better (or whether OpenGL offers some quick viewport-intersection test I can use). Are draws outside the viewport close to being no-ops, or are they expensive enough that I should try to avoid them?
It depends on your circumstances.
Drawing is best done in batches, preferably batches that are static in structure (i.e., each batch is drawn in its entirety). So you shouldn't be culling down at the level of individual quads, but doing some culling of large groups of quads is not unwelcome.
The primary performance you'll lose is vertex transform (aka: your vertex shader). A vertex shader has to be run on every vertex you provide, regardless of anything else. However, hardware will discard triangles that are trivially outside of the viewport, so you won't soak up any fillrate or other performance.
However, that doesn't mean everything is fine as long as your vertex T&L is cheap. Rendering large blocks of triangles that aren't visible may very well starve the rasterizer, because all of the triangles are being culled. That is, if you draw a lot of stuff that gets culled by being off screen, the fillrate that you might have used on actually visible triangles may be lost.
So it's not a good idea to just hurl geometry at the GPU willy-nilly.
In any case, if you're doing 2D rendering, coarse culling of discrete groups of quads is really all you need. You could divide your tilemap into screen-sized portions, and draw up to four of these based on the position of the camera.
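A minimal sketch of that coarse culling, assuming the quads are pre-grouped into chunks with axis-aligned bounds (the chunk and camera fields here are hypothetical):

    // Skip whole chunks of quads whose bounding box falls entirely
    // outside the camera's view rectangle; draw the rest.
    function drawVisibleChunks(gl, posLoc, chunks, cam) {
      for (const chunk of chunks) {
        if (chunk.maxX < cam.left || chunk.minX > cam.right ||
            chunk.maxY < cam.bottom || chunk.minY > cam.top) {
          continue;  // wholly off screen: never reaches the vertex shader
        }
        gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vbo);
        gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
        gl.drawArrays(gl.TRIANGLES, 0, chunk.vertexCount);
      }
    }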

Vertex buffer objects and glutsolidsphere

I have to draw a great collection of spheres in a 3D physical simulation of a "spring-mass" like system.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Is there any method to draw OpenGL spheres in a way faster than glutSolidSphere?
Spheres are self-similar; every sphere is just a scaled version of any other sphere. I see no need to regenerate any geometry. Indeed, I see no need to have more than one sphere at all.
It's simply a matter of providing the proper scaling matrix. I would suggest a sphere of radius one centered at the origin for your display list or buffer object mesh. Then you can just transform it to different locations, using a scale to set the new radius.
I would like to know an efficient method to draw spheres without having to compile a display list at every step of my simulation (each step may vary from milliseconds to seconds, depending on the number of bodies involved in the computation).
Why are you generating a display list at all if the geometry you put into it is dynamic? Display lists are meant for static geometry that never, or only seldom, changes.
I've read that vertex-buffer objects are an efficient method to draw objects which need also to be sometimes updated.
Actually, VBOs are most efficient with static geometry as well. In general you want to keep the number of actual geometry updates as low as possible. In your case the only things that update are the positions (and maybe the sizes) of the spheres. This is a prime example for instanced drawing. However, it also works well to update only a uniform or the transformation matrix and then issue the call that draws a sphere.
The idea of Vertex Arrays and VBOs is, that you draw a whole batch of geometry with a single call. A sphere would be such a batch.
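A minimal sketch of the uniform variant (hypothetical names; assumes the unit-sphere VBO/IBO and a program whose vertex shader computes a_position * u_radius + u_center are already bound):

    // Reuse one unit-sphere mesh for every body: only two small uniforms
    // change per sphere, and the geometry is never re-uploaded.
    function drawSpheres(gl, uCenter, uRadius, indexCount, spheres) {
      for (const s of spheres) {
        gl.uniform3f(uCenter, s.x, s.y, s.z);  // translate to the body's position
        gl.uniform1f(uRadius, s.radius);       // uniform scale sets the radius
        gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
      }
    }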

Count of rendered polygons

Once upon a time, I dabbled in programming Homebrew for the Nintendo DS. During testing of the 3D hardware, you could pull a count of currently rendered polygons from a hardware register. I was doing this to confirm how many, approximately, of a certain model would be rendered at various angles because there was a max of 2048 triangles allowed.
I'm hoping this isn't a completely stupid question... but here it goes. Is there any way to get the number of polygons that are actually being rendered each frame (not omitted by depth buffering) in OpenGL? Specifically, OpenGL ES 1.1?
You could just run each triangle through an algorithm that determines whether it is inside the view frustum by treating the viewing area as four planes. Then it's just a matter of checking which side of each plane a triangle is on, and only counting triangles that are on the correct side of all four planes. This wouldn't be good for rendering speed, but it would give you an accurate count of how many polygons are being rendered for each viewing angle. The Graphics Gems website contains a lot of good source code that can help you with the math portion of this if you need it; it hosts the source code for a series of five books of graphics algorithms, such as ray-triangle intersection.
Edit:
I didn't notice your comment about depth buffering; the above description counts all triangles in the viewing window. You could just add two more planes at your depth-buffer distances (the near and far planes) and use those to further filter out visible polygons.
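A minimal sketch of that counting test, assuming the planes are given as (a, b, c, d) coefficients with normals pointing into the view volume (a hypothetical representation):

    // A triangle is culled only when all three vertices lie on the outside
    // of the same plane; everything else counts as rendered. The test is
    // conservative, which suits an approximate count.
    function countRenderedTriangles(triangles, planes) {
      let count = 0;
      for (const tri of triangles) {  // tri = [[x, y, z], [x, y, z], [x, y, z]]
        const culled = planes.some(([a, b, c, d]) =>
          tri.every(([x, y, z]) => a * x + b * y + c * z + d < 0));
        if (!culled) count++;
      }
      return count;
    }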
