UML diagram for OpenGL ES 2.0 state?

Can anybody provide a UML diagram describing the OpenGL ES 2.0 state machine?
Ideally, such a diagram would describe things such as: textures have a width, height, type, internal format, etc.; programs have attached shaders, may or may not be linked, have uniforms, etc.; and so on.
The reason I would be very interested is that I often find myself wondering things such as:
Are texture parameters (set with glTexParameter) associated with the currently bound texture object, or with the texture unit?
Is the set of enabled generic vertex attribute arrays part of the currently bound VBO? Part of the current program? Or global context state?
Having a UML diagram of OpenGL would be tremendously useful for answering these questions at a glance, rather than having to pore over obscene amounts of documentation to figure out how all the different components fit together.
I realize this is a long shot, because I imagine it would be a tremendous effort to put together. Still, I think it would be tremendously useful, and even a partial answer could help a lot. Likewise, a diagram of some version of OpenGL other than the one I'm targeting (ES 2.0) would be useful.

The website for the OpenGL Insights book provides a UML state diagram for the whole rendering pipeline for both OpenGL 4.2 and OpenGL ES 2.0: http://openglinsights.com/pipeline.html
This diagram roughly shows how the stages interact, which GL objects are involved in each stage, and which chapters of the specification describe those objects.
What the diagram doesn't show is the state of the objects involved, but you can find that in the specification itself: chapter 6.2 of the OpenGL ES 2.0 specification lists all objects and aspects together with their state and how to access it.
So, if you annotate the state diagram with the table numbers from the specification, you more or less have everything you want.
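For what it's worth, both example questions above have concrete answers that can be demonstrated in code. A minimal C++ sketch, assuming a current ES 2.0 context (the function name is just for illustration): texture parameters live in the texture object, not the texture unit, and enabled vertex attribute arrays are global context state.

#include <GLES2/gl2.h>

// Sketch: demonstrates which object owns which piece of state.
// Assumes a current OpenGL ES 2.0 context.
void stateOwnershipDemo(void)
{
    GLuint tex;
    glGenTextures(1, &tex);

    // Texture parameters are stored in the texture object itself;
    // rebinding the texture to another unit carries them along.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex);
    // GL_TEXTURE_MIN_FILTER of tex is still GL_LINEAR here.

    // Enabled vertex attribute arrays belong to neither the bound
    // VBO nor the current program: in core ES 2.0 (which has no
    // vertex array objects) they are global context state.
    glEnableVertexAttribArray(0);
}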

Is it true that OpenGL ES 3.1 is slower than OpenGL ES 2.0?

I found an article here, “OpenGL ES versus Vulkan, who is the performance king?”, which mentions:
"The problem with OpenGL ES 3.1 is that while the graphics look immensely better than OpenGL ES 2.0, the performance hit is so great that games are basically not playable, looking at the image above comparing OpenGL ES 2.0 and 3.1 on my Nexus 6P shows that the exact same scene runs at a third of the frames per second compared to OpenGL ES 2.0. This is where Vulkan comes in, offering at least the same in graphics quality, but with improved performance. So how does Vulkan do?"
I can't imagine that 3.1 is slower than 2.0 with the same scene. Did the author mix up the images? The image on the right seems to have global illumination (GI).
Did the author mix up the images?
To me it seems that the author of that article is just dumb.
Just a quote from that article:
Vulkan still will not perform as well as the lower graphics capable OpenGL ES 2.0, as Vulkan displays a lot more on screen and the scenes it can render are a lot more complex
It's like saying a Ferrari will not perform as well as a bicycle because you can ride 10 meters on a bicycle in 10 seconds, but can't drive 100 kilometers in a Ferrari in the same amount of time.
Now, about the image from that article:
It's not the same scene in OpenGL ES 3.1 and 2.0: I can see more realistic lighting with reflections, as well as smoother-looking walls, in the ES 3.1 screenshot.
To compare things like that, you need to make sure that the resulting images are at least the same in both cases. If you render the scene without postprocessing effects in one case and with them in the other, it's not a fair comparison. Likewise, if you render the scene with, say, a deferred renderer in one case and a forward renderer in the other, it's again not a fair comparison, even if you get the same image.
I also took a look at this article while researching the same question myself. Note that the author used Unreal to do the comparisons. Enabling different options in Unreal, such as the GL ES 3 target, doesn't only change the GL version; it also adds more realism that the engine assumes GL ES 3 can handle.

Object tracking with Project Tango

As far as I know, the main features of the Project Tango SDK are:
motion tracking
area learning
depth perception
But what about object recognition & tracking?
I didn't see anything about that in the SDK, though I assume that Tango hardware would be really good at it (compared with traditional smartphones). Why is it missing?
Update 2017/06/05
A marker detection API has been introduced in the Hopak release.
There are already good libraries for object tracking in 2D images, and the additional features of Project Tango would likely add only a marginal improvement in performance (of existing functions) in exchange for major overhauls of those libraries to support a small set of evolving hardware.
How do you think Project Tango could improve on existing object recognition & tracking?
With a 3D model of the object to be tracked, and knowledge of the motion and pose of the camera, one could predict what the next image of the tracked object 'should' look like. If the next image differs from the prediction, it could be assumed that the tracked object has moved from its prior position, and the actual new 3D image could indicate the tracked object's motion. That certainly has uses in navigating a live environment.
But that sounds like the sort of solution a self driving car might use. And that would be a valuable piece of tech worth keeping away from competitors despite its value to the community.
This is all just speculation. I have no first hand knowledge.
I'm not really sure what you're expecting for an "open question", but I can tell you one common way that people exploit Tango's capabilities to aid object recognition & tracking: Tango's point cloud, image callbacks, and pose data can be used as input for a library like PCL (http://pointclouds.org/).
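As a rough sketch of that pipeline, here is how the XYZ triplets from a depth callback might be fed into a PCL cloud (the raw-buffer layout and the function name are assumptions for illustration; the actual Tango callback hands you its own point-cloud struct):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Converts a raw buffer of XYZ float triplets (as a depth callback
// might deliver them) into an unorganized PCL cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr
toPclCloud(const float* xyz, int numPoints)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(
        new pcl::PointCloud<pcl::PointXYZ>());
    cloud->points.reserve(numPoints);
    for (int i = 0; i < numPoints; ++i) {
        pcl::PointXYZ p;
        p.x = xyz[3 * i + 0];
        p.y = xyz[3 * i + 1];
        p.z = xyz[3 * i + 2];
        cloud->points.push_back(p);
    }
    cloud->width = numPoints;
    cloud->height = 1;  // one row = unorganized cloud
    return cloud;
}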
Simply browsing the documentation & tutorials will give you a good idea of what's possible and how it can be achieved.
http://pointclouds.org/documentation/
Beyond that, you might browse the pcl-users mail archives:
http://www.pcl-users.org/

Why no access to texture LOD in fragment shaders?

I'm trying to come to terms with the level of detail of a mipmapped texture in an OpenGL ES 2.0 fragment shader.
According to this answer, it is not possible to use the bias parameter of texture2D to access a specific level of detail in the fragment shader. According to this post, the level of detail is instead computed automatically from the parallel execution of adjacent fragments. I'll have to trust that that's how things work.
What I cannot understand is the why of it. Why isn't it possible to access a specific level of detail, when doing so should be very simple indeed? Why does one have to rely on complicated fixed functionality instead?
To me, this seems very counter-intuitive. After all, everything OpenGL-related is evolving away from fixed functionality, and OpenGL ES is intended to cover a broader range of hardware than OpenGL, so it supports only simpler versions of many things. I would perfectly understand if the developers of the specification had decided that the LOD parameter is mandatory (perhaps defaulting to zero), and that it's up to the shader programmer to work out the appropriate LOD in whatever way he deems appropriate. Adding a function which does that computation automagically seems like something I'd have expected in desktop OpenGL.
Not providing direct access to a specific level doesn't make any sense to me at all, no matter how I look at it. Particularly since that bias parameter indicates that we are indeed allowed to tweak the level of detail, so apparently this is not about fetching data from memory only for a single level for a bunch of fragments processed in parallel. I can't think of any other reason.
Of course, why questions tend to attract opinions. But since opinion-based answers are not accepted on Stack Overflow, please post your opinions as comments only. Answers, on the other hand, should be based on verifiable facts, like statements by someone with definite knowledge. If there are any records of the developers discussing this fact, that would be perfect. If there is a blog post by someone inside discussing this issue, that would still be very good.
Since Stack Overflow questions should deal with real programming problems, one might argue that asking for the reason is a bad question. Getting an answer won't make that explicit lod access suddenly appear, and therefore won't help me solve my immediate problems. But I feel that the reason here might be due to some important aspect of how OpenGL ES works which I haven't grasped so far. If that is the case, then understanding the motivation behind this one decision will help me and others to better understand OpenGL ES as a whole, and therefore make better use of it in their programs, in terms of performance, exactness, portability and so on. Therefore I might have stated this question as “what am I missing?”, which feels like a very real programming problem to me at the moment.
texture2DLod (...) serves a very important purpose in vertex shader texture lookups, which is not necessary in fragment shaders.
When a texture lookup occurs in a fragment shader, the fragment shader has access to per-attribute gradients (partial derivatives such as dFdx (...) and dFdy (...)) for the primitive currently being shaded, and it uses this information to determine which LOD to fetch neighboring texels from during filtering.
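For intuition, the isotropic LOD selection the hardware performs boils down to roughly the following C++ sketch (function and parameter names are ours; the inputs are the screen-space derivatives of the texture coordinates scaled to texel units):

#include <algorithm>
#include <cmath>

// rho is the larger of the two screen-space footprints of the texel
// coordinate; lambda = log2(rho) is the mip level to sample from
// (before clamping to the available range).
float computeLod(float dudx, float dvdx, float dudy, float dvdy)
{
    float rho = std::max(std::sqrt(dudx * dudx + dvdx * dvdx),
                         std::sqrt(dudy * dudy + dvdy * dvdy));
    return std::log2(rho);
}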
At the time vertex shaders run, no information about primitives is known and there is no such gradient. The only way to utilize mipmaps in a vertex shader is to explicitly fetch a specific LOD, and that is why that function was introduced.
Desktop OpenGL has solved this problem a little more intelligently, by offering a variant of texture lookup, usable from any stage including vertex shaders, that takes a gradient as one of its inputs. Said function is called textureGrad (...), and it was introduced in GLSL 1.30. ESSL 1.0 is derived from GLSL 1.20, and does not benefit from all the same basic hardware functionality.
ES 3.0 does not have this limitation, and neither does desktop GL 3.0. When explicit LOD lookups were introduced into desktop GL (3.0), they could be done from any shader stage. It may just be an oversight, or there could be some fundamental hardware limitation (recall that older GPUs used to have specialized vertex and pixel shader hardware, and embedded GPUs are never on the cutting edge of GPU design).
Whatever the original reason for this limitation, it has been rectified in a later OpenGL ES 2.0 extension and is core in OpenGL ES 3.0. Chances are pretty good that a modern GL ES 2.0 implementation will actually support explicit LOD lookups in the fragment shader given the following extension:
GL_EXT_shader_texture_lod
Code showing an explicit LOD lookup in an ESSL 1.00 fragment shader:
#version 100
#extension GL_EXT_shader_texture_lod : require
precision mediump float;

// The coordinate must arrive as a varying; attributes are only
// allowed in vertex shaders.
varying vec2 tex_st;
uniform sampler2D sampler;

void main (void)
{
    // Note the EXT suffix; that is very important in ESSL 1.00.
    // The LOD argument must be a float (0.0, not 0), since ESSL
    // 1.00 has no implicit int-to-float conversion.
    gl_FragColor = texture2DLodEXT(sampler, tex_st, 0.0);
}
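And a minimal C++ sketch of testing for that extension at runtime, assuming a current ES 2.0 context (the function name is ours):

#include <GLES2/gl2.h>
#include <cstring>

// Returns true if the current context advertises
// GL_EXT_shader_texture_lod.
bool hasShaderTextureLod()
{
    const char* exts =
        reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr &&
           std::strstr(exts, "GL_EXT_shader_texture_lod") != nullptr;
}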

Will an Octree improve performance if my scene contains less than 200K vertices?

I am finally making the move to OpenGL ES 2.0 and am taking advantage of a VBO to load all of the scene data onto the graphics card's memory. However, my scene is only around 200,000 vertices in size (and I know it depends somewhat on hardware). Does anyone think an octree would make sense in this instance? (Incidentally, because of the viewpoint, at least 60% of the scene is visible most of the time.) Clearly I am trying to avoid having to implement an octree at such an early stage of my GLSL coding life!
There is no need to worry about optimization and performance if the app you are coding is for learning purposes only. But given your question, apparently you intend to make a commercial app.
Using a VBO alone will not solve your app's performance problems, especially as you mentioned that you mean it to run on mobile devices. OpenGL ES has a drawing mode called GL_TRIANGLE_STRIP that reduces the number of indices you submit, which is particularly worthwhile for complex geometry.
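For example, a quad drawn as a strip needs only 4 indices instead of 6, because vertices shared between adjacent triangles are specified once. A C++ sketch (assumes a bound VBO with four suitably ordered vertices, a current program, and no element array buffer bound, so the indices are read from client memory, which is legal in ES 2.0):

#include <GLES2/gl2.h>

void drawQuadAsStrip(void)
{
    // The strip 0-1-2-3 yields triangles (0,1,2) and (1,3,2).
    static const GLushort indices[4] = { 0, 1, 2, 3 };
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, indices);
}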
Also interesting for improving performance is to apply bump mapping, in case your model is textured: faking fine surface detail in a bump or normal map lets you get away with meshes that have far fewer polygons. With these two approaches your app will be remarkably improved.
As you mention that most of your scene is visible most of the time, you should also use level of detail (LOD). To implement geometry LOD, you need a different mesh for each level you wish to use, each with fewer polygons than the previous, closer one. You can build the geometry for each LOD yourself, or use 3D software to generate it automatically (see the sketch below).
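A minimal C++ sketch of distance-based LOD selection (the Mesh type and the thresholds are placeholders for whatever your engine uses):

#include <cstddef>
#include <vector>

struct Mesh { /* VBO handle, index count, ... */ };

struct LodSet {
    std::vector<Mesh> levels;        // levels[0] is the most detailed
    std::vector<float> maxDistance;  // use levels[i] while dist < maxDistance[i]
};

// Picks the cheapest mesh that is still detailed enough for the
// given camera distance.
const Mesh& selectLod(const LodSet& set, float distToCamera)
{
    for (std::size_t i = 0; i < set.levels.size(); ++i)
        if (distToCamera < set.maxDistance[i])
            return set.levels[i];
    return set.levels.back();  // beyond all thresholds: coarsest mesh
}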
Some free tools can automatically perform generic optimizations directly on your GLSL ES code, and they are really worth checking out.

Is ARB_texture_multisample available for OpenGL ES 2.0?

Basically, what is needed to perform multisampled deferred shading.
To expand a bit: I'm not actually all that interested in deferred shading per se; what is of key importance is the storage and retrieval of sub-pixel sample data for antialiasing purposes. I need to be able to control the resolve, or at least perform some operations before resolving the multisampled buffers.
All the major extensions for OpenGL ES are listed here: http://www.khronos.org/registry/gles/
And as far as I know, currently no major OpenGL ES implementation provides individual sample resolution in OpenGL ES 2.0. The only thing you can do is copy the multisampled buffer to a normal texture and then access the "normal" samples.
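To illustrate that copy-then-access workaround: core ES 2.0 has no blit entry point (vendor extensions such as GL_EXT_multisampled_render_to_texture fill the gap), but with an ES 3.0 context the resolve step is a framebuffer blit. A C++ sketch, assuming msaaFbo has a multisampled attachment and resolveFbo a plain texture attachment, both already set up:

#include <GLES3/gl3.h>

void resolveMsaa(GLuint msaaFbo, GLuint resolveFbo, int width, int height)
{
    // Blitting from a multisampled read framebuffer to a single-
    // sample draw framebuffer performs the resolve; the resolved
    // texture can then be sampled normally.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}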
