Understanding OpenGL state associations

I've been using OpenGL for quite a while now, but I always get confused by its state management system. In particular, I struggle to understand exactly which object or target a particular piece of state is stored against.
E.g. 1: assigning a texture parameter. Are those parameters stored with the texture itself, or with the texture unit? Does binding the texture on a different texture unit carry those parameter settings along?
E.g. 2: glVertexAttribPointer. What exactly is that associated with: the active shader program, the bound data buffer, or the ES context itself? If I bind a different vertex buffer object, do I need to call glVertexAttribPointer again?
So I'm not asking for answers to the above questions; I'm asking whether those answers are written down somewhere, so I don't have to go through the whole trial-and-error process every time I use something new.

Those answers are written down in the OpenGL ES 2.0 specification (PDF link). Every function's description states what state it affects, and there is a big series of tables at the end specifying which state belongs to which kind of object and which state is part of the global context.
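To give one concrete example from those tables: glTexParameter* state belongs to the texture object itself, not to the texture unit, so it follows the texture wherever it is subsequently bound. A sketch, with a hypothetical texID:

```c
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
/* This filter setting is stored in the texture object texID ... */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glActiveTexture(GL_TEXTURE3);
/* ... so binding texID on another unit brings the same filter state with it. */
glBindTexture(GL_TEXTURE_2D, texID);
```

glVertexAttribPointer, by contrast, is context state in ES 2.0: it captures the buffer that is bound to GL_ARRAY_BUFFER at call time, so binding a different buffer afterwards does not retroactively change the pointer.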


Vulkan descriptor set best practice [duplicate]

In my Vulkan application I used to draw meshes like this, when all the meshes used the same texture:
updateDescriptorSets(texture)
record command buffer
{
    for each mesh
        bind transform UBO
        draw mesh
}
But now I want each mesh to have a unique texture, so I tried this:
record command buffer
{
    for each mesh
        bind transform UBO
        updateDescriptorSets(textures[meshIndex])
        draw mesh
}
But it gives an error saying the descriptor set is destroyed or updated. I looked in the Vulkan documentation and found out that I can't update a descriptor set during command buffer recording. So how can I give each mesh its own texture?
vkUpdateDescriptorSets is not synchronized with anything. Therefore, you cannot update a descriptor set while it is in use. You must ensure that all rendering operations that use the descriptor set in question have finished, and that no commands have been placed in command buffers that use the set in question.
It's basically like a global variable; you can't have people accessing a global variable from numerous threads without some kind of synchronization. And Vulkan doesn't synchronize access to descriptor sets.
There are several ways to deal with this. You can give each object its own descriptor set. This is usually done by having the frequently changing descriptor set data be of a higher index than the less frequently changing data. That way, you're not changing every descriptor for each object, only the ones that change on a per-object basis.
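A sketch of that split, assuming a hypothetical pipelineLayout with per-frame data in set 0 and per-object data in set 1 (all handle names here are made up):

```c
/* Bind the stable per-frame set once (set 0)... */
vkCmdBindDescriptorSets(cmdBuf, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                        0 /* firstSet */, 1, &perFrameSet, 0, NULL);
for (uint32_t i = 0; i < meshCount; ++i) {
    /* ...and only rebind the per-object set (set 1) inside the loop. */
    vkCmdBindDescriptorSets(cmdBuf, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            1 /* firstSet */, 1, &perObjectSets[i], 0, NULL);
    vkCmdDrawIndexed(cmdBuf, meshes[i].indexCount, 1, 0, 0, 0);
}
```

All of the per-object sets are fully written before recording begins, so nothing is updated while in use.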
You can use push constant data to index into large tables/array textures. So the descriptor set would have an array texture or an array of textures (if you have dynamic indexing for arrays of textures). A push constant would provide an index, which is used by the shader to fetch that particular object's texture from the array texture/array of textures. This makes frequent changes fairly cheap, and the same index can also be used to give each object its own transformation matrices (by fetching into an array of matrices).
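A sketch of the push-constant approach, assuming a hypothetical pipelineLayout whose fragment stage declares a push-constant uint used to index an array of textures in set 0:

```c
/* Shader side (for reference):
 *   layout(set = 0, binding = 0) uniform sampler2D textures[MAX_TEXTURES];
 *   layout(push_constant) uniform Push { uint texIndex; } pc;
 *   ...color = texture(textures[pc.texIndex], uv);
 */
for (uint32_t i = 0; i < meshCount; ++i) {
    uint32_t texIndex = meshes[i].textureIndex;  /* hypothetical per-mesh field */
    vkCmdPushConstants(cmdBuf, pipelineLayout, VK_SHADER_STAGE_FRAGMENT_BIT,
                       0, sizeof texIndex, &texIndex);
    vkCmdDrawIndexed(cmdBuf, meshes[i].indexCount, 1, 0, 0, 0);
}
```

Note that indexing an array of samplers this way requires the index to be dynamically uniform, or the appropriate dynamic-indexing feature to be enabled.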
If you have the extension VK_KHR_push_descriptor available, then you can integrate changes to descriptors directly into the command buffer. How much better this is than the push constant mechanism is of course implementation-dependent.
If you update a descriptor set then all command buffers that this descriptor set is bound to will become invalid. Invalid command buffers cannot be submitted or be executed by the GPU.
What you basically need to do is to update descriptor sets before you bind them.
This odd behavior exists because, in vkCmdBindDescriptorSets, some implementations take the Vulkan descriptor set, translate it to native descriptor tables, and then store the result in the command buffer. So if you update the descriptor set after vkCmdBindDescriptorSets, the command buffer will be seeing stale data. The VK_EXT_descriptor_indexing extension relaxed this behavior under some circumstances.

How to get all current OpenGL bindings

I am learning OpenGL via the Superbible and the internet, and a concept that I always see causing trouble is glBindxxx (see, for instance, the accepted answer to "Concept behind OpenGL's 'Bind' functions" for a typical binding-related problem).
You can/have to bind buffers (glBindBuffer) before setting them up for use with glVertexAttribPointer. You can/have to bind the VAO (glBindVertexArray) before calling your glVertexAttribPointer, and so on, in a never-ending chain of current-binding dependencies.
Sometimes, even if you forget to bind something, your program might still work if it is simple enough, but I have seen lots of people take far too long to discover that the source of their bug is some hidden binding they were not aware of.
Is there a command in OpenGL to list the current bindings? Something like a glGetAllBindings that would return the last bound ID for each of the glBindxxx functions (a small list of them, based on the Superbible 6th edition, is below):
glBindBuffer
glBindBufferBase
glBindFramebuffer
glBindImageTexture
glBindProgramPipeline
glBindSampler
glBindTexture
glBindTransformFeedback
glBindVertexArray
glBindVertexBuffer
For instance, if I called glBindBuffer with buffer ID 1 and then again with buffer ID 2, and glBindVertexArray with ID 3 and then with ID 5, the function would return 2 for glBindBuffer and 5 for glBindVertexArray.
With that, I can always know in which context I am before applying new settings.
I believe this would greatly help anyone needing to understand and debug binding problems.
Is there a command in OpenGL to list all current bindings?
No. Really, how could there be?
Many objects are bound to specific targets for that particular type of object. Buffer objects alone have a dozen. And textures not only are bound to a target, they are bound to a combination of texture unit and target. So the combination of (unit 0, GL_TEXTURE2D) can have a different texture bound than (unit 0, GL_TEXTURE3D). And the number of texture units for modern hardware is required to be at least 96 (16 per stage * 6 shader stages).
It would not be reasonable to have a single function regurgitate all of that information.
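What the API does offer is a per-binding query: glGetIntegerv with the matching *_BINDING enum. A sketch that dumps a few of the common ones (assumes a current GL context; variable names are made up):

```c
#include <stdio.h>
/* plus your platform's GL header */

GLint arrayBuf, elemBuf, vao, tex2d, fbo, prog;
glGetIntegerv(GL_ARRAY_BUFFER_BINDING,         &arrayBuf);
glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &elemBuf);
glGetIntegerv(GL_VERTEX_ARRAY_BINDING,         &vao);
glGetIntegerv(GL_TEXTURE_BINDING_2D,           &tex2d);  /* for the *active* unit only */
glGetIntegerv(GL_FRAMEBUFFER_BINDING,          &fbo);
glGetIntegerv(GL_CURRENT_PROGRAM,              &prog);
printf("ARRAY_BUFFER=%d ELEMENT_ARRAY_BUFFER=%d VAO=%d TEX2D=%d FBO=%d PROGRAM=%d\n",
       arrayBuf, elemBuf, vao, tex2d, fbo, prog);
```

Texture bindings are per-unit, so to see them all you would loop over glActiveTexture(GL_TEXTURE0 + i) and repeat the GL_TEXTURE_BINDING_* queries for each target on each unit.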

Changing data source for same attribute not possible ? (in WebGL/OpenGL ES 2.0)

I happen to have multiple parts in drawing each frame that could use the same shader/program but need different data passed to the vertex shader as attributes. That data is stored in different buffers, though (one having GL_STATIC_DRAW, the other GL_STREAM_DRAW). The static buffer is fairly large, which is why I don't want to stream everything from the same buffer.
My question:
Is OpenGL ES 2.0 / WebGL capable of using different data for the same attribute/shader setup (read from gpu memory) ?
What have I tried ?
I have tried getting the attribute location for e.g. a_position_f4 multiple times (using bindBuffer() to change the buffer) and then calling vertexAttribPointer() each time, but debugging showed that the attribute indices are not dependent on the bound buffer, so changing bound buffers and en-/disabling the vertex attributes won't lead to the desired behaviour.
Your conclusion is wrong. The attribute locations have nothing to do with the buffers you use: they are determined at shader link time and are not tied to buffer objects in any way. The vertex attrib pointer, however, will always reference the buffer currently bound to GL_ARRAY_BUFFER (at the time of the glVertexAttribPointer call). So you should query the attribute locations only once (per program object), and can then use
glBindBuffer(GL_ARRAY_BUFFER, bufID);
glVertexAttribPointer(attrib_loc, ...);
any time to specify the buffer bufID as the source of the attribute data for the attribute with index attrib_loc. (As you can see, each attribute array may come from a different buffer, or several or all of them can come from the same one; that is up to you.) The GL is a state machine, and this setup will persist until you change it again, so all draw calls following these statements will fetch that attribute from that source.
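So for the original question, switching the same attribute between the static and the streaming buffer is just a matter of repointing it between draws. A sketch, with hypothetical buffer IDs and vertex counts:

```c
/* Draw the static geometry... */
glBindBuffer(GL_ARRAY_BUFFER, staticBufID);
glVertexAttribPointer(posLoc, 4, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, staticVertexCount);

/* ...then repoint the same attribute at the streaming buffer and draw again. */
glBindBuffer(GL_ARRAY_BUFFER, streamBufID);
glVertexAttribPointer(posLoc, 4, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, streamVertexCount);
```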

OpenGL Driver Monitor says textures are rapidly increasing. How to find the leak?

When I run my app, OpenGL Driver Monitor says the Textures count is rapidly increasing — within 30 seconds the Textures count increases by about 45,000.
But I haven't been able to find the leak. I've instrumented every glGen*() call to print out every GL object name it returns — but they're all less than 50, so apparently GL objects created by glGen*() aren't being leaked.
It's a large, complex app that renders multiple shaders to multiple FBOs on shared contexts on separate threads, so reducing this to a simple test case isn't practical.
What else, besides glGen*() calls, should I check in order to identify what is leaking?
Funny thing, those glGen* (...) functions. All they do is return the first unused name for a particular type of object and reserve that name, so that a subsequent call to glGen* (...) does not hand it out again.
Texture objects (and all objects, really) are actually created in OpenGL the first time you bind a name. That is to say, glBindTexture (GL_TEXTURE_2D, 1) is the actual call that creates a texture with the name 1. The interesting thing is that in many implementations (OpenGL 2.1 or older) you are free to use any random number you want for the name, even if it was never acquired through a call to glGenTextures (...), and glBindTexture (...) will still create a texture for that name (provided one does not already exist).
The bottom line is that glGenTextures (...) is not what creates a texture; it only gives you the first unused texture name it finds. I would focus on tracking down all calls to glBindTexture (...) instead; it is likely you are passing uninitialized data as the name.
UPDATE:
As datenwolf points out, if you are using a 3.2+ core context then this behavior does not apply (names must be generated with a matching glGen* (...) call starting with OpenGL 3.0). However, OS X gives you a 2.1 implementation by default.
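One way to track this down in a large codebase is a logging shim around glBindTexture. A sketch, assuming you can route all call sites through a common header and keep your own table of generated names (name_was_generated is a hypothetical bookkeeping function you would implement alongside your glGen* instrumentation):

```c
#include <stdio.h>
/* plus your platform's GL header */

/* Hypothetical debug shim: flags any bind of a name that glGenTextures
 * never handed out. The parenthesized call invokes the real function. */
#define glBindTexture(target, name)                                          \
    do {                                                                     \
        if (!name_was_generated((name)))                                     \
            fprintf(stderr, "glBindTexture with ungenerated name %u at %s:%d\n", \
                    (unsigned)(name), __FILE__, __LINE__);                   \
        (glBindTexture)((target), (name));                                   \
    } while (0)
```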

Is it possible to verify the components' size of non-color-renderable internal format?

In OpenGL ES 3.0 spec we can read:
§ 4.4.5
When the relevant framebuffer binding is non-zero, if the currently bound
framebuffer object is not framebuffer complete, then the values of the state variables
listed in table 6.34 are undefined.
Table 6.34 contains the x_BITS constants. That means we can create a texture or renderbuffer that is not color-renderable, but we can't verify that its components have the expected sizes.
Is there any way around this, or is my idea completely skewed and this information is irrelevant (which would render the question incorrect)?
You can query a bound renderbuffer's properties using GetRenderbufferParameteriv (§6.1.14, Renderbuffer Object Queries), for example with RENDERBUFFER_INTERNAL_FORMAT.
The problem is that unless the framebuffer is complete, it is not well formed, so the specification simply states that the values of those framebuffer state variables are undefined. That doesn't stop you from querying one of the attached renderbuffers directly and getting the desired information.
Not sure if this is what you were looking for.
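A sketch of such a direct renderbuffer query, assuming rbo is a renderbuffer whose storage was already allocated with glRenderbufferStorage:

```c
GLint internalFormat, redBits, depthBits;
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
/* These queries target the bound renderbuffer itself, so they do not
 * depend on any framebuffer being complete. */
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_INTERNAL_FORMAT, &internalFormat);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_RED_SIZE,        &redBits);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_DEPTH_SIZE,      &depthBits);
```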
