In my Vulkan application I used to draw meshes like this when all the meshes shared the same texture:

updateDescriptorSets(texture)
record command buffer
{
    for each mesh
        bind transform UBO
        draw mesh
}

But now I want each mesh to have a unique texture, so I tried this:

record command buffer
{
    for each mesh
        bind transform UBO
        updateDescriptorSets(textures[meshIndex])
        draw mesh
}

But it fails with an error saying the descriptor set has been destroyed or updated. I looked in the Vulkan documentation and found out that I can't update a descriptor set while a command buffer that uses it is recorded or pending execution. So how can I give each mesh its own texture?
vkUpdateDescriptorSets is not synchronized with anything. Therefore, you cannot update a descriptor set while it is in use. You must ensure that all rendering operations that use the descriptor set in question have finished, and that no commands have been recorded into command buffers that use the set in question.
It's basically like a global variable; you can't have people accessing a global variable from numerous threads without some kind of synchronization. And Vulkan doesn't synchronize access to descriptor sets.
There are several ways to deal with this. You can give each object its own descriptor set. This is usually done by having the frequently changing descriptor set data be of a higher index than the less frequently changing data. That way, you're not changing every descriptor for each object, only the ones that change on a per-object basis.
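The per-object approach can be sketched roughly like this. This is a minimal illustration, not the asker's code: the struct layout, set indices, and names (`Mesh`, `sharedSet`, etc.) are assumptions; error handling and pool/layout creation are omitted.

```cpp
// Sketch: one descriptor set per mesh, all updated up front,
// then only *bound* (never updated) while recording.
#include <vulkan/vulkan.h>
#include <vector>

struct Mesh {
    VkDescriptorSet set;   // set = 1: per-object texture
    // ... vertex/index buffers, draw parameters, etc.
};

void updateAllSets(VkDevice device, std::vector<Mesh>& meshes,
                   const std::vector<VkDescriptorImageInfo>& textures)
{
    // Done once (or whenever a texture changes), BEFORE recording.
    for (size_t i = 0; i < meshes.size(); ++i) {
        VkWriteDescriptorSet write{VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET};
        write.dstSet          = meshes[i].set;
        write.dstBinding      = 0;
        write.descriptorCount = 1;
        write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        write.pImageInfo      = &textures[i];
        vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);
    }
}

void record(VkCommandBuffer cmd, VkPipelineLayout layout,
            VkDescriptorSet sharedSet, const std::vector<Mesh>& meshes)
{
    // Set 0: shared, infrequently changing data (e.g. the transform UBO).
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                            layout, 0, 1, &sharedSet, 0, nullptr);
    for (const Mesh& m : meshes) {
        // Set 1: the per-mesh texture; only a bind, no update.
        vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                                layout, 1, 1, &m.set, 0, nullptr);
        // vkCmdDrawIndexed(cmd, ...);
    }
}
```

Putting the per-object set at the higher index means the shared set stays bound across all the draws.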
You can use push constant data to index into large tables/array textures. So the descriptor set would have an array texture or an array of textures (if you have dynamic indexing for arrays of textures). A push constant would provide an index, which is used by the shader to fetch that particular object's texture from the array texture/array of textures. This makes frequent changes fairly cheap, and the same index can also be used to give each object its own transformation matrices (by fetching into an array of matrices).
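A rough sketch of the push-constant approach, under stated assumptions: the push-constant range, binding numbers, and array size are illustrative, and the pipeline layout must declare a matching range for the fragment stage.

```cpp
// GLSL side (requires a sampler2DArray, or dynamically indexable
// arrays of samplers via the appropriate feature/extension):
//   layout(push_constant) uniform Push { uint objectIndex; } pc;
//   layout(set = 0, binding = 1) uniform sampler2D textures[MAX_OBJECTS];
//   ...
//   vec4 c = texture(textures[pc.objectIndex], uv);

#include <vulkan/vulkan.h>
#include <cstdint>

void recordWithPushConstants(VkCommandBuffer cmd, VkPipelineLayout layout,
                             uint32_t meshCount)
{
    for (uint32_t i = 0; i < meshCount; ++i) {
        // Cheap per-object change: 4 bytes recorded straight into the
        // command buffer, with no descriptor set updates or rebinds.
        vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_FRAGMENT_BIT,
                           /*offset*/ 0, sizeof(uint32_t), &i);
        // vkCmdDrawIndexed(cmd, ...);
    }
}
```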
If you have the extension VK_KHR_push_descriptor available, then you can integrate changes to descriptors directly into the command buffer. How much better this is than the push constant mechanism is of course implementation-dependent.
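With that extension, the loop might look like the following sketch. The set number and bindings are assumptions, and the descriptor set layout for that set must have been created with VK_DESCRIPTOR_SET_LAYOUT_CREATE_PUSH_DESCRIPTOR_BIT_KHR; vkCmdPushDescriptorSetKHR itself has to be fetched with vkGetDeviceProcAddr.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

void recordWithPushDescriptors(VkCommandBuffer cmd, VkPipelineLayout layout,
                               PFN_vkCmdPushDescriptorSetKHR pushDescriptorSet,
                               const VkDescriptorImageInfo* imageInfos,
                               uint32_t meshCount)
{
    for (uint32_t i = 0; i < meshCount; ++i) {
        VkWriteDescriptorSet write{VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET};
        write.dstBinding      = 0;   // dstSet is ignored for push descriptors
        write.descriptorCount = 1;
        write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        write.pImageInfo      = &imageInfos[i];
        // Records the descriptor change directly into the command buffer.
        pushDescriptorSet(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                          layout, /*set*/ 1, 1, &write);
        // vkCmdDrawIndexed(cmd, ...);
    }
}
```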
If you update a descriptor set, then all command buffers that the descriptor set is bound to become invalid. Invalid command buffers cannot be submitted to or executed by the GPU.
What you basically need to do is to update descriptor sets before you bind them.
This odd behavior exists because in vkCmdBindDescriptorSets some implementations take the Vulkan descriptor set, translate it into native descriptor tables, and then store it in the command buffer. So if you update the descriptor set after vkCmdBindDescriptorSets, the command buffer will see stale data. The VK_EXT_descriptor_indexing extension relaxed this behavior under some circumstances.
I happen to have multiple parts in each frame's drawing that could use the same shader/program but need different data passed to the vertex shader as attributes. That data is stored in different buffers, though (one with GL_STATIC_DRAW usage, the other GL_STREAM_DRAW). The static buffer is fairly large, which is why I don't want to stream everything from the same buffer.
My question:
Is OpenGL ES 2.0 / WebGL capable of using different data (read from GPU memory) for the same attribute/shader setup?
What have I tried?
I have tried getting the attribute location for e.g. a_position_f4 multiple times (using bindBuffer() to change the buffer) and then calling vertexAttribPointer() each time, but debugging showed that the attribute indices do not depend on the bound buffer, so changing bound buffers and enabling/disabling the vertex attributes won't lead to the desired behaviour.
Your conclusion is wrong. The attribute locations have nothing to do with the buffers you use. They are determined at shader link time and are not tied to buffer objects in any way. The vertex attrib pointer, however, will always reference the GL_ARRAY_BUFFER that is currently bound at the time of the glVertexAttribPointer call. So you should query the attribute locations just once (per program object), and then can use
glBindBuffer(GL_ARRAY_BUFFER, bufID);
glVertexAttribPointer(attrib_loc, ...);
any time to specify the buffer bufID as the source for the attribute data of the attribute with index attrib_loc. (As you can see, each attribute array may come from a different buffer, or several or all can come from the same one; that is up to you.) The GL is a state machine, and this setup will stay in place until you change it again, so all draw calls following these statements will use that buffer as the source for that attribute.
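Applied to the question's two-buffer setup, the pattern looks roughly like this. It is a sketch using the desktop/ES C API rather than WebGL's JavaScript API; the names (`staticBuf`, `streamBuf`, the component count of 4) are assumptions.

```cpp
// Sketch: pointing the same attribute location at two different buffers
// between draws. Valid on OpenGL ES 2.0 / WebGL 1.
#include <GLES2/gl2.h>

void drawBothParts(GLuint program, GLuint staticBuf, GLuint streamBuf,
                   GLsizei staticCount, GLsizei streamCount)
{
    // Query the location ONCE per program; it never depends on bound buffers.
    GLint loc = glGetAttribLocation(program, "a_position_f4");
    glEnableVertexAttribArray(loc);

    // Draw the static part: the pointer captures the buffer bound NOW.
    glBindBuffer(GL_ARRAY_BUFFER, staticBuf);
    glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, staticCount);

    // Re-point the same location at the streaming buffer and draw again.
    glBindBuffer(GL_ARRAY_BUFFER, streamBuf);
    glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, streamCount);
}
```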
When I run my app, OpenGL Driver Monitor says the Textures count is rapidly increasing — within 30 seconds the Textures count increases by about 45,000.
But I haven't been able to find the leak. I've instrumented every glGen*() call to print out every GL object name it returns — but they're all less than 50, so apparently GL objects created by glGen*() aren't being leaked.
It's a large, complex app that renders multiple shaders to multiple FBOs on shared contexts on separate threads, so reducing this to a simple test case isn't practical.
What else, besides glGen*() calls, should I check in order to identify what is leaking?
Funny thing, those glGen* (...) functions. All they do is return the first unused name for a particular type of object and reserve the name so that a subsequent call to glGen* (...) does not also give out the name.
Texture objects (and all objects, really) are actually created in OpenGL the first time you bind a name. That is to say, glBindTexture (GL_TEXTURE_2D, 1) is the actual function that creates a texture with the name 1. The interesting thing here is that in many implementations (OpenGL 2.1 or older) you are free to use any random number you want for the name even if it was not acquired through a call to glGenTextures (...) and glBindTexture (...) will still create a texture for that name (provided one does not already exist).
The bottom line is that glGenTextures (...) is not what creates a texture; it only gives you the first unused texture name it finds. I would focus on tracking down all calls to glBindTexture (...) instead; it is likely you are passing uninitialized data as the name.
UPDATE:
As datenwolf points out, if you are using a 3.2+ core context then this behavior does not apply (names must be generated with a matching glGen* (...) call starting with OpenGL 3.0). However, OS X gives you a 2.1 implementation by default.
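The reserve-on-gen, create-on-bind behavior can be illustrated with a toy model. This is not the driver's actual code, just a self-contained simulation of the name bookkeeping described above.

```cpp
#include <cassert>
#include <set>

// Toy model of glGenTextures-style name allocation, showing why
// glGen* alone never creates an object in a 2.1-style context.
class NamePool {
    std::set<unsigned> reserved;   // names handed out or bound
    std::set<unsigned> created;    // names that have actually been bound
public:
    unsigned gen() {               // like glGenTextures: reserve, don't create
        unsigned n = 1;
        while (reserved.count(n)) ++n;
        reserved.insert(n);
        return n;
    }
    void bind(unsigned n) {        // like glBindTexture pre-3.x: ANY name,
        reserved.insert(n);        // generated or not, becomes a live object
        created.insert(n);         // on first bind
    }
    bool exists(unsigned n) const { return created.count(n) != 0; }
};
```

Binding an uninitialized garbage value behaves exactly like `bind(42)` here: a brand-new texture appears that no glGen* call ever reported, which matches the symptom of a rising texture count with only ~50 generated names.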
In OpenGL ES 3.0 spec we can read:
§ 4.4.5
When the relevant framebuffer binding is non-zero, if the currently bound
framebuffer object is not framebuffer complete, then the values of the state variables
listed in table 6.34 are undefined.
Table 6.34 contains the x_BITS constant. That means we can create a texture or renderbuffer that's not color-renderable, but we can't verify that it has proper size.
Is there any way around this, or is my idea completely skewed and this information is irrelevant (which would render the question incorrect)?
You can query a bound renderbuffer's properties using GetRenderbufferParameteriv (§ 6.1.14, Renderbuffer Object Queries), for example with RENDERBUFFER_INTERNAL_FORMAT.
The problem is that unless the framebuffer is complete, it is not well formed, so the specification just states that the values returned for the framebuffer's state are undefined. That doesn't mean you can't query one of the attached renderbuffers directly and get the desired information.
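A minimal sketch of querying the renderbuffer directly (ES 3.0 names; the function wrapper is illustrative):

```cpp
#include <GLES3/gl3.h>

// Reads size and format from the renderbuffer object itself; this stays
// well defined even when the FBO it is attached to is incomplete.
void inspectRenderbuffer(GLuint rbo, GLint* width, GLint* height, GLint* fmt)
{
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, height);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER,
                                 GL_RENDERBUFFER_INTERNAL_FORMAT, fmt);
}
```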
Not sure if this is what you were looking for.
According to Apple's documentation, CGLFlushDrawable, or its Cocoa equivalent flushBuffer, may behave in a couple of different ways. Normally, for a windowed application, the contents of the back buffer are copied to the visible buffer, as stated here:
CGLFlushDrawable
Copies the back buffer of a double-buffered context to the front buffer.
I assume the contents of the drawing buffer are left untouched (see question 1). Even if I'm wrong, this can be ensured by passing the kCGLPFABackingStore attribute to CGLChoosePixelFormat.
But further reading reveals that under some circumstances the buffers may be swapped rather than copied:
If the backing store attribute is set to false, the buffers can be exchanged rather than copied. This is often the case in full-screen mode.
And also this states
When there is no content above your full-screen window, Mac OS X automatically attempts to optimize this context’s performance. For example, when your application calls flushBuffer on the NSOpenGLContext object, the system may swap the buffers rather than copying the contents of the back buffer to the front buffer. (...) Because the system may choose to swap the buffers rather than copy them, your application must completely redraw the scene after every call to flushBuffer.
And here go my questions:
1. If the back buffer is copied, is it guaranteed that its contents are preserved even without the backing store attribute?
2. If the buffers are swapped, does the back buffer get the contents of the front buffer, or is it undefined, so it could just as well contain random data?
3. The system may choose to swap buffers, but is there any way to determine whether it actually did choose to do so?
4. In any of those cases, is there a way to determine whether the buffer was preserved, exchanged with the front buffer, or got messed up?
Also, any information on how this is handled in WGL, GLX, or EGL would be appreciated. I particularly need the answer to question 4.
1. No, it's not guaranteed.
2. It might be random.
3. No, I don't believe so.
4. No. If you don't specify kCGLPFABackingStore or NSOpenGLPFABackingStore, then you can't make any assumptions about the contents of the back buffer, which is why the docs say you must redraw from scratch for every frame.
I'm not sure what you're asking about WGL, GLX, and EGL.
I've been using OpenGL for quite a while now, but I always get confused by its state management system. In particular, the issue I struggle with is understanding exactly which object or target a particular piece of state is stored against.
Eg 1: assigning a texture parameter. Are those parameters stored with the texture itself, or with the texture unit? Will binding the texture on a different texture unit move those parameter settings?
Eg 2: glVertexAttribPointer. What exactly is it associated with: the active shader program, the bound data buffer, or the ES context itself? If I bind a different vertex buffer object, do I need to call glVertexAttribPointer again?
So I'm not asking for answers to the above questions. I'm asking if those answers are written down somewhere, so I don't need to do the whole trial-and-error thing every time I use something new.
Those answers are written in the OpenGL ES 2.0 specification (PDF link). Every function states what state it affects, and there's a big series of tables at the end that specify which state is part of which objects, or just part of the global context.
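As a concrete taste of what those tables tell you, here is a sketch for the question's "Eg 1" (the function and names are illustrative; the behavior is what the ES 2.0 state tables specify):

```cpp
#include <GLES2/gl2.h>

// glTexParameteri modifies the TEXTURE OBJECT bound to the active unit,
// not the unit itself, so the setting travels with the texture.
void demo(GLuint tex)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // Binding the same texture on a different unit: its min filter is
    // still GL_NEAREST, because that state lives in `tex`, not in unit 0.
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex);
}
```

(For "Eg 2", the same tables show that glVertexAttribPointer state belongs to the context in ES 2.0, keyed by attribute index, and captures whichever buffer was bound to GL_ARRAY_BUFFER at call time.)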