OpenGL ES shader program ID and VBO buffer ID are the same - opengl-es

I am drawing 2 triangles in OpenGL ES 2.0 using VBOs.
The program handle (hProgramHandle)
hProgramHandle = glCreateProgram(); // value is 210003
is the same as iVertBuffId3:
glGenBuffers(1, &iVertBuffId1); // for vertices // 70001
...
...
glGenBuffers(1, &iVertBuffId2); // for color // 140002
...
...
glGenBuffers(1, &iVertBuffId3); // for texture // 210003
I have created 3 buffers (one each for position, color and texture).
The clash shows up while generating the buffer for the texture, and I am not getting any output.
Will OpenGL generate the same number for a program ID and a VBO buffer ID?

That depends on the implementation of the particular OpenGL ES driver you are running, but yes, the values can be the same, because they are handles to different types of objects rather than memory pointers. Think of them as indexes into different data structures.

IDs returned by OpenGL are in fact names referring to its internal storage.
That internal storage is divided by specialty, so OpenGL can optimize its memory access at will.
Where this is counter-intuitive is that IDs are not unique but depend on what you are talking to OpenGL about, e.g. what is currently bound.
It is absolutely correct for OpenGL to give you identical IDs as long as they refer to different things: texture IDs and buffer IDs can overlap, and that is not a problem.
Note that they may or may not overlap, may start near 0 and count up in order, or may simply look like random numbers; that is implementation dependent.
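As a minimal sketch of why this is harmless (the concrete values are hypothetical), program names and buffer names come from separate namespaces, so identical numbers remain two distinct handles:
GLuint program = glCreateProgram();    // e.g. returns 3
GLuint buffer = 0;
glGenBuffers(1, &buffer);              // may also return 3: a different namespace
glUseProgram(program);                 // selects the program object named 3
glBindBuffer(GL_ARRAY_BUFFER, buffer); // binds the buffer object named 3; no conflict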

Related

Why DirectX 11 doesn't support multiple index buffers in IASetIndexBuffer

You can set multiple vertex buffers with IASetVertexBuffers,
but there is no plural version of IASetIndexBuffer.
What is the point of creating multiple (non-interleaved) vertex buffers if you won't be able to refer to them with individual index buffers?
(Assume I have a struct called vector3 with 3 floats x, y, z.)
Say I have a model of a human with 250,000 vertices and 1,000,000 triangles;
I will create a vertex buffer of size 250,000 * sizeof(vector3)
for vertex LOCATIONS, and also
another vertex buffer of size 1,000,000 * 3 * sizeof(vector3) for vertex NORMALS (and probably another for the diffuse texture).
I can set these vertex buffers like:
ID3D11Buffer* vbs[2] = { meshHandle->VertexBuffer_Position, meshHandle->VertexBuffer_Normal };
uint strides[] = { Vector3f_size, Vector3f_size };
uint offsets[] = { 0, 0 };
ImmediateContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
How can I set separate index buffers for these vertex data if IASetIndexBuffer only supports one index buffer?
And also (I know there are techniques for decals, like creating extra triangles from the original model, but)
say I want to render a small texture, like a SCAR, on this human model's face (say the forehead), and this scar will only spread over 4 triangles.
Is it possible to create a UV buffer (covering only those 4 triangles) and 3 different index buffers for locations, normals and UVs for only those 4 triangles, while using the same original vertex buffers (the same data from the full human model)? I don't want to create tons of UV data that will never be rendered anywhere but the character's forehead (and I don't want to re-use or re-create vertex position data for these secondary texture layers (decals)).
EDIT:
I realized I didn't properly ask a question, so my question is:
Did I misunderstand the non-interleaved model structure (is it used
for some other reason than having non-aligned vertex components)?
Or am I approaching the non-interleaved structure wrong (is there a way
of defining multiple non-aligned vertex buffers and drawing them with
only one index buffer)?
The reason you can't have more than one index buffer bound at a time is that you need to specify a fixed number of primitives when you make a call to DrawIndexed.
So in your example, if you have one index buffer with 3,000 primitives and another with 12,000 primitives, the pipeline would have no idea how to match the first set to the second set.
It is generally normal that some vertex data (mostly positions) eventually needs to be duplicated across your vertex buffers, since the buffers need to describe the same number of vertices.
The index buffer works as a "lookup table", so your data across vertex buffers needs to be consistent.
A non-interleaved model structure has several advantages:
First, using separate buffers can lead to better performance if you need to draw the model several times and some draws do not require every attribute.
For example, when you render a shadow map you only need access to positions; in interleaved mode you still have to bind a large data structure and access its elements in a non-contiguous way (the Input Assembler does that). With non-interleaved data, positions are contiguous in memory, so the fetch will be much faster.
Non-interleaved data also makes it easier to process individual attributes; it is common nowadays to perform displacement or skinning in a compute shader, in which case you can easily create another Position+Normal buffer, perform your skinning into it, and attach it to the pipeline once processed (while keeping the UV buffer intact).
If you want to draw non-aligned vertex buffers, you could use structured buffers instead (and use SV_VertexID and some custom lookup tables in your shader code), as sketched below.
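A rough C++ sketch of that structured-buffer route, assuming device, context, positions, numVertices and drawVertexCount already exist (hypothetical names); the vertex shader would declare something like StructuredBuffer<float3> Positions : register(t0) and index it via SV_VertexID:
// Create a structured buffer holding one vector3 per vertex.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth           = numVertices * sizeof(vector3);
desc.Usage               = D3D11_USAGE_IMMUTABLE;
desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(vector3);
D3D11_SUBRESOURCE_DATA init = { positions, 0, 0 };
ID3D11Buffer* positionBuffer = nullptr;
device->CreateBuffer(&desc, &init, &positionBuffer);

// Expose it to the vertex shader as t0.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_UNKNOWN;     // structured buffers use UNKNOWN
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
srvDesc.Buffer.FirstElement = 0;
srvDesc.Buffer.NumElements  = numVertices;
ID3D11ShaderResourceView* positionSRV = nullptr;
device->CreateShaderResourceView(positionBuffer, &srvDesc, &positionSRV);
context->VSSetShaderResources(0, 1, &positionSRV);

// No IASetVertexBuffers / input layout required; the shader fetches
// Positions[...] itself using SV_VertexID and its own lookup tables,
// so drawVertexCount is simply the number of entries in that lookup.
context->Draw(drawVertexCount, 0);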

In OpenGL ES, can I use a vertex buffer, array buffer, etc. for shader-shared matrices?

Since OpenGL ES does not support shared "uniform blocks", I was wondering if there is a way I can store matrices that can be referenced by a number of different shaders. A simple example would be a worldToViewport or worldToEye matrix which would not change for an entire frame and which all shaders would reference. I saw one post where someone uses 3 or 4 dot() calls on a vertex to transform it from 4 "column vectors", but I am wondering if there is a way to assign the buffer data to a "mat4" in the shader.
Ah yes, the need for this is WebGL, which at the moment only seems to support OpenGL ES 2.0.
I wonder if it supports indexed attribute buffers, as I assume they don't need to be any particular size relative to the size of the position vertex array.
Then, if one can use a hard-coded or calculated index into the attribute buffer (in the shader), and if one can bind more than one attribute buffer at a time and access all the buffers "bound to the shader" simultaneously in a shader...
I see that if all of this is true, it might work. I need a good language/architecture reference on shaders, as I am somewhat new to shader programming and I'm trying to design a wall without knowing the shapes of the bricks :)
Vertex attributes are per-vertex, so there is no way to share vertex attributes amongst multiple vertices.
OpenGL ES 2.0 upwards has CPU-side uniforms, which must be uploaded individually from the CPU at draw time. Uniforms belong to the program object, so for uniforms which are constant for a frame you only have to update each program once; the cost isn't necessarily proportional to draw count.
OpenGL ES 3.0 onwards has Uniform Buffer Objects (UBOs), which allow you to load uniforms from a buffer in memory.
I'm not sure what you mean by "doesn't support shared uniform blocks", as that's pretty much what a UBO is, although it won't work on older hardware which only supports OpenGL ES 2.x.
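For the ES 2.0 path, a minimal sketch of the "update each program once per frame" pattern (programs, numPrograms and worldToViewport are assumed to exist):
for (int i = 0; i < numPrograms; ++i) {
    glUseProgram(programs[i]);
    GLint loc = glGetUniformLocation(programs[i], "worldToViewport");
    glUniformMatrix4fv(loc, 1, GL_FALSE, worldToViewport); // 16 floats, column-major
}
// On OpenGL ES 3.0+ the same matrix could instead live in one UBO bound to a
// shared binding point, e.g. glBindBufferBase(GL_UNIFORM_BUFFER, 0, matrixUBO);
In practice you would cache the uniform locations at link time rather than querying them every frame.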

How can I properly manage data in modern OpenGL while considering performance?

In modern OpenGL (3.x+), you create buffer objects which contain vertex attributes such as positions, colors, normals, texture coordinates, and indices.
These buffers are then assigned to a corresponding vertex array object (VAO), which essentially contains pointers to all of the data as well as the data's format.
There are many tutorials out there for how to create a VAO and how to use it; unfortunately, it isn't clear how VAOs should be used in larger applications or games.
For example, a game might contain many 3D models, and it seems appropriate to give each model its own VAO.
On the other hand, a particle system contains many disconnected primitives traveling independently of one another. In this scenario, using a single VAO per system might improve performance in CPU-GPU transfers. However, the primitives then need to be translated differently from one another, so it might also seem viable to give each particle a very tiny VAO of its own.
Question:
For a large quantity of small data sets (such as a particle system of quads), should all of the data be packed into one VAO or divided into many VAOs? What are the performance benefits/drawbacks of each method?
Assuming one VAO is used, the only apparent way to translate each independent sub-unit of data is to modify the actual position information and reload it into the GPU. Doing this many times is costly in terms of time.
Assuming many VAOs are used, the GPU must store duplicate formatting information for each VAO. This seems costly in terms of space (though I'm not sure whether it is necessarily slow).
Side-Note:
Yes, I'm personally interested in managing a particle system. To keep this question more generic, and more useful for others, I am asking about VAO management as a whole. I am curious which management methods are more suitable than others, considering the type of data being stored and what kind of performance (time/space) is desired.
VAO creation is described well here:
https://www.opengl.org/wiki/Vertex_Specification
In the case of particles it would be best to use instanced rendering, where you render all the particles in a single draw call but give each one a different position as a per-instance attribute. You can update an existing buffer using glBufferSubData; that way you can update the positions on the CPU side between frames and then re-upload them to the buffer.
In more complex examples you can instance whichever attributes you want.
The way I set up and invoke instanced rendering in my code is as follows:
void CreateInstancedAttrib(unsigned int attribNum, GLuint VAO, GLuint& posVBO, int numInstances){
    glBindVertexArray(VAO);
    posVBO = CreateVertexArrayBuffer(0, sizeof(vec3), numInstances, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(attribNum);
    glVertexAttribPointer(attribNum, 3, GL_FLOAT, GL_FALSE, sizeof(vec3), 0);
    glVertexAttribDivisor(attribNum, 1);
    glBindVertexArray(0);
}
Here posVBO holds the per-instance position data, and the lines following its creation set that buffer up as an instanced attribute.
When rendering:
void RenderInstancedStaticMesh(const StaticMesh& mesh, MaterialUniforms& uniforms, const vec3* positions){
    for (unsigned int meshNum = 0; meshNum < mesh.m_numMeshes; meshNum++){
        if (mesh.m_meshData[meshNum]->m_hasTexture){
            glBindTexture(GL_TEXTURE_2D, mesh.m_meshData[meshNum]->m_texture);
        }
        glBindVertexArray(mesh.m_meshData[meshNum]->m_vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, mesh.m_meshData[meshNum]->m_instancedDataBuffer);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vec3) * mesh.m_numInstances, positions);
        glUniform3fv(uniforms.diffuseUniform, 1, &mesh.m_meshData[meshNum]->m_material.diffuse[0]);
        glUniform3fv(uniforms.specularUniform, 1, &mesh.m_meshData[meshNum]->m_material.specular[0]);
        glUniform3fv(uniforms.ambientUniform, 1, &mesh.m_meshData[meshNum]->m_material.ambient[0]);
        glUniform1f(uniforms.shininessUniform, mesh.m_meshData[meshNum]->m_material.shininess);
        glDrawElementsInstanced(GL_TRIANGLES, mesh.m_meshData[meshNum]->m_numFaces * 3,
                                GL_UNSIGNED_INT, 0, mesh.m_numInstances);
    }
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}
That's a lot to take in, but the important calls are glDrawElementsInstanced and glBufferSubData.
If you look both functions up, I'm sure you will come to understand how instanced rendering works.
If you have any more questions, please ask.
The general rule is that you want to minimize the number of draw calls. If you put things into individual VAOs you have to perform a draw call for each VAO, and switching between VAOs and VBOs comes with a cost as well. Don't think of VAOs and VBOs as "model" containers, but as memory pools, where each VBO/VAO should be used to coalesce data with identical properties.
A particle system is the perfect candidate for putting everything into a single VBO/VAO, usually combined with instanced rendering, where the VBO contains the information about where to place each particle, as sketched below.
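A minimal sketch of that single-VAO layout for particles (particleVAO, quadVBO, instanceVBO, positions and numParticles are illustrative names): one static quad shared by every particle plus one dynamic per-instance position attribute, drawn with a single instanced call.
// Setup: per-vertex quad corners on attribute 0, per-instance positions on attribute 1.
glBindVertexArray(particleVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);                    // 4 corner vertices, static
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);                // one vec3 per particle, dynamic
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribDivisor(1, 1);                               // advance once per instance
glBindVertexArray(0);

// Per frame: update the CPU-side positions, re-upload, draw everything at once.
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, numParticles * 3 * sizeof(float), positions);
glBindVertexArray(particleVAO);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numParticles);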

Is it okay to send an array of objects to vertex shader?

Due to performance issues when drawing thousands of similar triangles (with different attributes), I would like to draw all of them using a single call to drawElements. But in order to draw each triangle with its respective attributes (e.g. world location, color, orientation), I believe I need to send an array buffer to my vertex shader, where the array is a list of all the triangle attributes.
If this approach is indeed the standard way to do it, then I am delighted to know I have the theory correct. I just need to know how to send an array buffer to the shader. Note that I currently know how to send multiple attributes and uniforms (though they are not contiguous in memory, which is what I'm looking for).
If not, I'd appreciate it if a resident expert could point me in the right direction.
I have a related question because I am having trouble actually implementing a VBO:
How to include model matrix to a VBO?
You do have the theory correct; let me just clear up a few things...
You can only "send a vertex array" to the vertex shader via pointers using attributes, so this part of the code stays pretty much the same. What you seem to be looking for are two optimisations: putting the whole buffer on the GPU and using interleaved vertex data.
To put the whole buffer on the GPU you need to use VBOs, as already mentioned in the comments. By doing that you create a raw buffer on the GPU into which you can put any data you want, and you can even modify it at runtime if needed. In your case that would be the vertex data. To use non-interleaved data you would probably create one buffer each for position, colour, orientation...
To use interleaved data you need to put it into the buffer sequentially. If possible, it is best to create a data structure that holds all the data of a single vertex (not the whole array) and send those to the buffer (a simple primitive array will work as well). A C example of such a structure:
typedef union {
    struct {
        float x, y, z;      // position
        float r, g, b, a;   // color
        float ox, oy, oz;   // orientation
    };
    struct {
        float position[3];
        float color[4];
        float orientation[3];
    };
} Vertex;
What you need to do then is set the correct pointers when using this data. In the VBO you start at NULL (0), which in this case represents the position; to address the colour you would then use ((float *)NULL) + 3, or a convenience such as offsetof(Vertex, color) in C. You also need to set the stride, which is the size of the Vertex structure, so you can use sizeof(Vertex) or a hardcoded sizeof(float) * (3 + 4 + 3).
After this, all you need to watch for is correct buffer binding/unbinding; the rest of your code should stay exactly the same.
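For example, with the Vertex union above, the attribute setup could look like this (the vbo handle and attribute locations 0, 1 and 2 are assumed; offsetof comes from <stddef.h>):
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Stride is one whole Vertex; the offsets locate each member inside it.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, color));
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, orientation));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);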

multiple VBOs --> IBO

I was just trying to learn how VBOs and IBOs work in WebGL.
Here is my understanding:
IBOs help reduce the amount of data passed down to the GPU. So we have a VBO, and then we create an IBO whose indices point into the VBO. My doubt was about how WebGL knows the IBO <--> VBO mapping. In the case of a single VBO/IBO, I thought that, since GL is a state machine, it looks at the last bound ARRAY_BUFFER and uses that buffer as the IBO's target. Below is a case with multiple VBOs (a position buffer and a color buffer):
gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexPositionBuffer);
gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, cubeVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexColorBuffer);
gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, cubeVertexColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer);
setMatrixUniforms();
gl.drawElements(gl.TRIANGLES, cubeVertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);
In the above code (which works; I took it from a tutorial), we have two VBOs and one IBO (cubeVertexIndexBuffer). What I don't understand is how WebGL knows that the indices of the IBO point to the position buffer and not the color buffer (even though the color buffer is the last bound ARRAY_BUFFER).
Please let me know what I am missing here.
The GL_ELEMENT_ARRAY_BUFFER binding has nothing to do with the GL_ARRAY_BUFFER binding; each binding is used by a different set of commands. This is why people should not start out learning OpenGL with multiple VBOs; interleaved VBOs avoid this sort of confusion. I will attempt to clear up your confusion below.
The GL_ARRAY_BUFFER binding is used by commands like glVertexAttribPointer (...). The GL_ELEMENT_ARRAY_BUFFER binding, on the other hand, is used by glDrawElements (...).
The indices in the IBO do not point into either of the vertex buffers per se. What they index into are the glVertexAttribPointer (...) setups you made while you had a GL_ARRAY_BUFFER bound.
In case it was not already obvious, you cannot have more than one buffer object of the same type bound at any given time. When you set your attrib pointers and you are not using interleaved arrays, you have to change the bound VBO. Thus, glDrawElements (...) could not care less which VBO is bound; it only cares about the vertex attrib pointers you set up and the bound element array. Those pointers are relative to whatever VBO was bound when you set them up, but once a pointer has been set, the state of that binding is no longer relevant.
The statefulness of OpenGL is awful. If you want to work with any of the buffers, you almost always need two calls: one to tell OpenGL which buffer you want to use, and a second (and third and more) that operate on that buffer. The ambiguous buffer names make the whole thing even more confusing. So, let us quickly clear up some things:
As explained here, binding to gl.ARRAY_BUFFER (or GL_ARRAY_BUFFER) indicates that the specified buffer is a VBO.
gl.ELEMENT_ARRAY_BUFFER (or GL_ELEMENT_ARRAY_BUFFER) indicates an IBO (that link is worth checking out). An IBO represents one complete surface: it is a list of indices into all the vertices that the surface is made of. The IBO needs to be bound to gl.ELEMENT_ARRAY_BUFFER before a draw call, because that is how the pipeline knows which vertices are where.
Since you sometimes need multiple attributes per vertex that you might (for some reason) want to store in separate buffers, you can use a single IBO to index into multiple VBOs.
Finally (and I think this is where your confusion lies), vertexAttribPointer tells the shader to look up the given attribute, of the given type, at the given relative offset in the VBO that is bound when vertexAttribPointer is called. This way, the pipeline can fetch each attribute once it has the right vertex index, which it reads from the IBO. In your case, vertexAttribPointer says that the vertexPositionAttribute comes from cubeVertexPositionBuffer, and the color from your color buffer. To avoid calling vertexAttribPointer before every draw call, you can instead use a VAO to cache that information.
