I want a vertex array object in OpenGL ES 2.0 to hold two attributes from different buffers, with the second attribute read from client memory (glBindBuffer(GL_ARRAY_BUFFER, 0)). But I get a runtime error:
GLuint my_vao;
GLuint my_buffer_attrib0;
GLfloat attrib0_data[] = { 0, 0, 0, 0 };
GLfloat attrib1_data[] = { 1, 1, 1, 1 };
void init()
{
    // setup vao
    glGenVertexArraysOES(1, &my_vao);
    glBindVertexArrayOES(my_vao);
    // setup attrib0 as a vbo
    glGenBuffers(1, &my_buffer_attrib0);
    glBindBuffer(GL_ARRAY_BUFFER, my_buffer_attrib0);
    glBufferData(GL_ARRAY_BUFFER, sizeof(attrib0_data), attrib0_data, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    // "end" vao
    glBindVertexArrayOES(0);
}
void draw()
{
    glBindVertexArrayOES(my_vao);
    // (now I assume attrib0 is bound to my_buffer_attrib0,
    // and attrib1 is not bound. but is this assumption true?)
    // setup attrib1
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, attrib1_data);
    // draw using attrib0 and attrib1
    glDrawArrays(GL_POINTS, 0, 1); // runtime error: Thread 1: EXC_BAD_ACCESS (code=2, address=0x0)
}
What I want to achieve is to wrap the binding of the two attributes in a vertex array object:
void draw_ok()
{
    glBindVertexArrayOES(0);
    // setup attrib0
    glBindBuffer(GL_ARRAY_BUFFER, my_buffer_attrib0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
    // setup attrib1
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, attrib1_data);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    // draw using attrib0 and attrib1
    glDrawArrays(GL_POINTS, 0, 1); // ok
}
Is it possible to bind two different buffers in a vertex array object? Are OES_vertex_array_object VAOs different from (plain) OpenGL vertex array objects? Also note that I get this error in Xcode running the iOS Simulator. These are related links:
Use of VAO around VBO in Open ES iPhone app Causes EXC_BAD_ACCESS When Call to glDrawElements
OES_vertex_array_object
Well, a quote from the extension specification explains it quite simply:
Should a vertex array object be allowed to encapsulate client vertex arrays?
RESOLVED: No. The OpenGL ES working group agreed that compatibility with OpenGL and the ability to guide developers to more performant drawing by enforcing VBO usage were more important than the possibility of hurting adoption of VAOs.
So you can indeed bind two different buffers in a VAO, (well, the buffer binding isn't stored in the VAO, anyway, only the source buffers of the individual attributes, set through glVertexAttribPointer) but you cannot use client space memory in a VAO, only VBOs. This is the same for desktop GL.
So I would advise you to store all your vertex data in VBOs. If you want to use client memory because the data is updated dynamically and you think VBOs won't buy you anything there, that's still the wrong approach. Just use a VBO with a dynamic usage (GL_DYNAMIC_DRAW or even GL_STREAM_DRAW) and update it using glBuffer(Sub)Data or glMapBuffer (or the good old glBufferData(..., NULL); glMapBuffer(GL_WRITE_ONLY) combination).
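For instance, here is a minimal sketch of that dynamic route, assuming attrib1 has been moved into a hypothetical VBO named my_buffer_attrib1 that was created with GL_DYNAMIC_DRAW:
// per frame, before drawing: re-upload the changed attribute data into the VBO
glBindBuffer(GL_ARRAY_BUFFER, my_buffer_attrib1);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(attrib1_data), attrib1_data);
// (alternatively, orphan the store first with
//  glBufferData(GL_ARRAY_BUFFER, sizeof(attrib1_data), NULL, GL_DYNAMIC_DRAW)
//  and then refill it)
glBindBuffer(GL_ARRAY_BUFFER, 0);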
Remove the following line:
glBindBuffer(GL_ARRAY_BUFFER, 0);
from the draw() function. You didn't bind any buffer before that point, and it may mess up the buffer state.
After some digging (reading), the answer was found in OES_vertex_array_object. It seems that OES_vertex_array_object VAOs hold state on the server side (vertex buffer objects), and client state is used if and only if the zero object is bound. It remains to be answered whether OES_vertex_array_object VAOs are the same as plain OpenGL VAOs. Please comment if you know the answer to this. Below are quotations from OES_vertex_array_object:
This extension introduces vertex array objects which encapsulate
vertex array states on the server side (vertex buffer objects).
* Should a vertex array object be allowed to encapsulate client
vertex arrays?
RESOLVED: No. The OpenGL ES working group agreed that compatibility
with OpenGL and the ability to guide developers to more
performant drawing by enforcing VBO usage were more important than
the possibility of hurting adoption of VAOs.
An INVALID_OPERATION error is generated if
VertexAttribPointer is called while a non-zero vertex array object
is bound, zero is bound to the <ARRAY_BUFFER> buffer object binding
point and the pointer argument is not NULL [fn1].
[fn1: This error makes it impossible to create a vertex array
object containing client array pointers, while still allowing
buffer objects to be unbound.]
And the presently attached vertex array object has the following
impacts on the draw commands:
While a non-zero vertex array object is bound, if any enabled
array's buffer binding is zero, when DrawArrays or
DrawElements is called, the result is undefined.
So EXC_BAD_ACCESS was the undefined result!
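So the undefined behavior goes away once attrib1 is also sourced from a VBO while the VAO is bound. A minimal sketch of that part of init() (my_buffer_attrib1 is a hypothetical second buffer object):
glBindVertexArrayOES(my_vao);
// ... attrib0 setup from my_buffer_attrib0 as before ...
// attrib1 must come from a VBO too, never from a client pointer
GLuint my_buffer_attrib1;
glGenBuffers(1, &my_buffer_attrib1);
glBindBuffer(GL_ARRAY_BUFFER, my_buffer_attrib1);
glBufferData(GL_ARRAY_BUFFER, sizeof(attrib1_data), attrib1_data, GL_STATIC_DRAW);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glBindVertexArrayOES(0);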
The functionality you desire has now been accepted by the community as an extension to WebGL:
http://www.khronos.org/registry/webgl/extensions/OES_vertex_array_object/
Related
I am working on a simple iOS application to learn about OpenGL ES 2.0. In the project, I'm rendering 4 triangles in the shape of a pyramid, with some sliders to adjust the height of the apex of the pyramid and to rotate the modelViewMatrix about the y axis. I am trying to find out why, after rotating this object counter-clockwise to the point where triangles appear in front of other triangles, I can see through the near triangles. However, when rotating in the clockwise direction to the same point, the near triangles are opaque and occlude the furthest triangles.
I assumed that the reason was a lack of a depth render buffer, but after setting the property view.drawableDepthFormat = GLKViewDrawableDepthFormat16; the behavior persists.
For reference, this is my drawRect function where the drawing is done. The only other code is in viewDidLoad and in the global scope of the Xcode project here.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    [self.baseEffect prepareToDraw];
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, pos);
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    const GLvoid *off1 = NULL + offsetof(SceneVertex, position);
    glVertexAttribPointer(GLKVertexAttribPosition, // Identifies the attribute to use
                          3,                       // number of coordinates for attribute
                          GL_FLOAT,                // data is floating point
                          GL_FALSE,                // no fixed point scaling
                          sizeof(SceneVertex),     // total num bytes stored per vertex
                          off1);
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    const GLvoid *off2 = NULL + offsetof(SceneVertex, normal);
    glVertexAttribPointer(GLKVertexAttribNormal,   // Identifies the attribute to use
                          3,                       // number of coordinates for attribute
                          GL_FLOAT,                // data is floating point
                          GL_FALSE,                // no fixed point scaling
                          sizeof(SceneVertex),     // total num bytes stored per vertex
                          off2);
    GLenum error = glGetError();
    if (GL_NO_ERROR != error)
    {
        NSLog(@"GL Error: 0x%x", error);
    }
    int sizeOfTries = sizeof(triangles);
    int sizeOfSceneVertex = sizeof(SceneVertex);
    int numArraysToDraw = sizeOfTries / sizeOfSceneVertex;
    glDrawArrays(GL_TRIANGLES, 0, numArraysToDraw);
}
It's not enough just to have a depth buffer; you need to tell OpenGL how you want to use it. Try adding the following lines:
glEnable(GL_DEPTH_TEST); // Enable depth testing
glDepthMask(GL_TRUE); // Enable depth write
glDepthFunc(GL_LEQUAL); // Choose the depth comparison function
While we're here, I'd recommend GLKViewDrawableDepthFormat24 over GLKViewDrawableDepthFormat16 for most use cases (better precision).
I'd also recommend familiarizing yourself with Xcode's frame capture feature (doc); it really is an invaluable way to figure out what is going on when rendering is not working as intended.
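For example, a minimal sketch of where this state could go relative to the draw code in the question (the viewDidLoad line is an assumption based on the GLKView setup described above):
// in viewDidLoad (assumption): request a deeper depth buffer
// view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

// in glkView:drawInRect:, before clearing and drawing
glEnable(GL_DEPTH_TEST); // enable depth testing
glDepthMask(GL_TRUE);    // allow writes to the depth buffer
glDepthFunc(GL_LEQUAL);  // keep fragments that are nearer or at equal depth
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... bind buffers, set attribute pointers, glDrawArrays as before ...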
I'm having an issue while rendering a square in WebGL. When I run the program in Chrome, I'm getting the error:
GL ERROR :GL_INVALID_OPERATION : glDrawArrays: attempt to access out of range vertices in attribute 0
I've assumed this is because, at some point, the buffers are looking at the wrong arrays when trying to get data. I've pinpointed the issue to the
gl.vertexAttribPointer(pColorIndex, 4, gl.FLOAT, false, 0, 0);
line in the code below. i.e. if I change the 4 to a 2, the code will run, but not properly (as I'm looking at a vec4 for color data here). Is there an issue with the way my arrays are bound?
bindColorDisplayShaders();
// clear the framebuffer
gl.clear(gl.COLOR_BUFFER_BIT);
// bind the shader
gl.useProgram(shader);
// set the value of the uniform variable in the shader
var shift_loc = gl.getUniformLocation(shader, "shift");
gl.uniform1f(shift_loc, .5);
// bind the buffer
gl.bindBuffer(gl.ARRAY_BUFFER, vertexbuffer);
// get the index for the a_Position attribute defined in the vertex shader
var positionIndex = gl.getAttribLocation(shader, 'a_Position');
if (positionIndex < 0) {
  console.log('Failed to get the storage location of a_Position');
  return;
}
// "enable" the a_position attribute
gl.enableVertexAttribArray(positionIndex);
// associate the data in the currently bound buffer with the a_position attribute
// (The '2' specifies there are 2 floats per vertex in the buffer. Don't worry about
// the last three args just yet.)
gl.vertexAttribPointer(positionIndex, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// bind the buffer with the color data
gl.bindBuffer(gl.ARRAY_BUFFER, chosencolorbuffer);
var pColorIndex = gl.getUniformLocation(shader, 'a_ChosenColor');
if (pColorIndex < 0) {
  console.log('Failed to get the storage location of a_ChosenColor');
}
gl.enableVertexAttribArray(pColorIndex);
gl.vertexAttribPointer(pColorIndex, 4, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// draw, specifying the type of primitive to assemble from the vertices
gl.drawArrays(gl.TRIANGLES, 0, numPoints);
You can only use either a uniform or a vertex attribute; these are two different things.
When using a vertex attribute, you have to match the number of vertices in your position buffer, and you get the location using gl.getAttribLocation.
When using a uniform, you're not supplying its data via array buffers but setting its value with the gl.uniform* methods.
In your example: gl.uniform4fv(pColorIndex, yourColorVector).
In various sources I've seen recommendations for 'unbinding' buffers after use, i.e. setting the binding to null. I'm curious if there is really a need for this, e.g.:
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
// ... buffer related operations ...
gl.bindBuffer(gl.ARRAY_BUFFER, null); // unbinding
On the one hand, it's likely better for debugging as you'll probably get better error messages, but is there any significant performance loss from unbinding buffers all the time? It's generally recommended to reduce WebGL calls where possible.
The reason people often unbind buffers and other objects is to minimize the side effects of functions/methods. It's a general software development principle that functions should only perform their advertised operations, and not have any unexpected side effects. Therefore, it's a common practice that if a function binds objects, it unbinds them before returning.
Let's look at a typical example (with no particular language syntax). First, we define a function that creates a texture without any defined content:
function GLuint createEmptyTexture(int texWidth, int texHeight) {
    GLuint texId;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    return texId;
}
Then, let's have another function to create a texture. But this one fills the texture with data from a buffer (which I believe is not supported in WebGL yet, but it still helps illustrate the general principle):
function GLuint createTextureFromBuffer(int texWidth, int texHeight,
                                        GLuint bufferId) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufferId);
    GLuint texId;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    return texId;
}
Now, I can call these functions, and everything works as expected:
GLuint tex1 = createEmptyTexture(width, height);
GLuint tex2 = createTextureFromBuffer(width, height, bufferId);
But see what happens if I call them in the opposite order:
GLuint tex1 = createTextureFromBuffer(width, height, bufferId);
GLuint tex2 = createEmptyTexture(width, height);
This time, both textures will be filled with the buffer content, because the pixel unpack buffer was still bound after the first function returned, and therefore when the second function was called.
One way of avoiding this is to unbind the pixel unpack buffer at the end of the function that binds it. And to make sure that similar issues can not happen because the texture is still bound, it can unbind that one as well:
function GLuint createTextureFromBuffer(int texWidth, int texHeight,
                                        GLuint bufferId) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufferId);
    GLuint texId;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
    return texId;
}
With this implementation, both call sequences of using these two functions will produce the same result.
There are other approaches to address this. For example:
1) Each function documents its preconditions and side effects, and the caller is responsible for making any necessary state changes to meet the preconditions of the next function after calling a function with side effects.
2) Each function is completely responsible for setting up all its state. In the example above, this would mean that the createEmptyTexture() function would have to unbind the pixel unpack buffer, because it relies on none being bound.
Approach 1 does not really scale well, and will be painful to maintain in larger systems. Approach 2 is also unsatisfactory because OpenGL has a lot of state, and having to set up all relevant state in every function would be verbose and inefficient.
This is really part of a bigger question: How do you deal with the state based nature of OpenGL in a modular software architecture? Buffer bindings are just one example of state you need to deal with. This is typically not very difficult to handle in small programs that you write by yourself, but is a possible trouble spot in larger systems. It gets worse if components from different sources (e.g. different vendors) are mixed.
I don't think there's one single approach that is ideal in all possible scenarios. The important thing is that you pick one clearly defined strategy, and use it consistently. How to handle this best in various scenarios is somewhat beyond the scope of an answer here.
While unbinding buffers should be fairly cheap, I'm not a fan of unnecessary calls. So I would try to avoid those calls, unless you really feel you need them to enforce a clear and consistent policy for the software you are writing.
I have four VBOs (BufferA, BufferB, BufferC and BufferD) and two programs (program1 and program2).
The main steps of the logic are:
glUseProgram(program1);
glBindBuffer(GL_ARRAY_BUFFER, BufferA);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, BufferB);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, Vertex1Count);
glEndTransformFeedback();
swap(BufferA, BufferB);
glUseProgram(program2);
glBindBuffer(GL_ARRAY_BUFFER, BufferC);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, BufferD);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, Vertex2Count);
glEndTransformFeedback();
swap(BufferC, BufferD);
Questions: What do I need to do to gain access to BufferB from program2?
Can I bind BufferB as a texture somehow and read it with texelFetch?
I am using iOS 7 and OpenGL ES 3.0.
Yes, you can. You may use the buffer as a PBO and then create a texture from it.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BufferB);
GLuint someTex;
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &someTex);
glBindTexture(GL_TEXTURE_2D, someTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1, sizeOfYourBuffer, 0, GL_RGBA, GL_FLOAT, nullptr);
// nullptr is interpreted as an offset into your buffer
When using a PBO, TexImage* works fast since the CPU is not involved in the texture initialization.
A disadvantage of the approach is that the texture is not allowed to be changed. But if you are implementing an iterative method you may use the "ping-pong" strategy (have different buffers for the previous and new state; swap them after visualization).
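As an illustration, a minimal sketch of wiring that buffer-backed texture into program2 (u_prevState is a hypothetical sampler uniform name; inside a #version 300 es shader it would be read with texelFetch):
glUseProgram(program2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, someTex);                       // the texture created from BufferB
GLint loc = glGetUniformLocation(program2, "u_prevState");   // hypothetical sampler name
glUniform1i(loc, 0);                                         // sample from texture unit 0
// in the shader: vec4 v = texelFetch(u_prevState, ivec2(index, 0), 0);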
I'm working on OpenGL ES 2.0 using an OMAP3530 development board on Windows CE 7.
My task is to load a 24-bit image file, rotate it by an angle about the z-axis, and export the image (buffer).
For this task I've created an FBO for off-screen rendering, loaded the image file as a texture using glTexImage2D(), applied the texture to a quad, rotated that quad using the PVRTMat4::RotationZ() API, and read it back using the glReadPixels() API. Since it is a single-frame process, I made only 1 loop.
Here are the problems I'm facing now.
1) All APIs take a different processing time on every run, i.e. sometimes when I run my application I get different processing times for all the APIs.
2) glDrawArrays() is taking too much time (~50 ms - 80 ms).
3) glReadPixels() is also taking too much time, ~95 ms for an 800x600 image.
4) Loading a 32-bit image is much faster than a 24-bit image, so a conversion is needed.
I'd like to ask you all: if anybody has faced or solved a similar problem, kindly make any suggestions.
Here is the code snippet of my application.
void BindTexture() {
    glGenTextures(1, &m_uiTexture);
    glBindTexture(GL_TEXTURE_2D, m_uiTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ImageWidth, ImageHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pTexData);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, TCHAR *lpCmdLine, int nCmdShow)
{
    // Fragment and vertex shader code
    char* pszFragShader = "Same as in RenderToTexture sample";
    char* pszVertShader = "Same as in RenderToTexture sample";
    CreateWindow(ImageWidth, ImageHeight); // For this I've referred to the OGLES2HelloTriangle_Windows.cpp example
    LoadImageBuffers();
    BindTexture();
    // Generate & bind the frame buffer and render buffer
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_auiFbo, 0);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, ImageWidth, ImageHeight);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_auiDepthBuffer);
    BindTexture();
    GLfloat Angle = 0.02f;
    GLfloat afVertices[] = { /* vertices to draw a quad */ };
    glGenBuffers(1, &ui32Vbo);
    LoadVBOs(); // APIs to load the VBOs
    // Draws the quad for 1 frame
    while (g_bDemoDone == false)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, m_auiFbo);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        PVRTMat4 mRot, mTrans, mMVP;
        mTrans = PVRTMat4::Translation(0, 0, 0);
        mRot = PVRTMat4::RotationZ(Angle);
        glBindBuffer(GL_ARRAY_BUFFER, ui32Vbo);
        glDisable(GL_CULL_FACE);
        int i32Location = glGetUniformLocation(uiProgramObject, "myPMVMatrix");
        mMVP = mTrans * mRot;
        glUniformMatrix4fv(i32Location, 1, GL_FALSE, mMVP.ptr());
        // Pass the vertex data
        glEnableVertexAttribArray(VERTEX_ARRAY);
        glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, m_ui32VertexStride, 0);
        // Pass the texture coordinates data
        glEnableVertexAttribArray(TEXCOORD_ARRAY);
        glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, m_ui32VertexStride, (void*) (3 * sizeof(GLfloat)));
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glReadPixels(0, 0, ImageWidth, ImageHeight, GL_RGBA, GL_UNSIGNED_BYTE, pOutTexData);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        eglSwapBuffers(eglDisplay, eglSurface);
    }
    DeInitAll();
}
The PowerVR architecture cannot render a single frame and allow the ARM to read it back quickly. It is just not designed to work that way quickly - it is a deferred rendering, tile-based architecture. The execution times you are seeing are to be expected, and using an FBO is not going to make it faster either. Also, beware that the OpenGL ES drivers on OMAP for Windows CE are really poor quality. Consider yourself lucky if they work at all.
A better design would be to display the OpenGL ES rendering directly to the DSS and avoid using glReadPixels() and the FBO completely.
I got improved performance for rotating an image buffer by using multiple FBOs and PBOs.
Here is the pseudo code snippet of my application.
InitGL()
    GenerateShaders();
    Generate3Textures(); // Generate 3 null textures
    Generate3FBO();      // Generate 3 FBOs & attach each texture to 1 FBO
    Generate3PBO();      // Generate 3 PBOs to read back from the FBOs

DrawGL()
{
    BindFBO1;
    BindTexture1;
    UploadtoTexture1;
    Do some processing & draw it in FBO1;

    BindFBO2;
    BindTexture2;
    UploadtoTexture2;
    Do some processing & draw it in FBO2;

    BindFBO3;
    BindTexture3;
    UploadtoTexture3;
    Do some processing & draw it in FBO3;

    BindFBO1;
    ReadPixelfromFBO1;
    UnpackToPBO1;

    BindFBO2;
    ReadPixelfromFBO2;
    UnpackToPBO2;

    BindFBO3;
    ReadPixelfromFBO3;
    UnpackToPBO3;
}

DeinitGL();
DeallocateALL();
This way I achieved a 50% increase in performance for the overall processing.
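For reference, a minimal sketch of the read-back step that pseudocode describes, assuming GL_PIXEL_PACK_BUFFER is available (OpenGL ES 3.0 or an equivalent extension; fbo1, pbo1, ImageWidth and ImageHeight are placeholder names):
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo1);
// with a pack buffer bound, glReadPixels writes into the PBO instead of client memory
glReadPixels(0, 0, ImageWidth, ImageHeight, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// later, once the GPU has finished, map the PBO to get at the pixels
void* pixels = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, ImageWidth * ImageHeight * 4, GL_MAP_READ_BIT);
// ... use the pixel data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);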