I'm working on a little 2D graphics/game library for personal use, and I'm trying to think of a way to improve performance when drawing tiled maps. Currently I create a static GL_QUADS VBO for each tile in the map and then draw it to the screen. Each VBO references a texture loaded into memory, which is sub-imaged and mapped to the VBO.
I'm testing with a 20 x 20 tile map. With my current implementation, since I have to draw each individual tile, that is 400 glDraw* calls every frame.
Is there any way to, for example, make each row of the tile map ONE VBO? That would reduce the glDraw* calls to 20 in this example. But how would I map the sub-images? Individual tiles can also be rotated.
I have seen some references to using a texture atlas. Would that be a good alternative? Any useful links on how to implement this in OpenGL?
CODE:
Current render method:
public void render() {
    texture.bind();
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    for (SpriteSheet spriteSheet : spriteSheets) {
        VBO vbo = spriteSheet.getVBO();
        float angle = spriteSheet.getAngle();
        vbo.bind();
        if (angle != 0) {
            glPushMatrix();
            Vector2f position = spriteSheet.getPosition();
            glTranslatef(position.x, position.y, 0);
            glRotatef(angle, 0.0f, 0.0f, 1);
            glTranslatef(-position.x, -position.y, 0);
            glVertexPointer(Vertex.positionElementCount, GL_FLOAT, Vertex.stride, Vertex.positionByteOffset);
            glColorPointer(Vertex.colorElementCount, GL_FLOAT, Vertex.stride, Vertex.colorByteOffset);
            glTexCoordPointer(Vertex.textureElementCount, GL_FLOAT, Vertex.stride, Vertex.textureByteOffset);
            glDrawArrays(vbo.getMode(), 0, Vertex.elementCount);
            glPopMatrix();
        } else {
            glVertexPointer(Vertex.positionElementCount, GL_FLOAT, Vertex.stride, Vertex.positionByteOffset);
            glColorPointer(Vertex.colorElementCount, GL_FLOAT, Vertex.stride, Vertex.colorByteOffset);
            glTexCoordPointer(Vertex.textureElementCount, GL_FLOAT, Vertex.stride, Vertex.textureByteOffset);
            glDrawArrays(vbo.getMode(), 0, Vertex.elementCount);
        }
        vbo.unbind();
    }
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    texture.unbind();
}
There are several things you can do.
A texture atlas is one option, but you could use a GL_TEXTURE_2D_ARRAY as well, using the third texture coordinate to select which layer to sample.
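A minimal sketch of the texture-array route, assuming all tiles are the same size (shown as C-style GL calls; the LWJGL equivalents have the same names, and tileW, tileH, tileCount, tilePixels are placeholders for however you load your tiles). GL_TEXTURE_2D_ARRAY needs OpenGL 3.0+ (or EXT_texture_array) and a shader that samples a sampler2DArray:

GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* allocate one layer per tile image */
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, tileW, tileH, tileCount,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
for (int i = 0; i < tileCount; ++i) {
    /* zoffset = i selects the layer to fill */
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, tileW, tileH, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, tilePixels[i]);
}
/* in the fragment shader: texture(tiles, vec3(uv, layer)) */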
The next thing to think about is instancing: keep a single quad in the buffer and make OpenGL draw it several times, using an additional per-instance buffer to select the texture layer and rotation for each drawn instance.
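A hedged sketch of that instancing idea. It requires GL 3.3 (or ARB_instanced_arrays) and a shader-based pipeline rather than the fixed-function client arrays above; instanceVBO, ATTR_INSTANCE, instances, and the TileInstance layout are assumptions:

/* hypothetical per-instance layout: position, atlas layer, rotation */
typedef struct { float x, y, layer, angle; } TileInstance;

glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, numTiles * sizeof(TileInstance),
             instances, GL_STATIC_DRAW);
glEnableVertexAttribArray(ATTR_INSTANCE);
glVertexAttribPointer(ATTR_INSTANCE, 4, GL_FLOAT, GL_FALSE,
                      sizeof(TileInstance), (void*)0);
glVertexAttribDivisor(ATTR_INSTANCE, 1);  /* advance once per instance, not per vertex */

/* the vertex shader translates/rotates the unit quad and passes the layer on;
   the whole 20 x 20 map becomes a single draw call: */
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numTiles);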
I have developed more than 20 mobile apps using OpenGL ES 2.0. Now I am trying to build a renderer for my apps on OS X, so I am using OpenGL 3.3 with GLSL 1.30. Yesterday I ran into a problem: I can't use a texture (render-to-texture) onto which I drew particles with GL_LINES at a width of 1.0 (which is the maximum line width in OpenGL 3.3; why?) on an off-screen FBO.
When I draw other geometry into the off-screen FBO and use it as a texture on-screen, I can see it. Likewise, if I draw the small particles directly on-screen, I can see them clearly. But if I draw the particle lines off-screen and try to use the result as a texture on the main screen, I only see a black texture.
I have already checked for GL errors, checked the FBO's status, and checked the blending options, but I am still struggling to solve it.
Does anyone have an idea how to solve this?
Even though I think my code is okay, I've attached a little code below.
// AFTER generate and bind FBO, generate RTT
StarTexture fboTex;
fboTex.texture_width = texture_width;
fboTex.texture_height = texture_height;
glGenTextures(1, &fboTex.texture_id);
glBindTexture(GL_TEXTURE_2D,fboTex.texture_id);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture_width, texture_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTex.texture_id, 0);
And this is how I draw the particles into the off-screen FBO:
glUniformMatrix4fv( h_Uniforms[UNIFORMS_PROJECTION], 1, GL_FALSE, g_proxtrans.s);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_PARTICLE]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(Vec3)*ParticleNumTotal*2, &p_particle_lc_xy[0]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_POSITION], 3, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_POSITION]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_COLOR]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_COLOR], 4, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_COLOR]);
glLineWidth(Thickness); // 1.0, because it is the maximum
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, h_VBO[VBO_INDEX_OFF1]);
glDrawElements(GL_LINES, 400, GL_UNSIGNED_INT, 0); // 200 lines
And this is how I draw on the main screen:
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear( GL_COLOR_BUFFER_BIT);
starfbo->bindingVAO1();
glViewport(0, 0, ogl_Width, ogl_Height);
glUseProgram(h_Shader_Program[Shader_Program_FINAL]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_TEXCOORD2]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_UV2], 2, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_UV2]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_SQCOORD2]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_POSITION3], 2, GL_FLOAT, 0, 0,0 );
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_POSITION3]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, h_VBO[VBO_INDEX_ON]);
glDrawElements(GL_TRIANGLES,sizeof(squareIndices)/sizeof(squareIndices[0]), GL_UNSIGNED_INT ,(void*)0);
glUniformMatrix4fv( h_Uniforms[UNIFORMS_PROJECTION], 1, GL_FALSE, g_proxtrans.s);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_PARTICLE]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(Vec3)*ParticleNumTotal*2, &p_particle_lc_xy[0]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_POSITION], 3, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_POSITION]);
glBindBuffer(GL_ARRAY_BUFFER, h_VBO[VBO_COLOR]);
glVertexAttribPointer(h_Attributes[ATTRIBUTES_COLOR], 4, GL_FLOAT, 0, 0,0);
glEnableVertexAttribArray(h_Attributes[ATTRIBUTES_COLOR]);
glLineWidth(Thickness);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, h_VBO[VBO_INDEX_OFF1]);
glDrawElements(GL_LINES, 400, GL_UNSIGNED_INT, 0);
If the resolution of the rendered image is much larger than the size (in pixels) it ends up being rendered at, it's certainly possible that small features disappear entirely.
Picture an extreme case. Say you render a few thin lines into a 1000x1000 texture, lighting up a very small fraction of the total 1,000,000 pixels. Now you map this texture onto a quad that has a size of 10x10 pixels when displayed. The fragment shader is invoked once for each pixel (assuming no MSAA), which makes for 100 shader invocations. Each of these 100 invocations samples the texture. With linear sampling and no mipmapping, it will read 4 texels for each sample operation. In total, 100 * 4 = 400 texels are read while rendering the polygon. It's quite likely that reading these 400 texels out of the total 1,000,000 will completely miss all of the lines you rendered into the texture.
One way to reduce this problem is to use mipmapping. This will generally prevent the features from disappearing completely. But small features will still fade because more and more texels are averaged in higher mipmap levels, where most of the texels are black.
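For the texture created in the question, enabling mipmapped sampling would be roughly this (a sketch; the filter change plus a glGenerateMipmap after each off-screen pass are the key parts):

glBindTexture(GL_TEXTURE_2D, fboTex.texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
/* regenerate the mip chain after rendering the lines off-screen */
glGenerateMipmap(GL_TEXTURE_2D);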
A better but slightly more complex approach: instead of using automatically generated mipmaps, create the mipmaps manually by rendering the same content into each mipmap level.
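A sketch of that idea, assuming storage was allocated for every level (e.g. one glTexImage2D per level) and that DrawParticles() stands in for the GL_LINES pass:

for (int level = 0; level < numLevels; ++level) {
    /* attach mip level 'level' of the texture as the render target */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, fboTex.texture_id, level);
    glViewport(0, 0, texture_width >> level, texture_height >> level);
    glClear(GL_COLOR_BUFFER_BIT);
    DrawParticles();  /* same line rendering as level 0 */
}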
It might also be good enough simply to avoid making the texture too large, or to create your own wide lines by drawing them as polygons instead of using line primitives.
glDrawElements(GL_LINES, 400, GL_UNSIGNED_INT, 0);
GL_UNSIGNED_INT cannot be used as the index type in OpenGL ES, unlike desktop OpenGL. Oddly, it works on iOS but not Android; iOS most likely exposes the GL_OES_element_index_uint extension, which permits 32-bit indices.
The type parameter must be GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT in unextended OpenGL ES.
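Applied to the draw call above, a minimal ES-safe version rebuilds the index buffer with 16-bit values (a sketch reusing the question's buffer handle):

GLushort lineIndices[400];  /* was GLuint */
/* ... fill lineIndices exactly as before ... */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, h_VBO[VBO_INDEX_OFF1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(lineIndices), lineIndices, GL_STATIC_DRAW);
glDrawElements(GL_LINES, 400, GL_UNSIGNED_SHORT, 0);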
I guess it is somehow possible since this:
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
glResolveMultisampleFramebufferAPPLE();
does exactly that, and on top of that resolves the multisampling. However, it's an Apple extension, and I was wondering if there is something similar in the vanilla implementation that copies all the logical buffers from one framebuffer to another without the multisampling part. GL_READ_FRAMEBUFFER doesn't seem to be a valid target, so I'm guessing there is no direct way? How about workarounds?
EDIT: Seems it's possible to use glCopyImageSubData in OpenGL 4, unfortunately not in my case since I'm using OpenGL ES 2.0 on iPhone, which seems to be lacking that function. Any other way?
glBlitFramebuffer accomplishes what you are looking for. Additionally, you can blit one texture onto another without requiring two framebuffers. I'm not sure using one FBO is possible with OpenGL ES 2.0, but the following code could easily be modified to use two FBOs; you just need to attach the textures to different framebuffer attachments. glBlitFramebuffer will even manage downsampling/upsampling for anti-aliasing applications! Here is an example of its usage:
// bind fbo as both read and draw framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
// attach source texture to one color attachment and read from it
glBindTexture(GL_TEXTURE_2D, m_textureHandle0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_textureHandle0, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);
// attach destination texture to another color attachment and draw into it
glBindTexture(GL_TEXTURE_2D, m_textureHandle1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_textureHandle1, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
// specify source and destination (sub)rectangles; glBlitFramebuffer takes corner
// coordinates (x0, y0, x1, y1), so this assumes the rectangles start at the origin
glBlitFramebuffer(from.left(), from.top(), from.width(), from.height(),
                  to.left(), to.top(), to.width(), to.height(),
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// release state
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
Tested in OpenGL 4; glBlitFramebuffer is not supported in OpenGL ES 2.0.
Here is the same approach generalized into a function that supports two framebuffers:
// Assumes the two textures are the same dimensions
void copyFrameBufferTexture(int width, int height, int fboIn, int textureIn, int fboOut, int textureOut)
{
    // Bind input FBO + texture to a color attachment
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboIn);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureIn, 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    // Bind destination FBO + texture to another color attachment
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboOut);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, textureOut, 0);
    glDrawBuffer(GL_COLOR_ATTACHMENT1);
    // Specify source and destination (sub)rectangles
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    // Unbind the color attachments, using the same targets they were attached to
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, 0, 0);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
}
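An example call, assuming both textures already have width x height storage and the FBO/texture handles (placeholder names) were created beforehand:

copyFrameBufferTexture(width, height, fboA, texSrc, fboB, texDst);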
You can do it directly with OpenGL ES 2.0, and it seems that no extension is needed either.
I am not really sure what you are trying to achieve, but in a general way: simply remove the attachments of the FBO in which you have done your off-screen rendering. Then bind the default FBO to be able to draw on screen; there you can simply draw a quad that fills the screen, with an orthographic camera and a shader that takes your off-screen generated textures as input.
You will be able to do the resolve too if you are using multisampled textures.
glBindFramebuffer(GL_FRAMEBUFFER, off_screenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Default FBO, on iOS it is 1 if I am correct
// Set the viewport at the size of screen
// Use your compositing shader (it doesn't have to manage any transform)
// Active and bind your textures
// Sent textures uniforms
// Draw your quad
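A minimal sketch of those commented steps (compositeProgram, the texture handle, and the attribute/uniform locations are placeholders; client-side vertex arrays are legal in ES 2.0):

static const GLfloat quadPos[] = { -1,-1,  1,-1,  -1,1,  1,1 };  /* clip space */
static const GLfloat quadUV[]  = {  0,0,   1,0,    0,1,  1,1 };

glViewport(0, 0, screenWidth, screenHeight);
glUseProgram(compositeProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, offscreenTexture);
glUniform1i(u_texture0Loc, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);          /* use client-side arrays for the quad */
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, quadPos);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, 0, quadUV);
glEnableVertexAttribArray(uvLoc);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);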
Here is an example of the shader:
// Vertex
attribute vec2 in_position2D;
attribute vec2 in_texCoord0;
varying lowp vec2 v_texCoord0;
void main()
{
    v_texCoord0 = in_texCoord0;
    gl_Position = vec4(in_position2D, 0.0, 1.0);
}
// Fragment
uniform sampler2D u_texture0;
varying lowp vec2 v_texCoord0;
void main()
{
    gl_FragColor = texture2D(u_texture0, v_texCoord0);
}
I am teaching myself about OpenGL ES and vertex buffer objects (VBOs). I have written code that is supposed to draw one red triangle, but instead it colours the screen black:
- (void)drawRect:(CGRect)rect {
    // Draw a red triangle in the middle of the screen:
    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    // Setup the vertex data:
    typedef struct {
        float x;
        float y;
    } Vertex;
    const Vertex vertices[] = {{50,50}, {50,150}, {150,50}};
    const short indices[3] = {0,1,2};
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    NSLog(@"drawrect");
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    // The following line does the actual drawing to the render buffer:
    glDrawElements(GL_TRIANGLE_STRIP, 3, GL_UNSIGNED_SHORT, indices);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, framebuffer);
    [eAGLcontext presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Here vertexBuffer is of type GLuint. What is going wrong? Thanks for your help.
Your vertices don't have a Z component. Try {{50,50,-100}, {50,150,-100}, {150,50,-100}}; by default the camera looks down the Z axis, so putting the triangle at negative Z should put it on screen. If you still can't see it, try smaller numbers; I'm not sure what your near and far clipping distances are, and if they're not set I don't know what the defaults are. This might not be the only issue, but it's the only one I can see from a quick look.
You need to add
glViewport(0, 0, 320, 480);
where you create the frame buffer and set up the context.
And replace your call to glDrawElements with
glDrawArrays(GL_TRIANGLE_STRIP, ...);
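Putting the answers together, a sketch of the corrected draw path. This assumes an orthographic projection matching the view, such as glOrthof(0, 320, 0, 480, -1, 1), was set up where the context was created, alongside the glViewport call:

glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);   /* 2 components, matching the Vertex struct */
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3);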
I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);
However, it appears glCopyTexImage2D is not supported on ES. Reading a related thread, it seems I can use the framebuffer and fragment shaders to extract the depth data. So I'm trying to write the depth component to the color buffer, then copy it:
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// turn on depth rendering
glUseProgram(m_BaseShader.uiId);
// this is a switch to cause the fragment shader to just dump out the depth component
glUniform1i(uiBaseShaderRenderDepth, true);
// and for this, the color buffer needs to be on
glColorMask(GL_TRUE,GL_TRUE,GL_TRUE,GL_TRUE);
// and clear it to 1.0, like how the depth buffer starts
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// draw the scene
DrawScene();
// bind our texture
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);
Here is the fragment shader:
uniform sampler2D sTexture;
uniform bool bRenderDepth;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;
void main()
{
    if (bRenderDepth) {
        gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
    } else {
        gl_FragColor = vec4(texture2D(sTexture, TexCoord).rgb * LightIntensity, 1.0);
    }
}
I have experimented with removing the 'bRenderDepth' branch, and it doesn't speed things up significantly.
Right now, pretty much just doing this step, it's at 14 fps, which obviously is not acceptable. If I pull the copy out, it's way above 30 fps. I'm getting two suggestions from the Xcode OpenGL ES analyzer on the copy command:
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: error:
Validation Error: glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
960, 640, 0) : Height<640> is not a power of two
file://localhost/Users/xxxx/Documents/Development/xxxx.mm: warning:
GPU Wait on Texture: Your app updated a texture that is currently
used for rendering. This caused the CPU to wait for the GPU to
finish rendering.
I'll work to resolve the two issues above (perhaps they are the crux of it). In the meantime, can anyone suggest a more efficient way to pull that depth data into a texture?
Thanks in advance!
iOS devices generally support OES_depth_texture, so on devices where the extension is present, you can set up a framebuffer object with a depth texture as its only attachment:
GLuint g_uiDepthBuffer;
glGenTextures(1, &g_uiDepthBuffer);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// glTexParameteri calls omitted for brevity
GLuint g_uiDepthFramebuffer;
glGenFramebuffers(1, &g_uiDepthFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_uiDepthBuffer, 0);
Your texture then receives all the values being written to the depth buffer when you draw your scene (you can use a trivial fragment shader for this), and you can texture from it directly without needing to call glCopyTexImage2D.
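A sketch of how the two passes might fit together (uShadowMapLoc and the light-view setup are placeholders; on iOS, bind your on-screen framebuffer handle rather than 0):

/* 1. Depth pass: render the scene from the light into the depth-only FBO */
glBindFramebuffer(GL_FRAMEBUFFER, g_uiDepthFramebuffer);
glViewport(0, 0, width, height);
glClear(GL_DEPTH_BUFFER_BIT);
DrawScene();                       /* a trivial fragment shader is enough here */

/* 2. Main pass: sample the depth texture directly, no glCopyTexImage2D */
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
glUniform1i(uShadowMapLoc, 1);     /* placeholder shadow-map sampler uniform */
DrawScene();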
I have a cube which I want to rotate. I also have a light source GL_LIGHT0. I want to rotate the cube and leave the light source fixed in its location. But the light source is rotating together with my cube. I use OpenGL ES 1.1
Here's a snippet of my code to make my question more clear.
GLfloat glfarr[] = {...} //cube points
GLubyte glubFaces[] = {...}
Vertex3D normals[] = {...} //normals to surfaces
const GLfloat light0Position[] = {0.0, 0.0, 3.0, 0.0};
glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
glEnable(GL_LIGHT0);
for (i = 0; i < 8000; ++i)
{
    if (g_bDemoDone) break;
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -12);
    glRotatef(rot, 0.0, 1.0, 1.0);
    rot += 0.8;
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, normals);
    glVertexPointer(3, GL_FLOAT, 0, glfarr);
    glDrawElements(GL_TRIANGLES, 3*12, GL_UNSIGNED_BYTE, glubFaces);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    eglSwapBuffers(eglDisplay, eglSurface);
}
Thanks.
Fixed in relation to what? The light position is transformed by the current MODELVIEW matrix at the moment you call glLightfv(GL_LIGHT0, GL_POSITION, light0Position).
If you want the light to move with the cube, move the glLightfv call to after the translation and rotation calls. If instead you want it fixed relative to the viewer, re-specify the light position each frame right after glLoadIdentity(), before the rotation.
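For the fixed-light case, a sketch of the start of the loop body (same variable names as the question):

glLoadIdentity();
/* specify the light before any rotation so it stays fixed in eye space */
glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
glTranslatef(0.0f, 0.0f, -12.0f);
glRotatef(rot, 0.0f, 1.0f, 1.0f);   /* now only the cube is rotated */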
The problem seems to be that you're rotating the modelview matrix, not the cube itself. Essentially, you're moving the camera.
In order to rotate just the cube, you'll need to rotate the vertices that make up the cube, either with a math library or with simple trigonometry. You'll be operating on the vertex data stored in the array, before the glDrawElements call. You may or may not have to modify the normals or texture coordinates; it depends on your effects and how it ends up looking.