I am working on a simple iOS application to learn about OpenGL ES 2.0. In the project I'm rendering 4 triangles in the shape of a pyramid, with sliders to adjust the height of the pyramid's apex and to rotate the modelViewMatrix about the y axis. I am trying to find out why, after rotating this object counter-clockwise to the point where some triangles appear in front of others, I can see through the near triangles. However, when rotating in the clockwise direction to the same point, the near triangles are opaque and occlude the furthest triangles.
I assumed that the reason was a lack of a depth render buffer but after setting the property view.drawableDepthFormat = GLKViewDrawableDepthFormat16; the behavior persists.
For reference, this is my drawRect function where the drawing is done. The only other code is in viewDidLoad and at global scope of the Xcode project here.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
[self.baseEffect prepareToDraw];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER,pos);
glEnableVertexAttribArray(GLKVertexAttribPosition);
const GLvoid * off1 = NULL + offsetof(SceneVertex, position) ;
glVertexAttribPointer(GLKVertexAttribPosition, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off1);
glEnableVertexAttribArray(GLKVertexAttribNormal);
const GLvoid * off2 = NULL + offsetof(SceneVertex, normal) ;
glVertexAttribPointer(GLKVertexAttribNormal, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off2);
GLenum error = glGetError();
if(GL_NO_ERROR != error)
{
NSLog(#"GL Error: 0x%x", error);
}
int sizeOfTries = sizeof(triangles);
int sizeOfSceneVertex = sizeof(SceneVertex);
int numArraysToDraw = sizeOfTries / sizeOfSceneVertex;
glDrawArrays(GL_TRIANGLES, 0, numArraysToDraw);
}
It's not enough just to have a depth buffer, you need to tell OpenGL how you want to use it. Try adding the following lines:
glEnable(GL_DEPTH_TEST); // Enable depth testing
glDepthMask(GL_TRUE); // Enable depth write
glDepthFunc(GL_LEQUAL); // Choose the depth comparison function
While we're here, I'd recommend GLKViewDrawableDepthFormat24 over GLKViewDrawableDepthFormat16 for most use cases (better precision).
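For reference, here is a minimal sketch of where this setup might live in a GLKViewController's viewDidLoad (assuming a standard GLKit project; adjust the names to match yours):
- (void)viewDidLoad {
    [super viewDidLoad];
    GLKView *view = (GLKView *)self.view;
    view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24; // request a depth buffer
    [EAGLContext setCurrentContext:view.context];
    glEnable(GL_DEPTH_TEST); // enable depth testing
    glDepthMask(GL_TRUE);    // enable depth write
    glDepthFunc(GL_LEQUAL);  // choose the depth comparison function
}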
I'd also recommend familiarizing yourself with Xcode's frame capture feature (doc); it really is an invaluable way to figure out what is going on when rendering is not working as intended.
Related
I have a total of two textures: the first is used as a framebuffer to work with inside a compute shader, and is later blitted using BlitFramebuffer(...). The second is supposed to be an OpenGL array texture, which is used to look up textures and copy them onto the framebuffer. It's created in the following way:
var texarray uint32
gl.GenTextures(1, &texarray)
gl.ActiveTexture(gl.TEXTURE0 + 1)
gl.BindTexture(gl.TEXTURE_2D_ARRAY, texarray)
gl.TexParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexImage3D(
gl.TEXTURE_2D_ARRAY,
0,
gl.RGBA8,
16,
16,
22*48,
0,
gl.RGBA, gl.UNSIGNED_BYTE,
gl.Ptr(sheet.Pix))
gl.BindImageTexture(1, texarray, 0, false, 0, gl.READ_ONLY, gl.RGBA8)
sheet.Pix is just the pixel array of an image loaded as a *image.NRGBA
The compute-shader looks like this:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7));
imageStore(img, iCoords, c);
}
When I run the program, however, the result is just a window filled with the same color:
So my question is: What did I do wrong during the shader creation and what needs to be corrected?
For any open questions about the code, here's the corresponding repo
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7))
That can't work. texture samples the texture at normalized coordinates, so the texture space is [0,1] (in the st domain; the third dimension is the layer and is correct here), and coordinates outside of that are handled via the GL_TEXTURE_WRAP_... modes you specified (repeat, clamp to edge, clamp to border). Since int % 16 is always an integer, and even with repetition only the fractional part of the coordinate matters, you are basically sampling the same texel over and over again.
If you need the full texture sampling (texture filtering, sRGB conversions etc.), you have to use the normalized coordinates instead. But if you only want to access individual texel data, you can use texelFetch and keep the integer data instead.
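For example, a minimal sketch of the texelFetch route, keeping the rest of your shader as it is:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
    ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
    // texelFetch takes integer texel coordinates plus a mip level; no filtering, no wrapping
    vec4 c = texelFetch(texAtlas, ivec3(iCoords.x % 16, iCoords.y % 16, 7), 0);
    imageStore(img, iCoords, c);
}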
Note, since you set the texture filter to GL_LINEAR you seem to want filtering; however, your coordinates look as if you want to access the texel centers. So if you're going the texture route, then vec3(vec2(iCoords.xy)/vec2(16.0) + vec2(1.0/32.0), layer) would be the proper normalization to reach the texel centers (together with GL_REPEAT), but then the GL_LINEAR filtering would yield identical results to GL_NEAREST.
I'm working on an app that creates its own texture atlas. The elements on the atlas can vary in size but are placed in a grid pattern.
It’s all working fine except for the fact that when I write over the section of the atlas with a new element (the data from an NSImage), the image is shifted a pixel to the right.
The code I’m using to write the pixels onto the atlas is:
-(void)writeToPlateWithImage:(NSImage*)anImage atCoord:(MyGridPoint)gridPos;
{
static NSSize insetSize; //ultimately this is the size of the image in the box
static NSSize boundingBox; //this is the size of the box that holds the image in the grid
static CGFloat multiplier;
multiplier = 1.0;
NSSize plateSize = NSMakeSize(atlas.width, atlas.height);//Size of entire atlas
MyGridPoint _gridPos;
//make sure the column and row position is legal
_gridPos.column= gridPos.column >= m_numOfColumns ? m_numOfColumns - 1 : gridPos.column;
_gridPos.row = gridPos.row >= m_numOfRows ? m_numOfRows - 1 : gridPos.row;
_gridPos.column = _gridPos.column < 0 ? 0 : _gridPos.column;
_gridPos.row = _gridPos.row < 0 ? 0 : _gridPos.row;
insetSize = NSMakeSize(plateSize.width / m_numOfColumns, plateSize.height / m_numOfRows);
boundingBox = insetSize;
//…code here to calculate the size to make anImage so that it fits into the space allowed
//on the atlas.
//multiplier var will hold a value that sizes up or down the image…
insetSize.width = anImage.size.width * multiplier;
insetSize.height = anImage.size.height * multiplier;
//provide a padding around the image so that when mipmaps are created the image doesn’t ‘bleed’
//if it’s the same size as the grid’s boxes.
insetSize.width -= ((insetSize.width * (insetPadding / 100)) * 2);
insetSize.height -= ((insetSize.height * (insetPadding / 100)) * 2);
//roundUp() is a handy function I found somewhere (I can’t remember now)
//that makes the first param a multiple of the second..
//here we make sure the image rows are aligned; as it's RGBA we make
//it a multiple of 4
insetSize.width = (CGFloat)roundUp((int)insetSize.width, 4);
insetSize.height = (CGFloat)roundUp((int)insetSize.height, 4);
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:insetSize];
NSData *insetData = [insetImage TIFFRepresentation];
GLubyte *data = malloc(insetData.length);
memcpy(data, [insetData bytes], insetData.length);
insetImage = NULL;
insetData = NULL;
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, atlas.textureIndex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //have also tried 2,4, and 8
GLint Xplace = (GLint)(boundingBox.width * _gridPos.column) + (GLint)((boundingBox.width - insetSize.width) / 2);
GLint Yplace = (GLint)(boundingBox.height * _gridPos.row) + (GLint)((boundingBox.height - insetSize.height) / 2);
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
free(data);
glBindTexture(GL_TEXTURE_2D, 0);
glGetError();
}
The images are RGBA, 8bit (as reported by PhotoShop), here's a test image I've been using:
and here's a screen grab of the result in my app:
Am I unpacking the image incorrectly? I know the resizeImage: function works, as I've saved its result to disk as well as bypassed it, so the problem is somewhere in the GL code...
EDIT: just to clarify, the section of the atlas being rendered is larger than the box diagram. So the shift is occurring within the area that's written to with glTexSubImage2D.
EDIT 2: Sorted, finally, by offsetting the copied data that goes into the section of the atlas.
I don't fully understand why that is, perhaps it's a hack instead of a proper solution but here it is.
//resize the image to fit into the section of the atlas
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:NSMakeSize(insetSize.width, insetSize.height)];
//pointer to the raw data
NSData *insetData = [insetImage TIFFRepresentation];
const GLubyte *insetDataPtr = [insetData bytes];
//for debugging, I placed the offset value next
int offset = 8; //it needed a 2 pixel (2 * 4 bytes for RGBA) offset
//copy the data with the offset into a temporary data buffer
GLubyte *data = malloc(insetData.length - offset);
memcpy(data, insetDataPtr + offset, insetData.length - offset);
/*
.
. Calculate its position within the texture
.
*/
//And finally overwrite the texture
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
You may be running into the issue I answered already here: stackoverflow.com/a/5879551/524368
It's not really about pixel coordinates, but about pixel-perfect addressing of texels. This is especially important for texture atlases. A common misconception is that texture coordinates 0 and 1 lie exactly on pixel centers. In OpenGL this is not the case: texture coordinates 0 and 1 lie exactly on the border between the pixels where the texture wraps. If you build your texture atlas assuming that 0 and 1 fall on pixel centers, then using the very same addressing scheme in OpenGL will lead to either a blurry picture or pixel shifts. You need to account for this.
I still don't understand how that makes a difference to a sub-section of the texture that's being rendered.
It helps a lot to understand that, to OpenGL, textures are not so much images as support samples for an interpolator (hence "sampler" uniforms in shaders). So to get really crisp-looking images you have to choose the texture coordinates you're sampling from such that the interpolator evaluates at exactly the positions of the support samples. The positions of those samples are neither integer coordinates nor simply fractions (i/N); the center of texel i in an N-texel-wide texture sits at (i + 0.5)/N.
Note that newer versions of GLSL provide the texture sampling function texelFetch which completely bypasses the interpolator and addresses texture pixels directly. If you need pixel perfect texturing you might find this easier to use (if available).
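As a concrete sketch (GLSL, with made-up names): to sample the texel at integer position (i, j) of a W x H atlas texture through the regular filtering path, versus fetching it directly:
vec2 uv = (vec2(i, j) + 0.5) / vec2(W, H);         // texel centers sit at (i + 0.5)/N, not at i/N
vec4 filtered = texture(atlas, uv);                // goes through the interpolator
vec4 exact    = texelFetch(atlas, ivec2(i, j), 0); // bypasses it entirely (GLSL 1.30+)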
I am using this simple function to draw a quad in 3D space that faces the camera. Now I want to use a fragment shader to draw the illusion of a sphere inside it. The problem is that I'm new to OpenGL ES, so I don't know how.
void draw_sphere(view_t view) {
set_gl_options(COURSE);
glPushMatrix();
{
glTranslatef(view.plyr_pos.x, view.plyr_pos.y, view.plyr_pos.z - 1.9);
#ifdef __APPLE__
#undef glEnableClientState
#undef glDisableClientState
#undef glVertexPointer
#undef glTexCoordPointer
#undef glDrawArrays
static const GLfloat vertices []=
{
0, 0, 0,
1, 0, 0,
1, 1, 0,
0, 1, 0,
0, 0, 0,
1, 1, 0
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
glDisableClientState(GL_VERTEX_ARRAY);
#else
#endif
}
glPopMatrix();
}
More exactly, I want to achieve this:
There might be quite a few things you need to do to achieve this... The sphere drawn in the last image you posted is the result of using lighting, shine, and color. In general you need a shader that can process all of that and will work for any shape.
This specific case (and some others that can be described mathematically) can be drawn with a single quad, without even needing to push normals to the program. What you need to do is create a normal in the fragment shader: if you receive vectors sphereCenter and fragmentPosition and a float sphereRadius, then sphereNormal is a vector such that
sphereNormal.xy = (fragmentPosition - sphereCenter).xy/sphereRadius; //taking into account all have .z = .0
sphereNormal.z = -sqrt(1.0 - dot(sphereNormal.xy, sphereNormal.xy)); //only where dot(sphereNormal.xy, sphereNormal.xy) < 1.0, i.e. inside the sphere's silhouette
and real sphere position:
spherePosition = sphereCenter + sphereNormal*sphereRadius;
Now all you need to do is add your lighting. Static or not, it is most common to use some ambient factor, linear and square distance factors, and a shine factor:
color = ambient*materialColor; //apply ambient
vector fragmentToLight = lightPosition-spherePosition;
float lightDistance = length(fragmentToLight);
fragmentToLight = normalize(fragmentToLight); //can also just divide with light distance
float dotFactor = dot(sphereNormal, fragmentToLight); //dot factor is used to take into account the angle between the light and the surface normal
if(dotFactor > .0) {
color += (materialColor*dotFactor)/(1.0 + lightDistance*linearFactor + lightDistance*lightDistance*squareFactor); //apply dot factor and distance factors (in many cases the distance factors are 0)
}
vector shineVector = (sphereNormal*(2.0*dotFactor)) - fragmentToLight; //this is a vector that is mirrored through the normal, it is a reflection vector
float shineFactor = dot(shineVector, normalize(cameraPosition-spherePosition)); //factor represents how strong is the light reflection towards the viewer
if(shineFactor > .0) {
color += materialColor*(shineFactor*shineFactor * shine); //or some other power than 2 (shineFactor*shineFactor)
}
This pattern for creating lights in a fragment shader is one of very many. If you don't like it or you can't make it work, I suggest you find another one on the web; otherwise I hope you will understand it and be able to play around with it.
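Pulled together, a sketch of a complete fragment shader might look like the following (GLSL ES; the uniform and varying names are made up, fragmentPosition is assumed to be passed from the vertex shader in the same space as sphereCenter, and the distance factors are omitted for brevity):
precision mediump float;
uniform vec3 sphereCenter;      // assumed: sphere center, same space as fragmentPosition
uniform float sphereRadius;
uniform vec3 lightPosition;     // assumed: light position in that same space
uniform vec3 cameraPosition;
uniform vec4 materialColor;
uniform float ambient;          // e.g. 0.2
uniform float shine;            // specular strength
varying vec3 fragmentPosition;  // interpolated quad position from the vertex shader

void main() {
    vec3 n;
    n.xy = (fragmentPosition.xy - sphereCenter.xy) / sphereRadius;
    float r2 = dot(n.xy, n.xy);
    if (r2 > 1.0) discard;                        // outside the sphere's silhouette
    n.z = -sqrt(1.0 - r2);                        // reconstruct the normal's z as described above
    vec3 spherePosition = sphereCenter + n * sphereRadius;

    vec4 color = ambient * materialColor;         // ambient term
    vec3 toLight = lightPosition - spherePosition;
    float lightDistance = length(toLight);
    toLight /= lightDistance;
    float dotFactor = dot(n, toLight);
    if (dotFactor > 0.0) {
        color += materialColor * dotFactor;       // diffuse term (distance factors left out)
        vec3 shineVector = n * (2.0 * dotFactor) - toLight;  // light direction mirrored through the normal
        float shineFactor = dot(shineVector, normalize(cameraPosition - spherePosition));
        if (shineFactor > 0.0) {
            color += materialColor * (shineFactor * shineFactor * shine); // specular term
        }
    }
    gl_FragColor = color;
}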
I am teaching myself about OpenGL ES and vertex buffers (VBOs). I have written code that is supposed to draw one red triangle, but instead it colours the screen black:
- (void)drawRect:(CGRect)rect {
// Draw a red triangle in the middle of the screen:
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
// Setup the vertex data:
typedef struct {
float x;
float y;
} Vertex;
const Vertex vertices[] = {{50,50}, {50,150}, {150,50}};
const short indices[3] = {0,1,2};
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
NSLog(#"drawrect");
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
// The following line does the actual drawing to the render buffer:
glDrawElements(GL_TRIANGLE_STRIP, 3, GL_UNSIGNED_SHORT, indices);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, framebuffer);
[eAGLcontext presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Here vertexBuffer is of type GLuint. What is going wrong? Thanks for your help.
Your vertices don't have a Z component; try {{50,50,-100}, {50,150,-100}, {150,50,-100}} (your camera by default looks down the Z axis, so putting the triangle at -Z should put it on screen). If you still can't see it, try smaller numbers; I'm not sure what your near and far clipping distances are, and if they're not set I don't know what the defaults are. This might not be the only issue, but it's the only one I can see at a quick glance.
You need to add
glViewport(0, 0, 320, 480);
where you create the frame buffer and set up the context.
And replace your call to glDrawElements with
glDrawArrays(GL_TRIANGLE_STRIP, ...);
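For reference, a sketch of where those two changes might go (the 320x480 values assume a full-screen portrait layer, and the count of 3 matches your three vertices):
// where the framebuffer is created and the context is set up:
glViewport(0, 0, 320, 480);

// in drawRect:, replacing the glDrawElements call:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3);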
I want "Face In a Crystal Ball" effect where I have a model (the face) doing things inside of a translucent model (the crystal ball). I feel like I'm taking crazy pills because I just can't get this inner face to show up partially occluded by the ball. My goal is to vary the alpha of the ball (and/or face) to make the face appear and disappear.
Below are the relevant bits of code. As you'll see, I'm not using shaders, just good old GL/GLES1. If anyone can tell me what I'm doing wrong, I'll be VERY appreciative.
The setup code...
//-- CONFIGURATION ---------------
// Create The Depth Buffer Object
glGenRenderbuffersOES(1, &depth_renderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depth_renderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
GL_DEPTH_COMPONENT16_OES,
width,
height);
// Create The FrameBuffer Object
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_COLOR_ATTACHMENT0_OES,
GL_RENDERBUFFER_OES,
color_renderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_DEPTH_ATTACHMENT_OES,
GL_RENDERBUFFER_OES,
depth_renderbuffer);
// Bind Color Buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, color_renderbuffer);
glViewport(0, 0, width, height);
//-- LIGHTING ----------------------
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
//-- PROJECTION ---------------------
glMatrixMode(GL_PROJECTION);
viewport_size = vec2((float) width,(float) height);
//Orthographic Projection
float max_x,max_y;
if(width>height){
max_y = 1;
max_x = (float)width/(float)height;
}
else{
max_x = 1;
max_y = (float)height/(float) width;
}
const float MAX_X = max_x;
const float MAX_Y = max_y;
const float Z_0 = 0;
const float MAX_Z = 1;
glOrthof(-MAX_X, MAX_X, -MAX_Y, MAX_Y, Z_0-MAX_Z, Z_0+MAX_Z);
world_size = vec3(2*MAX_X,2*MAX_Y,2*MAX_Z);
//Color Depth
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE); //Disappears if GL_FALSE
glDepthFunc(GL_LEQUAL);
glEnable(GL_BLEND);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //doesn't do it
glBlendFunc(GL_ONE, GL_ONE); //better
Here is the rendering call
glClearColor(world->background_color.x,
world->background_color.y,
world->background_color.z,
world->background_color.w);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for(int s=0;s<surfaces.size();s++){
Surface* surface = surfaces[s];
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, surface->getMatAmbient().Pointer());
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, surface->getMatDiffuse().Pointer());
glMatrixMode(GL_MODELVIEW);
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
glPushMatrix();
glLoadIdentity();
vec4 light_position = vec4(world->light->position,1);
glLightfv(GL_LIGHT0,GL_POSITION,light_position.Pointer());
glPopMatrix();
glPushMatrix();
glMultMatrixf(surface->transform.Pointer());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, surface->index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, surface->vertex_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, VERTEX_STRIDE, 0);
glNormalPointer(GL_FLOAT, VERTEX_STRIDE, (GLvoid*) VERTEX_NORMAL_OFFSET);
glDrawElements(GL_TRIANGLES, surface->indices.size(), GL_UNSIGNED_SHORT, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glPopMatrix();
}
It sounds like you may be suffering from a simple case of the concept of a depth buffer not really applying to your scene. A depth buffer stores one depth for every pixel on screen, which in a scene with fully opaque objects would be the depth of the nearest object at that pixel.
The problem is that when you want to add partially transparent objects to the scene, you end up in a position where several objects contribute to the colour of an individual pixel. But you can still store the depth of only one of them.
So what's probably happening in your case is that you're drawing the crystal ball first, and that's putting the depths of the various crystal ball pixels into the depth buffer. You're then attempting to draw the face and OpenGL is seeing that it's further away than the values already in the buffer, so skipping those pixels.
So the quick-fix solution is just to re-order your scene geometry by hand such that the face is always drawn before the crystal ball, being always on the inside.
In an ideal solution, you'd draw all opaque geometry in one step (traditionally in something close to front-to-back order, though that's not as important on the PowerVR) to establish opaque depth values, then all transparent geometry back to front so that it is composited in the correct order.
In OpenGL you really want the order of certain things to be relatively fixed, so that you can push the relevant values over to the GPU and not incur communication costs. People still tend to divide geometry into opaque and transparent and draw the opaque first, but often they'll then just disable z-buffer writes when drawing the transparent geometry, making an effort to draw it in something like back-to-front order without investing too much time in the problem.
If you're happy to use purely additive blending then clearly any order drawing for the transparencies is correct once the depth buffer has the opaque stuff set up.
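In ES 1.x terms, a rough sketch of that common split (the helper functions here are placeholders, and the blend function depends on the look you're after):
// 1. opaque geometry: depth test and depth writes on
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueSurfaces();                  // placeholder helper

// 2. transparent geometry: keep the depth test, but stop writing depth
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawTransparentSurfacesBackToFront();  // placeholder helper; roughly sorted back to front
glDepthMask(GL_TRUE);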
What order are you rendering the objects? If you draw the ball before the face, then the entire face will get rejected because it is behind the ball in the z-buffer. If you want to do correct transparency, you have to render objects from back to front.
And regarding your inline question:
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
When you call glLightfv with a position, the position is transformed by whatever is currently on the modelview matrix stack. You have to set it up according to the frame of reference in which you're defining the light's coordinates (is it relative to view coordinates, world coordinates, or object coordinates?).
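For example (a sketch; viewMatrix and worldLightPos are placeholders), to have the light position interpreted in world coordinates you would load just the view transform before the glLightfv call:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(viewMatrix);                        // view transform only, no object transforms yet
glLightfv(GL_LIGHT0, GL_POSITION, worldLightPos); // now transformed as a world-space position
// ...then push/multiply each object's transform and draw as usual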