I'm working on an app that creates its own texture atlas. The elements on the atlas can vary in size but are placed in a grid pattern.
It's all working fine except that when I write over a section of the atlas with a new element (the data from an NSImage), the image ends up shifted a pixel to the right.
The code I’m using to write the pixels onto the atlas is:
-(void)writeToPlateWithImage:(NSImage*)anImage atCoord:(MyGridPoint)gridPos;
{
static NSSize insetSize; //ultimately this is the size of the image in the box
static NSSize boundingBox; //this is the size of the box that holds the image in the grid
static CGFloat multiplier;
multiplier = 1.0;
NSSize plateSize = NSMakeSize(atlas.width, atlas.height);//Size of entire atlas
MyGridPoint _gridPos;
//make sure the column and row position is legal
_gridPos.column = gridPos.column >= m_numOfColumns ? m_numOfColumns - 1 : gridPos.column;
_gridPos.row = gridPos.row >= m_numOfRows ? m_numOfRows - 1 : gridPos.row;
_gridPos.column = _gridPos.column < 0 ? 0 : _gridPos.column;
_gridPos.row = _gridPos.row < 0 ? 0 : _gridPos.row;
insetSize = NSMakeSize(plateSize.width / m_numOfColumns, plateSize.height / m_numOfRows);
boundingBox = insetSize;
//…code here to calculate the size to make anImage so that it fits into the space allowed
//on the atlas.
//multiplier var will hold a value that sizes up or down the image…
insetSize.width = anImage.size.width * multiplier;
insetSize.height = anImage.size.height * multiplier;
//provide a padding around the image so that when mipmaps are created the image doesn’t ‘bleed’
//if it’s the same size as the grid’s boxes.
insetSize.width -= ((insetSize.width * (insetPadding / 100)) * 2);
insetSize.height -= ((insetSize.height * (insetPadding / 100)) * 2);
//roundUp() is a handy function I found somewhere (I can't remember where now)
//that rounds the first param up to a multiple of the second.
//Here we make sure the image rows are aligned: as it's RGBA we make
//the dimensions a multiple of 4.
insetSize.width = (CGFloat)roundUp((int)insetSize.width, 4);
insetSize.height = (CGFloat)roundUp((int)insetSize.height, 4);
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:insetSize];
NSData *insetData = [insetImage TIFFRepresentation];
GLubyte *data = malloc(insetData.length);
memcpy(data, [insetData bytes], insetData.length);
insetImage = NULL;
insetData = NULL;
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, atlas.textureIndex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //have also tried 2,4, and 8
GLint Xplace = (GLint)(boundingBox.width * _gridPos.column) + (GLint)((boundingBox.width - insetSize.width) / 2);
GLint Yplace = (GLint)(boundingBox.height * _gridPos.row) + (GLint)((boundingBox.height - insetSize.height) / 2);
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
free(data);
glBindTexture(GL_TEXTURE_2D, 0);
glGetError();
}
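(roundUp() itself isn't shown; a minimal sketch of such a helper, assuming it simply rounds its first argument up to the nearest multiple of the second, might look like this:)
// Guess at the helper the question refers to: round value up to the
// nearest multiple of 'multiple'. Not the question's actual implementation.
static int roundUp(int value, int multiple)
{
    if (multiple == 0) return value;
    int remainder = value % multiple;
    return remainder == 0 ? value : value + multiple - remainder;
}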
The images are RGBA, 8-bit (as reported by Photoshop). Here's a test image I've been using:
and here's a screen grab of the result in my app:
Am I unpacking the image incorrectly...? I know the resizeImage: function works, as I've saved its result to disk as well as bypassed it, so the problem is somewhere in the GL code...
EDIT: just to clarify, the section of the atlas being rendered is larger than the box diagram. So the shift is occurring within the area that's written to with glTexSubImage2D.
EDIT 2: Sorted, finally, by offsetting the copied data that goes into the section of the atlas.
I don't fully understand why that is; perhaps it's a hack rather than a proper solution, but here it is.
//resize the image to fit into the section of the atlas
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:NSMakeSize(insetSize.width, insetSize.height)];
//pointer to the raw data
NSData *insetData = [insetImage TIFFRepresentation];
const GLubyte *insetDataPtr = [insetData bytes];
//for debugging, the offset value is hard-coded here
int offset = 8; //it needed a 2 pixel (2 * 4 bytes for RGBA) offset
//copy the data, starting at the offset, into the temporary buffer
memcpy(data, insetDataPtr + offset, insetData.length - offset);
/*
.
. Calculate its position within the texture
.
*/
//And finally overwrite the texture
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
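An aside on that workaround: if the leading bytes are really a fixed header or per-row pitch in the buffer returned by TIFFRepresentation, the GL unpack state can describe that layout instead of offsetting the pointer by hand. This is only a sketch under that assumption; rowLengthInPixels and skipPixels are hypothetical values that would have to be measured from the actual buffer.
// Describe the source layout to GL rather than offsetting the pointer.
// rowLengthInPixels and skipPixels are hypothetical; they must match the
// real layout of the data returned by TIFFRepresentation.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                  // no extra row padding assumed
glPixelStorei(GL_UNPACK_ROW_LENGTH, rowLengthInPixels); // source row pitch, in pixels
glPixelStorei(GL_UNPACK_SKIP_PIXELS, skipPixels);       // leading pixels to skip in each row
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace,
                (GLsizei)insetSize.width, (GLsizei)insetSize.height,
                GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);                 // reset so later uploads aren't affected
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);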
You may be running into the issue I answered already here: stackoverflow.com/a/5879551/524368
It's not really about pixel coordinates, but about pixel-perfect addressing of texels. This is especially important for texture atlases. A common misconception is that texture coordinates 0 and 1 lie exactly on pixel centers. In OpenGL this is not the case: texture coordinates 0 and 1 lie exactly on the border between the pixels at the texture's wrap-around. If you build your texture atlas on the assumption that 0 and 1 are on pixel centers, then using that same addressing scheme in OpenGL will lead to either a blurry picture or pixel shifts. You need to account for this.
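To make that concrete, here is a minimal C sketch (the atlas and cell sizes and the helper name are illustrative, not taken from the question's code) of computing texture coordinates that land on texel centers for one cell of an atlas:
#include <stdio.h>

// Hypothetical layout: a square atlas of atlasSize texels per side,
// divided into square cells of cellSize texels.
static void cellTexCoords(int col, int row, int cellSize, int atlasSize,
                          float *s0, float *t0, float *s1, float *t1)
{
    // Texel i spans [i/N, (i+1)/N], so its center sits at (i + 0.5)/N.
    // Sampling the cell edge-to-edge on texel centers therefore needs a
    // half-texel inset on each side.
    float n = (float)atlasSize;
    *s0 = (col * cellSize + 0.5f) / n;
    *t0 = (row * cellSize + 0.5f) / n;
    *s1 = ((col + 1) * cellSize - 0.5f) / n;
    *t1 = ((row + 1) * cellSize - 0.5f) / n;
}

int main(void)
{
    float s0, t0, s1, t1;
    // cell (2,1) of a 512x512 atlas made of 64x64 cells
    cellTexCoords(2, 1, 64, 512, &s0, &t0, &s1, &t1);
    printf("s: [%f, %f]  t: [%f, %f]\n", s0, s1, t0, t1);
    return 0;
}
Without that half-texel inset, GL_LINEAR sampling at a cell border blends with the neighbouring cell, which shows up as the bleeding or apparent pixel shifts described above.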
I still don't understand how that makes a difference to a sub-section of the texture that's being rendered.
It helps a lot to understand that to OpenGL, textures are not so much images as support samples for an interpolator (hence "sampler" uniforms in shaders). So to get really crisp-looking images you have to choose the texture coordinates you sample from such that the interpolator evaluates exactly at the positions of the support samples. The positions of those samples, however, are neither integer coordinates nor simple fractions (i/N).
Note that newer versions of GLSL provide the texture sampling function texelFetch which completely bypasses the interpolator and addresses texture pixels directly. If you need pixel perfect texturing you might find this easier to use (if available).
I have a total of two textures: the first is used as a framebuffer to work with inside a compute shader, which is later blitted using BlitFramebuffer(...). The second is supposed to be an OpenGL array texture, which is used to look up textures and copy them onto the framebuffer. It's created in the following way:
var texarray uint32
gl.GenTextures(1, &texarray)
gl.ActiveTexture(gl.TEXTURE0 + 1)
gl.BindTexture(gl.TEXTURE_2D_ARRAY, texarray)
gl.TexParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.TexImage3D(
gl.TEXTURE_2D_ARRAY,
0,
gl.RGBA8,
16,
16,
22*48,
0,
gl.RGBA, gl.UNSIGNED_BYTE,
gl.Ptr(sheet.Pix))
gl.BindImageTexture(1, texarray, 0, false, 0, gl.READ_ONLY, gl.RGBA8)
sheet.Pix is just the pixel array of an image loaded as a *image.NRGBA
The compute-shader looks like this:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img;
layout(binding = 1) uniform sampler2DArray texAtlas;
void main() {
ivec2 iCoords = ivec2(gl_GlobalInvocationID.xy);
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7));
imageStore(img, iCoords, c);
}
When I run the program, however, the result is just a window filled with a single color:
So my question is: What did I do wrong during the shader creation and what needs to be corrected?
For any open code questions, here's the corresponding repo
vec4 c = texture(texAtlas, vec3(iCoords.x%16, iCoords.y%16, 7))
That can't work. texture samples the texture at normalized coordinates, so the texture space is [0,1] (in the st domain; the third dimension is the layer and is correct here). Coordinates outside of that are handled via the GL_TEXTURE_WRAP_... modes you specified (repeat, clamp to edge, clamp to border). Since int % 16 is always an integer, and even with repetition only the fractional part of the coordinate matters, you are basically sampling the same texel over and over again.
If you need full texture sampling (texture filtering, sRGB conversion, etc.), you have to use normalized coordinates instead. But if you only want to access individual texel data, you can use texelFetch and keep the integer coordinates.
Note: since you set the texture filter to GL_LINEAR, you seem to want filtering; however, your coordinates look as if you want to access the texel centers. So if you're going the texture route, then vec3(vec2(iCoords.xy)/vec2(16) + vec2(1.0/32.0), layer) would be the proper normalization to reach the texel centers (together with GL_REPEAT), but then GL_LINEAR filtering would yield results identical to GL_NEAREST.
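As a quick sanity check of that normalization, here is the arithmetic in plain C, using the 16-texel tile size from the question:
#include <stdio.h>

int main(void)
{
    // Map an integer texel index within a 16x16 tile to a normalized
    // coordinate on the texel center: i/16 + 1/32 == (i + 0.5)/16.
    const int tile = 16;
    for (int i = 0; i < tile; ++i) {
        float s = (float)(i % tile) / (float)tile + 1.0f / (2.0f * (float)tile);
        printf("texel %2d -> s = %f\n", i, s);
    }
    return 0;
}
Each value falls strictly inside (0,1) on a texel center, so GL_LINEAR has nothing to blend across and behaves like GL_NEAREST, as noted above.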
I am working on a simple iOS application to learn about OpenGL ES 2.0. In the project, I'm rendering 4 triangles in the shape of a pyramid, with some sliders to adjust the height of the apex of the pyramid and to rotate the modalViewMatrix about the y axis. I am trying to find the reason why, after rotating this object counter-clockwise to the point where some triangles appear in front of others, I can see through the near triangles. However, when rotating in the clockwise direction to the same point, the near triangles are opaque and occlude the furthest triangles.
I assumed that the reason was a lack of a depth render buffer, but after setting the property view.drawableDepthFormat = GLKViewDrawableDepthFormat16; the behavior persists.
For reference, this is my drawRect function where the drawing is done. The only other code is in viewDidLoad and in the global scope of the Xcode project here.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
[self.baseEffect prepareToDraw];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER,pos);
glEnableVertexAttribArray(GLKVertexAttribPosition);
const GLvoid * off1 = NULL + offsetof(SceneVertex, position) ;
glVertexAttribPointer(GLKVertexAttribPosition, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off1);
glEnableVertexAttribArray(GLKVertexAttribNormal);
const GLvoid * off2 = NULL + offsetof(SceneVertex, normal) ;
glVertexAttribPointer(GLKVertexAttribNormal, // Identifies the attribute to use
3, // number of coordinates for attribute
GL_FLOAT, // data is floating point
GL_FALSE, // no fixed point scaling
sizeof(SceneVertex), // total num bytes stored per vertex
off2);
GLenum error = glGetError();
if(GL_NO_ERROR != error)
{
NSLog(@"GL Error: 0x%x", error);
}
int sizeOfTries = sizeof(triangles);
int sizeOfSceneVertex = sizeof(SceneVertex);
int numArraysToDraw = sizeOfTries / sizeOfSceneVertex;
glDrawArrays(GL_TRIANGLES, 0, numArraysToDraw);
}
It's not enough just to have a depth buffer, you need to tell OpenGL how you want to use it. Try adding the following lines:
glEnable(GL_DEPTH_TEST); // Enable depth testing
glDepthMask(GL_TRUE); // Enable depth write
glDepthFunc(GL_LEQUAL); // Choose the depth comparison function
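A minimal sketch of where those calls usually sit, assuming the usual split between one-time GL state setup and the per-frame clear (the function names are placeholders, not from the question's project):
// One-time setup, once the GL context is current and the drawable
// (including its depth attachment) is configured.
static void setupDepthState(void)
{
    glEnable(GL_DEPTH_TEST);   // compare incoming fragments against the depth buffer
    glDepthMask(GL_TRUE);      // allow writes to the depth buffer
    glDepthFunc(GL_LEQUAL);    // pass fragments that are nearer or equally near
}

// Per frame, before issuing any draw calls.
static void beginFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear colour and depth together
}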
While we're here, I'd recommend GLKViewDrawableDepthFormat24 over GLKViewDrawableDepthFormat16 for most use cases (better precision).
I'd also recommend familiarizing yourself with Xcode's frame capture feature (doc); it really is an invaluable way to figure out what is going on when rendering is not working as intended.
The bitmap is constructed from pixel data (purely pixel data). The construction is done by properly setting the bitmap parameters like height, width, bit count, etc. The bitmap is actually constructed with CreateDIBSection, and it is loaded onto a CStatic object that has Bitmap as a property.
The image is displayed with the proper width and content; the only difference is that the content is coloured instead of a scale of grey. For example, for an image of a white H letter on a black background, instead of displaying it as whitish, say, a blue-coloured H letter is displayed. Similar colour changes apply for different images. Also, sometimes junk coloured data appears, deviating from the original content of the image, apart from just the colour change.
The bitmap is a 16-bit bitmap.
Please see below for the code used to create the bitmap.
Hdc is the device context of the CStatic variable into which the created bitmap is loaded;
I directly set the bitmap returned by the function below on this variable using its SetBitmap function. The CStatic variable also has Bitmap as one of its properties. See below for the function used to create the bitmap.
Function parameter definitions.
PixMapHeight = number of rows in pixel matrix.
PixMapWidth = number of columns in pixel matrix.
BitsPerPixel = The bits stored for one pixel.
pPixMapBits = Void pointer to the pixel array (raw pixel data only! 16 bits per pixel).
HBITMAP DoBitmapFromPixels(HDC Hdc, UINT PixMapWidth, UINT PixMapHeight, UINT BitsPerPixel, LPVOID pPixMapBits)
{
BITMAPINFO *bmpInfo = (BITMAPINFO *)malloc(sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 256);
BITMAPINFOHEADER &bmpInfoHeader(bmpInfo->bmiHeader);
bmpInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
LONG lBmpSize = PixMapWidth * PixMapHeight * (BitsPerPixel / 8);
bmpInfoHeader.biWidth = PixMapWidth;
bmpInfoHeader.biHeight = -(static_cast<int>(PixMapHeight));
bmpInfoHeader.biPlanes = 1;
bmpInfoHeader.biBitCount = BitsPerPixel;
bmpInfoHeader.biCompression = BI_RGB;
bmpInfoHeader.biSizeImage = 0;
bmpInfoHeader.biClrUsed = 0;
bmpInfoHeader.biClrImportant = 0;
void *pPixelPtr = NULL;
HBITMAP hBitMap = CreateDIBSection(Hdc, bmpInfo, DIB_RGB_COLORS, &pPixelPtr, NULL, 0);
if (pPixMapBits != NULL)
{
BYTE* pbBits = (BYTE*)pPixMapBits;
BYTE *Pix = (BYTE *)pPixelPtr;
memcpy(Pix, ((BYTE*)pbBits + (lBmpSize * (CurrentFrame - 1))), lBmpSize);
}
free(bmpInfo);
return hBitMap;
}
The intended output is the figure on the left of the attached file, but I am getting a blue-toned image like the one on the right (never mind the scaling and exact-match issues; the image is only there to depict the problem).
And also it will be very helpful if I know how RGB values are stored in 16 bits!
You never actually said what format pPixMapBits is in, but I'm guessing that it contains 16-bit values where 0 represents black, 32768 represents gray, and 65535 represents white.
You are creating a BITMAPINFOHEADER with biBitCount = 16 and biCompression = BI_RGB. According to the documentation, if you set the fields that way, then:
Each WORD in the bitmap array represents a single pixel. The relative intensities of red, green, and blue are represented with five bits for each color component. The value for blue is in the least significant five bits, followed by five bits each for green and red. The most significant bit is not used.
This is not the same format as your source data, and you are doing no conversion, so you get junk. Note that the bitmap format you chose is capable of representing only 2^5 = 32 shades of gray, not 65536, so you will suffer loss of quality during the conversion.
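To illustrate the conversion being described, assuming the source really is 16-bit grayscale (the answer's guess, not something the question states), each sample would have to be reduced to 5 bits and replicated into the red, green and blue fields of the 16-bpp WORD:
#include <stdint.h>
#include <stddef.h>

// Sketch: convert 16-bit grayscale samples to the 16-bpp BI_RGB layout
// (5 bits each for blue, green, red from least to most significant; top bit unused).
static void Grayscale16ToRGB555(const uint16_t *src, uint16_t *dst, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        uint16_t g5 = (uint16_t)(src[i] >> 11);           // keep the top 5 bits: 0..31
        dst[i] = (uint16_t)((g5 << 10) | (g5 << 5) | g5); // red | green | blue
    }
}
Alternatively, the source data could be converted up to a 24- or 32-bpp DIB, which avoids the loss of precision entirely.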
I want a "Face In a Crystal Ball" effect, where I have a model (the face) doing things inside of a translucent model (the crystal ball). I feel like I'm taking crazy pills because I just can't get this inner face to show up partially occluded by the ball. My goal is to vary the alpha of the ball (and/or face) to make the face appear and disappear.
Below are the relevant bits of code. As you'll see, I'm not using shaders, just good old GL/GLES1. If anyone can tell me what I'm doing wrong, I'll be VERY appreciative.
The setup code...
//-- CONFIGURATION ---------------
// Create The Depth Buffer Object
glGenRenderbuffersOES(1, &depth_renderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depth_renderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
GL_DEPTH_COMPONENT16_OES,
width,
height);
// Create The FrameBuffer Object
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_COLOR_ATTACHMENT0_OES,
GL_RENDERBUFFER_OES,
color_renderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_DEPTH_ATTACHMENT_OES,
GL_RENDERBUFFER_OES,
depth_renderbuffer);
// Bind Color Buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, color_renderbuffer);
glViewport(0, 0, width, height);
//-- LIGHTING ----------------------
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
//-- PROJECTION ---------------------
glMatrixMode(GL_PROJECTION);
viewport_size = vec2((float) width,(float) height);
//Orthographic Projection
float max_x,max_y;
if(width>height){
max_y = 1;
max_x = (float)width/(float)height;
}
else{
max_x = 1;
max_y = (float)height/(float) width;
}
const float MAX_X = max_x;
const float MAX_Y = max_y;
const float Z_0 = 0;
const float MAX_Z = 1;
glOrthof(-MAX_X, MAX_X, -MAX_Y, MAX_Y, Z_0-MAX_Z, Z_0+MAX_Z);
world_size = vec3(2*MAX_X,2*MAX_Y,2*MAX_Z);
//Color Depth
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE); //Dissapears if False
glDepthFunc(GL_LEQUAL);
glEnable(GL_BLEND);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //doesn't do it
glBlendFunc(GL_ONE, GL_ONE); //better
Here is the rendering call
glClearColor(world->background_color.x,
world->background_color.y,
world->background_color.z,
world->background_color.w);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for(int s=0;s<surfaces.size();s++){
Surface* surface = surfaces[s];
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, surface->getMatAmbient().Pointer());
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, surface->getMatDiffuse().Pointer());
glMatrixMode(GL_MODELVIEW);
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
glPushMatrix();
glLoadIdentity();
vec4 light_position = vec4(world->light->position,1);
glLightfv(GL_LIGHT0,GL_POSITION,light_position.Pointer());
glPopMatrix();
glPushMatrix();
glMultMatrixf(surface->transform.Pointer());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, surface->index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, surface->vertex_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, VERTEX_STRIDE, 0);
glNormalPointer(GL_FLOAT, VERTEX_STRIDE, (GLvoid*) VERTEX_NORMAL_OFFSET);
glDrawElements(GL_TRIANGLES, surface->indices.size(), GL_UNSIGNED_SHORT, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glPopMatrix();
}
It sounds like you may be suffering from a simple case of the concept of a depth buffer not really applying to your scene. A depth buffer stores one depth for every pixel on screen, which in a scene with fully opaque objects would be the depth of the nearest object at that pixel.
The problem is that when you want to add partially transparent objects to the scene, you end up in a position where several objects contribute to the colour of an individual pixel. But you can still store the depth of only one of them.
So what's probably happening in your case is that you're drawing the crystal ball first, and that's putting the depths of the various crystal ball pixels into the depth buffer. You're then attempting to draw the face and OpenGL is seeing that it's further away than the values already in the buffer, so skipping those pixels.
So the quick-fix solution is just to re-order your scene geometry by hand such that the face is always drawn before the crystal ball, being always on the inside.
In an ideal solution, you'd draw all opaque geometry in one step (traditionally in something close to front-to-back order, though that's not as important on the PowerVR) to establish opaque depth values, then all transparent geometry back to front so that it is composited in the correct order.
In OpenGL you really want the order of certain things to be relatively fixed, so that you can push the relevant values over to the GPU and not incur communication costs. People still tend to divide geometry into opaque and transparent and draw the opaque geometry first, but often they'll then just disable z-buffer writes when drawing the transparent geometry, making an effort to draw it in something like back-to-front order without investing too much time in the problem.
If you're happy to use purely additive blending then clearly any order drawing for the transparencies is correct once the depth buffer has the opaque stuff set up.
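A minimal sketch of that opaque-then-transparent ordering in GL/GLES1 terms (drawOpaqueSurfaces and drawTransparentSurfaces are placeholders for the question's own draw loop, split by material):
// Pass 1: opaque geometry writes depth as usual.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawOpaqueSurfaces();          // placeholder: the face and any other opaque models

// Pass 2: transparent geometry tests depth but does not write it,
// drawn roughly back to front and blended over what is already there.
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTransparentSurfaces();     // placeholder: the crystal ball

// Restore depth writes for the next frame's opaque pass.
glDepthMask(GL_TRUE);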
What order are you rendering the objects? If you draw the ball before the face, then the entire face will get rejected because it is behind the ball in the z-buffer. If you want to do correct transparency, you have to render objects from back to front.
And regarding your inline question:
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
When you call glLightfv with a position, the position is transformed by whatever is currently on the modelview matrix stack. You have to set it at the right point, relative to the frame of reference in which you're defining the coordinates (is it relative to view coordinates, world coordinates, or object coordinates?).
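For example, a common pattern (a sketch; viewMatrix, worldLight and objectTransform are hypothetical names, not from the question's code) is to specify the light position once per frame, right after loading the view transform, so the position is interpreted in world coordinates:
// Set the light while only the camera/view transform is on the stack,
// so worldLight is interpreted as a world-space position.
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(viewMatrix);                      // view transform only
glLightfv(GL_LIGHT0, GL_POSITION, worldLight);  // light fixed in the world, not the camera

// Then, per object, push the object's model transform on top and draw.
glPushMatrix();
glMultMatrixf(objectTransform);
// ... submit the object's geometry ...
glPopMatrix();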
I'm trying to draw specific colour rectangles into a CGBitmapContext and then later compare pixel values with the colour I drew (a kind of hit-testing).
On Leopard this works fine, but on Snow Leopard the pixel values I get out are different from the colour values I draw in. I guess this is due to colorspace confusion and ignorance on my part.
The basic steps i take are:-
create a bitmap context with a kCGColorSpaceGenericRGB colorspace
set the context's fillColorSpace to the same kCGColorSpaceGenericRGB colorspace
set the context's fill color
draw
get the bitmapContextData, iterate pixel values, etc.
As an example, on Leopard if i do:-
CGContextSetRGBFillColor(cntxt, 1.0, 0.0, 0.0, 1.0 ); // set pure red fill colour
CGContextFillRect(cntxt, cntxtBounds); // fill entire context
each pixel has a value UInt8 red==255, green==0, blue==0, alpha==255
On Snow Leopard, however,
each pixel has a value UInt8 red==243, green==31, blue==6, alpha==255
(These values are made up - I'm not on Snow Leopard right now. They are roughly typical of what I was getting: still definitely 'red', but difficult for me to correlate with (1.0, 0, 0). It's similar for other colours too, except that (1.0, 1.0, 1.0) comes out exactly as (255, 255, 255) and (0, 0, 0) exactly as (0, 0, 0).)
I have tried other colorSpaces but a similar thing happens. Any help or pointers are much appreciated, thanks.
UPDATE
I believe this demonstrates what I'm on about:
//create
NSUInteger arbitraryPixSize = 10;
size_t components = 4;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (arbitraryPixSize * bitsPerComponent * components + 7)/8;
size_t dataLength = bytesPerRow * arbitraryPixSize;
UInt32 *bitmap = malloc( dataLength );
memset( bitmap, 0, dataLength );
CGColorSpaceRef colSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef context = CGBitmapContextCreate ( bitmap, arbitraryPixSize, arbitraryPixSize, bitsPerComponent,bytesPerRow, colSpace, kCGImageAlphaPremultipliedFirst );
CGContextSetFillColorSpace( context, colSpace );
CGContextSetStrokeColorSpace( context, colSpace );
// -- draw something
CGContextSetRGBFillColor( context, 1.0f, 0.0f, 0.0f, 1.0f );
CGContextFillRect( context, CGRectMake( 0, 0, arbitraryPixSize, arbitraryPixSize ) );
// test the first pixel
UInt8 *baseAddr = (UInt8 *)CGBitmapContextGetData(context);
UInt8 alpha = baseAddr[0];
UInt8 red = baseAddr[1];
UInt8 green = baseAddr[2];
UInt8 blue = baseAddr[3];
CGContextRelease(context);
CGColorSpaceRelease(colSpace);
RESULTS
Leopard -> red==255, green==0, blue==0, alpha==255
Snow Leopard -> red==228, green==29, blue==29, alpha==255
Take a look at the docs for CGContextSetRGBFillColor:
"Sets the current fill color to a value in the DeviceRGB color space."
You wanted your components to be with respect to the generic RGB space. So, use one of the other methods of setting the fill color.
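For instance, here is one such alternative as a sketch, reusing the colSpace and context variables from the question's snippet: build a CGColor in the generic RGB space and set it as the fill colour, so the components are never reinterpreted through DeviceRGB.
// Specify the fill colour in the same generic RGB space the bitmap context uses.
const CGFloat comps[4] = { 1.0, 0.0, 0.0, 1.0 };   // pure red, opaque
CGColorRef red = CGColorCreate(colSpace, comps);   // colour defined in colSpace
CGContextSetFillColorWithColor(context, red);      // no DeviceRGB colour matching
CGContextFillRect(context, CGRectMake(0, 0, arbitraryPixSize, arbitraryPixSize));
CGColorRelease(red);
Since the question's code already calls CGContextSetFillColorSpace(context, colSpace), calling CGContextSetFillColor(context, comps) should have the same effect, with the components interpreted in that fill colour space.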