CGBitmapContext get pixel value: Leopard vs. Snow Leopard confusion - cocoa

I'm trying to draw specific colour rectangles into a CGBitmapContext and then later compare pixel values with the colour I drew (a kind of hit-testing).
On Leopard this works fine, but on Snow Leopard the pixel values I read back are different from the colour values I drew in - I guess due to colorspace confusion and ignorance on my part.
The basic steps I take are:
create a bitmap context with a kCGColorSpaceGenericRGB colorspace
set the context's fillColorSpace to the same kCGColorSpaceGenericRGB colorspace
set the context's fill color
draw
get the bitmap context data, iterate over pixel values, etc.
As an example, on Leopard if I do:
CGContextSetRGBFillColor(cntxt, 1.0, 0.0, 0.0, 1.0 ); // set pure red fill colour
CGContextFillRect(cntxt, cntxtBounds); // fill entire context
each pixel has the value UInt8 red==255, green==0, blue==0, alpha==255.
On Snow Leopard, however,
each pixel has the value UInt8 red==243, green==31, blue==6, alpha==255.
(These values are made up - I'm not on Snow Leopard right now. They are roughly typical of what I was getting - still definitely 'red', but difficult for me to correlate with (1.0, 0, 0). Similar for other colours too, except that (1.0, 1.0, 1.0) would be exactly (255, 255, 255) and (0, 0, 0) would be exactly (0, 0, 0).)
I have tried other color spaces but a similar thing happens. Any help or pointers are much appreciated, thanks.
UPDATE
I believe this demonstrates what I'm on about:
//create
NSUInteger arbitraryPixSize = 10;
size_t components = 4;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (arbitraryPixSize * bitsPerComponent * components + 7)/8;
size_t dataLength = bytesPerRow * arbitraryPixSize;
UInt32 *bitmap = malloc( dataLength );
memset( bitmap, 0, dataLength );
CGColorSpaceRef colSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef context = CGBitmapContextCreate ( bitmap, arbitraryPixSize, arbitraryPixSize, bitsPerComponent,bytesPerRow, colSpace, kCGImageAlphaPremultipliedFirst );
CGContextSetFillColorSpace( context, colSpace );
CGContextSetStrokeColorSpace( context, colSpace );
// -- draw something
CGContextSetRGBFillColor( context, 1.0f, 0.0f, 0.0f, 1.0f );
CGContextFillRect( context, CGRectMake( 0, 0, arbitraryPixSize, arbitraryPixSize ) );
// test the first pixel
UInt8 *baseAddr = (UInt8 *)CGBitmapContextGetData(context);
UInt8 alpha = baseAddr[0];
UInt8 red = baseAddr[1];
UInt8 green = baseAddr[2];
UInt8 blue = baseAddr[3];
CGContextRelease(context);
CGColorSpaceRelease(colSpace);
free(bitmap); // don't leak the backing buffer once we're done with it
RESULTS
Leopard -> red==255, green==0, blue==0, alpha==255
Snow Leopard -> red==228, green==29, blue==29, alpha==255

Take a look at the docs for CGContextSetRGBFillColor:
CGContextSetRGBFillColor - Sets the current fill color to a value in the DeviceRGB color space.
You wanted your components to be interpreted with respect to the generic RGB space, so use one of the other methods of setting the fill color.
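For illustration, a minimal sketch of one such alternative (not verified on both OS versions): pass the components through the context's current fill color space instead of the DeviceRGB convenience call, reusing the context and colSpace variables from the code above.
CGFloat redComponents[4] = { 1.0, 0.0, 0.0, 1.0 }; // pure red in the generic RGB space
CGContextSetFillColorSpace( context, colSpace );   // components below are interpreted in colSpace
CGContextSetFillColor( context, redComponents );   // unlike CGContextSetRGBFillColor, this honours the fill color space
CGContextFillRect( context, CGRectMake( 0, 0, arbitraryPixSize, arbitraryPixSize ) );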

Related

Monochrome image getting displayed as colored RGB image

The bitmap is constructed from pixel data (purely pixel data). The construction is done by properly setting the bitmap parameters such as height, width, bit count, etc. The bitmap is actually created with CreateDIBSection, and it is loaded onto a CStatic object that has Bitmap as a property.
The image is displayed with the proper width and content; the only difference is that the content is colored instead of being in shades of gray. For example, for an image of a white H letter on a black background, instead of displaying it as whitish, say a blue-colored H letter is displayed. Similar color changes apply to other images. Also, junk colored data sometimes appears, deviating from the original content of the image apart from just the color change.
The bitmap is a 16-bit bitmap.
Please see below for the code used to create the bitmap.
The HDC is the device context of the CStatic variable into which the created bitmap is loaded; I directly set the bitmap returned by the function below on this variable using the SetBitmap function. See below for the function used to create the bitmap.
Function parameter definitions:
PixMapHeight = number of rows in the pixel matrix.
PixMapWidth = number of columns in the pixel matrix.
BitsPerPixel = the bits stored for one pixel.
pPixMapBits = void pointer to the pixel array (raw pixel data only, 16 bits per pixel).
HBITMAP DoBitmapFromPixels(HDC Hdc, UINT PixMapWidth, UINT PixMapHeight, UINT BitsPerPixel, LPVOID pPixMapBits)
{
    BITMAPINFO *bmpInfo = (BITMAPINFO *)malloc(sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 256);
    BITMAPINFOHEADER &bmpInfoHeader(bmpInfo->bmiHeader);
    bmpInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
    LONG lBmpSize = PixMapWidth * PixMapHeight * (BitsPerPixel / 8);
    bmpInfoHeader.biWidth = PixMapWidth;
    bmpInfoHeader.biHeight = -(static_cast<int>(PixMapHeight)); // negative height -> top-down DIB
    bmpInfoHeader.biPlanes = 1;
    bmpInfoHeader.biBitCount = BitsPerPixel;
    bmpInfoHeader.biCompression = BI_RGB;
    bmpInfoHeader.biSizeImage = 0;
    bmpInfoHeader.biClrUsed = 0;
    bmpInfoHeader.biClrImportant = 0;
    void *pPixelPtr = NULL;
    HBITMAP hBitMap = CreateDIBSection(Hdc, bmpInfo, DIB_RGB_COLORS, &pPixelPtr, NULL, 0);
    if (pPixMapBits != NULL)
    {
        BYTE *pbBits = (BYTE *)pPixMapBits;
        BYTE *Pix = (BYTE *)pPixelPtr;
        // CurrentFrame is a member variable selecting which frame of the source buffer to copy
        memcpy(Pix, ((BYTE *)pbBits + (lBmpSize * (CurrentFrame - 1))), lBmpSize);
    }
    free(bmpInfo);
    return hBitMap;
}
The expected output is the figure on the left of the attached file, but I am getting a blue-toned image as on the right (never mind the scaling and exact-match issues; the image is only there to illustrate the problem).
It would also be very helpful to know how RGB values are stored in 16 bits!
You never actually said what format pPixMapBits is in, but I'm guessing that it contains 16-bit values where 0 represents black, 32768 represents gray, and 65535 represents white.
You are creating a BITMAPINFOHEADER with biBitCount = 16 and biCompression = BI_RGB. According to the documentation, if you set the fields that way, then:
Each WORD in the bitmap array represents a single pixel. The relative intensities of red, green, and blue are represented with five bits for each color component. The value for blue is in the least significant five bits, followed by five bits each for green and red. The most significant bit is not used.
This is not the same format as your source data, and you are doing no conversion, so you get junk. Note that the bitmap format you chose is capable of representing only 2^5 = 32 shades of gray, not 65536, so you will suffer loss of quality during the conversion.
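For illustration, a minimal sketch of such a conversion (a hypothetical helper, assuming the source really is 16-bit grayscale with 0 = black and 65535 = white, and that the destination is the 5-5-5 BI_RGB layout described above):
#include <stdint.h>
#include <stddef.h>
/* Sketch: convert 16-bit grayscale samples to 16-bit 5-5-5 RGB (blue in the low 5 bits). */
void GrayToRGB555(const uint16_t *src, uint16_t *dst, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        uint16_t g5 = src[i] >> 11;                       /* keep the top 5 bits of the gray value */
        dst[i] = (uint16_t)((g5 << 10) | (g5 << 5) | g5); /* equal R, G, B components -> gray */
    }
}
Such a pass would run over pPixMapBits before (or instead of) the raw memcpy into the DIB section.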

glTexSubImage2D shifting NSImage by a pixel

I'm working on an app that creates its own texture atlas. The elements on the atlas can vary in size but are placed in a grid pattern.
It's all working fine, except for the fact that when I write over a section of the atlas with a new element (the data from an NSImage), the image is shifted a pixel to the right.
The code I’m using to write the pixels onto the atlas is:
-(void)writeToPlateWithImage:(NSImage*)anImage atCoord:(MyGridPoint)gridPos;
{
static NSSize insetSize; //ultimately this is the size of the image in the box
static NSSize boundingBox; //this is the size of the box that holds the image in the grid
static CGFloat multiplier;
multiplier = 1.0;
NSSize plateSize = NSMakeSize(atlas.width, atlas.height);//Size of entire atlas
MyGridPoint _gridPos;
//make sure the column and row position is legal
_gridPos.column= gridPos.column >= m_numOfColumns ? m_numOfColumns - 1 : gridPos.column;
_gridPos.row = gridPos.row >= m_numOfRows ? m_numOfRows - 1 : gridPos.row;
_gridPos.column = gridPos.column < 0 ? 0 : gridPos.column;
_gridPos.row = gridPos.row < 0 ? 0 : gridPos.row;
insetSize = NSMakeSize(plateSize.width / m_numOfColumns, plateSize.height / m_numOfRows);
boundingBox = insetSize;
//…code here to calculate the size to make anImage so that it fits into the space allowed
//on the atlas.
//multiplier var will hold a value that sizes up or down the image…
insetSize.width = anImage.size.width * multiplier;
insetSize.height = anImage.size.height * multiplier;
//provide a padding around the image so that when mipmaps are created the image doesn’t ‘bleed’
//if it’s the same size as the grid’s boxes.
insetSize.width -= ((insetSize.width * (insetPadding / 100)) * 2);
insetSize.height -= ((insetSize.height * (insetPadding / 100)) * 2);
//roundUp() is a handy function I found somewhere (I can't remember where now)
//that makes the first param a multiple of the second.
//Here we make sure the image lines are aligned; as it's RGBA we make
//it a multiple of 4
insetSize.width = (CGFloat)roundUp((int)insetSize.width, 4);
insetSize.height = (CGFloat)roundUp((int)insetSize.height, 4);
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:insetSize];
NSData *insetData = [insetImage TIFFRepresentation];
GLubyte *data = malloc(insetData.length);
memcpy(data, [insetData bytes], insetData.length);
insetImage = NULL;
insetData = NULL;
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, atlas.textureIndex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //have also tried 2,4, and 8
GLint Xplace = (GLint)(boundingBox.width * _gridPos.column) + (GLint)((boundingBox.width - insetSize.width) / 2);
GLint Yplace = (GLint)(boundingBox.height * _gridPos.row) + (GLint)((boundingBox.height - insetSize.height) / 2);
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
free(data);
glBindTexture(GL_TEXTURE_2D, 0);
glGetError();
}
The images are RGBA, 8bit (as reported by PhotoShop), here's a test image I've been using:
and here's a screen grab of the result in my app:
Am I unpacking the image incorrectly? I know the resizeImage: function works, as I've saved its result to disk as well as bypassed it, so the problem is somewhere in the GL code...
EDIT: just to clarify, the section of the atlas being rendered is larger than the box diagram. So the shift is occurring within the area that's written to with glTexSubImage2D.
EDIT 2: Sorted, finally, by offsetting the copied data that goes into the section of the atlas.
I don't fully understand why that works; perhaps it's a hack rather than a proper solution, but here it is.
//resize the image to fit into the section of the atlas
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:NSMakeSize(insetSize.width, insetSize.height)];
//the raw data and a pointer to it
NSData *insetData = [insetImage TIFFRepresentation];
const void *insetDataPtr = [insetData bytes];
//for debugging, I placed the offset value next
int offset = 8; //it needed a 2 pixel (2 * 4 bytes for RGBA) offset
//copy the data, skipping the offset, into the temporary data buffer from the original listing
memcpy(data, (const GLubyte *)insetDataPtr + offset, insetData.length - offset);
/*
.
. Calculate its position within the texture
.
*/
//And finally overwrite the texture
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
You may be running into the issue I answered already here: stackoverflow.com/a/5879551/524368
It's not really about pixel coordinates, but about pixel-perfect addressing of texels. This is especially important for texture atlases. A common misconception is that texture coordinates 0 and 1 lie exactly on pixel centers. In OpenGL this is not the case: texture coordinates 0 and 1 lie exactly on the border between the pixels at a texture wrap. If you build your texture atlas under the assumption that 0 and 1 are on pixel centers, then using the very same addressing scheme in OpenGL will lead either to a blurry picture or to pixel shifts. You need to account for this.
I still don't understand how that makes a difference to a sub-section of the texture that's being rendered.
It helps a lot to understand that, to OpenGL, textures are not so much images as support samples for an interpolator (hence the "sampler" uniforms in shaders). So to get really crisp-looking images you have to choose the texture coordinates you sample from so that the interpolator evaluates at exactly the positions of the support samples. The positions of those samples, however, are neither integer coordinates nor simply fractions (i/N).
Note that newer versions of GLSL provide the texture sampling function texelFetch, which completely bypasses the interpolator and addresses texture pixels directly. If you need pixel-perfect texturing you might find this easier to use (if available).
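As a small illustration of that addressing (a hypothetical helper, not code from the question), the coordinate of a texel center can be computed like this:
/* Sketch: texture coordinate of the center of texel i in a texture that is n texels wide. */
float texel_center(int i, int n)
{
    return (2.0f * i + 1.0f) / (2.0f * n);
}
/* e.g. texel_center(0, 256) == 0.001953125 and texel_center(255, 256) == 0.998046875, */
/* rather than 0.0 and 1.0, which sit on the texel borders. */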

OpenGL image quality (blurred)

I use OpenGL to create a slideshow app. Unfortunately the images rendered with OpenGL look blurred compared to the GNOME image viewer.
Here are the two screenshots:
(opengl) http://tinyurl.com/dxmnzpc
(image viewer) http://tinyurl.com/8hshv2a
and this is the base image:
http://tinyurl.com/97ho4rp
The image has the native size of my screen (2560x1440).
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/freeglut.h>
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>
#include <unistd.h>
GLuint text = 0;
GLuint load_texture(const char* file) {
SDL_Surface* surface = IMG_Load(file);
GLuint texture;
glPixelStorei(GL_UNPACK_ALIGNMENT,4);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
SDL_PixelFormat *format = surface->format;
printf("%d %d \n",surface->w,surface->h);
if (format->Amask) {
gluBuild2DMipmaps(GL_TEXTURE_2D, 4,surface->w, surface->h, GL_RGBA,GL_UNSIGNED_BYTE, surface->pixels);
} else {
gluBuild2DMipmaps(GL_TEXTURE_2D, 3,surface->w, surface->h, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
}
SDL_FreeSurface(surface);
return texture;
}
void display(void) {
GLdouble offset_x = -1;
GLdouble offset_y = -1;
int p_viewport[4];
glGetIntegerv(GL_VIEWPORT, p_viewport);
GLfloat gl_width = p_viewport[2];//width(); // GL context size
GLfloat gl_height = p_viewport[3];//height();
glClearColor (0.0,2.0,0.0,1.0);
glClear (GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glEnable( GL_TEXTURE_2D );
glTranslatef(0,0,0);
glBindTexture( GL_TEXTURE_2D, text);
gl_width=2; gl_height=2;
glBegin(GL_QUADS);
glTexCoord2f(0, 1); //4
glVertex2f(offset_x, offset_y);
glTexCoord2f(1, 1); //3
glVertex2f(offset_x + gl_width, offset_y);
glTexCoord2f(1, 0); // 2
glVertex2f(offset_x + gl_width, offset_y + gl_height);
glTexCoord2f(0, 0); // 1
glVertex2f(offset_x, offset_y + gl_height);
glEnd();
glutSwapBuffers();
}
int main(int argc, char **argv) {
glutInit(&argc,argv);
glutInitDisplayMode (GLUT_DOUBLE);
glutGameModeString("2560x1440:24");
glutEnterGameMode();
text = load_texture("/tmp/raspberry/out.jpg");
glutDisplayFunc(display);
glutMainLoop();
}
UPDATED TRY
void display(void)
{
GLdouble texture_x = 0;
GLdouble texture_y = 0;
GLdouble texture_width = 0;
GLdouble texture_height = 0;
glViewport(0,0,width,height);
glClearColor (0.0,2.0,0.0,1.0);
glClear (GL_COLOR_BUFFER_BIT);
glColor3f(1.0, 1.0, 1.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
//Do pixel calculatons
texture_x = ((2.0*1-1) / (2*width));
texture_y = ((2.0*1-1) / (2*height));
texture_width=((2.0*width-1)/(2*width));
texture_height=((2.0*height-1)/(2*height));
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0,0,0);
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, text);
glBegin(GL_QUADS);
glTexCoord2f(texture_x, texture_height); //4
glVertex2f(0, 0);
glTexCoord2f(texture_width, texture_height); //3
glVertex2f(width, 0);
glTexCoord2f(texture_width, texture_y); // 2
glVertex2f(width,height);
glTexCoord2f(texture_x, texture_y); // 1
glVertex2f(0,height);
glEnd();
glutSwapBuffers();
}
What you run into is a variation of the fencepost problem, which arises from how OpenGL deals with texture coordinates. OpenGL does not address a texture's pixels (texels) directly, but uses the image data as support for an interpolation that in fact covers a wider range than the image's pixels. So the texture coordinates 0 and 1 don't hit the centers of the left-/bottom-most and right-/top-most pixels, but go a little further out, in fact.
Let's say the texture is 8 pixels wide:
  |   0   |   1   |   2   |   3   |   4   |   5   |   6   |   7   |
  ^       ^       ^       ^       ^       ^       ^       ^       ^
 0.0      |       |       |       |       |       |       |      1.0
  |       |       |       |       |       |       |       |       |
 0/8     1/8     2/8     3/8     4/8     5/8     6/8     7/8     8/8
The digits denote the texture's pixels, the bars the edges of the texture and, in the case of nearest filtering, the borders between pixels. You, however, want to hit the pixels' centers, so you're interested in the texture coordinates
(0/8 + 1/8)/2 = 1 / (2 * 8)
(1/8 + 2/8)/2 = 3 / (2 * 8)
...
(7/8 + 8/8)/2 = 15 / (2 * 8)
Or, more generally, for pixel i in a texture that is N pixels wide, the proper texture coordinate is
(2i + 1)/(2N)
However, if you want to perfectly align your texture with the screen pixels, remember that what you specify as coordinates are not a quad's pixels but its edges, which, depending on the projection, may align with screen pixel edges rather than centers, and thus may require other texture coordinates.
Note that if you follow this, regardless of your filtering mode and mipmaps, your image will always look clear and crisp, because the interpolation hits exactly your sampling support, which is your input image. Switching to another filtering mode, like GL_NEAREST, may look right at first glance, but it's actually not correct, because it will alias your samples. So don't do it.
There are a few other issues with your code as well, but they're not as big a problem. First and foremost, you're choosing a rather arcane way to get the viewport dimensions. You're (probably without further thought) exploiting the fact that the default OpenGL viewport has the size of the window the context was created with. You're using SDL, which has the side effect that this approach won't bite you as long as you stick with SDL-1. But switch to any other framework that may create the context via a proxy drawable, and you're running into a problem.
The canonical way is usually to retrieve the window size from the windowing system (SDL in your case) and to set the viewport as one of the first actions in the display function.
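For instance, a minimal sketch of that (using GLUT here, since the question's window is actually created through GLUT; the variable names are otherwise hypothetical):
/* Sketch: query the current window size and set the viewport at the top of display(). */
int win_w = glutGet(GLUT_WINDOW_WIDTH);
int win_h = glutGet(GLUT_WINDOW_HEIGHT);
glViewport(0, 0, win_w, win_h);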
Another issue is your use of gluBuild2DMipmaps, because a) you don't want to use mipmaps and b) since OpenGL-2 you can upload texture images of arbitrary size (i.e. you're not limited to powers of 2 for the dimensions), which completely eliminates the need for gluBuild2DMipmaps. So don't use it. Just use glTexImage2D directly and switch to a non-mipmapping filtering mode.
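A sketch of what that upload could look like, reusing the surface, format and texture-binding state from load_texture() above (just the idea, not a drop-in replacement):
/* Sketch: upload the image without mipmaps and select a non-mipmapping filter. */
GLenum fmt = format->Amask ? GL_RGBA : GL_RGB;
glTexImage2D(GL_TEXTURE_2D, 0, fmt, surface->w, surface->h, 0,
             fmt, GL_UNSIGNED_BYTE, surface->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);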
Update due to question update
The way you calculate the texture coordinates still doesn't look right. It seems like you're starting to count at 1. Texture pixels are zero-based indexed, so…
This is how I'd do it:
Assuming a projection that maps the viewport one unit per pixel:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, win_width, 0, win_height, -1, 1);
glViewport(0, 0, win_width, win_height);
we calculate the texture coordinates as
//Do pixel calculatons
glBindTexture( GL_TEXTURE_2D, text);
GLint tex_width, tex_height;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &tex_width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &tex_height);
texture_s1 = 1. / (2*width); // i.e. (2*0 + 1) / (2*width)
texture_t1 = 1. / (2*height);
texture_s2 = (2.*(tex_width -1) + 1) / (2*width);
texture_t2 = (2.*(tex_height-1) + 1) / (2*height);
Note that tex_width and tex_height give the number of pixels in each direction, but the coordinates are 0 based, so you've to subtract 1 from them for the texture coordinate mapping. Hence we also use a constant 1 in the numerator for the s1, t1 coordinates.
The rest looks okay, given the projection you chose:
glEnable( GL_TEXTURE_2D );
glBegin(GL_QUADS);
glTexCoord2f(texture_s1, texture_t1); //4
glVertex2f(0, 0);
glTexCoord2f(texture_s2, texture_t1); //3
glVertex2f(tex_width, 0);
glTexCoord2f(texture_s2, texture_t2); // 2
glVertex2f(tex_width, tex_height);
glTexCoord2f(texture_s1, texture_t2); // 1
glVertex2f(0, tex_height);
glEnd();
I'm not sure if this is really the problem, but I think you don't need/want mipmaps here. Have you tried using glTexImage2D instead of gluBuild2DMipmaps, in combination with nearest-neighbour filtering (glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) and likewise for GL_TEXTURE_MAG_FILTER)?

32 bit Grayscale Image with CGBitmapContextCreate always Black

I'm using the following code to display a 32 bit Grayscale image. Even if I explicitly set every pixel to be 4294967297 (which ought to be white), the end result is always black. What am I doing wrong here? The image is just 64x64 pixels.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
ptr = (float*)malloc(4*xDim*yDim);
for(i=0;i<yDim;i++)
{
for(j=0;j<xDim;j++)
{
ptr[i*xDim + j] = 4294967297;
}
}
CGContextRef bitmapContext = CGBitmapContextCreate(
ptr,
xDim,
yDim,
32,
4*xDim,
colorSpace,
kCGImageAlphaNone | kCGBitmapFloatComponents);
//ptr = CGBitmapContextGetData(bitmapContext);
//NSLog(@"%ld", sizeof(float));
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
NSRect drawRect;
drawRect.origin = NSMakePoint(1.0, 1.0);
drawRect.size.width = 64;
drawRect.size.height = 64;
NSImage *greyscale = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
[greyscale drawInRect:drawRect
fromRect:NSZeroRect
operation:NSCompositeSourceOver
fraction:1.0];
If you don't specify the endianness, Quartz will default to big-endian. If you are on an Intel Mac, this will be wrong. You will need to explicitly set the endianness, and the best way to do that is to change your flags to:
kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host
This will work properly regardless of your CPU (for future compatibility!).
You can find more detail here: http://lists.apple.com/archives/Quartz-dev/2011/Dec/msg00001.html
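A minimal sketch of the context creation with that flag added (assuming, per the other answers below, that the buffer holds float components in the 0.0-1.0 range):
// Sketch: host byte order for 32-bit float gray components.
CGContextRef bitmapContext = CGBitmapContextCreate(
    ptr,        // float* buffer, values in 0.0 .. 1.0
    xDim,
    yDim,
    32,         // bits per component
    4*xDim,     // bytes per row
    colorSpace,
    kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host);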
You are using a float component image. Make sure ptr has type float* and try setting values to 0.5f instead of 4294967297.
Are you showing the exact code you are using ?
2^32 - 1 = 4294967295
If you are using 4294967297, I suspect you are getting overflow and an actual value of 1!
32-bit floating-point grayscale is not supported by CGBitmapContextCreate (see: CGBitmapContextCreate Supported Color Spaces).

disabling color correction in quartz 2d

OK, I know that it's not possible to actually disable color correction in Quartz. What I'm looking for is a device-independent color space setting that doesn't change the RGB values I draw into a CGLayer.
I tried all the ICC profiles from the system library; they all shift the colors.
This is the best result I got:
const CGFloat whitePoint[] = {0.95047, 1.0, 1.08883};
const CGFloat blackPoint[] = {0, 0, 0};
const CGFloat gamma[] = {1, 1, 1};
const CGFloat matrix[] = {0.449695, 0.244634, 0.0251829, 0.316251, 0.672034, 0.141184, 0.18452, 0.0833318, 0.922602 };
CGColorSpaceRef colorSpace = CGColorSpaceCreateCalibratedRGB(whitePoint, blackPoint, gamma, matrix);
This uses Apple RGB's color conversion matrix and D65 white point.
The colors still shift a bit, although I'm a lot happier with this than with the device dependent settings.
Here's how I write the CGLayer to a TIFF:
CIImage *image = [CIImage imageWithCGLayer:cgLayer];
NSBitmapImageRep *bitmapImage = [[NSBitmapImageRep alloc] initWithCIImage:image];
[[bitmapImage TIFFRepresentation] writeToFile:fileName atomically:YES];
Any help would be greatly appreciated.
Why not declare your colours to be part of the same colour space as your destination CGLayer?
The documentation for CGColorSpaceCreateDeviceRGB seems to be saying just that:
CGColorSpaceCreateDeviceRGB
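A minimal sketch of that suggestion (hypothetical, and untested against the layer's actual destination space): set the layer context's fill color space to device RGB and pass the raw components through it.
// Sketch: draw into the CGLayer's context using the device RGB space,
// so the components are not first converted from a calibrated space.
CGColorSpaceRef deviceRGB = CGColorSpaceCreateDeviceRGB();
CGContextRef layerCtx = CGLayerGetContext(cgLayer);   // cgLayer as in the question
CGContextSetFillColorSpace(layerCtx, deviceRGB);
CGFloat comps[4] = { 0.25, 0.5, 0.75, 1.0 };          // some example colour + alpha
CGContextSetFillColor(layerCtx, comps);
CGColorSpaceRelease(deviceRGB);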
