32-bit grayscale image with CGBitmapContextCreate always black - cocoa

I'm using the following code to display a 32-bit grayscale image. Even if I explicitly set every pixel to 4294967297 (which ought to be white), the end result is always black. What am I doing wrong here? The image is just 64x64 pixels.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
ptr = (float*)malloc(4*xDim*yDim);
for(i=0;i<yDim;i++)
{
    for(j=0;j<xDim;j++)
    {
        ptr[i*xDim + j] = 4294967297;
    }
}
CGContextRef bitmapContext = CGBitmapContextCreate(
    ptr,
    xDim,
    yDim,
    32,
    4*xDim,
    colorSpace,
    kCGImageAlphaNone | kCGBitmapFloatComponents);
//ptr = CGBitmapContextGetData(bitmapContext);
//NSLog(@"%ld",sizeof(float));
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
NSRect drawRect;
drawRect.origin = NSMakePoint(1.0, 1.0);
drawRect.size.width = 64;
drawRect.size.height = 64;
NSImage *greyscale = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
[greyscale drawInRect:drawRect
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];

If you don't specify the endianness, Quartz will default to big-endian. If you are on an Intel Mac, this will be wrong. You will need to explicitly set the endianness, and the best way to do that is to change your flags to:
kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host
This will work properly regardless of your CPU (for future compatibility!).
You can find more detail here: http://lists.apple.com/archives/Quartz-dev/2011/Dec/msg00001.html
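Putting this together with the other answers below, here is a consolidated, untested sketch of the fix (float buffer, values in the 0.0-1.0 range, host byte order). Also check that bitmapContext is non-NULL, since one answer below questions whether 32-bit float grayscale is supported at all:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
float *ptr = malloc(sizeof(float) * xDim * yDim);
for (int i = 0; i < yDim; i++)
    for (int j = 0; j < xDim; j++)
        ptr[i*xDim + j] = 1.0f;   // float components run 0.0-1.0; 1.0f is white

CGContextRef bitmapContext = CGBitmapContextCreate(
    ptr, xDim, yDim,
    32,                    // bits per component
    sizeof(float) * xDim,  // bytes per row
    colorSpace,
    kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host);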

You are using a float component image, so pixel values are expected to be in the 0.0-1.0 range. Make sure ptr has type float* and try setting the values to something like 0.5f (mid-grey) or 1.0f (white) instead of 4294967297.

Are you showing the exact code you are using?
2^32 - 1 = 4294967295
If you are using 4294967297 somewhere as an integer, I suspect you are getting overflow and an actual value of 1!

32-bit floating-point grayscale is not supported by CGBitmapContextCreate; see the supported pixel formats table in the CGBitmapContextCreate documentation.
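If that is the case on your target (CGBitmapContextCreate returns NULL), an 8-bit-per-component grayscale context is a safe fallback. A minimal sketch, where 255 is white:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
uint8_t *ptr = malloc(xDim * yDim);
memset(ptr, 255, xDim * yDim);   // 255 == white at 8 bits per pixel

CGContextRef bitmapContext = CGBitmapContextCreate(
    ptr, xDim, yDim,
    8,      // bits per component
    xDim,   // bytes per row
    colorSpace,
    kCGImageAlphaNone);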

Related

Gaussian Noise using Core Graphics only?

How would I use Core Graphics alone to generate a noise texture background? I'm stuck on the noise part because there is no built-in noise filter in Core Graphics...
About a year later, I've found the answer:
CGImageRef CGGenerateNoiseImage(CGSize size, CGFloat factor) CF_RETURNS_RETAINED {
    // One byte per pixel for an 8-bit grayscale bitmap.
    NSUInteger byteCount = fabs(size.width) * fabs(size.height);
    uint8_t *pixels = (uint8_t *)malloc(byteCount);
    srand(124);   // fixed seed, so the pattern is the same on every run
    for (NSUInteger i = 0; i < byteCount; ++i)
        pixels[i] = (rand() % 256) * factor;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef bitmapContext = CGBitmapContextCreate(pixels, fabs(size.width), fabs(size.height),
                                                       8, fabs(size.width), colorSpace, kCGImageAlphaNone);
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return image;
}
This generates a pseudo-random noise image (with the fixed srand seed, the pattern is the same on every run) that can be drawn using the code from Jason Harwig's answer.
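For example, a minimal usage sketch of my own (assuming a UIKit drawRect: where rect is the view's bounds), drawing the generated image with the overlay blend described in Jason Harwig's answer below:
// In drawRect:, blend generated noise over whatever has already been drawn.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGImageRef noise = CGGenerateNoiseImage(rect.size, 0.5);
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeOverlay);
CGContextDrawImage(ctx, rect, noise);
CGContextRestoreGState(ctx);
CGImageRelease(noise);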
Create a noise PNG, then draw it using an overlay blend.
// draw background
CGContextFillRect(context, ...)
// blend noise on top
CGContextSetBlendMode(context, kCGBlendModeOverlay);
CGImageRef cgImage = [UIImage imageNamed:@"noise"].CGImage;
CGContextDrawImage(context, rect, cgImage);
CGContextSetBlendMode(context, kCGBlendModeNormal);
There is a CIRandomGenerator filter in Core Image as of iOS 6.
But bear in mind that its output is uniform noise, not Gaussian (the same is true of the previous answers).
- (UIImage *)linearRandomImage:(CGRect)rect
{
    CIContext *randomContext = [CIContext contextWithOptions:nil];

    // Run CIRandomGenerator's output through CIColorMonochrome to get grayscale noise.
    CIFilter *monochrome = [CIFilter filterWithName:@"CIColorMonochrome"];
    [monochrome setDefaults];
    [monochrome setValue:[[CIFilter filterWithName:@"CIRandomGenerator"] valueForKey:@"outputImage"]
                  forKey:@"inputImage"];

    CIImage *resultImage = [monochrome outputImage];
    CGImageRef ref = [randomContext createCGImage:resultImage fromRect:rect];
    UIImage *endImage = [UIImage imageWithCGImage:ref];
    CGImageRelease(ref);   // createCGImage returns a +1 reference
    return endImage;
}
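Note that none of the answers here produces actual Gaussian noise. If you need normally distributed values, one option (my own untested sketch, not taken from any answer; mean and sigma are arbitrary parameters) is to fill a grayscale buffer yourself with the Box-Muller transform and wrap it exactly like the accepted answer does:
#import <CoreGraphics/CoreGraphics.h>
#include <math.h>
#include <stdlib.h>

// width x height 8-bit grayscale Gaussian noise centred on `mean` with std dev `sigma`.
static CGImageRef CreateGaussianNoiseImage(size_t width, size_t height,
                                           double mean, double sigma) {
    uint8_t *pixels = malloc(width * height);
    for (size_t i = 0; i < width * height; i += 2) {
        // Box-Muller: two uniform samples in (0,1) -> two Gaussian samples.
        double u1 = (arc4random_uniform(UINT32_MAX - 1) + 1) / (double)UINT32_MAX;
        double u2 = arc4random_uniform(UINT32_MAX) / (double)UINT32_MAX;
        double r  = sqrt(-2.0 * log(u1));
        double z0 = r * cos(2.0 * M_PI * u2);
        double z1 = r * sin(2.0 * M_PI * u2);
        pixels[i] = (uint8_t)fmax(0.0, fmin(255.0, mean + sigma * z0));
        if (i + 1 < width * height)
            pixels[i + 1] = (uint8_t)fmax(0.0, fmin(255.0, mean + sigma * z1));
    }
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width,
                                             gray, kCGImageAlphaNone);
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    free(pixels);
    return image;
}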

How do I create an 8-bit PNG with transparency from an NSBitmapImageRep?

I have a 32-bit NSBitmapImageRep which has an alpha channel with essentially 1-bit values (the pixels are either on or off).
I want to save this bitmap to an 8-bit PNG file with transparency. If I use the -representationUsingType:properties: method of NSBitmapImageRep and pass in NSPNGFileType, a 32-bit PNG is created, which is not what I want.
I know that 8-bit PNGs can be read, they open in Preview with no problems, but is it possible to write this type of PNG file using any built-in Mac OS X APIs? I'm happy to drop down to Core Image or even QuickTime if necessary. A cursory examination of the CGImage docs didn't reveal anything obvious.
EDIT:
I've started a bounty on this question, if someone can provide working source code that takes a 32-bit NSBitmapImageRep and writes a 256-color PNG with 1-bit transparency, it's yours.
How about pnglib? It's really lightweight and easy to use.
pngnq (and new pngquant which achieves higher quality) has BSD-style license, so you can just include it in your program. No need to spawn as separate task.
A great reference for working with lower-level APIs is Programming With Quartz.
Some of the code below is based on examples from that book.
Note: this is untested code meant to be a starting point only....
- (NSBitmapImageRep *)convertImageRep:(NSBitmapImageRep *)startingImage
{
    CGImageRef anImage = [startingImage CGImage];
    size_t width = CGImageGetWidth(anImage);
    size_t height = CGImageGetHeight(anImage);
    CGRect ctxRect = CGRectMake(0.0, 0.0, width, height);

    // createRGBBitmapContext allocates its own backing buffer and row stride.
    CGContextRef bitmapContext = createRGBBitmapContext(width, height, TRUE);
    CGContextDrawImage(bitmapContext, ctxRect, anImage);

    // Now extract the image from the context
    CGImageRef bitmapImage = CGBitmapContextCreateImage(bitmapContext);
    if (!bitmapImage) {
        fprintf(stderr, "Couldn't create the image!\n");
        return nil;
    }
    NSBitmapImageRep *newImage = [[NSBitmapImageRep alloc] initWithCGImage:bitmapImage];
    CGImageRelease(bitmapImage);
    return newImage;
}
Context Creation Function:
CGContextRef createRGBBitmapContext(size_t width, size_t height, Boolean needsTransparentBitmap)
{
    CGContextRef context;
    size_t bytesPerRow;
    unsigned char *rasterData;

    // Minimum bytes per row is 4 bytes per sample * number of samples;
    // round up to the nearest multiple of 16 (COMPUTE_BEST_BYTES_PER_ROW is a
    // helper from the Programming With Quartz sample code).
    bytesPerRow = width * 4;
    bytesPerRow = COMPUTE_BEST_BYTES_PER_ROW(bytesPerRow);

    int bitsPerComponent = 2; // to get 256 colors (2 bits x RGBA)

    // Use 'calloc' so the memory is initialized to 0.
    rasterData = calloc(1, bytesPerRow * height);
    if (rasterData == NULL) {
        fprintf(stderr, "Couldn't allocate the needed amount of memory!\n");
        return NULL;
    }

    // Uses the generic calibrated RGB color space.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    context = CGBitmapContextCreate(rasterData, width, height, bitsPerComponent, bytesPerRow,
                                    colorSpace,
                                    (needsTransparentBitmap ? kCGImageAlphaPremultipliedFirst
                                                            : kCGImageAlphaNoneSkipFirst));
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(rasterData);
        fprintf(stderr, "Couldn't create the context!\n");
        return NULL;
    }

    // Either clear the rect or paint with opaque white
    // (getRGBOpaqueWhiteColor is another helper from the book's sample code).
    if (needsTransparentBitmap) {
        CGContextClearRect(context, CGRectMake(0, 0, width, height));
    } else {
        CGContextSaveGState(context);
        CGContextSetFillColorWithColor(context, getRGBOpaqueWhiteColor());
        CGContextFillRect(context, CGRectMake(0, 0, width, height));
        CGContextRestoreGState(context);
    }
    return context;
}
Usage would be:
NSBitmapImageRep *startingImage; // assumed to be previously set.
NSBitmapImageRep *endingImageRep = [self convertImageRep:startingImage];
// Write out as data
NSData *outputData = [endingImageRep representationUsingType:NSPNGFileType properties:nil];
// somePath is set elsewhere
[outputData writeToFile:somePath atomically:YES];
One thing to try would be creating an NSBitmapImageRep with 8 bits, then copying the data to it.
This would actually be a lot of work, as you would have to compute the color index table yourself.
CGImageDestination is your man for low-level image writing, but I don't know if it supports that specific ability.
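For reference, writing a CGImage to disk with CGImageDestination looks roughly like this (a sketch only; it handles the writing, not the 256-colour quantization the question asks about - WritePNG, image and url are placeholder names):
#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h>   // for kUTTypePNG

static BOOL WritePNG(CGImageRef image, NSURL *url)
{
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)url, kUTTypePNG, 1, NULL); // __bridge is for ARC; drop it under MRC
    if (!dest)
        return NO;
    CGImageDestinationAddImage(dest, image, NULL);
    BOOL ok = CGImageDestinationFinalize(dest);   // actually writes the file
    CFRelease(dest);
    return ok;
}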

Get pixels and colours from NSImage

I have created an NSImage object, and ideally would like to determine how many pixels of each colour it contains. Is this possible?
This code renders the NSImage into a CGBitmapContext:
- (void)updateImageData {

    if (!_image)
        return;

    // Dimensions - source image determines context size
    NSSize imageSize = _image.size;
    NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);

    // Create a context to hold the image data
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             imageSize.width,
                                             imageSize.height,
                                             8,
                                             0,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);

    // Wrap graphics context
    NSGraphicsContext *gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];

    // Make our bitmap context current and render the NSImage into it
    [NSGraphicsContext setCurrentContext:gctx];
    [_image drawInRect:imageRect];

    // Calculate the histogram
    [self computeHistogramFromBitmap:ctx];

    // Clean up
    [NSGraphicsContext setCurrentContext:nil];
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
}
Given a bitmap context, we can access the raw image data directly, and compute the histograms for each colour channel:
- (void)computeHistogramFromBitmap:(CGContextRef)bitmap {

    // NB: Assumes RGBA 8bpp
    size_t width = CGBitmapContextGetWidth(bitmap);
    size_t height = CGBitmapContextGetHeight(bitmap);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmap);
    uint8_t *baseAddress = (uint8_t *)CGBitmapContextGetData(bitmap);

    for (unsigned y = 0; y < height; y++)
    {
        // Rows may be padded, so index by bytes-per-row rather than width * 4.
        uint32_t *pixel = (uint32_t *)(baseAddress + y * bytesPerRow);
        for (unsigned x = 0; x < width; x++)
        {
            uint32_t rgba = *pixel;

            // Extract colour components
            uint8_t red   = (rgba & 0x000000ff) >> 0;
            uint8_t green = (rgba & 0x0000ff00) >> 8;
            uint8_t blue  = (rgba & 0x00ff0000) >> 16;

            // Accumulate each colour
            _histogram[kRedChannel][red]++;
            _histogram[kGreenChannel][green]++;
            _histogram[kBlueChannel][blue]++;

            // Next pixel!
            pixel++;
        }
    }
}
@end
I've published a complete project, a Cocoa sample app, which includes the above.
https://github.com/gavinb/CocoaImageHistogram.git
I suggest creating your own bitmap context, wrapping it in a graphics context and setting that as the current context, telling the image to draw itself, and then accessing the pixel data behind the bitmap context directly.
This will be more code, but will save you both a trip through a TIFF representation and the creation of thousands or millions of NSColor objects. If you're working with images of any appreciable size, these expenses will add up quickly.
Get an NSBitmapImageRep from your NSImage. Then you can get access to the pixels.
NSImage* img = ...;
NSBitmapImageRep* raw_img = [NSBitmapImageRep imageRepWithData:[img TIFFRepresentation]];
NSColor* color = [raw_img colorAtX:0 y:0];
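To actually count how many pixels of each colour there are, a rough sketch on top of that (note it creates an NSColor per pixel, which the other answer warns gets slow for large images):
NSCountedSet *colourCounts = [NSCountedSet set];
for (NSInteger y = 0; y < raw_img.pixelsHigh; y++) {
    for (NSInteger x = 0; x < raw_img.pixelsWide; x++) {
        [colourCounts addObject:[raw_img colorAtX:x y:y]];
    }
}
for (NSColor *colour in colourCounts) {
    NSLog(@"%@ appears %lu times", colour, (unsigned long)[colourCounts countForObject:colour]);
}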
Look for "histogram" in the Core Image documentation.
Using colorAtX with NSBitmapImageRep does not always lead to the exact correct color.
I managed to get the correct color with this simple code:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];
This may be a more streamlined approach for some, and it avoids dropping down into manual memory management.
https://github.com/koher/EasyImagy
Code sample
https://github.com/koher/EasyImagyCameraSample
import EasyImagy

let image = Image<RGBA<UInt8>>(nsImage: NSImage(contentsOfFile: "test.png")!) // N.B. init with an NSImage

print(image[x, y])
image[x, y] = RGBA(red: 255, green: 0, blue: 0, alpha: 127)
image[x, y] = RGBA(0xFF00007F) // red: 255, green: 0, blue: 0, alpha: 127

// Iterates over all pixels
for pixel in image {
    // ...
}

// Gets a pixel by subscripts
let pixel = image[x, y]

// Sets a pixel by subscripts
image[x, y] = RGBA(0xFF0000FF)
image[x, y].alpha = 127

// Safe get for a pixel
if let pixel = image.pixelAt(x: x, y: y) {
    print(pixel.red)
    print(pixel.green)
    print(pixel.blue)
    print(pixel.alpha)
    print(pixel.gray) // (red + green + blue) / 3
    print(pixel)      // formatted like "#FF0000FF"
} else {
    // `pixel` is safe: `nil` is returned when out of bounds
    print("Out of bounds")
}

CGBitmapContext get pixel value: Leopard vs. Snow Leopard confusion

I'm trying to draw specific colour rectangles into a CGBitmapContext and then later compare pixel values with the colour I drew (a kind of hit-testing).
On Leopard this works fine, but on Snow Leopard the pixel values I get out are different from the colour values I draw in - I guess due to colorspace confusion and ignorance on my part.
The basic steps I take are:
create a bitmap context with a kCGColorSpaceGenericRGB colorspace
set the context's fillColorSpace to the same kCGColorSpaceGenericRGB colorspace
set the context's fill color
draw
get the bitmapContextData, iterate pixel values, etc.
As an example, on Leopard if I do:
CGContextSetRGBFillColor(cntxt, 1.0, 0.0, 0.0, 1.0 ); // set pure red fill colour
CGContextFillRect(cntxt, cntxtBounds); // fill entire context
each pixel has a value UInt8 red==255, green==0, blue==0, alpha==255
On Snow Leopard however,
each pixel has a value UInt8 red==243, green==31, blue==6, alpha==255
(These values are made up - I'm not on Snow Leopard right now. They are roughly typical of what I was getting - still definitely 'red', but difficult for me to correlate with (1.0, 0, 0). It's similar for other colours too, except that (1.0, 1.0, 1.0) comes out as exactly (255, 255, 255) and (0, 0, 0) as exactly (0, 0, 0).)
I have tried other colorSpaces but a similar thing happens. Any help or pointers are much appreciated, thanks.
UPDATE
I believe this demonstrates what I'm on about:
//create
NSUInteger arbitraryPixSize = 10;
size_t components = 4;
size_t bitsPerComponent = 8;
size_t bytesPerRow = (arbitraryPixSize * bitsPerComponent * components + 7)/8;
size_t dataLength = bytesPerRow * arbitraryPixSize;
UInt32 *bitmap = malloc( dataLength );
memset( bitmap, 0, dataLength );
CGColorSpaceRef colSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef context = CGBitmapContextCreate ( bitmap, arbitraryPixSize, arbitraryPixSize, bitsPerComponent,bytesPerRow, colSpace, kCGImageAlphaPremultipliedFirst );
CGContextSetFillColorSpace( context, colSpace );
CGContextSetStrokeColorSpace( context, colSpace );
// -- draw something
CGContextSetRGBFillColor( context, 1.0f, 0.0f, 0.0f, 1.0f );
CGContextFillRect( context, CGRectMake( 0, 0, arbitraryPixSize, arbitraryPixSize ) );
// test the first pixel
UInt8 *baseAddr = (UInt8 *)CGBitmapContextGetData(context);
UInt8 alpha = baseAddr[0];
UInt8 red = baseAddr[1];
UInt8 green = baseAddr[2];
UInt8 blue = baseAddr[3];
CGContextRelease(context);
CGColorSpaceRelease(colSpace);
RESULTS
Leopard -> red==255, green==0, blue==0, alpha==255
Snow Leopard -> red==228, green==29, blue==29, alpha==255
Take a look at the docs for CGContextSetRGBFillColor.
CGContextSetRGBFillColor - Sets the current fill color to a value in the DeviceRGB color space.
You wanted your components to be with respect to the generic RGB space. So, use one of the other methods of setting the fill color.
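For example (an untested sketch reusing context, colSpace and arbitraryPixSize from the question's code): either pass raw components, which are interpreted in the fill color space already set with CGContextSetFillColorSpace, or build a CGColor in the generic space explicitly:
// Option 1: components interpreted in the current fill color space (generic RGB here).
const CGFloat comps[] = { 1.0f, 0.0f, 0.0f, 1.0f };
CGContextSetFillColor(context, comps);

// Option 2: create a CGColor in the generic RGB space and fill with it.
CGColorRef genericRed = CGColorCreate(colSpace, comps);
CGContextSetFillColorWithColor(context, genericRed);
CGColorRelease(genericRed);

CGContextFillRect(context, CGRectMake(0, 0, arbitraryPixSize, arbitraryPixSize));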

disabling color correction in quartz 2d

Ok, I know that it's not possible to actually disable color correction in Quartz. What I'm looking for is a device-independent color space setting that doesn't change the RGB values I draw in a CGLayer.
I tried all the ICC profiles from the system library, they all shift the colors.
This is the best result I got:
const CGFloat whitePoint[] = {0.95047, 1.0, 1.08883};
const CGFloat blackPoint[] = {0, 0, 0};
const CGFloat gamma[] = {1, 1, 1};
const CGFloat matrix[] = {0.449695, 0.244634, 0.0251829, 0.316251, 0.672034, 0.141184, 0.18452, 0.0833318, 0.922602 };
CGColorSpaceRef colorSpace = CGColorSpaceCreateCalibratedRGB(whitePoint, blackPoint, gamma, matrix);
This uses Apple RGB's color conversion matrix and D65 white point.
The colors still shift a bit, although I'm a lot happier with this than with the device dependent settings.
Here's how I write the CGLayer to a TIFF:
CIImage *image = [CIImage imageWithCGLayer:cgLayer];
NSBitmapImageRep *bitmapImage = [[NSBitmapImageRep alloc] initWithCIImage:image];
[[bitmapImage TIFFRepresentation] writeToFile:fileName atomically:YES];
Any help would be greatly appreciated.
Why not declare your colours to be part of the same colour space as your destination CGLayer?
The doco for CGColorSpaceCreateDeviceRGB seems to be saying just that:
CGColorSpaceCreateDeviceRGB - Creates a device-dependent RGB color space.
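A sketch of that suggestion (assuming cgLayer is your CGLayer; whether this removes the shift entirely depends on how the layer is eventually composited): create the colours in the device RGB space, so the component values you pass are stored as-is in a device-RGB destination:
CGContextRef layerContext = CGLayerGetContext(cgLayer);
CGColorSpaceRef deviceRGB = CGColorSpaceCreateDeviceRGB();

const CGFloat comps[] = { 0.2f, 0.4f, 0.6f, 1.0f };
CGColorRef color = CGColorCreate(deviceRGB, comps);

CGContextSetFillColorWithColor(layerContext, color);
CGContextFillRect(layerContext, CGRectMake(0, 0, 100, 100));

CGColorRelease(color);
CGColorSpaceRelease(deviceRGB);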
