How do I create an 8-bit PNG with transparency from an NSBitmapImageRep? - cocoa

I have a 32-bit NSBitmapImageRep which has an alpha channel with essentially 1-bit values (the pixels are either on or off).
I want to save this bitmap to an 8-bit PNG file with transparency. If I use the -representationUsingType:properties: method of NSBitmapImageRep and pass in NSPNGFileType, a 32-bit PNG is created, which is not what I want.
I know that 8-bit PNGs can be read, they open in Preview with no problems, but is it possible to write this type of PNG file using any built-in Mac OS X APIs? I'm happy to drop down to Core Image or even QuickTime if necessary. A cursory examination of the CGImage docs didn't reveal anything obvious.
EDIT:
I've started a bounty on this question, if someone can provide working source code that takes a 32-bit NSBitmapImageRep and writes a 256-color PNG with 1-bit transparency, it's yours.

How about pnglib? It's really lightweight and easy to use.

pngnq (and the newer pngquant, which achieves higher quality) has a BSD-style license, so you can just include it in your program. There's no need to spawn it as a separate task.

A great reference for working with lower-level APIs is Programming With Quartz; some of the code below is based on examples from that book.
Note: this is untested code, meant to be a starting point only...
- (NSBitmapImageRep *)convertImageRep:(NSBitmapImageRep *)startingImage {
    CGImageRef anImage = [startingImage CGImage];
    size_t width = CGImageGetWidth(anImage);
    size_t height = CGImageGetHeight(anImage);
    CGRect ctxRect = CGRectMake(0.0, 0.0, width, height);

    // Create a lower-depth context and redraw the source image into it.
    CGContextRef bitmapContext = createRGBBitmapContext(width, height, TRUE);
    if (!bitmapContext) {
        return nil;
    }
    CGContextDrawImage(bitmapContext, ctxRect, anImage);

    // Now extract the image from the context.
    CGImageRef bitmapImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    if (!bitmapImage) {
        fprintf(stderr, "Couldn't create the image!\n");
        return nil;
    }

    NSBitmapImageRep *newImage = [[NSBitmapImageRep alloc] initWithCGImage:bitmapImage];
    CGImageRelease(bitmapImage);
    return [newImage autorelease]; // assuming manual reference counting
}
Context Creation Function:
CGContextRef createRGBBitmapContext(size_t width, size_t height, Boolean needsTransparentBitmap)
{
    CGContextRef context;
    size_t bytesPerRow;
    unsigned char *rasterData;

    // Minimum bytes per row is 4 bytes per pixel * width.
    bytesPerRow = width * 4;
    // Round up to the nearest multiple of 16 (macro from Programming With Quartz).
    bytesPerRow = COMPUTE_BEST_BYTES_PER_ROW(bytesPerRow);

    // 2 bits per component x 4 components (RGBA) = 8 bits per pixel, i.e. 256
    // possible values per pixel. Note that CGBitmapContextCreate only accepts a
    // limited set of pixel formats, so this combination may be rejected.
    int bitsPerComponent = 2;

    // Use 'calloc' so the memory is initialized to 0.
    rasterData = calloc(1, bytesPerRow * height);
    if (rasterData == NULL) {
        fprintf(stderr, "Couldn't allocate the needed amount of memory!\n");
        return NULL;
    }

    // Uses the generic RGB color space.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    context = CGBitmapContextCreate(rasterData, width, height, bitsPerComponent, bytesPerRow,
                                    colorSpace,
                                    (needsTransparentBitmap ? kCGImageAlphaPremultipliedFirst
                                                            : kCGImageAlphaNoneSkipFirst));
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(rasterData);
        fprintf(stderr, "Couldn't create the context!\n");
        return NULL;
    }

    // Either clear the rect or paint it with opaque white.
    if (needsTransparentBitmap) {
        CGContextClearRect(context, CGRectMake(0, 0, width, height));
    } else {
        CGContextSaveGState(context);
        CGContextSetFillColorWithColor(context, getRGBOpaqueWhiteColor());
        CGContextFillRect(context, CGRectMake(0, 0, width, height));
        CGContextRestoreGState(context);
    }
    return context;
}
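The COMPUTE_BEST_BYTES_PER_ROW macro and getRGBOpaqueWhiteColor() helper come from Programming With Quartz and aren't shown above; minimal stand-ins (my assumptions, not the book's exact code) might look like this:
// Round a row length up to a multiple of 16 bytes.
#define COMPUTE_BEST_BYTES_PER_ROW(bpr) ( ((bpr) + 15) & ~15 )

// Lazily-created opaque white color in the generic RGB color space.
static CGColorRef getRGBOpaqueWhiteColor(void) {
    static CGColorRef white = NULL;
    if (white == NULL) {
        CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        const CGFloat components[4] = { 1.0, 1.0, 1.0, 1.0 }; // R, G, B, A
        white = CGColorCreate(space, components);
        CGColorSpaceRelease(space);
    }
    return white;
}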
Usage would be:
NSBitmapImageRep *startingImage; // assumed to be previously set.
NSBitmapImageRep *endingImageRep = [self convertImageRep:startingImage];
// Write out as data
NSData *outputData = [endingImageRep representationUsingType:NSPNGFileType properties:nil];
// somePath is set elsewhere
[outputData writeToFile:somePath atomically:YES];

One thing to try would be creating an NSBitmapImageRep with 8 bits per pixel, then copying the data into it.
This would actually be a lot of work, as you would have to compute the color index table yourself (a rough sketch of that step follows).
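A hypothetical helper of mine (not a complete solution) showing what "compute the color index table yourself" might involve, handling only images that already contain at most 256 distinct colors:
// Collect up to 256 unique RGBA values from a 32-bit pixel buffer and map each
// pixel to its palette index. Images with more distinct colors would need a
// real quantizer (e.g. pngquant's algorithm).
static BOOL buildIndexedImage(const uint32_t *pixels, size_t count,
                              uint32_t palette[256], uint8_t *indexes,
                              size_t *paletteCountOut)
{
    size_t paletteCount = 0;
    for (size_t i = 0; i < count; i++) {
        uint32_t rgba = pixels[i];
        size_t j;
        for (j = 0; j < paletteCount; j++) {
            if (palette[j] == rgba) break;       // already in the palette
        }
        if (j == paletteCount) {
            if (paletteCount == 256) return NO;  // too many colors; needs real quantization
            palette[paletteCount++] = rgba;
        }
        indexes[i] = (uint8_t)j;
    }
    *paletteCountOut = paletteCount;
    return YES;
}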

CGImageDestination is your man for low-level image writing, but I don't know if it supports that specific ability.
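For what it's worth, a minimal sketch of writing an existing CGImage to a PNG file with CGImageDestination might look like this (the writePNGImage helper name is mine; it writes whatever pixel format the CGImage already has, so it doesn't by itself produce an 8-bit indexed PNG):
#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h> // for kUTTypePNG

// Write a CGImage to disk as a PNG using ImageIO.
static BOOL writePNGImage(CGImageRef image, NSURL *url)
{
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((CFURLRef)url, kUTTypePNG, 1, NULL);
    if (!destination)
        return NO;
    CGImageDestinationAddImage(destination, image, NULL);
    BOOL success = CGImageDestinationFinalize(destination);
    CFRelease(destination);
    return success;
}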

Related

Export OpenGL ES video

Xcode has the ability to capture OpenGL ES frames from the iPad, and that's great! I would like to extend this functionality and capture an entire OpenGL ES movie of my application. Is there a way to do that?
If it's not possible using Xcode, how can I do it without much effort and big changes to my code? Thank you very much!
I use a very simple technique, which requires just a few lines of code.
You can capture each OpenGL frame into a UIImage using this code:
- (UIImage *)captureScreen {
    NSInteger dataLength = framebufferWidth * framebufferHeight * 4;

    // Allocate buffers.
    GLuint *buffer = (GLuint *)malloc(dataLength);
    GLuint *resultsBuffer = (GLuint *)malloc(dataLength);

    // Read the framebuffer pixels.
    glReadPixels(0, 0, framebufferWidth, framebufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Flip vertically (OpenGL's origin is bottom-left, UIKit's is top-left).
    for (int y = 0; y < framebufferHeight; y++) {
        for (int x = 0; x < framebufferWidth; x++) {
            resultsBuffer[x + y * framebufferWidth] = buffer[x + (framebufferHeight - 1 - y) * framebufferWidth];
        }
    }
    free(buffer);

    // Make a data provider from the flipped pixels; releaseScreenshotData frees them later.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, resultsBuffer, dataLength, releaseScreenshotData);

    // Prep the ingredients.
    const int bitsPerComponent = 8;
    const int bitsPerPixel = 4 * bitsPerComponent;
    const int bytesPerRow = 4 * framebufferWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // Make the CGImage.
    CGImageRef imageRef = CGImageCreate(framebufferWidth, framebufferHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    // Then make the UIImage from that.
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
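The releaseScreenshotData callback passed to CGDataProviderCreateWithData above isn't shown in the answer; a minimal version (my assumption about its job) simply frees the buffer once Quartz is done with it:
// Called by Quartz when the CGDataProvider no longer needs the pixel data.
static void releaseScreenshotData(void *info, const void *data, size_t size) {
    free((void *)data);
}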
Then you will capture each frame in your main loop:
- (void)onTimer {
    // Compute and render the new frame.
    [self update];

    // Recording
    if (recordingMode == RecordingModeMovie) {
        recordingFrameNum++;

        // Save the frame as a JPEG.
        UIImage *image = [self captureScreen];
        NSString *fileName = [NSString stringWithFormat:@"%d.jpg", (int)recordingFrameNum];
        [UIImageJPEGRepresentation(image, 1.0) writeToFile:[basePath stringByAppendingPathComponent:fileName] atomically:NO];
    }
}
At the end you will have tons of JPEG files, which can easily be converted into a movie with
Time Lapse Assembler.
If you want a nice 30 FPS movie, hard-code your simulation step to 1/30.0 of a second per frame.

Decode values ignored in CGImageCreate

I am creating a monochrome image with the following code:
CGColorSpaceRef cgColorSpace = CGColorSpaceCreateDeviceGray();
CGImageRef cgImage = CGImageCreate(width, height, 1, 1, rowBytes, cgColorSpace, 0, dataProvider, decodeValues, NO, kCGRenderingIntentDefault);
where decodeValues is an array of two CGFloats, equal to {0, 1}. This gives me a fine image, but apparently my data (which comes from a PDF image mask) is black-on-white instead of white-on-black. To invert the image, I tried setting decodeValues to {1, 0}, but this did not change anything at all. In fact, whatever nonsensical values I put into decodeValues, I get the same image.
Why is decodeValues ignored here? How do I invert black and white?
Here's some code for creating and drawing a mono image. It's the same as yours but with more context (and without the necessary cleanup):
size_t width = 200;
size_t height = 200;

size_t bitsPerComponent = 1;
size_t componentsPerPixel = 1;
size_t bitsPerPixel = bitsPerComponent * componentsPerPixel;
size_t bytesPerRow = (width * bitsPerPixel + 7) / 8;

CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGBitmapInfo bitmapInfo = kCGImageAlphaNone;
CGFloat decode[] = {0.0, 1.0};

size_t dataLength = bytesPerRow * height;
UInt32 *bitmap = malloc(dataLength);
memset(bitmap, 255, dataLength);

CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, bitmap, dataLength, NULL);

CGImageRef cgImage = CGImageCreate(width,
                                   height,
                                   bitsPerComponent,
                                   bitsPerPixel,
                                   bytesPerRow,
                                   colorspace,
                                   bitmapInfo,
                                   dataProvider,
                                   decode,
                                   false,
                                   kCGRenderingIntentDefault);

// `context` is the destination CGContextRef (e.g. the current drawing context).
CGRect destRect = CGRectMake(0, 0, width, height);
CGContextDrawImage(context, destRect, cgImage);
If I change the decode array to CGFloat decode[] = {0.0, 0.0}; I always get a black image.
If you have tried that and it didn't have any effect (you say you get the same image whatever values you use), then either you aren't actually passing in the values you think you are, or you somehow aren't examining the output of CGImageCreate.
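If the decode array genuinely has no effect in your setup, one workaround (my suggestion, not part of the original answer) is to invert the 1-bit mask data itself before creating the image:
// Flip every bit in the buffer so black-on-white becomes white-on-black.
// Assumes the `bitmap` buffer and `dataLength` from the snippet above.
uint8_t *bytes = (uint8_t *)bitmap;
for (size_t i = 0; i < dataLength; i++) {
    bytes[i] = (uint8_t)~bytes[i];
}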

Gaussian Noise using Core Graphics only?

How would I use only Core Graphics to generate a noise texture background? I'm stuck on the noise part because there is no way to add a noise filter in Core Graphics...
About a year later, I've found the answer:
CGImageRef CGGenerateNoiseImage(CGSize size, CGFloat factor) CF_RETURNS_RETAINED {
    // One byte per pixel in an 8-bit grayscale bitmap.
    NSUInteger pixelCount = fabs(size.width) * fabs(size.height);
    char *pixels = (char *)malloc(pixelCount);

    srand(124); // fixed seed, so the pattern is reproducible
    for (NSUInteger i = 0; i < pixelCount; ++i)
        pixels[i] = (rand() % 256) * factor;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef bitmapContext = CGBitmapContextCreate(pixels, fabs(size.width), fabs(size.height),
                                                       8, fabs(size.width), colorSpace, kCGImageAlphaNone);
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);

    CGContextRelease(bitmapContext);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return image;
}
This generates a pseudo-random noise image (the seed is fixed, so the pattern is reproducible) that can be drawn using the code from Jason Harwig's answer.
Create a noise png, then draw it using an overlay blend.
// draw background
CGContextFillRect(context, ...)
// blend noise on top
CGContextSetBlendMode(context, kCGBlendModeOverlay);
CGImageRef cgImage = [UIImage imageNamed:@"noise"].CGImage;
CGContextDrawImage(context, rect, cgImage);
CGContextSetBlendMode(context, kCGBlendModeNormal);
There is a CIRandomGenerator filter in Core Image as of iOS 6.
But bear in mind that this produces uniform noise, not Gaussian noise (and neither does the previous answer).
- (UIImage *)linearRandomImage:(CGRect)rect
{
    CIContext *randomContext = [CIContext contextWithOptions:nil];

    CIFilter *randomGenerator = [CIFilter filterWithName:@"CIColorMonochrome"];
    [randomGenerator setValue:[[CIFilter filterWithName:@"CIRandomGenerator"] valueForKey:@"outputImage"]
                       forKey:@"inputImage"];
    [randomGenerator setDefaults];

    CIImage *resultImage = [randomGenerator outputImage];
    CGImageRef ref = [randomContext createCGImage:resultImage fromRect:rect];
    UIImage *endImage = [UIImage imageWithCGImage:ref];
    CGImageRelease(ref); // createCGImage:fromRect: returns a retained image
    return endImage;
}

32 bit Grayscale Image with CGBitmapContextCreate always Black

I'm using the following code to display a 32 bit Grayscale image. Even if I explicitly set every pixel to be 4294967297 (which ought to be white), the end result is always black. What am I doing wrong here? The image is just 64x64 pixels.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

ptr = (float *)malloc(4 * xDim * yDim);
for (i = 0; i < yDim; i++)
{
    for (j = 0; j < xDim; j++)
    {
        ptr[i * xDim + j] = 4294967297;
    }
}

CGContextRef bitmapContext = CGBitmapContextCreate(
    ptr,
    xDim,
    yDim,
    32,
    4 * xDim,
    colorSpace,
    kCGImageAlphaNone | kCGBitmapFloatComponents);

//ptr = CGBitmapContextGetData(bitmapContext);
//NSLog(@"%ld", sizeof(float));

CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);

NSRect drawRect;
drawRect.origin = NSMakePoint(1.0, 1.0);
drawRect.size.width = 64;
drawRect.size.height = 64;

NSImage *greyscale = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
[greyscale drawInRect:drawRect
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];
If you don't specify the endianness, Quartz will default to big-endian. If you are on an Intel Mac, this will be wrong. You will need to explicitly set the endianness, and the best way to do that is to change your flags to:
kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host
This will work properly regardless of your CPU (for future compatibility!).
You can find more detail here: http://lists.apple.com/archives/Quartz-dev/2011/Dec/msg00001.html
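Putting it together, a sketch of a context that should come out white (assuming xDim and yDim as in the question) would be:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

// Float grayscale values run from 0.0 (black) to 1.0 (white).
float *ptr = (float *)malloc(sizeof(float) * xDim * yDim);
for (size_t i = 0; i < (size_t)xDim * yDim; i++) {
    ptr[i] = 1.0f;
}

CGContextRef bitmapContext = CGBitmapContextCreate(
    ptr,
    xDim,
    yDim,
    32,                    // bits per component
    sizeof(float) * xDim,  // bytes per row
    colorSpace,
    kCGImageAlphaNone | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host);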
You are using a float component image. Make sure ptr has type float* and try setting values to 0.5f instead of 4294967297.
Are you showing the exact code you are using?
2^32 - 1 = 4294967295
If you are using 4294967297, I suspect you are getting overflow and an actual value of 1!
32-bit floating-point grayscale is not supported by CGBitmapContextCreate; see the table of supported pixel formats in the CGBitmapContextCreate documentation.

Get pixels and colours from NSImage

I have created an NSImage object and ideally would like to determine how many pixels of each colour it contains. Is this possible?
This code renders the NSImage into a CGBitmapContext:
- (void)updateImageData {
    if (!_image)
        return;

    // Dimensions - source image determines context size.
    NSSize imageSize = _image.size;
    NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);

    // Create a context to hold the image data.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             imageSize.width,
                                             imageSize.height,
                                             8,
                                             0,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);

    // Wrap the graphics context.
    NSGraphicsContext *gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];

    // Make our bitmap context current and render the NSImage into it.
    [NSGraphicsContext setCurrentContext:gctx];
    [_image drawInRect:imageRect];

    // Calculate the histogram.
    [self computeHistogramFromBitmap:ctx];

    // Clean up.
    [NSGraphicsContext setCurrentContext:nil];
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
}
Given a bitmap context, we can access the raw image data directly, and compute the histograms for each colour channel:
- (void)computeHistogramFromBitmap:(CGContextRef)bitmap {
    // NB: Assumes RGBA 8bpp
    size_t width = CGBitmapContextGetWidth(bitmap);
    size_t height = CGBitmapContextGetHeight(bitmap);
    uint32_t *pixel = (uint32_t *)CGBitmapContextGetData(bitmap);

    for (unsigned y = 0; y < height; y++)
    {
        for (unsigned x = 0; x < width; x++)
        {
            uint32_t rgba = *pixel;

            // Extract colour components
            uint8_t red   = (rgba & 0x000000ff) >> 0;
            uint8_t green = (rgba & 0x0000ff00) >> 8;
            uint8_t blue  = (rgba & 0x00ff0000) >> 16;

            // Accumulate each colour
            _histogram[kRedChannel][red]++;
            _histogram[kGreenChannel][green]++;
            _histogram[kBlueChannel][blue]++;

            // Next pixel!
            pixel++;
        }
    }
}
@end
I've published a complete project, a Cocoa sample app, which includes the above.
https://github.com/gavinb/CocoaImageHistogram.git
I suggest creating your own bitmap context, wrapping it in a graphics context and setting that as the current context, telling the image to draw itself, and then accessing the pixel data behind the bitmap context directly.
This will be more code, but will save you both a trip through a TIFF representation and the creation of thousands or millions of NSColor objects. If you're working with images of any appreciable size, these expenses will add up quickly.
Get an NSBitmapImageRep from your NSImage. Then you can get access to the pixels.
NSImage* img = ...;
NSBitmapImageRep* raw_img = [NSBitmapImageRep imageRepWithData:[img TIFFRepresentation]];
NSColor* color = [raw_img colorAtX:0 y:0];
Look for "histogram" in the Core Image documentation.
Using colorAtX:y: with NSBitmapImageRep does not always return exactly the right color.
I managed to get the correct color with this simple code:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];
This may be a more streamlined approach for some, and it reduces the complexity of dropping into manual memory management.
https://github.com/koher/EasyImagy
Code sample
https://github.com/koher/EasyImagyCameraSample
import EasyImagy
let image = Image<RGBA<UInt8>>(nsImage: NSImage(named: "test.png")!) // N.B. init with an NSImage
print(image[x, y])
image[x, y] = RGBA(red: 255, green: 0, blue: 0, alpha: 127)
image[x, y] = RGBA(0xFF00007F) // red: 255, green: 0, blue: 0, alpha: 127
// Iterates over all pixels
for pixel in image {
// ...
}
// Gets a pixel by subscripts
let pixel = image[x, y]
// Sets a pixel by subscripts
image[x, y] = RGBA(0xFF0000FF)
image[x, y].alpha = 127
// Safe get for a pixel
if let pixel = image.pixelAt(x: x, y: y) {
print(pixel.red)
print(pixel.green)
print(pixel.blue)
print(pixel.alpha)
print(pixel.gray) // (red + green + blue) / 3
print(pixel) // formatted like "#FF0000FF"
} else {
// `pixel` is safe: `nil` is returned when out of bounds
print("Out of bounds")
}
