How would I use only Core Graphics to generate a noise texture background? I'm stuck on the noise part, because there is no way to add a noise filter in Core Graphics...
About a year later, I've found the answer:
CGImageRef CGGenerateNoiseImage(CGSize size, CGFloat factor) CF_RETURNS_RETAINED {
    size_t width = (size_t)fabs(size.width);
    size_t height = (size_t)fabs(size.height);
    size_t pixelCount = width * height;   // one byte per pixel (8-bit gray)

    unsigned char *gray = (unsigned char *)malloc(pixelCount);
    srand(124); // fixed seed: the same texture is produced on every call
    for (size_t i = 0; i < pixelCount; ++i)
        gray[i] = (rand() % 256) * factor; // factor in [0, 1] scales the noise amplitude

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef bitmapContext = CGBitmapContextCreate(gray, width, height,
                                                       8, width, colorSpace, kCGImageAlphaNone);
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    CGColorSpaceRelease(colorSpace);
    free(gray);
    return image;
}
This effectively generates a noise image (a deterministic one, since the seed is fixed) that can be drawn using the code from Jason Harwig's answer below.
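For example, a minimal usage sketch (the view, colors, and alpha here are illustrative, not from the original answer), blending the generated noise over a flat background:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Fill a flat background first
    [[UIColor darkGrayColor] setFill];
    UIRectFill(rect);

    // Blend the generated noise on top at low alpha
    CGImageRef noise = CGGenerateNoiseImage(rect.size, 1.0);
    CGContextSaveGState(context);
    CGContextSetBlendMode(context, kCGBlendModeOverlay);
    CGContextSetAlpha(context, 0.1);
    CGContextDrawImage(context, rect, noise);
    CGContextRestoreGState(context);
    CGImageRelease(noise);
}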
Create a noise png, then draw it using an overlay blend.
// draw background
CGContextFillRect(context, ...);

// blend noise on top
CGContextSetBlendMode(context, kCGBlendModeOverlay);
CGImageRef cgImage = [UIImage imageNamed:@"noise"].CGImage;
CGContextDrawImage(context, rect, cgImage);
CGContextSetBlendMode(context, kCGBlendModeNormal);
There is a CIRandomGenerator filter in Core Image as of iOS 6.
But bear in mind that it produces uniform noise, not Gaussian noise (the same is true of the previous answer).
- (UIImage *)linearRandomImage:(CGRect)rect
{
    CIContext *randomContext = [CIContext contextWithOptions:nil];
    CIFilter *randomGenerator = [CIFilter filterWithName:@"CIColorMonochrome"];
    [randomGenerator setDefaults]; // set defaults first, so they don't clobber the input
    [randomGenerator setValue:[[CIFilter filterWithName:@"CIRandomGenerator"] valueForKey:@"outputImage"]
                       forKey:@"inputImage"];
    CIImage *resultImage = [randomGenerator outputImage];
    CGImageRef ref = [randomContext createCGImage:resultImage fromRect:rect];
    UIImage *endImage = [UIImage imageWithCGImage:ref];
    CGImageRelease(ref); // createCGImage:fromRect: returns a +1 reference
    return endImage;
}
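Usage is then just (the size here is arbitrary):

UIImage *noise = [self linearRandomImage:CGRectMake(0, 0, 64, 64)];

Note that CIRandomGenerator's output has infinite extent; the rect passed to createCGImage:fromRect: is what crops it down to a finite image.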
Xcode has the ability to capture OpenGL ES frames from the iPad, and that's great! I would like to extend this functionality and capture an entire OpenGL ES movie of my application. Is there a way to do that?
If it's not possible using Xcode, how can I do it without much effort and without big changes to my code? Thank you very much!
I use a very simple technique, which requires just a few lines of code.
You can capture each OpenGL ES frame into a UIImage using this code:
- (UIImage *)captureScreen {
    NSInteger dataLength = framebufferWidth * framebufferHeight * 4;

    // Allocate pixel buffers
    GLuint *buffer = (GLuint *)malloc(dataLength);
    GLuint *resultsBuffer = (GLuint *)malloc(dataLength);

    // Read data from the framebuffer
    glReadPixels(0, 0, framebufferWidth, framebufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Flip vertically (OpenGL rows are bottom-up, CGImage rows are top-down)
    for (int y = 0; y < framebufferHeight; y++) {
        for (int x = 0; x < framebufferWidth; x++) {
            resultsBuffer[x + y * framebufferWidth] = buffer[x + (framebufferHeight - 1 - y) * framebufferWidth];
        }
    }
    free(buffer);

    // Make a data provider that owns resultsBuffer
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, resultsBuffer, dataLength, releaseScreenshotData);

    // Prep the ingredients
    const int bitsPerComponent = 8;
    const int bitsPerPixel = 4 * bitsPerComponent;
    const int bytesPerRow = 4 * framebufferWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // Make the CGImage
    CGImageRef imageRef = CGImageCreate(framebufferWidth, framebufferHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    // Then make the UIImage from that
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return image;
}
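The release callback passed to CGDataProviderCreateWithData isn't shown above; a minimal implementation just frees the buffer once the provider is done with it:

// Matches CGDataProviderReleaseDataCallback; frees the pixel buffer
// malloc'd in -captureScreen once the provider no longer needs it.
static void releaseScreenshotData(void *info, const void *data, size_t size) {
    free((void *)data);
}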
Then you will capture each frame in your main loop:
- (void)onTimer {
    // Compute and render new frame
    [self update];

    // Recording
    if (recordingMode == RecordingModeMovie) {
        recordingFrameNum++;

        // Save frame as JPEG
        UIImage *image = [self captureScreen];
        NSString *fileName = [NSString stringWithFormat:@"%d.jpg", (int)recordingFrameNum];
        [UIImageJPEGRepresentation(image, 1.0) writeToFile:[basePath stringByAppendingPathComponent:fileName] atomically:NO];
    }
}
At the end you will have tons of JPEG files, which can easily be converted into a movie by Time Lapse Assembler.
If you want a nice 30 FPS movie, hard-fix your simulation step to 1/30.0 s per frame, as sketched below.
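A sketch of what that fixed step means (the method and variable names here are illustrative, not from the answer): while recording, advance the simulation by a constant 1/30 s instead of the measured frame time, so the assembled 30 FPS movie plays back in real time.

- (void)update {
    NSTimeInterval dt;
    if (recordingMode == RecordingModeMovie) {
        // Fixed step while recording: one simulated 1/30 s per saved frame
        dt = 1.0 / 30.0;
    } else {
        // Normal wall-clock step
        NSTimeInterval now = CACurrentMediaTime();
        dt = now - lastFrameTime;
        lastFrameTime = now;
    }
    [self stepSimulationBy:dt]; // hypothetical simulation step
}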
I'm writing an application that operates on black-and-white images. I do this by passing an NSImage object into my method and then making an NSBitmapImageRep from the NSImage. It all works, but quite slowly. Here's my code:
- (NSImage *)skeletonization:(NSImage *)image
{
    int x = 0, y = 0;
    NSUInteger pixelVariable = 0;

    NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithData:[image TIFFRepresentation]];

    [myHelpText setIntValue:[bitmapImageRep pixelsWide]];
    [myHelpText2 setIntValue:[bitmapImageRep pixelsHigh]];

    NSColor *black = [NSColor blackColor];
    NSColor *white = [NSColor whiteColor];

    [myColor set];
    [myColor2 set];

    for (x = 0; x <= [bitmapImageRep pixelsWide]; x++) {
        for (y = 0; y <= [bitmapImageRep pixelsHigh]; y++) {
            // This is only to see if it's working
            [bitmapImageRep setColor:myColor atX:x y:y];
        }
    }

    [myColor release];
    [myColor2 release];

    NSImage *producedImage = [[NSImage alloc] init];
    [producedImage addRepresentation:bitmapImageRep];
    [bitmapImageRep release];

    return [producedImage autorelease];
}
So I tried to use CIImage, but I don't know how to access each pixel by its (x, y) coordinates, which is really important.
Use the representations array property of NSImage to get your NSBitmapImageRep. It should be faster than serializing your image to TIFF and back.
Use the bitmapData property of the NSBitmapImageRep to access the image bytes directly.
e.g.
unsigned char black = 0;
unsigned char white = 255;

NSBitmapImageRep *bitmapImageRep = (NSBitmapImageRep *)[[image representations] firstObject];
// you will need to do checks here to determine the pixel format of your bitmap data
unsigned char *imageData = [bitmapImageRep bitmapData];
int rowBytes = [bitmapImageRep bytesPerRow];
int bpp = [bitmapImageRep bitsPerPixel] / 8;

for (x = 0; x < [bitmapImageRep pixelsWide]; x++) {   // don't use <=
    for (y = 0; y < [bitmapImageRep pixelsHigh]; y++) {
        *(imageData + y * rowBytes + x * bpp)     = black; // red
        *(imageData + y * rowBytes + x * bpp + 1) = black; // green
        *(imageData + y * rowBytes + x * bpp + 2) = black; // blue
        *(imageData + y * rowBytes + x * bpp + 3) = 255;   // alpha
    }
}
You will need to know what pixel format your images use before you can play with their data; look at the bitsPerPixel property of NSBitmapImageRep to help determine whether your image is in RGBA format.
You could be working with a grayscale image, an RGB image, or possibly CMYK; either convert the image to the format you want first, or handle the data in the loop differently, as in the sketch below.
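A rough sketch of the kind of check meant here (the exact branches depend on the images you actually feed in):

// Verify the rep really is packed, non-planar, 8-bit RGBA before
// using the per-channel offsets in the loop above.
if ([bitmapImageRep bitsPerSample] == 8 &&
    [bitmapImageRep samplesPerPixel] == 4 &&
    ![bitmapImageRep isPlanar] &&
    !([bitmapImageRep bitmapFormat] & NSAlphaFirstBitmapFormat)) {
    // Safe to treat bitmapData as packed RGBA
} else {
    // Grayscale, CMYK, planar, or alpha-first data:
    // convert the image first, or index the bytes differently
}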
I have a 32-bit NSBitmapImageRep which has an alpha channel with essentially 1-bit values (the pixels are either on or off).
I want to save this bitmap to an 8-bit PNG file with transparency. If I use the -representationUsingType:properties: method of NSBitmapImageRep and pass in NSPNGFileType, a 32-bit PNG is created, which is not what I want.
I know that 8-bit PNGs can be read, they open in Preview with no problems, but is it possible to write this type of PNG file using any built-in Mac OS X APIs? I'm happy to drop down to Core Image or even QuickTime if necessary. A cursory examination of the CGImage docs didn't reveal anything obvious.
EDIT:
I've started a bounty on this question: if someone can provide working source code that takes a 32-bit NSBitmapImageRep and writes a 256-color PNG with 1-bit transparency, it's yours.
How about pnglib? It's really lightweight and easy to use.
pngnq (and the newer pngquant, which achieves higher quality) has a BSD-style license, so you can just include it in your program. There is no need to spawn it as a separate task.
A great reference for working with the lower-level APIs is Programming With Quartz.
Some of the code below is based on examples from that book.
Note: this is untested code, meant to be a starting point only...
- (NSBitmapImageRep *)convertImageRep:(NSBitmapImageRep *)startingImage {
    CGImageRef anImage = [startingImage CGImage];

    size_t width = CGImageGetWidth(anImage);
    size_t height = CGImageGetHeight(anImage);
    CGRect ctxRect = CGRectMake(0.0, 0.0, width, height);

    // createRGBBitmapContext (below) allocates the backing buffer itself
    CGContextRef bitmapContext = createRGBBitmapContext(width, height, TRUE);
    CGContextDrawImage(bitmapContext, ctxRect, anImage);

    // Now extract the image from the context
    CGImageRef bitmapImage = CGBitmapContextCreateImage(bitmapContext);
    if (!bitmapImage) {
        fprintf(stderr, "Couldn't create the image!\n");
        CGContextRelease(bitmapContext);
        return nil;
    }

    NSBitmapImageRep *newImage = [[NSBitmapImageRep alloc] initWithCGImage:bitmapImage];
    CGImageRelease(bitmapImage);

    // NOTE: the raster buffer calloc'd in createRGBBitmapContext is not freed
    // here; CGBitmapContextCreateImage is copy-on-write over it, so freeing it
    // while the image is alive would be unsafe. Real code should keep the
    // pointer around and free it once the image is released.
    CGContextRelease(bitmapContext);

    return [newImage autorelease];
}
Context Creation Function:
CGContextRef createRGBBitmapContext(size_t width, size_t height, Boolean needsTransparentBitmap)
{
    CGContextRef context;
    size_t bytesPerRow;
    unsigned char *rasterData;

    // Minimum bytes per row is 4 bytes per sample * number of samples,
    // rounded up to the nearest multiple of 16 (macro from the book).
    bytesPerRow = width * 4;
    bytesPerRow = COMPUTE_BEST_BYTES_PER_ROW(bytesPerRow);

    // NOTE: Quartz RGBA bitmap contexts require 8 bits per component;
    // 2-bit components (to get 256 colors directly) are not supported,
    // so any 256-color reduction has to happen when the PNG is encoded.
    int bitsPerComponent = 8;

    // Use 'calloc' so the memory is initialized to 0.
    rasterData = calloc(1, bytesPerRow * height);
    if (rasterData == NULL) {
        fprintf(stderr, "Couldn't allocate the needed amount of memory!\n");
        return NULL;
    }

    // Uses the generic calibrated RGB color space.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    context = CGBitmapContextCreate(rasterData, width, height, bitsPerComponent, bytesPerRow,
                                    colorSpace,
                                    (needsTransparentBitmap ? kCGImageAlphaPremultipliedFirst
                                                            : kCGImageAlphaNoneSkipFirst));
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(rasterData);
        fprintf(stderr, "Couldn't create the context!\n");
        return NULL;
    }

    // Either clear the rect or paint with opaque white.
    if (needsTransparentBitmap) {
        CGContextClearRect(context, CGRectMake(0, 0, width, height));
    } else {
        CGContextSaveGState(context);
        CGContextSetFillColorWithColor(context, getRGBOpaqueWhiteColor()); // helper from the book
        CGContextFillRect(context, CGRectMake(0, 0, width, height));
        CGContextRestoreGState(context);
    }
    return context;
}
Usage would be:

NSBitmapImageRep *startingImage; // assumed to be previously set
NSBitmapImageRep *endingImageRep = [self convertImageRep:startingImage];

// Write out as PNG data
NSData *outputData = [endingImageRep representationUsingType:NSPNGFileType properties:nil];

// somePath is set elsewhere
[outputData writeToFile:somePath atomically:YES];
One thing to try would be creating a NSBitmapImageRep with 8 bits, then copying the data to it.
This would actually be a lot of work, as you would have to compute the color index table yourself.
CGImageDestination is your man for low-level image writing, but I don't know if it supports that specific ability.
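For reference, basic CGImageDestination usage looks like the sketch below; as far as I can tell it gives no direct control over writing an 8-bit indexed PNG, it simply encodes whatever pixel format the CGImage already has.

#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h> // for kUTTypePNG

// Writes a CGImage to disk as a PNG via ImageIO.
static BOOL writeImageAsPNG(CGImageRef image, NSURL *url) {
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL((CFURLRef)url, kUTTypePNG, 1, NULL);
    if (!dest)
        return NO;
    CGImageDestinationAddImage(dest, image, NULL);
    BOOL ok = CGImageDestinationFinalize(dest);
    CFRelease(dest);
    return ok;
}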
I have created an NSImage object and ideally would like to determine how many pixels of each colour it contains. Is this possible?
This code renders the NSImage into a CGBitmapContext:
- (void)updateImageData {
    if (!_image)
        return;

    // Dimensions - source image determines context size
    NSSize imageSize = _image.size;
    NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);

    // Create a context to hold the image data
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             imageSize.width,
                                             imageSize.height,
                                             8,
                                             0,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);

    // Wrap graphics context
    NSGraphicsContext *gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];

    // Make our bitmap context current and render the NSImage into it
    [NSGraphicsContext setCurrentContext:gctx];
    [_image drawInRect:imageRect];

    // Calculate the histogram
    [self computeHistogramFromBitmap:ctx];

    // Clean up
    [NSGraphicsContext setCurrentContext:nil];
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
}
Given a bitmap context, we can access the raw image data directly, and compute the histograms for each colour channel:
- (void)computeHistogramFromBitmap:(CGContextRef)bitmap {
    // NB: Assumes RGBA 8bpp
    size_t width = CGBitmapContextGetWidth(bitmap);
    size_t height = CGBitmapContextGetHeight(bitmap);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmap); // rows may be padded
    uint8_t *baseAddress = (uint8_t *)CGBitmapContextGetData(bitmap);

    for (unsigned y = 0; y < height; y++)
    {
        // Step by bytesPerRow, not width * 4, to honor any row padding
        uint32_t *pixel = (uint32_t *)(baseAddress + y * bytesPerRow);
        for (unsigned x = 0; x < width; x++)
        {
            uint32_t rgba = *pixel;

            // Extract colour components (little-endian RGBA byte order)
            uint8_t red   = (rgba & 0x000000ff) >> 0;
            uint8_t green = (rgba & 0x0000ff00) >> 8;
            uint8_t blue  = (rgba & 0x00ff0000) >> 16;

            // Accumulate each colour
            _histogram[kRedChannel][red]++;
            _histogram[kGreenChannel][green]++;
            _histogram[kBlueChannel][blue]++;

            // Next pixel!
            pixel++;
        }
    }
}
@end
I've published a complete project, a Cocoa sample app, which includes the above.
https://github.com/gavinb/CocoaImageHistogram.git
I suggest creating your own bitmap context, wrapping it in a graphics context and setting that as the current context, telling the image to draw itself, and then accessing the pixel data behind the bitmap context directly.
This will be more code, but will save you both a trip through a TIFF representation and the creation of thousands or millions of NSColor objects. If you're working with images of any appreciable size, these expenses will add up quickly.
Get an NSBitmapImageRep from your NSImage. Then you can get access to the pixels.
NSImage* img = ...;
NSBitmapImageRep* raw_img = [NSBitmapImageRep imageRepWithData:[img TIFFRepresentation]];
NSColor* color = [raw_img colorAtX:0 y:0];
Look for "histogram" in the Core Image documentation.
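The relevant filter is CIAreaHistogram; a minimal, untested sketch (the 256-bin count and scale are arbitrary, and cgImage is assumed to exist):

// Produces a 256x1 CIImage whose pixels hold the per-bin counts;
// render it with a CIContext and read the bytes back to get numbers.
CIImage *input = [CIImage imageWithCGImage:cgImage];
CIFilter *histogram = [CIFilter filterWithName:@"CIAreaHistogram"];
[histogram setValue:input forKey:kCIInputImageKey];
[histogram setValue:[CIVector vectorWithCGRect:[input extent]] forKey:@"inputExtent"];
[histogram setValue:[NSNumber numberWithInt:256] forKey:@"inputCount"];
[histogram setValue:[NSNumber numberWithFloat:1.0] forKey:@"inputScale"];
CIImage *histogramImage = [histogram outputImage];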
Using colorAtX:y: with NSBitmapImageRep does not always give exactly the right color.
I managed to get the correct color with this simple code:
[yourImage lockFocus]; // yourImage is just your NSImage variable
NSColor *pixelColor = NSReadPixel(NSMakePoint(1, 1)); // Or another point
[yourImage unlockFocus];
This may be a more streamlined approach for some, and it reduces the complexity of dropping into manual memory management.
https://github.com/koher/EasyImagy
Code sample
https://github.com/koher/EasyImagyCameraSample
import EasyImagy
let image = Image<RGBA<UInt8>>(nsImage: NSImage(named: "test.png")!) // N.B. init with an NSImage instance
print(image[x, y])
image[x, y] = RGBA(red: 255, green: 0, blue: 0, alpha: 127)
image[x, y] = RGBA(0xFF00007F) // red: 255, green: 0, blue: 0, alpha: 127
// Iterates over all pixels
for pixel in image {
// ...
}
// Gets a pixel by subscripts
let pixel = image[x, y]
// Sets a pixel by subscripts
image[x, y] = RGBA(0xFF0000FF)
image[x, y].alpha = 127
// Safe get for a pixel
if let pixel = image.pixelAt(x: x, y: y) {
print(pixel.red)
print(pixel.green)
print(pixel.blue)
print(pixel.alpha)
print(pixel.gray) // (red + green + blue) / 3
print(pixel) // formatted like "#FF0000FF"
} else {
// `pixel` is safe: `nil` is returned when out of bounds
print("Out of bounds")
}
OK, I know that it's not possible to actually disable color correction in Quartz. What I'm looking for is a device-independent color space setting that doesn't change the RGB values I draw in a CGLayer.
I tried all the ICC profiles from the system library, they all shift the colors.
This is the best result I got:
const CGFloat whitePoint[] = {0.95047, 1.0, 1.08883};
const CGFloat blackPoint[] = {0, 0, 0};
const CGFloat gamma[] = {1, 1, 1};
const CGFloat matrix[] = {0.449695, 0.244634, 0.0251829, 0.316251, 0.672034, 0.141184, 0.18452, 0.0833318, 0.922602 };
CGColorSpaceRef colorSpace = CGColorSpaceCreateCalibratedRGB(whitePoint, blackPoint, gamma, matrix);
This uses Apple RGB's color conversion matrix and D65 white point.
The colors still shift a bit, although I'm a lot happier with this than with the device dependent settings.
Here's how I write the CGLayer to a TIFF:
CIImage *image = [CIImage imageWithCGLayer:cgLayer];
NSBitmapImageRep *bitmapImage = [[NSBitmapImageRep alloc] initWithCIImage:image];
[[bitmapImage TIFFRepresentation] writeToFile:fileName atomically:YES];
Any help would be greatly appreciated.
Why not declare your colours to be part of the same colour space as your destination CGLayer?
The documentation for CGColorSpaceCreateDeviceRGB seems to be saying just that.
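For example, a sketch of that idea (layerContext stands in for your CGLayer's context):

// Create the fill color directly in device RGB, so no color matching
// occurs when drawing into a device-RGB destination.
CGColorSpaceRef deviceRGB = CGColorSpaceCreateDeviceRGB();
const CGFloat components[] = { 1.0, 0.0, 0.0, 1.0 }; // opaque red
CGColorRef red = CGColorCreate(deviceRGB, components);
CGContextSetFillColorWithColor(layerContext, red);
CGContextFillRect(layerContext, CGRectMake(0, 0, 100, 100));
CGColorRelease(red);
CGColorSpaceRelease(deviceRGB);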