NSImage to cv::Mat and vice versa - macOS

While working with OpenCV, I need to convert an NSImage to an OpenCV multi-channel 2D matrix (cv::Mat) and vice versa.
What's the best way to do it?
Greets,
Dom

Here's what I ended up with, which works pretty well.
NSImage+OpenCV.h:
//
// NSImage+OpenCV.h
//
#import <AppKit/AppKit.h>
@interface NSImage (NSImage_OpenCV)
+(NSImage*)imageWithCVMat:(const cv::Mat&)cvMat;
-(id)initWithCVMat:(const cv::Mat&)cvMat;
@property(nonatomic, readonly) cv::Mat CVMat;
@property(nonatomic, readonly) cv::Mat CVGrayscaleMat;
@end
NSImage+OpenCV.mm:
//
// NSImage+OpenCV.mm
//
#import "NSImage+OpenCV.h"
static void ProviderReleaseDataNOP(void *info, const void *data, size_t size)
{
return;
}
@implementation NSImage (NSImage_OpenCV)
-(CGImageRef)CGImage
{
CGContextRef bitmapCtx = CGBitmapContextCreate(NULL/*data - pass NULL to let CG allocate the memory*/,
[self size].width,
[self size].height,
8 /*bitsPerComponent*/,
0 /*bytesPerRow - CG will calculate it for you if it's allocating the data. This might get padded out a bit for better alignment*/,
[[NSColorSpace genericRGBColorSpace] CGColorSpace],
kCGBitmapByteOrder32Host|kCGImageAlphaPremultipliedFirst);
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
[self drawInRect:NSMakeRect(0,0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
CGContextRelease(bitmapCtx);
return cgImage;
}
-(cv::Mat)CVMat
{
CGImageRef imageRef = [self CGImage];
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
CGFloat cols = self.size.width;
CGFloat rows = self.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
CGContextRelease(contextRef);
CGImageRelease(imageRef);
return cvMat;
}
-(cv::Mat)CVGrayscaleMat
{
CGImageRef imageRef = [self CGImage];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGFloat cols = self.size.width;
CGFloat rows = self.size.height;
cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNone |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
CGImageRelease(imageRef);
return cvMat;
}
+ (NSImage *)imageWithCVMat:(const cv::Mat&)cvMat
{
return [[[NSImage alloc] initWithCVMat:cvMat] autorelease];
}
- (id)initWithCVMat:(const cv::Mat&)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1)
{
colorSpace = CGColorSpaceCreateDeviceGray();
}
else
{
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGImageRef imageRef = CGImageCreate(cvMat.cols, // Width
cvMat.rows, // Height
8, // Bits per component
8 * cvMat.elemSize(), // Bits per pixel
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
provider, // CGDataProviderRef
NULL, // Decode
false, // Should interpolate
kCGRenderingIntentDefault); // Intent
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:imageRef];
NSImage *image = [[NSImage alloc] init];
[image addRepresentation:bitmapRep];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return image;
}
@end
Example usage:
Just import it like this:
#import "NSImage+OpenCV.h"
And use it like this:
cv::Mat cvMat_test;
NSImage *image = [NSImage imageNamed:@"test.jpg"];
cvMat_test = [image CVMat];
[myImageView setImage:[NSImage imageWithCVMat:cvMat_test]];
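One thing to watch for when feeding the result into OpenCV: the CVMat accessor above produces a 4-channel RGBA matrix, while most OpenCV routines expect BGR channel order. If you need that, a conversion along these lines may be required (the exact constant name depends on your OpenCV version):
cv::Mat bgrMat;
cv::cvtColor(cvMat_test, bgrMat, CV_RGBA2BGR); // cv::COLOR_RGBA2BGR in newer OpenCV releases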

In -(id)initWithCVMat:(const cv::Mat&)cvMat, shouldn't you be adding the representation to self, rather than a new NSImage?
-(id)initWithCVMat:(const cv::Mat *)iMat
{
if(self = [super init]) {
NSData *tData = [NSData dataWithBytes:iMat->data length:iMat->elemSize() * iMat->total()];
CGColorSpaceRef tColorSpace;
if(iMat->elemSize() == 1) {
tColorSpace = CGColorSpaceCreateDeviceGray();
} else {
tColorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef tProvider = CGDataProviderCreateWithCFData((CFDataRef) tData);
CGImageRef tImage = CGImageCreate(
iMat->cols,
iMat->rows,
8,
8 * iMat->elemSize(),
iMat->step[0],
tColorSpace,
kCGImageAlphaNone | kCGBitmapByteOrderDefault,
tProvider,
NULL,
false,
kCGRenderingIntentDefault);
NSBitmapImageRep *tBitmap = [[NSBitmapImageRep alloc] initWithCGImage:tImage];
[self addRepresentation:tBitmap];
[tBitmap release];
CGImageRelease(tImage);
CGDataProviderRelease(tProvider);
CGColorSpaceRelease(tColorSpace);
}
return self;
}

Related

AVFoundation image captured is dark

On OS X I use AVFoundation to capture an image from a USB camera. Everything works fine, but the image I get is darker compared to the live video.
Device capture configuration
-(BOOL)prepareCapture{
captureSession = [[AVCaptureSession alloc] init];
NSError *error;
imageOutput=[[AVCaptureStillImageOutput alloc] init];
NSNumber * pixelFormat = [NSNumber numberWithInt:k32BGRAPixelFormat];
[imageOutput setOutputSettings:[NSDictionary dictionaryWithObject:pixelFormat forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
videoOutput=[[AVCaptureMovieFileOutput alloc] init];
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:MyVideoDevice error:&error];
if (videoInput) {
[captureSession beginConfiguration];
[captureSession addInput:videoInput];
[captureSession setSessionPreset:AVCaptureSessionPresetHigh];
//[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
[captureSession addOutput:imageOutput];
[captureSession addOutput:videoOutput];
[captureSession commitConfiguration];
}
else {
// Handle the failure.
return NO;
}
return YES;
}
Add view for live preview
-(void)settingPreview:(NSView*)View{
// Attach preview to session
previewView = View;
CALayer *previewViewLayer = [previewView layer];
[previewViewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
AVCaptureVideoPreviewLayer *newPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[newPreviewLayer setFrame:[previewViewLayer bounds]];
[newPreviewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];
[previewViewLayer addSublayer:newPreviewLayer];
//[self setPreviewLayer:newPreviewLayer];
[captureSession startRunning];
}
Code to capture the image
-(void)captureImage{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in imageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CFDictionaryRef exifAttachments =
CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
// Do something with the attachments.
}
// Continue as appropriate.
//IMG is a global NSImage
IMG = [self imageFromSampleBuffer:imageSampleBuffer];
[[self delegate] imageReady:IMG];
}];
}
Create an NSImage from the sample buffer data; I think the problem is here:
- (NSImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
//UIImage *image = [UIImage imageWithCGImage:quartzImage];
NSImage * image = [[NSImage alloc] initWithCGImage:quartzImage size:NSZeroSize];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Solution found
The problem was in imageFromSampleBuffer.
I used this code instead and the picture is perfect:
// Continue as appropriate.
//IMG = [self imageFromSampleBuffer:imageSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer) {
CVBufferRetain(imageBuffer);
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
IMG = [[NSImage alloc] initWithSize: [imageRep size]];
[IMG addRepresentation: imageRep];
CVBufferRelease(imageBuffer);
}
Code found in this answer
In my case, you still need to call captureStillImageAsynchronouslyFromConnection: multiple times to force the built-in camera to expose properly.
int primeCount = 8; //YMMV
for (int i = 0; i < primeCount; i++) {
[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {}];
}
[imageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
if (imageBuffer) {
CVBufferRetain(imageBuffer);
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
IMG = [[NSImage alloc] initWithSize: [imageRep size]];
[IMG addRepresentation: imageRep];
}
}];

Create RGB image from each channel

I have 3 files: one with only a red channel, one with only a green channel, and one with only a blue channel. Now I want to combine those 3 images into one, where each image becomes one color channel of the finished image.
How can I do this with Cocoa? I have a solution that works, but it is too slow:
NSBitmapImageRep *rRep = [[rImage representations] objectAtIndex: 0];
NSBitmapImageRep *gRep = [[gImage representations] objectAtIndex: 0];
NSBitmapImageRep *bRep = [[bImage representations] objectAtIndex: 0];
NSBitmapImageRep *finalRep = [rRep copy];
for (NSUInteger i = 0; i < [rRep pixelsWide]; i++) {
for (NSUInteger j = 0; j < [rRep pixelsHigh]; j++) {
CGFloat r = [[rRep colorAtX:i y:j] redComponent];
CGFloat g = [[gRep colorAtX:i y:j] greenComponent];
CGFloat b = [[bRep colorAtX:i y:j] blueComponent];
[finalRep setColor:[NSColor colorWithCalibratedRed:r green:g blue:b alpha:1.0] atX:i y:j];
}
}
NSData *data = [finalRep representationUsingType:NSJPEGFileType properties:[NSDictionary dictionaryWithObject:[NSNumber numberWithDouble:0.7] forKey:NSImageCompressionFactor]];
[data writeToURL:[panel URL] atomically:YES];
The Accelerate.framework provides a function to combine 3 planar images into one destination:
vImageConvert_Planar8toRGB888.
I haven't tried your approach, but the vImage-based method below is quite fast.
I was able to combine three (R, G, B) planes of a 1680x1050 image in ~0.1 s on my Mac. The actual conversion takes ~1/3 of that time; the rest is setup and file I/O.
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
NSDate* start = [NSDate date];
NSURL* redImageURL = [[NSBundle mainBundle] URLForImageResource:@"red"];
NSURL* greenImageURL = [[NSBundle mainBundle] URLForImageResource:@"green"];
NSURL* blueImageURL = [[NSBundle mainBundle] URLForImageResource:@"blue"];
NSData* redImageData = [self newChannelDataFromImageAtURL:redImageURL];
NSData* greenImageData = [self newChannelDataFromImageAtURL:greenImageURL];
NSData* blueImageData = [self newChannelDataFromImageAtURL:blueImageURL];
//We use our "Red" image to measure the dimensions. We assume that all images & the destination have the same size
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)redImageURL, NULL);
NSDictionary* properties = (__bridge NSDictionary*)CGImageSourceCopyPropertiesAtIndex(imageSource, 0, NULL);
CGFloat width = [properties[(id)kCGImagePropertyPixelWidth] doubleValue];
CGFloat height = [properties[(id)kCGImagePropertyPixelHeight] doubleValue];
self.image = [self newImageWithSize:CGSizeMake(width, height) fromRedChannel:redImageData greenChannel:greenImageData blueChannel:blueImageData];
NSLog(#"Combining 3 (R, G, B) planes of size %# took:%fs", NSStringFromSize(CGSizeMake(width, height)), [[NSDate date] timeIntervalSinceDate:start]);
}
- (NSImage*)newImageWithSize:(CGSize)size fromRedChannel:(NSData*)redImageData greenChannel:(NSData*)greenImageData blueChannel:(NSData*)blueImageData
{
vImage_Buffer redBuffer;
redBuffer.data = (void*)redImageData.bytes;
redBuffer.width = size.width;
redBuffer.height = size.height;
redBuffer.rowBytes = [redImageData length]/size.height;
vImage_Buffer greenBuffer;
greenBuffer.data = (void*)greenImageData.bytes;
greenBuffer.width = size.width;
greenBuffer.height = size.height;
greenBuffer.rowBytes = [greenImageData length]/size.height;
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;
size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&redBuffer, &greenBuffer, &blueBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if(result == kvImageNoError)
{
//TODO: If you need color matching, use an appropriate colorspace here
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(dataProvider);
image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
}
- (NSData*)newChannelDataFromImageAtURL:(NSURL*)imageURL
{
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
if(imageSource == NULL){return NULL;}
CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
CFRelease(imageSource);
if(image == NULL){return NULL;}
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image);
CGFloat width = CGImageGetWidth(image);
CGFloat height = CGImageGetHeight(image);
size_t bytesPerRow = CGImageGetBytesPerRow(image);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(image);
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, bitmapInfo);
NSData* data = NULL;
if(NULL != bitmapContext)
{
CGContextDrawImage(bitmapContext, CGRectMake(0.0, 0.0, width, height), image);
CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
if(NULL != imageRef)
{
data = (NSData*)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
}
CGImageRelease(imageRef);
CGContextRelease(bitmapContext);
}
CGImageRelease(image);
return data;
}
Your program creates a very large number of color objects, which is expensive.
Although your program could simply access the image reps' bitmapData, that would require your program to know a lot about bitmap representations.
Before taking that approach, prefer to let Quartz do the heavy lifting: render each image into a CGBitmapContext (e.g. using CGContextDrawImage(gtx, rect, img.CGImage)) and then extract/copy the rendered component values from the result into a destination RGB bitmap.
If your inputs are not multicomponent color models (e.g. they are grayscale), then render in the source color model to save a bunch of CPU time and memory.
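A rough, untested sketch of that Quartz-based approach (rImage, gImage, bImage, width, and height are assumed from the question; error handling and cleanup omitted):
// Renders one source NSImage into a freshly malloc'd 8-bit RGBA pixel buffer (caller frees).
static uint8_t *CopyRGBAPixels(NSImage *source, size_t width, size_t height)
{
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4, rgb,
                                             kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height),
                       [source CGImageForProposedRect:NULL context:nil hints:nil]);
    uint8_t *pixels = malloc(width * height * 4);
    memcpy(pixels, CGBitmapContextGetData(ctx), width * height * 4);
    CGContextRelease(ctx);
    CGColorSpaceRelease(rgb);
    return pixels;
}
// Inside your combining method: take one component from each rendered source.
uint8_t *rPixels = CopyRGBAPixels(rImage, width, height);
uint8_t *gPixels = CopyRGBAPixels(gImage, width, height);
uint8_t *bPixels = CopyRGBAPixels(bImage, width, height);
uint8_t *dstPixels = malloc(width * height * 3);
for (size_t i = 0; i < width * height; i++) {
    dstPixels[i * 3 + 0] = rPixels[i * 4 + 0]; // red from the red image
    dstPixels[i * 3 + 1] = gPixels[i * 4 + 1]; // green from the green image
    dstPixels[i * 3 + 2] = bPixels[i * 4 + 2]; // blue from the blue image
}
// dstPixels can then be wrapped in a CGImage, as in the vImage answer above.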

How to save PNG file from NSImage (retina issues)

I'm doing some operations on images and after I'm done, I want to save the image as PNG on disk. I'm doing the following:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
[image lockFocus] ;
NSBitmapImageRep *imageRepresentation = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0.0, 0.0, image.size.width, image.size.height)] ;
[image unlockFocus] ;
NSData *data = [imageRepresentation representationUsingType:NSPNGFileType properties:nil];
[data writeToFile:path atomically:YES];
}
This code works, but the problem is with Retina Macs: if I print the NSBitmapImageRep object I get a different size and pixel rect, and when my image is saved to disk it's twice the size:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=600x600 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
I tried to force the pixel size to ignore the Retina scale, as I want to preserve the original size:
imageRepresentation.pixelsWide = image.size.width;
imageRepresentation.pixelsHigh = image.size.height;
This time I get the right size when I print the NSBitmapImageRep object, but when I save my file I still get the same issue:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=300x300 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
Any idea how to fix this, and preserve the original pixel size?
If you have an NSImage and want to save it as an image file to the filesystem, you should never use lockFocus! lockFocus creates a new image that is intended for display on the screen and nothing else. Therefore lockFocus uses the properties of the screen: 72 dpi for normal screens and 144 dpi for Retina screens. For what you want I propose the following code:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
CGImageRef cgRef = [image CGImageForProposedRect:NULL
context:nil
hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cgRef];
[newRep setSize:[image size]]; // if you want the same resolution
NSData *pngData = [newRep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:path atomically:YES];
[newRep autorelease];
}
NSImage is resolution aware and uses a HiDPI graphics context when you lockFocus on a system with a Retina screen.
The image dimensions you pass to your NSBitmapImageRep initializer are in points (not pixels). A 150.0-point-wide image therefore uses 300 horizontal pixels in a @2x context.
You could use convertRectToBacking: or backingScaleFactor: to compensate for the @2x context (I didn't try that), or you can use the following NSImage category, which creates a drawing context with explicit pixel dimensions:
@interface NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error;
@end
@implementation NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error
{
BOOL result = YES;
NSImage* scalingImage = [NSImage imageWithSize:[self size] flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
[self drawAtPoint:NSMakePoint(0.0, 0.0) fromRect:dstRect operation:NSCompositeSourceOver fraction:1.0];
return YES;
}];
NSRect proposedRect = NSMakeRect(0.0, 0.0, outputSizePx.width, outputSizePx.height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef cgContext = CGBitmapContextCreate(NULL, proposedRect.size.width, proposedRect.size.height, 8, 4*proposedRect.size.width, colorSpace, kCGBitmapByteOrderDefault|kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
NSGraphicsContext* context = [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext flipped:NO];
CGContextRelease(cgContext);
CGImageRef cgImage = [scalingImage CGImageForProposedRect:&proposedRect context:context hints:nil];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)(URL), kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, cgImage, nil);
if(!CGImageDestinationFinalize(destination))
{
NSDictionary* details = @{NSLocalizedDescriptionKey: @"Error writing PNG image"};
if (error != NULL) {
*error = [NSError errorWithDomain:@"SSWPNGAdditionsErrorDomain" code:10 userInfo:details];
}
result = NO;
}
CFRelease(destination);
return result;
}
@end
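Usage of the category could then look like this (the destination URL and output pixel size below are just placeholders):
NSError *error = nil;
NSURL *outputURL = [NSURL fileURLWithPath:@"/tmp/output.png"]; // placeholder destination
[image writePNGToURL:outputURL outputSizeInPixels:NSMakeSize(300.0, 300.0) error:&error];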
I found this code on the web, and it works on Retina. Pasting it here in the hope that it helps someone.
NSImage *computerImage = [NSImage imageNamed:NSImageNameComputer];
NSInteger size = 256;
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:size
pixelsHigh:size
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:0
bitsPerPixel:0];
[rep setSize:NSMakeSize(size, size)];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[computerImage drawInRect:NSMakeRect(0, 0, size, size) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *data = [rep representationUsingType:NSPNGFileType properties:nil];
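To finish the job, the resulting PNG data just needs to be written out; the path below is only a placeholder:
[data writeToFile:@"/tmp/computer-icon.png" atomically:YES]; // writes the 256x256 PNG to disk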
Just in case anyone stumbles upon this thread: here is an admittedly flawed solution that does the job of saving the image at 1x size (image.size) regardless of device, in Swift:
public func writeToFile(path: String, atomically: Bool = true) -> Bool{
let bitmap = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(self.size.width), pixelsHigh: Int(self.size.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)!
bitmap.size = self.size
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.setCurrentContext(NSGraphicsContext(bitmapImageRep: bitmap))
self.drawAtPoint(CGPoint.zero, fromRect: NSRect.zero, operation: NSCompositingOperation.CompositeSourceOver, fraction: 1.0)
NSGraphicsContext.restoreGraphicsState()
if let imagePGNData = bitmap.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [NSImageCompressionFactor: 1.0]) {
return imagePGNData.writeToFile((path as NSString).stringByStandardizingPath, atomically: atomically)
} else {
return false
}
}
Here's a Swift 5 version based on Heinrich Giesen's answer:
static func saveImage(_ image: NSImage, atUrl url: URL) {
guard
let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)
else { return } // TODO: handle error
let newRep = NSBitmapImageRep(cgImage: cgImage)
newRep.size = image.size // if you want the same size
guard
let pngData = newRep.representation(using: .png, properties: [:])
else { return } // TODO: handle error
do {
try pngData.write(to: url)
}
catch {
print("error saving: \(error)")
}
}
My 2 cents for OS X, including a write routine that handles extensions, plus offscreen image drawing (method 2); one can verify with NSGraphicsContext.currentContextDrawingToScreen().
func createCGImage() -> CGImage? {
//method 1
let image = NSImage(size: NSSize(width: bounds.width, height: bounds.height), flipped: true, drawingHandler: { rect in
self.drawRect(self.bounds)
return true
})
var rect = CGRectMake(0, 0, bounds.size.width, bounds.size.height)
return image.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)
//method 2
if let pdfRep = NSPDFImageRep(data: dataWithPDFInsideRect(bounds)) {
return pdfRep.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)
}
return nil
}
func PDFImageData(filter: QuartzFilter?) -> NSData? {
return dataWithPDFInsideRect(bounds)
}
func bitmapContext() -> NSGraphicsContext? {
var context : NSGraphicsContext? = nil
if let imageRep = NSBitmapImageRep(bitmapDataPlanes: nil,
pixelsWide: Int(bounds.size.width),
pixelsHigh: Int(bounds.size.height), bitsPerSample: 8,
samplesPerPixel: 4, hasAlpha: true, isPlanar: false,
colorSpaceName: NSCalibratedRGBColorSpace,
bytesPerRow: Int(bounds.size.width) * 4,
bitsPerPixel: 32) {
imageRep.size = NSSize(width: bounds.size.width, height: bounds.size.height)
context = NSGraphicsContext(bitmapImageRep: imageRep)
}
return context
}
func writeImageData(view: MyView, destination: NSURL) {
if let dest = CGImageDestinationCreateWithURL(destination, imageUTType, 1, nil) {
let properties = imageProperties
let image = view.createCGImage()!
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue) {
CGImageDestinationAddImage(dest, image, properties)
CGImageDestinationFinalize(dest)
}
}
}

Can NSView be made to use software rendering for CIImages?

From a previous question,
CIImage drawing EXC_BAD_ACCESS,
I learned to work around a CoreImage issue by telling a CIContext to do software rendering. I am now trying to figure out a crasher that happens when AppKit tries to draw an NSImageView that I've set to display a CIImage using the code below:
- (void)setCIImage:(CIImage *)processedImage;
{
NSSize size = [processedImage extent].size;
if (size.width == 0) {
[self setImage:nil];
return;
}
NSData * pixelData = [[OMFaceRecognizer defaultRecognizer] imagePlanarFData:processedImage];
LCDocument * document = [[[self window] windowController] document];
[[NSNotificationCenter defaultCenter] postNotificationName:LCCapturedImageNotification
object:document
userInfo:@{ @"data": pixelData, @"size": [NSValue valueWithSize:size] }];
#if 1
static dispatch_once_t onceToken;
static CGColorSpaceRef colorSpace;
static size_t bytesPerRow;
dispatch_once(&onceToken, ^ {
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
bytesPerRow = size.width * sizeof(float);
});
// For whatever bizarre reason, CoreGraphics uses big-endian floats (!)
const float * data = [[[OMFaceRecognizer defaultRecognizer] byteswapPlanarFData:pixelData
swapInPlace:NO] bytes];
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * size.height, NULL);
CGImageRef renderedImage = CGImageCreate(size.width, size.height, 32, 32, bytesPerRow, colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents, provider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
NSImage * image = [[NSImage alloc] initWithCGImage:renderedImage
size:size];
CGImageRelease(renderedImage);
#else
NSCIImageRep * rep = [NSCIImageRep imageRepWithCIImage:processedImage];
NSImage * image = [[NSImage alloc] initWithSize:size];
[image addRepresentation:rep];
#endif
[self setImage:image];
}
Is there some way to get the NSImageView to use software rendering? I looked around in IB but I did not see anything that looked obviously promising…
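One untested direction, building on the software-renderer workaround mentioned in the question, is to keep the CIImage away from AppKit entirely: pre-render it with a CIContext created with the software-renderer option, then wrap the result in an NSImage, much as the first branch of the code above already does. A minimal sketch:
// Sketch only: render processedImage through a software CIContext before it reaches the view.
NSDictionary *options = @{ (id)kCIContextUseSoftwareRenderer : @YES };
CIContext *softwareContext = [CIContext contextWithOptions:options];
// (On older OS X releases, contextWithCGContext:options: accepts the same option dictionary.)
CGImageRef rendered = [softwareContext createCGImage:processedImage fromRect:[processedImage extent]];
NSImage *image = [[NSImage alloc] initWithCGImage:rendered size:size];
CGImageRelease(rendered);
[self setImage:image];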

Converting CMSampleBufferRef to UIImage

I always get: CGImageCreate: invalid image size: 0 x 0.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos
usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
// Within the group enumeration block, filter to enumerate just videos.
[group setAssetsFilter:[ALAssetsFilter allVideos]];
// For this example, we're only interested in the first item.
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
options:0
usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {
// The end of the enumeration is signaled by asset == nil.
if (alAsset) {
ALAssetRepresentation *representation = [[alAsset defaultRepresentation] retain];
NSURL *url = [representation url];
AVURLAsset *avAsset = [[AVURLAsset URLAssetWithURL:url options:nil] retain];
AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:avAsset error:nil] retain];
NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
AVAssetReaderTrackOutput *assetReaderOutput = [[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil] retain];
if (![assetReader canAddOutput:assetReaderOutput]) {printf("could not read reader output\n");}
[assetReader addOutput:assetReaderOutput];
[assetReader startReading];
CMSampleBufferRef nextBuffer = [assetReaderOutput copyNextSampleBuffer];
UIImage* image = imageFromSampleBuffer(nextBuffer);
}
}];
}
failureBlock: ^(NSError *error) {NSLog(@"No groups");}];
The imageFromSampleBuffer function comes directly from Apple:
UIImage* imageFromSampleBuffer(CMSampleBufferRef nextBuffer) {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(nextBuffer);
printf("total size:%u\n",CMSampleBufferGetTotalSampleSize(nextBuffer));
// Lock the base address of the pixel buffer.
//CVPixelBufferLockBaseAddress(imageBuffer,0);
// Get the number of bytes per row for the pixel buffer.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height.
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
printf("b:%d w:%d h:%d\n",bytesPerRow,width,height);
// Create a device-dependent RGB color space.
static CGColorSpaceRef colorSpace = NULL;
if (colorSpace == NULL) {
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL) {
// Handle the error appropriately.
return nil;
}
}
// Get the base address of the pixel buffer.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the data size for contiguous planes of the pixel buffer.
size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
// Create a Quartz direct-access data provider that uses data we supply.
CGDataProviderRef dataProvider =
CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
// Create a bitmap image from data supplied by the data provider.
CGImageRef cgImage =
CGImageCreate(width, height, 8, 32, bytesPerRow,
colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
dataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
// Create and return an image object to represent the Quartz image.
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return image;
}
I try to get the length and width; basically it prints out the size of the sample buffer, so I know the buffer itself exists, but I get no UIImage.
For AVAssetReaderTrackOutput *assetReaderOutput, pass explicit output settings instead of nil:
NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
[outputSettings setObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
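That dictionary then goes where nil is currently passed, so the reader hands back decoded BGRA pixel buffers rather than compressed samples (a sketch of the hookup, using the variables from the question):
AVAssetReaderTrackOutput *assetReaderOutput = [[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:outputSettings] retain];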
I understand you want to read the first image from all your local videos?
You can use a simpler way to do all of this.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos
usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
// Within the group enumeration block, filter to enumerate just videos.
[group setAssetsFilter:[ALAssetsFilter allVideos]];
// For this example, we're only interested in the first item.
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
options:0
usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {
// The end of the enumeration is signaled by asset == nil.
if (alAsset) {
ALAssetRepresentation *representation = [[alAsset defaultRepresentation] retain];
NSURL *url = [representation url];
AVURLAsset *avAsset = [[AVURLAsset URLAssetWithURL:url options:nil] retain];
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:avAsset];
CMTime thumbTime = CMTimeMakeWithSeconds(1, 30);
NSError *error;
CMTime actualTime;
[imageGenerator setMaximumSize:MAXSIZE];
CGImageRef imageRef = [imageGenerator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
}
}];
}
failureBlock: ^(NSError *error) {NSLog(@"No groups");}];
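From there, if a UIImage is what you are after, the returned CGImageRef can be wrapped and released in the usual way:
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);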
