NSImage with NSPDFImageRep caching still draws at only one resolution

I have an NSImage, initialized with PDF data, created like this:
NSData* data = [view dataWithPDFInsideRect:view.bounds];
slideImage = [[NSImage alloc] initWithData:data];
The slideImage is now the size of the view.
When I render the image in an NSImageView, it only draws sharply when the image view is exactly the original size of the image, even if I clear the cache or change the image size. I also tried setting the cacheMode to NSImageCacheNever, which didn't help. The only image rep in the image is the PDF one, and when I render the image back out to a PDF file, it is still vector.
As a workaround, I create an NSBitmapImageRep at a different size, call drawInRect: on the original image, wrap the bitmap representation in a new NSImage, and render that. It works, but it feels suboptimal:
- (NSBitmapImageRep *)drawToBitmapOfWidth:(NSInteger)width
                                andHeight:(NSInteger)height
                                withScale:(CGFloat)scale
{
    NSBitmapImageRep *bmpImageRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width * scale
                      pixelsHigh:height * scale
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                    bitmapFormat:NSAlphaFirstBitmapFormat
                     bytesPerRow:0
                    bitsPerPixel:0];
    bmpImageRep = [bmpImageRep bitmapImageRepByRetaggingWithColorSpace:
                      [NSColorSpace sRGBColorSpace]];
    // Size is in points; the pixel dimensions above provide the backing scale.
    [bmpImageRep setSize:NSMakeSize(width, height)];
    NSGraphicsContext *bitmapContext =
        [NSGraphicsContext graphicsContextWithBitmapImageRep:bmpImageRep];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:bitmapContext];
    [self drawInRect:NSMakeRect(0, 0, width, height)
            fromRect:NSZeroRect
           operation:NSCompositeCopy
            fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
    return bmpImageRep;
}
- (NSImage *)rasterizedImageForSize:(NSSize)size
{
    NSImage *newImage = [[NSImage alloc] initWithSize:size];
    NSBitmapImageRep *rep = [self drawToBitmapOfWidth:size.width
                                            andHeight:size.height
                                            withScale:1];
    [newImage addRepresentation:rep];
    return newImage;
}
How can I get the PDF to render nicely at any size without resorting to hacks like mine?

The point of NSImage is that you create it with the size (in points) that you want it to be. The backing representation can be vector based (e.g. PDF), and the NSImage is resolution independent (i.e. it supports different pixels per point), but the NSImage still has a fixed size (in points).
One of the points of an NSImage is that it can add a cached representation to speed up subsequent drawing.
If you need to draw a PDF at multiple sizes and you want to use an NSImage, you're probably best off creating an NSImage for your given target size. If you want to, you can keep the NSPDFImageRep around -- I don't think it'll save you much.
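A minimal sketch of that approach, assuming ARC, a targetSize in points, and that data is the PDF data from dataWithPDFInsideRect: above:
// Attach the vector rep to a fresh NSImage at the desired point size;
// NSImage scales the rep to its own size when drawing.
NSPDFImageRep *pdfRep = [NSPDFImageRep imageRepWithData:data];
NSImage *sizedImage = [[NSImage alloc] initWithSize:targetSize];
[sizedImage addRepresentation:pdfRep];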

We tried the following:
NSPDFImageRep *rep = self.representations.lastObject;
return [NSImage imageWithSize:size flipped:NO drawingHandler:^BOOL (NSRect dstRect)
{
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [rep drawInRect:dstRect
           fromRect:NSZeroRect
          operation:NSCompositeCopy
           fraction:1.0
     respectFlipped:YES
              hints:@{ NSImageHintInterpolation: @(NSImageInterpolationHigh) }];
    return YES;
}];
And that does give you nice results when scaling up, but makes for blurry images when scaling down.
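One way around the downscaling blur is to rasterize the PDF rep at the exact backing pixel size of the view, so no bitmap scaling happens at all. A sketch, assuming the drawToBitmapOfWidth:andHeight:withScale: category method from the question, an NSImageView subclass, and a hypothetical slideImage property:
- (void)viewDidEndLiveResize
{
    [super viewDidEndLiveResize];
    NSSize target = self.bounds.size;
    CGFloat scale = self.window.backingScaleFactor ?: 1.0;
    // Re-rasterize the vector image at exactly the pixels we will display.
    NSBitmapImageRep *rep = [self.slideImage drawToBitmapOfWidth:target.width
                                                       andHeight:target.height
                                                       withScale:scale];
    NSImage *bitmapImage = [[NSImage alloc] initWithSize:target];
    [bitmapImage addRepresentation:rep];
    self.image = bitmapImage;
}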

Related

What's the preferable way to draw a collection of lines in an NSView?

Let's say there's a collection of lines in some instance variable in my NSView instance. This collection stays the same over time. How would you draw the collection?
I currently draw to an NSImage when the collection is set up, and override -drawRect: to display the image's contents. Is there a better way?
I've tested two options so far:
Draw to an NSImage when the collection is set up, and override -drawRect: to display the image's contents.
Draw the list of lines in -drawRect: itself.
Benchmarks show the image drawing to be much faster. If I resize my window and it has, say, 10,000 line segments, the resize is incredibly slow with the second option, but not with the first.
What's the preferable (e.g. most efficient) strategy?
Example code:
- (void)drawSegments:(NSSize)size segments:(segment *)segments count:(int)nb_segments {
    // linePixels is an instance variable: a one-element plane array pointing
    // at the pixel buffer; pixels is its pixel count (width * height).
    NSBitmapImageRep *lineRepresentation = [self bitmapForPixels:linePixels];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep:lineRepresentation]];
    // Clear the buffer (4 bytes per pixel) before drawing.
    memset(*linePixels, 0, pixels * 4);
    NSBezierPath *path = [NSBezierPath bezierPath];
    for (int i = 0; i < nb_segments; i++) {
        [path moveToPoint:NSMakePoint(segments[i].x1, segments[i].y1)];
        [path lineToPoint:NSMakePoint(segments[i].x2, segments[i].y2)];
    }
    [[NSColor blackColor] setStroke];
    [path stroke];
    [lineRepresentation release];
}
- (NSBitmapImageRep *)bitmapForPixels:(int **)pixels {
    NSBitmapImageRep *representation = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:(unsigned char **)pixels
                      pixelsWide:SCREEN_SIZE.width
                      pixelsHigh:SCREEN_SIZE.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                    bitmapFormat:NSAlphaFirstBitmapFormat
                     bytesPerRow:4 * SCREEN_SIZE.width
                    bitsPerPixel:32];
    return representation;
}
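For comparison, the first option can also be implemented without raw pixel buffers. A minimal sketch of the cached-image strategy, assuming manual reference counting and instance variables lineCache, segments, and nb_segments:
// Rebuild the cache whenever the segment collection changes.
- (void)rebuildLineCache
{
    [lineCache release];
    lineCache = [[NSImage alloc] initWithSize:self.bounds.size];
    [lineCache lockFocus];
    NSBezierPath *path = [NSBezierPath bezierPath];
    for (int i = 0; i < nb_segments; i++) {
        [path moveToPoint:NSMakePoint(segments[i].x1, segments[i].y1)];
        [path lineToPoint:NSMakePoint(segments[i].x2, segments[i].y2)];
    }
    [[NSColor blackColor] setStroke];
    [path stroke];
    [lineCache unlockFocus];
}

// -drawRect: then only blits the cached image, which stays fast on resize.
- (void)drawRect:(NSRect)dirtyRect
{
    [lineCache drawInRect:self.bounds
                 fromRect:NSZeroRect
                operation:NSCompositeSourceOver
                 fraction:1.0];
}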

How can I programmatically render fullscreen OpenGL at a specific resolution?

I am working on an OS X/Cocoa graphics application which (for performance reasons) I would like to render at 640x480 when the user selects "full screen" mode. For what it's worth, the content is a custom NSView which draws using OpenGL.
I understand that rather than actually changing the user's resolution, it's preferable to change the backbuffer (as explained in another SO question here: Programmatically change resolution OS X).
Following that advice, I end up with the following two methods (see below) to toggle between fullscreen and windowed. The trouble is that when I go fullscreen, the content does indeed render at 640x480 but is not scaled (i.e. it appears as if we stayed at the window's resolution and "zoomed" into a 640x480 corner of the render).
I'm probably missing something obvious here. I suppose I could translate the render according to the actual screen resolution to "center" it, but that seems overcomplicated?
- (void)goFullscreen {
    // Bounce if we're already fullscreen
    if (_isFullscreen) { return; }
    // Save original size and position
    NSRect frame = [self.window.contentView frame];
    original_size = frame.size;
    original_position = frame.origin;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:NO], NSFullScreenModeAllScreens,
                                 nil];
    // In lieu of changing resolution, we set the backbuffer to 640x480
    GLint dim[2] = {640, 480};
    CGLSetParameter([[self openGLContext] CGLContextObj], kCGLCPSurfaceBackingSize, dim);
    CGLEnable([[self openGLContext] CGLContextObj], kCGLCESurfaceBackingSize);
    // Go fullscreen!
    [self enterFullScreenMode:[NSScreen mainScreen] withOptions:options];
    _isFullscreen = true;
}

- (void)goWindowed {
    // Bounce if we're already windowed
    if (!_isFullscreen) { return; }
    // Reset backbuffer
    GLint dim[2] = {original_size.width, original_size.height};
    CGLSetParameter([[self openGLContext] CGLContextObj], kCGLCPSurfaceBackingSize, dim);
    CGLEnable([[self openGLContext] CGLContextObj], kCGLCESurfaceBackingSize);
    // Go windowed!
    [self exitFullScreenModeWithOptions:nil];
    [self.window makeFirstResponder:self];
    _isFullscreen = false;
}
Update
Here's how to do something similar to datenwolf's suggestion below, but without OpenGL (useful for non-GL content).
// Render into a specific size
renderDimensions = NSMakeSize(640, 480);
NSImage *drawIntoImage = [[NSImage alloc] initWithSize:renderDimensions];
[drawIntoImage lockFocus];
[self drawViewOfSize:renderDimensions];
[drawIntoImage unlockFocus];
[self syphonSendImage:drawIntoImage];

// Resize to fit preview area and draw
NSSize newSize = NSMakeSize(self.frame.size.width, self.frame.size.height);
[drawIntoImage setSize:newSize];
[[NSColor blackColor] set];
[self lockFocus];
[NSBezierPath fillRect:self.frame];
[drawIntoImage drawAtPoint:NSZeroPoint fromRect:self.frame operation:NSCompositeCopy fraction:1];
[self unlockFocus];
Use an FBO with a texture of the desired target resolution attached, and render to that FBO/texture at said resolution. Then switch to the main framebuffer and draw a fullscreen quad using the texture you just rendered to. Use whatever magnification filter you like best. If you want to bring out the big guns, you could implement a Lanczos/sinc interpolator in the fragment shader to upscale the intermediary texture.
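A minimal sketch of that two-pass approach, assuming a context where ARB framebuffer objects are available; scene drawing and quad drawing are elided, and screenWidth/screenHeight are placeholders:
// Setup (once): create a 640x480 texture and attach it to an FBO.
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// Pass 1 (each frame): render the scene at the low resolution.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 640, 480);
// ... draw the scene ...

// Pass 2: switch to the window framebuffer and draw a fullscreen
// textured quad at the real display size.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, screenWidth, screenHeight);
glBindTexture(GL_TEXTURE_2D, tex);
// ... draw a fullscreen quad with (0,0)-(1,1) texture coordinates ...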

NSBitmapImageRep generated BMP can't be read on Windows

I have an NSBitmapImageRep that I am creating the following way:
+ (NSBitmapImageRep *)bitmapRepOfImage:(NSURL *)imageURL {
    CIImage *anImage = [CIImage imageWithContentsOfURL:imageURL];
    CGRect outputExtent = [anImage extent];
    NSBitmapImageRep *theBitMapToBeSaved = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:outputExtent.size.width
                      pixelsHigh:outputExtent.size.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];
    NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitMapToBeSaved];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];
    CGPoint p = CGPointMake(0.0, 0.0);
    [[nsContext CIContext] drawImage:anImage atPoint:p fromRect:outputExtent];
    [NSGraphicsContext restoreGraphicsState];
    return [[theBitMapToBeSaved retain] autorelease];
}
I then save it as a BMP this way:
NSBitmapImageRep *original = [imageTools bitmapRepOfImage:fileURL];
NSData *converted = [original representationUsingType:NSBMPFileType properties:nil];
[converted writeToFile:filePath atomically:YES];
The thing is that the BMP file can be read and manipulated correctly under Mac OS X, but under Windows it just fails to load, as in this screenshot:
Screenshot: http://dl.dropbox.com/u/1661304/Grab/74a6dadb770654213cdd9290f0131880.png
If the file is opened with MS Paint (yes, MS Paint can open it) and then resaved, though, it will work.
Would appreciate a hand here. :)
Thanks in advance.
I think the main reason your code is failing is that you are creating your NSBitmapImageRep with 0 bits per pixel. That means your image rep will have precisely zero information in it. You almost certainly want 32 bits per pixel.
However, your code is an unbelievably convoluted way to obtain an NSBitmapImageRep from an image file on disk. Why on earth are you using a CIImage? That is a Core Image object designed for use with Core Image filters and makes no sense here at all. You should be using an NSImage or CGImageRef.
Your method is also poorly named. It should instead be named something like +bitmapRepForImageFileAtURL: to better indicate what it is doing.
Also, this code makes no sense:
[[theBitMapToBeSaved retain] autorelease]
Calling retain and then autorelease does nothing, because all it does is increment the retain count and then decrement it again immediately.
You are responsible for releasing theBitMapToBeSaved because you created it using alloc. Since it is being returned, you should call autorelease on it. Your additional retain call just causes a leak for no reason.
Try this:
+ (NSBitmapImageRep *)bitmapRepForImageFileAtURL:(NSURL *)imageURL
{
    NSImage *image = [[[NSImage alloc] initWithContentsOfURL:imageURL] autorelease];
    return [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
}

+ (NSData *)BMPDataForImageFileAtURL:(NSURL *)imageURL
{
    NSBitmapImageRep *bitmap = [self bitmapRepForImageFileAtURL:imageURL];
    return [bitmap representationUsingType:NSBMPFileType properties:nil];
}
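Usage is then a two-liner (a sketch; ImageTools is a stand-in for whatever class hosts these class methods):
NSData *bmpData = [ImageTools BMPDataForImageFileAtURL:fileURL];
[bmpData writeToFile:filePath atomically:YES];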
You really need to review the Cocoa Drawing Guide and the Memory Management Guidelines, because it appears that you are having trouble with some basic concepts.

image scaling performance in Quartz/Cocoa vs Qt4

I wrote a test application in Qt4 which uses the QImage.scaled() or QPixmap.scaled() methods, and they turned out to be very slow. Even a perspective transform is faster, while a scaling transform is just as slow.
[I tried to scale a QPainter directly, but I haven't mastered paintEvent(), so I always get "painter not active" or paintEvent() is not called at all. So I don't know the painter's scaling performance.]
I'm asking here whether the same issue is known for Quartz/Cocoa, or whether their performance for similar tasks is better. I am particularly interested in Quartz's native PDF rendering capability and subsequent image scaling.
NIRTimer *timer = [NIRTimer timer];
[timer start];
NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"filename"];
NSImage *scaledImage = [[NSImage alloc] initWithSize:NSMakeSize(720, 480)];
[scaledImage lockFocus];
[image drawInRect:NSMakeRect(0, 0, 720, 480) fromRect:NSZeroRect operation:NSCompositeSourceAtop fraction:1];
[scaledImage unlockFocus];
[image release];
[scaledImage release];
NSLog(@"time: %ld", [timer microseconds]);
This is how to scale an image in Cocoa, and it takes 80000 microseconds (0.08 seconds).
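NIRTimer appears to be a custom timing class; if you don't have it, a self-contained sketch of the same measurement using CFAbsoluteTimeGetCurrent (from CoreFoundation) would be:
CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
// ... scaling code from above ...
CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - start;
NSLog(@"time: %.0f microseconds", elapsed * 1e6);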

Clearing the alpha channel of an NSImage

It can be done by mallocing a temporary bitmap with 32 bits per pixel, clearing the alpha component with a for loop, and finally turning it back into an NSImage again.
I suspect it can be done in a simpler way using a clever combination of NSColor and NSCompositingOperation. Or perhaps the image needs to be composited with itself using drawAtPoint.
My code looks like this:
NSImage *img = /* some image with RGB and alpha */;
NSRect rect = /* some rect inside the image */;
[img lockFocus];
[[NSColor clearColor] set];
NSRectFillUsingOperation(rect, NSCompositeXOR);
[img unlockFocus];
NOTE: Setting the alpha channel to 1 can be done by using blackColor with NSCompositePlusLighter.
What is the secret to clearing the alpha channel?
It won't be fast, but this will work as well:
NSImage *newImage = [[NSImage alloc] initWithSize:[srcImage size]];
[newImage lockFocus];
[[NSColor whiteColor] set];
NSRectFill(NSMakeRect(0,0,[newImage size].width, [newImage size].height));
[srcImage compositeToPoint:NSZeroPoint operation:NSCompositeCopy];
[newImage unlockFocus];
Please read the AppKit release notes on the subject of image mutability. NSImage should basically be treated as immutable.
All of the pixel formats supported in graphics contexts have premultiplied alpha. If the alpha channel is zero, the other channels have to be zero too.
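Given that premultiplication constraint, the brute-force per-pixel loop the question mentions is arguably the honest approach for forcing alpha to a known value. A minimal sketch, assuming a meshed (non-planar) rep with 8 bits per sample and alpha as the last of four samples:
NSBitmapImageRep *rep = /* 8-bit, 4-sample, meshed RGBA rep */;
unsigned char *data = [rep bitmapData];
NSInteger bytesPerRow = [rep bytesPerRow];
NSInteger w = [rep pixelsWide];
NSInteger h = [rep pixelsHigh];
for (NSInteger y = 0; y < h; y++) {
    unsigned char *row = data + y * bytesPerRow;
    for (NSInteger x = 0; x < w; x++) {
        row[x * 4 + 3] = 255; // force alpha opaque; index 3 assumes alpha-last
    }
}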
