How can I programmatically render fullscreen OpenGL at a specific resolution?

I am working on an OSX/Cocoa graphics application which (for performance reasons) I would like to render at 640x480 when the user selects "full screen" mode. For what it's worth, the content is a custom NSView which draws using OpenGL.
I understand that rather than actually changing the user's resolution, it's preferable to change the backbuffer (as explained in another SO question here: Programmatically change resolution OS X).
Following that advice, I end up with the two methods below to toggle between fullscreen and windowed. The trouble is that when I go fullscreen, the content does indeed render at 640x480 but is not scaled up (i.e. it appears as if we stayed at the window's resolution and "zoomed" into a 640x480 corner of the render).
I'm probably missing something obvious here. I suppose I could translate the render according to the actual screen resolution to "center" it, but that seems overcomplicated?
- (void)goFullscreen {
    // Bounce if we're already fullscreen
    if (_isFullscreen) { return; }

    // Save original size and position
    NSRect frame = [self.window.contentView frame];
    original_size = frame.size;
    original_position = frame.origin;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:NO], NSFullScreenModeAllScreens,
                             nil];

    // In lieu of changing resolution, we set the backbuffer to 640x480
    GLint dim[2] = {640, 480};
    CGLSetParameter([[self openGLContext] CGLContextObj], kCGLCPSurfaceBackingSize, dim);
    CGLEnable([[self openGLContext] CGLContextObj], kCGLCESurfaceBackingSize);

    // Go fullscreen!
    [self enterFullScreenMode:[NSScreen mainScreen] withOptions:options];
    _isFullscreen = true;
}
- (void)goWindowed {
    // Bounce if we're already windowed
    if (!_isFullscreen) { return; }

    // Reset the backbuffer to the original window size
    GLint dim[2] = {(GLint)original_size.width, (GLint)original_size.height};
    CGLSetParameter([[self openGLContext] CGLContextObj], kCGLCPSurfaceBackingSize, dim);
    CGLEnable([[self openGLContext] CGLContextObj], kCGLCESurfaceBackingSize);

    // Go windowed!
    [self exitFullScreenModeWithOptions:nil];
    [self.window makeFirstResponder:self];
    _isFullscreen = false;
}
Update
Here's how to do something similar to datenwolf's suggestion below, but without using OpenGL (useful for non-GL content).
// Render into a specific size
renderDimensions = NSMakeSize(640, 480);
NSImage *drawIntoImage = [[NSImage alloc] initWithSize:renderDimensions];
[drawIntoImage lockFocus];
[self drawViewOfSize:renderDimensions];
[drawIntoImage unlockFocus];
[self syphonSendImage:drawIntoImage];
// Resize to fit preview area and draw
NSSize newSize = NSMakeSize(self.frame.size.width, self.frame.size.height);
[drawIntoImage setSize: newSize];
[[NSColor blackColor] set];
[self lockFocus];
[NSBezierPath fillRect:self.frame];
[drawIntoImage drawAtPoint:NSZeroPoint fromRect:self.frame operation:NSCompositeCopy fraction:1];
[self unlockFocus];

Use an FBO with a texture of the desired target resolution attached, and render to that FBO/texture at said resolution. Then switch to the main framebuffer and draw a fullscreen quad using the texture you just rendered to. Use whatever magnification filter you like best. If you want to bring out the big guns, you could implement a Lanczos / sinc interpolator in the fragment shader to upscale the intermediary texture.
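A minimal sketch of that two-pass approach, using the legacy GL 2.x / EXT framebuffer calls that fit the era of the question; -drawScene and the _fbo / _tex ivars are hypothetical placeholders for the app's own rendering and state, and the Retina backing scale is ignored for brevity:
// One-time setup: a 640x480 texture with an FBO attached to it.
- (void)setupOffscreenTarget {
    glGenTextures(1, &_tex);
    glBindTexture(GL_TEXTURE_2D, _tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenFramebuffersEXT(1, &_fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, _fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, _tex, 0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

- (void)drawRect:(NSRect)dirtyRect {
    // Pass 1: render the scene into the 640x480 texture.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, _fbo);
    glViewport(0, 0, 640, 480);
    [self drawScene]; // hypothetical: whatever the app actually draws

    // Pass 2: stretch that texture over the whole default framebuffer.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    NSSize size = [self bounds].size;
    glViewport(0, 0, size.width, size.height);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, _tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    [[self openGLContext] flushBuffer];
}
With GL_LINEAR as the magnification filter this gives a plain bilinear stretch; a fancier fragment-shader resampler would slot into pass 2.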

Related

NSImage and PDFImageRep caching still draws at only one resolution

I have an NSImage, initialized with PDF data, created like this:
NSData* data = [view dataWithPDFInsideRect:view.bounds];
slideImage = [[NSImage alloc] initWithData:data];
The slideImage is now the size of the view.
When I try to render the image in an NSImageView, it only draws sharp when the image view is exactly the original size of the image, even if you clear the cache or change the image size. I tried setting the cacheMode to NSImageCacheNever, which also didn't work. The only image rep in the image is the PDF one, and when I render it to a PDF file it shows that it's vector.
As a workaround, I create an NSBitmapImageRep with a different size, call drawInRect: on the original image, put the bitmap representation inside a new NSImage, and render that. It works, but it feels suboptimal:
- (NSBitmapImageRep *)drawToBitmapOfWidth:(NSInteger)width
                                andHeight:(NSInteger)height
                                withScale:(CGFloat)scale
{
    NSBitmapImageRep *bmpImageRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width * scale
                      pixelsHigh:height * scale
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                    bitmapFormat:NSAlphaFirstBitmapFormat
                     bytesPerRow:0
                    bitsPerPixel:0];
    bmpImageRep = [bmpImageRep bitmapImageRepByRetaggingWithColorSpace:
                      [NSColorSpace sRGBColorSpace]];
    [bmpImageRep setSize:NSMakeSize(width, height)];

    NSGraphicsContext *bitmapContext =
        [NSGraphicsContext graphicsContextWithBitmapImageRep:bmpImageRep];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:bitmapContext];
    [self drawInRect:NSMakeRect(0, 0, width, height)
            fromRect:NSZeroRect
           operation:NSCompositeCopy
            fraction:1];
    [NSGraphicsContext restoreGraphicsState];
    return bmpImageRep;
}
- (NSImage *)rasterizedImageForSize:(NSSize)size
{
    NSImage *newImage = [[NSImage alloc] initWithSize:size];
    NSBitmapImageRep *rep = [self drawToBitmapOfWidth:size.width
                                            andHeight:size.height
                                            withScale:1];
    [newImage addRepresentation:rep];
    return newImage;
}
How can I get the PDF to render nicely at any size without resorting to hacks like mine?
The point of NSImage is that you create it at the size (in points) that you want it to be. The backing representation can be vector-based (e.g. PDF), and the NSImage is resolution independent (i.e. it supports different pixels-per-point scales), but the NSImage still has a fixed size (in points).
One of the points of an NSImage is that it will / can add a cached representation to speed up subsequent drawing.
If you need to draw a PDF at multiple sizes and you want to use an NSImage, you're probably best off creating an NSImage for each target size. If you want to, you can keep the NSPDFImageRep around; I don't think it'll save you much.
We tried the following:
NSPDFImageRep *rep = self.representations.lastObject;
return [NSImage imageWithSize:size flipped:NO drawingHandler:^BOOL (NSRect dstRect)
{
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [rep drawInRect:dstRect
           fromRect:NSZeroRect
          operation:NSCompositeCopy
           fraction:1
     respectFlipped:YES
              hints:@{ NSImageHintInterpolation: @(NSImageInterpolationHigh) }];
    return YES;
}];
And that does give you nice results when scaling up, but makes for blurry images when scaling down.
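One way to live with that, following the advice above, is to re-rasterize from the PDF rep at each new target size instead of rescaling one bitmap. A sketch using the asker's rasterizedImageForSize: workaround, where slideImage and imageView are assumed ivars of the enclosing view:
// Re-rasterize whenever the view settles at a new size, so the cached
// bitmap always matches the destination pixel-for-pixel (no blur).
- (void)viewDidEndLiveResize
{
    [super viewDidEndLiveResize];
    imageView.image = [slideImage rasterizedImageForSize:imageView.bounds.size];
}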

Redrawn inset NSShadow on a custom view using the -setClip method

I have an odd problem, related to the answer to this question:
Draw an Inset NSShadow and Inset Stroke
I use this code in the drawRect: method of a custom view. I have exactly this:
- (void)drawRect:(NSRect)rect
{
    // Create and fill the shown path
    NSBezierPath *path = [NSBezierPath bezierPathWithRoundedRect:[self bounds]
                                                         xRadius:4.0f
                                                         yRadius:4.0f];
    [[NSColor colorWithCalibratedWhite:0.8f alpha:0.2f] set];
    [path fill];

    // Save the graphics state for the shadow
    [NSGraphicsContext saveGraphicsState];

    // Set the shown path as the clip
    [path setClip];

    // Create and stroke the shadow
    NSShadow *shadow = [[[NSShadow alloc] init] autorelease];
    [shadow setShadowColor:[NSColor colorWithCalibratedWhite:0.0f alpha:0.8f]];
    [shadow setShadowBlurRadius:2.0];
    [shadow set];
    [path stroke];

    // Restore the graphics state
    [NSGraphicsContext restoreGraphicsState];

    if (highlight && [[self window] firstResponder] == self) {
        NSSetFocusRingStyle(NSFocusRingOnly);
        [[NSBezierPath bezierPathWithRect:[self bounds]] fill];
    }
}
The problem comes when I add a multiline label (either a sibling or a child of my custom view).
When my program's window loses focus and I come back to it, the inner shadow / stroke gets darker; it seems the shadows are superimposed. It's strange because, as said, if my window only has this custom view, everything works fine.
If I comment out the line
[path setClip];
the shadow isn't superimposed anymore, but I don't get the desired effect of rounded corners (similar to NSBox).
I tried a push button instead of the multiline label: losing / regaining window focus causes no problems, but when I click the button the shadow gets superimposed.
I find the problem is similar to this one, but in Cocoa instead of Java:
Java setClip seems to redraw
Thanks for your help!
You should never use -setClip unless you know what you're doing. Use -addClip instead, which intersects with, and therefore respects, the existing clipping path.
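Applied to the drawRect: above, that's a one-line change; a sketch:
// Intersect the rounded rect with whatever clip the view inherited,
// rather than replacing the clip region outright:
[path addClip];   // was: [path setClip];
Because the surrounding saveGraphicsState / restoreGraphicsState pair is already in place, the added clip is discarded after the shadow is drawn, exactly as before.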

How to create a clipping mask from an NSAttributedString?

I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw the attributed string into an NSImage
NSImage *cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];

// Convert the NSImage to a CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
    (CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
I'm not using -[NSImage CGImageForProposedRect:context:hints:] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set up as Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
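If that's the cause, forcing source-over before drawing should fix it; a short sketch, where ctx and imageRect stand in for the view's CGContextRef and destination rect:
// kCGBlendModeNormal is Core Graphics' source-over: the text image's alpha
// blends over the existing view content instead of replacing it wholesale.
CGContextSetBlendMode(ctx, kCGBlendModeNormal);
CGContextDrawImage(ctx, imageRect, img);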

How come IKImageBrowserView can resize images so much faster than I can?

This is my image resize code:
CALayer *newCALayer = [[CALayer layer] retain];
NSImage *image = [[NSImage alloc] initWithData:[NSData dataWithContentsOfFile:path]];
CGImageRef newCGImageFullResolution = [image CGImageForProposedRect:nil context:nil hints:nil];

// Pass 0 for bytesPerRow so CG computes it for the scaled width; reusing the
// full-resolution image's bytesPerRow only happens to work when downscaling.
CGContextRef context = CGBitmapContextCreate(NULL,
                                             drawRect.size.width,
                                             drawRect.size.height,
                                             CGImageGetBitsPerComponent(newCGImageFullResolution),
                                             0,
                                             CGImageGetColorSpace(newCGImageFullResolution),
                                             CGImageGetAlphaInfo(newCGImageFullResolution));
CGContextDrawImage(context,
                   CGRectMake(0, 0, drawRect.size.width, drawRect.size.height),
                   newCGImageFullResolution);
CGImageRef scaledImage = CGBitmapContextCreateImage(context);

newCALayer.contents = (id)scaledImage;
CGImageRelease(scaledImage);
newCALayer.contentsGravity = kCAGravityResizeAspect;
newCALayer.opacity = 0.0;
newCALayer.anchorPoint = CGPointMake(0.0f, 0.0f);
newCALayer.frame = CGRectMake(0.0,
                              0.0,
                              [Singleton sharedSingleton].fullscreenRect.size.width,
                              [Singleton sharedSingleton].fullscreenRect.size.height);
[newCALayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];

// CGImageRelease(newCGImageFullResolution); (bonus points if you can explain
// why I can't release this! I mean, I can release the scaled image ok??)
CGContextRelease(context);
[image release];
I am doing all of this on a background thread in order to preload pictures so my GUI feels snappy. It took some work getting the synchronization and whatnot set up so the CALayers end up in the view.
But I believe the term for describing how fast this is would be "it's a dog".
Compared to IKImageView: that thing flings up thumbnails of images faster than I can scroll.
Does anybody have suggestions for how to handle this better than I am doing it now?
In other words, my problem is that I want a super-fast UX. I believe the way to accomplish this is by preloading things into CALayers (this may be wrong? I tried NSImageView and some ImageKit stuff, but at least CALayer is better than those).
ImageKit is probably using CGImageSourceCreateThumbnailAtIndex() to quickly get an image appropriate to the destination, rather than reading in the entire image file.
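A sketch of that Image I/O route, which decodes a downscaled thumbnail straight from disk instead of decoding the full-resolution image and redrawing it; the 512-pixel limit is an arbitrary example value:
// Build a thumbnail no larger than 512 px on its longest side.
CGImageSourceRef src = CGImageSourceCreateWithURL(
    (CFURLRef)[NSURL fileURLWithPath:path], NULL);
NSDictionary *opts = [NSDictionary dictionaryWithObjectsAndKeys:
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
    [NSNumber numberWithInt:512], (id)kCGImageSourceThumbnailMaxPixelSize,
    nil];
CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(src, 0, (CFDictionaryRef)opts);
newCALayer.contents = (id)thumb;
CGImageRelease(thumb);  // the layer retains the image
CFRelease(src);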
Here:
NSImage *image = [[[NSImage alloc] initWithContentsOfFile:path] autorelease];
[image setScalesWhenResized:YES]; // *
[image setDataRetained:YES]; // *
[image setSize:desiredNewSize];
Then use the image as it is.
As for why your app is slow, run it under Instruments. That will tell you specifically where you are spending the majority of the processor time you use—it may not be in your scaling code after all.
*Since 10.6, these messages do nothing useful and are deprecated, so you can omit them if you are requiring Snow Leopard or later.

Clearing the alpha channel of an NSImage

It can be done by mallocing a temporary bitmap with 32 bits per pixel, clearing the alpha component with a for loop, and finally turning it back into an NSImage again.
I suspect it can be done in a simpler way using a clever combination of NSColor and NSCompositingOperation. Or perhaps the image needs to be composited with itself using drawAtPoint.
My code looks like this.
NSImage *img = /* some image with RGB and alpha */;
NSRect rect = /* some rect inside the image */;
[img lockFocus];
[[NSColor clearColor] set];
NSRectFillUsingOperation(rect, NSCompositeXOR);
[img unlockFocus];
NOTE: Setting the alpha channel to 1 can be done by using blackColor with NSCompositePlusLighter.
What is the secret to clearing the alpha channel?
It won't be fast, but this will work as well:
NSImage *newImage = [[NSImage alloc] initWithSize:[srcImage size]];
[newImage lockFocus];
[[NSColor whiteColor] set];
NSRectFill(NSMakeRect(0,0,[newImage size].width, [newImage size].height));
[srcImage compositeToPoint:NSZeroPoint operation:NSCompositeCopy];
[newImage unlockFocus];
Please read the AppKit release notes on the subject of image mutability. NSImage should basically be treated as immutable.
All of the pixel formats supported in graphics contexts have premultiplied alpha. If the alpha channel is zero, the other channels have to be zero too.
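So with premultiplied formats, "clearing the alpha" necessarily clears the color channels too; if you truly want alpha = 0 with the RGB data preserved, you have to touch the bytes yourself in an unpremultiplied bitmap. A sketch over a hypothetical NSBitmapImageRep known to be 8 bits per sample, 4 samples per pixel, non-planar, RGBA order:
// Zero the alpha byte of every pixel, leaving RGB untouched.
unsigned char *data = [rep bitmapData];
NSInteger bytesPerRow = [rep bytesPerRow];
for (NSInteger y = 0; y < [rep pixelsHigh]; y++) {
    unsigned char *row = data + y * bytesPerRow;
    for (NSInteger x = 0; x < [rep pixelsWide]; x++) {
        row[x * 4 + 3] = 0;  // alpha is the fourth sample in RGBA
    }
}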
