I am having trouble drawing an NSCIImageRep that I obtain via a QTKit mCaptureDecompressedVideoOutput.
As I do not want to draw the image using OpenGL, I attempted to subclass NSView and draw the image there:
- (void)drawRect:(NSRect)dirtyRect
{
    NSLog(@"DrawInRect");
    CGContextRef myContext = [[NSGraphicsContext currentContext] graphicsPort];
    if (imageRep != nil)
    {
        CGImageRef image = [imageRep CGImageForProposedRect:&dirtyRect context:[NSGraphicsContext currentContext] hints:nil];
        CGContextDrawImage(myContext, dirtyRect, image);
        CGImageRelease(image);
    }
}
imageRep points to the NSCIImageRep I obtain from the mCaptureDecompressedVideoOutput delegate callback:
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputVideoFrame:(CVImageBufferRef)videoFrame withSampleBuffer:(QTSampleBuffer *)sampleBuffer fromConnection:(QTCaptureConnection *)connection
This code crashes my machine. Any suggestions?
You don't need to go via CGImageRef just to draw an NSCIImageRep. Just ask it for a CIImage and draw that:
CIImage *anImage = [yourNSCIImageRep CIImage];
[anImage drawAtPoint:NSZeroPoint
            fromRect:NSRectFromCGRect([anImage extent])
           operation:NSCompositeSourceOver
            fraction:1.0];
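Put together, the whole drawRect: then reduces to something like this (a rough sketch; imageRep here is assumed to be the NSCIImageRep ivar from the question):
- (void)drawRect:(NSRect)dirtyRect
{
    if (imageRep != nil) {
        CIImage *ciImage = [imageRep CIImage];
        [ciImage drawAtPoint:NSZeroPoint
                    fromRect:NSRectFromCGRect([ciImage extent])
                   operation:NSCompositeSourceOver
                    fraction:1.0];
    }
}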
How can I shift an NSImage horizontally so that the shifted pixels wrap around to the other side, making it look like a loop?
Currently I am using drawInRect:. Is there a CIFilter or a smarter way to do this?
- (CIImage *)image:(NSImage *)image shiftedBy:(CGFloat)shiftAmount
{
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                                     pixelsWide:width
                                                                     pixelsHigh:height
                                                                  bitsPerSample:8
                                                                samplesPerPixel:4
                                                                       hasAlpha:YES
                                                                       isPlanar:NO
                                                                 colorSpaceName:NSDeviceRGBColorSpace
                                                                    bytesPerRow:0
                                                                   bitsPerPixel:0];
    [rep setSize:NSMakeSize(width, height)];

    [NSGraphicsContext saveGraphicsState];
    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:rep];
    [NSGraphicsContext setCurrentContext:context];

    CGRect rect0 = CGRectMake(0, 0, width, height);
    CGRect leftSourceRect, rightSourceRect;
    CGRectDivide(rect0, &leftSourceRect, &rightSourceRect, shiftAmount, CGRectMinXEdge);

    CGRect rightDestinationRect = CGRectOffset(leftSourceRect, width - rightSourceRect.origin.x, 0);
    CGRect leftDestinationRect = rightSourceRect;
    leftDestinationRect.origin.x = 0;

    [image drawInRect:leftDestinationRect fromRect:rightSourceRect operation:NSCompositingOperationSourceOver fraction:1.0];
    [image drawInRect:rightDestinationRect fromRect:leftSourceRect operation:NSCompositingOperationSourceOver fraction:1.0];

    [NSGraphicsContext restoreGraphicsState];
    return [[CIImage alloc] initWithBitmapImageRep:rep];
}
I tried it with a CIFilter, but it is 3-4x slower. The code is more readable, though.
- (CIImage *)image:(NSImage *)image shiftXBy:(CGFloat)shiftX YBy:(CGFloat)shiftY
{
    // Ideally avoid calling TIFFRepresentation here, because the performance hit is even bigger.
    CIImage *ciImage = [[CIImage alloc] initWithData:[image TIFFRepresentation]];
    CGAffineTransform xform = CGAffineTransformIdentity;
    NSValue *xformObj = [NSValue valueWithBytes:&xform objCType:@encode(CGAffineTransform)];
    ciImage = [ciImage imageByApplyingFilter:@"CIAffineTile"
                         withInputParameters:@{kCIInputTransformKey : xformObj}];
    ciImage = [ciImage imageByCroppingToRect:CGRectMake(shiftX, shiftY, image.size.width, image.size.height)];
    return ciImage;
}
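If the same NSImage gets shifted repeatedly (as in an animation), one way to soften that hit is to convert it to a CIImage once and cache it; a rough sketch, using a hypothetical _cachedCIImage ivar:
// Hypothetical cache to avoid converting the NSImage on every call.
- (CIImage *)cachedCIImageForImage:(NSImage *)image
{
    if (_cachedCIImage == nil) {
        _cachedCIImage = [[CIImage alloc] initWithData:[image TIFFRepresentation]];
    }
    return _cachedCIImage;
}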
For optimum performance you need to use CALayer. Here’s the basic concept:
Your NSView subclass should have wantsLayer and layerUsesCoreImageFilters set to YES.
Assign your image to the contents property of the view's layer (or add a new sublayer).
Create a CIAffineTile filter and add it to the layer.
Now you can change the filter's values without reloading or redrawing the image, and it is all hardware-accelerated.
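A rough sketch of that setup, assuming an NSView subclass with an image property (the names here are illustrative, not from the original code):
- (void)setUpTilingLayer
{
    self.wantsLayer = YES;
    self.layerUsesCoreImageFilters = YES;
    self.layer.contents = self.image;   // AppKit accepts an NSImage as layer contents

    CIFilter *tileFilter = [CIFilter filterWithName:@"CIAffineTile"];
    [tileFilter setDefaults];
    tileFilter.name = @"tile";          // so the filter can be addressed by key path
    self.layer.filters = @[tileFilter];
}

// Shifting then becomes a matter of updating the filter's transform on the layer:
- (void)setShiftX:(CGFloat)shiftX
{
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:shiftX yBy:0];
    [self.layer setValue:transform forKeyPath:@"filters.tile.inputTransform"];
}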
I have a custom NSView that lives inside an NSScrollView, which in turn sits in an NSSplitView. The custom view uses the following drawing code:
- (void)drawRect:(NSRect)dirtyRect {
    NSGraphicsContext *ctx = [NSGraphicsContext currentContext];
    [ctx saveGraphicsState];

    // Rounded rect
    NSRect rect = [self bounds];
    NSRect pathRect = NSMakeRect(rect.origin.x + 3, rect.origin.y + 6, rect.size.width - 6, rect.size.height - 6);
    NSBezierPath *path = [NSBezierPath bezierPathWithRoundedRect:pathRect cornerRadius:kDefaultCornerRadius];

    // Shadow
    [NSShadow setShadowWithColor:[NSColor colorWithCalibratedWhite:0 alpha:0.66]
                      blurRadius:4.0
                          offset:NSMakeSize(0, -3)];
    [[NSColor colorWithCalibratedWhite:0.196 alpha:1.0] set];
    [path fill];
    [NSShadow clearShadow];

    // Background gradient
    NSGradient *gradient = [[NSGradient alloc] initWithStartingColor:[UAColor darkBlackColor] endingColor:[UAColor lightBlackColor]];
    [gradient drawInBezierPath:path angle:90.0];
    [gradient release];

    // Image
    [path setClip];
    NSRect imageRect = NSMakeRect(pathRect.origin.x, pathRect.origin.y, pathRect.size.height * kLargeImageRatio, pathRect.size.height);
    [self.image drawInRect:imageRect
                  fromRect:NSZeroRect
                 operation:NSCompositeSourceAtop
                  fraction:1.0];

    [ctx restoreGraphicsState];
    [super drawRect:dirtyRect];
}
I have tried every compositing operation, but the image still draws on top of the other half of the NSSplitView like so:
…instead of drawing under the NSScrollView. I think this has to do with drawing everything instead of only the dirtyRect, but I don't know how to change the image-drawing code so that it only draws the part that lies in the dirtyRect. How can I either prevent the image from drawing on top, or draw only its dirty portion?
I finally got it. I don't know if it is optimal yet; I will find out when I do performance testing. I just had to compute the intersection of the image rect and the dirty rect with NSIntersectionRect, then work out which part of the NSImage to pass to drawInRect:fromRect:operation:fraction:.
Here is the important part:
NSRect imageRect = NSMakeRect(pathRect.origin.x, pathRect.origin.y, pathRect.size.height * kLargeImageRatio, pathRect.size.height);
[self.image setSize:imageRect.size];

NSRect intersectionRect = NSIntersectionRect(dirtyRect, imageRect);
NSRect fromRect = NSMakeRect(intersectionRect.origin.x - imageRect.origin.x,
                             intersectionRect.origin.y - imageRect.origin.y,
                             intersectionRect.size.width,
                             intersectionRect.size.height);

[self.image drawInRect:intersectionRect
              fromRect:fromRect
             operation:NSCompositeSourceOver
              fraction:1.0];
I have a project that needs to draw text with a gradient fill in a custom subclass of NSView, like the example below.
I'm wondering how I can achieve this, as I'm pretty new to Cocoa drawing.
Try creating an alpha mask from the text, then drawing an NSGradient clipped to that mask. Here's a simple example based on the linked code:
- (void)drawRect:(NSRect)rect
{
    // Create a grayscale context for the mask
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
    CGContextRef maskContext = CGBitmapContextCreate(NULL,
                                                     self.bounds.size.width,
                                                     self.bounds.size.height,
                                                     8,
                                                     self.bounds.size.width,
                                                     colorspace,
                                                     0);
    CGColorSpaceRelease(colorspace);

    // Switch to the mask context for drawing
    NSGraphicsContext *maskGraphicsContext = [NSGraphicsContext graphicsContextWithGraphicsPort:maskContext
                                                                                         flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:maskGraphicsContext];

    // Draw the text right-way-up (non-flipped context)
    [text drawInRect:rect
      withAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
                         [NSFont fontWithName:@"HelveticaNeue-Bold" size:124], NSFontAttributeName,
                         [NSColor whiteColor], NSForegroundColorAttributeName,
                         nil]];

    // Switch back to the window's context
    [NSGraphicsContext restoreGraphicsState];

    // Create an image mask from what we've drawn so far
    CGImageRef alphaMask = CGBitmapContextCreateImage(maskContext);

    // Draw a white background in the window
    CGContextRef windowContext = [[NSGraphicsContext currentContext] graphicsPort];
    [[NSColor whiteColor] setFill];
    CGContextFillRect(windowContext, rect);

    // Draw the gradient, clipped by the mask
    CGContextSaveGState(windowContext);
    CGContextClipToMask(windowContext, NSRectToCGRect(self.bounds), alphaMask);
    NSGradient *gradient = [[NSGradient alloc] initWithStartingColor:[NSColor blackColor] endingColor:[NSColor grayColor]];
    [gradient drawInRect:rect angle:-90];
    [gradient release];
    CGContextRestoreGState(windowContext);

    // Clean up the mask image and its context
    CGImageRelease(alphaMask);
    CGContextRelease(maskContext);
}
This uses the view bounds as the gradient bounds; if you wanted to be more accurate, you'd need to get the text height (information about that here).
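For example, the text could be measured with something along these lines (a sketch; it assumes text is an NSString and reuses the same attributes as above):
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSFont fontWithName:@"HelveticaNeue-Bold" size:124], NSFontAttributeName,
                               [NSColor whiteColor], NSForegroundColorAttributeName,
                               nil];
NSSize textSize = [text sizeWithAttributes:attributes];
NSRect gradientRect = NSMakeRect(0, 0, textSize.width, textSize.height);
// ...then use gradientRect (rather than the full view rect) when drawing the gradient.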
Actually, I want to draw the background of a selected NSStatusItem on the CALayer of my custom statusItemView. But since
- (void)drawStatusBarBackgroundInRect:(NSRect)rect withHighlight:(BOOL)highlight
does not seem to work on layers, I tried drawing the color via the layer's backgroundColor property. But converting selectedMenuItemColor into RGB doesn't help very much; it looks really plain without the gradient. :-/
I converted [NSColor selectedMenuItemColor] into a CGColorRef with this code:
- (CGColorRef)highlightColor {
    static CGColorRef highlight = NULL;
    if (highlight == NULL) {
        CGFloat red, green, blue, alpha;
        NSColor *hlclr = [[NSColor selectedMenuItemColor] colorUsingColorSpace:[NSColorSpace genericRGBColorSpace]];
        [hlclr getRed:&red green:&green blue:&blue alpha:&alpha];
        CGFloat values[4] = {red, green, blue, alpha};
        highlight = CGColorCreate([self genericRGBSpace], values);
    }
    return highlight;
}
Any idea how to draw a native-looking status item background on a CALayer?
NSImage *backgroundImage = [[NSImage alloc] initWithSize:self.frame.size];
[backgroundImage lockFocus];
[self.statusItem drawStatusBarBackgroundInRect:self.bounds withHighlight:YES];
[backgroundImage unlockFocus];
[self.layer setContents:backgroundImage];
[backgroundImage release];
Try subclassing CALayer and implementing the drawInContext: method to create an NSGraphicsContext for the CGContext, set the NSGraphicsContext as the current context, and then tell the status item to draw its background.
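A minimal sketch of that subclass approach (the statusItem property on the layer subclass is an assumption; wire it up however suits your design):
// In a CALayer subclass:
- (void)drawInContext:(CGContextRef)ctx
{
    NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:gc];
    [self.statusItem drawStatusBarBackgroundInRect:NSRectFromCGRect(self.bounds)
                                     withHighlight:YES];
    [NSGraphicsContext restoreGraphicsState];
}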
I use this code in my layer delegate:
- (void)drawLayer:(CALayer *)layer
        inContext:(CGContextRef)context {
    NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:gc];
    [self.statusItem drawStatusBarBackgroundInRect:self.frame
                                     withHighlight:self.isHighlighted];
    [NSGraphicsContext restoreGraphicsState];
}
How do I draw a default image in the center of an NSImageView, using an overridden - (void)drawRect:(NSRect)rect method of NSImageView?
Yes. That was one way. I've used the following code.
// Drawing
- (void)drawRect:(NSRect)rect
{
    if ([self image])
    {
        [[NSColor grayColor] set];
        NSRectFill(rect);

        // Image view bounds and size
        NSRect vBounds = [self bounds];
        NSSize vSize = vBounds.size;

        // Get the size and origin of the default image set on the image view
        NSRect imageRect;
        imageRect.size = [[self image] size];
        imageRect.origin = NSZeroPoint;

        // Create a preview image at a quarter of the original size
        NSSize previewSize = NSMakeSize([self image].size.width / 4.0, [self image].size.height / 4.0);
        NSImage *previewImage = [[NSImage alloc] initWithSize:previewSize];

        // Work out where the preview image needs to be drawn (centered in the view)
        NSRect newRect;
        newRect.origin.x = vSize.width / 2 - previewSize.width / 2;
        newRect.origin.y = vSize.height / 2 - previewSize.height / 2;
        newRect.size = [previewImage size];

        // Draw the preview image in the image view
        [[self image] drawInRect:newRect fromRect:imageRect operation:NSCompositeSourceOver fraction:1.0];
        [previewImage release];
    }
}
Why not just set the image view's image initially to the default image, then change it later?
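Something along these lines would do it (a sketch; the names are placeholders):
// Show a placeholder until the real image is ready.
imageView.image = [NSImage imageNamed:@"DefaultPlaceholder"];
// ...later, once the real image has been loaded:
imageView.image = loadedImage;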