Warning: I'm a Cocoa newbie.
I'm reading "Cocoa Programming For Mac OS X" by Hillegass.
On p.301 it's written:
To make the drawing appear on the image instead of on the screen, you must first lock focus on the image. When the drawing is complete, you must unlock focus.
The code I have, inside -(void)mouseDragged:(NSEvent *)theEvent of an NSView is as follows:
[resizedImage lockFocus];
[sourceImage drawInRect:NSMakeRect(0, 0, resizeWidth, resizeHeight)
               fromRect:NSMakeRect(0, 0, originalSize.width, originalSize.height)
              operation:NSCompositeSourceOver
               fraction:1.0];
[resizedImage unlockFocus];
Without the lock/unlock, this does not work, but I still don't understand exactly what is going on.
I see that the 2nd line of code makes no mention of resizedImage, so does that mean that when I use lockFocus, any 'drawing' that happens takes place there? Could someone explain this better?
Drawing requires a 'graphics context'. You'll notice that, unlike Core Graphics, none of the AppKit drawing methods take a parameter that specifies where the drawing ends up. Instead, the destination is stored globally as [NSGraphicsContext currentContext]. All AppKit drawing methods affect this current context.
The main purpose of -lockFocus (on images and views alike) is to set up the graphics context so your drawing ends up going where you want it to.
From the docs for -[NSImage lockFocus]:
This method sets the current drawing context to the area of the offscreen window used to cache the receiver's contents.
So there exists an offscreen window which you draw on when you draw to the image. This window has a graphics context, and lockFocus makes that context the current drawing context, so that drawInRect:... uses it for its drawing. It's similar to calling +[NSGraphicsContext setCurrentContext:].
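The whole mechanism fits in a minimal sketch (sizes and colors are arbitrary):

```objc
// While focus is locked, [NSGraphicsContext currentContext] targets the image,
// so any AppKit drawing call lands in the image's backing store.
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(100.0, 100.0)];
[image lockFocus];
[[NSColor redColor] set];
NSRectFill(NSMakeRect(0.0, 0.0, 100.0, 100.0)); // drawn into the image
[image unlockFocus]; // the previous context is restored
```

Note that NSRectFill, like drawInRect:..., never names the image; it simply draws into whatever context is current.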
Is there a way to create a colored fill pattern dynamically in Cocoa?
In particular instead of using a fixed pattern from an image file via
NSColor *fillPattern = [NSColor colorWithPatternImage:patternImage];
I'd like to create a pattern by dynamically choosing the appropriate colors at runtime.
Background is highlighting a colored object by rendering stripes or squares in the "opposite" color on top of it - whatever opposite might mean in this context, but that's a different story..
Since this is applied to potentially hundreds of objects in a drawing app, it needs to be a rather fast method, so I suppose just swapping colors in patternImage won't be good enough.
(It did work just fine back in QuickDraw..!)
Why not just draw to an in-memory image and use that for your pattern?
NSImage* patternImage = [[NSImage alloc] initWithSize:someSize];
[patternImage lockFocus];
//draw your pattern
[patternImage unlockFocus];
NSColor* patternColor = [NSColor colorWithPatternImage:patternImage];
//do something with the pattern color
//remember to release patternImage if you're not using ARC
Performance-wise, you generally should be looking at optimising drawing by paying attention to the rect passed in to drawRect: and making sure you only draw what is necessary. If you do that then I can't see the pattern drawing performance being a major problem.
Background is highlighting a colored object by rendering stripes or squares in the "opposite" color on top of it - whatever opposite might mean in this context, but that's a different story..
You'll want to use one of Quartz's blend modes (most of them are present in Photoshop, Pixelmator, and Opacity, so you can experiment in one of those apps to determine which one you need).
You should then be able to fill with a static image—or a dynamic pattern, if it's really necessary—and Quartz will blend it in appropriately.
There's no way to do this in AppKit alone; you'll need to get a CGContext from the current NSGraphicsContext and do it in Quartz.
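A sketch of that approach (the rect and blend mode are assumptions; -[NSGraphicsContext CGContext] requires 10.10+, older code used -graphicsPort):

```objc
// Sketch: fill a rect using a Quartz blend mode from inside drawRect:.
CGContextRef ctx = [[NSGraphicsContext currentContext] CGContext];
CGContextSaveGState(ctx);
CGContextSetBlendMode(ctx, kCGBlendModeDifference); // pick the mode you need
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);
CGContextFillRect(ctx, CGRectMake(0, 0, 100, 100)); // rect is illustrative
CGContextRestoreGState(ctx);
```

Saving and restoring the graphics state keeps the blend mode from leaking into subsequent drawing.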
Update
Nevermind :-)
Figured it out using bitmapImageRepForCachingDisplayInRect: & cacheDisplayInRect:toBitmapImageRep:.
I'll leave the question here for posterity, though.
I'm working on a little application that, among other things, has an NSView subclass that draws a bunch of bezierPaths. I'd like to be able to save the drawn result as either EPS or PNG.
The view is being drawn in an offscreen window (for scaling reasons), and even though it returns the correct EPS data, I can't seem to get any useful bitmap data from it.
EPS is no problem (I simply write the NSData from -dataWithEPSInsideRect: to a file), but I can't seem to get a PNG bitmap.
If I try calling:
[self lockFocus];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:self.bounds];
[self unlockFocus];
return rep;
from a category method I've added to NSView, I get useless white PNG data out of it when I try representationUsingType:NSPNGFileType properties:[NSDictionary dictionary].
Strangely, if I try calling lockFocus/initBitmapWith../unlockFocus from outside the view (or its category methods), I get an exception saying that the view or one of its ancestors is hidden. And well, yes, it's offscreen (the offscreen window's been init'ed with defer:NO, by the way, so it should paint).
So I can either get useless data or I can get an exception. Not awesome.
To add to my confusion: If I make an NSImage containing an EPS representation, and a (useless) white bitmap, where both should be the size of the view's bounds, the bitmap representation is always 20 pixels/units narrower than the bounds. No idea why! The EPS is the correct size.
Since this may all be related to how the offscreen window's created, here's the code for that (from the NSView category):
- (NSWindow*)placeInOffscreenWindow {
    NSRect windowBounds = { { -1000.0, -1000.0 }, self.bounds.size };
    NSWindow *hiddenWindow = [[NSWindow alloc] initWithContentRect:windowBounds
                                                         styleMask:NSTitledWindowMask | NSClosableWindowMask
                                                           backing:NSBackingStoreNonretained
                                                             defer:NO];
    [[hiddenWindow contentView] addSubview:self];
    return hiddenWindow;
}
Any ideas would be appreciated!
Ended up using bitmapImageRepForCachingDisplayInRect: & cacheDisplayInRect:toBitmapImageRep: instead
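For posterity, that approach looks roughly like this (a sketch; view and the output path are illustrative):

```objc
// cacheDisplayInRect:toBitmapImageRep: renders the view into the rep directly,
// so it works even when the view is in an offscreen window.
NSBitmapImageRep *rep = [view bitmapImageRepForCachingDisplayInRect:view.bounds];
[view cacheDisplayInRect:view.bounds toBitmapImageRep:rep];
NSData *png = [rep representationUsingType:NSPNGFileType
                                properties:[NSDictionary dictionary]];
[png writeToFile:@"/tmp/view.png" atomically:YES]; // path is illustrative
```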
I suggest you create an NSImage, lock focus on it, tell the view to draw its bounds, and unlock focus, then ask the image to create a CGImage, then pass that to a CGImageDestination to write out the PNG data/file.
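A sketch of that suggestion (assumes ImageIO is linked and ARC; view and the output URL are illustrative):

```objc
// Render the view into an NSImage, then write it out as PNG via ImageIO.
NSImage *image = [[NSImage alloc] initWithSize:view.bounds.size];
[image lockFocus];
[view displayRectIgnoringOpacity:view.bounds
                       inContext:[NSGraphicsContext currentContext]];
[image unlockFocus];
CGImageRef cgImage = [image CGImageForProposedRect:NULL context:nil hints:nil];
NSURL *url = [NSURL fileURLWithPath:@"/tmp/view.png"]; // illustrative path
CGImageDestinationRef dest =
    CGImageDestinationCreateWithURL((__bridge CFURLRef)url, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(dest, cgImage, NULL);
CGImageDestinationFinalize(dest);
CFRelease(dest);
```

displayRectIgnoringOpacity:inContext: asks the view to draw itself into the current (image) context, which sidesteps the hidden-view exception.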
I have an app that currently has this line:
[myView setWantsLayer:YES];
in order to draw a GUI element via NSBezierPath. This line is required; otherwise, when the user types in an adjacent (and overlapping) NSTextField, the contents of myView shudder.
I discovered that enabling Core Animation loads the OpenGL framework, but never unloads it. See this question.
I think I can get around this by drawing the NSBezierPath into an NSImage and then displaying the NSImage in lieu of the NSBezierPath, but I haven't found a single source that shows how to go about this.
Edit:
I should note that I want to do this before the NSBezierPath is displayed - so solutions that draw an existing view to an NSImage are not useful.
Question:
Can someone point me in the right direction for converting NSBezierPath to an NSImage?
You can draw anything directly into an NSImage, and you can create a blank NSImage. So, create an image whose size is the size of the bounds of the path, translate the path so that it's at the origin of the image, and then lock focus on the image, draw the path, and unlock focus.
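That recipe, sketched (path is an existing NSBezierPath; the fill color is arbitrary):

```objc
// Size the image to the path's bounding box, move the path to the origin,
// then draw it into the image between lockFocus/unlockFocus.
NSRect pathBounds = [path bounds];
NSImage *image = [[NSImage alloc] initWithSize:pathBounds.size];
NSAffineTransform *shift = [NSAffineTransform transform];
[shift translateXBy:-pathBounds.origin.x yBy:-pathBounds.origin.y];
[path transformUsingAffineTransform:shift];
[image lockFocus];
[[NSColor blackColor] set];
[path fill];
[image unlockFocus];
```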
I want to use a focus ring animation as an indicator of incorrect data in a field. So I'm sending becomeFirstResponder: to the field and want the focus ring to fade from red to the default color.
I'm wrestling with Core Animation but still have not found any way to do it. Is it possible?
I'm not sure this strategy follows the HIG; it's often more common to display a persistent icon next to a field that doesn't validate, but it shouldn't be too hard to get the effect you're seeking.
It might be easier to use a simple NSAnimation here instead of using Core Animation.
The standard code for drawing a focus ring generally goes something like the following:
[NSGraphicsContext saveGraphicsState];
NSSetFocusRingStyle(NSFocusRingOnly);
[[NSColor clearColor] set];
[[NSBezierPath bezierPathWithRect:focusRect] fill];
[NSGraphicsContext restoreGraphicsState];
This code would be implemented in the drawRect: method of a custom subclass of your control.
In order to draw a custom colored focus ring, you'll need to draw the rectangle yourself and won't be able to benefit from the NSSetFocusRingStyle function. The color would be driven by the NSAnimation, which would also tell the control to repaint itself. Because you're not using Cocoa's facilities to draw the focus ring, you'll also probably need to inset the content of your view so you have space to draw the ring.
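A sketch of what that drawRect: might look like (ringColor is an assumed property that the NSAnimation updates on each frame):

```objc
- (void)drawRect:(NSRect)dirtyRect {
    [super drawRect:dirtyRect];
    // Inset so there's room to stroke the ring around the content.
    NSRect ringRect = NSInsetRect(self.bounds, 2.0, 2.0);
    NSBezierPath *ring = [NSBezierPath bezierPathWithRect:ringRect];
    [ring setLineWidth:3.0];
    [self.ringColor setStroke]; // assumed property, animated from red to default
    [ring stroke];
}
```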
More information regarding NSAnimations is available in the Animation Programming Guide for Cocoa
I am a complete newbie to Xcode, so I have been climbing the Quartz 2D learning curve. I understand that a view's drawRect: method is called whenever a refresh of the view's graphics is needed, and that the setNeedsDisplay method triggers a redraw.
But what I can't find is a clear explanation of the relationship between the graphics context and a specific view. The graphics context itself is apparently not an instance variable of the view, so if I want to modify a view, and I create a complex path using the CGContext... methods, what code is needed to marry that graphics context to the view I wish to alter?
Thanks in advance for any guidance on this question.
jrdoner
You can create a graphics context yourself, but that is needed only for complex drawing operations. In most cases it is done for you; you just need to get the current context by calling UIGraphicsGetCurrentContext().
When the framework determines that a view needs redrawing (for various reasons, one of which is that you indicated it by calling setNeedsDisplay:), it will generate (or restore) a graphics context for that view, and make it the current context before calling -drawRect:. Your job is then to draw in the context you've been provided. Afterwards, it is the framework's problem to clip the resulting context, blend it with other contexts and finally draw it into screen memory.
Do be a little careful about doing too much complex drawing in -drawRect: if you can help it. The iPhone doesn't have nearly as powerful a CPU as a desktop machine, and it is recommended that you do most of your drawing work with images rather than paths. Apple has even left out many of the more convenient drawing wrappers that exist on the Mac, almost as if to dissuade developers from leaning on Core Graphics too heavily.
I assume that you are creating your path within -(void)drawRect:(CGRect)rect method of your UIView subclass.
Inside drawRect: you can get the current graphics context by calling UIGraphicsGetCurrentContext(), which returns a CGContextRef.
Example from lecture 5 cs193p:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    [[UIColor grayColor] set];
    UIRectFill([self bounds]);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, 75, 10);
    CGContextAddLineToPoint(context, 10, 150);
    CGContextAddLineToPoint(context, 160, 150);
    CGContextClosePath(context);
    [[UIColor redColor] setFill];
    [[UIColor blackColor] setStroke];
    CGContextDrawPath(context, kCGPathFillStroke);
}