NSImage drawInRect and TemplateImages - cocoa

I have an NSImage that is a template image (that is, [NSImage isTemplate] returns YES).
When I use it inside an NSImageView, it is drawn correctly as a template image.
However, if I draw it manually using drawInRect:fromRect:operation:fraction:, it is drawn as a flat black image.
How can I draw an NSImage manually, and still get the 'template' effect?

The special drawing of template images is part of CoreUI and not public.
You can imitate the effect with Quartz though. Details and code can be found in this answer here on SO:
https://stackoverflow.com/a/7138497/100848
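As a rough illustration (not the code from the linked answer, and it only reproduces the flat tint rather than the full shaded effect), you can tint a copy of the image with AppKit compositing and draw that instead:
NSImage *tinted = [image copy];            // 'image' is the template image from the question
[tinted lockFocus];
[[NSColor darkGrayColor] set];             // hypothetical tint color - use whatever fits your UI
NSRectFillUsingOperation(NSMakeRect(0, 0, tinted.size.width, tinted.size.height),
                         NSCompositeSourceAtop); // fill only where the image is opaque
[tinted unlockFocus];
// then draw 'tinted' with drawInRect:fromRect:operation:fraction: as before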

Related

NSImageView rendering a blurry NSImage despite it being the same size

I have an NSCollectionViewItem with an NSImageView (32x32) to which I supply a @1x image of the same size.
It looks perfect in Interface Builder, but when the app is built, the resolution looks quite off. Is there any particular reason for this?
Just to add that the image in the asset catalog also has a @2x variant.
EDIT: Still investigating this issue, but I have just noticed that if the collection view (which contains the collection item, which contains the NSImageView) is enclosed by a bordered NSScrollView, the images are perfect (i.e. non-blurry).
It turns out that if you draw images into frames where either the x/y coordinates or the width and height have fractional values, you end up with blurry images. Passing the drawing rect through NSIntegralRect fixes that.
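A minimal sketch of the fix ('image' and the rect values are placeholders; the important part is the NSIntegralRect call before drawing):
NSRect drawingRect = NSMakeRect(10.25, 20.5, 32.0, 32.0); // fractional origin -> blurry drawing
drawingRect = NSIntegralRect(drawingRect);                // snap to whole-point boundaries
[image drawInRect:drawingRect
         fromRect:NSZeroRect
        operation:NSCompositeSourceOver
         fraction:1.0];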

NSImage is always being scaled and looks bad

I want to create an NSImage of an NSScrollView object, so I can use the flat graphic for animation purposes.
When I render my scrollView object into a graphic and add it back to my window, it works, but it looks really bad, as if it had been scaled to 99% or something. I want the image to be unscaled and 100% pixel-accurate. (Note: the image isn't actually scaled - it's the same size - it just looks like it's been poorly rescaled; the text looks rough compared to the view onscreen in the scrollView.)
My code:
(scrollView is my NSScrollView object)
NSData *pdf = [scrollView dataWithPDFInsideRect:[scrollView bounds]];
NSImage *image = [[NSImage alloc] initWithData:pdf];
NSImageView *imageView = [[NSImageView alloc] initWithFrame:[scrollView bounds]];
[imageView setImage: image];
[mainGUIPanel addSubview: imageView];
I've tried a heap of things, messed with pixel sizes, bounds, used IB to create the destination NSView and put the image inside that but just cannot get the image to not look bad. Any ideas?
Edit:
I tried writing the PDF data out to a .pdf file and viewing it, and it looked OK. So the image is being captured fine; it's just on the display that it looks like it's being scaled somewhat.
Edit2:
Also tried getting the bitmap like this:
NSBitmapImageRep *bitmap = [scrollView bitmapImageRepForCachingDisplayInRect:[scrollView bounds]];
[scrollView cacheDisplayInRect:[scrollView bounds] toBitmapImageRep:bitmap];
NSImage * image = [[NSImage alloc] initWithSize:[bitmap size]];
[image addRepresentation: bitmap];
Same results - the bitmap looks exactly the same, bad and scaled when displayed.
This leads me to believe that capturing the bitmap data either way works fine, it's creating the view and rendering the image that is doing the scaling. How can I make sure that the view and image are shown at the correct size and scaling?
Edit3:
OK, I started a new blank project and set this up, and it works perfectly - the new image view is identical to the grabbed bitmap. So I suspect my issue stems from some rendering/compositing issue when drawing the bitmap to the view. Investigating further...
It turns out the issue stems from the scrollView that I am rendering from. It has a transparent background (Draw Background is off in IB) and the text in the scrollView looks good. If I turn Draw Background ON, with a transparent background color, the text is rendered badly, exactly as it is when I capture the image programmatically.
So, in my app, even though Draw Background is off, the scrollView image is captured as though Draw Background is on. I need to understand why the text is rendered badly when Draw Background is on and set to transparent, and hopefully that will lead me towards a solution.
Also tried creating an NSClipView with background drawing turned off and putting the bitmap view into that, but it still renders the same. I can't find a way to render the transparent image to the screen without horrible artifacting.
OK, I've found a solution. Instead of grabbing the transparent-background scrollView object itself, I'm grabbing the parent view (essentially the window background) and restricting the bounds to the area the scrollView occupies.
This captures both the background and the contents of the scrollView, and it displays correctly without any transparency issues.
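A sketch of that approach, based on the code earlier in the question (it assumes mainGUIPanel and the scrollView's superview share a coordinate space):
NSView *parent = [scrollView superview];
NSRect captureRect = [scrollView frame];   // the scrollView's frame, in the parent's coordinates
NSData *pdf = [parent dataWithPDFInsideRect:captureRect];
NSImage *image = [[NSImage alloc] initWithData:pdf];
NSImageView *imageView = [[NSImageView alloc] initWithFrame:captureRect];
[imageView setImage:image];
[mainGUIPanel addSubview:imageView];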

masking with an image in iOS

I'd like to take an image and use it as a mask for a view on which I add numerous image views. I know of the Quartz CGContextClipToMask() call, but what would be the best way to approach this? Can I override the drawRect: method of a container view, call CGContextClipToMask() within it, and then expect its subviews to adhere to that clipping region? It doesn't seem to work.
Do I need to instead add some blocking mask image over top?
Instead of subclassing or overriding drawing methods, I chose to overlay the images with an image that has transparency in the viewable portion. For example, if my 'surface' is an image of a parchment and I want to draw a bunch of images on it: I have the parchment image, then a container UIView for any images to be put on that parchment, then a masking image on top of that - the original parchment image with the parchment itself converted to full transparency, while the surrounding area is left exactly as the background the parchment sits on (and all other UI widgets go on top of that).
This seems like a viable solution in all cases except where an image needs to visually animate around and behind the parchment (not my case).
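A rough sketch of that layering, with hypothetical image names ('parchment' is the opaque surface, 'parchment-cutout' is the same image with the parchment area made fully transparent):
UIImageView *surface = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"parchment"]];
UIView *content = [[UIView alloc] initWithFrame:surface.frame]; // the image views go in here
UIImageView *overlay = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"parchment-cutout"]];
overlay.frame = surface.frame;
overlay.userInteractionEnabled = NO; // let touches pass through to the content underneath
[self.view addSubview:surface];
[self.view addSubview:content];
[self.view addSubview:overlay]; // anything added to 'content' now appears confined to the parchment shape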

NSBezierPath to NSImage in order to avoid CoreAnimation

I have an app that currently has this line:
[myView setWantsLayer:YES];
in order to draw a GUI element via NSBezierPath. This line is required; otherwise, when the user types in an adjacent (and overlapping) NSTextField, the contents of myView shudder.
I discovered that enabling Core Animation loads the OpenGL framework but does not unload it. See this question.
I think I can get around this by drawing the NSBezierPath into an NSImage and then displaying the NSImage in lieu of the NSBezierPath, but I haven't found a single source that shows how to go about this.
Edit:
I should note that I want to do this BEFORE the NSBezierPath is displayed - so solutions that draw an existing view into an NSImage are not useful.
Question:
Can someone point me in the right direction for converting NSBezierPath to an NSImage?
You can draw anything directly into an NSImage, and you can create a blank NSImage. So, create an image whose size is the size of the bounds of the path, translate the path so that it's at the origin of the image, and then lock focus on the image, draw the path, and unlock focus.
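A minimal sketch of that sequence (the oval path and the fill color are stand-ins for whatever you are actually drawing):
NSBezierPath *path = [NSBezierPath bezierPathWithOvalInRect:NSMakeRect(20, 30, 100, 60)]; // hypothetical path
NSRect pathBounds = [path bounds];
// shift the path so it sits at the image's origin
NSAffineTransform *shift = [NSAffineTransform transform];
[shift translateXBy:-pathBounds.origin.x yBy:-pathBounds.origin.y];
[path transformUsingAffineTransform:shift];
NSImage *image = [[NSImage alloc] initWithSize:pathBounds.size];
[image lockFocus];
[[NSColor blackColor] set]; // hypothetical fill color
[path fill];
[image unlockFocus];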

Getting a CGImageRef from IKImageBrowserView

I feed the image browser view image filenames, and it manages loading them.
Is there a way to retrieve the CGImageRef of those images from the browser after it loads them? I'd like to do some Core Animation with them when the user clicks on an image.
Probably the simplest way to do this is to use the NSView method
-bitmapImageRepForCachingDisplayInRect:
to build an NSBitmapImageRep for the area, then
-cacheDisplayInRect:toBitmapImageRep:
to draw into it, then NSBitmapImageRep's
-CGImage
method to get a CGImageRef.
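Roughly like this (the view and rect names are placeholders - use whatever portion of the browser you care about):
NSRect rect = [imageBrowserView visibleRect];
NSBitmapImageRep *rep = [imageBrowserView bitmapImageRepForCachingDisplayInRect:rect];
[imageBrowserView cacheDisplayInRect:rect toBitmapImageRep:rep];
CGImageRef cgImage = [rep CGImage]; // owned by the rep; CGImageRetain it if it must outlive 'rep'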
