Update
Never mind :-)
Figured it out using bitmapImageRepForCachingDisplayInRect & cacheDisplayInRect:toBitmapImageRep:.
I'll leave the question here for posterity, though.
I'm working on a little application that, among other things, has an NSView subclass that draws a bunch of bezierPaths. I'd like to be able to save the drawn result as either EPS or PNG.
The view is being drawn in an offscreen window (for scaling reasons), and even though it returns the correct EPS data, I can't seem to get any useful bitmap data from it.
EPS is no problem (I simply write the NSData from -dataWithEPSInsideRect: to a file), but I can't seem to get a PNG bitmap.
If I try calling:
[self lockFocus];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:self.bounds];
[self unlockFocus];
return rep;
from a category method I've added to NSView, I get useless white PNG data out of it when I try representationUsingType:NSPNGFileType properties:[NSDictionary dictionary].
Strangely, if I try calling lockFocus/initBitmapWith../unlockFocus from outside the view (or its category methods), I get an exception saying that the view or one of its ancestors is hidden. And well, yes, it's offscreen (the offscreen window's been init'ed with defer:NO, by the way, so it should paint).
So I can either get useless data or I can get an exception. Not awesome.
To add to my confusion: If I make an NSImage containing an EPS representation, and a (useless) white bitmap, where both should be the size of the view's bounds, the bitmap representation is always 20 pixels/units narrower than the bounds. No idea why! The EPS is the correct size.
Since this may all be related to how the offscreen window's created, here's the code for that (from the NSView category):
- (NSWindow *)placeInOffscreenWindow {
    NSRect windowBounds = { { -1000.0, -1000.0 }, self.bounds.size };
    NSWindow *hiddenWindow = [[NSWindow alloc] initWithContentRect:windowBounds
                                                         styleMask:NSTitledWindowMask | NSClosableWindowMask
                                                           backing:NSBackingStoreNonretained
                                                             defer:NO];
    [[hiddenWindow contentView] addSubview:self];
    return hiddenWindow;
}
Any ideas would be appreciated!
Ended up using bitmapImageRepForCachingDisplayInRect: & cacheDisplayInRect:toBitmapImageRep: instead.
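For posterity, the working code looks roughly like this (a reconstructed sketch rather than the exact code; the output path is just an example). It's called from the NSView category, so self is the view:

NSBitmapImageRep *rep = [self bitmapImageRepForCachingDisplayInRect:self.bounds];
[self cacheDisplayInRect:self.bounds toBitmapImageRep:rep];
NSData *pngData = [rep representationUsingType:NSPNGFileType
                                    properties:[NSDictionary dictionary]];
[pngData writeToFile:@"/tmp/drawing.png" atomically:YES]; // example path

Unlike initWithFocusedViewRect:, this asks the view to render itself into the cache, so it works even while the view sits in an offscreen window.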
I suggest you create an NSImage, lock focus on it, tell the view to draw its bounds, and unlock focus, then ask the image to create a CGImage, then pass that to a CGImageDestination to write out the PNG data/file.
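Something like this untested sketch (view is the view to capture, the output path is hypothetical, and it uses ImageIO, so CGImageDestination and kUTTypePNG need the ImageIO and CoreServices headers):

NSImage *image = [[NSImage alloc] initWithSize:view.bounds.size];
[image lockFocus];
// Ask the view to draw its bounds into the image's focused context.
[view displayRectIgnoringOpacity:view.bounds
                       inContext:[NSGraphicsContext currentContext]];
[image unlockFocus];

CGImageRef cgImage = [image CGImageForProposedRect:NULL context:nil hints:nil];
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(
    (CFURLRef)[NSURL fileURLWithPath:@"/tmp/view.png"], // hypothetical path
    kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(dest, cgImage, NULL);
CGImageDestinationFinalize(dest);
CFRelease(dest);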
Related
I want to create an NSImage of an NSScrollView object, so I can use the flat graphic for animation purposes.
When I render my scrollview object into a graphic and add it back to my window, it works but looks really bad, as if it's been scaled to 99% or something. I want the image to be unscaled and 100% pixel accurate. (Note: the image isn't actually scaled; it's the same size. It just looks like it's been poorly rescaled: the text looks rough and poor compared to the view onscreen in the scrollview.)
My code:
(scrollView is my NSScrollView object)
NSData *pdf = [scrollView dataWithPDFInsideRect:[scrollView bounds]];
NSImage *image = [[NSImage alloc] initWithData:pdf];
NSImageView *imageView = [[NSImageView alloc] initWithFrame:[scrollView bounds]];
[imageView setImage: image];
[mainGUIPanel addSubview: imageView];
I've tried a heap of things, messed with pixel sizes, bounds, used IB to create the destination NSView and put the image inside that but just cannot get the image to not look bad. Any ideas?
Edit:
I tried writing the PDF data to a PDF file and viewed it, and it looked OK. So the image data is being captured fine; it's just on the display that it looks like it's being scaled somewhat.
Edit2:
Also tried getting the bitmap like this:
NSBitmapImageRep *bitmap = [scrollView bitmapImageRepForCachingDisplayInRect:[scrollView bounds]];
[scrollView cacheDisplayInRect:[scrollView bounds] toBitmapImageRep:bitmap];
NSImage * image = [[NSImage alloc] initWithSize:[bitmap size]];
[image addRepresentation: bitmap];
Same results - the bitmap looks exactly the same, bad and scaled when displayed.
This leads me to believe that capturing the bitmap data either way works fine, it's creating the view and rendering the image that is doing the scaling. How can I make sure that the view and image are shown at the correct size and scaling?
Edit3:
Ok, I started a new blank project and set this up, and it works perfectly - the new imageview is identical to the grabbed bitmap. So I suspect my issue is stemming from some rendering/compositing issue when drawing the bitmap to the view. Investigating further...
It turns out the issue stems from the scrollView that I am rendering from. It has a transparent background (Draw Background is off in IB), and the text in the scrollView looks good. If I turn Draw Background ON, with a transparent background color, the text is rendered badly, exactly as it is when I capture the image programmatically.
So, in my app, even though Draw Background is off, the scrollView image is captured as though Draw Background is on. So I need to understand why the text is rendered badly when Draw Background is on and set to transparent, and hopefully this will lead me towards a solution.
Also tried creating an NSClipView with background drawing turned off and putting the bitmap view into that, but it still renders the same. I can't find a way to render the transparent image to the screen without horrible artifacting.
Ok, I've found a solution. Instead of getting a grab of the transparent background scrollview object itself, I'm instead getting a grab of the parent view (essentially the window background), and restricting the bounds to the size of the scrollview object.
This captures both the background, and the contents of the scrollview, and displays correctly without any issues of transparency.
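In code, the change is roughly this (a sketch; parentView is the scrollView's superview, i.e. the window background view described above):

NSView *parentView = [scrollView superview];
NSRect grabRect = [scrollView frame]; // the scrollView's rectangle in parent coordinates
NSBitmapImageRep *bitmap = [parentView bitmapImageRepForCachingDisplayInRect:grabRect];
[parentView cacheDisplayInRect:grabRect toBitmapImageRep:bitmap];
NSImage *image = [[NSImage alloc] initWithSize:[bitmap size]];
[image addRepresentation:bitmap];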
Is it possible to convert an NSGradient to an NSColor?
- (void)viewWillDraw {
    NSGradient *grad = [[NSGradient alloc] initWithStartingColor:[NSColor lightGrayColor]
                                                     endingColor:[NSColor darkGrayColor]];
    [super setBackgroundColor:grad]; // won't compile: grad is an NSGradient, not an NSColor
}
This is my method. I want to be able to pass the NSGradient in as an NSColor, which obviously I can't. Is there any way to convert it to one?
On 10.8, you can create, in the following order:
A block that draws the gradient however you like.
An image that is backed by the block.
A color that repeats the image as a pattern.
In this way, you can create a color that looks like anything, including a gradient.
That said, this may not work correctly with window resizing if you try to have the gradient adapt to the size of the background (by using the rect passed to the block) and the background is of a text view in a scroll view. (When I tried it a while back, the pattern didn't redraw the block; it simply tiled, which looked weird in at least one dimension.) If either your gradient or your window is fixed in size, then you will not have that problem.
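A sketch of the approach (the 1×200-point strip size is an arbitrary assumption; imageWithSize:flipped:drawingHandler: requires 10.8):

NSImage *gradientImage = [NSImage imageWithSize:NSMakeSize(1.0, 200.0)
                                        flipped:NO
                                 drawingHandler:^BOOL(NSRect dstRect) {
    // 1. The block draws the gradient however you like.
    NSGradient *grad = [[NSGradient alloc] initWithStartingColor:[NSColor lightGrayColor]
                                                     endingColor:[NSColor darkGrayColor]];
    [grad drawInRect:dstRect angle:90.0];
    return YES;
}];
// 2. The image is backed by the block; 3. the color repeats it as a pattern.
NSColor *gradientColor = [NSColor colorWithPatternImage:gradientImage];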
NSGradient is not convertible to NSColor.
The NSGradient class provides support for drawing gradient fill
colors, also known as shadings in Quartz. This class provides
convenience methods for drawing radial or linear (axial) gradients for
rectangles and NSBezierPath objects.
Since you want to give the view's background a gradient effect, you need to draw the gradient yourself:
[grad drawInRect:<the rect of your view> angle:270]; // the angle is up to your requirements
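For example, in your view's drawRect: (a sketch; drawing the gradient here replaces the setBackgroundColor: call from the question):

- (void)drawRect:(NSRect)dirtyRect {
    NSGradient *grad = [[NSGradient alloc] initWithStartingColor:[NSColor lightGrayColor]
                                                     endingColor:[NSColor darkGrayColor]];
    [grad drawInRect:[self bounds] angle:270.0];
    [super drawRect:dirtyRect]; // let the superclass draw its content on top
}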
I have a UIScrollView with a UIImageView inside it. The user can pan and zoom the image inside the scrollView.
I also have a UIImageView in the main view (above the scrollView), and the user can move that image around the screen.
I'm using CISourceOverCompositing to combine them both:
- (UIImage *)compositeFinalImage {
    CIImage *foregroundImage = [CIImage imageWithCGImage:foregroundImageView.image.CGImage];
    CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [composite setValue:foregroundImage forKey:@"inputImage"];
    [composite setValue:[CIImage imageWithCGImage:backgroundImage.CGImage] forKey:@"inputBackgroundImage"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *finalImage = [composite valueForKey:@"outputImage"];
    return [UIImage imageWithCGImage:[context createCGImage:finalImage fromRect:finalImage.extent]
                               scale:1.0
                         orientation:userImageOrientation];
}
...but the location and scale of the image have been lost when I make it a CIImage, and the foreground image always ends up at the bottom left.
So I tried to use imageByApplyingTransform: to move the foreground image into position (and eventually I will have to apply a scale transform as well). I get the position from the image view, but the coordinates need to be in the native resolution of the background image itself... and the background image is moving around (which I guess means I have to take the contentOffset into account somehow), the background image has a certain scale, and the foreground image has a certain scale...
It seems weird that I need to reproduce all of the transformations by hand from the different scale, rotation and position variables of each image...
This is the basic idea of what I'm trying to do (the left side is in the main view coordinates, while the right side is in native image coordinates).
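Concretely, I think the mapping has to be something like this untested sketch (using the names from the code above; backgroundImageView is the image view inside the scrollView, the scroll view is assumed to fill the screen, and Retina scale is ignored for simplicity):

// One screen point covers imageScale / zoom background-image pixels.
CGFloat zoom = scrollView.zoomScale;
CGPoint offset = scrollView.contentOffset;
CGFloat imageScale = backgroundImage.size.width / backgroundImageView.bounds.size.width;
CGFloat screenToImage = imageScale / zoom;

// Map the foreground view's frame from screen points into background pixels.
CGRect fgFrame = foregroundImageView.frame;
CGFloat x = (fgFrame.origin.x + offset.x) * screenToImage;
CGFloat y = (fgFrame.origin.y + offset.y) * screenToImage;
CGFloat w = fgFrame.size.width * screenToImage;
CGFloat h = fgFrame.size.height * screenToImage;

// Scale the foreground CIImage from its own pixels to background pixels, then
// translate it into place; Core Image's origin is bottom-left, so flip Y.
CGFloat fgScale = w / foregroundImageView.image.size.width;
CGAffineTransform t = CGAffineTransformMakeTranslation(x, backgroundImage.size.height - y - h);
t = CGAffineTransformScale(t, fgScale, fgScale);
CIImage *placedForeground = [foregroundImage imageByApplyingTransform:t];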
Any help will be much appreciated!
Thanks!
Greetings! I have a problem and Googling brought no results...
I implemented the drawRect: method for my NSView subclass to draw some shadows and semi-transparent fills. Everything looks great! But now I need to create an NSImage from my NSView (make a snapshot) for drag & drop purposes.
It works, but it draws somewhat differently: darker and with less contrast than it should have.
Why? Maybe because of different NSGraphicsContext options? Need help and/or advice!
Here is the code for getting NSImage from NSView:
- (NSImage *)makeImageSnapshot {
    NSSize imgSize = self.bounds.size;
    NSBitmapImageRep *bir = [self bitmapImageRepForCachingDisplayInRect:[self bounds]];
    [bir setSize:imgSize];
    [self cacheDisplayInRect:[self bounds] toBitmapImageRep:bir];
    NSImage *image = [[[NSImage alloc] initWithSize:imgSize] autorelease];
    [image addRepresentation:bir];
    return image;
}
And here are the images to compare visually:
Normal - drawn by drawRect usual call: http://cl.ly/image/213C1Y1V0v2H
Bad - captured into NSImage: http://cl.ly/image/183q442S2J14
Though the difference might seem very small, believe me - it is obvious while working with the application. I don't understand why that is happening, and hope someone can help...
Thanks in advance!
I believe the transparency problem is caused by not having other things to blend it against when it's cached on its own.
Try -dataWithPDFInsideRect: or -dataWithEPSInsideRect: and get your image reps from there.
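For instance (a sketch in the same MRC style as the question's code, as a drop-in replacement for makeImageSnapshot):

- (NSImage *)makeImageSnapshot {
    NSData *pdfData = [self dataWithPDFInsideRect:[self bounds]];
    return [[[NSImage alloc] initWithData:pdfData] autorelease];
}

The vector data gets rasterized at draw time, against whatever is actually behind it, which avoids the stand-alone blending described above.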
I have an app that currently has this line:
[myView setWantsLayer:YES];
in order to draw a GUI element via NSBezierPath. This line is required; otherwise, when the user types in an adjacent (and overlapping) NSTextField, the contents of myView shudder.
I discovered that calling CoreAnimation loads the OpenGL framework, but does not unload it. See this question.
I think I can get around this by drawing the NSBezierPath into an NSImage and then displaying the NSImage in lieu of the NSBezierPath, but I haven't found a single source that shows me how to go about this.
Edit:
I should note that I want to save this BEFORE the NSBezierPath is displayed - so solutions that draw an existing view into an NSImage are not useful.
Question:
Can someone point me in the right direction for converting NSBezierPath to an NSImage?
You can draw anything directly into an NSImage, and you can create a blank NSImage. So, create an image whose size is the size of the bounds of the path, translate the path so that it's at the origin of the image, and then lock focus on the image, draw the path, and unlock focus.
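A sketch of that recipe (assuming you already have an NSBezierPath named path and some fill color):

NSRect pathBounds = [path bounds];
NSImage *image = [[NSImage alloc] initWithSize:pathBounds.size];

// Shift the path so its bounding box sits at the image's origin.
NSAffineTransform *transform = [NSAffineTransform transform];
[transform translateXBy:-pathBounds.origin.x yBy:-pathBounds.origin.y];
NSBezierPath *shiftedPath = [transform transformBezierPath:path];

[image lockFocus];
[[NSColor blackColor] set]; // whatever fill color you need
[shiftedPath fill];
[image unlockFocus];

You can then hand the image to an NSImageView in place of the custom-drawn view.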