How can I show an image in an NSView using a CGImageRef image (Xcode)

I want to show an image in an NSView or in an NSImageView. In my header file I have:
@interface FVView : NSView
{
NSImageView *imageView;
}
@end
Here is what I've been trying to do in my implementation file:
- (void)drawRect:(NSRect)dirtyRect
{
[super drawRect:dirtyRect];
// (Here I get an image called fitsImage... then I do:)
//Here I make the image
CGImageRef cgImage = CGImageRetain([fitsImage CGImageScaledToSize:maxSize]);
NSImage *imageR = [self imageFromCGImageRef:cgImage];
[imageR lockFocus];
//Here I have the view context
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
//Here I set the view dimensions
CGRect renderRect = CGRectMake(0., 0., maxSize.width, maxSize.height);
[self.layer renderInContext:ctx];
[imageR unlockFocus];
CGContextDrawImage(ctx, renderRect, cgImage);
CGImageRelease(cgImage);
}
I don't get anything in the NSView window when I run the app. No errors at all; I just can't see what I'm doing wrong. My Xcode version is 5.1.1.
I'm trying to learn how to manipulate a CGImageRef and view it in a window or NSView.
Thank you.

I'm not quite sure what exactly your setup is. Drawing an image in a custom view is a separate thing from using an NSImageView. Also, a custom view that may (or may not) be layer-backed is different from a layer-hosting view.
You have a lot of the right elements, but they're all mixed up together. In no case do you have to lock focus on an NSImage. That's for drawing into an NSImage. Also, a custom view that subclasses from NSView doesn't have to call super in its -drawRect:. NSView doesn't draw anything.
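For contrast, here is the kind of situation -lockFocus is actually for: rendering into an NSImage you own (a minimal sketch, not from the question; the names are illustrative):

```objc
// Draw a filled rectangle *into* an NSImage -- this is what
// lockFocus/unlockFocus are meant for.
NSImage *canvas = [[NSImage alloc] initWithSize:NSMakeSize(100.0, 100.0)];
[canvas lockFocus];
[[NSColor redColor] setFill];
NSRectFill(NSMakeRect(0.0, 0.0, 100.0, 100.0));
[canvas unlockFocus];
// canvas can now be assigned to an NSImageView's image property.
```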
To draw an image in a custom view, try:
- (void) drawRect:(NSRect)dirtyRect
{
CGImageRef cgImage = /* ... */;
NSSize maxSize = /* ... */;
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGRect renderRect = CGRectMake(0., 0., maxSize.width, maxSize.height);
CGContextDrawImage(ctx, renderRect, cgImage);
CGImageRelease(cgImage);
}
If you have an NSImageView, then you don't need a custom view or any drawing method or code. Just do the following at the point where you obtain the image or the information necessary to generate it:
NSImageView* imageView = /* ... */; // Often an outlet to a view in a NIB rather than a local variable.
CGImageRef cgImage = /* ... */;
NSImage* image = [[NSImage alloc] initWithCGImage:cgImage size:/* ... */];
imageView.image = image;
CGImageRelease(cgImage);
If you're working with a layer-hosting view, you just need to set the CGImage as the layer's content. Again, you do this whenever you obtain the image or the information necessary to generate it. It's not in -drawRect:.
CALayer* layer = /* ... */; // Perhaps someView.layer
CGImageRef cgImage = /* ... */;
layer.contents = (__bridge id)cgImage;
CGImageRelease(cgImage);

Related

NSView image corruption when dragging from scaled view

I have a custom subclass of NSView that implements drag/drop for copying the image in the view to another application. The relevant code in my class looks like this:
#pragma mark -
#pragma mark Dragging Support
- (NSImage *)imageWithSubviews
{
NSSize imgSize = self.bounds.size;
NSBitmapImageRep *bir = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[self frame]];
[self cacheDisplayInRect:[self frame] toBitmapImageRep:bir];
NSImage* image = [[NSImage alloc]initWithSize:imgSize];
[image addRepresentation:bir];
return image;
}
- (void)mouseDown:(NSEvent *)theEvent
{
NSSize dragOffset = NSMakeSize(0.0, 0.0); // not used in the method below, but required.
NSPasteboard *pboard;
NSImage *image = [self imageWithSubviews];
pboard = [NSPasteboard pasteboardWithName:NSDragPboard];
[pboard declareTypes:[NSArray arrayWithObject:NSTIFFPboardType]
owner:self];
[pboard setData:[image TIFFRepresentation]
forType:NSTIFFPboardType];
[self dragImage:image
at:self.bounds.origin
offset:dragOffset
event:theEvent
pasteboard:pboard
source:self
slideBack:YES];
return;
}
#pragma mark -
#pragma mark NSDraggingSource Protocol
- (NSDragOperation)draggingSession:(NSDraggingSession *)session sourceOperationMaskForDraggingContext:(NSDraggingContext)context
{
return NSDragOperationCopy;
}
- (BOOL)ignoreModifierKeysForDraggingSession:(NSDraggingSession *)session
{
return YES;
}
This works as expected until I resize the main window. The main window only increases height/width in the same increments to maintain the proper aspect ratio for this view. The view properly displays its content on the screen when the window is resized.
The problem comes when I resize the window by more than about 25%. While the view still displays as expected, the image that is dragged off of it (into Pages, for example) is corrupt: it appears to have a portion of the image repeated on top of itself.
Here is what it looks like normally:
And here is what it looks like when dragged to Pages after resizing the main window to make it large (downsized to show here -- imagine it at 2-3x the size of the first image):
Note that I highlighted the corrupt area with a dotted rectangle.
A few more notes:
I have my bounds set like NSMakeRect(-200,-200,400,400) because it makes the symmetrical drawing a bit easier. When the window resizes, I recalculate the bounds to keep 0,0 in the center of the NSView. The NSView always is square.
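The recentering described above could be sketched like this (a hypothetical helper, not from the question), called whenever the view's frame changes:

```objc
// Keep (0,0) at the center of a square view by shifting the bounds origin.
- (void)recenterBounds
{
    NSSize size = self.frame.size;
    [self setBounds:NSMakeRect(-size.width / 2.0, -size.height / 2.0,
                               size.width, size.height)];
}
```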
Finally, for the bitmapImageRep parameter in cacheDisplayInRect:toBitmapImageRep:, the Apple docs state the following:
An NSBitmapImageRep object. For pixel-format compatibility, bitmapImageRep should have been obtained from bitmapImageRepForCachingDisplayInRect:.
I've tried using bitmapImageRepForCachingDisplayInRect:, but then all I see is the lower-left quadrant of the pyramid in the upper-right quadrant of the image. That makes me think that I need to add an offset for the capture of the bitmapImageRep, but I've been unable to determine how to do that.
Here's what the code for imageWithSubviews looks like when I try that:
- (NSImage *)imageWithSubviews
{
NSSize imgSize = self.bounds.size;
NSBitmapImageRep *bir = [self bitmapImageRepForCachingDisplayInRect:[self bounds]];
[self cacheDisplayInRect:[self bounds] toBitmapImageRep:bir];
NSImage* image = [[NSImage alloc]initWithSize:imgSize];
[image addRepresentation:bir];
return image;
}
And this is how the resulting image appears:
That is a view of the lower left quadrant being drawn in the upper-right corner.
What is causing the corruption when I drag from the NSView after enlarging the window? How to I fix that and/or change my implementation of the methods that I listed above to avoid the problem?
More info:
When I change the imageWithSubviews method to:
- (NSImage *)imageWithSubviews
{
NSSize imgSize = self.bounds.size;
NSBitmapImageRep *bir = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[self frame]];
[self cacheDisplayInRect:[self bounds] toBitmapImageRep:bir];
NSImage* image = [[NSImage alloc]initWithSize:imgSize];
[image addRepresentation:bir];
return image;
}
I get a corrupted image without scaling, where the bottom-left quadrant of the image is drawn again on top of the top-right quadrant, like this:
What in the world am I doing wrong?
Solution:
While it does not address the core problem of drawing with NSBitmapImageRep, the following -imageWithSubviews prevents the corruption and outputs the correct image:
- (NSImage *)imageWithSubviews
{
NSData *pdfData = [self dataWithPDFInsideRect:[self bounds]];
NSImage* image = [[NSImage alloc] initWithData:pdfData];
return image;
}
Based on some debugging above, we determined the problem was in -imageWithSubviews.
Instead of generating image data for the view using -cacheDisplayInRect:toBitmapImageRep:, changing it to -dataWithPDFInsideRect: fixed the issue.

Capturing an offline NSView to an NSImage

I'm trying to make a custom animation for replacing an NSView with another.
For that reason I need to get an image of the NSView before it appears on the screen.
The view may contain layers and NSOpenGLView subviews, and therefore standard options like initWithFocusedViewRect: and bitmapImageRepForCachingDisplayInRect: do not work well in this case (they did not capture layer or OpenGL content well in my experiments).
I am looking for something like CGWindowListCreateImage that is able to "capture" an offscreen NSWindow including layers and OpenGL content.
Any suggestions?
I created a category for this:
@implementation NSView (PecuniaAdditions)
/**
* Returns an offscreen view containing all visual elements of this view for printing,
* including CALayer content. Useful only for views that are layer-backed.
*/
- (NSView*)printViewForLayerBackedView;
{
NSRect bounds = self.bounds;
int bitmapBytesPerRow = 4 * bounds.size.width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGContextRef context = CGBitmapContextCreate (NULL,
bounds.size.width,
bounds.size.height,
8,
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if (context == NULL)
{
NSLog(@"printViewForLayerBackedView: Failed to create context.");
return nil;
}
[[self layer] renderInContext: context];
CGImageRef img = CGBitmapContextCreateImage(context);
NSImage* image = [[NSImage alloc] initWithCGImage: img size: bounds.size];
NSImageView* canvas = [[NSImageView alloc] initWithFrame: bounds];
[canvas setImage: image];
CFRelease(img);
CFRelease(context);
return canvas;
}
@end
This code is primarily for printing NSViews which contain layered child views. Might help you too.
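A possible call site (illustrative, not from the original answer): hand the snapshot view to a print operation in place of the layer-backed original:

```objc
// Hypothetical usage: print a layer-backed view via its offscreen snapshot.
NSView *printView = [myLayerBackedView printViewForLayerBackedView];
if (printView != nil) {
    NSPrintOperation *op = [NSPrintOperation printOperationWithView:printView];
    [op runOperation];
}
```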

CGGradient isn't visible (not using Interface Builder) and UIButtons can't be triggered

I have created a view that contains a CGGradient:
// Bar ContextRef
CGRect bar = CGRectMake(0, screenHeight-staffAlignment, screenWidth, barWidth);
CGContextRef barContext = UIGraphicsGetCurrentContext();
CGContextSaveGState(barContext);
CGContextClipToRect(barContext,bar);
// Bar GradientRef
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat components[16] = { 1.0,1.0,1.0,0.0, 0.0,0.0,0.0,1.0, 0.0,0.0,0.0,1.0, 1.0,1.0,1.0,0.0};
CGFloat locations[4] = {0.95,0.85,0.15,0.05};
size_t count = 4;
CGGradientRef gradientRef = CGGradientCreateWithColorComponents(colorSpace, components, locations, count);
// Draw Bar
CGPoint startPoint = {0.0,0.0};
CGPoint endPoint = {screenWidth,0.0};
CGContextDrawLinearGradient(barContext, gradientRef, startPoint, endPoint, 0);
CGContextRestoreGState(barContext);
This code is called in the drawRect method of the UIView. I then use a UIViewController to access the created view.
- (void)loadView {
MainPageView *mpView = [[MainPageView alloc] initWithFrame:[window bounds]];
[self setView:mpView];
[mpView release];
}
and displayed on the screen through the appDelegate:
mpViewController = [[MainPageViewController alloc] init];
[window addSubview:[mpViewController view]];
[window makeKeyAndVisible];
The UIView contains more objects, such as UIButtons, that are visible, I am assuming because they are added as subviews. But I can't work out how to add the CGGradient as a subview. Does it need to be? Is there another reason the CGGradient is not visible?
I also don't get any functionality on the UIButtons. I guess that is because of where I have added the UIButtons to the view. Do the buttons need to be added in the UIViewController or the appDelegate to have functionality? Sorry to ask what would seem like simple questions, but I am trying to accomplish this without Interface Builder, and material on that is scarce. If anyone could point me in the right direction on both these problems I would really appreciate it.
Thanks!
The functionality on the buttons was lost because the frame was too large; the buttons were still visible because the background was clearColor.
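One way to spot this kind of problem (an illustrative diagnostic, not from the answer): confirm each button's frame actually lies within its superview's bounds, since touches landing outside a superview's bounds are never delivered to its subviews:

```objc
// Log any button whose frame escapes its superview's bounds.
for (UIView *subview in mpView.subviews) {
    if (![subview isKindOfClass:[UIButton class]]) continue;
    if (!CGRectContainsRect(mpView.bounds, subview.frame)) {
        NSLog(@"Button %@ extends outside its superview and may miss touches.", subview);
    }
}
```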

Programmatically edit image in Cocoa?

I want to set a custom mouse cursor for my app and would like to programmatically change the default cursor's color to a custom color by replacing the white border with the custom color. The problem is that I don't even know where to start to programmatically edit images in Cocoa, so any help is appreciated!
You can get the default cursor with -[NSCursor arrowCursor]. Once you have a cursor, you can get its image with -[NSCursor image]. You shouldn't modify another object's image, so you should copy that image. Then you should edit the image, and create a new cursor with -[NSCursor initWithImage:hotSpot:]. Your code should look something like this:
- (NSImage *)customArrowCursorImage {
NSImage *image = [[[NSCursor arrowCursor] image] copy];
[image lockFocus];
// Do custom drawing here
[image unlockFocus];
return image;
}
- (NSCursor *)customArrowCursor {
NSImage *image = [self customArrowCursorImage];
NSPoint hotSpot = [[NSCursor arrowCursor] hotSpot];
return [[[NSCursor alloc] initWithImage:image hotSpot:hotSpot] autorelease];
}
You should be able to replace the white color of the image with a custom color by using a Core Image filter. But if you just want to get started, you can use NSReadPixel() and NSRectFill() to color one pixel at a time. Drawing one pixel at a time like that with NSReadPixel and NSRectFill will be exceptionally slow, so you should only do that to get a feel for how all of this works.
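A hedged sketch of that per-pixel idea (illustrative only, and deliberately naive; `tintColor` is a hypothetical NSColor you supply):

```objc
// Extremely slow, for-learning-only recoloring: run this between
// -lockFocus and -unlockFocus on the copied cursor image.
NSSize size = [image size];
for (NSInteger y = 0; y < (NSInteger)size.height; y++) {
    for (NSInteger x = 0; x < (NSInteger)size.width; x++) {
        NSColor *pixel = [NSReadPixel(NSMakePoint(x, y))
                          colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
        // Treat bright, non-transparent pixels as the white border to replace.
        if (pixel != nil && pixel.brightnessComponent > 0.9 && pixel.alphaComponent > 0.0) {
            [[tintColor colorWithAlphaComponent:pixel.alphaComponent] setFill];
            NSRectFill(NSMakeRect(x, y, 1.0, 1.0));
        }
    }
}
```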
My final code is below. It takes the standard I-beam cursor (the one shown when you hover over a text view) and stores the colored cursor in the coloredIBeamCursor pointer.
- (void)setPointerColor:(NSColor *)newColor {
// create the new cursor image
[[NSGraphicsContext currentContext] CIContext];
// create the layer with the same color as the text
CIFilter *backgroundGenerator=[CIFilter filterWithName:@"CIConstantColorGenerator"];
CIColor *color=[[[CIColor alloc] initWithColor:newColor] autorelease];
[backgroundGenerator setValue:color forKey:@"inputColor"];
CIImage *backgroundImage=[backgroundGenerator valueForKey:@"outputImage"];
// create the cursor image
CIImage *cursor=[CIImage imageWithData:[[[NSCursor IBeamCursor] image] TIFFRepresentation]];
CIFilter *filter=[CIFilter filterWithName:@"CIColorInvert"];
[filter setValue:cursor forKey:@"inputImage"];
CIImage *outputImage=[filter valueForKey:@"outputImage"];
// apply a multiply filter
filter=[CIFilter filterWithName:@"CIMultiplyCompositing"];
[filter setValue:backgroundImage forKey:@"inputImage"];
[filter setValue:outputImage forKey:@"inputBackgroundImage"];
outputImage=[filter valueForKey:@"outputImage"];
// get the NSImage from the CIImage
NSCIImageRep *rep=[NSCIImageRep imageRepWithCIImage:outputImage];
NSImage *newImage=[[[NSImage alloc] initWithSize:[outputImage extent].size] autorelease];
[newImage addRepresentation:rep];
// remove the old cursor (if any)
if (coloredIBeamCursor!=nil) {
[self removeCursorRect:[self visibleRect] cursor:coloredIBeamCursor];
[coloredIBeamCursor release];
}
// set the new cursor
coloredIBeamCursor=[[NSCursor alloc] initWithImage:newImage hotSpot:[[NSCursor IBeamCursor] hotSpot]];
[self resetCursorRects];
}

Copying the drawn contents of one UIView to another

I'd like to take a UITextView and allow the user to enter text into it and then trigger a copy of the contents onto a quartz bitmap context. Does anyone know how I can perform this copy action? Should I override the drawRect method and call [super drawRect] and then take the resulting context and copy it? If so, does anyone have any reference to sample code to copy from one context to another?
Update: from reading the link in the answer below, I put together this much to attempt to copy my UIView contents into a bitmap context, but something is still not right. I get my contents mirrored across the X axis (i.e. upside down). I tried using CGContextScaleCTM() but that seems to have no effect.
I've verified that the created UIImage from the first four lines do properly create a UIImage that isn't strangely rotated/flipped, so there is something I'm doing wrong with the later calls.
// copy contents to bitmap context
UIGraphicsBeginImageContext(mTextView.bounds.size);
[mTextView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setNeedsDisplay];
// render the created image to the bitmap context
CGImageRef cgImage = [image CGImage];
CGContextScaleCTM(mContext, 1.0, -1.0); // doesn't seem to change result
CGContextDrawImage(mContext, CGRectMake(
mTextView.frame.origin.x,
mTextView.frame.origin.y,
[image size].width, [image size].height), cgImage);
Any suggestions?
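One common fix for the upside-down result (a sketch, not from the thread's answer): CGContextScaleCTM alone mirrors around y = 0, which pushes the content out of the visible area, so translate first so the flip happens around the draw rect's own midline:

```objc
// Flip the CTM about the horizontal midline of the draw rect, then draw.
CGRect drawRect = CGRectMake(mTextView.frame.origin.x,
                             mTextView.frame.origin.y,
                             [image size].width,
                             [image size].height);
CGContextSaveGState(mContext);
CGContextTranslateCTM(mContext, 0.0, CGRectGetMinY(drawRect) + CGRectGetMaxY(drawRect));
CGContextScaleCTM(mContext, 1.0, -1.0);
CGContextDrawImage(mContext, drawRect, cgImage);
CGContextRestoreGState(mContext);
```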
Here is the code I used to get a UIImage of UIView:
@implementation UIView (Screenshot)
- (UIImage *)screenshot{
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [UIScreen mainScreen].scale);
/* iOS 7 */
BOOL visible = !self.hidden && self.superview;
CGFloat alpha = self.alpha;
BOOL animating = self.layer.animationKeys != nil;
BOOL success = YES;
if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]){
//only works when visible
if (!animating && alpha == 1 && visible) {
success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
}else{
self.alpha = 1;
success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
self.alpha = alpha;
}
}
if(!success){ /* iOS 6 */
self.alpha = 1;
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
self.alpha = alpha;
}
UIImage* img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
@end
You can use in iOS 7 and later:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
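A brief usage sketch (illustrative): the returned snapshot is a lightweight UIView meant for transitions, not a bitmap you can draw with:

```objc
// Swap a view for its snapshot during a custom replacement animation.
UIView *snapshot = [oldView snapshotViewAfterScreenUpdates:NO];
snapshot.frame = oldView.frame;
[oldView.superview addSubview:snapshot];
[oldView removeFromSuperview];
// ...animate `snapshot` out and the replacement view in...
```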
