Copying the drawn contents of one UIView to another - cocoa

I'd like to take a UITextView, allow the user to enter text into it, and then trigger a copy of the contents onto a Quartz bitmap context. Does anyone know how I can perform this copy action? Should I override the drawRect: method, call [super drawRect:], and then take the resulting context and copy it? If so, does anyone have any reference to sample code for copying from one context to another?
Update: from reading the link in the answer below, I put together this much to attempt to copy my UIView contents into a bitmap context, but something is still not right. I get my contents mirrored across the X axis (i.e. upside down). I tried using CGContextScaleCTM() but that seems to have no effect.
I've verified that the first four lines properly create a UIImage that isn't strangely rotated or flipped, so there must be something I'm doing wrong in the later calls.
// copy contents to bitmap context
UIGraphicsBeginImageContext(mTextView.bounds.size);
[mTextView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

[self setNeedsDisplay];

// render the created image to the bitmap context
CGImageRef cgImage = [image CGImage];
CGContextScaleCTM(mContext, 1.0, -1.0); // doesn't seem to change the result
CGContextDrawImage(mContext, CGRectMake(mTextView.frame.origin.x,
                                        mTextView.frame.origin.y,
                                        [image size].width,
                                        [image size].height), cgImage);
Any suggestions?

Here is the code I used to get a UIImage of a UIView:

@implementation UIView (Screenshot)

- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [UIScreen mainScreen].scale);

    /* iOS 7 */
    BOOL visible = !self.hidden && self.superview;
    CGFloat alpha = self.alpha;
    BOOL animating = self.layer.animationKeys != nil;
    BOOL success = YES;
    if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        // only works when visible
        if (!animating && alpha == 1 && visible) {
            success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
        } else {
            self.alpha = 1;
            success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
            self.alpha = alpha;
        }
    }

    if (!success) { /* iOS 6 */
        self.alpha = 1;
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
        self.alpha = alpha;
    }

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

@end
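A minimal usage sketch, assuming the category above is compiled into the project (the header name, someView, and containerView are hypothetical):

#import "UIView+Screenshot.h" // hypothetical header for the category above

UIImage *snapshot = [someView screenshot];
UIImageView *preview = [[UIImageView alloc] initWithImage:snapshot];
[containerView addSubview:preview];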

In iOS 7 and later you can also use the following, which returns a snapshot UIView rather than a UIImage:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
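As for the mirrored output in the question's own attempt: CGContextDrawImage draws in Core Graphics' bottom-left-origin coordinate system, so a UIKit-rendered image comes out flipped vertically unless the CTM is flipped around the target rect just before drawing. A minimal sketch, reusing mContext, mTextView, image, and cgImage from the question (untested):

CGRect rect = CGRectMake(mTextView.frame.origin.x, mTextView.frame.origin.y,
                         [image size].width, [image size].height);

CGContextSaveGState(mContext);
// Flip the context vertically about the horizontal center line of rect,
// so rect maps onto itself with y inverted
CGContextTranslateCTM(mContext, 0, rect.origin.y + rect.size.height);
CGContextScaleCTM(mContext, 1.0, -1.0);
CGContextTranslateCTM(mContext, 0, -rect.origin.y);
CGContextDrawImage(mContext, rect, cgImage);
CGContextRestoreGState(mContext);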

Related

How can I show an image in an NSView using a CGImageRef image

I want to show an image in an NSView or in an NSImageView. In my header file I have:

@interface FVView : NSView
{
    NSImageView *imageView;
}
@end
Here is what I've been trying to do in my implementation file:
- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect];
    // (Here I get an image called fitsImage... then I do)

    // Here I make the image
    CGImageRef cgImage = CGImageRetain([fitsImage CGImageScaledToSize:maxSize]);
    NSImage *imageR = [self imageFromCGImageRef:cgImage];
    [imageR lockFocus];

    // Here I have the view context
    CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

    // Here I set the view dimensions
    CGRect renderRect = CGRectMake(0., 0., maxSize.width, maxSize.height);
    [self.layer renderInContext:ctx];
    [imageR unlockFocus];

    CGContextDrawImage(ctx, renderRect, cgImage);
    CGImageRelease(cgImage);
}
I don't get anything in the NSView window when I run it. There are no errors at all; I just can't see what I'm doing wrong. My Xcode version is 5.1.1.
I'm trying to learn how to manipulate a CGImageRef and view it in a window or NSView.
Thank you.
I'm not quite sure what exactly your setup is. Drawing an image in a custom view is a separate thing from using an NSImageView. Also, a custom view that may (or may not) be layer-backed is different from a layer-hosting view.
You have a lot of the right elements, but they're all mixed up together. In no case do you have to lock focus on an NSImage. That's for drawing into an NSImage. Also, a custom view that subclasses from NSView doesn't have to call super in its -drawRect:. NSView doesn't draw anything.
To draw an image in a custom view, try:
- (void)drawRect:(NSRect)dirtyRect
{
    CGImageRef cgImage = /* ... */;
    NSSize maxSize = /* ... */;

    CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGRect renderRect = CGRectMake(0., 0., maxSize.width, maxSize.height);
    CGContextDrawImage(ctx, renderRect, cgImage);
    CGImageRelease(cgImage);
}
If you have an NSImageView, then you don't need a custom view or any drawing method or code. Just do the following at the point where you obtain the image or the information necessary to generate it:
NSImageView* imageView = /* ... */; // Often an outlet to a view in a NIB rather than a local variable.
CGImageRef cgImage = /* ... */;
NSImage* image = [[NSImage alloc] initWithCGImage:cgImage size:/* ... */];
imageView.image = image;
CGImageRelease(cgImage);
If you're working with a layer-hosting view, you just need to set the CGImage as the layer's content. Again, you do this whenever you obtain the image or the information necessary to generate it. It's not in -drawRect:.
CALayer* layer = /* ... */; // Perhaps someView.layer
CGImageRef cgImage = /* ... */;
layer.contents = (__bridge id)cgImage;
CGImageRelease(cgImage);

Capturing an offline NSView to an NSImage

I'm trying to make a custom animation for replacing an NSView with another.
For that reason I need to get an image of the NSView before it appears on the screen.
The view may contain layers and NSOpenGLView subviews, and therefore standard options like initWithFocusedViewRect: and bitmapImageRepForCachingDisplayInRect: do not work well in this case (in my experiments they did not capture layer or OpenGL content well).
I am looking for something like CGWindowListCreateImage, that is able to "capture" an offline NSWindow including layers and OpenGL content.
Any suggestions?
I created a category for this:
@implementation NSView (PecuniaAdditions)

/**
 * Returns an offscreen view containing all visual elements of this view for printing,
 * including CALayer content. Useful only for views that are layer-backed.
 */
- (NSView *)printViewForLayerBackedView
{
    NSRect bounds = self.bounds;
    int bitmapBytesPerRow = 4 * bounds.size.width;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 bounds.size.width,
                                                 bounds.size.height,
                                                 8,
                                                 bitmapBytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        NSLog(@"printViewForLayerBackedView: Failed to create context.");
        return nil;
    }

    [[self layer] renderInContext: context];
    CGImageRef img = CGBitmapContextCreateImage(context);
    NSImage *image = [[NSImage alloc] initWithCGImage: img size: bounds.size];

    NSImageView *canvas = [[NSImageView alloc] initWithFrame: bounds];
    [canvas setImage: image];

    CFRelease(img);
    CFRelease(context);
    return canvas;
}

@end
This code is primarily for printing NSViews which contain layered child views. Might help you too.
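A minimal usage sketch for printing, assuming a layer-backed view (contentView is a hypothetical outlet):

NSView *printView = [self.contentView printViewForLayerBackedView];
if (printView != nil) {
    // Hand the offscreen snapshot view to a standard print operation
    [[NSPrintOperation printOperationWithView:printView] runOperation];
}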

Zoom & Pan in UIImageView inside UIScrollView

I have a UIScrollView filled with UIImageViews. The UIScrollView has paging enabled so users can "flip" through the images. Each UIImageView has a UIPinchGestureRecognizer for pinch zooming and a UIPanGestureRecognizer for panning the image when zoomed in.
As you have probably noticed, what I'd like to achieve is just like what iBooks does in its application.
However, I'm having a difficult time getting this to work.
In my "BookPageViewController", I have set up the UIScrollView, then filled it with images from a folder based on data (page numbers, file names, etc.) from SQLite.
_pageScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
NSUInteger pageCount = [pages count];

for (int i = 0; i < pageCount; i++) {
    CGFloat x = i * self.view.frame.size.width;

    // Page settings
    UIImageView *pageView = [[UIImageView alloc] initWithFrame:CGRectMake(x, 0, self.view.frame.size.width, self.view.frame.size.height)];
    pageView.backgroundColor = [UIColor greenColor];
    pageView.image = [pages objectAtIndex:i];
    pageView.userInteractionEnabled = YES;
    pageView.tag = i;

    // Gesture recognizers
    UIPinchGestureRecognizer *pinchGr = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchImage:)];
    pinchGr.delegate = self;
    [pageView addGestureRecognizer:pinchGr];

    UIPanGestureRecognizer *panGr = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panImage:)];
    [pageView addGestureRecognizer:panGr];

    // Finally add this page to the scroll view
    [_pageScrollView addSubview:pageView];
}

_pageScrollView.contentSize = CGSizeMake(self.view.frame.size.width * pageCount, self.view.frame.size.height);
_pageScrollView.pagingEnabled = YES;
_pageScrollView.delegate = self;
[self.view addSubview:_pageScrollView];
And with the help of other good questions here on Stack Overflow, I have put together this:
- (void)pinchImage:(UIPinchGestureRecognizer *)pgr {
    [self adjustAnchorPointForGestureRecognizer:pgr];
    if ([pgr state] == UIGestureRecognizerStateBegan || [pgr state] == UIGestureRecognizerStateChanged) {
        if ([pgr scale]) {
            NSLog(@"SCALE: %f", [pgr scale]);
            [pgr view].transform = CGAffineTransformScale([[pgr view] transform], [pgr scale], [pgr scale]);
            [pgr setScale:1];
        } else {
            NSLog(@"[PAMPHLET PAGE]: The image cannot be scaled.");
        }
    }
}
My problem is, when I zoom in on one of the UIImageViews with the pinch gesture, the image spills over onto the next image and hides it. I believe there should be a way to "limit" the zoom in/out and the size of the UIImageView (not the UIImage itself), but I don't know where to go from here. I have also tried code to limit the scale, something like:
([pgr scale] > 1.0f && [pgr scale] < 1.014719) || ([pgr scale] < 1.0f && [pgr scale] > 0.98f)
but it didn't work...
I know this isn't hard for iOS professionals, but I'm quite new to Objective-C, and this is my first time developing a real application. If this is not good practice, I would also like to know how to do this the way iBooks does (e.g. put a UIImageView in a UIScrollView, then put that UIScrollView into another UIScrollView)...
Sorry for another beginner question, but I really need help.
Thanks in advance.
Without trying it out myself, my first guess would be that the transform on the UIImageView also transforms its frame. One way to solve that would be to put the UIImageView inside another UIView, and put that UIView in the UIScrollView. Your gesture recognizers and the transform would still be on the UIImageView. Make sure the UIView's clipsToBounds property is set to YES.
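A minimal sketch of that container setup, adapted from the loop in the question (the variable names x, pages, i, and _pageScrollView come from there; untested):

// Container that clips the zoomed image to the page bounds
UIView *pageContainer = [[UIView alloc] initWithFrame:CGRectMake(x, 0, self.view.frame.size.width, self.view.frame.size.height)];
pageContainer.clipsToBounds = YES;

// The image view fills the container; the gesture recognizers and the
// transform stay on the image view itself
UIImageView *pageView = [[UIImageView alloc] initWithFrame:pageContainer.bounds];
pageView.image = [pages objectAtIndex:i];
pageView.userInteractionEnabled = YES;
[pageContainer addSubview:pageView];

[_pageScrollView addSubview:pageContainer];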

UIPageViewController changes size?

I'm using the Page-Based Application template in Xcode 4 to load pages of a PDF and create a UITextView over hidden text boxes so the user can write notes.
So far I have it all working, but when I add the UITextView, it's in the wrong place in landscape mode (showing 2 pages).
// ModelController.m
- (id)init
{
    self = [super init];
    if (self) {
        NSString *pathToPdfDoc = [[NSBundle mainBundle] pathForResource:@"My PDF File" ofType:@"pdf"];
        NSURL *pdfUrl = [NSURL fileURLWithPath:pathToPdfDoc];
        self.pageData = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfUrl); // pageData holds the PDF file
    }
    return self;
}

- (DataViewController *)viewControllerAtIndex:(NSUInteger)index storyboard:(UIStoryboard *)storyboard
{
    // Return the data view controller for the given index.
    if (CGPDFDocumentGetNumberOfPages(self.pageData) == 0 || (index >= CGPDFDocumentGetNumberOfPages(self.pageData)))
        return nil;

    // Create a new view controller and pass suitable data.
    DataViewController *dataViewController = [storyboard instantiateViewControllerWithIdentifier:@"DataViewController"];
    dataViewController.dataObject = CGPDFDocumentGetPage(self.pageData, index + 1); // dataObject holds the page of the PDF file
    [dataViewController view]; // make sure the view is loaded so that all subviews can be accessed

    UITextView *textView = [[UITextView alloc] initWithFrame:CGRectMake(10, 20, 30, 40)];
    textView.layer.borderWidth = 1.0f;
    textView.layer.borderColor = [[UIColor grayColor] CGColor];
    [dataViewController.dataView addSubview:textView]; // dataView is a subview of dataViewController.view in the storyboard/xib

    CGRect viewFrame = dataViewController.dataView.frame; // <-- *** THIS IS THE WRONG SIZE IN LANDSCAPE ***

    return dataViewController;
}
This behavior really surprised me, because viewControllerAtIndex isn't called when I rotate the iPad, so I have no way of knowing what the real size of the view frame is. I get the same view frame in both portrait and landscape:
# In the Xcode console:
po [dataViewController view]

# Result in either orientation:
(id) $4 = 0x0015d160 <UIView: 0x15d160; frame = (0 20; 768 1004); autoresize = RM+BM; layer = <CALayer: 0x15d190>>
Does anyone know if there is a transform I'm supposed to use to position the UITextView correctly? I'm concerned that I may have to store the locations of the elements independently and reposition them upon receiving shouldAutorotateToInterfaceOrientation messages.
It seems that Apple may have implemented UIPageViewController improperly, but all I could find was this partial workaround that I'm still trying to figure out:
UIPageViewController and off screen orientation changes
Thanks!
I think the trick here is to override viewDidLayoutSubviews in your DataViewController and manage the size of all your programmatically-inserted non-autosizing views, since you don't really know what the parent is going to do to its subviews until that time.
- (void)viewDidLayoutSubviews
{
    [super viewDidLayoutSubviews];
    self.textView.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height);
}

Setting image property of UIImageView causes major lag

Let me tell you about the problem I am having and how I tried to solve it. I have a UIScrollView which loads subviews as one scrolls from left to right. Each subview has 10-20 images around 400x200 each. When I scroll from view to view, I experience quite a bit of lag.
After investigating, I discovered that after unloading all the views and trying it again, the lag was gone. I figured that the synchronous caching of the images was the cause of the lag. So I created a subclass of UIImageView which loaded the images asynchronously. The loading code looks like the following (self.dispatchQueue returns a serial dispatch queue).
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        UIImage *image = [UIImage imageNamed:name];

        dispatch_sync(dispatch_get_main_queue(), ^{
            self.image = image;
        });
    });
}
However, after changing all of my UIImageViews to this subclass, I still experienced lag (I'm not sure if it was lessened or not). I boiled down the cause of the problem to self.image = image;. Why is this causing so much lag (but only on the first load)?
Please help me. =(
EDIT 3: iOS 15 now offers UIImage.prepareForDisplay(completionHandler:).
image.prepareForDisplay { decodedImage in
    imageView.image = decodedImage
}
or
imageView.image = await image.byPreparingForDisplay()
EDIT 2: Here is a Swift version that contains a few improvements. (Untested.)
https://gist.github.com/fumoboy007/d869e66ad0466a9c246d
EDIT: Actually, I believe all that is necessary is the following. (Untested.)
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        // Determine path to image depending on scale of device's screen,
        // fallback to 1x if 2x is not available
        NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];
        NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;

        UIImage *image = [[UIImage alloc] initWithContentsOfFile:pathToImage];

        // Decompress image
        if (image) {
            UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
            [image drawAtPoint:CGPointZero];
            image = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
        }

        // Configure the UI with pre-decompressed UIImage
        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = image;
        });
    });
}
ORIGINAL ANSWER: It turns out that it wasn't self.image = image; directly. The UIImage image loading methods don't decompress and process the image data right away; they do it when the view refreshes its display. So the solution was to go a level lower to Core Graphics and decompress and process the image data myself. The new code looks like the following.
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        // Determine path to image depending on scale of device's screen,
        // fallback to 1x if 2x is not available
        NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];
        NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;

        UIImage *uiImage = nil;
        if (pathToImage) {
            // Load the image
            CGDataProviderRef imageDataProvider = CGDataProviderCreateWithFilename([pathToImage fileSystemRepresentation]);
            CGImageRef image = CGImageCreateWithPNGDataProvider(imageDataProvider, NULL, NO, kCGRenderingIntentDefault);

            // Create a bitmap context from the image's specifications
            // (Note: We need to specify kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little
            // because PNGs are optimized by Xcode this way.)
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef bitmapContext = CGBitmapContextCreate(NULL, CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CGImageGetWidth(image) * 4, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);

            // Draw the image into the bitmap context
            CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

            // Extract the decompressed image
            CGImageRef decompressedImage = CGBitmapContextCreateImage(bitmapContext);

            // Create a UIImage
            uiImage = [[UIImage alloc] initWithCGImage:decompressedImage];

            // Release everything
            CGImageRelease(decompressedImage);
            CGContextRelease(bitmapContext);
            CGColorSpaceRelease(colorSpace);
            CGImageRelease(image);
            CGDataProviderRelease(imageDataProvider);
        }

        // Configure the UI with pre-decompressed UIImage
        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = uiImage;
        });
    });
}
I think the problem could be the images themselves. For example, in one of my projects I have 10 images at 640x600, layered on top of each other with alpha transparency. When I try to push or pop a view controller from that view controller, it lags a lot.
When I leave only a few images, or use much smaller images, there is no lag.
P.S. Tested on SDK 4.2, iOS 5, iPhone 4.
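If oversized images are the culprit, one option is to pre-scale them off the main thread before assigning them to the view, in the same spirit as the decompression code above. A minimal sketch, assuming the same UIImageView subclass as in the question and a caller-supplied targetSize (untested):

- (void)loadScaledImageNamed:(NSString *)name targetSize:(CGSize)targetSize {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *image = [UIImage imageNamed:name];

        // Redraw the image at the size it will actually be displayed,
        // so the view never has to composite the full-resolution bitmap
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, [UIScreen mainScreen].scale);
        [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = scaled;
        });
    });
}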
