CCLayer to UIImage - Anti-aliasing?

When I grab a snapshot of a CCLayer as a UIImage with the help of CCRenderTexture, it seems like I'm losing the anti-aliasing, so the output image looks slightly different from what the screen actually shows.
Is there a way of getting an output image that corresponds more exactly to what is shown on the screen?
This is how I'm getting my UIImage:
-(UIImage *)layerRepresentation {
    CCLayer *layer1 = self;
    CCRenderTexture *renderer01 = [CCRenderTexture renderTextureWithWidth:layer1.contentSize.width height:layer1.contentSize.height];
    [renderer01 begin];
    [self visit];
    [renderer01 end];
    UIImage *image = [renderer01 getUIImage];
    return image;
}

When CCRenderTexture is created, it sends a setAliasTexParameters message to its texture. Try:
[renderer01.sprite.texture setAntiAliasTexParameters];
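Putting the two together, a minimal sketch of the snapshot method with anti-aliasing restored (assuming the cocos2d-iphone API, where setAntiAliasTexParameters switches the texture back to linear filtering):

-(UIImage *)layerRepresentation {
    CCRenderTexture *renderer01 = [CCRenderTexture renderTextureWithWidth:self.contentSize.width height:self.contentSize.height];
    // CCRenderTexture calls setAliasTexParameters on its texture, which disables
    // filtering; switch it back to linear filtering before rendering.
    [renderer01.sprite.texture setAntiAliasTexParameters];
    [renderer01 begin];
    [self visit];
    [renderer01 end];
    return [renderer01 getUIImage];
}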

Related

UIImage Animation Reverting Back To Original Position

I'm pretty new to coding, but I'm starting to get the hang of the basics.
How do I make an image stay in its new position after an animation?
Example:
I'm giving an animating object a random position; however, the object doesn't animate at the random position, but instead animates at the position it was given in the view controller. This also happens when I animate a completely different object.
Code I used:
int Random1x;
int Random1y;
IBOutlet UIButton *Start;
IBOutlet UIImageView *Object2;

-(void)ObjectMoving;
-(void)Object2Animate;

-(IBAction)Start:(id)sender {
    [self ObjectMoving];
    [self Object2Animate];
}

-(void)Object2Animate {
    Object2.animationImages = [NSArray arrayWithObjects:
                               [UIImage imageNamed:@"2.png"],
                               [UIImage imageNamed:@"3.png"],
                               [UIImage imageNamed:@"4.png"],
                               [UIImage imageNamed:@"1.png"], nil];
    Object2.animationDuration = .5;
    [Object2 setAnimationRepeatCount:0];
    [Object2 startAnimating];
}

-(void)ObjectMoving {
    Random1y = arc4random() % 466;
    Random1y = Random1y + 60;
    Random1x = arc4random() % 288;
    Object2.center = CGPointMake(Random1x, Random1y);
}
I'd greatly appreciate help, thank you!
If you go to your storyboard file, click on the View Controller, and then open the File inspector, you will see a checkbox for Auto Layout; try unchecking it.
Post back if that worked.
If you do need to use Auto Layout, then you would have to figure out a different way of moving the image.
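If Auto Layout has to stay on, the usual approach is to move the view by changing constraint constants instead of setting center directly. A minimal sketch, assuming hypothetical outlets wired in the storyboard to the image view's leading and top constraints:

IBOutlet NSLayoutConstraint *Object2Leading; // hypothetical outlet to the leading constraint
IBOutlet NSLayoutConstraint *Object2Top;     // hypothetical outlet to the top constraint

-(void)ObjectMoving {
    // Adjust the constraints so Auto Layout doesn't snap the view
    // back to its storyboard position on the next layout pass.
    Object2Leading.constant = arc4random_uniform(288);
    Object2Top.constant = arc4random_uniform(466) + 60;
    [self.view layoutIfNeeded];
}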

iOS7 screenshot not taking into consideration blur effect

I'm taking a screenshot with this code:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContext(self.bounds.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
but the resulting image doesn't show the alpha and blur effects properly.
Any way to fix this?
When you look at the documentation for renderInContext:, you can see it has some downsides when it comes to animations and so on. Try this instead, if you don't need to take a screenshot of the layer directly:
- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 0);
    [self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:NO];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
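One note on why this helps: unlike renderInContext:, drawViewHierarchyInRect:afterScreenUpdates: renders the snapshot from the live window contents, which is how it picks up blur and vibrancy effects. Passing YES for afterScreenUpdates: includes pending layout and appearance changes, at the cost of forcing a screen update first.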

NSView image corruption when dragging from scaled view

I have a custom subclass of NSView that implements drag/drop for copying the image in the view to another application. The relevant code in my class looks like this:
#pragma mark -
#pragma mark Dragging Support
- (NSImage *)imageWithSubviews
{
    NSSize imgSize = self.bounds.size;
    NSBitmapImageRep *bir = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[self frame]];
    [self cacheDisplayInRect:[self frame] toBitmapImageRep:bir];
    NSImage *image = [[NSImage alloc] initWithSize:imgSize];
    [image addRepresentation:bir];
    return image;
}
- (void)mouseDown:(NSEvent *)theEvent
{
    NSSize dragOffset = NSMakeSize(0.0, 0.0); // not used in the method below, but required
    NSPasteboard *pboard;
    NSImage *image = [self imageWithSubviews];
    pboard = [NSPasteboard pasteboardWithName:NSDragPboard];
    [pboard declareTypes:[NSArray arrayWithObject:NSTIFFPboardType] owner:self];
    [pboard setData:[image TIFFRepresentation] forType:NSTIFFPboardType];
    [self dragImage:image
                 at:self.bounds.origin
             offset:dragOffset
              event:theEvent
         pasteboard:pboard
             source:self
          slideBack:YES];
    return;
}
#pragma mark -
#pragma mark NSDraggingSource Protocol
- (NSDragOperation)draggingSession:(NSDraggingSession *)session sourceOperationMaskForDraggingContext:(NSDraggingContext)context
{
    return NSDragOperationCopy;
}

- (BOOL)ignoreModifierKeysForDraggingSession:(NSDraggingSession *)session
{
    return YES;
}
This works as expected until I resize the main window. The main window only increases its height/width in matching increments, to maintain the proper aspect ratio for this view. The view properly displays its content on screen when the window is resized.
The problem comes when I enlarge the window by more than about 25%. While the view still displays as expected, the image that is dragged off of it (into Pages, for example) is corrupt: a portion of the image appears repeated on top of itself.
Here is what it looks like normally:
And here is what it looks like when dragged to Pages after resizing the main window to make it large (downsized to show here -- imagine it at 2-3x the size of the first image):
Note that I highlighted the corrupt area with a dotted rectangle.
A few more notes:
I have my bounds set like NSMakeRect(-200,-200,400,400) because it makes the symmetrical drawing a bit easier. When the window resizes, I recalculate the bounds to keep 0,0 in the center of the NSView. The NSView always is square.
Finally, the Apple docs say the following about the bitmapImageRep parameter of cacheDisplayInRect:toBitmapImageRep::
An NSBitmapImageRep object. For pixel-format compatibility, bitmapImageRep should have been obtained from bitmapImageRepForCachingDisplayInRect:.
I've tried using bitmapImageRepForCachingDisplayInRect:, but then all I see is the lower-left quadrant of the pyramid in the upper-right quadrant of the image. That makes me think that I need to add an offset for the capture of the bitmapImageRep, but I've been unable to determine how to do that.
Here's what the code for imageWithSubviews looks like when I try that:
- (NSImage *)imageWithSubviews
{
    NSSize imgSize = self.bounds.size;
    NSBitmapImageRep *bir = [self bitmapImageRepForCachingDisplayInRect:[self bounds]];
    [self cacheDisplayInRect:[self bounds] toBitmapImageRep:bir];
    NSImage *image = [[NSImage alloc] initWithSize:imgSize];
    [image addRepresentation:bir];
    return image;
}
And this is how the resulting image appears:
That is a view of the lower left quadrant being drawn in the upper-right corner.
What is causing the corruption when I drag from the NSView after enlarging the window? How do I fix that, and/or how should I change my implementation of the methods listed above to avoid the problem?
More info:
When I change the imageWithSubviews method to:
- (NSImage *)imageWithSubviews
{
    NSSize imgSize = self.bounds.size;
    NSBitmapImageRep *bir = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[self frame]];
    [self cacheDisplayInRect:[self bounds] toBitmapImageRep:bir];
    NSImage *image = [[NSImage alloc] initWithSize:imgSize];
    [image addRepresentation:bir];
    return image;
}
I get a corrupted image without scaling, where the bottom-left quadrant of the image is drawn again on top of the top-right quadrant, like this:
What in the world am I doing wrong?
Solution:
While it does not address the core problem of drawing with NSBitmapImageRep, the following -imageWithSubviews prevents the corruption and outputs the correct image:
- (NSImage *)imageWithSubviews
{
    NSData *pdfData = [self dataWithPDFInsideRect:[self bounds]];
    NSImage *image = [[NSImage alloc] initWithData:pdfData];
    return image;
}
Based on the debugging above, we determined the problem was in -imageWithSubviews.
Instead of generating image data for the view with -cacheDisplayInRect:toBitmapImageRep:, switching to -dataWithPDFInsideRect: fixed the issue.
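(A plausible explanation, for what it's worth: this view keeps its bounds origin at (-200,-200), and -dataWithPDFInsideRect: renders in the view's own coordinate space, so there is no separate bitmap rep whose offset has to be kept in sync with the view. -initWithFocusedViewRect: in particular is documented as a way to read pixels out of a focus-locked view, not to allocate an empty rep for -cacheDisplayInRect:toBitmapImageRep:, which is presumably why the docs steer you toward -bitmapImageRepForCachingDisplayInRect:.)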

Setting image property of UIImageView causes major lag

Let me tell you about the problem I am having and how I tried to solve it. I have a UIScrollView which loads subviews as one scrolls from left to right. Each subview has 10-20 images around 400x200 each. When I scroll from view to view, I experience quite a bit of lag.
After investigating, I discovered that after unloading all the views and trying it again, the lag was gone. I figured that the synchronous caching of the images was the cause of the lag. So I created a subclass of UIImageView which loaded the images asynchronously. The loading code looks like the following (self.dispatchQueue returns a serial dispatch queue).
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        UIImage *image = [UIImage imageNamed:name];

        dispatch_sync(dispatch_get_main_queue(), ^{
            self.image = image;
        });
    });
}
However, after changing all of my UIImageViews to this subclass, I still experienced lag (I'm not sure if it was lessened or not). I boiled down the cause of the problem to self.image = image;. Why is this causing so much lag (but only on the first load)?
Please help me. =(
EDIT 3: iOS 15 now offers UIImage.prepareForDisplay(completionHandler:).
image.prepareForDisplay { decodedImage in
    imageView.image = decodedImage
}
or
imageView.image = await image.byPreparingForDisplay()
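Both forms hand back an image that has already been decoded, so assigning it to the image view no longer forces decompression on the main thread; on iOS 15 this supersedes the manual Core Graphics round trip below.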
EDIT 2: Here is a Swift version that contains a few improvements. (Untested.)
https://gist.github.com/fumoboy007/d869e66ad0466a9c246d
EDIT: Actually, I believe all that is necessary is the following. (Untested.)
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        // Determine the path to the image depending on the scale of the
        // device's screen, falling back to 1x if 2x is not available
        NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];
        NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;

        UIImage *image = [[UIImage alloc] initWithContentsOfFile:pathToImage];

        // Decompress the image
        if (image) {
            UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
            [image drawAtPoint:CGPointZero];
            image = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
        }

        // Configure the UI with the pre-decompressed UIImage
        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = image;
        });
    });
}
ORIGINAL ANSWER: It turns out that it wasn't self.image = image; directly. The UIImage image loading methods don't decompress and process the image data right away; they do it when the view refreshes its display. So the solution was to go a level lower to Core Graphics and decompress and process the image data myself. The new code looks like the following.
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        // Determine the path to the image depending on the scale of the
        // device's screen, falling back to 1x if 2x is not available
        NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];
        NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;

        UIImage *uiImage = nil;
        if (pathToImage) {
            // Load the image
            CGDataProviderRef imageDataProvider = CGDataProviderCreateWithFilename([pathToImage fileSystemRepresentation]);
            CGImageRef image = CGImageCreateWithPNGDataProvider(imageDataProvider, NULL, NO, kCGRenderingIntentDefault);

            // Create a bitmap context from the image's specifications
            // (Note: we need to specify kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little
            // because Xcode optimizes PNGs this way.)
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef bitmapContext = CGBitmapContextCreate(NULL, CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CGImageGetWidth(image) * 4, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);

            // Draw the image into the bitmap context
            CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

            // Extract the decompressed image
            CGImageRef decompressedImage = CGBitmapContextCreateImage(bitmapContext);

            // Create a UIImage
            uiImage = [[UIImage alloc] initWithCGImage:decompressedImage];

            // Release everything
            CGImageRelease(decompressedImage);
            CGContextRelease(bitmapContext);
            CGColorSpaceRelease(colorSpace);
            CGImageRelease(image);
            CGDataProviderRelease(imageDataProvider);
        }

        // Configure the UI with the pre-decompressed UIImage
        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = uiImage;
        });
    });
}
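A hypothetical call site, to show how the subclass fits into the scroll view's page-loading code (AsyncImageView, pageView, slotFrame, and the image name are all placeholder names for the pieces described above):

// As a page scrolls into view, kick off the async load; the decoded
// image is assigned on the main queue when it is ready.
AsyncImageView *imageView = [[AsyncImageView alloc] initWithFrame:slotFrame];
[pageView addSubview:imageView];
[imageView loadImageNamed:@"page3_photo1"];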
I think the problem could be the images themselves. For example, in one of my projects I had 10 images, 640x600, layered on top of each other with alpha transparency. When I tried to push or pop a view controller from that view controller, it lagged a lot.
When I left only a few images, or used considerably smaller ones, there was no lag.
P.S. Tested with SDK 4.2/iOS 5 on an iPhone 4.

Copying the drawn contents of one UIView to another

I'd like to take a UITextView and allow the user to enter text into it and then trigger a copy of the contents onto a quartz bitmap context. Does anyone know how I can perform this copy action? Should I override the drawRect method and call [super drawRect] and then take the resulting context and copy it? If so, does anyone have any reference to sample code to copy from one context to another?
Update: from reading the link in the answer below, I put together this much to attempt to copy my UIView contents into a bitmap context, but something is still not right. I get my contents mirrored across the X axis (i.e. upside down). I tried using CGContextScaleCTM() but that seems to have no effect.
I've verified that the created UIImage from the first four lines do properly create a UIImage that isn't strangely rotated/flipped, so there is something I'm doing wrong with the later calls.
// Copy contents to a bitmap context
UIGraphicsBeginImageContext(mTextView.bounds.size);
[mTextView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setNeedsDisplay];

// Render the created image into the bitmap context
CGImageRef cgImage = [image CGImage];
CGContextScaleCTM(mContext, 1.0, -1.0); // doesn't seem to change the result
CGContextDrawImage(mContext, CGRectMake(mTextView.frame.origin.x,
                                        mTextView.frame.origin.y,
                                        [image size].width,
                                        [image size].height), cgImage);
Any suggestions?
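One likely culprit in the snippet above: CGContextScaleCTM(mContext, 1.0, -1.0) on its own mirrors everything around y = 0, which pushes the content off the canvas rather than flipping it in place, so the scale has to be paired with a translation. A minimal sketch of the usual pattern, reusing mContext, image, and cgImage from the snippet (the destination rect is an assumption based on the original draw call):

CGContextSaveGState(mContext);
// Move the origin to the top edge of the destination rect, then flip the
// y-axis so the UIKit-oriented image draws right side up.
CGContextTranslateCTM(mContext, 0, mTextView.frame.origin.y + [image size].height);
CGContextScaleCTM(mContext, 1.0, -1.0);
CGContextDrawImage(mContext, CGRectMake(mTextView.frame.origin.x, 0,
                                        [image size].width, [image size].height), cgImage);
CGContextRestoreGState(mContext);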
Here is the code I used to get a UIImage of a UIView:
@implementation UIView (Screenshot)

- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [UIScreen mainScreen].scale);

    /* iOS 7 */
    BOOL visible = !self.hidden && self.superview;
    CGFloat alpha = self.alpha;
    BOOL animating = self.layer.animationKeys != nil;
    BOOL success = YES;
    if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        // Only works when the view is visible
        if (!animating && alpha == 1 && visible) {
            success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
        } else {
            self.alpha = 1;
            success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
            self.alpha = alpha;
        }
    }

    if (!success) { /* iOS 6 */
        self.alpha = 1;
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
        self.alpha = alpha;
    }

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

@end
In iOS 7 and later you can use:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
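A quick usage sketch (someView is a placeholder; note that a snapshot view is for on-screen display only and can't be used to export image data):

UIView *snapshot = [someView snapshotViewAfterScreenUpdates:NO];
snapshot.frame = someView.frame;
[someView.superview addSubview:snapshot];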
