I have a camera app that shows a video preview in an AVCaptureVideoPreviewLayer instance, which includes a CGRect (for example, a frame around the user). This layer has the same size as the UIView that contains it (for instance, 667x375).
How do I translate the CGRect from the layer's coordinate space to the image output?
I found the answer myself; the right way is to use the method metadataOutputRectConverted(fromLayerRect:).
// Convert from the layer's coordinate space to the metadata output's
// normalized coordinate space (0.0–1.0 on both axes)
let tempRect = layer.metadataOutputRectConverted(fromLayerRect: rect)
// Scale the normalized rect up to pixel coordinates in the captured image
let transRect = CGRect(x: tempRect.origin.x * image.size.width,
                       y: tempRect.origin.y * image.size.height,
                       width: tempRect.width * image.size.width,
                       height: tempRect.height * image.size.height)
Where image is a UIImage instance. The converted rect is normalized, which is why its values are multiplied by the image's dimensions.
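For convenience, the conversion can be wrapped in a helper. This is only a sketch; imageRect(forLayerRect:previewLayer:imageSize:) is a hypothetical name, not an AVFoundation API:

import AVFoundation
import UIKit

// Hypothetical helper: map a rect from the preview layer's coordinate
// space to pixel coordinates in the captured image
func imageRect(forLayerRect rect: CGRect,
               previewLayer: AVCaptureVideoPreviewLayer,
               imageSize: CGSize) -> CGRect {
    // The metadata-output rect is normalized to 0.0–1.0 on both axes
    let normalized = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)
    return CGRect(x: normalized.origin.x * imageSize.width,
                  y: normalized.origin.y * imageSize.height,
                  width: normalized.size.width * imageSize.width,
                  height: normalized.size.height * imageSize.height)
}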
When you use imageView.adjustsImageWhenAncestorFocused you get your image scaled: it gets bigger on focus. The image goes beyond its bounds, which means you can't see the full image; it's cropped on all sides.
You have: the image cropped on focus. You want: the full image visible.
If you want this to work out of the box, remember to reserve the focused/safe-zone size (see the documentation) when you create your image.
In my case the images come from a server, so I can't edit them. What worked for me was to redraw the image right before setting it:
UIImage *oldImage = [UIImage imageNamed:@"example"];
CGFloat width = cell.imageView.frame.size.width;
CGFloat height = cell.imageView.frame.size.height;
CGSize newSize = CGSizeMake(width, height);
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = oldImage.CGImage;

UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef resizeContext = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(resizeContext, kCGInterpolationHigh);

// Core Graphics draws with a flipped Y axis, so flip the context
// before drawing the CGImage
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(resizeContext, flipVertical);
CGContextDrawImage(resizeContext, newRect, imageRef);

CGImageRef newImageRef = CGBitmapContextCreateImage(resizeContext);
UIImage *newImageResized = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();

dispatch_async(dispatch_get_main_queue(), ^{
    cell.imageView.image = newImageResized;
});
It's not necessary to redraw exactly this way; the point is that you need to draw your UIImage again in order to have it displayed properly.
Code from here.
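For anyone on Swift, the same redraw can be sketched with UIGraphicsImageRenderer, which handles the vertical flip and scale automatically (redrawnImage is a hypothetical helper name, not from the original code):

import UIKit

// Redraw an image at a target size so focus scaling has a properly
// sized bitmap to work with (same idea as the code above)
func redrawnImage(_ image: UIImage, at size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // Drawing through UIKit avoids the manual flip transform
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}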
Is it possible to modify the anchorPoint property on the root CALayer of a layer-backed NSView?
I have a view called myView and it seems every time I set the anchorPoint, it gets overridden in the next run loop. I am doing this:
NSView *myView = [[NSView alloc] initWithFrame:NSMakeRect(0, 0, 50, 50)];

// Set the root layer
myView.layer = [CALayer layer];
myView.wantsLayer = YES;

// Gets overridden on the next run loop
myView.layer.anchorPoint = CGPointMake(1, 1);
On 10.8, AppKit will control the following properties on a CALayer (both when "layer-hosted" or "layer-backed"): geometryFlipped, bounds, frame (implied), position, anchorPoint, transform, shadow*, hidden, filters, and compositingFilter. … Use the appropriate NSView cover methods to change these properties.
Basically AppKit will reset the anchor point from (0.5, 0.5) to (0, 0). To account for this, I use something like:
+ (void)accountForLowerLeftAnchor:(CALayer *)layer
{
    // Move the anchor back to the center without shifting the layer:
    // with the anchor at the center, position must be the frame's midpoint
    CGRect frame = layer.frame;
    CGPoint center = CGPointMake(CGRectGetMidX(frame), CGRectGetMidY(frame));
    layer.position = center;
    layer.anchorPoint = CGPointMake(0.5, 0.5);
}
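In Swift, the equivalent fix might look like the sketch below (recenterAnchorPoint is a hypothetical name); call it after AppKit has finished layout:

import QuartzCore

// Re-center the anchor point without moving the layer on screen
func recenterAnchorPoint(of layer: CALayer) {
    let frame = layer.frame
    layer.anchorPoint = CGPoint(x: 0.5, y: 0.5)
    // With the anchor at the center, the position must be the frame's
    // midpoint for the layer's frame to stay the same
    layer.position = CGPoint(x: frame.midX, y: frame.midY)
}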
I'm developing an iPhone application with a UIView that has an image as its background, plus multiple buttons with images as their backgrounds. The image is a map of a country and the buttons are the regions/states of that country.
The problem is that the screen is too small to display all this data, so I want to enable pinch-to-zoom on the UIView to give the user a better view of the country.
Inside viewDidLoad of my UIViewController, I added:
UIPinchGestureRecognizer *twoFingerPinch =
    [[[UIPinchGestureRecognizer alloc] initWithTarget:self
                                               action:@selector(twoFingerPinch:)] autorelease];
[[self view] addGestureRecognizer:twoFingerPinch];
and I added the twoFingerPinch: method:
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.scale > minValue) {
        CGAffineTransform transform = CGAffineTransformMakeScale(recognizer.scale, recognizer.scale);
        self.view.transform = transform;
    }
}
and it works correctly; the problem is that the UIView becomes bigger but the user can't pan around inside it.
So I switched my UIView to a UIScrollView, and inside the twoFingerPinch: method I added:
if (recognizer.state == UIGestureRecognizerStateEnded) {
    float scaleForView = mLastScale;
    UIScrollView *tempScrollView = (UIScrollView *)self.view;
    int newWidth = self.view.frame.size.width * scaleForView;
    int newHeight = self.view.frame.size.height * scaleForView;
    tempScrollView.contentSize = CGSizeMake(newWidth, newHeight);
    mLastScale = 1.0;
}
but it doesn't work well: the UIScrollView's content size becomes much bigger, but not by the same amount as the scaled image, and the zoom is not centered where the pinch started.
How can I solve this problem?
Thank you.
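For reference, UIScrollView can drive pinch-to-zoom itself via its delegate, which keeps panning and the zoom anchor consistent without a custom gesture recognizer. A minimal Swift sketch of that approach, assuming a mapView that holds the country image and its region buttons:

import UIKit

class MapViewController: UIViewController, UIScrollViewDelegate {
    @IBOutlet var scrollView: UIScrollView!
    @IBOutlet var mapView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.delegate = self
        scrollView.minimumZoomScale = 1.0
        scrollView.maximumZoomScale = 4.0
        scrollView.contentSize = mapView.bounds.size
    }

    // Returning the view to zoom lets the scroll view manage the
    // transform, content size, and zoom anchor for us
    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return mapView
    }
}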
I am trying to draw an image using Core Graphics such that it has rounded corners and a drop shadow. Here is a snippet of my code:
CGContextSetShadowWithColor(context, CGSizeMake(0, 1), 2, shadowColor);
CGContextAddPath(context, path);
CGContextClip(context);
CGContextDrawImage(context, rect, image);
The problem I am having is that the clipping to create the rounded corners is also clipping the shadow. Since the image may be transparent in areas, I cannot simply draw the rounded rectangle with a shadow under the image. I guess I need to apply the rounded shape to the image first, and then draw the resulting image to the screen and add the shadow. Does anyone know how to do this?
Thanks!
Okay, so assuming that you have a UIView subclass with an instance variable, image, which is a UIImage, you can write your drawRect: method like so:
- (void)drawRect:(CGRect)rect {
    [super drawRect:rect];
    CGRect _bounds = [self bounds];
    CGColorRef aColor;
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Create a path
    CGRect insetRect = CGRectInset(_bounds, kBSImageButtonBorder, kBSImageButtonBorder);
    CGRect offsetRect = insetRect;
    offsetRect.origin = CGPointZero;

    // Clip the image to a rounded rect in an offscreen context
    UIGraphicsBeginImageContext(insetRect.size);
    CGContextRef imgContext = UIGraphicsGetCurrentContext();
    CGPathRef clippingPath = [UIBezierPath bezierPathWithRoundedRect:offsetRect
                                                        cornerRadius:CORNER_RADIUS].CGPath;
    CGContextAddPath(imgContext, clippingPath);
    CGContextClip(imgContext);

    // Draw the image
    [image drawInRect:offsetRect];

    // Get the image
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Setup the shadow
    aColor = [UIColor colorWithRed:0.0f green:0.0f blue:0.0f alpha:0.5f].CGColor;
    CGContextSetShadowWithColor(context, CGSizeMake(0.0f, 2.0f), 2.0f, aColor);

    // Draw the clipped image in the main context; the shadow follows
    // the rounded outline because the clipping happened offscreen
    [img drawInRect:insetRect];
}
I'm a little new to Quartz programming myself, but that should give you your image, centered in the rectangle, inset by the border, with a corner radius and a 2.0-point shadow 2.0 points below it. Hope that helps.
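The same two-step idea can be sketched in Swift with UIGraphicsImageRenderer (drawWithRoundedCornersAndShadow is a hypothetical name, meant to be called from a view's draw(_:) where a current context exists):

import UIKit

func drawWithRoundedCornersAndShadow(_ image: UIImage, in rect: CGRect,
                                     cornerRadius: CGFloat) {
    // Step 1: clip the image to a rounded rect in an offscreen context
    let rounded = UIGraphicsImageRenderer(size: rect.size).image { _ in
        let bounds = CGRect(origin: .zero, size: rect.size)
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        image.draw(in: bounds)
    }
    // Step 2: draw the pre-clipped image with a shadow; the shadow now
    // follows the rounded outline instead of being clipped away
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.saveGState()
    context.setShadow(offset: CGSize(width: 0, height: 2), blur: 2,
                      color: UIColor(white: 0, alpha: 0.5).cgColor)
    rounded.draw(in: rect)
    context.restoreGState()
}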
Here is a function to round the corners of an image using Daniel Thorpe's answer, in case you came here, like me, just looking for a way to do this.
+ (UIImage *)roundCornersOfImage:(UIImage *)image toRadius:(float)radius {
    // Create an image-sized context; passing the image's scale keeps
    // Retina images sharp
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Add a rounded-rect clipping path to the context using the radius
    CGRect imageBounds = CGRectMake(0, 0, image.size.width, image.size.height);
    CGPathRef clippingPath = [UIBezierPath bezierPathWithRoundedRect:imageBounds
                                                        cornerRadius:radius].CGPath;
    CGContextAddPath(context, clippingPath);
    CGContextClip(context);

    // Draw the image
    [image drawInRect:imageBounds];

    // Get the image
    UIImage *outImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outImage;
}
You can use an image view with a layer; the layer has properties for setting shadows and borders. This is how:
self.imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, 60, 60)];
self.imageView.image = [NSImage imageNamed:@"yourImageName"];
self.imageView.wantsLayer = YES;
self.imageView.layer.cornerRadius = 10;

// Make the shadow and set it
NSShadow *shadow = [[NSShadow alloc] init];
shadow.shadowBlurRadius = 2;
shadow.shadowOffset = NSMakeSize(2, -2);
shadow.shadowColor = [NSColor blackColor];
self.imageView.shadow = shadow;
Hope this helps; this is also much faster to draw than using drawRect: overrides.
I'm making an app in which the user chooses an image, resizes it by zooming (in a UIScrollView), and moves it around the view. When finished, I would like to save the image as it appears in the UIScrollView. Do you have any ideas?
Thanks.
Here is the code. frontGround is an image view and testScrool is the scroll view. Hope this will work for you.
UIGraphicsBeginImageContext(testScrool.frame.size);
CGContextRef currentContext = UIGraphicsGetCurrentContext();

// Clip to the scroll view's visible size
CGRect clippedRect = CGRectMake(0, 0, testScrool.frame.size.width, testScrool.frame.size.height);
CGContextClipToRect(currentContext, clippedRect);

// Offset by the scroll position so the visible portion lands at the origin
CGRect drawRect = CGRectMake(testScrool.contentOffset.x * -1,
                             testScrool.contentOffset.y * -1,
                             frontGround.frame.size.width,
                             frontGround.frame.size.height);

// Draw through UIKit so the image is not flipped vertically
[frontGround.image drawInRect:drawRect];

UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
frontGround.image = cropped;
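A Swift sketch of the same capture (snapshotVisibleRegion is a hypothetical name); rendering the scroll view's layer picks up the current zoom and pan, assuming the scroll view contains the composed image:

import UIKit

// Capture the region of the scroll view currently visible on screen
func snapshotVisibleRegion(of scrollView: UIScrollView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: scrollView.bounds.size)
    return renderer.image { context in
        // Shift so the current contentOffset maps to the image's origin
        context.cgContext.translateBy(x: -scrollView.contentOffset.x,
                                      y: -scrollView.contentOffset.y)
        scrollView.layer.render(in: context.cgContext)
    }
}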