How to save a masked image in iOS - UIImageView

This is what I am doing to mask an image. It works fine. My problem is that self.imgView.image is not the masked image. How can I retrieve the masked image? Thanks.
- (void) setClippingPath:(UIBezierPath *)clippingPath : (UIImageView *)imgView {
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = self.imgView.frame;
maskLayer.path = [clippingPath CGPath];
maskLayer.fillColor = [[UIColor whiteColor] CGColor];
maskLayer.backgroundColor = [[UIColor clearColor] CGColor];
self.imgView.layer.mask = maskLayer;
}

You can use this method to convert a CALayer to a UIImage. It's from https://stackoverflow.com/a/3454613/749786
- (UIImage *)imageFromLayer:(CALayer *)layer
{
UIGraphicsBeginImageContext([layer frame].size);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}

Here's how to do it in Swift 5.4:
func image(from layer: CALayer?) -> UIImage? {
UIGraphicsBeginImageContext(layer?.frame.size ?? CGSize.zero)
if let context = UIGraphicsGetCurrentContext() {
layer?.render(in: context)
}
let outputImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return outputImage
}

Related

iOS: How to crop default camera image to circle

I am using the iOS default camera in my application. I would like to change something in the edit view that shows after the user takes a photo. Normally, it shows a rectangle to crop, but I would like it to show a circle. How would I do this?
Here is a solution that might help you create a crop overlay:
- (void)navigationController:(UINavigationController *)navigationController didShowViewController:(UIViewController *)viewController animated:(BOOL)animated
{
if ([navigationController.viewControllers count] == 3)
{
CGFloat screenHeight = [[UIScreen mainScreen] bounds].size.height;
UIView *plCropOverlay = [[[viewController.view.subviews objectAtIndex:1]subviews] objectAtIndex:0];
plCropOverlay.hidden = YES;
int position = 0;
if (screenHeight == 568)
{
position = 124;
}
else
{
position = 80;
}
CAShapeLayer *circleLayer = [CAShapeLayer layer];
UIBezierPath *path2 = [UIBezierPath bezierPathWithOvalInRect:
CGRectMake(0.0f, position, 320.0f, 320.0f)];
[path2 setUsesEvenOddFillRule:YES];
[circleLayer setPath:[path2 CGPath]];
[circleLayer setFillColor:[[UIColor clearColor] CGColor]];
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, 320, screenHeight-72) cornerRadius:0];
[path appendPath:path2];
[path setUsesEvenOddFillRule:YES];
CAShapeLayer *fillLayer = [CAShapeLayer layer];
fillLayer.path = path.CGPath;
fillLayer.fillRule = kCAFillRuleEvenOdd;
fillLayer.fillColor = [UIColor blackColor].CGColor;
fillLayer.opacity = 0.8;
[viewController.view.layer addSublayer:fillLayer];
UILabel *moveLabel = [[UILabel alloc]initWithFrame:CGRectMake(0, 10, 320, 50)];
[moveLabel setText:@"Move and Scale"];
[moveLabel setTextAlignment:NSTextAlignmentCenter];
[moveLabel setTextColor:[UIColor whiteColor]];
[viewController.view addSubview:moveLabel];
}
}

How do I draw NSGradient to NSImage?

I'm trying to take an NSGradient and save it as an image in RubyMotion, but I can't get it to work. This is the code I have so far:
gradient = NSGradient.alloc.initWithColors(colors,
atLocations: locations.to_pointer(:double),
colorSpace: NSColorSpace.genericRGBColorSpace
)
size = Size(width, height)
image = NSImage.imageWithSize(size, flipped: false, drawingHandler: lambda do |rect|
gradient.drawInRect(rect, angle: angle)
true
end)
data = image.TIFFRepresentation
data.writeToFile('output.tif', atomically: false)
It runs without error, but the file that is saved is blank and there is no image data. Can anyone help point me in the right direction?
I don’t know about RubyMotion, but here’s how to do it in Objective-C:
NSGradient *grad = [[NSGradient alloc] initWithStartingColor:[NSColor redColor]
endingColor:[NSColor blueColor]];
NSRect rect = CGRectMake(0.0, 0.0, 50.0, 50.0);
NSImage *image = [[NSImage alloc] initWithSize:rect.size];
NSBezierPath *path = [NSBezierPath bezierPathWithRect:rect];
[image lockFocus];
[grad drawInBezierPath:path angle:0.0];
NSBitmapImageRep *imgRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:rect];
NSData *data = [imgRep representationUsingType:NSPNGFileType properties:nil];
[image unlockFocus];
[data writeToFile:@"/path/to/file.png" atomically:NO];
In case you want to know how it works in Swift 5:
extension NSImage {
convenience init?(gradientColors: [NSColor], imageSize: NSSize) {
guard let gradient = NSGradient(colors: gradientColors) else { return nil }
let rect = NSRect(origin: CGPoint.zero, size: imageSize)
self.init(size: rect.size)
let path = NSBezierPath(rect: rect)
self.lockFocus()
gradient.draw(in: path, angle: 0.0)
self.unlockFocus()
}
}
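For example, usage of the extension above might look like this (the colors and size are just illustrative):
let gradientImage = NSImage(gradientColors: [.red, .blue],
                            imageSize: NSSize(width: 50, height: 50))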

UIImage from MASKED CALayer

I'm in need of an UIImage from a Masked CALayer. This is the function I use:
- (UIImage *)imageFromLayer:(CALayer *)layer
{
UIGraphicsBeginImageContext([layer frame].size);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
The problem is that the mask isn't maintained.
This is the complete code:
CAShapeLayer * layerRight= [CAShapeLayer layer];
layerRight.path = elasticoRight;
im2.layer.mask = layerRight;
CAShapeLayer * layerLeft= [CAShapeLayer layer];
layerLeft.path = elasticoLeft;
im3.layer.mask = layerLeft;
[viewImage.layer addSublayer:im2.layer];
[viewImage.layer addSublayer:im3.layer];
UIImage *image_result = [self imageFromLayer:viewImage.layer];
If I visualize the viewImage, the result is correct, but if I try to obtain the image relative to the layer, the masks are lost.
I've solved it. Now I obtain the image mask and use CGContextClipToMask:
CGRect rect = CGRectMake(0, 0, 1024, 768);
UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0);
{
[[UIColor blackColor] setFill];
UIRectFill(rect);
[[UIColor whiteColor] setFill];
UIBezierPath *leftPath = [UIBezierPath bezierPath];
// Set the starting point of the shape.
CGPoint p1 = [(NSValue *)[leftPoints objectAtIndex:0] CGPointValue];
[leftPath moveToPoint:CGPointMake(p1.x, p1.y)];
for (uint i=1; i<leftPoints.count; i++)
{
CGPoint p = [(NSValue *)[leftPoints objectAtIndex:i] CGPointValue];
[leftPath addLineToPoint:CGPointMake(p.x, p.y)];
}
[leftPath closePath];
[leftPath fill];
}
UIImage *mask = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
{
CGContextClipToMask(UIGraphicsGetCurrentContext(), rect, mask.CGImage);
[im_senza drawAtPoint:CGPointZero];
}
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

What is the best Core Image filter to produce black and white effects?

I am using Core Image and would like to produce a black and white effect on the chosen image.
Ideally I would like to have access to the same sort of options that are available on Photoshop i.e. Reds, Cyan, Greens, Blues and Magenta. The goal being to create different types of the black and white effect.
Does anyone know what filter would be best to manipulate these sort of options? If not does anyone know of a good approach to creating the black and white effect using other filters?
Thanks
Oliver
- (UIImage *)imageBlackAndWhite
{
CIImage *beginImage = [CIImage imageWithCGImage:self.CGImage];
CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, @"inputEV", [NSNumber numberWithFloat:0.7], nil].outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
//UIImage *newImage = [UIImage imageWithCGImage:cgiimage];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:self.scale orientation:self.imageOrientation];
CGImageRelease(cgiimage);
return newImage;
}
Update: For iOS 6 there is the CIColorMonochrome filter, but I played with it and found it not as good as mine.
Here is an example with CIColorMonochrome:
- (UIImage *)imageBlackAndWhite
{
CIImage *beginImage = [CIImage imageWithCGImage:self.CGImage];
CIImage *output = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:kCIInputImageKey, beginImage, @"inputIntensity", [NSNumber numberWithFloat:1.0], @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]], nil].outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:self.scale orientation:self.imageOrientation];
CGImageRelease(cgiimage);
return newImage;
}
To create a pure monochrome effect, I’ve used CIColorMatrix with the R, G and B vector parameters all set to (0.2125, 0.7154, 0.0721, 0), and the alpha and bias vectors left with their defaults.
The values are RGB to greyscale conversion coefficients I looked up on the internets at some point. By changing these coefficients, you can change the contribution of the input channels. By scaling each copy of the vector, and optionally setting a bias vector, you can colourize the output.
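A minimal Swift sketch of that CIColorMatrix setup, using the coefficients above (the helper name is mine, not from the original answer):
import CoreImage
import UIKit

func monochromeImage(from image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIColorMatrix") else { return nil }
    // Each output channel becomes the same luminance-weighted sum of R, G and B.
    let grey = CIVector(x: 0.2125, y: 0.7154, z: 0.0721, w: 0)
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(grey, forKey: "inputRVector")
    filter.setValue(grey, forKey: "inputGVector")
    filter.setValue(grey, forKey: "inputBVector")
    // Alpha and bias vectors are left at their defaults, as described above.
    guard let output = filter.outputImage,
          let cgImage = CIContext(options: nil).createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}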
Here is the top rated solution converted to Swift (iOS 7 and above):
func blackAndWhiteImage(image: UIImage) -> UIImage {
let context = CIContext(options: nil)
let ciImage = CoreImage.CIImage(image: image)!
// Set image color to b/w
let bwFilter = CIFilter(name: "CIColorControls")!
bwFilter.setValuesForKeysWithDictionary([kCIInputImageKey:ciImage, kCIInputBrightnessKey:NSNumber(float: 0.0), kCIInputContrastKey:NSNumber(float: 1.1), kCIInputSaturationKey:NSNumber(float: 0.0)])
let bwFilterOutput = (bwFilter.outputImage)!
// Adjust exposure
let exposureFilter = CIFilter(name: "CIExposureAdjust")!
exposureFilter.setValuesForKeysWithDictionary([kCIInputImageKey:bwFilterOutput, kCIInputEVKey:NSNumber(float: 0.7)])
let exposureFilterOutput = (exposureFilter.outputImage)!
// Create UIImage from context
let bwCGIImage = context.createCGImage(exposureFilterOutput, fromRect: ciImage.extent)
let resultImage = UIImage(CGImage: bwCGIImage, scale: 1.0, orientation: image.imageOrientation)
return resultImage
}
With regard to the answers suggesting the use of CIColorMonochrome: there are now a few dedicated grayscale filters available from iOS 7 (and OS X 10.9):
CIPhotoEffectTonal: imitates black-and-white photography film without significantly altering contrast.
CIPhotoEffectNoir: imitates black-and-white photography film with exaggerated contrast.
Source: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html
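A minimal Swift sketch applying one of these filters, CIPhotoEffectNoir, to a UIImage (the helper name is mine):
import CoreImage
import UIKit

func noirImage(from image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = CIContext(options: nil).createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}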
Here is the most liked answer from @Shmidt written as a UIImage extension with a performance update in Swift:
import CoreImage
extension UIImage
{
func imageBlackAndWhite() -> UIImage?
{
if let beginImage = CoreImage.CIImage(image: self)
{
let paramsColor: [String : AnyObject] = [kCIInputBrightnessKey: NSNumber(double: 0.0),
kCIInputContrastKey: NSNumber(double: 1.1),
kCIInputSaturationKey: NSNumber(double: 0.0)]
let blackAndWhite = beginImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
let paramsExposure: [String : AnyObject] = [kCIInputEVKey: NSNumber(double: 0.7)]
let output = blackAndWhite.imageByApplyingFilter("CIExposureAdjust", withInputParameters: paramsExposure)
let processedCGImage = CIContext().createCGImage(output, fromRect: output.extent)
return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}
return nil
}
}
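Usage would then be something like this (the image name is illustrative):
let original = UIImage(named: "photo")
let blackAndWhite = original?.imageBlackAndWhite()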
macOS (NSImage) Swift 3 version of @FBente's conversion of @Shmidt's answer:
extension NSImage
{
func imageBlackAndWhite() -> NSImage?
{
if let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil)
{
let beginImage = CIImage.init(cgImage: cgImage)
let paramsColor: [String : AnyObject] = [kCIInputBrightnessKey: NSNumber(value: 0.0),
kCIInputContrastKey: NSNumber(value: 1.1),
kCIInputSaturationKey: NSNumber(value: 0.0)]
let blackAndWhite = beginImage.applyingFilter("CIColorControls", withInputParameters: paramsColor)
let paramsExposure: [String : AnyObject] = [kCIInputEVKey: NSNumber(value: 0.7)]
let output = blackAndWhite.applyingFilter("CIExposureAdjust", withInputParameters: paramsExposure)
if let processedCGImage = CIContext().createCGImage(output, from: output.extent) {
return NSImage(cgImage: processedCGImage, size: self.size)
}
}
return nil
}
}
I have tried @Shmidt's solution, but it appears too overexposed to me on iPad Pro. I am using just the first part of his solution, without the exposure filter:
- (UIImage *)imageBlackAndWhite
{
CIImage *beginImage = [CIImage imageWithCGImage:self.CGImage];
CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
CIImage *output = blackAndWhite;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
//UIImage *newImage = [UIImage imageWithCGImage:cgiimage];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:self.scale orientation:self.imageOrientation];
CGImageRelease(cgiimage);
return newImage;
}

How to capture UIView to UIImage without loss of quality on retina display

My code works fine for normal devices but creates blurry images on retina devices.
Does anybody know a solution for my issue?
+ (UIImage *) imageWithView:(UIView *)view
{
UIGraphicsBeginImageContext(view.bounds.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Switch from use of UIGraphicsBeginImageContext to UIGraphicsBeginImageContextWithOptions (as documented on this page). Pass 0.0 for scale (the third argument) and you'll get a context with a scale factor equal to that of the screen.
UIGraphicsBeginImageContext uses a fixed scale factor of 1.0, so you're actually getting exactly the same image on an iPhone 4 as on the other iPhones. I'd bet either the iPhone 4 applies a filter when it implicitly scales the image up, or your brain is just picking up on it being less sharp than everything around it.
So, I guess:
#import <QuartzCore/QuartzCore.h>
+ (UIImage *)imageWithView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
And in Swift 4:
func image(with view: UIView) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
defer { UIGraphicsEndImageContext() }
if let context = UIGraphicsGetCurrentContext() {
view.layer.render(in: context)
let image = UIGraphicsGetImageFromCurrentImageContext()
return image
}
return nil
}
The currently accepted answer is now out of date, at least if you are supporting iOS 7.
Here is what you should be using if you are only supporting iOS7+:
+ (UIImage *) imageWithView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0f);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
UIImage * snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return snapshotImage;
}
Swift 4:
func imageWithView(view: UIView) -> UIImage? {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
defer { UIGraphicsEndImageContext() }
view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
return UIGraphicsGetImageFromCurrentImageContext()
}
As per this article, you can see that the new iOS7 method drawViewHierarchyInRect:afterScreenUpdates: is many times faster than renderInContext:.
I have created a Swift extension based on @Dima's solution:
extension UIImage {
class func imageWithView(view: UIView) -> UIImage {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0)
view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
let img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img
}
}
EDIT: Swift 4 improved version
extension UIImage {
class func imageWithView(_ view: UIView) -> UIImage {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0)
defer { UIGraphicsEndImageContext() }
view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
return UIGraphicsGetImageFromCurrentImageContext() ?? UIImage()
}
}
Usage:
let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
let image = UIImage.imageWithView(view)
Using modern UIGraphicsImageRenderer
public extension UIView {
@available(iOS 10.0, *)
public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
let rendererFormat = UIGraphicsImageRendererFormat.default()
rendererFormat.opaque = isOpaque
let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
let snapshotImage = renderer.image { _ in
drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
}
return snapshotImage
}
}
To improve the answers by @Tommy and @Dima, use the following category to render a UIView into a UIImage with a transparent background and without loss of quality. Works on iOS 7. (Or just reuse the method in the implementation, replacing the self reference with your view.)
UIView+RenderViewToImage.h
#import <UIKit/UIKit.h>
@interface UIView (RenderViewToImage)
- (UIImage *)imageByRenderingView;
@end
UIView+RenderViewToImage.m
#import "UIView+RenderViewToImage.h"
@implementation UIView (RenderViewToImage)
- (UIImage *)imageByRenderingView
{
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
UIImage * snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return snapshotImage;
}
@end
Swift 3
The Swift 3 solution (based on Dima's answer) with UIView extension should be like this:
extension UIView {
public func getSnapshotImage() -> UIImage {
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0)
self.drawHierarchy(in: self.bounds, afterScreenUpdates: false)
let snapshotImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return snapshotImage
}
}
For Swift 5.1 you can use this extension:
extension UIView {
func asImage() -> UIImage {
let renderer = UIGraphicsImageRenderer(bounds: bounds)
return renderer.image { layer.render(in: $0.cgContext) }
}
}
Drop-in Swift 3.0 extension that supports the new iOS 10.0 API & the previous method.
Note:
iOS version check
Note the use of defer to simplify the context cleanup.
Will also apply the opacity & current scale of the view.
Nothing is force-unwrapped with !, which could cause a crash.
extension UIView
{
public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage?
{
if #available(iOS 10.0, *)
{
let rendererFormat = UIGraphicsImageRendererFormat.default()
rendererFormat.scale = self.layer.contentsScale
rendererFormat.opaque = self.isOpaque
let renderer = UIGraphicsImageRenderer(size: self.bounds.size, format: rendererFormat)
return
renderer.image
{
_ in
self.drawHierarchy(in: self.bounds, afterScreenUpdates: afterScreenUpdates)
}
}
else
{
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, self.layer.contentsScale)
defer
{
UIGraphicsEndImageContext()
}
self.drawHierarchy(in: self.bounds, afterScreenUpdates: afterScreenUpdates)
return UIGraphicsGetImageFromCurrentImageContext()
}
}
}
Swift 2.0:
Using an extension method:
extension UIImage{
class func renderUIViewToImage(viewToBeRendered:UIView?) -> UIImage
{
UIGraphicsBeginImageContextWithOptions((viewToBeRendered?.bounds.size)!, false, 0.0)
viewToBeRendered!.drawViewHierarchyInRect(viewToBeRendered!.bounds, afterScreenUpdates: true)
viewToBeRendered!.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let finalImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return finalImage
}
}
Usage:
override func viewDidLoad() {
super.viewDidLoad()
//Sample View To Self.view
let sampleView = UIView(frame: CGRectMake(100,100,200,200))
sampleView.backgroundColor = UIColor(patternImage: UIImage(named: "ic_120x120")!)
self.view.addSubview(sampleView)
//ImageView With Image
let sampleImageView = UIImageView(frame: CGRectMake(100,400,200,200))
//sampleView is rendered to sampleImage
var sampleImage = UIImage.renderUIViewToImage(sampleView)
sampleImageView.image = sampleImage
self.view.addSubview(sampleImageView)
}
Swift 3.0 implementation
extension UIView {
func getSnapshotImage() -> UIImage {
UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
drawHierarchy(in: bounds, afterScreenUpdates: false)
let snapshotImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return snapshotImage
}
}
None of the Swift 3 answers worked for me, so I translated the most accepted answer:
extension UIImage {
class func imageWithView(view: UIView) -> UIImage {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let img: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img!
}
}
Here's a Swift 4 UIView extension based on the answer from @Dima.
extension UIView {
func snapshotImage() -> UIImage? {
UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
drawHierarchy(in: bounds, afterScreenUpdates: false)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
UIGraphicsImageRenderer is a relatively new API, introduced in iOS 10. You construct a UIGraphicsImageRenderer by specifying a point size. The image method takes a closure argument and returns a bitmap that results from executing the passed closure. In this case, the result is the original image scaled down to draw within the specified bounds.
https://nshipster.com/image-resizing/
So be sure the size you are passing into UIGraphicsImageRenderer is points, not pixels.
If your images are larger than you are expecting, you need to divide your size by the scale factor.
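For example, a small sketch of converting a known pixel size to points before creating the renderer (the numbers are illustrative):
let pixelSize = CGSize(width: 600, height: 600)
let scale = UIScreen.main.scale
let pointSize = CGSize(width: pixelSize.width / scale,
                       height: pixelSize.height / scale)
let renderer = UIGraphicsImageRenderer(size: pointSize)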
Sometimes the drawRect method causes problems, so I found these answers more appropriate. You may want to have a look at this as well:
Capture UIImage of UIView stuck in DrawRect method
- (UIImage*)screenshotForView:(UIView *)view
{
UIGraphicsBeginImageContext(view.bounds.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// hack, helps w/ our colors when blurring
NSData *imageData = UIImageJPEGRepresentation(image, 1); // convert to jpeg
image = [UIImage imageWithData:imageData];
return image;
}
Just pass a view object to this method and it will return a UIImage object.
-(UIImage*)getUIImageFromView:(UIView*)yourView
{
UIGraphicsBeginImageContext(yourView.bounds.size);
[yourView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Add this method to a UIView category:
- (UIImage*) capture {
UIGraphicsBeginImageContext(self.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
