UIImageOrientation documentation is incorrect

Looking through Apple's documentation for UIImageOrientation, I noticed that the images accompanying the descriptions are incorrect.
https://developer.apple.com/reference/uikit/uiimageorientation
This has been painful for me, so I'm leaving this here, with the correct images in the answer, in case others run into the same problem.
If people think this shouldn't be here, please comment / vote down and I'll remove it.

Here's how I got the correct images:
import UIKit

extension UIImage {
    /// Redraws the image so that its underlying data is physically rotated
    /// into the .up orientation.
    var normalised: UIImage {
        if imageOrientation == .up {
            return self
        }
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale
        format.opaque = true
        format.prefersExtendedRange = false
        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }

    /// Returns a copy of the image tagged with the given orientation,
    /// rendered out so the rotation is baked in.
    func translated(to orientation: UIImageOrientation) -> UIImage {
        guard let cgImage = cgImage else {
            return self
        }
        return UIImage(cgImage: cgImage, scale: 1, orientation: orientation).normalised
    }
}
Then I used this image as the "base".
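For reference, a minimal sketch (not from the original post) of driving the extension above to produce one sample per orientation; `base` stands in for the base image:
let orientations: [UIImageOrientation] = [
    .up, .down, .left, .right,
    .upMirrored, .downMirrored, .leftMirrored, .rightMirrored
]
let samples = orientations.map { base.translated(to: $0) }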

Related

How to merge 3 images (100x100px) into one new large image (300x100px) in Swift

I am very new to SwiftUI and haven't found a way to do this.
I want to create one image (300x100) dynamically by merging three single 100x100 images horizontally:
ImageA (100x100) + ImageB (100x100) + ImageC (100x100) = ImageD (300x100)
I found a way to show them in an HStack, but how can I get one new image so I can pass the data to another function?
Regards,
Alex
Thanks a lot. I tried to use your function but got an error here: "Cannot convert value of type 'UIImage?' to expected element type 'UIImage'".
The code should draw the zero three times; number-0.png is just one large zero.
//
//  ContentView.swift
//  imagecombine
//
//  Created by Alex on 02.08.20.
//  Copyright © 2020 Alex. All rights reserved.
//

import SwiftUI

func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }
    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }
    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { context in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}

let redImage = UIImage(named: "number-0.png")
let greenImage = UIImage(named: "number-0.png")
let blueImage = UIImage(named: "number-0.png")

// Error here: UIImage(named:) returns UIImage?, so this array is [UIImage?],
// not the [UIImage] the function expects.
let image = combineHorizontally([redImage, greenImage, blueImage])

struct ContentView: View {
    var body: some View {
        Image(image) // Also note: SwiftUI's Image does not take a UIImage directly.
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
You can use UIGraphicsImageRenderer and draw the images one after another:
func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }
    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }
    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}
You said that there were only three images and that they were 100 × 100, but the above should work regardless of the number and size (memory permitting, of course).
Anyway, this
let image = combineHorizontally([redImage, greenImage, blueImage])
results in a single 300×100 image with the three images side by side.
To use that in a context where you don't want an optional, you can use the ! forced-unwrapping operator, the ?? nil-coalescing operator, or some other unwrapping pattern, e.g. guard let, if let, etc.
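For instance, a quick sketch of those patterns (assuming imageA, imageB and imageC are non-optional UIImage values):
// guard let: bail out if combining failed
guard let combined = combineHorizontally([imageA, imageB, imageC]) else {
    fatalError("no images to combine")
}

// nil coalescing: fall back to an empty image
let imageOrEmpty = combineHorizontally([imageA, imageB, imageC]) ?? UIImage()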
Alternatively, if you don't want to deal with optionals at all, you can write a rendition that doesn't return an optional (but also doesn't detect the error scenario where an empty array was provided):
func combineHorizontally(_ images: [UIImage]) -> UIImage {
    var size = CGSize.zero
    var scale: CGFloat = 1
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }
    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}
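As a side note on the compile error in the question: UIImage(named:) returns an optional, and SwiftUI displays a UIImage through Image(uiImage:), so the call site would look something like this (names taken from the question's code):
if let red = UIImage(named: "number-0.png"),
   let green = UIImage(named: "number-0.png"),
   let blue = UIImage(named: "number-0.png") {
    let combined = combineHorizontally([red, green, blue])
    // in a SwiftUI body: Image(uiImage: combined)
}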

Getting an error after processing an image. iOS, Swift

I am getting this error when I try to process an image and send it to SwiftOCR:
NSAssert(widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");
If I bypass the handleRectangles function and just call the SwiftOCR function with the first image taken, it works fine, but after putting the image through processImage it crashes with the above error.
Here are my functions.
lazy var rectanglesRequest: VNDetectRectanglesRequest = {
    print("Tony 1 Requested....")
    return VNDetectRectanglesRequest(completionHandler: self.handleRectangles)
}()

@objc func processImage() {
    finalImage = nil
    // finalImage = main.correctedImageView.image
    guard let uiImage = correctedImageView.image
        else { fatalError("no image from image picker") }
    guard let ciImage = CIImage(image: uiImage)
        else { fatalError("can't create CIImage from UIImage") }
    let orientation = CGImagePropertyOrientation(uiImage.imageOrientation)
    inputImage = ciImage.oriented(forExifOrientation: Int32(orientation.rawValue))

    // Show the image in the UI.
    // imageView.image = uiImage

    // Run the rectangle detector, which upon completion runs the ML classifier.
    // (The original round-trip through rawValue was a no-op, so the
    // orientation is passed directly here.)
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
    DispatchQueue.global(qos: .userInteractive).async {
        do {
            try handler.perform([self.rectanglesRequest])
        } catch {
            print(error)
        }
    }
}
func handleRectangles(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNRectangleObservation]
        else { fatalError("unexpected result type from VNDetectRectanglesRequest") }
    guard let detectedRectangle = observations.first else {
        // DispatchQueue.main.async {
        //     self.classificationLabel.text = "No rectangles detected."
        // }
        return
    }
    let imageSize = inputImage.extent.size

    // Verify detected rectangle is valid.
    let boundingBox = detectedRectangle.boundingBox.scaled(to: imageSize)
    guard inputImage.extent.contains(boundingBox)
        else { print("invalid detected rectangle"); return }

    // Rectify the detected image and reduce it to inverted grayscale for applying model.
    let topLeft = detectedRectangle.topLeft.scaled(to: imageSize)
    let topRight = detectedRectangle.topRight.scaled(to: imageSize)
    let bottomLeft = detectedRectangle.bottomLeft.scaled(to: imageSize)
    let bottomRight = detectedRectangle.bottomRight.scaled(to: imageSize)
    let correctedImage = inputImage
        .cropped(to: boundingBox)
        .applyingFilter("CIPerspectiveCorrection", parameters: [
            "inputTopLeft": CIVector(cgPoint: topLeft),
            "inputTopRight": CIVector(cgPoint: topRight),
            "inputBottomLeft": CIVector(cgPoint: bottomLeft),
            "inputBottomRight": CIVector(cgPoint: bottomRight)
        ])

    // Show the pre-processed image
    DispatchQueue.main.async {
        print("Tony: 1 adding image")
        self.finalImage = UIImage(ciImage: correctedImage)
        self.FinalizedImage.image = self.finalImage
        // }else {
        //     print("Tony: No corected image......")
        if self.FinalizedImage.image != nil {
            print("Tony: 2 Got here to OCR")
            self.perform(#selector(self.startOCR), with: nil, afterDelay: 1.0)
        }
    }
}
with this OCR function
@objc func startOCR() {
    print("Tony: OCR called")
    if self.FinalizedImage.image != nil {
        swiftOCRInstance.recognize(FinalizedImage.image!) { recognizedString in
            self.classificationLabel.text = recognizedString
            print("Tony: \(recognizedString)")
        }
    } else {
        print("Tony: No image here")
    }
}
I was able to figure this out. I had to convert the image from CIImage to CGImage and then back to UIImage: a CIImage is only a recipe describing how the filters will affect the image once it is rendered, so I needed to solidify it into real bitmap data first.
Given a CIImage property `inputImage` (or assign whatever CIImage you are using):
inputImage = correctedImage
// Show the pre-processed image
DispatchQueue.main.async {
    print("Tony: 1 adding image")
    let cgImage = self.context.createCGImage(self.inputImage, from: self.inputImage.extent)
    self.finalImageView.image = UIImage(cgImage: cgImage!)

    // Filter logic
    let currentFilter = CIFilter(name: "CISharpenLuminance")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = self.context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    self.finalImageView.image = processedImage
    // Then start the OCR work using finalImageView.image as the input image of the OCR
}
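Note that creating a CIContext is expensive, so the `context` used above would typically be created once and reused, e.g. as a property:
// Created once (for example as a view controller property), not per image.
let context = CIContext(options: nil)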

Swift 3 Colour Space macOS not iOS

How can I convert an RGB image into a grayscale colour space? I can find a lot of code for iOS but none for macOS, and Apple's documentation is all in Objective-C.
let width = image.size.width
let height = image.size.height
let imageRect = NSMakeRect(0, 0, width, height)
let colorSpace = CGColorSpaceCreateDeviceGray()
let bits = image.representations.first as! NSBitmapImageRep
bitmap!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: nil)
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
context.draw(image.cgImage!, in: imageRect) // and this line is wrong obviously..
This is what I have got so far, just copying and pasting from the internet, but I have no idea how to go further.
I found an interesting way to do this. My code is simply assembled from the three sources below:
how to create grayscale image from nsimage in swift?
Greyscale Image using COCOA and NSImage
Changing the Color Space of NSImage: the second reply
My code:
func saveImage(image: NSImage, destination: URL) throws {
    let rep = greyScale(image: image)
    let data = rep.representation(using: NSJPEGFileType, properties: [:])
    try data?.write(to: destination)
}

// rgb2gray
func greyScale(image: NSImage) -> NSBitmapImageRep {
    let w = image.size.width
    let h = image.size.height
    let imageRect: NSRect = NSMakeRect(0, 0, w, h)
    let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()
    let context: CGContext! = CGContext(data: nil, width: Int(w),
                                        height: Int(h), bitsPerComponent: 8,
                                        bytesPerRow: 0, space: colorSpace,
                                        bitmapInfo: CGImageAlphaInfo.none.rawValue)
    context.draw(nsImageToCGImage(image: image), in: imageRect)
    let greyImage: CGImage! = context.makeImage()
    return NSBitmapImageRep(cgImage: greyImage)
}

// Note: the original returned nil from a non-optional function; the forced
// unwraps here keep the signature while making it compile.
func nsImageToCGImage(image: NSImage) -> CGImage {
    let imageData = image.tiffRepresentation! as CFData
    let imageSource: CGImageSource! = CGImageSourceCreateWithData(imageData, nil)
    return CGImageSourceCreateImageAtIndex(imageSource, 0, nil)!
}
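Usage might look like this, assuming `image` is an NSImage loaded elsewhere and the destination is writable (both are placeholders):
// Hypothetical call site for the functions above.
let destination = URL(fileURLWithPath: "/tmp/greyscale.jpg")
try? saveImage(image: image, destination: destination)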
I am still trying to understand the principle behind it.
You can try CIFilter. The annoyance is that you have to convert back and forth between NSImage and CIImage:
import Cocoa
import CoreImage

let url = Bundle.main.url(forResource: "image", withExtension: "jpg")!
let image = CIImage(contentsOf: url)!
let bwFilter = CIFilter(name: "CIColorControls", withInputParameters: ["inputImage": image, "inputSaturation": 0.0])!
if let ciImage = bwFilter.outputImage {
    let rep = NSCIImageRep(ciImage: ciImage)
    let nsImage = NSImage(size: rep.size)
    nsImage.addRepresentation(rep)
    // nsImage is now your black-and-white image
}
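If you also need to write the result to disk, one sketch (using the newer NSBitmapImageRep.FileType spelling; `outputURL` is a placeholder) goes through the bitmap representation:
// Encode the NSImage to PNG data via NSBitmapImageRep.
if let tiff = nsImage.tiffRepresentation,
   let bitmap = NSBitmapImageRep(data: tiff),
   let png = bitmap.representation(using: .png, properties: [:]) {
    try? png.write(to: outputURL)
}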

My app crashes when I upload an image to Parse because of the file size. How do I shrink it? [duplicate]

I've been searching Google and have only come across libraries that either reduce the height/width or somehow edit the UIImage appearance via Core Image. But I have not seen or found one library or post that explains how to reduce the image size so that when it uploads, it's not the full image size.
So far I have this:
if image != nil {
    //let data = NSData(data: UIImagePNGRepresentation(image))
    let data = UIImagePNGRepresentation(image)
    body.appendString("--\(boundary)\r\n")
    body.appendString("Content-Disposition: form-data; name=\"image\"; filename=\"randomName\"\r\n")
    body.appendString("Content-Type: image/png\r\n\r\n")
    body.appendData(data)
    body.appendString("\r\n")
}
and it's sending 12 MB photos. How can I reduce this to 1 MB? Thanks!
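As the answers below show, the biggest single lever is switching from PNG to JPEG and letting the compression quality shrink the payload; a minimal sketch with the same-era API used in the question:
// Lossy JPEG at 50% quality is typically far smaller than PNG data.
let jpegData = UIImageJPEGRepresentation(image, 0.5)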
Xcode 9 • Swift 4 or later
Edit/update: for iOS 10+ we can use UIGraphicsImageRenderer. For older Swift syntax, check the edit history.
extension UIImage {
    func resized(withPercentage percentage: CGFloat, isOpaque: Bool = true) -> UIImage? {
        let canvas = CGSize(width: size.width * percentage, height: size.height * percentage)
        let format = imageRendererFormat
        format.opaque = isOpaque
        return UIGraphicsImageRenderer(size: canvas, format: format).image { _ in
            draw(in: CGRect(origin: .zero, size: canvas))
        }
    }
    func resized(toWidth width: CGFloat, isOpaque: Bool = true) -> UIImage? {
        let canvas = CGSize(width: width, height: CGFloat(ceil(width/size.width * size.height)))
        let format = imageRendererFormat
        format.opaque = isOpaque
        return UIGraphicsImageRenderer(size: canvas, format: format).image { _ in
            draw(in: CGRect(origin: .zero, size: canvas))
        }
    }
}
Usage:
let image = UIImage(data: try! Data(contentsOf: URL(string:"http://i.stack.imgur.com/Xs4RX.jpg")!))!
let thumb1 = image.resized(withPercentage: 0.1)
let thumb2 = image.resized(toWidth: 72.0)
This is the way I followed to resize an image:
-(UIImage *)resizeImage:(UIImage *)image
{
    float actualHeight = image.size.height;
    float actualWidth = image.size.width;
    float maxHeight = 300.0;
    float maxWidth = 400.0;
    float imgRatio = actualWidth/actualHeight;
    float maxRatio = maxWidth/maxHeight;
    float compressionQuality = 0.5; // 50 percent compression
    if (actualHeight > maxHeight || actualWidth > maxWidth)
    {
        if (imgRatio < maxRatio)
        {
            // adjust width according to maxHeight
            imgRatio = maxHeight / actualHeight;
            actualWidth = imgRatio * actualWidth;
            actualHeight = maxHeight;
        }
        else if (imgRatio > maxRatio)
        {
            // adjust height according to maxWidth
            imgRatio = maxWidth / actualWidth;
            actualHeight = imgRatio * actualHeight;
            actualWidth = maxWidth;
        }
        else
        {
            actualHeight = maxHeight;
            actualWidth = maxWidth;
        }
    }
    CGRect rect = CGRectMake(0.0, 0.0, actualWidth, actualHeight);
    UIGraphicsBeginImageContext(rect.size);
    [image drawInRect:rect];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    NSData *imageData = UIImageJPEGRepresentation(img, compressionQuality);
    UIGraphicsEndImageContext();
    return [UIImage imageWithData:imageData];
}
Using this method, my 6.5 MB image was reduced to 104 KB.
Swift 4 code:
func resize(_ image: UIImage) -> UIImage {
    var actualHeight = Float(image.size.height)
    var actualWidth = Float(image.size.width)
    let maxHeight: Float = 300.0
    let maxWidth: Float = 400.0
    var imgRatio: Float = actualWidth / actualHeight
    let maxRatio: Float = maxWidth / maxHeight
    let compressionQuality: Float = 0.5 // 50 percent compression
    if actualHeight > maxHeight || actualWidth > maxWidth {
        if imgRatio < maxRatio {
            // adjust width according to maxHeight
            imgRatio = maxHeight / actualHeight
            actualWidth = imgRatio * actualWidth
            actualHeight = maxHeight
        } else if imgRatio > maxRatio {
            // adjust height according to maxWidth
            imgRatio = maxWidth / actualWidth
            actualHeight = imgRatio * actualHeight
            actualWidth = maxWidth
        } else {
            actualHeight = maxHeight
            actualWidth = maxWidth
        }
    }
    let rect = CGRect(x: 0.0, y: 0.0, width: CGFloat(actualWidth), height: CGFloat(actualHeight))
    UIGraphicsBeginImageContext(rect.size)
    image.draw(in: rect)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    let imageData = img?.jpegData(compressionQuality: CGFloat(compressionQuality))
    UIGraphicsEndImageContext()
    return UIImage(data: imageData!) ?? UIImage()
}
Swift 5 & Xcode 14
I was not satisfied with the solutions here that generate an image based on a given KB size, since most of them use .jpegData(compressionQuality: x). This method won't work with large images, since even with the compression quality set to 0.0 a large image will remain large; e.g. a 10 MB image produced by the portrait mode of a newer iPhone will still be above 1 MB with compressionQuality set to 0.0.
Therefore I used some answers here and rewrote a helper struct which compresses an image on a background queue:
import UIKit

struct ImageCompressor {
    static func compress(image: UIImage, maxByte: Int,
                         completion: @escaping (UIImage?) -> ()) {
        DispatchQueue.global(qos: .userInitiated).async {
            guard let currentImageSize = image.jpegData(compressionQuality: 1.0)?.count else {
                return completion(nil)
            }
            var iterationImage: UIImage? = image
            var iterationImageSize = currentImageSize
            var iterationCompression: CGFloat = 1.0
            while iterationImageSize > maxByte && iterationCompression > 0.01 {
                let percentageDecrease = getPercentageToDecreaseTo(forDataCount: iterationImageSize)
                let canvasSize = CGSize(width: image.size.width * iterationCompression,
                                        height: image.size.height * iterationCompression)
                UIGraphicsBeginImageContextWithOptions(canvasSize, false, image.scale)
                defer { UIGraphicsEndImageContext() }
                image.draw(in: CGRect(origin: .zero, size: canvasSize))
                iterationImage = UIGraphicsGetImageFromCurrentImageContext()
                guard let newImageSize = iterationImage?.jpegData(compressionQuality: 1.0)?.count else {
                    return completion(nil)
                }
                iterationImageSize = newImageSize
                iterationCompression -= percentageDecrease
            }
            completion(iterationImage)
        }
    }

    private static func getPercentageToDecreaseTo(forDataCount dataCount: Int) -> CGFloat {
        switch dataCount {
        case 0..<5000000: return 0.03
        case 5000000..<10000000: return 0.1
        default: return 0.2
        }
    }
}
Compress an image to max 2 MB:
ImageCompressor.compress(image: image, maxByte: 2000000) { image in
    guard let compressedImage = image else { return }
    // Use compressedImage
}
In case someone is looking to resize an image to less than 1 MB with Swift 3 and 4, just copy & paste this extension:
extension UIImage {
    func resized(withPercentage percentage: CGFloat) -> UIImage? {
        let canvasSize = CGSize(width: size.width * percentage, height: size.height * percentage)
        UIGraphicsBeginImageContextWithOptions(canvasSize, false, scale)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: canvasSize))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
    func resizedTo1MB() -> UIImage? {
        guard let imageData = UIImagePNGRepresentation(self) else { return nil }
        var resizingImage = self
        var imageSizeKB = Double(imageData.count) / 1000.0 // or divide by 1024 if you need KiB rather than kB
        while imageSizeKB > 1000 { // or use 1024 if you need KiB rather than kB
            guard let resizedImage = resizingImage.resized(withPercentage: 0.9),
                  let imageData = UIImagePNGRepresentation(resizedImage)
            else { return nil }
            resizingImage = resizedImage
            imageSizeKB = Double(imageData.count) / 1000.0
        }
        return resizingImage
    }
}
And use:
let resizedImage = originalImage.resizedTo1MB()
Edit:
Please note that this blocks the UI, so move it to a background thread if that's the right approach for your case.
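A minimal sketch of moving it off the main thread (`imageView` is a placeholder for whatever consumes the result):
DispatchQueue.global(qos: .userInitiated).async {
    let resized = originalImage.resizedTo1MB()
    DispatchQueue.main.async {
        imageView.image = resized // back on the main thread for UI work
    }
}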
Same as Leo's answer, but with small edits for Swift 2.0:
extension UIImage {
    func resizeWithPercentage(percentage: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: size.width * percentage, height: size.height * percentage)))
        imageView.contentMode = .ScaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }
    func resizeWithWidth(width: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: width, height: CGFloat(ceil(width/size.width * size.height)))))
        imageView.contentMode = .ScaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }
}
Swift 4.2:
let imagedata = yourImage.jpegData(compressionQuality: 0.1)!
Here is user4261201's answer, but in Swift, which I am currently using:
func compressImage(_ image: UIImage) -> UIImage {
    let actualHeight: CGFloat = image.size.height
    let actualWidth: CGFloat = image.size.width
    let imgRatio: CGFloat = actualWidth / actualHeight
    let maxWidth: CGFloat = 1024.0
    let resizedHeight: CGFloat = maxWidth / imgRatio
    let compressionQuality: CGFloat = 0.5
    let rect: CGRect = CGRect(x: 0, y: 0, width: maxWidth, height: resizedHeight)
    UIGraphicsBeginImageContext(rect.size)
    image.draw(in: rect)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    let imageData: Data = UIImageJPEGRepresentation(img, compressionQuality)!
    UIGraphicsEndImageContext()
    return UIImage(data: imageData)!
}
I think the core of the question here is how to reliably shrink a UIImage's data to a certain size before uploading to a server, rather than just shrinking the UIImage itself.
Using func jpegData(compressionQuality: CGFloat) -> Data? works well if you don't need to compress to a specific size. However, for certain cases I find it useful to be able to compress below a specified file size. In that case, jpegData alone is unreliable: iteratively compressing an image that way makes the file size plateau (and can be really expensive). Instead, I prefer to reduce the size of the UIImage itself as in Leo's answer, then convert to jpegData and iteratively check whether the reduced size is beneath the value I chose (within a margin that I set). I adjust the compression step multiplier based on the ratio of the current file size to the desired file size to speed up the first iterations, which are the most expensive (since the file size is largest at that point).
Swift 5
extension UIImage {
    func resized(withPercentage percentage: CGFloat, isOpaque: Bool = true) -> UIImage? {
        let canvas = CGSize(width: size.width * percentage, height: size.height * percentage)
        let format = imageRendererFormat
        format.opaque = isOpaque
        return UIGraphicsImageRenderer(size: canvas, format: format).image { _ in
            draw(in: CGRect(origin: .zero, size: canvas))
        }
    }
    func compress(to kb: Int, allowedMargin: CGFloat = 0.2) -> Data {
        guard kb > 10 else { return Data() } // Prevents user from compressing below a limit (10 kB in this case).
        let bytes = kb * 1024
        var compression: CGFloat = 1.0
        let step: CGFloat = 0.05
        var holderImage = self
        var complete = false
        while !complete {
            guard let data = holderImage.jpegData(compressionQuality: 1.0) else { break }
            let ratio = data.count / bytes
            if data.count < Int(CGFloat(bytes) * (1 + allowedMargin)) {
                complete = true
                return data
            } else {
                let multiplier: CGFloat = CGFloat((ratio / 5) + 1)
                compression -= (step * multiplier)
            }
            guard let newImage = holderImage.resized(withPercentage: compression) else { break }
            holderImage = newImage
        }
        return Data()
    }
}
And usage:
let data = image.compress(to: 1000)
If you are uploading the image in NSData format, use this:
NSData *imageData = UIImageJPEGRepresentation(yourImage, floatValue);
yourImage is your UIImage.
floatValue is the compression value (0.0 to 1.0).
The above converts the image to JPEG.
For PNG, use UIImagePNGRepresentation.
Note: the above code is in Objective-C. Please check how to define NSData in Swift.
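For reference, the Swift equivalent uses Data rather than NSData; with the modern spelling:
// Swift counterpart of the Objective-C line above (quality 0.0...1.0).
let imageData: Data? = yourImage.jpegData(compressionQuality: 0.7)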
Based on the answer of Tung Fam, to resize to a specific file size (like 0.7 MB) you can use this code:
extension UIImage {
    func resize(withPercentage percentage: CGFloat) -> UIImage? {
        let newRect = CGRect(origin: .zero, size: CGSize(width: size.width * percentage, height: size.height * percentage))
        UIGraphicsBeginImageContextWithOptions(newRect.size, true, 1)
        self.draw(in: newRect)
        defer { UIGraphicsEndImageContext() }
        return UIGraphicsGetImageFromCurrentImageContext()
    }
    func resizeTo(MB: Double) -> UIImage? {
        guard let fileSize = self.pngData()?.count else { return nil }
        let fileSizeInMB = CGFloat(fileSize) / (1024.0 * 1024.0) // from bytes to MB
        // Scale toward the requested size (the original ignored the MB parameter).
        let percentage = CGFloat(MB) / fileSizeInMB
        return resize(withPercentage: percentage)
    }
}
Using this you can control the size that you want:
func jpegImage(image: UIImage, maxSize: Int, minSize: Int, times: Int) -> Data? {
    var maxQuality: CGFloat = 1.0
    var minQuality: CGFloat = 0.0
    var bestData: Data?
    for _ in 1...times {
        let thisQuality = (maxQuality + minQuality) / 2
        guard let data = image.jpegData(compressionQuality: thisQuality) else { return nil }
        let thisSize = data.count
        if thisSize > maxSize {
            maxQuality = thisQuality
        } else {
            minQuality = thisQuality
            bestData = data
            if thisSize > minSize {
                return bestData
            }
        }
    }
    return bestData
}
Method call example:
jpegImage(image: image, maxSize: 500000, minSize: 400000, times: 10)
It binary-searches the JPEG quality to get a file size between minSize and maxSize, but only tries times times. If it fails within that many attempts, it returns nil.
I think the easiest way is provided by Swift itself: compress the image into compressed data. Below is the code in Swift 4.2:
let imageData = yourImageTobeCompressed.jpegData(compressionQuality: 0.5)
And you can send this imageData to upload to the server.
This is what I did in Swift 3 to resize a UIImage. It reduces the image size to less than 100 KB, scaling proportionally.
extension UIImage {
    class func scaleImageWithDivisor(img: UIImage, divisor: CGFloat) -> UIImage {
        let size = CGSize(width: img.size.width / divisor, height: img.size.height / divisor)
        UIGraphicsBeginImageContext(size)
        img.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }
}
Usage:
let scaledImage = UIImage.scaleImageWithDivisor(img: capturedImage!, divisor: 3)
Same in Objective-C:
Interface:
@interface UIImage (Resize)
- (UIImage *)resizedWithPercentage:(CGFloat)percentage;
- (UIImage *)resizeTo:(CGFloat)weight isPng:(BOOL)isPng jpegCompressionQuality:(CGFloat)compressionQuality;
@end
Implementation:
#import "UIImage+Resize.h"

@implementation UIImage (Resize)

- (UIImage *)resizedWithPercentage:(CGFloat)percentage {
    CGSize canvasSize = CGSizeMake(self.size.width * percentage, self.size.height * percentage);
    UIGraphicsBeginImageContextWithOptions(canvasSize, false, self.scale);
    [self drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
    UIImage *sizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return sizedImage;
}

- (UIImage *)resizeTo:(CGFloat)weight isPng:(BOOL)isPng jpegCompressionQuality:(CGFloat)compressionQuality {
    NSData *imageData = isPng ? UIImagePNGRepresentation(self) : UIImageJPEGRepresentation(self, compressionQuality);
    if (imageData && [imageData length] > 0) {
        UIImage *resizingImage = self;
        // bytes to kB (the original divided by `weight`, which looks like a typo)
        double imageSizeKB = [imageData length] / 1000.0;
        while (imageSizeKB > weight) {
            UIImage *resizedImage = [resizingImage resizedWithPercentage:0.9];
            imageData = isPng ? UIImagePNGRepresentation(resizedImage) : UIImageJPEGRepresentation(resizedImage, compressionQuality);
            resizingImage = resizedImage;
            imageSizeKB = [imageData length] / 1000.0;
        }
        return resizingImage;
    }
    return nil;
}

@end
Usage:
#import "UIImage+Resize.h"
UIImage *resizedImage = [self.picture resizeTo:2048 isPng:NO jpegCompressionQuality:1.0];
When I tried to use the accepted answer to resize an image in my project, it came out very pixelated and blurry. I ended up with this piece of code to resize images without adding pixelation or blur:
func scale(withPercentage percentage: CGFloat) -> UIImage? {
    let cgSize = CGSize(width: size.width * percentage, height: size.height * percentage)
    let hasAlpha = true
    let scale: CGFloat = 0.0 // Use scale factor of main screen
    UIGraphicsBeginImageContextWithOptions(cgSize, !hasAlpha, scale)
    self.draw(in: CGRect(origin: CGPoint.zero, size: cgSize))
    let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext() // balance the begin call (missing in the original)
    return scaledImage
}
I came across this question while investigating image compression and export in Swift, and used it as a starting point to understand the problem better and derive a better technique.
The UIGraphicsBeginImageContext(), UIGraphicsGetImageFromCurrentImageContext(), UIGraphicsEndImageContext() process is an older technique which has been superseded by UIGraphicsImageRenderer, as used by iron_john_bonney and leo-dabus. Their examples were written as extensions on UIImage, whereas I chose to write an independent function. The required differences in approach can be identified by comparison (look at and near the UIGraphicsImageRenderer call), and could easily be ported back into a UIImage extension.
I thought there was potential for improvement on the compression algorithms used here, so I took an approach that starts by adjusting the image to have a given total number of pixels, and then compresses it by adjusting the JPEG compression quality to achieve a specified final file size. The intent of specifying a total number of pixels is to avoid getting tied up in issues with image aspect ratios. Although I haven't done an exhaustive investigation, I suspect scaling an image to a specified total number of pixels will put the final JPEG file size in a general range, and JPEG compression can then ensure a file-size limit is met with acceptable image quality, provided the initial pixel count isn't too high.
When using UIGraphicsImageRenderer, the CGRect is specified in logical pixels on a host Apple device, which is different from the actual pixels in the output JPEG. Look up device pixel ratios to understand this. To obtain the device pixel ratio, I tried extracting it from the environment, but these techniques caused the playground to crash, so I used a less efficient technique that worked.
If you paste this code into an Xcode playground and place an appropriate .jpg file in the Resources folder, the output file will be placed in the Playground output folder (use Quick Look in the Live View to find this location).
import UIKit

func compressUIImage(_ image: UIImage?, numPixels: Int, fileSizeLimitKB: Double, exportImage: Bool) -> Data {
    var returnData: Data
    if let origWidth = image?.size.width,
       let origHeight = image?.size.height {
        print("Original image size =", origWidth, "*", origHeight, "pixels")
        let imgMult = min(sqrt(CGFloat(numPixels)/(origWidth * origHeight)), 1) // This multiplier scales the image to have the desired number of pixels
        print("imageMultiplier =", imgMult)
        let cgRect = CGRect(origin: .zero, size: CGSize(width: origWidth * imgMult, height: origHeight * imgMult)) // This is in *logical* pixels
        let renderer = UIGraphicsImageRenderer(size: cgRect.size)
        let img = renderer.image { ctx in
            image?.draw(in: cgRect)
        }
        // Now get the device pixel ratio if needed...
        var img_scale: CGFloat = 1
        if exportImage {
            img_scale = img.scale
        }
        print("Image scaling factor =", img_scale)
        // ...and use it to ensure the *output* image has the desired number of pixels
        let cgRect_scaled = CGRect(origin: .zero, size: CGSize(width: origWidth * imgMult/img_scale, height: origHeight * imgMult/img_scale)) // This is in *logical* pixels
        print("New image size (in logical pixels) =", cgRect_scaled.width, "*", cgRect_scaled.height, "pixels") // Due to device pixel ratios, can have fractional pixel dimensions
        let renderer_scaled = UIGraphicsImageRenderer(size: cgRect_scaled.size)
        let img_scaled = renderer_scaled.image { ctx in
            image?.draw(in: cgRect_scaled)
        }
        var compQual = CGFloat(1.0)
        returnData = img_scaled.jpegData(compressionQuality: 1.0)!
        var imageSizeKB = Double(returnData.count) / 1000.0
        print("compressionQuality =", compQual, "=> imageSizeKB =", imageSizeKB, "KB")
        while imageSizeKB > fileSizeLimitKB {
            compQual *= 0.9
            returnData = img_scaled.jpegData(compressionQuality: compQual)!
            imageSizeKB = Double(returnData.count) / 1000.0
            print("compressionQuality =", compQual, "=> imageSizeKB =", imageSizeKB, "KB")
        }
    } else {
        returnData = Data()
    }
    return returnData
}

let image_orig = UIImage(named: "input.jpg")
let image_comp_data = compressUIImage(image_orig, numPixels: Int(4e6), fileSizeLimitKB: 1300, exportImage: true)

func getDocumentsDirectory() -> URL {
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    return paths[0]
}

let filename = getDocumentsDirectory().appendingPathComponent("output.jpg")
try? image_comp_data.write(to: filename)
Sources included Jordan Morgan and Hacking with Swift.
iOS 15+, Swift 5
Some of the solutions here don't answer the question, because they don't produce an image with a smaller file size to upload to the backend. It is very important not to upload big image files when it isn't really needed: they take much more space, are more expensive to store, and take more time to download, making the UI wait for content.
Lots of answers use either
UIGraphicsImageRenderer(size: canvas).image {
    _ in draw(in: CGRect(origin: .zero, size: canvas))
}
or the older
UIGraphicsGetImageFromCurrentImageContext()
The problem with these solutions is that they generate a smaller UIImage but don't change the underlying CGImage, so when you try to send the image as Data with .jpegData(compressionQuality:) you won't upload the UIImage but the data from the underlying CGImage, which is not resized and has a large file size.
The other solutions force the jpegData compression to the smallest available, which produces very heavy compression and quality loss.
To actually resize the image, with all the underlying data, and send it as a genuinely small, best-quality JPEG, use the method preparingThumbnail(of:) and set .jpegData(compressionQuality:) to 0.8 or 0.9.
extension UIImage {
    func thumbnail(width: CGFloat) -> UIImage? {
        guard size.width > width else { return self }
        let imageSize = CGSize(
            width: width,
            height: CGFloat(ceil(width/size.width * size.height))
        )
        return preparingThumbnail(of: imageSize)
    }
}
Here is the documentation: preparingThumbnail(of:)
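A short usage sketch (`photo` is a placeholder UIImage): resize once with the extension above, then encode at high quality:
if let thumb = photo.thumbnail(width: 1024),
   let data = thumb.jpegData(compressionQuality: 0.9) {
    // upload `data`
}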
In case someone needs it, here is an async version modified from Ali Pacman's answer:
import UIKit

extension UIImage {
    func compress(to maxByte: Int) async -> UIImage? {
        let compressTask = Task(priority: .userInitiated) { () -> UIImage? in
            guard let currentImageSize = jpegData(compressionQuality: 1.0)?.count else {
                return nil
            }
            var iterationImage: UIImage? = self
            var iterationImageSize = currentImageSize
            var iterationCompression: CGFloat = 1.0
            while iterationImageSize > maxByte && iterationCompression > 0.01 {
                let percentageDecrease = getPercentageToDecreaseTo(forDataCount: iterationImageSize)
                let canvasSize = CGSize(width: size.width * iterationCompression, height: size.height * iterationCompression)
                UIGraphicsBeginImageContextWithOptions(canvasSize, false, scale)
                defer { UIGraphicsEndImageContext() }
                draw(in: CGRect(origin: .zero, size: canvasSize))
                iterationImage = UIGraphicsGetImageFromCurrentImageContext()
                guard let newImageSize = iterationImage?.jpegData(compressionQuality: 1.0)?.count else {
                    return nil
                }
                iterationImageSize = newImageSize
                iterationCompression -= percentageDecrease
            }
            return iterationImage
        }
        return await compressTask.value
    }

    private func getPercentageToDecreaseTo(forDataCount dataCount: Int) -> CGFloat {
        switch dataCount {
        case 0..<3000000: return 0.05
        case 3000000..<10000000: return 0.1
        default: return 0.2
        }
    }
}
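Usage from an async context, for example inside a Task (`photo` and the 2 MB limit are placeholders):
Task {
    let compressed = await photo.compress(to: 2_000_000)
    // use `compressed`
}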
With Swift 5.5, using async/await and image.pngData() rather than .jpegData(compressionQuality: 1.0) to get the correct data representation of the image:
import UIKit

public struct ImageCompressor {
    private static func getPercentageToDecreaseTo(forDataCount dataCount: Int) -> CGFloat {
        switch dataCount {
        case 0..<3000000: return 0.05
        case 3000000..<10000000: return 0.1
        default: return 0.2
        }
    }

    static public func compressAsync(image: UIImage, maxByte: Int) async -> UIImage? {
        guard let currentImageSize = image.pngData()?.count else { return nil }
        var iterationImage: UIImage? = image
        var iterationImageSize = currentImageSize
        var iterationCompression: CGFloat = 1.0
        while iterationImageSize > maxByte && iterationCompression > 0.01 {
            let percentageDecrease = getPercentageToDecreaseTo(forDataCount: iterationImageSize)
            let canvasSize = CGSize(width: image.size.width * iterationCompression,
                                    height: image.size.height * iterationCompression)
            /*
            UIGraphicsBeginImageContextWithOptions(canvasSize, false, image.scale)
            defer { UIGraphicsEndImageContext() }
            image.draw(in: CGRect(origin: .zero, size: canvasSize))
            iterationImage = UIGraphicsGetImageFromCurrentImageContext()
            */
            iterationImage = await image.byPreparingThumbnail(ofSize: canvasSize)
            guard let newImageSize = iterationImage?.pngData()?.count else {
                return nil
            }
            iterationImageSize = newImageSize
            iterationCompression -= percentageDecrease
        }
        return iterationImage
    }
}
extension UIImage {
    // Note: resize(toWidth:) and resize(toHeight:) are helpers this answer
    // relies on but does not show.
    func resized(toValue value: CGFloat) -> UIImage {
        if size.width > size.height {
            return self.resize(toWidth: value)!
        } else {
            return self.resize(toHeight: value)!
        }
    }
}
Resize the UIImage using .resizeToMaximumBytes.

Xcode Swift - sending a vector image to a server

Is it possible to send a UIImage in some vector format to a PHP server, and how would it be done? It's important that the image be in some vector format.
I have created an app where you can draw lines, and I would like to save those drawn lines to an image in some vector format. I say "some" because I'm not sure which format is best.
I just need to be able to save to it and read it.
Here is my code for drawing.
@IBAction func panning(sender: AnyObject) {
    pan.maximumNumberOfTouches = 1
    pan.minimumNumberOfTouches = 1
    var currentPoint: CGPoint = pan.locationInView(self)
    var midPoint: CGPoint = midpoint(previousPoint, p1: currentPoint)
    if (pan.state == UIGestureRecognizerState.Began) {
        path.moveToPoint(currentPoint)
    } else if (pan.state == UIGestureRecognizerState.Changed) {
        path.addQuadCurveToPoint(midPoint, controlPoint: previousPoint)
    }
    path.lineCapStyle = kCGLineCapRound
    path.lineWidth = 10
    previousPoint = currentPoint
    self.setNeedsDisplay()
}

override func drawRect(rect: CGRect) {
    //println(pan.velocityInView(self));
    UIColor.redColor().setStroke()
    path.stroke()
}

@IBAction func sendImage(sender: AnyObject) {
    UIGraphicsBeginImageContext(self.bounds.size)
    var ctx: CGContextRef = UIGraphicsGetCurrentContext()
    self.layer.renderInContext(ctx)
    var saveImg = UIGraphicsGetImageFromCurrentImageContext()
    UIImageWriteToSavedPhotosAlbum(saveImg, self, Selector("image:didFinishSavingWithError:contextInfo:"), nil)
    UIGraphicsEndImageContext()
}

func image(image: UIImage, didFinishSavingWithError error: NSErrorPointer, contextInfo: UnsafePointer<()>) {
    dispatch_async(dispatch_get_main_queue(), {
        UIAlertView(title: "Success", message: "This image has been saved to your Camera Roll successfully", delegate: nil, cancelButtonTitle: "Close").show()
    })
}

func midpoint(p0: CGPoint, p1: CGPoint) -> CGPoint {
    return CGPoint(x: (p0.x + p1.x) / 2.0, y: (p0.y + p1.y) / 2.0)
}
Thanks.
