Memory issue with loading multiple UIImages in SwiftUI - memory-management

I use this simple code to load an image from the temp directory:
struct AddNewMedia: View {
    @State var filename: String

    func imageFullPath(filename: String) -> UIImage {
        let pathToTempDir = FileManager.default.temporaryDirectory.appendingPathComponent(filename)
        return UIImage(contentsOfFile: pathToTempDir.path)!
    }

    var body: some View {
        Image(uiImage: self.imageFullPath(filename: filename))
            .resizable()
            .scaledToFit()
    }
}
Every new image needs a lot of memory to be displayed.
In this project I need to load 20-30 images, so I need to reduce the memory used by each image, which is currently about 23 MB per image.
Is it possible to display the image directly from its path without loading it all into memory? Any suggestions? Many thanks!

Anything that you want to display in your app has to be loaded into memory, so there is nothing to be done on that front.
On iOS, a UIImage's memory usage is proportional to the image's pixel dimensions, not its file size. For example, a 1920x1080 image consumes roughly 8 MB of memory (1920 * 1080 * 4 bytes per pixel), regardless of how small the compressed file is.
As mentioned at WWDC 2018: "Memory use is related to the dimensions of the image, not the file size."
So as a solution, you can use an image with smaller dimensions, or you can downsample the image before presenting it.
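By that arithmetic, the 23 MB per image reported above corresponds to roughly 23,000,000 / 4 ≈ 5.75 million pixels, i.e. an image around 2400x2400, so downsampling to the size actually displayed on screen can easily cut memory usage by an order of magnitude.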
Here is an example function for downsampling an image; credits to this link:
func downsample(imageAt imageURL: URL,
                to pointSize: CGSize,
                scale: CGFloat = UIScreen.main.scale) -> UIImage? {
    // Create a CGImageSource that represents the image,
    // without decoding it into memory yet
    let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, imageSourceOptions) else {
        return nil
    }

    // Calculate the desired maximum dimension in pixels
    let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale

    // Perform the downsampling
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
    ] as CFDictionary
    guard let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
        return nil
    }

    // Return the downsampled image as a UIImage
    return UIImage(cgImage: downsampledImage)
}
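For example, the view from the question could call it along these lines (a minimal sketch; the 300x300-point target size is an assumption, and ideally you would pass the size the image is actually laid out at):
struct AddNewMedia: View {
    @State var filename: String

    func downsampledImage(filename: String) -> UIImage {
        let url = FileManager.default.temporaryDirectory.appendingPathComponent(filename)
        // Decode only as many pixels as will actually be displayed;
        // fall back to an empty image if downsampling fails
        return downsample(imageAt: url, to: CGSize(width: 300, height: 300)) ?? UIImage()
    }

    var body: some View {
        Image(uiImage: downsampledImage(filename: filename))
            .resizable()
            .scaledToFit()
    }
}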

Related

How to merge 3 images (100x100 px) into one new large image (300x100 px) in Swift

I am very new to SwiftUI and haven't found a way to do this.
I want to create one image (300x100) dynamically by merging three single 100x100 images horizontally:
ImageA (100x100) + ImageB (100x100) + ImageC (100x100) = ImageD (300x100)
I found a way to show them in an HStack, but how can I get one new image so I can pass its data to another function?
Regards,
Alex
Thanks a lot. I tried to use your function but got an error: "Cannot convert value of type 'UIImage?' to expected element type 'UIImage'"!
The code should draw the zero three times; number-0.png is just one large zero.
//
//  ContentView.swift
//  imagecombine
//
//  Created by Alex on 02.08.20.
//  Copyright © 2020 Alex. All rights reserved.
//

import SwiftUI

func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }

    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { context in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}

let redImage = UIImage(named: "number-0.png")
let greenImage = UIImage(named: "number-0.png")
let blueImage = UIImage(named: "number-0.png")

// error: Cannot convert value of type 'UIImage?' to expected element type 'UIImage'
let image = combineHorizontally([redImage, greenImage, blueImage])

struct ContentView: View {
    var body: some View {
        Image(image)
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
You can use UIGraphicsImageRenderer and draw the images one after another:
func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }

    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}
You said that there were only three images and that they were 100x100, but the above should work regardless of the number and size of the images (memory permitting, of course).
Anyway, this
let image = combineHorizontally([redImage, greenImage, blueImage])
produces the three images drawn side by side in a single 300x100 image.
To use that in a context where you don't want an optional, you can use the ! forced-unwrapping operator, the ?? nil-coalescing operator, or some other unwrapping pattern, e.g. guard let, if let, etc.
Alternatively, if you don’t want to deal with optionals at all, you can write a rendition that doesn’t return an optional at all (but also doesn’t detect the error scenario where an empty array was provided):
func combineHorizontally(_ images: [UIImage]) -> UIImage {
    var size = CGSize.zero
    var scale: CGFloat = 1
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}
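To address the compile error from the comment above: UIImage(named:) returns an optional, so the array being built is [UIImage?] rather than [UIImage]. A minimal sketch using the optional-returning version of the function (number-0.png taken from the question):
let images = ["number-0.png", "number-0.png", "number-0.png"]
    .compactMap { UIImage(named: $0) } // drop any images that failed to load

struct ContentView: View {
    var body: some View {
        // combineHorizontally returns UIImage?, so supply a fallback
        Image(uiImage: combineHorizontally(images) ?? UIImage())
    }
}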

How to crop an image with a selectable area in Swift 4 or later?

I need some help with a function that I'd like to implement in my app.
I have a view containing an image view whose content mode is Aspect Fit. When I load an image from my library, I would like to crop an area with an adjustable rectangle, creating a new image.
I've looked for examples and online tutorials but I did not succeed.
Can anyone help me with that?
Here are the images from my view.
The simple solution is to just render the image view within a particular CGRect:
func snapshot(in imageView: UIImageView, rect: CGRect) -> UIImage {
    return UIGraphicsImageRenderer(bounds: rect).image { _ in
        imageView.drawHierarchy(in: imageView.bounds, afterScreenUpdates: true)
    }
}
The limitation of that approach is that if the image has a considerably higher resolution than the image view can render (as is often the case when we use "scale aspect fit"), you'll lose that additional precision.
If you want to preserve the resolution, you should convert the CGRect into the image's own coordinate system, in this case assuming "scale aspect fit" (namely, centered and scaled so the whole image is shown):
func snapshot(in imageView: UIImageView, rect: CGRect) -> UIImage {
    assert(imageView.contentMode == .scaleAspectFit)

    let image = imageView.image!

    // figure out what the scale is
    let viewRatio = imageView.bounds.width / imageView.bounds.height
    let imageRatio = image.size.width / image.size.height
    let scale: CGFloat
    if viewRatio > imageRatio {
        // the image fills the view's height (pillarboxed)
        scale = image.size.height / imageView.bounds.height
    } else {
        // the image fills the view's width (letterboxed)
        scale = image.size.width / imageView.bounds.width
    }

    // convert the `rect` into coordinates within the image, itself
    let size = rect.size * scale
    let origin = CGPoint(x: image.size.width / 2 - (imageView.bounds.midX - rect.minX) * scale,
                         y: image.size.height / 2 - (imageView.bounds.midY - rect.minY) * scale)
    let scaledRect = CGRect(origin: origin, size: size)

    // now render the image and grab the appropriate rectangle within
    // the image's coordinate system
    let format = UIGraphicsImageRendererFormat()
    format.scale = image.scale
    format.opaque = false
    return UIGraphicsImageRenderer(bounds: scaledRect, format: format).image { _ in
        image.draw(at: .zero)
    }
}
Using this extension:
extension CGSize {
    static func * (lhs: CGSize, rhs: CGFloat) -> CGSize {
        return CGSize(width: lhs.width * rhs, height: lhs.height * rhs)
    }
}
That yields the selected portion of the image, at the image's full resolution.
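A hypothetical call site might look like this (imageView, selectionRect, and croppedImageView are placeholder names for your own image view, the user-adjusted crop rectangle, and wherever the result is displayed):
let cropped = snapshot(in: imageView, rect: selectionRect)
croppedImageView.image = cropped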
If I understand your question correctly, there are two parts to it:
An adjustable rectangle area over the image
Cropping a UIImage
Break your Google query up and search for solutions to these two sub-problems separately.
Or take a look at, or use, something like this:
iOS-Image-Crop-View

NSWindow view capture to image

Update: Nov. 6
Thanks to pointum, I revised my question.
On 10.13, I'm trying to write a view snapshot function as a general-purpose NSView or window extension. Here's my take, as a window delegate:
var snapshot: NSImage? {
    get {
        guard let window = self.window, let view = self.window!.contentView else { return nil }

        var rect = view.bounds
        rect = view.convert(rect, to: nil)
        rect = window.convertToScreen(rect)

        // Adjust for the title bar; kTitleUtility = 16, kTitleNormal = 22
        let delta: CGFloat = CGFloat(window.styleMask.contains(.utilityWindow) ? kTitleUtility : kTitleNormal)
        rect.origin.y += delta
        rect.size.height += delta * 2
        Swift.print("rect: \(rect)")

        let cgImage = CGWindowListCreateImage(rect, .optionIncludingWindow,
                                              CGWindowID(window.windowNumber), .bestResolution)
        let image = NSImage(cgImage: cgImage!, size: rect.size)
        return image
    }
}
The goal is to derive a "flattened" snapshot of the window. Initially I'm using this image in a document icon drag.
It acts bizarrely. It seems to work initially, with the window in the center, but subsequently the resulting image is different (smaller), especially when the window is moved up or down the screen.
I think the rect capture is wrong?
Adding to pointum's answer, I came up with this:
var snapshot: NSImage? {
    get {
        guard let window = self.window, let view = self.window!.contentView else { return nil }

        // CGRect.null (infinite origin, zero size) tells
        // CGWindowListCreateImage to capture the window's minimum bounds
        let null = CGRect.null
        let cgImage = CGWindowListCreateImage(null, .optionIncludingWindow,
                                              CGWindowID(window.windowNumber), .bestResolution)
        let image = NSImage(cgImage: cgImage!, size: view.bounds.size)
        return image
    }
}
As I only want/need a single window, specifying null does the trick. When all else fails, read the docs, if you know where to look. :o
Use CGWindowListCreateImage:
let rect = /* view bounds converted to screen coordinates */
let image = CGWindowListCreateImage(rect, .optionIncludingWindow,
                                    CGWindowID(window.windowNumber), .bestResolution)
To save the image use something like this:
guard let destination = CGImageDestinationCreateWithURL(url as CFURL, "public.jpeg" as CFString, 1, nil) else { return }
CGImageDestinationAddImage(destination, image, nil)
CGImageDestinationFinalize(destination)
Note that screen coordinates are flipped. From the docs:
The coordinates of the rectangle must be specified in screen coordinates, where the screen origin is in the upper-left corner of the main display and y-axis values increase downward
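Since that flip trips people up, here is a minimal sketch of the conversion (assuming view and window are in scope; it uses the primary display, NSScreen.screens[0], whose top-left corner defines the origin of the CG coordinate space):
// AppKit rects are bottom-left based; CGWindowListCreateImage expects
// top-left based coordinates relative to the primary display
let boundsInWindow = view.convert(view.bounds, to: nil)
let boundsOnScreen = window.convertToScreen(boundsInWindow)
let primaryHeight = NSScreen.screens[0].frame.height
let rect = CGRect(x: boundsOnScreen.origin.x,
                  y: primaryHeight - boundsOnScreen.maxY,
                  width: boundsOnScreen.width,
                  height: boundsOnScreen.height)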

Adding a single image to a CGContext background makes a tiled view on output

Currently I am developing a project that uses CGContext to draw an image on a specific path.
In my sample I have used a class that inherits from UIView.
Inside the Draw override method I get the context from UIGraphics.GetCurrentContext() and use it to draw an arc from 0 to 360 degrees.
The CGContext's fill color has to be filled with the image, so I have used the code below:
var image = UIColor.FromPatternImage(imageView);
context.SetFillColor(image.CGColor);
imageView contains a single image, but on the output screen the image gets repeated many times.
A sample to replicate this issue has been attached to this query.
Please download the sample from this link.
Note: This question has been raised on GitHub.
They have said that this is the expected behaviour: when adding an image to the background of a CGContext, we get tiles of the image on the screen.
Can anyone help me sort out this issue?
Thanks and Regards,
Selva Kumar V.
As the linked Apple document says, during drawing the image in the pattern color is tiled as necessary to cover the given area when you use UIColor.FromPatternImage. So you can scale the image to the target size before calling that method. Refer to the following code:
public UIImage ScalingImageToSize(UIImage sourceImage, CGSize newSize)
{
    if (UIScreen.MainScreen.Scale == 2.0) // @2x: iPhone 6/7/8
    {
        UIGraphics.BeginImageContextWithOptions(newSize, false, 2.0f);
    }
    else if (UIScreen.MainScreen.Scale == 3.0) // @3x: iPhone 6 Plus/7 Plus/8 Plus ...
    {
        UIGraphics.BeginImageContextWithOptions(newSize, false, 3.0f);
    }
    else
    {
        UIGraphics.BeginImageContext(newSize);
    }
    sourceImage.Draw(new CGRect(0, 0, newSize.Width, newSize.Height));
    UIImage newImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return newImage;
}

// ...
UIImage imageView = new UIImage("a0.png");
UIImage newImage = ScalingImageToSize(imageView, new CGSize(200, 400)); // the size you want
var image = UIColor.FromPatternImage(newImage);
// ...
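This works because the scaled image becomes the pattern cell: once the cell is as large as the area being filled, the pattern only needs a single tile to cover it, so only one copy of the image appears.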

Resize image in NSTextView to fit

I have NSAttributedString objects with embedded images, which are presented in NSTextViews. On iOS, I was able to resize the bounds of the NSTextAttachment, and this made the image fit.
extension NSTextAttachment {
    func setImageWidth(width: CGFloat, range: NSRange) {
        var thisImage = image
        if thisImage == nil {
            thisImage = image(forBounds: bounds, textContainer: nil, characterIndex: range.location)
        }
        if let thisImage = thisImage {
            let ratio = thisImage.size.height / thisImage.size.width
            bounds = CGRect(x: bounds.origin.x, y: bounds.origin.y, width: width, height: ratio * width)
            print("New bounds: \(bounds)")
        }
    }
}
This code also runs on macOS, but it does not actually resize the image. Below you can see that there is a box of the correct size around the image, but the actual image overflows the box.
I have also followed this guide: Implementing Rich Text with Images on OS X and iOS. It moves the code into subclasses, but has the same effect.
Any suggestions? Is there something besides NSTextAttachment.bounds that I should be adjusting?
UPDATE
I found that modifying the size component of the NSImage works! However, it is now showing all my images upside down, albeit at the correct size. :(
Solved!
extension NSImage {
    func resizeToFit(containerWidth: CGFloat) {
        var scaleFactor: CGFloat = 1.0
        let currentWidth = self.size.width
        let currentHeight = self.size.height
        if currentWidth > containerWidth {
            scaleFactor = (containerWidth * 0.9) / currentWidth
        }
        let newWidth = currentWidth * scaleFactor
        let newHeight = currentHeight * scaleFactor
        self.size = NSSize(width: newWidth, height: newHeight)
        print("Size: \(size)")
    }
}
As I mentioned in the update, you need to change NSImage.size. The flip was coming from one of the subclasses I had left in from the guide linked in the question. Once I went back to the stock classes, it works!
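For completeness, here is a hedged sketch of applying this to every attachment in a text view (assuming textView is your NSTextView; the enumeration itself is standard NSAttributedString API):
let storage = textView.textStorage!
let width = textView.bounds.width
storage.enumerateAttribute(.attachment, in: NSRange(location: 0, length: storage.length), options: []) { value, _, _ in
    // Resize each attachment's image so it fits the container width
    if let attachment = value as? NSTextAttachment, let image = attachment.image {
        image.resizeToFit(containerWidth: width)
    }
}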
