Swift NSImage to CGImage (Cocoa)

How can I convert an NSImage to a CGImage in Swift? In Objective-C I did it like this:
- (CGImageRef)CGImage {
    NSData *imageData = self.TIFFRepresentation;
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    return maskRef;
}
I tried this in Swift:
extension NSImage {
    var CGImage: CGImageRef {
        get {
            let imageData = self.TIFFRepresentation
            let source = CGImageSourceCreateWithData(imageData as CFDataRef, nil)
            let maskRef = CGImageSourceCreateImageAtIndex(source, UInt(0), nil)
            return maskRef
        }
    }
}
It doesn't compile; I get the error "Could not find an overload for 'init' that accepts the supplied arguments" at the line let maskRef ...

Here's what I'm using to convert an NSImage to a CGImage:
let image = NSImage(named: "image")
if let image = image {
    var imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)
}

Swift 5 code:

if let image = NSImage(named: "Icon") {
    let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)
}

Ah, I found the solution. It's because in Swift these calls return an Unmanaged object (I didn't really understand what that meant at first). But this code now works:
extension NSImage {
    var CGImage: CGImageRef {
        get {
            let imageData = self.TIFFRepresentation
            // Core Foundation "Create" functions return retained references,
            // so consume them with takeRetainedValue() to avoid a leak.
            let source = CGImageSourceCreateWithData(imageData as CFDataRef, nil).takeRetainedValue()
            let maskRef = CGImageSourceCreateImageAtIndex(source, UInt(0), nil)
            return maskRef.takeRetainedValue()
        }
    }
}

For Swift 4.0, Xcode 9.2:
extension NSImage {
    @objc var CGImage: CGImage? {
        get {
            guard let imageData = self.tiffRepresentation else { return nil }
            guard let sourceData = CGImageSourceCreateWithData(imageData as CFData, nil) else { return nil }
            return CGImageSourceCreateImageAtIndex(sourceData, 0, nil)
        }
    }
}

A Swift 5 implementation:
extension NSImage {
    var CGImage: CGImage {
        get {
            let imageData = self.tiffRepresentation!
            let source = CGImageSourceCreateWithData(imageData as CFData, nil).unsafelyUnwrapped
            let maskRef = CGImageSourceCreateImageAtIndex(source, 0, nil)
            return maskRef.unsafelyUnwrapped
        }
    }
}

Related

Set image color of a template image

I have an image like this:
(Rendered as a template image)
I tried this code:
@IBOutlet weak var imgAdd: NSImageView!

imgAdd.layer?.backgroundColor = CGColor.white
Which only changes the background color of course.
Is there a way to change the color of this image programmatically?
So far I've tried the code below which doesn't work. (The image color doesn't change.)
func tintedImage(_ image: NSImage, tint: NSColor) -> NSImage {
    guard let tinted = image.copy() as? NSImage else { return image }
    tinted.lockFocus()
    tint.set()
    let imageRect = NSRect(origin: NSZeroPoint, size: image.size)
    NSRectFillUsingOperation(imageRect, .sourceAtop)
    tinted.unlockFocus()
    return tinted
}

imgDok.image = tintedImage(NSImage(named: "myImage")!, tint: NSColor.red)
Swift 4

Updated answer for Swift 4. Please note this NSImage extension is based on @Ghost108's and @Taehyung_Cho's answers, so most of the credit goes to them.
extension NSImage {
    func tint(color: NSColor) -> NSImage {
        let image = self.copy() as! NSImage
        image.lockFocus()
        color.set()
        let imageRect = NSRect(origin: NSZeroPoint, size: image.size)
        imageRect.fill(using: .sourceAtop)
        image.unlockFocus()
        return image
    }
}
Swift 4 version
extension NSImage {
    func image(withTintColor tintColor: NSColor) -> NSImage {
        guard isTemplate else { return self }
        guard let copiedImage = self.copy() as? NSImage else { return self }
        copiedImage.lockFocus()
        tintColor.set()
        let imageBounds = NSMakeRect(0, 0, copiedImage.size.width, copiedImage.size.height)
        imageBounds.fill(using: .sourceAtop)
        copiedImage.unlockFocus()
        copiedImage.isTemplate = false
        return copiedImage
    }
}
I found the solution with everyone's help:
(Swift 3)
func tintedImage(_ image: NSImage, tint: NSColor) -> NSImage {
    guard let tinted = image.copy() as? NSImage else { return image }
    tinted.lockFocus()
    tint.set()
    let imageRect = NSRect(origin: NSZeroPoint, size: image.size)
    NSRectFillUsingOperation(imageRect, .sourceAtop)
    tinted.unlockFocus()
    return tinted
}

imgDok.image = tintedImage(NSImage(named: "myImage")!, tint: NSColor.red)
Important: in interface builder I had to set the "render as" setting of the image to "Default".
The other solutions don't work when the user switches between light and dark mode; this method solves that:
extension NSImage {
    func tint(color: NSColor) -> NSImage {
        return NSImage(size: size, flipped: false) { (rect) -> Bool in
            color.set()
            rect.fill()
            self.draw(in: rect, from: NSRect(origin: .zero, size: self.size), operation: .destinationIn, fraction: 1.0)
            return true
        }
    }
}
Be aware that if you use .withAlphaComponent(0.5) on an NSColor instance, that color loses support for switching between light/dark mode. I recommend using color assets to avoid that issue.
Had to modify @Ghost108's answer a little for Xcode 9.2:
NSRectFillUsingOperation(imageRect, .sourceAtop)
to
imageRect.fill(using: .sourceAtop)
Thanks.
Since your image is inside an NSImageView, the following should work fine (available since macOS 10.14):
let image = NSImage(named: "myImage")!
image.isTemplate = true
let imageView = NSImageView(image: image)
imageView.contentTintColor = .green
The solution is to apply "contentTintColor" to your NSImageView instead of the NSImage.
See: Documentation
No need to copy:
extension NSImage {
    func tint(with color: NSColor) -> NSImage {
        self.lockFocus()
        color.set()
        let srcSpacePortionRect = NSRect(origin: CGPoint(), size: self.size)
        srcSpacePortionRect.fill(using: .sourceAtop)
        self.unlockFocus()
        return self
    }
}
Since you can't use the UIImage functions, you can try using Core Image (CI). I don't know if there is an easier way, but this one will work for sure!

First you create the CIImage:
let image = CIImage(data: inputImage.tiffRepresentation!)
Now you can apply all kinds of filters and other stuff to the image; it's a really powerful tool.
The documentation for CI: https://developer.apple.com/documentation/coreimage
The Filter List: https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html
Here is a simple filter example: you initialise a filter, set its input values, and read the output.
let yourFilter = CIFilter(name: "FilterName")
yourFilter!.setValue(someInputImage, forKey: kCIInputImageKey)
yourFilter!.setValue(10, forKey: kCIInputRadiusKey)
let outputImage = yourFilter!.outputImage

Now you can just convert the output back to an NSImage.

let context = CIContext() // the original snippet assumed an existing CIContext
let cgimg = context.createCGImage(outputImage!, from: outputImage!.extent)
let processedImage = NSImage(cgImage: cgimg!, size: NSSize(width: 0, height: 0))
Try this code; it helps.
Swift 3
let theImageView = UIImageView(image: UIImage(named:"foo")!.withRenderingMode(.alwaysTemplate))
theImageView.tintColor = UIColor.red

Swift - GLKit View CIFilter Image

I am trying to use a GLKit view in order to modify an image. The class I have so far works well with all the CIFilters except CILineOverlay, which renders a black view. With any other effect it works fine.
Why is the CILineOverlay not showing?
class ImageView: GLKView {
    let clampFilter = CIFilter(name: "CIAffineClamp")!
    let blurFilter = CIFilter(name: "CILineOverlay")!
    let ciContext: CIContext

    override init(frame: CGRect) {
        let glContext = EAGLContext(API: .OpenGLES2)
        ciContext = CIContext(
            EAGLContext: glContext,
            options: [
                kCIContextWorkingColorSpace: NSNull()
            ]
        )
        super.init(frame: frame, context: glContext)
        enableSetNeedsDisplay = true
    }

    required init(coder aDecoder: NSCoder) {
        let glContext = EAGLContext(API: .OpenGLES2)
        ciContext = CIContext(
            EAGLContext: glContext,
            options: [
                kCIContextWorkingColorSpace: NSNull()
            ]
        )
        super.init(coder: aDecoder)!
        context = glContext
        enableSetNeedsDisplay = true
    }

    @IBInspectable var inputImage: UIImage? {
        didSet {
            inputCIImage = inputImage.map { CIImage(image: $0)! }
        }
    }

    @IBInspectable var blurRadius: Float = 0 {
        didSet {
            //blurFilter.setValue(blurRadius, forKey: "inputIntensity")
            setNeedsDisplay()
        }
    }

    var inputCIImage: CIImage? {
        didSet { setNeedsDisplay() }
    }

    override func drawRect(rect: CGRect) {
        if let inputCIImage = inputCIImage {
            clampFilter.setValue(inputCIImage, forKey: kCIInputImageKey)
            blurFilter.setValue(clampFilter.outputImage!, forKey: kCIInputImageKey)
            let rect = CGRect(x: 0, y: 0, width: drawableWidth, height: drawableHeight)
            ciContext.drawImage(blurFilter.outputImage!, inRect: rect, fromRect: inputCIImage.extent)
        }
    }
}
The Apple docs state: "The portions of the image that are not outlined are transparent." This means you are drawing black lines over a black background. You can simply composite the output from the filter over a white background to make the lines appear:
let background = CIImage(color: CIColor(color: UIColor.whiteColor()))
    .imageByCroppingToRect(inputCIImage.extent)
let finalImage = filter.outputImage!
    .imageByCompositingOverImage(background)

Convert UIImage (take a picture) to NSData

I used this method to take a picture.
func convertImageFromCMSampleBufferRef(sampleBuffer: CMSampleBuffer) -> CIImage {
    let pixelBuffer: CVPixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let ciImage: CIImage = CIImage(CVPixelBuffer: pixelBuffer)
    if done == true {
        newImage = UIImage(CIImage: ciImage, scale: CGFloat(1.0), orientation: .DownMirrored)
        var imageData = UIImageJPEGRepresentation(newImage, 0.6)
        var compressedJPGImage = UIImage(data: imageData)
        UIImageWriteToSavedPhotosAlbum(compressedJPGImage!, nil, nil, nil)
    }
    return ciImage
}
The code should work, but the variable imageData is nil.
I tried converting the image to PNG, but with the same result.
Print output:
newImage = , {720, 1280} imageData = nil
You must convert the CIImage to a CGImage, then the CGImage to a UIImage, and then the UIImage to NSData. (A UIImage created directly from a CIImage has no underlying bitmap, which is why UIImageJPEGRepresentation returns nil for it.)
static let context = CIContext(options: nil)

let tempImage: CGImageRef = context.createCGImage(ciImage, fromRect: ciImage.extent)
let image = UIImage(CGImage: tempImage)
let imageData: NSData? = UIImageJPEGRepresentation(image, 0.6)

Unarchive NSData back to NSColor in Swift

I have archived an NSColor to store it in NSUserDefaults:
var data = NSArchiver.archivedDataWithRootObject(NSColor.redColor())
storage.setObject(data, forKey: "color")
storage.synchronize()
But now I need to get the color back from the NSData, and I have no idea how to do that.
You just need to use if let to unwrap your NSData, and you will also need a conditional cast, as follows:
edit/update:
Swift 3 or later
// archiving
let color: NSColor = .red
let data = NSKeyedArchiver.archivedData(withRootObject: color)
UserDefaults.standard.set(data, forKey: "color")

// unarchiving
if let loadedData = UserDefaults.standard.data(forKey: "color"),
    let loadedColor = NSKeyedUnarchiver.unarchiveObject(with: loadedData) as? NSColor {
    // you can access loadedColor here
    print(loadedColor) // "sRGB IEC61966-2.1 colorspace 1 0 0 1\n"
}
Ran into some errors trying to get Leo's answer working in Swift 5, so I came up with this extension, which lets UserDefaults store and retrieve colors. Try pasting it into a Playground.
import Cocoa

extension UserDefaults {
    func set(_ color: NSColor, forKey: String) {
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: color, requiringSecureCoding: false) {
            self.set(data, forKey: forKey)
        }
    }

    func color(forKey: String) -> NSColor? {
        guard
            let storedData = self.data(forKey: forKey),
            let unarchivedData = try? NSKeyedUnarchiver.unarchivedObject(ofClass: NSColor.self, from: storedData),
            let color = unarchivedData as NSColor?
        else {
            return nil
        }
        return color
    }
}

// get defaults instance
let defaults = UserDefaults.standard

// create a color
let mycolor = NSColor(red: 0.0, green: 0.5, blue: 0.8, alpha: 0.5)

// save the color
defaults.set(mycolor, forKey: "mycolor")

// read the color back. this returns an optional, may be nil
defaults.color(forKey: "mycolor")

How to capture UIView to UIImage without loss of quality on retina display

My code works fine for normal devices but creates blurry images on retina devices.
Does anybody know a solution for my issue?
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Switch from use of UIGraphicsBeginImageContext to UIGraphicsBeginImageContextWithOptions (as documented on this page). Pass 0.0 for scale (the third argument) and you'll get a context with a scale factor equal to that of the screen.
UIGraphicsBeginImageContext uses a fixed scale factor of 1.0, so you're actually getting exactly the same image on an iPhone 4 as on the other iPhones. I'll bet either the iPhone 4 is applying a filter when you implicitly scale it up or just your brain is picking up on it being less sharp than everything around it.
So, I guess:
#import <QuartzCore/QuartzCore.h>

+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
And in Swift 4:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
The currently accepted answer is now out of date, at least if you are supporting iOS 7.
Here is what you should be using if you are only supporting iOS 7+:
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0f);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshotImage;
}
Swift 4:
func imageWithView(view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}
As per this article, you can see that the new iOS7 method drawViewHierarchyInRect:afterScreenUpdates: is many times faster than renderInContext:.
I have created a Swift extension based on @Dima's solution:
extension UIImage {
    class func imageWithView(view: UIView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0)
        view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img
    }
}
EDIT: Swift 4 improved version
extension UIImage {
    class func imageWithView(_ view: UIView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0)
        defer { UIGraphicsEndImageContext() }
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
        return UIGraphicsGetImageFromCurrentImageContext() ?? UIImage()
    }
}
Usage:
let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
let image = UIImage.imageWithView(view)
Using modern UIGraphicsImageRenderer
public extension UIView {
    @available(iOS 10.0, *)
    func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
        let rendererFormat = UIGraphicsImageRendererFormat.default()
        rendererFormat.opaque = isOpaque
        let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
        let snapshotImage = renderer.image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
        return snapshotImage
    }
}
To improve the answers by @Tommy and @Dima, use the following category to render a UIView into a UIImage with a transparent background and without loss of quality. Works on iOS 7. (Or just reuse the method in your implementation, replacing the self reference with your view.)
UIView+RenderViewToImage.h

#import <UIKit/UIKit.h>

@interface UIView (RenderViewToImage)

- (UIImage *)imageByRenderingView;

@end

UIView+RenderViewToImage.m

#import "UIView+RenderViewToImage.h"

@implementation UIView (RenderViewToImage)

- (UIImage *)imageByRenderingView
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshotImage;
}

@end
Swift 3
The Swift 3 solution (based on Dima's answer), as a UIView extension:
extension UIView {
    public func getSnapshotImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0)
        self.drawHierarchy(in: self.bounds, afterScreenUpdates: false)
        let snapshotImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return snapshotImage
    }
}
For Swift 5.1 you can use this extension:
extension UIView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { layer.render(in: $0.cgContext) }
    }
}
Drop-in Swift 3.0 extension that supports the new iOS 10.0 API and the previous method.

Notes:

- Checks the iOS version at runtime.
- Uses defer to simplify the context cleanup.
- Applies the opacity and current scale of the view.
- Nothing is force-unwrapped with !, which could cause a crash.
extension UIView {
    public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage? {
        if #available(iOS 10.0, *) {
            let rendererFormat = UIGraphicsImageRendererFormat.default()
            rendererFormat.scale = self.layer.contentsScale
            rendererFormat.opaque = self.isOpaque
            let renderer = UIGraphicsImageRenderer(size: self.bounds.size, format: rendererFormat)
            return renderer.image { _ in
                self.drawHierarchy(in: self.bounds, afterScreenUpdates: afterScreenUpdates)
            }
        } else {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, self.layer.contentsScale)
            defer { UIGraphicsEndImageContext() }
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: afterScreenUpdates)
            return UIGraphicsGetImageFromCurrentImageContext()
        }
    }
}
Swift 2.0:
Using extension method:
extension UIImage {
    class func renderUIViewToImage(viewToBeRendered: UIView?) -> UIImage {
        UIGraphicsBeginImageContextWithOptions((viewToBeRendered?.bounds.size)!, false, 0.0)
        viewToBeRendered!.drawViewHierarchyInRect(viewToBeRendered!.bounds, afterScreenUpdates: true)
        viewToBeRendered!.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage
    }
}
Usage:
override func viewDidLoad() {
    super.viewDidLoad()

    // Sample view added to self.view
    let sampleView = UIView(frame: CGRectMake(100, 100, 200, 200))
    sampleView.backgroundColor = UIColor(patternImage: UIImage(named: "ic_120x120")!)
    self.view.addSubview(sampleView)

    // Image view that will display the rendered image
    let sampleImageView = UIImageView(frame: CGRectMake(100, 400, 200, 200))

    // sampleView is rendered to sampleImage
    let sampleImage = UIImage.renderUIViewToImage(sampleView)
    sampleImageView.image = sampleImage
    self.view.addSubview(sampleImageView)
}
Swift 3.0 implementation
extension UIView {
    func getSnapshotImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        drawHierarchy(in: bounds, afterScreenUpdates: false)
        let snapshotImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return snapshotImage
    }
}
None of the Swift 3 answers worked for me, so I translated the most accepted answer:
extension UIImage {
    class func imageWithView(view: UIView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
        view.layer.render(in: UIGraphicsGetCurrentContext()!)
        let img: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }
}
Here's a Swift 4 UIView extension based on the answer from @Dima.
extension UIView {
    func snapshotImage() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        drawHierarchy(in: bounds, afterScreenUpdates: false)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
UIGraphicsImageRenderer is a relatively new API, introduced in iOS 10. You construct a UIGraphicsImageRenderer by specifying a point size. The image method takes a closure argument and returns a bitmap that results from executing the passed closure. In this case, the result is the original image scaled down to draw within the specified bounds.
https://nshipster.com/image-resizing/
So be sure the size you are passing into UIGraphicsImageRenderer is points, not pixels.
If your images are larger than you are expecting, you need to divide your size by the scale factor.
Sometimes the drawRect method causes problems, so I found these answers more appropriate. You may want to have a look too:
Capture UIImage of UIView stuck in DrawRect method
- (UIImage *)screenshotForView:(UIView *)view
{
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // hack, helps w/ our colors when blurring
    NSData *imageData = UIImageJPEGRepresentation(image, 1); // convert to jpeg
    image = [UIImage imageWithData:imageData];

    return image;
}
Just pass a view object to this method and it will return a UIImage object.
- (UIImage *)getUIImageFromView:(UIView *)yourView
{
    UIGraphicsBeginImageContext(yourView.bounds.size);
    [yourView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Add this method to a UIView category:
- (UIImage *)capture {
    UIGraphicsBeginImageContext(self.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
