Blend two NSImages using Swift on OS X

I have two instances of NSImage. I want image2 to be placed on top of image1 with a certain level of opacity. They have matching dimensions. Neither image needs to be visible in the UI.
How can I correctly set up a graphics context and draw the images to it, one with full opacity and the other semi-transparent?
I have been reading several answers here but I find it complicated, especially since most seem to be for either Objective-C or only apply to iOS. Any pointers are appreciated. If this can be accomplished without needing a CGContext at all, that would be even better.
func blendImages(image1: NSImage, image2: NSImage, alpha: CGFloat) -> CGImage {
    // Create context
    var ctx: CGContextRef = CGBitmapContextCreate(0, image1.size.width, image1.size.height, 8, image1.size.width*4, NSColorSpace.genericRGBColorSpace(), PremultipliedLast)
    let area = CGRectMake(0, 0, image1.size.width, image1.size.height)
    CGContextScaleCTM(ctx, 1, -1)
    // Draw image1 in context
    // Draw image2 with alpha opacity
    CGContextSetAlpha(ctx, alpha)
    // Create CGImage from context
    let outputImage = CGBitmapContextCreateImage(ctx)
    return outputImage
}
Elsewhere I have this extension to get CGImages from my NSImages:
extension NSImage {
    var CGImage: CGImageRef {
        get {
            let imageData = self.TIFFRepresentation
            let source = CGImageSourceCreateWithData(imageData as! CFDataRef, nil)
            let maskRef = CGImageSourceCreateImageAtIndex(source, 0, nil)
            return maskRef
        }
    }
}
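As an aside, on OS X 10.8 and later the TIFF round-trip can be skipped by asking the image for a CGImage directly. A minimal sketch (the property name is my own, chosen to avoid clashing with the extension above):
import AppKit

extension NSImage {
    /// A CGImage rendered from the receiver, or nil if conversion fails.
    var asCGImage: CGImage? {
        var rect = CGRect(origin: .zero, size: size)
        return cgImage(forProposedRect: &rect, context: nil, hints: nil)
    }
}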

I managed to solve this with NSGraphicsContext.
func mergeImagesAB(pathA: String, pathB: String, fraction: CGFloat) -> CGImage {
    guard let imgA = NSImage(byReferencingFile: pathA),
        let imgB = NSImage(byReferencingFile: pathB),
        let bitmap = imgA.representations[0] as? NSBitmapImageRep,
        let ctx = NSGraphicsContext(bitmapImageRep: bitmap)
        else { fatalError("Failed to load images.") }
    let rect = NSRect(origin: CGPoint(x: 0, y: 0), size: bitmap.size)
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.setCurrentContext(ctx)
    imgB.drawInRect(rect, fromRect: rect, operation: .CompositeSourceOver, fraction: fraction)
    NSGraphicsContext.restoreGraphicsState()
    guard let result = bitmap.CGImage
        else { fatalError("Failed to create image.") }
    return result
}
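For anyone on current toolchains, the same idea in Swift 5 might look like this (a sketch only; the function and parameter names are mine, and both images are assumed to have matching dimensions as in the question):
import AppKit

// Draws `overlay` on top of `base` at the given opacity and returns a new NSImage.
func blend(base: NSImage, overlay: NSImage, alpha: CGFloat) -> NSImage {
    return NSImage(size: base.size, flipped: false) { rect in
        base.draw(in: rect, from: .zero, operation: .copy, fraction: 1.0)
        overlay.draw(in: rect, from: .zero, operation: .sourceOver, fraction: alpha)
        return true
    }
}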

Thanks Henrik, I needed this for Obj-C. Here's that version in case someone else needs it:
-(CGImageRef) mergeImage:(NSImage*)a andB:(NSImage*)b fraction:(float)fraction {
    NSBitmapImageRep *bitmap = (NSBitmapImageRep*)[[a representations] objectAtIndex:0];
    NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
    CGRect rect = CGRectMake(0, 0, bitmap.size.width, bitmap.size.height);
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:ctx];
    [b drawInRect:rect fromRect:rect operation:NSCompositeSourceOver fraction:fraction];
    [NSGraphicsContext restoreGraphicsState];
    return [bitmap CGImage];
}

Related

How to trim an NSImage by giving bounds

I have an NSImage object and a CIDetector object that detects QR codes in that image. After detection, I want to trim the image so it contains only the QR code. This is how I get the bounds of the QR code:
NSArray *features = [myQRDetector featuresInImage:myCIImage];
CIQRCodeFeature *qrFeature = features[0];
CGRect qrBounds = qrFeature.bounds;
Now how can I trim the image so it only contains the area described by the qrBounds variable?
In Swift 5:
func trim(image: NSImage, rect: CGRect) -> NSImage {
    let result = NSImage(size: rect.size)
    result.lockFocus()
    let destRect = CGRect(origin: .zero, size: result.size)
    image.draw(in: destRect, from: rect, operation: .copy, fraction: 1.0)
    result.unlockFocus()
    return result
}
The answer from onmyway133 is great, but it doesn't preserve the data type of the source image. For instance, if your source is an .hdr image, each color channel will be a float, but the cropped image will be an 8-bit integer RGBA image.
To preserve the format of the source, it seems you have to go down to the associated CGImage. I do this:
extension NSImage {
    func cropping(to rect: CGRect) -> NSImage {
        var imageRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        guard let imageRef = self.cgImage(forProposedRect: &imageRect, context: nil, hints: nil) else {
            return NSImage(size: rect.size)
        }
        guard let crop = imageRef.cropping(to: rect) else {
            return NSImage(size: rect.size)
        }
        return NSImage(cgImage: crop, size: NSZeroSize)
    }
}
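Usage with the bounds from the question might look like this (sourceImage and qrBounds are hypothetical names). One caveat: CIDetector reports feature bounds with a bottom-left origin, while CGImage cropping measures from the top-left of the image, so you may need to flip the Y coordinate before cropping.
// sourceImage is the original NSImage; qrBounds comes from the CIQRCodeFeature
let qrOnly = sourceImage.cropping(to: qrBounds)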
You need to create a new NSImage and draw the part of the original image you want to it.
NSImage* newImage = [[NSImage alloc] initWithSize:NSSizeFromCGSize(qrBounds.size)];
[newImage lockFocus];
NSRect dest = { NSZeroPoint, newImage.size };
[origImage drawInRect:dest fromRect:NSRectFromCGRect(qrBounds) operation:NSCompositeCopy fraction:1];
[newImage unlockFocus];

How to save PNG file from NSImage (retina issues)

I'm doing some operations on images and after I'm done, I want to save the image as PNG on disk. I'm doing the following:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
    [image lockFocus];
    NSBitmapImageRep *imageRepresentation = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0.0, 0.0, image.size.width, image.size.height)];
    [image unlockFocus];
    NSData *data = [imageRepresentation representationUsingType:NSPNGFileType properties:nil];
    [data writeToFile:path atomically:YES];
}
This code works, but the problem is with Retina Macs: if I print the NSBitmapImageRep object I get different size and pixel rects, and when my image is saved on disk it is twice the size:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=600x600 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
I tried to force the pixel size to ignore the Retina scale, as I want to preserve the original size:
imageRepresentation.pixelsWide = image.size.width;
imageRepresentation.pixelsHigh = image.size.height;
This time I get the right size when I print the NSBitmapImageRep object, but when I save the file I still get the same issue:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=300x300 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
Any idea how to fix this, and preserve the original pixel size?
If you have an NSImage and want to save it as an image file to the filesystem, you should never use lockFocus! lockFocus creates a new image intended for display on the screen and nothing else. Therefore lockFocus uses the properties of the screen: 72 dpi for normal screens and 144 dpi for Retina screens. For what you want, I propose the following code:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
    CGImageRef cgRef = [image CGImageForProposedRect:NULL
                                             context:nil
                                               hints:nil];
    NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cgRef];
    [newRep setSize:[image size]]; // if you want the same resolution
    NSData *pngData = [newRep representationUsingType:NSPNGFileType properties:nil];
    [pngData writeToFile:path atomically:YES];
    [newRep autorelease];
}
NSImage is resolution aware and uses a HiDPI graphics context when you lockFocus on a system with a Retina screen.
The image dimensions you pass to your NSBitmapImageRep initializer are in points (not pixels). A 150.0-point-wide image therefore uses 300 horizontal pixels in a @2x context.
You could use convertRectToBacking: or backingScaleFactor: to compensate for the @2x context (I didn't try that), or you can use the following NSImage category, which creates a drawing context with explicit pixel dimensions:
@interface NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error;
@end

@implementation NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL*)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError*__autoreleasing*)error
{
    BOOL result = YES;
    NSImage* scalingImage = [NSImage imageWithSize:[self size] flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
        [self drawAtPoint:NSMakePoint(0.0, 0.0) fromRect:dstRect operation:NSCompositeSourceOver fraction:1.0];
        return YES;
    }];
    NSRect proposedRect = NSMakeRect(0.0, 0.0, outputSizePx.width, outputSizePx.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef cgContext = CGBitmapContextCreate(NULL, proposedRect.size.width, proposedRect.size.height, 8, 4*proposedRect.size.width, colorSpace, kCGBitmapByteOrderDefault|kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    NSGraphicsContext* context = [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext flipped:NO];
    CGContextRelease(cgContext);
    CGImageRef cgImage = [scalingImage CGImageForProposedRect:&proposedRect context:context hints:nil];
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)(URL), kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destination, cgImage, nil);
    if(!CGImageDestinationFinalize(destination))
    {
        NSDictionary* details = @{NSLocalizedDescriptionKey: @"Error writing PNG image"};
        *error = [NSError errorWithDomain:@"SSWPNGAdditionsErrorDomain" code:10 userInfo:details];
        result = NO;
    }
    CFRelease(destination);
    return result;
}
@end
I found this code on the web, and it works on Retina. Pasting it here in the hope it helps someone:
NSImage *computerImage = [NSImage imageNamed:NSImageNameComputer];
NSInteger size = 256;
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
                         initWithBitmapDataPlanes:NULL
                                       pixelsWide:size
                                       pixelsHigh:size
                                    bitsPerSample:8
                                  samplesPerPixel:4
                                         hasAlpha:YES
                                         isPlanar:NO
                                   colorSpaceName:NSCalibratedRGBColorSpace
                                      bytesPerRow:0
                                     bitsPerPixel:0];
[rep setSize:NSMakeSize(size, size)];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[computerImage drawInRect:NSMakeRect(0, 0, size, size) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *data = [rep representationUsingType:NSPNGFileType properties:nil];
Just in case anyone stumbles upon this thread: here is an admittedly flawed solution that does the job of saving the image at 1x size (image.size) regardless of device, in Swift:
public func writeToFile(path: String, atomically: Bool = true) -> Bool {
    let bitmap = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(self.size.width), pixelsHigh: Int(self.size.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)!
    bitmap.size = self.size
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.setCurrentContext(NSGraphicsContext(bitmapImageRep: bitmap))
    self.drawAtPoint(CGPoint.zero, fromRect: NSRect.zero, operation: NSCompositingOperation.CompositeSourceOver, fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()
    if let imagePNGData = bitmap.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [NSImageCompressionFactor: 1.0]) {
        return imagePNGData.writeToFile((path as NSString).stringByStandardizingPath, atomically: atomically)
    } else {
        return false
    }
}
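Usage, assuming the method above lives in an NSImage extension (the path is just an example):
// Swift 2 era call, matching the snippet above
let saved = image.writeToFile("~/Desktop/out.png")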
Here's a Swift 5 version based on Heinrich Giesen's answer:
static func saveImage(_ image: NSImage, atUrl url: URL) {
    guard
        let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)
        else { return } // TODO: handle error
    let newRep = NSBitmapImageRep(cgImage: cgImage)
    newRep.size = image.size // if you want the same size
    guard
        let pngData = newRep.representation(using: .png, properties: [:])
        else { return } // TODO: handle error
    do {
        try pngData.write(to: url)
    }
    catch {
        print("error saving: \(error)")
    }
}
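Calling it is then a one-liner (the URL is just an example; drop the `static` or qualify with the enclosing type as appropriate):
// `image` is the NSImage to be written to disk
saveImage(image, atUrl: URL(fileURLWithPath: "/tmp/output.png"))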
My 2 cents for OS X, including a write routine that handles file extensions, plus offscreen image drawing (method 2); one can verify which context is in use with NSGraphicsContext.currentContextDrawingToScreen().
func createCGImage() -> CGImage? {
    // Method 1: render the view through an NSImage drawing handler
    let image = NSImage(size: NSSize(width: bounds.width, height: bounds.height), flipped: true, drawingHandler: { rect in
        self.drawRect(self.bounds)
        return true
    })
    var rect = CGRectMake(0, 0, bounds.size.width, bounds.size.height)
    return image.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)

    // Method 2 (alternative): go through a PDF representation instead
    // if let pdfRep = NSPDFImageRep(data: dataWithPDFInsideRect(bounds)) {
    //     return pdfRep.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)
    // }
    // return nil
}
func PDFImageData(filter: QuartzFilter?) -> NSData? {
    return dataWithPDFInsideRect(bounds)
}

func bitmapContext() -> NSGraphicsContext? {
    var context: NSGraphicsContext? = nil
    if let imageRep = NSBitmapImageRep(bitmapDataPlanes: nil,
        pixelsWide: Int(bounds.size.width),
        pixelsHigh: Int(bounds.size.height), bitsPerSample: 8,
        samplesPerPixel: 4, hasAlpha: true, isPlanar: false,
        colorSpaceName: NSCalibratedRGBColorSpace,
        bytesPerRow: Int(bounds.size.width) * 4,
        bitsPerPixel: 32) {
        imageRep.size = NSSize(width: bounds.size.width, height: bounds.size.height)
        context = NSGraphicsContext(bitmapImageRep: imageRep)
    }
    return context
}

func writeImageData(view: MyView, destination: NSURL) {
    if let dest = CGImageDestinationCreateWithURL(destination, imageUTType, 1, nil) {
        let properties = imageProperties
        let image = view.createCGImage()!
        let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
        dispatch_async(queue) {
            CGImageDestinationAddImage(dest, image, properties)
            CGImageDestinationFinalize(dest)
        }
    }
}

What is the best Core Image filter to produce black and white effects?

I am using Core Image and would like to produce a black and white effect on the chosen image.
Ideally I would like to have access to the same sort of options that are available on Photoshop i.e. Reds, Cyan, Greens, Blues and Magenta. The goal being to create different types of the black and white effect.
Does anyone know what filter would be best to manipulate these sort of options? If not does anyone know of a good approach to creating the black and white effect using other filters?
Thanks
Oliver
- (UIImage *)imageBlackAndWhite
{
    CIImage *beginImage = [CIImage imageWithCGImage:self.CGImage];
    CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
    CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, @"inputEV", [NSNumber numberWithFloat:0.7], nil].outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(cgiimage);
    return newImage;
}
Update: for iOS 6 there is the CIColorMonochrome filter, but I played with it and found it not as good as the approach above.
Here is an example with CIColorMonochrome:
- (UIImage *)imageBlackAndWhite
{
    CIImage *beginImage = [CIImage imageWithCGImage:self.CGImage];
    CIImage *output = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:kCIInputImageKey, beginImage, @"inputIntensity", [NSNumber numberWithFloat:1.0], @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]], nil].outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(cgiimage);
    return newImage;
}
To create a pure monochrome effect, I’ve used CIColorMatrix with the R, G and B vector parameters all set to (0.2125, 0.7154, 0.0721, 0), and the alpha and bias vectors left with their defaults.
The values are RGB to greyscale conversion coefficients I looked up on the internets at some point. By changing these coefficients, you can change the contribution of the input channels. By scaling each copy of the vector, and optionally setting a bias vector, you can colourize the output.
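A Swift sketch of that setup, assuming the same luma coefficients (CIColorMatrix and its input keys are standard Core Image; the function name is mine):
import CoreImage

// Greyscale via CIColorMatrix: the same luma weights go into the R, G and B vectors,
// so every output channel becomes 0.2125*R + 0.7154*G + 0.0721*B.
func monochrome(_ input: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIColorMatrix") else { return nil }
    let luma = CIVector(x: 0.2125, y: 0.7154, z: 0.0721, w: 0)
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(luma, forKey: "inputRVector")
    filter.setValue(luma, forKey: "inputGVector")
    filter.setValue(luma, forKey: "inputBVector")
    // Alpha and bias vectors keep their defaults, as described above.
    return filter.outputImage
}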
Here is the top rated solution converted to Swift (iOS 7 and above):
func blackAndWhiteImage(image: UIImage) -> UIImage {
    let context = CIContext(options: nil)
    let ciImage = CoreImage.CIImage(image: image)!
    // Set image color to b/w
    let bwFilter = CIFilter(name: "CIColorControls")!
    bwFilter.setValuesForKeysWithDictionary([kCIInputImageKey: ciImage, kCIInputBrightnessKey: NSNumber(float: 0.0), kCIInputContrastKey: NSNumber(float: 1.1), kCIInputSaturationKey: NSNumber(float: 0.0)])
    let bwFilterOutput = (bwFilter.outputImage)!
    // Adjust exposure
    let exposureFilter = CIFilter(name: "CIExposureAdjust")!
    exposureFilter.setValuesForKeysWithDictionary([kCIInputImageKey: bwFilterOutput, kCIInputEVKey: NSNumber(float: 0.7)])
    let exposureFilterOutput = (exposureFilter.outputImage)!
    // Create UIImage from context
    let bwCGIImage = context.createCGImage(exposureFilterOutput, fromRect: ciImage.extent)
    let resultImage = UIImage(CGImage: bwCGIImage, scale: 1.0, orientation: image.imageOrientation)
    return resultImage
}
With regard to the answers suggesting CIColorMonochrome: there are now a few dedicated grayscale filters available from iOS 7 (and OS X 10.9):
CIPhotoEffectTonal: imitates black-and-white photography film without significantly altering contrast.
CIPhotoEffectNoir: imitates black-and-white photography film with exaggerated contrast.
Source: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html
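Applying one of them takes only a few lines; a sketch for CIPhotoEffectNoir (the function name is mine):
import CoreImage
import UIKit

// Runs the dedicated CIPhotoEffectNoir filter over a UIImage.
func noir(_ image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}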
Here is the most liked answer from @Shmidt written as a UIImage extension with a performance update in Swift:
import CoreImage

extension UIImage
{
    func imageBlackAndWhite() -> UIImage?
    {
        if let beginImage = CoreImage.CIImage(image: self)
        {
            let paramsColor: [String: AnyObject] = [kCIInputBrightnessKey: NSNumber(double: 0.0),
                                                    kCIInputContrastKey: NSNumber(double: 1.1),
                                                    kCIInputSaturationKey: NSNumber(double: 0.0)]
            let blackAndWhite = beginImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)
            let paramsExposure: [String: AnyObject] = [kCIInputEVKey: NSNumber(double: 0.7)]
            let output = blackAndWhite.imageByApplyingFilter("CIExposureAdjust", withInputParameters: paramsExposure)
            let processedCGImage = CIContext().createCGImage(output, fromRect: output.extent)
            return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
        }
        return nil
    }
}
macOS (NSImage) Swift 3 version of @FBente's conversion of @Shmidt's answer:
extension NSImage
{
    func imageBlackAndWhite() -> NSImage?
    {
        if let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil)
        {
            let beginImage = CIImage.init(cgImage: cgImage)
            let paramsColor: [String: AnyObject] = [kCIInputBrightnessKey: NSNumber(value: 0.0),
                                                    kCIInputContrastKey: NSNumber(value: 1.1),
                                                    kCIInputSaturationKey: NSNumber(value: 0.0)]
            let blackAndWhite = beginImage.applyingFilter("CIColorControls", withInputParameters: paramsColor)
            let paramsExposure: [String: AnyObject] = [kCIInputEVKey: NSNumber(value: 0.7)]
            let output = blackAndWhite.applyingFilter("CIExposureAdjust", withInputParameters: paramsExposure)
            if let processedCGImage = CIContext().createCGImage(output, from: output.extent) {
                return NSImage(cgImage: processedCGImage, size: self.size)
            }
        }
        return nil
    }
}
I tried Shmidt's solution, but it appears too overexposed to me on an iPad Pro. I am using just the first part of his solution, without the exposure filter:
- (UIImage *)imageBlackAndWhite
{
    CIImage *beginImage = [CIImage imageWithCGImage:self.CGImage];
    CIImage *output = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, beginImage, @"inputBrightness", [NSNumber numberWithFloat:0.0], @"inputContrast", [NSNumber numberWithFloat:1.1], @"inputSaturation", [NSNumber numberWithFloat:0.0], nil].outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgiimage = [context createCGImage:output fromRect:output.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(cgiimage);
    return newImage;
}

UIImage+Resize crashes without memory warning

Here is my resize routine.
In the line with CGContextDrawImage the app crashes after 2 or 3 calls of the routine.
There is no memory warning or anything; it just crashes.
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;
    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));
    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);
    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);
    // Draw into the context; this scales the image
    //////////// AFTER THIS IT CRASHES!!!
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);
    return newImage;
}
Does anyone have a hint?
Regards, Phil
Maybe you should add an autorelease pool (@autoreleasepool { ... } under ARC)?
I think it would help.
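In Swift the same idea is wrapping each iteration in autoreleasepool so the temporary image objects are drained between resizes; a sketch (imageURLs, resize, and process are hypothetical):
for url in imageURLs {
    autoreleasepool {
        // Temporaries created here are released when the pool drains,
        // instead of accumulating across the whole loop.
        if let image = UIImage(contentsOfFile: url.path) {
            process(resize(image))
        }
    }
}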

UISegmentedControl image scaling

How can I scale down the images used in a UISegmentedControl? I am creating the segmented control programmatically:
UISegmentedControl *segmentButton;
segmentButton = [UISegmentedControl segmentedControlWithItems:
                 [NSArray arrayWithObjects:
                  [UIImage imageNamed:@"option_one.png"],
                  [UIImage imageNamed:@"option_two.png"],
                  nil]];
segmentButton.contentMode = UIViewContentModeScaleToFill;
segmentButton.frame = CGRectMake(10, 10, 200, 32);
[view addSubview:segmentButton];
The result is not what I expect. The original .png images are about 100 pixels high, and they are not scaled down to fit the 32-pixel height of the segmented control. This results in a segmented control drawn with enormous images overlapping it.
How can I tell the control to scale down those images?
You should never use a "big" image to display only a small pictogram. The full image will be loaded into memory, yet only 10% of its pixels will be displayed, so you waste a lot of memory for nothing.
What you can do, if you really want to use this resource, is create a thumbnail in code first and use the newly generated thumbnail.
The following method returns a new image you can use in your UISegmentedControl, and you can release the big one.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
With your code:
UISegmentedControl *segmentButton;
segmentButton = [UISegmentedControl segmentedControlWithItems:[NSArray arrayWithObjects:
                 [self imageWithImage:[UIImage imageNamed:@"option_one.png"] scaledToSize:CGSizeMake(32, 32)],
                 [self imageWithImage:[UIImage imageNamed:@"option_two.png"] scaledToSize:CGSizeMake(32, 32)],
                 nil]];
segmentButton.contentMode = UIViewContentModeScaleToFill;
segmentButton.frame = CGRectMake(10, 10, 200, 32);
[view addSubview:segmentButton];
In Swift 3:
extension UIImage {
    func scaleImage(scaleToSize: CGSize) -> UIImage {
        UIGraphicsBeginImageContext(scaleToSize)
        self.draw(in: CGRect(origin: CGPoint(x: 0, y: 0), size: scaleToSize))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
For iOS 14 and later: UIGraphicsGetImageFromCurrentImageContext made some images come out less sharp, so I changed to UIGraphicsImageRenderer, which works fine for me.
extension UIImage {
    func resizedImage(to targetSize: CGSize) -> UIImage? {
        let render = UIGraphicsImageRenderer(size: targetSize)
        return render.image { ctx in
            self.draw(in: .init(origin: .zero, size: targetSize))
        }
    }
}

class SomeView: UIView {
    func initSegmentedControl() {
        exampleSegmentedControl = UISegmentedControl(items: [
            leftImage.resizedImage(to: .init(width: 15, height: 15))!,
            rightImage.resizedImage(to: .init(width: 15, height: 15))!
        ])
    }
}
