How to trim an NSImage given bounds - macOS

I have an NSImage object and a CIDetector object that detects QR codes in that image. After detection, I want to trim the image so that it contains only the QR code. This is how I get the bounds of the QR code:
NSArray *features = [myQRDetector featuresInImage:myCIImage];
CIQRCodeFeature *qrFeature = features[0];
CGRect qrBounds = qrFeature.bounds;
Now how can I trim the image so it contains only the area described by the qrBounds variable?

In Swift 5
func trim(image: NSImage, rect: CGRect) -> NSImage {
    let result = NSImage(size: rect.size)
    result.lockFocus()
    let destRect = CGRect(origin: .zero, size: result.size)
    image.draw(in: destRect, from: rect, operation: .copy, fraction: 1.0)
    result.unlockFocus()
    return result
}
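Usage might look like this (my sketch, not part of the original answer; sourceImage stands in for the NSImage from the question):

// Hypothetical usage: crop the source image to the detected QR bounds.
// Note: CIDetector reports bounds in pixels with a bottom-left origin;
// for a 1x image this matches NSImage's point coordinates directly,
// but for retina sources you may need to scale the rect first.
let qrImage = trim(image: sourceImage, rect: qrBounds)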

The answer from onmyway133 is great, but it doesn't preserve the datatype of the source image. For instance, if your source is an .hdr image, each color channel will be a float, but the cropped image will be an 8-bit-integer RGBA image.
To preserve the format of the source, it seems you have to go down to the associated CGImage. I do this:
extension NSImage {
    func cropping(to rect: CGRect) -> NSImage {
        var imageRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        guard let imageRef = self.cgImage(forProposedRect: &imageRect, context: nil, hints: nil) else {
            return NSImage(size: rect.size)
        }
        guard let crop = imageRef.cropping(to: rect) else {
            return NSImage(size: rect.size)
        }
        return NSImage(cgImage: crop, size: NSZeroSize)
    }
}
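A caveat worth adding (my note, not from the original answer): CGImage.cropping(to:) takes a rect in pixel coordinates with the origin at the top-left pixel, while the qrBounds returned by CIDetector are bottom-left based. Assuming a 1x image where points equal pixels, a sketch of the flip could be:

// Hypothetical: flip the detector's bottom-left rect into the
// top-left pixel space that CGImage.cropping(to:) expects.
let flipped = CGRect(x: qrBounds.origin.x,
                     y: sourceImage.size.height - qrBounds.maxY,
                     width: qrBounds.width,
                     height: qrBounds.height)
let qrImage = sourceImage.cropping(to: flipped)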

You need to create a new NSImage and draw the part of the original image you want into it.
NSImage* newImage = [[NSImage alloc] initWithSize:NSSizeFromCGSize(qrBounds.size)];
[newImage lockFocus];
NSRect dest = { NSZeroPoint, newImage.size };
[origImage drawInRect:dest fromRect:NSRectFromCGRect(qrBounds) operation:NSCompositeCopy fraction:1];
[newImage unlockFocus];

Related

Set image color of a template image

I have an image like this:
(Rendered as a template image)
I tried this code:
@IBOutlet weak var imgAdd: NSImageView!
imgAdd.layer?.backgroundColor = CGColor.white
Which only changes the background color, of course.
Is there a way to change the color of this image programmatically?
So far I've tried the code below, which doesn't work. (The image color doesn't change.)
func tintedImage(_ image: NSImage, tint: NSColor) -> NSImage {
    guard let tinted = image.copy() as? NSImage else { return image }
    tinted.lockFocus()
    tint.set()
    let imageRect = NSRect(origin: NSZeroPoint, size: image.size)
    NSRectFillUsingOperation(imageRect, .sourceAtop)
    tinted.unlockFocus()
    return tinted
}
imgDok.image = tintedImage(NSImage(named: "myImage")!, tint: NSColor.red)
Updated answer for Swift 4.
Please note, this NSImage extension is based on @Ghost108's and @Taehyung_Cho's answers, so most of the credit goes to them.
extension NSImage {
    func tint(color: NSColor) -> NSImage {
        let image = self.copy() as! NSImage
        image.lockFocus()
        color.set()
        let imageRect = NSRect(origin: NSZeroPoint, size: image.size)
        imageRect.fill(using: .sourceAtop)
        image.unlockFocus()
        return image
    }
}
Swift 4 version
extension NSImage {
    func image(withTintColor tintColor: NSColor) -> NSImage {
        guard isTemplate else { return self }
        guard let copiedImage = self.copy() as? NSImage else { return self }
        copiedImage.lockFocus()
        tintColor.set()
        let imageBounds = NSMakeRect(0, 0, copiedImage.size.width, copiedImage.size.height)
        imageBounds.fill(using: .sourceAtop)
        copiedImage.unlockFocus()
        copiedImage.isTemplate = false
        return copiedImage
    }
}
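Note the isTemplate guard above: the method returns the image unchanged unless it is marked as a template. A quick usage sketch (mine; the asset name "myImage" is from the question):

let icon = NSImage(named: "myImage")!
icon.isTemplate = true // without this, the guard above returns self untinted
let redIcon = icon.image(withTintColor: .systemRed)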
I found the solution with everyone's help:
(Swift 3)
func tintedImage(_ image: NSImage, tint: NSColor) -> NSImage {
    guard let tinted = image.copy() as? NSImage else { return image }
    tinted.lockFocus()
    tint.set()
    let imageRect = NSRect(origin: NSZeroPoint, size: image.size)
    NSRectFillUsingOperation(imageRect, .sourceAtop)
    tinted.unlockFocus()
    return tinted
}
imgDok.image = tintedImage(NSImage(named: "myImage")!, tint: NSColor.red)
Important: in Interface Builder I had to set the "Render As" setting of the image to "Default".
The other solutions don't work when the user wants to change between light and dark mode; this method solves that:
extension NSImage {
    func tint(color: NSColor) -> NSImage {
        return NSImage(size: size, flipped: false) { (rect) -> Bool in
            color.set()
            rect.fill()
            self.draw(in: rect, from: NSRect(origin: .zero, size: self.size), operation: .destinationIn, fraction: 1.0)
            return true
        }
    }
}
Be aware that if you use .withAlphaComponent(0.5) on an NSColor instance, that color loses support for switching between light/dark mode. I recommend using color assets to avoid that issue.
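Usage can then be a one-liner (a sketch, assuming an imageView outlet and an asset named "myImage"):

// .labelColor is a dynamic system color; because the drawingHandler
// runs each time the image is drawn, the tint follows light/dark mode.
imageView.image = NSImage(named: "myImage")!.tint(color: .labelColor)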
Had to modify @Ghost108's answer a little bit for Xcode 9.2:
NSRectFillUsingOperation(imageRect, .sourceAtop)
to
imageRect.fill(using: .sourceAtop)
Thanks.
Since your image is inside an NSImageView, the following should work fine (available since macOS 10.14):
let image = NSImage(named: "myImage")!
image.isTemplate = true
let imageView = NSImageView(image: image)
imageView.contentTintColor = .green
The solution is to apply "contentTintColor" to your NSImageView instead of the NSImage.
See: Documentation
No need to copy:
extension NSImage {
    func tint(with color: NSColor) -> NSImage {
        self.lockFocus()
        color.set()
        let srcSpacePortionRect = NSRect(origin: CGPoint(), size: self.size)
        srcSpacePortionRect.fill(using: .sourceAtop)
        self.unlockFocus()
        return self
    }
}
Since you can't use the UIImage functions, you can try using Core Image (CI). I don't know if there is an easier way, but this one will work for sure!
First you create the CIImage:
let image = CIImage(data: inputImage.tiffRepresentation!)
Now you can apply all kinds of filters and other stuff to the image, it's a really powerful tool.
The documentation for CI: https://developer.apple.com/documentation/coreimage
The Filter List: https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html
Here is a simple filter example; you basically initialise a filter, set its values, take the output, and repeat.
let yourFilter = CIFilter(name: "FilterName")
yourFilter!.setValue(SomeInputImage, forKey: kCIInputImageKey)
yourFilter!.setValue(10, forKey: kCIInputRadiusKey)
let yourOutput = yourFilter!.outputImage
Now you can just convert the output back to an NSImage (context here is a CIContext, e.g. let context = CIContext()):
let cgimg = context.createCGImage(yourOutput!, from: yourOutput!.extent)
let processedImage = NSImage(cgImage: cgimg!, size: NSSize(width: 0, height: 0))
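Put together, a minimal end-to-end sketch might look like this (my example; CIGaussianBlur is just an illustrative filter choice):

import AppKit
import CoreImage

// A minimal sketch: NSImage -> CIImage -> filter -> CGImage -> NSImage.
func blurred(_ inputImage: NSImage, radius: Double) -> NSImage? {
    guard let tiff = inputImage.tiffRepresentation,
          let ciImage = CIImage(data: tiff),
          let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)
    let context = CIContext() // the `context` assumed by the snippet above
    guard let output = filter.outputImage,
          // use the input's extent: blur pads the output extent outward
          let cgImage = context.createCGImage(output, from: ciImage.extent) else { return nil }
    return NSImage(cgImage: cgImage, size: .zero)
}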
Try this code; it helps.
Swift 3
let theImageView = UIImageView(image: UIImage(named:"foo")!.withRenderingMode(.alwaysTemplate))
theImageView.tintColor = UIColor.red

Blend two NSImages using Swift on OS X

I have two instances of NSImage. I want image2 to be placed on top of image1, with a certain level of opacity. They are of matching dimensions. Neither of the images need to be visible in the UI.
How can I correctly set up a graphics context and draw images to it, one with full opacity and another semi-transparent?
I have been reading several answers here but I find it complicated, especially since most seem to be either for Objective-C or only apply to iOS. Any pointers are appreciated. If this can be accomplished without needing a CGContext at all, that would be even better.
func blendImages(image1: NSImage, image2: NSImage, alpha: CGFloat) -> CGImage {
    // Create context (sketch; the arguments here are still not quite right)
    let ctx: CGContextRef = CGBitmapContextCreate(nil, Int(image1.size.width), Int(image1.size.height), 8, Int(image1.size.width) * 4, CGColorSpaceCreateDeviceRGB(), CGImageAlphaInfo.PremultipliedLast.rawValue)
    let area = CGRectMake(0, 0, image1.size.width, image1.size.height)
    CGContextScaleCTM(ctx, 1, -1)
    // Draw image1 in context
    // Draw image2 with alpha opacity
    CGContextSetAlpha(ctx, alpha)
    // Create CGImage from context
    let outputImage = CGBitmapContextCreateImage(ctx)
    return outputImage
}
Elsewhere I have this extension to get CGImages from my NSImages:
extension NSImage {
    var CGImage: CGImageRef {
        get {
            let imageData = self.TIFFRepresentation
            let source = CGImageSourceCreateWithData(imageData as! CFDataRef, nil)
            let maskRef = CGImageSourceCreateImageAtIndex(source, 0, nil)
            return maskRef
        }
    }
}
I managed to solve this with NSGraphicsContext.
func mergeImagesAB(pathA: String, pathB: String, fraction: CGFloat) -> CGImage {
    guard let imgA = NSImage(byReferencingFile: pathA),
          let imgB = NSImage(byReferencingFile: pathB),
          let bitmap = imgA.representations[0] as? NSBitmapImageRep,
          let ctx = NSGraphicsContext(bitmapImageRep: bitmap)
        else { fatalError("Failed to load images.") }
    let rect = NSRect(origin: CGPoint(x: 0, y: 0), size: bitmap.size)
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.setCurrentContext(ctx)
    imgB.drawInRect(rect, fromRect: rect, operation: .CompositeSourceOver, fraction: fraction)
    NSGraphicsContext.restoreGraphicsState()
    guard let result = bitmap.CGImage
        else { fatalError("Failed to create image.") }
    return result
}
Thanks Henrik, I needed this for Obj-C. Here's that version in case someone else needs it:
- (CGImageRef)mergeImage:(NSImage *)a andB:(NSImage *)b fraction:(float)fraction {
    NSBitmapImageRep *bitmap = (NSBitmapImageRep *)[[a representations] objectAtIndex:0];
    NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
    CGRect rect = CGRectMake(0, 0, bitmap.size.width, bitmap.size.height);
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:ctx];
    [b drawInRect:rect fromRect:rect operation:NSCompositeSourceOver fraction:fraction];
    [NSGraphicsContext restoreGraphicsState];
    return [bitmap CGImage];
}
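For reference, here is how the same blend could be written in current Swift using the drawingHandler initializer that appears elsewhere on this page (a sketch of mine, assuming both images have matching dimensions as stated in the question):

func blendImages(_ base: NSImage, _ overlay: NSImage, alpha: CGFloat) -> NSImage {
    return NSImage(size: base.size, flipped: false) { rect in
        // Draw the base at full opacity, then the overlay on top.
        base.draw(in: rect, from: .zero, operation: .copy, fraction: 1.0)
        overlay.draw(in: rect, from: .zero, operation: .sourceOver, fraction: alpha)
        return true
    }
}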

How to save PNG file from NSImage (retina issues)

I'm doing some operations on images and after I'm done, I want to save the image as PNG on disk. I'm doing the following:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
    [image lockFocus];
    NSBitmapImageRep *imageRepresentation = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0.0, 0.0, image.size.width, image.size.height)];
    [image unlockFocus];
    NSData *data = [imageRepresentation representationUsingType:NSPNGFileType properties:nil];
    [data writeToFile:path atomically:YES];
}
This code is working, but the problem is with retina Macs: if I print the NSBitmapImageRep object I get different size and pixel rects, and when my image is saved on disk, it's twice the size:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=600x600 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
I tried to force the pixel size to ignore the retina scale, as I want to preserve the original size:
imageRepresentation.pixelsWide = image.size.width;
imageRepresentation.pixelsHigh = image.size.height;
This time I get the right size when I print the NSBitmapImageRep object, but when I save my file I still get the same issue:
$0 = 0x0000000100413890 NSBitmapImageRep 0x100413890 Size={300, 300} ColorSpace=sRGB IEC61966-2.1 colorspace BPS=8 BPP=32 Pixels=300x300 Alpha=YES Planar=NO Format=0 CurrentBacking=<CGImageRef: 0x100414830>
Any idea how to fix this and preserve the original pixel size?
If you have an NSImage and want to save it as an image file to the filesystem, you should never use lockFocus! lockFocus creates a new image intended for display on the screen and nothing else. Therefore lockFocus uses the properties of the screen: 72 dpi for normal screens and 144 dpi for retina screens. For what you want, I propose the following code:
+ (void)saveImage:(NSImage *)image atPath:(NSString *)path {
    CGImageRef cgRef = [image CGImageForProposedRect:NULL
                                             context:nil
                                               hints:nil];
    NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cgRef];
    [newRep setSize:[image size]]; // if you want the same resolution
    NSData *pngData = [newRep representationUsingType:NSPNGFileType properties:nil];
    [pngData writeToFile:path atomically:YES];
    [newRep autorelease];
}
NSImage is resolution aware and uses a HiDPI graphics context when you lockFocus on a system with a retina screen.
The image dimensions you pass to your NSBitmapImageRep initializer are in points (not pixels). A 150.0-point-wide image therefore uses 300 horizontal pixels in a @2x context.
You could use convertRectToBacking: or backingScaleFactor to compensate for the @2x context (I didn't try that), or you can use the following NSImage category, which creates a drawing context with explicit pixel dimensions:
@interface NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL *)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError *__autoreleasing *)error;
@end

@implementation NSImage (SSWPNGAdditions)
- (BOOL)writePNGToURL:(NSURL *)URL outputSizeInPixels:(NSSize)outputSizePx error:(NSError *__autoreleasing *)error
{
    BOOL result = YES;
    NSImage *scalingImage = [NSImage imageWithSize:[self size] flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
        [self drawAtPoint:NSMakePoint(0.0, 0.0) fromRect:dstRect operation:NSCompositeSourceOver fraction:1.0];
        return YES;
    }];
    NSRect proposedRect = NSMakeRect(0.0, 0.0, outputSizePx.width, outputSizePx.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef cgContext = CGBitmapContextCreate(NULL, proposedRect.size.width, proposedRect.size.height, 8, 4 * proposedRect.size.width, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext flipped:NO];
    CGContextRelease(cgContext);
    CGImageRef cgImage = [scalingImage CGImageForProposedRect:&proposedRect context:context hints:nil];
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)(URL), kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destination, cgImage, nil);
    if (!CGImageDestinationFinalize(destination))
    {
        NSDictionary *details = @{NSLocalizedDescriptionKey: @"Error writing PNG image"};
        *error = [NSError errorWithDomain:@"SSWPNGAdditionsErrorDomain" code:10 userInfo:details];
        result = NO;
    }
    CFRelease(destination);
    return result;
}
@end
I found this code on the web, and it works on retina. Pasting it here in the hope it helps someone.
NSImage *computerImage = [NSImage imageNamed:NSImageNameComputer];
NSInteger size = 256;
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
                         initWithBitmapDataPlanes:NULL
                                       pixelsWide:size
                                       pixelsHigh:size
                                    bitsPerSample:8
                                  samplesPerPixel:4
                                         hasAlpha:YES
                                         isPlanar:NO
                                   colorSpaceName:NSCalibratedRGBColorSpace
                                      bytesPerRow:0
                                     bitsPerPixel:0];
[rep setSize:NSMakeSize(size, size)];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[computerImage drawInRect:NSMakeRect(0, 0, size, size) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *data = [rep representationUsingType:NSPNGFileType properties:nil];
Just in case anyone stumbles upon this thread: here is an admittedly flawed solution that does the job of saving the image at 1x size (image.size) regardless of device, in Swift:
public func writeToFile(path: String, atomically: Bool = true) -> Bool {
    let bitmap = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(self.size.width), pixelsHigh: Int(self.size.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)!
    bitmap.size = self.size
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.setCurrentContext(NSGraphicsContext(bitmapImageRep: bitmap))
    self.drawAtPoint(CGPoint.zero, fromRect: NSRect.zero, operation: NSCompositingOperation.CompositeSourceOver, fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()
    if let imagePNGData = bitmap.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [NSImageCompressionFactor: 1.0]) {
        return imagePNGData.writeToFile((path as NSString).stringByStandardizingPath, atomically: atomically)
    } else {
        return false
    }
}
Here's a Swift 5 version based on Heinrich Giesen's answer:
static func saveImage(_ image: NSImage, atUrl url: URL) {
    guard
        let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)
        else { return } // TODO: handle error
    let newRep = NSBitmapImageRep(cgImage: cgImage)
    newRep.size = image.size // if you want the same size
    guard
        let pngData = newRep.representation(using: .png, properties: [:])
        else { return } // TODO: handle error
    do {
        try pngData.write(to: url)
    }
    catch {
        print("error saving: \(error)")
    }
}
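Calling it is then straightforward (a sketch; the image and path are illustrative):

// Assumes myImage is an NSImage and the call site can see saveImage.
saveImage(myImage, atUrl: URL(fileURLWithPath: "/tmp/output.png"))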
My two cents for OS X: a write routine that handles different file types, plus offscreen image drawing (method 2); one can verify which context is in use with NSGraphicsContext.currentContextDrawingToScreen().
func createCGImage() -> CGImage? {
    // Method 1 (use either this or method 2 below)
    let image = NSImage(size: NSSize(width: bounds.width, height: bounds.height), flipped: true, drawingHandler: { rect in
        self.drawRect(self.bounds)
        return true
    })
    var rect = CGRectMake(0, 0, bounds.size.width, bounds.size.height)
    return image.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)

    // Method 2 (unreachable as written; remove the return above to use it)
    if let pdfRep = NSPDFImageRep(data: dataWithPDFInsideRect(bounds)) {
        return pdfRep.CGImageForProposedRect(&rect, context: bitmapContext(), hints: nil)
    }
    return nil
}
func PDFImageData(filter: QuartzFilter?) -> NSData? {
    return dataWithPDFInsideRect(bounds)
}

func bitmapContext() -> NSGraphicsContext? {
    var context: NSGraphicsContext? = nil
    if let imageRep = NSBitmapImageRep(bitmapDataPlanes: nil,
        pixelsWide: Int(bounds.size.width),
        pixelsHigh: Int(bounds.size.height), bitsPerSample: 8,
        samplesPerPixel: 4, hasAlpha: true, isPlanar: false,
        colorSpaceName: NSCalibratedRGBColorSpace,
        bytesPerRow: Int(bounds.size.width) * 4,
        bitsPerPixel: 32) {
        imageRep.size = NSSize(width: bounds.size.width, height: bounds.size.height)
        context = NSGraphicsContext(bitmapImageRep: imageRep)
    }
    return context
}
func writeImageData(view: MyView, destination: NSURL) {
    if let dest = CGImageDestinationCreateWithURL(destination, imageUTType, 1, nil) {
        let properties = imageProperties
        let image = view.createCGImage()!
        let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
        dispatch_async(queue) {
            CGImageDestinationAddImage(dest, image, properties)
            CGImageDestinationFinalize(dest)
        }
    }
}

NSImage size not real size with some pictures?

I see that sometimes the NSImage size is not the real size (with some pictures), while the CIImage size is always real. I was testing with this image.
This is source code which I wrote for testing:
NSImage *_imageNSImage = [[NSImage alloc] initWithContentsOfFile:@"<path to image>"];
NSSize _dimensions = [_imageNSImage size];
[_imageNSImage release];
NSLog(@"Width from NSImage: %f", _dimensions.width);
NSLog(@"Height from NSImage: %f", _dimensions.height);
NSURL *_myURL = [NSURL fileURLWithPath:@"<path to image>"];
CIImage *_imageCIImage = [CIImage imageWithContentsOfURL:_myURL];
NSRect _rectFromCIImage = [_imageCIImage extent];
NSLog(@"Width from CIImage: %f", _rectFromCIImage.size.width);
NSLog(@"Height from CIImage: %f", _rectFromCIImage.size.height);
And the output is:
So how can that be? Am I doing something wrong?
NSImage's size method returns size information that is screen-resolution dependent. To get the size represented in the actual image file you need to use an NSImageRep. You can get an NSImageRep from an NSImage using the representations method. Alternatively you can create NSBitmapImageRep instances directly, like this:
NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:@"<path to image>"];
NSInteger width = 0;
NSInteger height = 0;
for (NSImageRep *imageRep in imageReps) {
    if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
    if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
}
NSLog(@"Width from NSBitmapImageRep: %f", (CGFloat)width);
NSLog(@"Height from NSBitmapImageRep: %f", (CGFloat)height);
The loop takes into account that some image formats may contain more than a single image (such as TIFFs for example).
You can create an NSImage at this size by using the following:
NSImage * imageNSImage = [[NSImage alloc] initWithSize:NSMakeSize((CGFloat)width, (CGFloat)height)];
[imageNSImage addRepresentations:imageReps];
NSImage's size method returns the size in points. To get the size in pixels, you need to inspect the NSImage.representations property, which contains an array of NSImageRep objects with pixelsWide/pixelsHigh properties, and then simply change the NSImage object's size:
@implementation ViewController {
    __weak IBOutlet NSImageView *imageView;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do view setup here.
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/username/test.jpg"];
    if (image.representations && image.representations.count > 0) {
        long lastSquare = 0, curSquare;
        NSImageRep *imageRep;
        for (imageRep in image.representations) {
            curSquare = imageRep.pixelsWide * imageRep.pixelsHigh;
            if (curSquare > lastSquare) {
                image.size = NSMakeSize(imageRep.pixelsWide, imageRep.pixelsHigh);
                lastSquare = curSquare;
            }
        }
        imageView.image = image;
        NSLog(@"%.0fx%.0f", image.size.width, image.size.height);
    }
}
@end
Thanks to Zenopolis for the original ObjC code, here's a nice concise Swift version:
func sizeForImageAtURL(url: NSURL) -> CGSize? {
    guard let imageReps = NSBitmapImageRep.imageRepsWithContentsOfURL(url) else { return nil }
    return imageReps.reduce(CGSize.zero, combine: { (size: CGSize, rep: NSImageRep) -> CGSize in
        return CGSize(width: max(size.width, CGFloat(rep.pixelsWide)), height: max(size.height, CGFloat(rep.pixelsHigh)))
    })
}
If your file contains only one image, you can just use this :
let rep = image.representations[0]
let imageSize = NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
image is your NSImage, imageSize is the image size in pixels.
Copied and updated here: https://stackoverflow.com/a/13228091/3608824
NSImage's size property returns size information that depends on the screen resolution and scaling configuration.
You can get the real size of the image with the following extension:
extension NSImage {
    var sizeReal: NSSize {
        guard representations.count > 0 else { return NSSize(width: 0, height: 0) }
        let rep = self.representations[0]
        return NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
    }
}
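Usage sketch (mine; "photo" is an assumed asset name):

let image = NSImage(named: "photo")!
print(image.size)     // points, e.g. 300x300 for a @2x source
print(image.sizeReal) // pixels, e.g. 600x600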

UISegmentedControl image scaling

How can I scale down the images used in a UISegmentedControl? I am creating the segmented control programmatically:
UISegmentedControl *segmentButton;
segmentButton = [UISegmentedControl segmentedControlWithItems:
                 [NSArray arrayWithObjects:
                  [UIImage imageNamed:@"option_one.png"],
                  [UIImage imageNamed:@"option_two.png"],
                  nil]];
segmentButton.contentMode = UIViewContentModeScaleToFill;
segmentButton.frame = CGRectMake(10, 10, 200, 32);
[view addSubview:segmentButton];
The result is not what I expect. The original .png images are about 100 pixels high, and they are not scaled down to fit the 32-pixel height of the segmented control. This results in a segmented control being drawn with enormous images overlapping it:
How can I tell the control to scale down those images?
You should never use a "big" image to display only a small pictogram. The full image will be loaded in memory even though only 10% of its pixels end up displayed, so you waste a lot of memory for nothing.
What you can do, if you really want to use this resource, is create a thumbnail in code first, and use that newly generated thumbnail.
The following method returns a new image you can use in your UISegmentedControl, and you can release the big one.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
With your code:
UISegmentedControl *segmentButton;
segmentButton = [UISegmentedControl segmentedControlWithItems:[NSArray arrayWithObjects:
                 [self imageWithImage:[UIImage imageNamed:@"option_one.png"] scaledToSize:CGSizeMake(32, 32)],
                 [self imageWithImage:[UIImage imageNamed:@"option_two.png"] scaledToSize:CGSizeMake(32, 32)],
                 nil]];
segmentButton.contentMode = UIViewContentModeScaleToFill;
segmentButton.frame = CGRectMake(10, 10, 200, 32);
[view addSubview:segmentButton];
In Swift 3:
extension UIImage {
    func scaleImage(scaleToSize: CGSize) -> UIImage {
        UIGraphicsBeginImageContext(scaleToSize)
        self.draw(in: CGRect(origin: CGPoint(x: 0, y: 0), size: scaleToSize))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
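With that extension, the question's setup might look like this (a sketch reusing the image names from the question):

let segmentButton = UISegmentedControl(items: [
    UIImage(named: "option_one")!.scaleImage(scaleToSize: CGSize(width: 32, height: 32)),
    UIImage(named: "option_two")!.scaleImage(scaleToSize: CGSize(width: 32, height: 32))
])
segmentButton.frame = CGRect(x: 10, y: 10, width: 200, height: 32)
view.addSubview(segmentButton)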
For iOS 14 and later
UIGraphicsGetImageFromCurrentImageContext made some of my images look blurry, so I changed to UIGraphicsImageRenderer, which works fine for me.
extension UIImage {
    func resizedImage(to targetSize: CGSize) -> UIImage? {
        let render = UIGraphicsImageRenderer(size: targetSize)
        return render.image { ctx in
            self.draw(in: .init(origin: .zero, size: targetSize))
        }
    }
}
class SomeView: UIView {
    func initSegmentedControl() {
        exampleSegmentedControl = UISegmentedControl(items: [
            leftImage.resizedImage(to: .init(width: 15, height: 15))!,
            rightImage.resizedImage(to: .init(width: 15, height: 15))!
        ])
    }
}
