Is there a best way to size an NSImage to a maximum filesize? - cocoa

Here's what I've got so far:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:
                              [file.image TIFFRepresentation]];

// Resize image to 200x200
CGFloat maxSize = 200.0;
NSSize imageSize = imageRep.size;
if (imageSize.height > maxSize || imageSize.width > maxSize) {
    // Find the aspect ratio
    CGFloat aspectRatio = imageSize.height / imageSize.width;
    CGSize newImageSize;
    if (aspectRatio > 1.0) {
        newImageSize = CGSizeMake(maxSize / aspectRatio, maxSize);
    } else if (aspectRatio < 1.0) {
        newImageSize = CGSizeMake(maxSize, maxSize * aspectRatio);
    } else {
        newImageSize = CGSizeMake(maxSize, maxSize);
    }
    [imageRep setSize:NSSizeFromCGSize(newImageSize)];
}

NSData *imageData = [imageRep representationUsingType:NSPNGFileType properties:nil];
NSString *outputFilePath = [@"~/Desktop/output.png" stringByExpandingTildeInPath];
[imageData writeToFile:outputFilePath atomically:NO];
The code assumes that a 200x200 PNG will be less than 128K, which is my size limit. 200x200 is big enough, but I'd prefer to max out the size if at all possible.
Here are my two problems:
1. The code doesn't work. I check the size of the exported file and it's the same size as the original.
2. Is there a way to predict the size of the output file before I export, so I can max out the dimensions but still get an image that's less than 128K?
Here's the working code. It's pretty sloppy and could probably use some optimizations, but at this point it runs fast enough that I don't care. It iterates over 100 times for most pictures, and it's done in milliseconds. Also, this method is declared in a category on NSImage.
- (NSData *)resizeImageWithBitSize:(NSInteger)size
                      andImageType:(NSBitmapImageFileType)fileType {

    CGFloat maxSize = 500.0;
    NSSize originalImageSize = self.size;
    NSSize newImageSize;
    NSData *returnImageData;
    NSInteger imageIsTooBig = 1000;

    while (imageIsTooBig > 0) {
        if (originalImageSize.height > maxSize || originalImageSize.width > maxSize) {
            // Find the aspect ratio
            CGFloat aspectRatio = originalImageSize.height / originalImageSize.width;
            if (aspectRatio > 1.0) {
                newImageSize = NSMakeSize(maxSize / aspectRatio, maxSize);
            } else if (aspectRatio < 1.0) {
                newImageSize = NSMakeSize(maxSize, maxSize * aspectRatio);
            } else {
                newImageSize = NSMakeSize(maxSize, maxSize);
            }
        } else {
            newImageSize = originalImageSize;
        }

        // Redraw the image at the new size; unlike -setSize:, this actually
        // rescales the pixel data.
        NSImage *resizedImage = [[NSImage alloc] initWithSize:newImageSize];
        [resizedImage lockFocus];
        [self drawInRect:NSMakeRect(0, 0, newImageSize.width, newImageSize.height)
                fromRect:NSMakeRect(0, 0, originalImageSize.width, originalImageSize.height)
               operation:NSCompositeSourceOver
                fraction:1.0];
        [resizedImage unlockFocus];

        NSData *tiffData = [resizedImage TIFFRepresentation];
        [resizedImage release];

        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:tiffData];
        NSDictionary *imagePropDict = [NSDictionary
                                       dictionaryWithObject:[NSNumber numberWithFloat:0.85]
                                                     forKey:NSImageCompressionFactor];
        returnImageData = [imageRep representationUsingType:fileType properties:imagePropDict];
        [imageRep release];

        // If the encoded data is still too big, shrink the maximum dimension
        // by 1% and try again; give up after 1000 iterations.
        if ([returnImageData length] > size) {
            maxSize = maxSize * 0.99;
            imageIsTooBig--;
        } else {
            imageIsTooBig = 0;
        }
    }
    return returnImageData;
}

For 1.
As another poster mentioned, setSize: only alters the display size of the image, not the actual pixel data of the underlying image file.
To resize, you may want to redraw the source image onto another NSImageRep and then write that to file.
This blog post contains some sample code on how to do the resize in this manner:
How to Resize an NSImage
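In case the link goes stale, here's a minimal sketch of that redraw approach (pre-ARC retain/release to match the question's code; the method name is made up):

- (NSData *)pngDataForImage:(NSImage *)image scaledToSize:(NSSize)newSize
{
    // Draw the source image into an offscreen image at the target size.
    NSImage *scaled = [[NSImage alloc] initWithSize:newSize];
    [scaled lockFocus];
    [image drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];
    [scaled unlockFocus];

    // Re-encode the redrawn pixels, not the original image data.
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
                             initWithData:[scaled TIFFRepresentation]];
    NSData *png = [rep representationUsingType:NSPNGFileType properties:nil];
    [rep release];
    [scaled release];
    return png;
}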
For 2.
I don't think you can without at least creating the encoded image in memory and checking its length. The actual bytes used depend on the image type. Uncompressed ARGB bitmaps are easy to predict (e.g., a 200 x 200 ARGB bitmap is 200 * 200 * 4 = 160,000 bytes), but compressed formats like PNG and JPEG are much harder.

[imageData length] should give you the length of the NSData's contents, which I understand to be the final file size when written to disk. That should give you a chance to maximize the size of the file before actually writing it.
As to why the image is not shrinking or growing, according to the docs for setSize:
The size of an image representation combined with the physical dimensions of the image data determine the resolution of the image.
So it may be that by setting the size and not altering the resolution you're not modifying any pixels, just the way in which the pixels should be interpreted.
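For example, a minimal sketch of that check-then-shrink idea (pngDataAtMaxDimension: is a hypothetical helper that redraws and re-encodes the image with its longest side capped at the given dimension, much like the asker's category method above):

NSUInteger maxBytes = 128 * 1024;   // the 128K limit
CGFloat dimension = 1000.0;         // starting guess for the longest side
NSData *imageData = [self pngDataAtMaxDimension:dimension]; // hypothetical helper

// Shrink and re-encode until the data fits (or the image gets tiny).
while ([imageData length] > maxBytes && dimension > 10.0) {
    dimension *= 0.95;
    imageData = [self pngDataAtMaxDimension:dimension];
}
[imageData writeToFile:outputFilePath atomically:NO];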

Related

Need sample code to swing needle in Cocoa/Quartz 2d Speedometer for Mac App

I'm building this to run on the Mac, not iOS - which is quite different. I'm almost there with the speedo, but the math of making the needle move up and down the scale as data is input eludes me.
I'm measuring wind speed live, and want to display it as a gauge - speedometer, with the needle moving as the windspeed changes. I have the fundamentals ok. I can also - and will - load the images into holders, but later. For now I want to get it working ...
- (void)drawRect:(NSRect)rect
{
    NSRect myRect = NSMakeRect(21, 21, 323, 325);   // set the square size to match the gauge image
    [[NSColor blueColor] set];                      // colour it in blue - just because you can...
    NSRectFill(myRect);

    [[NSGraphicsContext currentContext]             // set up the graphics context
        setImageInterpolation:NSImageInterpolationHigh]; // high-res image

    //-------------------------------------------
    NSSize viewSize = [self bounds].size;
    NSSize imageSize = { 320, 322 };            // the actual image rectangle size; you can scale the image here if you like
    NSPoint viewCenter;
    viewCenter.x = viewSize.width * 0.50;       // set the view center, both x & y
    viewCenter.y = viewSize.height * 0.50;
    NSPoint imageOrigin = viewCenter;
    imageOrigin.x -= imageSize.width * 0.50;    // set the origin of the first point
    imageOrigin.y -= imageSize.height * 0.50;
    NSRect destRect;
    destRect.origin = imageOrigin;              // set the image origin
    destRect.size = imageSize;                  // and size
    NSString *file = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/Gauge_mph_320x322.png"; // stuff in the image
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:file];

    //-------------------------------------------
    NSSize view2Size = [self bounds].size;
    NSSize image2Size = { 149, 17 };            // the orange needle
    NSPoint view2Center;
    view2Center.x = view2Size.width * 0.50;     // set the view center, both x & y
    view2Center.y = view2Size.height * 0.50;
    NSPoint image2Origin = view2Center;
    //image2Origin.x -= image2Size.width * 0.50; // set the origin of the first point
    image2Origin.x = 47;
    image2Origin.y -= image2Size.height * 0.50;
    NSRect dest2Rect;
    dest2Rect.origin = image2Origin;            // set the image origin
    dest2Rect.size = image2Size;                // and size is now the needle size
    NSString *file2 = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/orange-needle01.png";
    NSImage *image2 = [[NSImage alloc] initWithContentsOfFile:file2];

    // do image 1
    [image setFlipped:YES];     // flip it because everything else is in this exercise
    // do image 2
    [image2 setFlipped:YES];    // flip it because everything else is in this exercise

    [image drawInRect:destRect
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];
    [image2 drawInRect:dest2Rect
              fromRect:NSZeroRect
             operation:NSCompositeSourceOver
              fraction:1.0];

    NSBezierPath *path = [NSBezierPath bezierPathWithRect:destRect];   // draw a red border around the whole thing
    [path setLineWidth:3];
    [[NSColor redColor] set];
    [path stroke];
}

// flip the coords
- (BOOL)isFlipped { return YES; }
@end
The result is here. The gauge part that is. Now all I have to do is make the needle move in response to input.
Apple has some sample code, called SpeedometerView, which does exactly what you're asking. It'll surely take some doing to adapt it for your use, but it's probably a decent starting point.
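If it helps, the needle math itself is just a linear interpolation from speed to angle, followed by a rotation about the gauge's pivot inside drawRect:. Here's a sketch; the scale endpoints, the speed variable, and the pivot point are assumptions you'd read off your own gauge artwork:

// Map the current wind speed onto the needle's rotation angle.
CGFloat minSpeed = 0.0,    maxSpeed = 100.0;    // ends of the gauge scale (assumed)
CGFloat minAngle = -120.0, maxAngle = 120.0;    // needle angles at those ends (assumed)
CGFloat t = (speed - minSpeed) / (maxSpeed - minSpeed);   // 0..1 along the scale
CGFloat angle = minAngle + t * (maxAngle - minAngle);

// Rotate the drawing context about the pivot, draw the needle, then restore.
[NSGraphicsContext saveGraphicsState];
NSAffineTransform *transform = [NSAffineTransform transform];
[transform translateXBy:pivot.x yBy:pivot.y];   // pivot = gauge center (assumed)
[transform rotateByDegrees:angle];
[transform translateXBy:-pivot.x yBy:-pivot.y];
[transform concat];
[image2 drawInRect:dest2Rect
          fromRect:NSZeroRect
         operation:NSCompositeSourceOver
          fraction:1.0];
[NSGraphicsContext restoreGraphicsState];

Then call -setNeedsDisplay: on the view whenever a new wind speed arrives.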

Managing a UIImageView programmatically for iPhone 5 and iPhone 4

I have an issue managing a UIImageView from a XIB file for the iPhone 5 screen height versus the iPhone 4 screen height.
I tried to manage the UIImageView in code like this:

CGFloat screenHeight = [UIScreen mainScreen].bounds.size.height;
if ([UIScreen mainScreen].scale == 2.f && screenHeight == 568.0f) {
    backgroundImage.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
    frameView.autoresizingMask = UIViewAutoresizingFlexibleHeight;
    backgroundImage.image = [UIImage imageNamed:@"bg-568h@2x.png"];
    //frameView.frame = CGRectMake(16, 0, 288, 527);
    frameView.image = [UIImage imageNamed:@"setframe-568h@2x.png"];
}
else {
    backgroundImage.image = [UIImage imageNamed:@"bg@2x.png"];
    frameView.image = [UIImage imageNamed:@"setframe@2x.png"];
}

Please suggest what the issue might be. frameView is a UIImageView which has a white image.
Thanks
I had the same issue and below is what I did to make it work for me.
I have images used in a couple of apps which needed to be resized for the new 4-inch display. I wrote the code below to automatically resize images as needed without specifics on the height of the view.

This code assumes the height of the given image was sized in the NIB to be the full height of the given frame, like it is a background image that fills the whole view. In the NIB the UIImageView should not be set to stretch, which would do the work of stretching the image for you and distort the image, since only the height changes while the width stays the same.

What you need to do is adjust the height and the width by the same delta and then shift the image to the left by the same delta to center it again. This chops off a little on both sides while making it expand to the full height of the given frame.
I call it this way...
[self resizeImageView:self.backgroundImageView intoFrame:self.view.frame];
I do this in viewDidLoad normally if the image is set in the NIB. But I also have images which are downloaded at runtime and displayed that way. These images are cached with EGOCache, so I have to call the resize method either after setting the cached image into the UIImageView or after the image is downloaded and set into the UIImageView.
The code below does not specifically care what the height of the display is. It could actually work with any display size, perhaps to handle resizing images for rotation as well, though it assumes each time that the change in height is greater than the original height. To support a greater width this code would need to be adjusted to handle that scenario as well.
- (void)resizeImageView:(UIImageView *)imageView intoFrame:(CGRect)frame {
    // resizing is not needed if the height is already the same
    if (frame.size.height == imageView.frame.size.height) {
        return;
    }

    CGFloat delta = frame.size.height / imageView.frame.size.height;
    CGFloat newWidth = imageView.frame.size.width * delta;
    CGFloat newHeight = imageView.frame.size.height * delta;
    CGSize newSize = CGSizeMake(newWidth, newHeight);
    CGFloat newX = (imageView.frame.size.width - newWidth) / 2; // recenter image with the broader width

    CGRect imageViewFrame = imageView.frame;
    imageViewFrame.size.width = newWidth;
    imageViewFrame.size.height = newHeight;
    imageViewFrame.origin.x = newX;
    imageView.frame = imageViewFrame;

    // now resize the image
    assert(imageView.image != nil);
    imageView.image = [self imageWithImage:imageView.image scaledToSize:newSize];
}

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

NSAffineTransform running out of memory

I am using NSAffineTransform to rotate/reflect an NSImage, and when using larger images I run into an error:
NSImage: Insufficient memory to allocate pixel data buffer of 4496342739800064 bytes
The image I am transforming here is 6,998,487 bytes at 4110 x 2735 px. Does NSAffineTransform really need this much memory to do this transformation, or am I going wrong somewhere? Here's my rotate code:
- (NSImage *)rotateLeft:(NSImage *)img {
    NSImage *existingImage = img;
    NSSize existingSize;
    existingSize.width = existingImage.size.width;
    existingSize.height = existingImage.size.height;

    NSSize newSize = NSMakeSize(existingSize.height, existingSize.width);
    NSImage *rotatedImage = [[NSImage alloc] initWithSize:newSize];
    [rotatedImage lockFocus];

    NSAffineTransform *rotateTF = [NSAffineTransform transform];
    NSPoint centerPoint = NSMakePoint(newSize.width / 2, newSize.height / 2);
    [rotateTF translateXBy:centerPoint.x yBy:centerPoint.y];
    [rotateTF rotateByDegrees:90];
    [rotateTF translateXBy:-centerPoint.y yBy:-centerPoint.x];
    [rotateTF concat];

    NSRect r1 = NSMakeRect(0, 0, newSize.height, newSize.width);
    [existingImage drawAtPoint:NSMakePoint(0, 0)
                      fromRect:r1
                     operation:NSCompositeCopy
                      fraction:1.0];
    [rotatedImage unlockFocus];

    return rotatedImage;
}
I am using ARC in my project.
Thanks in advance, Ben

How can I calculate (without search) a font size to fit a rect?

I want my text to fit within a specific rect, so I need something to determine a font size. Questions have already tackled this to an extent, but they do a search, which seems horribly inefficient, especially if you want to be able to calculate during a live dragging resize. The following example could be improved to binary search and by constraining to the height, but it is still a search. Instead of searching, how can I calculate a font size to fit a rect?
#define kMaxFontSize 10000

- (CGFloat)fontSizeForAreaSize:(NSSize)areaSize withString:(NSString *)stringToSize usingFont:(NSString *)fontName
{
    NSFont *displayFont = nil;
    NSSize stringSize = NSZeroSize;
    NSMutableDictionary *fontAttributes = [[NSMutableDictionary alloc] init];

    if (areaSize.width == 0.0 && areaSize.height == 0.0)
        return 0.0;

    NSUInteger fontLoop = 0;
    for (fontLoop = 1; fontLoop <= kMaxFontSize; fontLoop++) {
        displayFont = [[NSFontManager sharedFontManager] convertWeight:YES
                                                                ofFont:[NSFont fontWithName:fontName size:fontLoop]];
        [fontAttributes setObject:displayFont forKey:NSFontAttributeName];
        stringSize = [stringToSize sizeWithAttributes:fontAttributes];
        if (stringSize.width > areaSize.width)
            break;
        if (stringSize.height > areaSize.height)
            break;
    }
    [fontAttributes release], fontAttributes = nil;
    return (CGFloat)fontLoop - 1.0;
}
Pick any font size and measure the text at that size. Divide each of its dimensions (width and height) by the same dimension of your target rectangle, then divide the font size by the larger factor.
Note that the text will measure on one line, since there is no maximum width for it to wrap to. For a long line/string, this may result in a uselessly small font size. For a text field, you should simply enforce a minimum size (such as the small system font size), and set the field's truncation behavior. If you intend to wrap the text, you'll need to measure it with something that takes a bounding rectangle or size.
Code by asker roughly based on this idea:
- (float)scaleToAspectFit:(CGSize)source into:(CGSize)into padding:(float)padding
{
    return MIN((into.width - padding) / source.width, (into.height - padding) / source.height);
}

- (NSFont *)fontSizedForAreaSize:(NSSize)size withString:(NSString *)string usingFont:(NSFont *)font
{
    // use a standard size to prevent error accrual
    NSFont *sampleFont = [NSFont fontWithDescriptor:font.fontDescriptor size:12.0];
    CGSize sampleSize = [string sizeWithAttributes:
                         [NSDictionary dictionaryWithObjectsAndKeys:sampleFont, NSFontAttributeName, nil]];
    float scale = [self scaleToAspectFit:sampleSize into:size padding:10];
    return [NSFont fontWithDescriptor:font.fontDescriptor size:scale * sampleFont.pointSize];
}

- (void)windowDidResize:(NSNotification *)notification
{
    text.font = [self fontSizedForAreaSize:text.frame.size withString:text.stringValue usingFont:text.font];
}

Getting into pixel data of NSImage

I'm writing an application that operates on black & white images. I'm doing it by passing an NSImage object into my method and then making an NSBitmapImageRep from the NSImage. It all works, but quite slowly. Here's my code:
- (NSImage *)skeletonization:(NSImage *)image
{
    int x = 0, y = 0;
    NSUInteger pixelVariable = 0;

    NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithData:[image TIFFRepresentation]];
    [myHelpText setIntValue:[bitmapImageRep pixelsWide]];
    [myHelpText2 setIntValue:[bitmapImageRep pixelsHigh]];

    NSColor *black = [NSColor blackColor];
    NSColor *white = [NSColor whiteColor];
    [myColor set];
    [myColor2 set];

    for (x = 0; x <= [bitmapImageRep pixelsWide]; x++) {
        for (y = 0; y <= [bitmapImageRep pixelsHigh]; y++) {
            // This is only to see if it's working
            [bitmapImageRep setColor:myColor atX:x y:y];
        }
    }
    [myColor release];
    [myColor2 release];

    NSImage *producedImage = [[NSImage alloc] init];
    [producedImage addRepresentation:bitmapImageRep];
    [bitmapImageRep release];
    return [producedImage autorelease];
}
So I tried to use CIImage but I don't know how to get into each pixel by (x,y) coordinates. That is really important.
Use the representations array property from NSImage, to get your NSBitmapImageRep. It should be faster than serializing your image to a TIFF and then back.
Use the bitmapData property of the NSBitmapImageRep to access the image bytes directly.
e.g.

unsigned char black = 0;
unsigned char white = 255;
NSBitmapImageRep *bitmapImageRep = (NSBitmapImageRep *)[[image representations] firstObject];

// you will need to do checks here to determine the pixel format of your bitmap data
unsigned char *imageData = [bitmapImageRep bitmapData];
NSInteger rowBytes = [bitmapImageRep bytesPerRow];
NSInteger bpp = [bitmapImageRep bitsPerPixel] / 8;

for (int x = 0; x < [bitmapImageRep pixelsWide]; x++) {     // don't use <=
    for (int y = 0; y < [bitmapImageRep pixelsHigh]; y++) {
        *(imageData + y * rowBytes + x * bpp)     = black;  // Red
        *(imageData + y * rowBytes + x * bpp + 1) = black;  // Green
        *(imageData + y * rowBytes + x * bpp + 2) = black;  // Blue
        *(imageData + y * rowBytes + x * bpp + 3) = 255;    // Alpha
    }
}
You will need to know what pixel format your images use before you can go playing with their data; look at the bitsPerPixel property of NSBitmapImageRep to help determine if your image is in RGBA format. You could be working with a grayscale image, an RGB image, or possibly CMYK; either convert the image to the format you want first, or handle the data in the loop differently.
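A sketch of that kind of check, using NSBitmapImageRep's own accessors (the helper name is made up; alpha-first or floating-point layouts would need extra cases via the bitmapFormat property):

static BOOL isPlain8BitRGBA(NSBitmapImageRep *rep)
{
    // 4 interleaved samples of 8 bits each, with alpha: safe to treat
    // each pixel as 4 consecutive bytes.
    return [rep samplesPerPixel] == 4
        && [rep bitsPerSample] == 8
        && [rep hasAlpha]
        && ![rep isPlanar];
}

// Usage: only poke at bitmapData directly when the layout is known;
// otherwise redraw the image into a rep with a known format first.
if (!isPlain8BitRGBA(bitmapImageRep)) {
    // fall back: convert to RGBA before processing
}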
