NSAffineTransform running out of memory - cocoa

I am using NSAffineTransform to rotate/reflect an NSImage, and when using larger images I run into an error:
NSImage: Insufficient memory to allocate pixel data buffer of 4496342739800064 bytes
The image I am transforming here is 6,998,487 bytes at 4110 x 2735 px. Does NSAffineTransform really need this much memory to do this transformation, or am I going wrong somewhere? Here's my rotate code:
-(NSImage *)rotateLeft:(NSImage *)img{
NSImage *existingImage = img;
NSSize existingSize;
existingSize.width = existingImage.size.width;
existingSize.height = existingImage.size.height;
NSSize newSize = NSMakeSize(existingSize.height, existingSize.width);
NSImage *rotatedImage = [[NSImage alloc] initWithSize:newSize];
[rotatedImage lockFocus];
NSAffineTransform *rotateTF = [NSAffineTransform transform];
NSPoint centerPoint = NSMakePoint(newSize.width / 2, newSize.height / 2);
[rotateTF translateXBy: centerPoint.x yBy: centerPoint.y];
[rotateTF rotateByDegrees: 90];
[rotateTF translateXBy: -centerPoint.y yBy: -centerPoint.x];
[rotateTF concat];
NSRect r1 = NSMakeRect(0, 0, newSize.height, newSize.width);
[existingImage drawAtPoint:NSMakePoint(0,0)
fromRect:r1
operation:NSCompositeCopy fraction:1.0];
[rotatedImage unlockFocus];
return rotatedImage;
}
I am using ARC in my project.
Thanks in advance, Ben

Related

OpenGL ES 1 + iOS 8 = layer.bounds messed?

Since installing Xcode 6.0.1, my OpenGL ES 1 layer is displayed incorrectly on any simulated device (as well as on real hardware: an iPhone 4S with iOS 8): the layer has the wrong size and position.
Changing the glViewport parameters doesn't make any difference; I can comment the call out entirely and it looks the same.
PARTIAL SOLUTION:
I checked and then unchecked the "Use Auto Layout" box so that Xcode updated my window to the newer version requirements. Now everything looks okay on the iPhone 4S, but the window size is still wrong on other devices.
Has anyone got their OpenGL ES 1 code updated for the new devices?
One possible workaround is to get the dimensions (width and height) and decide on the real width depending on which dimension is bigger, something like this:
CGRect screenBounds = [[UIScreen mainScreen] bounds];
float scale = [UIScreen mainScreen].scale;
float width = screenBounds.size.width;
float height = screenBounds.size.height;
NSLog(@"scale: %f, width: %f, height: %f", scale, width, height);
float w = width > height ? width : height;
if (scale == 2.0f && w == 568.0f) { ...
I have a similar problem with my OpenGL ES 1 app.
The following code always returns the renderbuffer size in portrait mode (shouldAutorotate is NO, so autorotation is disabled in my app):
glGetRenderbufferParameterivOES( GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth );
glGetRenderbufferParameterivOES( GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight );
But now (Xcode 6.0.1, iOS 8) this size depends on the device orientation, so I get the wrong renderbuffer size.
Checking and unchecking "Use Auto Layout" didn't help me.
I've managed to display the render buffer properly by taking into account that [[UIScreen mainScreen] bounds].size is orientation-dependent on iOS 8 and creating the view programmatically. So my app delegate looks like this:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
CGRect screenBound = [[UIScreen mainScreen] bounds];
CGSize screenSize = screenBound.size;
screenHeight = screenSize.height;
screenWidth = screenSize.width;
window = [[UIWindow alloc] initWithFrame:CGRectMake(0, 0, screenWidth, screenHeight)];
window.bounds = CGRectMake(0, 0, screenWidth, screenHeight);
MainViewController = [[UIViewController alloc] init];
glView = [[EAGLView alloc] initWithFrame:CGRectMake(0, 0, screenWidth, screenHeight)];
glView.bounds = CGRectMake(0, 0, screenWidth, screenHeight);
MainViewController.view = glView;
window.rootViewController = MainViewController;
[window makeKeyAndVisible];
[glView setupGame];
[glView startAnimation];
return YES;
}

Need sample code to swing needle in Cocoa/Quartz 2d Speedometer for Mac App

I'm building this to run on the Mac, not iOS, which is quite different. I'm almost there with the speedometer, but the math of making the needle move up and down the scale as data comes in eludes me.
I'm measuring wind speed live and want to display it as a gauge (a speedometer), with the needle moving as the wind speed changes. I have the fundamentals working. I can also load the images into holders, and will, but later. For now I want to get it working...
- (void)drawRect:(NSRect)rect
{
NSRect myRect = NSMakeRect ( 21, 21, 323, 325 ); // set the Graphics class square size to match the gauge image
[[NSColor blueColor] set]; // colour it in in blue - just because you can...
NSRectFill ( myRect );
[[NSGraphicsContext currentContext] // set up the graphics context
setImageInterpolation: NSImageInterpolationHigh]; // highres image
//-------------------------------------------
NSSize viewSize = [self bounds].size;
NSSize imageSize = { 320, 322 }; // the actual image rectangle size. You can scale the image here if you like. x and y remember
NSPoint viewCenter;
viewCenter.x = viewSize.width * 0.50; // set the view center, both x & y
viewCenter.y = viewSize.height * 0.50;
NSPoint imageOrigin = viewCenter;
imageOrigin.x -= imageSize.width * 0.50; // set the origin of the first point
imageOrigin.y -= imageSize.height * 0.50;
NSRect destRect;
destRect.origin = imageOrigin; // set the image origin
destRect.size = imageSize; // and size
NSString * file = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/Gauge_mph_320x322.png"; // stuff in the image
NSImage * image = [[NSImage alloc] initWithContentsOfFile:file];
//-------------------------------------------
NSSize view2Size = [self bounds].size;
NSSize image2Size = { 149, 17 }; // the orange needle
NSPoint view2Center;
view2Center.x = view2Size.width * 0.50; // set the view center, both x & y
view2Center.y = view2Size.height * 0.50;
NSPoint image2Origin = view2Center;
//image2Origin.x -= image2Size.width * 0.50; // set the origin of the first point
image2Origin.x = 47;
image2Origin.y -= image2Size.height * 0.50;
NSRect dest2Rect;
dest2Rect.origin = image2Origin; // set the image origin
dest2Rect.size = image2Size; // and size now is needle size
NSString * file2 = @"/Users/robert/Documents/XCode Projects/xWeather Graphics/orange-needle01.png";
NSImage * image2 = [[NSImage alloc] initWithContentsOfFile:file2];
// do image 1
[image setFlipped:YES]; // flip it because everything else is in this exercise
// do image 2
[image2 setFlipped:YES]; // flip it because everything else is in this exercise
[image drawInRect: destRect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];
[image2 drawInRect: dest2Rect
fromRect: NSZeroRect
operation: NSCompositeSourceOver
fraction: 1.0];
NSBezierPath * path = [NSBezierPath bezierPathWithRect:destRect]; // draw a red border around the whole thing
[path setLineWidth:3];
[[NSColor redColor] set];
[path stroke];
}
// flip the coords
- (BOOL) isFlipped { return YES; }
@end
The result (the gauge part, that is) is shown here. Now all I have to do is make the needle move in response to input.
Apple has some sample code, called SpeedometerView, which does exactly what you're asking. It'll surely take some doing to adapt it for your use, but it's probably a decent starting point.
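If you end up rolling the needle math yourself rather than adapting the sample, the core is just mapping the measured speed onto the dial's angular sweep and rotating the needle about the gauge's pivot. Here's a minimal sketch; the speed range, sweep angles, method name and needleImage ivar are all assumptions you'd replace with values matching your gauge artwork:
- (void)drawNeedleForSpeed:(CGFloat)speed inRect:(NSRect)gaugeRect
{
    CGFloat minSpeed = 0.0,   maxSpeed = 100.0;   // assumed range of the scale
    CGFloat minAngle = 225.0, maxAngle = -45.0;   // assumed sweep of the dial, in degrees
    CGFloat t = (speed - minSpeed) / (maxSpeed - minSpeed);
    t = MAX(0.0, MIN(1.0, t));                    // clamp to the ends of the scale
    CGFloat angle = minAngle + t * (maxAngle - minAngle);

    NSPoint pivot = NSMakePoint(NSMidX(gaugeRect), NSMidY(gaugeRect));
    [NSGraphicsContext saveGraphicsState];
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:pivot.x yBy:pivot.y]; // move the origin to the pivot
    [transform rotateByDegrees:angle];            // swing by the computed angle
    [transform concat];
    // needleImage is assumed to be an NSImage ivar loaded elsewhere; draw it
    // with its rotation point at the (now transformed) origin
    [needleImage drawInRect:NSMakeRect(0, -needleImage.size.height / 2,
                                       needleImage.size.width, needleImage.size.height)
                   fromRect:NSZeroRect
                  operation:NSCompositeSourceOver
                   fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
}
Call it from drawRect: with the current wind speed and the gauge's rect, and send the view setNeedsDisplay:YES whenever a new reading arrives.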

Getting into pixel data of NSImage

I'm writing an application that operates on black & white images. I do this by passing an NSImage object into my method and then making an NSBitmapImageRep from the NSImage. Everything works, but quite slowly. Here's my code:
- (NSImage *)skeletonization: (NSImage *)image
{
int x = 0, y = 0;
NSUInteger pixelVariable = 0;
NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithData:[image TIFFRepresentation]];
[myHelpText setIntValue:[bitmapImageRep pixelsWide]];
[myHelpText2 setIntValue:[bitmapImageRep pixelsHigh]];
NSColor *black = [NSColor blackColor];
NSColor *white = [NSColor whiteColor];
[myColor set];
[myColor2 set];
for (x=0; x<=[bitmapImageRep pixelsWide]; x++) {
for (y=0; y<=[bitmapImageRep pixelsHigh]; y++) {
// This is only to see if it's working
[bitmapImageRep setColor:myColor atX:x y:y];
}
}
[myColor release];
[myColor2 release];
NSImage *producedImage = [[NSImage alloc] init];
[producedImage addRepresentation:bitmapImageRep];
[bitmapImageRep release];
return [producedImage autorelease];
}
So I tried to use CIImage, but I don't know how to access each pixel by its (x, y) coordinates, and that is really important.
Use the representations array property of NSImage to get your NSBitmapImageRep. It should be faster than serializing your image to a TIFF and then back.
Use the bitmapData property of the NSBitmapImageRep to access the image bytes directly.
e.g.
unsigned char black = 0;
unsigned char white = 255;
NSBitmapImageRep* bitmapImageRep = [[image representations] firstObject];
// you will need to do checks here to determine the pixelformat of your bitmap data
unsigned char* imageData = [bitmapImageRep bitmapData];
int rowBytes = [bitmapImageRep bytesPerRow];
int bpp = [bitmapImageRep bitsPerPixel] / 8;
for (x=0; x<[bitmapImageRep pixelsWide]; x++) { // don't use <=
for (y=0; y<[bitmapImageRep pixelsHigh]; y++) {
*(imageData + y * rowBytes + x * bpp ) = black; // Red
*(imageData + y * rowBytes + x * bpp +1) = black; // Green
*(imageData + y * rowBytes + x * bpp +2) = black; // Blue
*(imageData + y * rowBytes + x * bpp +3) = 255; // Alpha
}
}
You will need to know what pixel format your images use before you go playing with their data; look at the bitsPerPixel property of NSBitmapImageRep to help determine whether your image is in RGBA format.
You could be working with a grayscale image, an RGB image, or possibly CMYK. Either convert the image to the format you want first, or handle the data in the loop differently; a quick format check is sketched below.
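As a rough sketch (mine, not part of the answer above), you can inspect the rep's format before touching bitmapData; image is assumed to be the NSImage parameter from the question:
NSBitmapImageRep *rep = (NSBitmapImageRep *)[[image representations] firstObject];
NSInteger samplesPerPixel = [rep samplesPerPixel]; // 1 = grayscale, 3 = RGB, 4 = RGBA or CMYK
NSInteger bitsPerSample = [rep bitsPerSample];     // usually 8
BOOL hasAlpha = [rep hasAlpha];
NSLog(@"colorSpace=%@ samplesPerPixel=%ld bitsPerSample=%ld hasAlpha=%d",
      [rep colorSpaceName], (long)samplesPerPixel, (long)bitsPerSample, hasAlpha);
if (samplesPerPixel == 1 && bitsPerSample == 8) {
    // grayscale: one byte per pixel, write the black/white value directly
} else if (samplesPerPixel >= 3 && bitsPerSample == 8) {
    // RGB / RGBA: use the bytesPerRow / bitsPerPixel arithmetic shown above
} else {
    // anything else (CMYK, 16-bit, planar): convert first, e.g. by redrawing
    // the image into a bitmap rep with a known format
}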

Drawing on UIImage / Memory issue - objective-c

Greetings,
I'm trying to draw a circle on a map. All the separate pieces of this project work independently, but when I put them all together it breaks.
I set up my UI in viewDidLoad, retaining most of it.
I then use touch events to call my refresh-map method:
-(void)refreshMap{
NSString *thePath = [NSString stringWithFormat:@"http://maps.google.com/staticmap?center=%f,%f&zoom=%i&size=640x640&maptype=hybrid",viewLatitude, viewLongitude, zoom];
NSURL *url = [NSURL URLWithString:thePath];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *mapImage = [[UIImage alloc] initWithData:data];
mapImage = [self addCircle:(mapImage) influence:(70) latCon:(320) lonCon:(320)];
NSLog(@"-- mapimageview retaincount %i",[mapImage retainCount]);
mapImageView.image = mapImage;
[mapImage release];
}
Set up like this, it will load the map with a circle once, but if the map is refreshed again it crashes.
If I comment out the mapImage release it works repeatedly, but that causes a memory leak.
The addCircle method I'm using:
-(UIImage *)addCircle:(UIImage *)img radius:(CGFloat)radius latCon:(CGFloat)lat lonCon:(CGFloat)lon{
int w = img.size.width;
int h = img.size.height;
lon = h - lon;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
//draw the circle
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
CGRect leftOval = {lat- radius/2, lon - radius/2, radius, radius};
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 0.3);
CGContextAddEllipseInRect(context, leftOval);
CGContextFillPath(context);
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return [UIImage imageWithCGImage:imageMasked];
}
Any insight/advice is greatly appreciated!
UIImage *mapImage = [[UIImage alloc] initWithData:data];
mapImage = [self addCircle:(mapImage) influence:(70) latCon:(320) lonCon:(320)];
That's not good. You're losing the reference to the contents of mapImage when you reassign it on the second line. The easiest way to fix this is probably to just add an additional variable, so you can keep track of both images.
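A minimal sketch of that fix, under manual retain/release and using the addCircle:radius:latCon:lonCon: signature as defined above (the variable names are my own):
UIImage *sourceImage = [[UIImage alloc] initWithData:data];                              // owned (+1)
UIImage *circledImage = [self addCircle:sourceImage radius:70 latCon:320 lonCon:320];    // autoreleased
mapImageView.image = circledImage;   // the image view retains what it needs
[sourceImage release];               // release only the object we alloc'd
Separately, addCircle: as posted never calls CGImageRelease(imageMasked), so the CGImageRef created by CGBitmapContextCreateImage is also leaked on every call.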

Is there a best way to size an NSImage to a maximum filesize?

Here's what I've got so far:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:
[file.image TIFFRepresentation]];
// Resize image to 200x200
CGFloat maxSize = 200.0;
NSSize imageSize = imageRep.size;
if (imageSize.height>maxSize || imageSize.width>maxSize) {
// Find the aspect ratio
CGFloat aspectRatio = imageSize.height/imageSize.width;
CGSize newImageSize;
if (aspectRatio > 1.0) {
newImageSize = CGSizeMake(maxSize / aspectRatio, maxSize);
} else if (aspectRatio < 1.0) {
newImageSize = CGSizeMake(maxSize, maxSize * aspectRatio);
} else {
newImageSize = CGSizeMake(maxSize, maxSize);
}
[imageRep setSize:NSSizeFromCGSize(newImageSize)];
}
NSData *imageData = [imageRep representationUsingType:NSPNGFileType properties:nil];
NSString *outputFilePath = [@"~/Desktop/output.png" stringByExpandingTildeInPath];
[imageData writeToFile:outputFilePath atomically:NO];
The code assumes that a 200x200 PNG will be less than 128K, which is my size limit. 200x200 is big enough, but I'd prefer to max out the size if at all possible.
Here are my two problems:
The code doesn't work. I check the size of the exported file and it's the same size as the original.
Is there a way to predict the size of the output file before I export, so I can max out the dimensions but still get an image that's less than 128K?
Here's the working code. It's pretty sloppy and could probably use some optimizations, but at this point it runs fast enough that I don't care. It iterates over 100x for most pictures, and it's over in milliseconds. Also, this method is declared in a category on NSImage.
- (NSData *)resizeImageWithBitSize:(NSInteger)size
andImageType:(NSBitmapImageFileType)fileType {
CGFloat maxSize = 500.0;
NSSize originalImageSize = self.size;
NSSize newImageSize;
NSData *returnImageData;
NSInteger imageIsTooBig = 1000;
while (imageIsTooBig > 0) {
if (originalImageSize.height>maxSize || originalImageSize.width>maxSize) {
// Find the aspect ratio
CGFloat aspectRatio = originalImageSize.height/originalImageSize.width;
if (aspectRatio > 1.0) {
newImageSize = NSMakeSize(maxSize / aspectRatio, maxSize);
} else if (aspectRatio < 1.0) {
newImageSize = NSMakeSize(maxSize, maxSize * aspectRatio);
} else {
newImageSize = NSMakeSize(maxSize, maxSize);
}
} else {
newImageSize = originalImageSize;
}
NSImage *resizedImage = [[NSImage alloc] initWithSize:newImageSize];
[resizedImage lockFocus];
[self drawInRect:NSMakeRect(0, 0, newImageSize.width, newImageSize.height)
fromRect: NSMakeRect(0, 0, originalImageSize.width, originalImageSize.height)
operation: NSCompositeSourceOver
fraction: 1.0];
[resizedImage unlockFocus];
NSData *tiffData = [resizedImage TIFFRepresentation];
[resizedImage release];
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:tiffData];
NSDictionary *imagePropDict = [NSDictionary
dictionaryWithObject:[NSNumber numberWithFloat:0.85]
forKey:NSImageCompressionFactor];
returnImageData = [imageRep representationUsingType:fileType properties:imagePropDict];
[imageRep release];
if ([returnImageData length] > size) {
maxSize = maxSize * 0.99;
imageIsTooBig--;
} else {
imageIsTooBig = 0;
}
}
return returnImageData;
}
For 1.
As another poster mentioned, setSize only alters the display size of the image, not the actual pixel data of the underlying image file.
To resize, you may want to redraw the source image onto another NSImageRep and then write that to file.
This blog post contains some sample code on how to do the resize in this manner.
How to Resize an NSImage
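The gist of that approach, as a rough sketch of my own (not the linked post's code; sourceImage and newImageSize stand in for your image and target size): create a bitmap rep whose pixel dimensions are the new size and draw into it.
NSBitmapImageRep *rep =
    [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                            pixelsWide:(NSInteger)newImageSize.width
                                            pixelsHigh:(NSInteger)newImageSize.height
                                         bitsPerSample:8
                                       samplesPerPixel:4
                                              hasAlpha:YES
                                              isPlanar:NO
                                        colorSpaceName:NSCalibratedRGBColorSpace
                                           bytesPerRow:0
                                          bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[sourceImage drawInRect:NSMakeRect(0, 0, newImageSize.width, newImageSize.height)
               fromRect:NSZeroRect
              operation:NSCompositeSourceOver
               fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
// (under manual retain/release, remember to release rep when done)
Because the rep is created with the new pixelsWide/pixelsHigh, the PNG written from it really does contain fewer pixels, unlike setSize:, which only changes how the existing pixels are interpreted.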
For 2.
I don't think you can without at least creating the object in memory and checking its length. The actual byte count depends on the image type: a raw ARGB bitmap's size is easy to predict, but PNG and JPEG are much harder because of compression.
[imageData length] gives you the length of the NSData's contents, which I understand to be the final file size when written to disk. That gives you a chance to maximize the dimensions before actually writing the file.
As to why the image is not shrinking or growing, according to the docs for setSize:
The size of an image representation combined with the physical dimensions of the image data determine the resolution of the image.
So it may be that by setting the size and not altering the resolution you're not modifying any pixels, just the way in which the pixels should be interpreted.
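In other words, a rep's size (in points) and its pixelsWide/pixelsHigh (in pixels) are independent, and setSize: only touches the former. A quick way to see this, assuming image is the original from the question:
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSLog(@"before: size=%@ pixels=%ldx%ld",
      NSStringFromSize(rep.size), (long)rep.pixelsWide, (long)rep.pixelsHigh);
[rep setSize:NSMakeSize(200, 200)];
NSLog(@"after:  size=%@ pixels=%ldx%ld",
      NSStringFromSize(rep.size), (long)rep.pixelsWide, (long)rep.pixelsHigh);
// pixelsWide/pixelsHigh are unchanged, which is why the exported PNG is
// still the same size on disk.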
