Rescale image in image view - UIImageView

In my app I can take a photo with the camera and display it in an image view, but what I want to do is rescale/resize the image so it is not as large and does not take up so much memory in the app.
Here is the code I use for displaying the image in the imageview:
image = [info objectForKey:UIImagePickerControllerOriginalImage];
[ImageView1 setImage:image]; // "ImageView1" is the name of the UIImageView.
[self dismissViewControllerAnimated:YES completion:NULL];
I want the image scaled down as the image can also be sent via email, so it must be nice and small.

You can try this code:
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
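A minimal usage sketch (assuming the method is added to a utility class, hypothetically named ImageUtils here, with an illustrative target size):
UIImage *smallImage = [ImageUtils imageWithImage:image scaledToSize:CGSizeMake(640, 480)];
[ImageView1 setImage:smallImage];
On Retina devices you may prefer UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0) over UIGraphicsBeginImageContext(newSize), so that the screen's scale factor is handled for you.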
As far as storage of the image, the fastest image format to use on the iPhone is PNG, because iOS has optimizations for that format. However, if you want to store these images as JPEGs, you can take your UIImage and do the following:
NSData *dataForJPEGFile = UIImageJPEGRepresentation(theImage, 0.6);
This creates an NSData instance containing the raw bytes for a JPEG image at a 60% quality setting. The contents of that NSData instance can then be written to disk or cached in memory.
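Writing that data out is then a one-liner (the path here is hypothetical):
NSString *jpegPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"photo.jpg"];
[dataForJPEGFile writeToFile:jpegPath atomically:YES];
For the email case, the same NSData can be passed to MFMailComposeViewController's addAttachmentData:mimeType:fileName:.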
You can also refer to this link: UIImage: Resize, then Crop
Hope this helps you.

Related

Hope to save NSImage without image size change

What I hope to do is:
1. load an image into an NSImage
2. draw some text on the NSImage
3. save the NSImage to an image file
So I used the code below to load the image into an NSImage:
NSData *data = [NSData dataWithContentsOfFile:[asset fullFilename]];
NSImage *img = [[NSImage alloc] initWithData:data] ;
If the original image size is (imageOriginalWidth, imageOriginalHeight), the size (width, height) of img will be smaller than the original image size, i.e.
imgWidth < imageOriginalWidth
imgHeight < imageOriginalHeight
It also means that if I save the NSImage with the text to an image file, the new image will be smaller than the original one.
Your comments are welcome.
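One way to see what is going on (a hedged diagnostic sketch, not a confirmed fix: an NSImage's size is in points, which can differ from the bitmap's pixel dimensions when the file carries DPI metadata):
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:data];
NSLog(@"points: %@, pixels: %ld x %ld",
    NSStringFromSize([img size]),
    (long)[rep pixelsWide], (long)[rep pixelsHigh]);
If the pixel counts match the original file, drawing into the rep rather than into the point-sized NSImage should preserve the full resolution.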

How to get CGImageForProposedRect to provide 1:1 pixel data on Retina Mac

In our app, we're creating a PDF-backed NSImage (therefore scalable) and then using CGImage routines to write that data to a TIFF file. This works fine on non-Retina Macs, but on Retina machines the data that comes back is twice the resolution we expect (just like the screen).
The code we're using takes a newly formed NSView subclass referencing the data to draw (not the original on-screen view) as printingMapView:
NSData *pdfData = [printingMapView dataWithPDFInsideRect: frame];
NSImage *image = [[NSImage alloc] initWithData: pdfData];
[image setSize: size];
NSRect pRect = NSMakeRect( 0, 0, [image size].width, [image size].height);
CGImageRef cgImage = [image CGImageForProposedRect: &pRect context: NULL hints:NULL];
I have looked around for any hints that could be handed to the CGImageForProposedRect:context:hints call, but there's nothing in the Apple documentation relating to content scale.
Is there any way to do this other than creating an NSBitmapImageRep of the full size and passing that in as the context parameter to CGImageForProposedRect:context:hints?
That seems like it's likely to use a lot of memory during the operation.
So CGImageForProposedRect: does return 1:1 pixel data. If the CGImage you get out of the function is doubled in size, the NSImageRep of that NSImage must also be doubled in size. Check your code for any calls to NSImage's drawInRect: where you are drawing into a Retina context; that is what was happening to me.
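For reference, the fallback the question mentions (drawing into an explicitly sized bitmap rep to force an exact pixel count) would look roughly like this sketch, where size is the desired pixel size:
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL pixelsWide:size.width pixelsHigh:size.height
    bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO
    colorSpaceName:NSCalibratedRGBColorSpace bytesPerRow:0 bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[image drawInRect:NSMakeRect(0, 0, size.width, size.height)];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = [rep CGImage]; // 1:1 with pixelsWide/pixelsHigh by construction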

Combine more than one image to get a single image

I have to capture the contents of a UIView. There are plenty of methods to capture the contents. The problem is, if the size of the view becomes too big, the app crashes because it takes a huge amount of memory. So is there any possibility of merging the raw image data (performing a byte-by-byte operation) into a single file and making an image from that?
Merging two images to create one single image file
Suppose we have two images named file1.png and file2.png. The code below merges the two images:
@implementation LandscapeImage
- (void)createLandscapeImages
{
    CGSize offScreenSize = CGSizeMake(2048, 1380);
    UIGraphicsBeginImageContext(offScreenSize);
    // Fetch the first image and draw it in the left half
    UIImage *imageLeft = [UIImage imageNamed:@"file1.png"];
    CGRect rect = CGRectMake(0, 0, 1024, 1380);
    [imageLeft drawInRect:rect];
    // Fetch the second image and draw it in the right half
    UIImage *imageRight = [UIImage imageNamed:@"file2.png"];
    rect.origin.x += 1024;
    [imageRight drawInRect:rect];
    // Returns an image based on the contents of the current bitmap-based graphics context
    UIImage *imagez = UIGraphicsGetImageFromCurrentImageContext();
    if (imageLeft && imageRight)
    {
        // Write code to save imagez to the local cache (a sketch follows below)
    }
    UIGraphicsEndImageContext();
    // Note: images returned by imageNamed: are cached/autoreleased; do not call release on them
}
@end
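The save step left as a comment above could be filled in along these lines (a sketch; the cache file name is hypothetical):
NSData *pngData = UIImagePNGRepresentation(imagez);
NSString *cacheDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
[pngData writeToFile:[cacheDir stringByAppendingPathComponent:@"landscape.png"] atomically:YES];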

How to: NSAttributedString to CGImageRef

I'm writing a QuickLook plugin. Well, everything works. I just want to try to make it better ;).
Thus the question.
Here is a function that returns thumbnail image and that I'm using now.
QLThumbnailRequestSetImageWithData(
    QLThumbnailRequestRef thumbnail,
    CFDataRef data,
    CFDictionaryRef properties);
http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImageWithData
Right now I'm creating a TIFF and encapsulating it in NSData. An example:
// Setting the CFDataRef
CGSize thumbnailMaxSize = QLThumbnailRequestGetMaximumSize(thumbnail);
NSMutableAttributedString *attributedString = [[[NSMutableAttributedString alloc]
    initWithString:@"dummy"
    attributes:[NSDictionary dictionaryWithObjectsAndKeys:
        [NSFont fontWithName:@"Monaco" size:10], NSFontAttributeName,
        [NSColor colorWithCalibratedRed:0.0 green:0.0 blue:0.0 alpha:1.0], NSForegroundColorAttributeName,
        nil]
    ] autorelease];
NSImage *thumbnailImage = [[[NSImage alloc] initWithSize:NSMakeSize(thumbnailMaxSize.width, thumbnailMaxSize.height)] autorelease];
[thumbnailImage lockFocus];
[[NSColor whiteColor] set];
NSRectFill(NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height));
[attributedString drawInRect:NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height)];
[thumbnailImage unlockFocus];
(CFDataRef)[thumbnailImage TIFFRepresentation]; // This is the data
// Setting the CFDictionaryRef (note: use the UTI constant, not the literal string "kUTTypeTIFF")
(CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:(NSString *)kUTTypeTIFF, (NSString *)kCGImageSourceTypeIdentifierHint, nil]; // these are the properties
However QuickLook provides another function to return thumbnail image, namely
QLThumbnailRequestSetImage(
    QLThumbnailRequestRef thumbnail,
    CGImageRef image,
    CFDictionaryRef properties);
http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImage
I have a feeling that passing a CGImage to QL instead of TIFF data would help speed things up.
However, I have never worked with CG contexts before. I know, the documentation is there :), but anyway, could anyone give an example of how to turn that NSAttributedString into a CGImageRef? An example is worth ten readings of the documentation ;)
Any help appreciated. Thanks in advance!
could anyone give an example of how to turn that NSAttributedString into a CGImageRef.
You can't turn a string into an image; they're two completely different kinds of data, and one is one-dimensional (a sequence of characters) while the other is at least three-dimensional (color over x and y).
What you need to do is draw the string and produce an image of the drawing. That's what you're doing now with NSImage: Creating an image and drawing the string into it.
You're asking about creating a CGImage. Creating a bitmap context, using Core Text to draw the string into it, and creating an image of the contents of the bitmap context is one way to do that.
However, you're already much closer to another solution, assuming you can require Snow Leopard. Instead of asking the NSImage for a TIFF representation, ask it for a CGImage.
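A minimal sketch of that last suggestion (CGImageForProposedRect:context:hints: is 10.6+, which matches the Snow Leopard caveat):
CGImageRef cgImage = [thumbnailImage CGImageForProposedRect:NULL context:nil hints:nil];
QLThumbnailRequestSetImage(thumbnail, cgImage, NULL);
This skips the TIFF encode/decode round trip entirely, which is where the speedup would come from.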

Problem exporting NSOpenGLView pixel data to some image file formats using ImageKit & CGImageDestination

Summary: exporting pixel data from NSOpenGLView to some file formats gives incorrect colours
I am developing an application to visualise some experimental data. One of its functions is to render the data in an NSOpenGLView subclass, and allow the resulting image to be exported to a file or copied to the clipboard.
The view exports the data as an NSImage, generated like this:
- (NSImage*) image
{
NSBitmapImageRep* imageRep;
NSImage* image;
NSSize viewSize = [self bounds].size;
int width = viewSize.width;
int height = viewSize.height;
[self lockFocus];
[self drawRect:[self bounds]];
[self unlockFocus];
imageRep=[[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:width
pixelsHigh:height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:width*4
bitsPerPixel:32] autorelease];
[[self openGLContext] makeCurrentContext];
glReadPixels(0,0,width,height,GL_RGBA,GL_UNSIGNED_BYTE,[imageRep bitmapData]);
image=[[[NSImage alloc] initWithSize:NSMakeSize(width,height)] autorelease];
[image addRepresentation:imageRep];
[image setFlipped:YES]; // this is deprecated in 10.6
[image lockFocusOnRepresentation:imageRep]; // this will flip the rep
[image unlockFocus];
return image;
}
Copying uses this image very simply, like this:
- (IBAction) copy:(id) sender
{
NSImage* img = [self image];
NSPasteboard* pb = [NSPasteboard generalPasteboard];
[pb clearContents];
NSArray* copied = [NSArray arrayWithObject:img];
[pb writeObjects:copied];
}
For file writing, I use the ImageKit IKSaveOptions accessory panel to set the output file type and associated options, then use the following code to do the writing:
NSImage* glImage = [glView image];
NSRect rect = [glView bounds];
rect.origin.x = rect.origin.y = 0;
img = [glImage CGImageForProposedRect:&rect
context:[NSGraphicsContext currentContext]
hints:nil];
if (img)
{
NSURL* url = [NSURL fileURLWithPath: path];
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)url,
(CFStringRef)newUTType,
1,
NULL);
if (dest)
{
CGImageDestinationAddImage(dest,
img,
(CFDictionaryRef)[imgSaveOptions imageProperties]);
CGImageDestinationFinalize(dest);
CFRelease(dest);
}
}
(I've trimmed a bit of extraneous code here, but nothing that would affect the outcome as far as I can see. The newUTType comes from the IKSaveOptions panel.)
This works fine when the file is exported as GIF, JPEG, PNG, PSD or TIFF, but exporting to PDF, BMP, TGA, ICNS and JPEG-2000 produces a red colour artefact on part of the image. Example images are below, the first exported as JPG, the second as PDF.
[Example images: the JPEG export vs the PDF export with the red artefact - source: walkytalky.net]
Copy to clipboard does not exhibit this red stripe with the current implementation of image, but it did with the original implementation, which generated the imageRep using NSCalibratedRGBColorSpace rather than NSDeviceRGBColorSpace. So I'm guessing there's some issue with the colour representation in the pixels I get from OpenGL that doesn't get through the subsequent conversions properly, but I'm at a loss as to what to do about it.
So, can anyone tell me (i) what is causing this, and (ii) how can I make it go away? I don't care so much about all of the formats but I'd really like at least PDF to work.
OK. As evidenced by the deafening silence which met this question, the problem turns out to be a bit obscure. But the workaround is nice and simple, so I'm describing it here just in case anyone ever wants to know.
Summary: some file export formats do not cope well with translucency in the rendered pixels.
I don't understand the exact reasons for this, although it might possibly have something to do with the presence or absence of alpha pre-multiplication. All the formats seem to be fine with completely transparent pixels, rendering them either transparent or as white if the format doesn't support transparency. But pixels that have a partial alpha, plus something in the colour channels, may get mangled.
As it happens, I did not even want any parts of the image to be translucent, and indeed set glDisable(GL_BLEND) before the relevant rendering code. However, objects were rendered with the materials from this seemingly-canonical collection at the OpenGL home site, some of which include alpha values other than 1.0 in their specular, diffuse and ambient colours. I had slavishly copied these without paying attention to the fact that they might lead to some unwanted translucency.
For my purposes, then, the solution is straightforward: change the material definitions so that the alpha component is always 1.0.
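In code, that just means pinning the fourth component of each material colour to 1.0 (a representative snippet; the RGB values are placeholders):
GLfloat diffuse[] = { 0.5f, 0.5f, 0.5f, 1.0f }; // alpha forced to 1.0
glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);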
Note that some image formats, such as PNG and TIFF, do fully support the translucency, so if you need that then those are the ones to go for.
This was, in fact, what tipped me off to the answer. However, it was not obvious at first because I was using OS X Preview to view the files, and the translucency is not obvious with the default view settings:
[Preview screenshots: the stray translucency only becomes visible once the checkerboard background is enabled - source: walkytalky.net]
So, a second lesson from this whole episode is: enable View | Show Image Background in Preview to get the checkerboard background and reveal any stray transparency.
