I have been at this all day with no luck, and it has been, needless to say, quite frustrating. I have looked up many examples and downloadable categories that all tout being able to crop images flawlessly. Which they do; however, the minute I try to do the same with an image generated via AVCaptureSession, it does not work as well. I consulted both of these sources:
http://codefuel.wordpress.com/2011/04/22/image-cropping-from-a-uiscrollview/
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
The project from the first link works exactly as advertised, but as soon as I adapt it to do the same thing with an AVCapture image... nothing.
Does anyone have insight into this? Here is my code for reference.
- (IBAction)TakePhotoPressed:(id)sender
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
//NSLog(#"about to request a capture from: %#", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments)
{
// Do something with the attachments.
//NSLog(#"attachements: %#", exifAttachments);
}
else
NSLog(#"no attachments");
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
NSLog(#"%f",image.size.width);
NSLog(#"%f",image.size.height);
float scale = 1.0f/_scrollView.zoomScale;
CGRect visibleRect;
visibleRect.origin.x = _scrollView.contentOffset.x * scale;
visibleRect.origin.y = _scrollView.contentOffset.y * scale;
visibleRect.size.width = _scrollView.bounds.size.width * scale;
visibleRect.size.height = _scrollView.bounds.size.height * scale;
UIImage* cropped = [self cropImage:image withRect:visibleRect];
[croppedImage setImage:cropped];
[image release];
}
];
[croppedImage setHidden:NO];
}
The cropImage function used above:
-(UIImage*)cropImage :(UIImage*)originalImage withRect :(CGRect) rect
{
CGRect transformedRect=rect;
if(originalImage.imageOrientation==UIImageOrientationRight)
{
transformedRect.origin.x = rect.origin.y;
transformedRect.origin.y = originalImage.size.width-(rect.origin.x+rect.size.width);
transformedRect.size.width = rect.size.height;
transformedRect.size.height = rect.size.width;
}
CGImageRef cr = CGImageCreateWithImageInRect(originalImage.CGImage, transformedRect);
UIImage* cropped = [UIImage imageWithCGImage:cr scale:originalImage.scale orientation:originalImage.imageOrientation];
[croppedImage setFrame:CGRectMake(croppedImage.frame.origin.x,
croppedImage.frame.origin.y,
cropped.size.width,
cropped.size.height)];
CGImageRelease(cr);
return cropped;
}
For verbosity, and to arm whoever might help me in my plight with as much information as possible, I am also tempted to post the init of my scroll view and AV capture session. However, that may be a bit too much, so if you want to see it, just ask.
Now, as for what the code actually does:
What it looks like before I take the picture:
And after:
EDIT:
Well, I have a few views now and no comments, so either no one has figured it out or it's so simple they thought I would have figured it out myself... In any case, I have not made any progress. So for anyone interested, here is a small sample app with the code all set up so you can see what I am doing:
https://docs.google.com/open?id=0Bxr4V3a9QFM_NnoxMkhzZTVNVEE
It seems that this little conundrum did not only have me stumped: after nearly a week, the scant few who viewed my question had no suggestions either. I must say that for this particular problem I could not get it to work this way; I pondered and tinkered and mused for a while to no avail. Until I did this:
[self HideElements];
UIGraphicsBeginImageContext(chosenPhotoView.frame.size);
[chosenPhotoView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self ShowElements];
And that's it: less code, and it worked pretty much instantly. So instead of trying to crop an image via the scroll view, I take a screenshot of the screen at that moment and then crop that image using the scroll view's frame variables. The HideElements/ShowElements functions hide any elements overlapping the picture I want.
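For anyone who wants the whole thing in one place, here is a rough sketch of that crop step. It is untested; chosenPhotoView, _scrollView and croppedImage are the views from my snippets above, and the rect conversion is an assumption based on my description rather than code pulled from the project.
[self HideElements];
// Screenshot the view that contains the scroll view (same as above).
UIGraphicsBeginImageContext(chosenPhotoView.frame.size);
[chosenPhotoView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self ShowElements];
// Crop the screenshot down to the area the scroll view occupies on screen.
// (Assumes the screenshot and the scroll view share the same coordinate space.)
CGRect cropRect = [_scrollView.superview convertRect:_scrollView.frame toView:chosenPhotoView];
CGImageRef croppedRef = CGImageCreateWithImageInRect(viewImage.CGImage, cropRect);
UIImage *croppedShot = [UIImage imageWithCGImage:croppedRef scale:viewImage.scale orientation:viewImage.imageOrientation];
CGImageRelease(croppedRef);
[croppedImage setImage:croppedShot];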
Setting the scene
I am working on a video processing app that runs from the command line to read in, process and then export video. I'm working with 4 tracks.
Lots of clips that I append into a single track to make one video. Let's call this the ugcVideoComposition.
Clips with alpha, which are positioned on a second track and, using layer instructions, are composited on export to play back over the top of the ugcVideoComposition.
A music audio track.
An audio track for the ugcVideoComposition containing the audio from the clips appended into the single track.
I have all of this working and can composite and export it correctly using AVAssetExportSession.
The problem
What I now want to do is apply filters and gradients to the ugcVideoComposition.
My research so far suggests that this is done by using AVAssetReader and AVAssetWriter, extracting a CIImage, manipulating it with filters, and then writing that out.
I haven't yet got all the functionality I had above working, but I have managed to get the ugcVideoComposition read in and written back out to disk using the AssetReader and AssetWriter.
BOOL done = NO;
while (!done)
{
while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
{
CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
if (sampleBuffer)
{
// Let's try to create an image...
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];
// < Apply filters and transformations to the CIImage here
// < HOW TO GET THE TRANSFORMED IMAGE BACK INTO SAMPLE BUFFER??? >
// Write things back out.
[assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why we couldn't get another sample buffer....
if (assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = assetReader.error;
// Do something with this error.
}
else
{
// Some kind of success....
done = YES;
[assetWriter finishWriting];
}
}
}
}
As you can see, I can even get the CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply any effects etc. I need. What I don't know how to do is put the resulting manipulated image BACK into the SampleBuffer so I can write it out again.
The question
Given a CIImage, how can I put that into a sampleBuffer to append it with the assetWriter?
Any help appreciated. The AVFoundation documentation is terrible and either misses crucial points (like how to put an image back after you've extracted it) or is focused on rendering images to the iPhone screen, which is not what I want to do.
Much appreciated and thanks!
I eventually found a solution by digging through a lot of half-complete samples and poor AVFoundation documentation from Apple.
The biggest confusion is that while at a high level, AVFoundation is "reasonably" consistent between iOS and OSX, the lower level items behave differently, have different methods and different techniques. This solution is for OSX.
Setting up your AssetWriter
The first thing is to make sure that when you set up the asset writer, you add an adaptor to read in from a CVPixelBuffer. This buffer will contain the modified frames.
// Create the asset writer input and add it to the asset writer.
AVAssetWriterInput *assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType] outputSettings:videoSettings];
// Now create an adaptor that writes pixels too!
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
sourcePixelBufferAttributes:nil];
assetWriterVideoInput.expectsMediaDataInRealTime = NO;
[assetWriter addInput:assetWriterVideoInput];
Reading and Writing
The challenge here is that I couldn't find directly comparable methods between iOS and OSX: iOS has the ability to render a context directly to a PixelBuffer, whereas OSX does NOT support that option. The context is also configured differently between iOS and OSX.
Note that you should also add QuartzCore.framework to your Xcode project.
Creating the context on OSX.
CIContext *context = [CIContext contextWithCGContext:
[[NSGraphicsContext currentContext] graphicsPort]
options: nil]; // We don't want to always create a context so we put it outside the loop
Now you want to loop through, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created previously, not with the SampleBuffer.
while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
{
CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
if (sampleBuffer)
{
CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
// GRAB AN IMAGE FROM THE SAMPLE BUFFER
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:640.0], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:360.0], kCVPixelBufferHeightKey,
nil];
CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];
//-----------------
// FILTER IMAGE - APPLY ANY FILTERS IN HERE
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
[filter setDefaults];
[filter setValue: inputImage forKey: kCIInputImageKey];
[filter setValue: @1.0f forKey: kCIInputIntensityKey];
CIImage *outputImage = [filter valueForKey: kCIOutputImageKey];
//-----------------
// RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
// 1. Firstly render the image
CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
// 2. Grab the size
CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));
// 3. Convert the CGImage to a PixelBuffer
CVPixelBufferRef pxBuffer = NULL;
// pixelBufferFromCGImage is documented below.
pxBuffer = [self pixelBufferFromCGImage: finalImage andSize: size];
// 4. Write things back out.
// Calculate the frame time
CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS
CMTime presentTime=CMTimeAdd(currentTime, frameTime); // Note that if you actually had a sequence of images (an animation or transition perhaps), your frameTime would represent the number of images / frames, not just 1 as I've done here.
// Finally write out using the adaptor.
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why we couldn't get another sample buffer....
if (assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = assetReader.error;
// Do something with this error.
}
else
{
// Some kind of success....
done = YES;
[assetWriter finishWriting];
}
}
}
}
Creating the PixelBuffer
There MUST be an easier way; however, for now this works, and it is the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OSX. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio.
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
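As a side note, if you can target macOS 10.11 / iOS 5 or later, Core Image can render straight into a CVPixelBuffer, which would let you skip the CGImage round trip above. A rough, untested sketch, reusing the adaptor, context, outputImage and presentTime from the loop, and assuming the adaptor was created with non-nil sourcePixelBufferAttributes (with nil attributes its pixelBufferPool is not created):
// Untested alternative: render the filtered CIImage directly into a pixel buffer
// drawn from the adaptor's pool, instead of going CIImage -> CGImage -> CVPixelBuffer.
CVPixelBufferRef directBuffer = NULL;
CVReturn poolStatus = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &directBuffer);
if (poolStatus == kCVReturnSuccess && directBuffer != NULL)
{
    [context render:outputImage toCVPixelBuffer:directBuffer];
    [adaptor appendPixelBuffer:directBuffer withPresentationTime:presentTime];
    CVPixelBufferRelease(directBuffer);
}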
Try using SDAVAssetExportSession:
SDAVAssetExportSession on GitHub
Then implement a delegate to process the pixels:
- (void)exportSession:(SDAVAssetExportSession *)exportSession renderFrame:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime toBuffer:(CVPixelBufferRef)renderBuffer
{ /* Do your CIImage and CIFilter work in here */ }
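To make that placeholder a little more concrete, here is a minimal, untested sketch of what the delegate body might look like. The sepia filter and the ciContext property are assumptions, not part of SDAVAssetExportSession; the only real requirement is that you render your filtered output into renderBuffer.
// Minimal sketch of the SDAVAssetExportSession delegate callback.
// Assumes self.ciContext was created once, e.g. self.ciContext = [CIContext contextWithOptions:nil];
- (void)exportSession:(SDAVAssetExportSession *)exportSession renderFrame:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime toBuffer:(CVPixelBufferRef)renderBuffer
{
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // Apply whatever Core Image filters you like; sepia is just an example.
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setDefaults];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@1.0 forKey:kCIInputIntensityKey];
    // Render the filtered image into the buffer the export session will write out.
    [self.ciContext render:filter.outputImage toCVPixelBuffer:renderBuffer];
}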
Until the iOS 7 update I was using...
UIImage *image = [moviePlayer thumbnailImageAtTime:1.0 timeOption:MPMovieTimeOptionNearestKeyFrame];
...with great success, so that my app could show a still of the video that the user had just taken.
I understand this method has been deprecated as of iOS 7, and I need an alternative. I see there's a method:
- (void)requestThumbnailImagesAtTimes:(NSArray *)playbackTimes timeOption:(MPMovieTimeOption)option
though how do I return the image from it so I can place it within the videoReview button image?
Thanks in advance, Jim.
EDIT: after trying the notification centre method
I used the following code:
[moviePlayer requestThumbnailImagesAtTimes:times timeOption:MPMovieTimeOptionNearestKeyFrame];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(MPMoviePlayerThumbnailImageRequestDidFinishNotification::) name:MPMoviePlayerThumbnailImageRequestDidFinishNotification object:moviePlayer];
I made the NSArray times of two NSNumber objects 1 & 2.
I then tried to capture the notification in the following method
-(void)MPMoviePlayerThumbnailImageRequestDidFinishNotification: (NSDictionary*)info{
UIImage *image = [info objectForKey:MPMoviePlayerThumbnailImageKey];
I then proceeded to use this thumbnail image as the button image as a preview... but it didn't work.
If you can see from my code where I've gone wrong, your help would be appreciated again. Cheers.
I managed to find a great way using AVAssetImageGenerator; please see the code below...
AVURLAsset *asset1 = [[AVURLAsset alloc] initWithURL:partOneUrl options:nil];
AVAssetImageGenerator *generate1 = [[AVAssetImageGenerator alloc] initWithAsset:asset1];
generate1.appliesPreferredTrackTransform = YES;
NSError *err = NULL;
CMTime time = CMTimeMake(1, 2);
CGImageRef oneRef = [generate1 copyCGImageAtTime:time actualTime:NULL error:&err];
UIImage *one = [[UIImage alloc] initWithCGImage:oneRef];
[_firstImage setImage:one];
_firstImage.contentMode = UIViewContentModeScaleAspectFit;
Within the header file, please import:
#import <AVFoundation/AVFoundation.h>
It works perfectly, and I've been able to call it from viewDidLoad, which was quicker than calling the deprecated thumbnailImageAtTime: from viewDidAppear.
Hope this helps anyone else who had the same problem.
Update for Swift 5.1:
Useful function...
func createThumbnailOfVideoUrl(url: URL) -> UIImage? {
let asset = AVAsset(url: url)
let assetImgGenerate = AVAssetImageGenerator(asset: asset)
assetImgGenerate.appliesPreferredTrackTransform = true
let time = CMTimeMakeWithSeconds(1.0, preferredTimescale: 600)
do {
let img = try assetImgGenerate.copyCGImage(at: time, actualTime: nil)
let thumbnail = UIImage(cgImage: img)
return thumbnail
} catch {
print(error.localizedDescription)
return nil
}
}
The requestThumbnailImagesAtTimes:timeOption: method will post a MPMoviePlayerThumbnailImageRequestDidFinishNotification notification when an image request completes. Your code that needs the thumbnail image should subscribe to this notification using NSNotificationCenter, and use the image when it receives the notification.
The problem is that you have to specify float values in requestThumbnailImagesAtTimes.
For example, this will work
[self.moviePlayer requestThumbnailImagesAtTimes:@[@14.f] timeOption:MPMovieTimeOptionNearestKeyFrame];
but this won't work:
[self.moviePlayer requestThumbnailImagesAtTimes:@[@14] timeOption:MPMovieTimeOptionNearestKeyFrame];
The way to do it, at least in iOS 7, is to use floats for your times:
NSNumber *timeStamp = @1.f;
[moviePlayer requestThumbnailImagesAtTimes:@[timeStamp] timeOption:MPMovieTimeOptionNearestKeyFrame];
Hope this helps
Jeely provides a good workaround, but it requires an additional library that isn't necessary when MPMoviePlayer already provides functions for this task. I noticed a syntax error in the original poster's code: the thumbnail notification handler receives an object of type NSNotification, not a dictionary object. Here's a corrected example:
-(void)MPMoviePlayerThumbnailImageRequestDidFinishNotification: (NSNotification*)note
{
NSDictionary * userInfo = [note userInfo];
UIImage *image = (UIImage *)[userInfo objectForKey:MPMoviePlayerThumbnailImageKey];
if(image!=NULL)
[thumbView setImage:image];
}
I've just looked for a solution for this problem myself and got good help from your question.
I got your code above to work with one small change: I removed a colon.
Change
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(MPMoviePlayerThumbnailImageRequestDidFinishNotification::) name:MPMoviePlayerThumbnailImageRequestDidFinishNotification object:moviePlayer];
to
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(MPMoviePlayerThumbnailImageRequestDidFinishNotification:) name:MPMoviePlayerThumbnailImageRequestDidFinishNotification object:moviePlayer];
I got this to work nicely. Also, I've found that you can't call a method that relies on NSNotificationCenter if you're already in a notification selector. Something I tried at first was calling requestThumbnailImagesAtTimes inside the notification selector for MPMoviePlayerPlaybackDidFinishNotification, which won't work, I think because the notification won't fire.
The code in Swift 2.1 would look like this:
do{
let asset1 = AVURLAsset(URL: url)
let generate1: AVAssetImageGenerator = AVAssetImageGenerator(asset: asset1)
generate1.appliesPreferredTrackTransform = true
let time: CMTime = CMTimeMake(3, 1) //TO CATCH THE THIRD SECOND OF THE VIDEO
let oneRef: CGImageRef = try generate1.copyCGImageAtTime(time, actualTime: nil)
let resultImage = UIImage(CGImage: oneRef)
}
catch let error as NSError{
print(error)
}
Let me tell you about the problem I am having and how I tried to solve it. I have a UIScrollView which loads subviews as one scrolls from left to right. Each subview has 10-20 images around 400x200 each. When I scroll from view to view, I experience quite a bit of lag.
After investigating, I discovered that after unloading all the views and trying it again, the lag was gone. I figured that the synchronous caching of the images was the cause of the lag. So I created a subclass of UIImageView which loaded the images asynchronously. The loading code looks like the following (self.dispatchQueue returns a serial dispatch queue).
- (void)loadImageNamed:(NSString *)name {
dispatch_async(self.dispatchQueue, ^{
UIImage *image = [UIImage imageNamed:name];
dispatch_sync(dispatch_get_main_queue(), ^{
self.image = image;
});
});
}
However, after changing all of my UIImageViews to this subclass, I still experienced lag (I'm not sure if it was lessened or not). I boiled down the cause of the problem to self.image = image;. Why is this causing so much lag (but only on the first load)?
Please help me. =(
EDIT 3: iOS 15 now offers UIImage.prepareForDisplay(completionHandler:).
image.prepareForDisplay { decodedImage in
imageView.image = decodedImage
}
or
imageView.image = await image.byPreparingForDisplay()
EDIT 2: Here is a Swift version that contains a few improvements. (Untested.)
https://gist.github.com/fumoboy007/d869e66ad0466a9c246d
EDIT: Actually, I believe all that is necessary is the following. (Untested.)
- (void)loadImageNamed:(NSString *)name {
dispatch_async(self.dispatchQueue, ^{
// Determine path to image depending on scale of device's screen,
// fallback to 1x if 2x is not available
NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];
NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;
UIImage *image = [[UIImage alloc] initWithContentsOfFile:pathToImage];
// Decompress image
if (image) {
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawAtPoint:CGPointZero];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
// Configure the UI with pre-decompressed UIImage
dispatch_async(dispatch_get_main_queue(), ^{
self.image = image;
});
});
}
ORIGINAL ANSWER: It turns out that it wasn't self.image = image; directly. The UIImage image loading methods don't decompress and process the image data right away; they do it when the view refreshes its display. So the solution was to go a level lower to Core Graphics and decompress and process the image data myself. The new code looks like the following.
- (void)loadImageNamed:(NSString *)name {
dispatch_async(self.dispatchQueue, ^{
// Determine path to image depending on scale of device's screen,
// fallback to 1x if 2x is not available
NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];
NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;
UIImage *uiImage = nil;
if (pathToImage) {
// Load the image
CGDataProviderRef imageDataProvider = CGDataProviderCreateWithFilename([pathToImage fileSystemRepresentation]);
CGImageRef image = CGImageCreateWithPNGDataProvider(imageDataProvider, NULL, NO, kCGRenderingIntentDefault);
// Create a bitmap context from the image's specifications
// (Note: We need to specify kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little
// because PNGs are optimized by Xcode this way.)
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CGImageGetWidth(image) * 4, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
// Draw the image into the bitmap context
CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
// Extract the decompressed image
CGImageRef decompressedImage = CGBitmapContextCreateImage(bitmapContext);
// Create a UIImage
uiImage = [[UIImage alloc] initWithCGImage:decompressedImage];
// Release everything
CGImageRelease(decompressedImage);
CGContextRelease(bitmapContext);
CGColorSpaceRelease(colorSpace);
CGImageRelease(image);
CGDataProviderRelease(imageDataProvider);
}
// Configure the UI with pre-decompressed UIImage
dispatch_async(dispatch_get_main_queue(), ^{
self.image = uiImage;
});
});
}
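As a side note, on iOS 7 and later ImageIO can do the forced decode for you via kCGImageSourceShouldCacheImmediately, which avoids the manual bitmap-context dance. A rough, untested sketch (the helper name is mine):
// Untested sketch: force-decode with ImageIO instead of drawing into a CGBitmapContext.
// Requires #import <ImageIO/ImageIO.h>; kCGImageSourceShouldCacheImmediately is iOS 7+.
static UIImage *DecodedImageAtPath(NSString *path)
{
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (!source) return nil;
    NSDictionary *options = @{ (__bridge id)kCGImageSourceShouldCacheImmediately : @YES };
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!cgImage) return nil;
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}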
I think the problem could be the images themselves. For example, in one of my projects I had 10 images, 640x600 each, layered with alpha transparency on top of each other. When I try to push or pop a view controller from that view controller, it lags a lot.
When I leave only a few images, or use much smaller images, there is no lag.
P.S. Tested on SDK 4.2, iOS 5, iPhone 4.
I'm trying to get a screenshot of an MKMapView, and I'm using the following code:
UIGraphicsBeginImageContext(myMapView.frame.size);
[myMapView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShot=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenShot;
And I'm getting an almost blank image with the map's current-location icon and a Google logo in it.
What could be causing that?
I should tell you that myMapView is actually on another view controller's view, but since I'm getting the blue dot showing the location and the Google logo, I assume the reference I have is the correct one.
Thank you.
iOS 7 introduced a new way to generate screenshots of an MKMapView. It is now possible to use the new MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..]
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc]init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc]initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
if( error ) {
NSLog( #"An error occurred: %#", error );
} else {
[UIImagePNGRepresentation( snapshot.image ) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
}
}];
Note that overlays and annotations are not rendered in the snapshot. You have to render them onto the resulting snapshot image yourself afterwards. The provided MKMapSnapshot object has a handy helper method to do the mapping between coordinates and points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
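For example, a rough sketch of compositing pin images onto the snapshot afterwards (the annotations array and pinImage are assumptions; use whatever artwork your annotation views actually use):
// Untested sketch: draw a pin image on top of the snapshot for each annotation.
UIGraphicsBeginImageContextWithOptions(snapshot.image.size, YES, snapshot.image.scale);
[snapshot.image drawAtPoint:CGPointZero];
for (id<MKAnnotation> annotation in annotations)
{
    CGPoint point = [snapshot pointForCoordinate:annotation.coordinate];
    if (CGRectContainsPoint((CGRect){ CGPointZero, snapshot.image.size }, point))
    {
        // Offset so the pin's tip sits on the coordinate (the offsets are an assumption).
        [pinImage drawAtPoint:CGPointMake(point.x - pinImage.size.width / 2.0, point.y - pinImage.size.height)];
    }
}
UIImage *annotatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();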
As mentioned here, you can try this
- (UIImage*) renderToImage
{
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
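If you add this as a category method on UIView (an assumption; the snippet doesn't say where it lives), usage is a one-liner:
// Assuming renderToImage is exposed via a UIView category.
UIImage *mapImage = [myMapView renderToImage];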
I'd like to take a UITextView and allow the user to enter text into it and then trigger a copy of the contents onto a quartz bitmap context. Does anyone know how I can perform this copy action? Should I override the drawRect method and call [super drawRect] and then take the resulting context and copy it? If so, does anyone have any reference to sample code to copy from one context to another?
Update: from reading the link in the answer below, I put together this much to attempt to copy my UIView contents into a bitmap context, but something is still not right. I get my contents mirrored across the X axis (i.e. upside down). I tried using CGContextScaleCTM() but that seems to have no effect.
I've verified that the created UIImage from the first four lines do properly create a UIImage that isn't strangely rotated/flipped, so there is something I'm doing wrong with the later calls.
// copy contents to bitmap context
UIGraphicsBeginImageContext(mTextView.bounds.size);
[mTextView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setNeedsDisplay];
// render the created image to the bitmap context
CGImageRef cgImage = [image CGImage];
CGContextScaleCTM(mContext, 1.0, -1.0); // doesn't seem to change result
CGContextDrawImage(mContext, CGRectMake(
mTextView.frame.origin.x,
mTextView.frame.origin.y,
[image size].width, [image size].height), cgImage);
Any suggestions?
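For reference, the usual way to compensate for the flipped Core Graphics coordinate system is to pair the negative scale with a translation over the target rect before calling CGContextDrawImage. An untested sketch, reusing mContext, mTextView and image from above (whether you need it depends on how mContext was created):
// Flip the CTM vertically around the target rect, then draw.
CGRect drawRect = CGRectMake(mTextView.frame.origin.x, mTextView.frame.origin.y, [image size].width, [image size].height);
CGContextSaveGState(mContext);
CGContextTranslateCTM(mContext, 0, CGRectGetMinY(drawRect) + CGRectGetMaxY(drawRect));
CGContextScaleCTM(mContext, 1.0, -1.0);
CGContextDrawImage(mContext, drawRect, [image CGImage]);
CGContextRestoreGState(mContext);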
Here is the code I used to get a UIImage of a UIView:
@implementation UIView (Screenshot)
- (UIImage *)screenshot{
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [UIScreen mainScreen].scale);
/* iOS 7 */
BOOL visible = !self.hidden && self.superview;
CGFloat alpha = self.alpha;
BOOL animating = self.layer.animationKeys != nil;
BOOL success = YES;
if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]){
//only works when visible
if (!animating && alpha == 1 && visible) {
success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
}else{
self.alpha = 1;
success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
self.alpha = alpha;
}
}
if(!success){ /* iOS 6 */
self.alpha = 1;
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
self.alpha = alpha;
}
UIImage* img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
@end
In iOS 7 and later you can also use:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
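Note that, unlike the snippets above, this returns a lightweight snapshot UIView rather than a UIImage, so it is only suitable when you just need to show the snapshot on screen, for example (someView is a placeholder):
// Returns a UIView snapshot (not a UIImage); add it to the hierarchy to display it.
UIView *snapshot = [someView snapshotViewAfterScreenUpdates:NO];
snapshot.frame = someView.frame;
[someView.superview addSubview:snapshot];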