I'm trying to get a screenshot of an MKMapView using the following code:
UIGraphicsBeginImageContext(myMapView.frame.size);
[myMapView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShot=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenShot;
And I'm getting an almost blank image with the current-location icon and a Google logo in it.
What could be causing that?
I should mention that myMapView actually belongs to another view controller's view, but since I'm getting the blue dot showing the location and the Google logo, I assume the reference I have is the correct one.
Thank you.
iOS 7 introduced a new way to generate screenshots of an MKMapView. It is now possible to use the MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..]
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"An error occurred: %@", error);
    } else {
        [UIImagePNGRepresentation(snapshot.image) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
    }
}];
Note that overlays and annotations are not rendered into the snapshot; you have to draw them onto the resulting image yourself afterwards. The provided MKMapSnapshot object has a handy helper method to map coordinates to image points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
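For example, here is a minimal sketch of compositing a pin onto the snapshot; pinImage and coordinate are placeholder names of mine, not part of the MapKit API:

UIGraphicsBeginImageContextWithOptions(snapshot.image.size, YES, snapshot.image.scale);
[snapshot.image drawAtPoint:CGPointZero];

// Map the annotation's coordinate into the snapshot's image space.
CGPoint point = [snapshot pointForCoordinate:coordinate];

// Draw the pin so its tip sits on the mapped point.
[pinImage drawAtPoint:CGPointMake(point.x - pinImage.size.width / 2.0,
                                  point.y - pinImage.size.height)];

UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();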
As mentioned here, you can try this
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
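Usage is then straightforward, assuming the method lives in a UIView category (or on the view subclass itself):

UIImage *mapImage = [myMapView renderToImage];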
Related
Until the iOS 7 update I was using...
UIImage *image = [moviePlayer thumbnailImageAtTime:1.0 timeOption:MPMovieTimeOptionNearestKeyFrame];
...with great success, so that my app could show a still of the video that the user had just taken.
I understand this method has been deprecated as of iOS 7 and I need an alternative. I see there's a method:
- (void)requestThumbnailImagesAtTimes:(NSArray *)playbackTimes timeOption:(MPMovieTimeOption)option
though how do I return the image from it so I can place it within the videoReview button image?
Thanks in advance, Jim.
*** Edited question, after trying the notification centre method ***
I used the following code:
[moviePlayer requestThumbnailImagesAtTimes:times timeOption:MPMovieTimeOptionNearestKeyFrame];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(MPMoviePlayerThumbnailImageRequestDidFinishNotification::) name:MPMoviePlayerThumbnailImageRequestDidFinishNotification object:moviePlayer];
I made the NSArray times out of two NSNumber objects, 1 and 2, roughly like this:
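NSArray *times = @[@1, @2];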
I then tried to capture the notification in the following method
-(void)MPMoviePlayerThumbnailImageRequestDidFinishNotification: (NSDictionary*)info{
UIImage *image = [info objectForKey:MPMoviePlayerThumbnailImageKey];
I then proceeded to use this thumbnail image as the button image as a preview... but it didn't work.
If you can see from my code where I've gone wrong, your help would be appreciated again. Cheers.
I managed to find a great way using AVAssetImageGenerator; please see the code below...
AVURLAsset *asset1 = [[AVURLAsset alloc] initWithURL:partOneUrl options:nil];
AVAssetImageGenerator *generate1 = [[AVAssetImageGenerator alloc] initWithAsset:asset1];
generate1.appliesPreferredTrackTransform = YES;
NSError *err = NULL;
CMTime time = CMTimeMake(1, 2); // value/timescale, i.e. 0.5 seconds in
CGImageRef oneRef = [generate1 copyCGImageAtTime:time actualTime:NULL error:&err];
UIImage *one = [[UIImage alloc] initWithCGImage:oneRef];
CGImageRelease(oneRef); // copyCGImageAtTime returns a +1 reference
[_firstImage setImage:one];
_firstImage.contentMode = UIViewContentModeScaleAspectFit;
Within the header file, please import:
#import <AVFoundation/AVFoundation.h>
It works perfectly, and I've been able to call it from viewDidLoad, which was quicker than calling the deprecated thumbnailImageAtTime: from viewDidAppear.
Hope this helps anyone else who had the same problem.
**Update for Swift 5.1**
Useful function...
func createThumbnailOfVideoUrl(url: URL) -> UIImage? {
    let asset = AVAsset(url: url)
    let assetImgGenerate = AVAssetImageGenerator(asset: asset)
    assetImgGenerate.appliesPreferredTrackTransform = true
    let time = CMTimeMakeWithSeconds(1.0, preferredTimescale: 600)
    do {
        let img = try assetImgGenerate.copyCGImage(at: time, actualTime: nil)
        let thumbnail = UIImage(cgImage: img)
        return thumbnail
    } catch {
        print(error.localizedDescription)
        return nil
    }
}
The requestThumbnailImagesAtTimes:timeOption: method will post a MPMoviePlayerThumbnailImageRequestDidFinishNotification notification when an image request completes. Your code that needs the thumbnail image should subscribe to this notification using NSNotificationCenter, and use the image when it receives the notification.
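A minimal sketch of that flow; the handler name, the moviePlayer property, and the videoReviewButton outlet are placeholders of my own, not names from the MediaPlayer API:

- (void)requestThumbnails
{
    // Subscribe before requesting, so the notification is not missed.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(thumbnailRequestDidFinish:)
                                                 name:MPMoviePlayerThumbnailImageRequestDidFinishNotification
                                               object:self.moviePlayer];
    [self.moviePlayer requestThumbnailImagesAtTimes:@[@1.f, @2.f]
                                         timeOption:MPMovieTimeOptionNearestKeyFrame];
}

- (void)thumbnailRequestDidFinish:(NSNotification *)note
{
    // The generated image arrives in the notification's userInfo dictionary.
    UIImage *thumbnail = note.userInfo[MPMoviePlayerThumbnailImageKey];
    [self.videoReviewButton setImage:thumbnail forState:UIControlStateNormal];
}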
The problem is that you have to specify float values in requestThumbnailImagesAtTimes:.
For example, this will work
[self.moviePlayer requestThumbnailImagesAtTimes:@[@14.f] timeOption:MPMovieTimeOptionNearestKeyFrame];
but this won't work:
[self.moviePlayer requestThumbnailImagesAtTimes:@[@14] timeOption:MPMovieTimeOptionNearestKeyFrame];
The way to do it, at least in iOS 7, is to use floats for your times:
NSNumber *timeStamp = @1.f;
[moviePlayer requestThumbnailImagesAtTimes:@[timeStamp] timeOption:MPMovieTimeOptionNearestKeyFrame];
Hope this helps
Jeely provides a good workaround, but it requires an additional library, which isn't necessary when MPMoviePlayer already provides functions for this task. I also noticed a syntax error in the original poster's code: the thumbnail notification handler receives an object of type NSNotification, not a dictionary. Here's a corrected example:
- (void)MPMoviePlayerThumbnailImageRequestDidFinishNotification:(NSNotification *)note
{
    NSDictionary *userInfo = [note userInfo];
    UIImage *image = (UIImage *)[userInfo objectForKey:MPMoviePlayerThumbnailImageKey];
    if (image != nil)
        [thumbView setImage:image];
}
I was just looking for a solution to this problem myself and got good help from your question. I got your code above to work with one small change: I removed a colon.
Change
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(MPMoviePlayerThumbnailImageRequestDidFinishNotification::) name:MPMoviePlayerThumbnailImageRequestDidFinishNotification object:moviePlayer];
to
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(MPMoviePlayerThumbnailImageRequestDidFinishNotification:) name:MPMoviePlayerThumbnailImageRequestDidFinishNotification object:moviePlayer];
I got this to work nicely. Also, I've found that you can't call a method that relies on the notification centre if you're already inside a notification selector. It's something I tried at first: calling requestThumbnailImagesAtTimes: inside the notification selector for MPMoviePlayerPlaybackDidFinishNotification won't work, I think because the notification never fires.
The code in Swift 2.1 would look like this:
do {
    let asset1 = AVURLAsset(URL: url)
    let generate1: AVAssetImageGenerator = AVAssetImageGenerator(asset: asset1)
    generate1.appliesPreferredTrackTransform = true
    let time: CMTime = CMTimeMake(3, 1) // to grab the third second of the video
    let oneRef: CGImageRef = try generate1.copyCGImageAtTime(time, actualTime: nil)
    let resultImage = UIImage(CGImage: oneRef)
} catch let error as NSError {
    print(error)
}
So I have been at it all day with no luck, and it has been, needless to say, quite frustrating. I have looked up many examples and downloadable categories, all of which tout being able to crop images flawlessly, and they do. However, the minute I try the same thing on an image generated via an AVCaptureSession, it does not work as well. I consulted both of these sources:
http://codefuel.wordpress.com/2011/04/22/image-cropping-from-a-uiscrollview/
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
The project from the first link works exactly as advertised, but as soon as I hack it to do the same magic on an AV capture image... nope. Does anyone have insight into this? Here is my code for reference.
- (IBAction)TakePhotoPressed:(id)sender
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    //NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments)
        {
            // Do something with the attachments.
            //NSLog(@"attachments: %@", exifAttachments);
        }
        else
            NSLog(@"no attachments");

        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        NSLog(@"%f", image.size.width);
        NSLog(@"%f", image.size.height);

        float scale = 1.0f / _scrollView.zoomScale;
        CGRect visibleRect;
        visibleRect.origin.x = _scrollView.contentOffset.x * scale;
        visibleRect.origin.y = _scrollView.contentOffset.y * scale;
        visibleRect.size.width = _scrollView.bounds.size.width * scale;
        visibleRect.size.height = _scrollView.bounds.size.height * scale;

        UIImage *cropped = [self cropImage:image withRect:visibleRect];
        [croppedImage setImage:cropped];
        [image release];
    }];

    [croppedImage setHidden:NO];
}
The cropImage function used above:
- (UIImage *)cropImage:(UIImage *)originalImage withRect:(CGRect)rect
{
    // CGImageCreateWithImageInRect works in the image's own (unrotated)
    // pixel space, so swap the rect for right-oriented images.
    CGRect transformedRect = rect;
    if (originalImage.imageOrientation == UIImageOrientationRight)
    {
        transformedRect.origin.x = rect.origin.y;
        transformedRect.origin.y = originalImage.size.width - (rect.origin.x + rect.size.width);
        transformedRect.size.width = rect.size.height;
        transformedRect.size.height = rect.size.width;
    }

    CGImageRef cr = CGImageCreateWithImageInRect(originalImage.CGImage, transformedRect);
    UIImage *cropped = [UIImage imageWithCGImage:cr scale:originalImage.scale orientation:originalImage.imageOrientation];

    [croppedImage setFrame:CGRectMake(croppedImage.frame.origin.x,
                                      croppedImage.frame.origin.y,
                                      cropped.size.width,
                                      cropped.size.height)];
    CGImageRelease(cr);
    return cropped;
}
For the sake of completeness, and to arm whoever might help me with as much information as possible, I'm tempted to also post the init code for my scroll view and AV capture session. That may be a bit much, though, so if you want to see it, just ask.
Now, as for the results of what the code actually does:
What it looks like before I take the picture:
And after:
EDIT:
Well, I have a few views now and no comments, so either no one has figured it out or it's so simple they thought I would figure it out myself. In any case, I have not made any progress, so for anyone interested, here is a small sample app with the code all set up so you can see what I am doing:
https://docs.google.com/open?id=0Bxr4V3a9QFM_NnoxMkhzZTVNVEE
It seems this little conundrum did not have only me stumped, as after nearly a week the scant few who viewed my question had no suggestions either. I could not get it to work that way, and I pondered, tinkered, and mused for a while to no avail. Until I did this:
[self HideElements];
UIGraphicsBeginImageContext(chosenPhotoView.frame.size);
[chosenPhotoView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self ShowElements];
And that's it: less code, and it worked pretty much instantly. So instead of trying to crop the image via the scroll view, I take a screenshot of the screen at that moment and then crop it using the scroll view's frame variables. The HideElements/ShowElements methods hide any elements overlapping the picture I want.
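A minimal sketch of that crop step, assuming chosenPhotoView contains the scroll view:

// Convert the scroll view's visible region into the coordinate space
// of the view that was rendered into viewImage.
CGRect cropRect = [_scrollView convertRect:_scrollView.bounds toView:chosenPhotoView];

// UIGraphicsBeginImageContext renders at scale 1.0, so points map 1:1
// to pixels here; scale the rect if you switch to ...WithOptions.
CGImageRef cg = CGImageCreateWithImageInRect(viewImage.CGImage, cropRect);
UIImage *croppedShot = [UIImage imageWithCGImage:cg];
CGImageRelease(cg);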
I'm trying to add your implementation to my project as a button called 'Gallery', but I receive an error when trying to run the project.
My implementation file looks like this:
- (IBAction)gallery {
    NSMutableArray *photos = [[NSMutableArray alloc] init];
    MWPhoto *photo;

    photo = [MWPhoto photoWithFilePath:[[NSBundle mainBundle] pathForResource:@"fans1" ofType:@"jpg"]];
    photo.caption = @"My fans 1";
    [photos addObject:photo];

    photo = [MWPhoto photoWithFilePath:[[NSBundle mainBundle] pathForResource:@"fans2" ofType:@"jpg"]];
    photo.caption = @"My fans 2";
    [photos addObject:photo];

    photo = [MWPhoto photoWithFilePath:[[NSBundle mainBundle] pathForResource:@"fans3" ofType:@"jpg"]];
    photo.caption = @"Fans3";
    [photos addObject:photo];

    photo = [MWPhoto photoWithFilePath:[[NSBundle mainBundle] pathForResource:@"fans4" ofType:@"jpg"]];
    photo.caption = @"Fans4";
    [photos addObject:photo];

    self.photos = photos;

    // Create browser
    MWPhotoBrowser *browser = [[MWPhotoBrowser alloc] initWithDelegate:self];
    browser.displayActionButton = YES;
    //browser.wantsFullScreenLayout = NO;
    //[browser setInitialPageIndex:2];

    // Show
    if (_segmentedControl.selectedSegmentIndex == 0) {
        // Push
        [self.navigationController pushViewController:browser animated:YES];
    } else {
        // Modal
        UINavigationController *nc = [[UINavigationController alloc] initWithRootViewController:browser];
        nc.modalTransitionStyle = UIModalTransitionStyleCrossDissolve;
        [self presentModalViewController:nc animated:YES];
    }
}

#pragma mark - MWPhotoBrowserDelegate

- (NSUInteger)numberOfPhotosInPhotoBrowser:(MWPhotoBrowser *)photoBrowser {
    return _photos.count;
}

- (MWPhoto *)photoBrowser:(MWPhotoBrowser *)photoBrowser photoAtIndex:(NSUInteger)index {
    if (index < _photos.count)
        return [_photos objectAtIndex:index];
    return nil;
}
When I run the project, the image gallery is not displayed. I'm fairly new to iOS development, so if you can point me in the right direction I will deeply appreciate it. The main goal is to have the image gallery displayed when the user touches the Gallery button.
I copied and edited the code from the MWPhotoBrowser demo project. I don't receive any errors, but I can't get the gallery to appear once the button is touched (and yes, I assigned the IBAction to the button). If there is another source or alternative framework I can use, please advise. Thanks!
Some ideas:
1) Try using initWithPhotos: (see the sketch below).
2) Put a breakpoint where you push the browser and check that the push is actually called.
3) Check in the debugger that the navigationController is not null (0x0).
4) Remember to release the browser.
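For idea 1, a minimal sketch, assuming the MWPhotoBrowser version in use still offers the initWithPhotos: initializer:

MWPhotoBrowser *browser = [[MWPhotoBrowser alloc] initWithPhotos:photos];
[self.navigationController pushViewController:browser animated:YES];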
I want to obtain the referenceURL to the image that I saved into camera roll using UIImageWriteToSavedPhotosAlbum().
On iOS 4.1 or above this can be done easily using the AssetsLibrary:
ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *url, NSError *error) {
    if (error == nil) {
        savedURL = url;
    }
};

UIImage *originalImage = [info objectForKey:UIImagePickerControllerOriginalImage];
NSMutableDictionary *metadata = (NSMutableDictionary *)[info objectForKey:UIImagePickerControllerMediaMetadata];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:originalImage.CGImage
                             metadata:metadata
                      completionBlock:completionBlock];
But I cannot figure out a smart way to do this on earlier versions of iOS, where the only way to save an image to the camera roll is UIImageWriteToSavedPhotosAlbum(). The one approach I can think of is searching for the saved image using ALAssetsGroup and the like. That is not smart, and it only helps from iOS 4.0 on.
Thank you in advance,
Kiyo
Use writeImageToSavedPhotosAlbum:orientation:completionBlock: instead:
[library writeImageToSavedPhotosAlbum:[originalImage CGImage]
                          orientation:(ALAssetOrientation)[originalImage imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        NSLog(@"error"); // oops, error!
    } else {
        NSLog(@"url %@", assetURL); // assetURL is the URL you're looking for
    }
}];
I'd like to take a UITextView and allow the user to enter text into it and then trigger a copy of the contents onto a quartz bitmap context. Does anyone know how I can perform this copy action? Should I override the drawRect method and call [super drawRect] and then take the resulting context and copy it? If so, does anyone have any reference to sample code to copy from one context to another?
Update: from reading the link in the answer below, I put together this much to attempt to copy my UIView contents into a bitmap context, but something is still not right. I get my contents mirrored across the X axis (i.e. upside down). I tried using CGContextScaleCTM() but that seems to have no effect.
I've verified that the created UIImage from the first four lines do properly create a UIImage that isn't strangely rotated/flipped, so there is something I'm doing wrong with the later calls.
// copy contents to bitmap context
UIGraphicsBeginImageContext(mTextView.bounds.size);
[mTextView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setNeedsDisplay];
// render the created image to the bitmap context
CGImageRef cgImage = [image CGImage];
CGContextScaleCTM(mContext, 1.0, -1.0); // doesn't seem to change result
CGContextDrawImage(mContext, CGRectMake(
mTextView.frame.origin.x,
mTextView.frame.origin.y,
[image size].width, [image size].height), cgImage);
Any suggestions?
Here is the code I used to get a UIImage of a UIView:
@implementation UIView (Screenshot)

- (UIImage *)screenshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [UIScreen mainScreen].scale);

    /* iOS 7 */
    BOOL visible = !self.hidden && self.superview;
    CGFloat alpha = self.alpha;
    BOOL animating = self.layer.animationKeys != nil;
    BOOL success = YES;
    if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        // drawViewHierarchyInRect: only works when the view is visible
        if (!animating && alpha == 1 && visible) {
            success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
        } else {
            self.alpha = 1;
            success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
            self.alpha = alpha;
        }
    }

    if (!success) { /* iOS 6 */
        self.alpha = 1;
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
        self.alpha = alpha;
    }

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

@end
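With the category in place, grabbing a view's contents is a one-liner (a usage sketch, with mTextView standing in for whatever view you want to capture):

UIImage *img = [mTextView screenshot];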
Alternatively, in iOS 7 and later you can use:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
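Note that this returns a UIView rather than a UIImage, so it is suited to cheap on-screen snapshots, not to drawing into a bitmap context. A quick usage sketch:

UIView *snapshot = [mTextView snapshotViewAfterScreenUpdates:NO];
[self.view addSubview:snapshot];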