iOS 8 UIImage Metadata

This is my first question.
I have a "little" problem:
when I read UIImage metadata on iOS 7, I use this code and it works great:
#pragma mark - Image Picker Controller delegate methods
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    [self metaDataFromAssetLibrary:info];
    [picker dismissViewControllerAnimated:YES completion:NULL];
}
From imagePickerController: I choose the image and call the metaDataFromAssetLibrary: method:
- (void)metaDataFromAssetLibrary:(NSDictionary *)info
{
    NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library assetForURL:assetURL
             resultBlock:^(ALAsset *asset) {
                 NSMutableDictionary *imageMetadata = nil;
                 NSDictionary *metadata = asset.defaultRepresentation.metadata;
                 imageMetadata = [[NSMutableDictionary alloc] initWithDictionary:metadata];
                 NSLog(@"imageMetaData from AssetLibrary %@", imageMetadata);
             }
            failureBlock:^(NSError *error) {
                NSLog(@"error %@", error);
            }];
}
On Xcode 5 and iOS 7, the console returns something like this:
imageMetaData from AssetLibrary {
ColorModel = RGB;
DPIHeight = 72;
DPIWidth = 72;
Depth = 8;
Orientation = 1;
PixelHeight = 2448;
PixelWidth = 3264;
"{Exif}" = {
ApertureValue = "2.52606882168926";
BrightnessValue = "2.211389961389961";
ColorSpace = 1;
ComponentsConfiguration = (
1,
2,
3,
0
);
DateTimeDigitized = "2014:06:05 08:54:09";
DateTimeOriginal = "2014:06:05 08:54:09";
ExifVersion = (
2,
2,
1
);
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.05";
FNumber = "2.4";
Flash = 24;
FlashPixVersion = (
1,
0
);
FocalLenIn35mmFilm = 35;
FocalLength = "4.28";
ISOSpeedRatings = (
125
);
LensMake = Apple;
LensModel = "iPhone 4S back camera 4.28mm f/2.4";
LensSpecification = (
"4.28",
"4.28",
"2.4",
"2.4"
);
MeteringMode = 3;
PixelXDimension = 3264;
PixelYDimension = 2448;
SceneCaptureType = 0;
SceneType = 1;
SensingMethod = 2;
ShutterSpeedValue = "4.321928460342146";
SubjectArea = (
1643,
1079,
610,
612
);
SubsecTimeDigitized = 347;
SubsecTimeOriginal = 347;
WhiteBalance = 0;
};
"{GPS}" = {
Altitude = 26;
AltitudeRef = 1;
DateStamp = "2014:06:05";
DestBearing = "177.086387434555";
DestBearingRef = M;
ImgDirection = "357.0864197530864";
ImgDirectionRef = M;
Latitude = "43.80268";
LatitudeRef = N;
Longitude = "11.0635195";
LongitudeRef = E;
Speed = 0;
SpeedRef = K;
TimeStamp = "06:54:08";
};
"{MakerApple}" = {
1 = 0;
3 = {
epoch = 0;
flags = 1;
timescale = 1000000000;
value = 27688938393500;
};
4 = 1;
5 = 186;
6 = 195;
7 = 1;
8 = (
"-0.6805536",
"0.02519802",
"-0.755379"
);
};
"{TIFF}" = {
DateTime = "2014:06:05 08:54:09";
Make = Apple;
Model = "iPhone 4S";
Orientation = 1;
ResolutionUnit = 2;
Software = "8.0";
XResolution = 72;
YResolution = 72;
};
}
But on Xcode 6 and iOS 8, the console returns only this:
imageMetaData from AssetLibrary {
ColorModel = RGB;
DPIHeight = 72;
DPIWidth = 72;
Depth = 8;
Orientation = 1;
PixelHeight = 768;
PixelWidth = 1020;
"{Exif}" = {
ColorSpace = 1;
ComponentsConfiguration = (
1,
2,
3,
0
);
ExifVersion = (
2,
2,
1
);
FlashPixVersion = (
1,
0
);
PixelXDimension = 1020;
PixelYDimension = 768;
SceneCaptureType = 0;
};
"{TIFF}" = {
Orientation = 1;
ResolutionUnit = 2;
XResolution = 72;
YResolution = 72;
};
}
Does anyone know about this problem? Any solution or suggestion?
Thank you so much.
P.S.: Excuse my terrible English ;-)

After upgrading to Xcode 6, you have to get the image in dismissViewControllerAnimated:'s completion block, or use the more involved approach of getting the image from the asset.
Please try the following code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [picker dismissViewControllerAnimated:YES completion:^{
        UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
        [self metaDataFromAssetLibrary:info];
    }];
}
You can also refer to john.k.doe's answer in
"didFinishPickingMediaWithInfo return nil photo"
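If the completion-block approach is not enough, another option on iOS 8 is the Photos framework, which replaced ALAssetsLibrary. Below is a minimal, untested sketch (my addition, not from the original answer; the method name metaDataFromPhotosFramework: is hypothetical), assuming the picker still supplies UIImagePickerControllerReferenceURL:
#import <Photos/Photos.h>
#import <ImageIO/ImageIO.h>

// Sketch: fetch the picked asset via the Photos framework (iOS 8+) and read
// its properties with ImageIO. Error handling and authorization omitted.
- (void)metaDataFromPhotosFramework:(NSDictionary *)info
{
    NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
    PHFetchResult *result = [PHAsset fetchAssetsWithALAssetURLs:@[assetURL] options:nil];
    PHAsset *asset = result.firstObject;
    if (asset == nil) return;
    [[PHImageManager defaultManager] requestImageDataForAsset:asset
                                                      options:nil
                                                resultHandler:^(NSData *imageData, NSString *dataUTI, UIImageOrientation orientation, NSDictionary *resultInfo) {
        CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
        if (source == NULL) return;
        NSDictionary *metadata = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
        NSLog(@"imageMetaData from Photos framework %@", metadata);
        CFRelease(source);
    }];
}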

Related

How to implement antialiasing in a Mac app using Metal?

I'm making an app for macOS, but I'm having problems with the antialiasing implementation in Metal.
At run time I get the following assertion failure:
validateAttachmentOnDevice:540: failed assertion `MTLRenderPassDescriptor render targets have inconsistent sample counts.'
This is the code I use to create the textures:
- (void)makeTextures {
    if ([depthTexture width] != drawableSize.width || [depthTexture height] != drawableSize.height)
    {
        MTLTextureDescriptor *colorDescriptor = [MTLTextureDescriptor new];
        colorDescriptor.textureType = MTLTextureType2DMultisample;
        colorDescriptor.sampleCount = 4;
        colorDescriptor.pixelFormat = vista.colorPixelFormat;
        colorDescriptor.width = drawableSize.width;
        colorDescriptor.height = drawableSize.height;
        colorDescriptor.usage = MTLTextureUsageRenderTarget;
        colorDescriptor.storageMode = MTLStorageModePrivate;
        msaaColorTexture = [view.device newTextureWithDescriptor:colorDescriptor];

        MTLTextureDescriptor *resolveDescriptor = [MTLTextureDescriptor new];
        resolveDescriptor.textureType = MTLTextureType2D;
        resolveDescriptor.pixelFormat = vista.colorPixelFormat;
        resolveDescriptor.width = drawableSize.width;
        resolveDescriptor.height = drawableSize.height;
        resolveDescriptor.storageMode = MTLStorageModePrivate;
        msaaResolveColorTexture = [view.device newTextureWithDescriptor:resolveDescriptor];

        MTLTextureDescriptor *desc = [MTLTextureDescriptor new];
        desc.textureType = MTLTextureType2DMultisample;
        desc.sampleCount = 4;
        desc.pixelFormat = MTLPixelFormatDepth32Float;
        desc.width = drawableSize.width;
        desc.height = drawableSize.height;
        desc.usage = MTLTextureUsageRenderTarget;
        desc.storageMode = MTLStorageModePrivate;
        depthTexture = [view.device newTextureWithDescriptor:desc];
    }
}
If I set sampleCount = 4 when creating the MTLRenderPipelineDescriptor, the error disappears, but nothing is drawn in the view.
This is the pipeline descriptor creation:
MTLRenderPipelineDescriptor *pipelineStateDescriptor = [MTLRenderPipelineDescriptor new];
pipelineStateDescriptor.label = @"Simple Pipeline";
pipelineStateDescriptor.vertexFunction = vertexFunction;
pipelineStateDescriptor.fragmentFunction = fragmentFunction;
pipelineStateDescriptor.colorAttachments[0].pixelFormat = vista.colorPixelFormat;
pipelineStateDescriptor.depthAttachmentPixelFormat = MTLPixelFormatDepth32Float;
// No error but black view.
//pipelineStateDescriptor.sampleCount = 4;
NSError *error;
pipelineState = [view.device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor error:&error];

// Depth stencil.
MTLDepthStencilDescriptor *depthStencilDescriptor = [MTLDepthStencilDescriptor new];
depthStencilDescriptor.depthCompareFunction = MTLCompareFunctionLess;
depthStencilDescriptor.depthWriteEnabled = YES;
depthStencilState = [view.device newDepthStencilStateWithDescriptor:depthStencilDescriptor];
And the pass descriptor:
if (passDescriptor != nil) {
    passDescriptor.colorAttachments[0].texture = msaaColorTexture;
    passDescriptor.colorAttachments[0].resolveTexture = msaaResolveColorTexture;
    passDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(1.0f, 1.0f, 1.0f, 1.0f);
    passDescriptor.colorAttachments[0].storeAction = MTLStoreActionMultisampleResolve;
    passDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    passDescriptor.depthAttachment.texture = depthTexture;
    passDescriptor.depthAttachment.clearDepth = 1.0;
    passDescriptor.depthAttachment.loadAction = MTLLoadActionClear;
    passDescriptor.depthAttachment.storeAction = MTLStoreActionStore;
}
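No answer is recorded here, but the assertion itself states the constraint: every attachment in a render pass must share one sample count, and the pipeline state's sampleCount must match it. As a point of comparison, here is a minimal sketch (my addition, assuming the view is an MTKView named view, which is not confirmed by the question) that lets the view manage matching MSAA targets instead of hand-building them:
// Sketch: MTKView can create the multisample color/depth targets itself,
// so the pass descriptor, textures, and pipeline all agree on sampleCount.
view.sampleCount = 4;
view.depthStencilPixelFormat = MTLPixelFormatDepth32Float;

MTLRenderPipelineDescriptor *pipelineDescriptor = [MTLRenderPipelineDescriptor new];
pipelineDescriptor.sampleCount = view.sampleCount; // must equal the render targets' sample count
pipelineDescriptor.colorAttachments[0].pixelFormat = view.colorPixelFormat;
pipelineDescriptor.depthAttachmentPixelFormat = view.depthStencilPixelFormat;
// view.currentRenderPassDescriptor then already carries the MSAA texture,
// the drawable as resolve target, and MTLStoreActionMultisampleResolve.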

Xamarin iOS: How to change the color of a UIImage pixel by pixel

Sorry if this has already been answered somewhere but I could not find it.
Basically, I am receiving a QR code where the code itself is black and the background is white (this is a UIImage). I would like to change the color of the white background to transparent or a custom color, and change the QR code color from black to white (in Xamarin.iOS).
I already know how to get the color of a specific pixel using the following code:
static UIColor GetPixelColor(CGBitmapContext context, byte[] rawData,
UIImage barcode, CGPoint pt)
{
var handle = GCHandle.Alloc(rawData);
UIColor resultColor = null;
using (context)
{
context.DrawImage(new CGRect(-pt.X, pt.Y - barcode.Size.Height,
barcode.Size.Width, barcode.Size.Height), barcode.CGImage);
float red = (rawData[0]) / 255.0f;
float green = (rawData[1]) / 255.0f;
float blue = (rawData[2]) / 255.0f;
float alpha = (rawData[3]) / 255.0f;
resultColor = UIColor.FromRGBA(red, green, blue, alpha);
}
return resultColor;
}
This is currently my function:
static UIImage GetRealQRCode(UIImage barcode)
{
int width = (int)barcode.Size.Width;
int height = (int)barcode.Size.Height;
var bytesPerPixel = 4;
var bytesPerRow = bytesPerPixel * width;
var bitsPerComponent = 8;
var colorSpace = CGColorSpace.CreateDeviceRGB();
var rawData = new byte[bytesPerRow * height];
CGBitmapContext context = new CGBitmapContext(rawData, width,
height, bitsPerComponent, bytesPerRow, colorSpace,
CGImageAlphaInfo.PremultipliedLast);
for (int i = 0; i < rawData.Length; i++)
{
for (int j = 0; j < bytesPerRow; j++)
{
CGPoint pt = new CGPoint(i, j);
UIColor currentColor = GetPixelColor(context, rawData,
barcode, pt);
}
}
}
Does anyone know how to do this?
Thanks in advance!
Assuming your UIImage is backed by a CGImage (and not a CIImage):
var cgImage = ImageView1.Image.CGImage; // Your UIImage with a CGImage backing image
var bytesPerPixel = 4;
var bitsPerComponent = 8;
var bytesPerUInt32 = sizeof(UInt32) / sizeof(byte);
var width = cgImage.Width;
var height = cgImage.Height;
var bytesPerRow = bytesPerPixel * cgImage.Width;
var numOfBytes = cgImage.Height * cgImage.Width * bytesPerUInt32;
IntPtr pixelPtr = IntPtr.Zero;
try
{
pixelPtr = Marshal.AllocHGlobal((int)numOfBytes);
using (var colorSpace = CGColorSpace.CreateDeviceRGB())
{
CGImage newCGImage;
using (var context = new CGBitmapContext(pixelPtr, width, height, bitsPerComponent, bytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast))
{
context.DrawImage(new CGRect(0, 0, width, height), cgImage);
unsafe
{
var currentPixel = (byte*)pixelPtr.ToPointer();
for (int i = 0; i < height; i++)
{
for (int j = 0; j < width; j++)
{
// RGBA8888 pixel format
if (*currentPixel == byte.MinValue)
{
*currentPixel = byte.MaxValue;
*(currentPixel + 1) = byte.MaxValue;
*(currentPixel + 2) = byte.MaxValue;
}
else
{
*currentPixel = byte.MinValue;
*(currentPixel + 1) = byte.MinValue;
*(currentPixel + 2) = byte.MinValue;
*(currentPixel + 3) = byte.MinValue;
}
currentPixel += 4;
}
}
}
newCGImage = context.ToImage();
}
var uiimage = new UIImage(newCGImage);
imageView2.Image = uiimage; // Do something with your new UIImage
}
}
finally
{
if (pixelPtr != IntPtr.Zero)
Marshal.FreeHGlobal(pixelPtr);
}
If you do not actually need access to the individual pixels but only the end result, you can use CoreImage's pre-existing filters: first invert the colors, then use the black pixels as an alpha mask. Otherwise, see my other answer using Marshal.AllocHGlobal and pointers.
using (var coreImage = new CIImage(ImageView1.Image))
using (var invertFilter = CIFilter.FromName("CIColorInvert"))
{
invertFilter.Image = coreImage;
using (var alphaMaskFiter = CIFilter.FromName("CIMaskToAlpha"))
{
alphaMaskFiter.Image = invertFilter.OutputImage;
var newCoreImage = alphaMaskFiter.OutputImage;
var uiimage = new UIImage(newCoreImage);
imageView2.Image = uiimage; // Do something with your new UIImage
}
}
The plus side is that this is blazing fast ;-) and the results are the same.
If you need even faster processing assuming you are batch converting a number of these images, you can write a custom CIKernel that incorporates these two filters into one kernel and thus only process the image once.
Xamarin.iOS: with this method you can convert all white pixels to transparent. For me it only works with .jpg files; with .png it doesn't work (likely because WithMaskingColors requires an image without an alpha channel, which PNGs usually carry), but you can convert the files to .jpg and then call this method.
public static UIImage ProcessImage (UIImage image)
{
CGImage rawImageRef = image.CGImage;
nfloat[] colorMasking = new nfloat[6] { 222, 255, 222, 255, 222, 255 };
CGImage imageRef = rawImageRef.WithMaskingColors(colorMasking);
UIImage imageB = UIImage.FromImage(imageRef);
return imageB;
}
Regards

SKPhysicsJoint is joining at what appears to be incorrect coordinates

In my simple archery game, I have defined an arrow node and a target node with associated physics bodies.
When I attempt to join them together using an SKPhysicsJointFixed (I have also tried other types), the behaviour is inaccurate with the joint seemingly created at random points before actually hitting the target node.
I have played with friction and restitution values and also SKShapeNode (with a CGPath) and SKSpriteNode (with a rectangle around the sprite) to define the target but the problem occurs with both.
When just using collisions, the arrows bounce off the correct locations on the target, which seems OK. It is only when the joint is created that the results become random on-screen, usually 10-20 points away from the target node's "surface".
static const uint32_t arrowCategory = 0x1 << 1;
static const uint32_t targetCategory = 0x1 << 2;
- (SKSpriteNode *)createArrowNode
{
    SKSpriteNode *arrow = [[SKSpriteNode alloc] initWithImageNamed:@"Arrow.png"];
    arrow.position = CGPointMake(165, 110);
    arrow.name = @"arrowNode";
    arrow.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:arrow.frame.size];
    arrow.physicsBody.angularVelocity = -0.25;
    arrow.physicsBody.usesPreciseCollisionDetection = YES;
    arrow.physicsBody.restitution = 0.0;
    arrow.physicsBody.friction = 0.0;
    arrow.physicsBody.categoryBitMask = arrowCategory;
    arrow.physicsBody.collisionBitMask = targetCategory;
    arrow.physicsBody.contactTestBitMask = /*arrowCategory | */targetCategory | bullseyeCategory;
    return arrow;
}
- (void)createTargetNode
{
    SKSpriteNode *sprite = [[SKSpriteNode alloc] initWithImageNamed:@"TargetOutline.png"];
    sprite.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:sprite.size];
    sprite.position = CGPointMake(610, 100);
    sprite.name = @"targetNode";
    sprite.physicsBody.usesPreciseCollisionDetection = NO;
    // sprite.physicsBody.affectedByGravity = NO;
    // sprite.physicsBody.mass = 20000;
    sprite.physicsBody.dynamic = NO;
    sprite.physicsBody.friction = 0.0;
    sprite.physicsBody.restitution = 0.0;
    sprite.physicsBody.categoryBitMask = targetCategory;
    sprite.physicsBody.contactTestBitMask = targetCategory | arrowCategory;
    [self addChild:sprite];
}
- (void)didBeginContact:(SKPhysicsContact *)contact
{
    SKPhysicsBody *firstBody, *secondBody;
    if (contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask)
    {
        firstBody = contact.bodyA;
        secondBody = contact.bodyB;
    }
    else
    {
        firstBody = contact.bodyB;
        secondBody = contact.bodyA;
    }
    if ((firstBody.categoryBitMask & arrowCategory) != 0 &&
        (secondBody.categoryBitMask & targetCategory) != 0)
    {
        CGPoint contactPoint = contact.contactPoint;
        NSLog(@"contactPoint is %@", NSStringFromCGPoint(contactPoint));
        float contact_x = contactPoint.x;
        float contact_y = contactPoint.y;
        SKPhysicsJoint *joint = [SKPhysicsJointFixed jointWithBodyA:firstBody bodyB:secondBody anchor:contactPoint];
        [self.physicsWorld addJoint:joint];
        CGPoint bullseye = CGPointMake(590, 102.5);
        NSLog(@"Center is %@", NSStringFromCGPoint(bullseye));
        CGFloat distance = SDistanceBetweenPoints(contactPoint, bullseye);
        NSLog(@"Distance to bullseye is %f", distance);
    }
}
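One thing worth trying (my suggestion, not from the original post): the physics world is still mid-step while didBeginContact: runs, so bodies can keep moving before a joint created there takes effect. A common workaround is to record the contact and create the joint in -didSimulatePhysics, after the step completes. A minimal sketch, where pendingAnchor, pendingBodyA, and pendingBodyB are hypothetical properties added to the scene:
// In didBeginContact:, instead of adding the joint immediately, record it
// (pendingAnchor/pendingBodyA/pendingBodyB are hypothetical scene properties):
//     self.pendingAnchor = contact.contactPoint;
//     self.pendingBodyA  = firstBody;
//     self.pendingBodyB  = secondBody;

// Then create the joint once the simulation step has finished.
- (void)didSimulatePhysics
{
    if (self.pendingBodyA != nil && self.pendingBodyB != nil) {
        SKPhysicsJointFixed *joint = [SKPhysicsJointFixed jointWithBodyA:self.pendingBodyA
                                                                   bodyB:self.pendingBodyB
                                                                  anchor:self.pendingAnchor];
        [self.physicsWorld addJoint:joint];
        self.pendingBodyA = nil;
        self.pendingBodyB = nil;
    }
}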

UInt8 EXC_BAD_ACCESS

I have a method that adds a filter to an image. This worked fine until a couple of months ago; now when I try to use this method, the application crashes on the image's buffer. I create the buffer and set it to the image's data, and accessing a specific index later causes a bad-access crash. I have looked for the past hour or two, and now I am convinced there is something I'm overlooking. I think something is being released that should not be. I am using the iOS DP 4 preview of Xcode, and I think this problem started with the update to the beta, but I am really not sure.
This is the line it crashes on, located near the middle of the first for loop:
m_PixelBuf[index+2] = m_PixelBuf[index+2]/*aRed*/;
Normally it is set to aRed, which I have checked; it should not go outside the buffer's boundaries.
-(void)contrastWithContrast:(float )contrast colorWithColor:(float )color{
drawImage.image = original;
UIImage * unfilteredImage2 = [[[UIImage alloc]initWithCGImage:drawImage.image.CGImage] autorelease];
CGImageRef inImage = unfilteredImage2.CGImage;
CGContextRef ctx;
CFDataRef m_DataRef;
m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
NSLog(#"Photo Length: %i",length);
//////Contrast/////////////
//NSLog(#"Contrast:%f",contrast);
int aRed;
int aGreen;
int aBlue;
for (int index = 0; index < length; index += 4){
aRed = m_PixelBuf[index+2];
aGreen = m_PixelBuf[index+1];
aBlue = m_PixelBuf[index];
aRed = (((aRed-128)*(contrast+100) )/100) + 128;
if (aRed < 0) aRed = 0; if (aRed>255) aRed=255;
m_PixelBuf[index+2] = m_PixelBuf[index+2]/*aRed*/;//Always crashes here
aGreen = (((aGreen-128)*(contrast+100) )/100) + 128;
if (aGreen < 0) aGreen = 0; if (aGreen>255) aGreen=255;
m_PixelBuf[index+1] = aGreen;
aBlue = (((aBlue-128)*(contrast+100) )/100) + 128;
if (aBlue < 0) aBlue = 0; if (aBlue>255) aBlue=255;
m_PixelBuf[index] = aBlue;
}
ctx = CGBitmapContextCreate(m_PixelBuf,
CGImageGetWidth( inImage ),
CGImageGetHeight( inImage ),
CGImageGetBitsPerComponent(inImage),
CGImageGetBytesPerRow(inImage ),
CGImageGetColorSpace(inImage ),
CGImageGetBitmapInfo(inImage) );
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [[UIImage alloc]initWithCGImage:imageRef];
drawImage.image = rawImage;
[rawImage release];
CGContextRelease(ctx);
CFRelease(imageRef);
CFRelease(m_DataRef);
unfilteredImage2 = [[[UIImage alloc]initWithCGImage:drawImage.image.CGImage] autorelease];
inImage = unfilteredImage2.CGImage;
m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
length = CFDataGetLength(m_DataRef);
///////Color////////////////
for (int index = 0; index < length; index += 4)
{
//Blue
if((m_PixelBuf[index] + ((int)color * 2))>255){
m_PixelBuf[index] = 255;
}else if((m_PixelBuf[index] + ((int)color * 2))<0){
m_PixelBuf[index] = 0;
}
else{
m_PixelBuf[index]=m_PixelBuf[index] + ((int)color * 2);
}
//Green
if((m_PixelBuf[index+1] + ((int)color * 2))>255){
m_PixelBuf[index+1] = 255;
}else if((m_PixelBuf[index+1] + ((int)color * 2))<0){
m_PixelBuf[index+1] = 0;
}
else{
m_PixelBuf[index+1]=m_PixelBuf[index+1] + ((int)color * 2);
}
//Red
if((m_PixelBuf[index+2] + ((int)color * 2))>255){
m_PixelBuf[index+2] = 255;
}else if((m_PixelBuf[index+2] + ((int)color * 2))<0){
m_PixelBuf[index+2] = 0;
}
else{
m_PixelBuf[index+2]=m_PixelBuf[index+2] + ((int)color * 2);
}
//m_PixelBuf[index+3]=255;//Alpha
}
ctx = CGBitmapContextCreate(m_PixelBuf,
CGImageGetWidth( inImage ),
CGImageGetHeight( inImage ),
CGImageGetBitsPerComponent(inImage),
CGImageGetBytesPerRow(inImage ),
CGImageGetColorSpace(inImage ),
CGImageGetBitmapInfo(inImage) );
imageRef = CGBitmapContextCreateImage (ctx);
rawImage = [[UIImage alloc]initWithCGImage:imageRef];
drawImage.image = rawImage;
[rawImage release];
CGContextRelease(ctx);
CFRelease(imageRef);
CFRelease(m_DataRef);
//drawImage.image = unfilteredImage2;
willUpdate = YES;
}
Sorry for any extra comments/info; I just copied the whole method in.
Thanks,
Storealutes
I had the same problem.
You should use the code below to get a pointer to the pixel buffer instead of CFDataGetBytePtr(). CFDataGetBytePtr() hands you the bytes of an immutable CFData; on recent SDKs that memory can be backed read-only, so writing through the pointer crashes. Drawing the image into your own bitmap context gives you a writable copy:
CGImageRef cgImage = originalImage.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
char *buffer = (char *)malloc(sizeof(char) * width * height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgContext = CGBitmapContextCreate(buffer, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextSetBlendMode(cgContext, kCGBlendModeCopy);
CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage);
// ... read and modify the pixels in `buffer` here; it is writable ...
CGContextRelease(cgContext);
CGColorSpaceRelease(colorSpace);
free(buffer);
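If you also need the edited pixels back as a UIImage, a minimal sketch (my addition, not part of the original answer) that runs after the modifications but before the cleanup above:
// Sketch: rebuild an image from the modified bitmap context
// (run this before CGContextRelease/free above).
CGImageRef newCGImage = CGBitmapContextCreateImage(cgContext);
UIImage *result = [UIImage imageWithCGImage:newCGImage];
CGImageRelease(newCGImage);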

UIImagePickerController returning incorrect image orientation

I'm using UIImagePickerController to capture an image and then store it. However, when I try to rescale it, the orientation value I get out of this image is incorrect. When I take a snap holding the phone upright, it gives me an orientation of Left. Has anyone experienced this issue?
The UIImagePickerController dictionary shows following information:
{
UIImagePickerControllerMediaMetadata = {
DPIHeight = 72;
DPIWidth = 72;
Orientation = 3;
"{Exif}" = {
ApertureValue = "2.970853654340484";
ColorSpace = 1;
DateTimeDigitized = "2011:02:14 10:26:17";
DateTimeOriginal = "2011:02:14 10:26:17";
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.06666666666666667";
FNumber = "2.8";
Flash = 32;
FocalLength = "3.85";
ISOSpeedRatings = (
125
);
MeteringMode = 1;
PixelXDimension = 2048;
PixelYDimension = 1536;
SceneType = 1;
SensingMethod = 2;
Sharpness = 1;
ShutterSpeedValue = "3.910431673351467";
SubjectArea = (
1023,
767,
614,
614
);
WhiteBalance = 0;
};
"{TIFF}" = {
DateTime = "2011:02:14 10:26:17";
Make = Apple;
Model = "iPhone 3GS";
Software = "4.2.1";
XResolution = 72;
YResolution = 72;
};
};
UIImagePickerControllerMediaType = "public.image";
UIImagePickerControllerOriginalImage = "<UIImage: 0x40efb50>";
}
However, picture returns imageOrientation == 1:
UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];
I just started working on this issue in my own app.
I used the UIImage category that Trevor Harmon crafted for resizing an image and fixing its orientation, UIImage+Resize.
Then you can do something like this in -imagePickerController:didFinishPickingMediaWithInfo:
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerEditedImage];
UIImage *resized = [pickedImage resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:pickedImage.size interpolationQuality:kCGInterpolationHigh];
This fixed the problem for me. The resized image is oriented correctly visually and the imageOrientation property reports UIImageOrientationUp.
There are several versions of this scale/resize/crop code out there; I used Trevor's because it seems pretty clean and includes some other UIImage manipulators that I want to use later.
This is what I have found for fixing the orientation issue; it works for me:
UIImage *initialImage = [info objectForKey:UIImagePickerControllerOriginalImage];
NSData *data = UIImagePNGRepresentation(initialImage);
UIImage *tempImage = [UIImage imageWithData:data];
UIImage *fixedOrientationImage = [UIImage imageWithCGImage:tempImage.CGImage
                                                     scale:initialImage.scale
                                               orientation:initialImage.imageOrientation];
initialImage = fixedOrientationImage;
Here's a Swift snippet that fixes the problem efficiently:
let orientedImage = UIImage(CGImage: initialImage.CGImage, scale: 1, orientation: initialImage.imageOrientation)!
I use the following code that I have put in a separate image utility object file that has a bunch of other editing methods for UIImages:
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize
{
CGSize imageSize = sourceImage.size;
CGFloat width = imageSize.width;
CGFloat height = imageSize.height;
CGFloat targetWidth = targetSize.width;
CGFloat targetHeight = targetSize.height;
CGFloat scaleFactor = 0.0;
CGFloat scaledWidth = targetWidth;
CGFloat scaledHeight = targetHeight;
CGPoint thumbnailPoint = CGPointMake(0.0,0.0);
if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
CGFloat widthFactor = targetWidth / width;
CGFloat heightFactor = targetHeight / height;
if (widthFactor > heightFactor) {
scaleFactor = widthFactor; // scale to fit height
}
else {
scaleFactor = heightFactor; // scale to fit width
}
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
// center the image
if (widthFactor > heightFactor) {
thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
}
else if (widthFactor < heightFactor) {
thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
}
}
CGImageRef imageRef = [sourceImage CGImage];
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
if (bitmapInfo == kCGImageAlphaNone) {
bitmapInfo = kCGImageAlphaNoneSkipLast;
}
CGContextRef bitmap;
if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
} else {
bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
}
// In the right or left cases, we need to switch scaledWidth and scaledHeight,
// and also the thumbnail point
if (sourceImage.imageOrientation == UIImageOrientationLeft) {
thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
CGFloat oldScaledWidth = scaledWidth;
scaledWidth = scaledHeight;
scaledHeight = oldScaledWidth;
CGContextRotateCTM (bitmap, M_PI_2); // + 90 degrees
CGContextTranslateCTM (bitmap, 0, -targetHeight);
} else if (sourceImage.imageOrientation == UIImageOrientationRight) {
thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
CGFloat oldScaledWidth = scaledWidth;
scaledWidth = scaledHeight;
scaledHeight = oldScaledWidth;
CGContextRotateCTM (bitmap, -M_PI_2); // - 90 degrees
CGContextTranslateCTM (bitmap, -targetWidth, 0);
} else if (sourceImage.imageOrientation == UIImageOrientationUp) {
// NOTHING
} else if (sourceImage.imageOrientation == UIImageOrientationDown) {
CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
CGContextRotateCTM (bitmap, -M_PI); // - 180 degrees
}
CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
return newImage;
}
And then I call:
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
UIImage *fixedOriginal = [ImageUtil imageWithImage:pickedImage scaledToSizeWithSameAspectRatio:pickedImage.size];
In iOS 7, I needed code dependent on UIImage.imageOrientation to correct for the different orientations. Now, in iOS 8.2, when I pick my old test images from the album via UIImagePickerController, the orientation will be UIImageOrientationUp for ALL images. When I take a photo (UIImagePickerControllerSourceTypeCamera), those images will also always be upwards, regardless of the device orientation.
So between those iOS versions, there has obviously been a fix whereby UIImagePickerController already rotates the images if necessary.
You can even notice that when the album images are displayed: for a split second, they will be displayed in the original orientation, before they appear in the new upward orientation.
The only thing that worked for me was to re-render the image, which forces the correct orientation:
if photo.imageOrientation != .up {
    UIGraphicsBeginImageContextWithOptions(photo.size, false, 1.0)
    let rect = CGRect(x: 0, y: 0, width: photo.size.width, height: photo.size.height)
    photo.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    photo = newImage ?? photo
}
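For completeness, an Objective-C version of the same re-render (my sketch, following the approach above; photo is assumed to be a UIImage variable as in the Swift snippet):
// Sketch: redraw the image so its pixel data matches UIImageOrientationUp.
UIImage *normalized = photo;
if (photo.imageOrientation != UIImageOrientationUp) {
    UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
    [photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
    normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}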
