Taking an image using UIImagePickerController - uiimage

I am taking pictures with the iPhone camera using the UIImagePickerController class.
I am using this delegate method to get the image:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
}
But when I use that image in a UIImageView or send the image data to a URL, the image appears rotated by 90 degrees.
What is the issue? Am I doing it correctly?
Thanks

You need to rotate the picture yourself, depending on its orientation.
Use this code (it also resizes the picture); I found it somewhere on the net but cannot remember where:
@implementation UIImage (Resizing)

static inline double radians (double degrees) { return degrees * M_PI / 180; }

- (UIImage *)imageByScalingToSize:(CGSize)targetSize {
    UIImage *sourceImage = self;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
    // CGBitmapContextCreate cannot render with kCGImageAlphaNone; substitute kCGImageAlphaNoneSkipLast.
    if (CGImageGetAlphaInfo(imageRef) == kCGImageAlphaNone) {
        bitmapInfo = (bitmapInfo & ~kCGBitmapAlphaInfoMask) | kCGImageAlphaNoneSkipLast;
    }
    CGContextRef bitmap;
    // Swap the context dimensions for the rotated orientations; pass 0 so Quartz computes bytesPerRow itself.
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // Nothing to do.
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, radians(-180.));
    }
    CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:ref];
    // Release the intermediate context and CGImage to avoid leaks.
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return newImage;
}

@end
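A minimal usage sketch, assuming the category above is imported: run the picked photo through imageByScalingToSize: before displaying or uploading it. The 640x480 target size and the imageView property are only illustrative, not part of the original answer.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *original = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    // Draws the photo into an upright bitmap, so the UIImageView and the uploaded data no longer appear rotated.
    UIImage *upright = [original imageByScalingToSize:CGSizeMake(640, 480)];
    self.imageView.image = upright; // illustrative property
    [picker dismissViewControllerAnimated:YES completion:nil];
}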

Related

GPUImageStillCamera image preview jumps when taking a photo

I am taking a square cropped photo with GPUImageStillCamera and allowing the user to zoom the camera. When the user taps to take a picture, the live preview jumps forward for a split second (as if the camera zoomed in even further past the area the user had zoomed to), then immediately returns to the correct crop once the captured image is shown on screen. This only happens when the user has zoomed the camera; if they have not zoomed, the flicker/jump does not happen. (The returned image has the correct crop whether or not the user has zoomed.)
Thoughts?
Creating camera and adding square crop
//Add in filters
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720 cameraPosition:AVCaptureDevicePositionBack];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
//Creating a square crop filter
cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.f, (720.0f/1280.0f)/2.0f, 1.f, (720.0f/1280.0f))];
Image zoom method
-(void)imagePinch:(UIPinchGestureRecognizer *)recognizer { //Controlling the zoom scale as the user pinches the live preview
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        zoomOutAdder = 0.0f;
        if (currentScale > 2) {
            zoomOutAdder = currentScale;
        }
    }
    float addition = (recognizer.scale - lastScale);
    if (addition > 0) {
        addition = addition * 1.7;
    }
    if (addition < 0) {
        addition = addition * (1.7 + zoomOutAdder);
    }
    currentScale = currentScale + addition;
    lastScale = recognizer.scale;
    if (currentScale < 1) {
        currentScale = 1;
    }
    if (currentScale > 4) {
        currentScale = 4;
    }
    if (currentScale == 1) {
        zoomOutAdder = 0.0f;
    }
    cameraImagePreview.transform = CGAffineTransformMakeScale(currentScale, currentScale);
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        lastScale = 1.0f;
    }
}
Take a photo method
//Adjust crop based on the zoom scale of the user
CGFloat zoomReciprocal = 1.0f / currentScale;
CGPoint offset = CGPointMake(((1.0f - zoomReciprocal) / 2.0f), (((1.0f - zoomReciprocal) * (720.0f/1280.0f)) / 2.0f) + ((720.0f/1280.0f)/2));
CGRect newCrop = cropFilter.cropRegion;
newCrop.origin.x = offset.x;
newCrop.origin.y = offset.y;
newCrop.size.width = cropFilter.cropRegion.size.width * zoomReciprocal;
newCrop.size.height = cropFilter.cropRegion.size.height * zoomReciprocal;
cropFilter.cropRegion = newCrop;
//Place photo inside an image preview view for the user to decide if they want to keep it.
[stillCamera capturePhotoAsImageProcessedUpToFilter:cropFilter withOrientation:imageOrientation withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    //Pause the current camera
    [stillCamera pauseCameraCapture];
    //Rest of method
ADDED METHODS
- (void) flipCamera {
    if (stillCamera.cameraPosition != AVCaptureDevicePositionFront) {
        [UIView animateWithDuration:.65 animations:^{
            flipCamera.transform = CGAffineTransformMakeScale(-1, 1);
        }];
    } else {
        [UIView animateWithDuration:.65 animations:^{
            flipCamera.transform = CGAffineTransformMakeScale(1, 1);
        }];
    }
    [self performSelector:@selector(rotateCamera) withObject:nil afterDelay:.2];
}
- (void) rotateCamera {
    [stillCamera rotateCamera];
    //Adjust flash settings as needed
    [stillCamera.inputCamera lockForConfiguration:nil];
    if (stillCamera.cameraPosition != AVCaptureDevicePositionFront) {
        [stillCamera.inputCamera setFlashMode:AVCaptureFlashModeOff];
    }
    NSAttributedString *attributedFlash =
    [[NSAttributedString alloc]
     initWithString:@"off"
     attributes:
     @{
       NSFontAttributeName : [UIFont fontWithName:@"Roboto-Regular" size:13.0f],
       NSForegroundColorAttributeName : [UIColor colorWithWhite:1 alpha:.55],
       NSKernAttributeName : @(.25f)
       }];
    flashLabel.attributedText = attributedFlash;
    [UIView animateWithDuration:.2 animations:^{
        [flash setTintColor:[UIColor colorWithWhite:1 alpha:.55]];
    }];
    [stillCamera.inputCamera unlockForConfiguration];
}
- (void) changeFlash {
    if (stillCamera.cameraPosition == AVCaptureDevicePositionFront) { //no flash available on the front camera
        return;
    }
    [stillCamera.inputCamera lockForConfiguration:nil];
    if (stillCamera.inputCamera.flashMode == AVCaptureFlashModeOff) {
        [stillCamera.inputCamera setFlashMode:AVCaptureFlashModeOn];
        [self animateFlashWithTintColor:[UIColor colorWithWhite:1 alpha:1] andString:@"on"];
    } else if (stillCamera.inputCamera.flashMode == AVCaptureFlashModeOn) {
        [stillCamera.inputCamera setFlashMode:AVCaptureFlashModeOff];
        [self animateFlashWithTintColor:[UIColor colorWithWhite:1 alpha:.55] andString:@"off"];
    }
    [stillCamera.inputCamera unlockForConfiguration];
}
- (void) animateFlashWithTintColor:(UIColor *)color andString:(NSString *)text {
    //Set new text
    NSAttributedString *attributedFlash =
    [[NSAttributedString alloc]
     initWithString:text
     attributes:
     @{
       NSFontAttributeName : [UIFont fontWithName:@"Roboto-Regular" size:13.0f],
       NSForegroundColorAttributeName : [UIColor colorWithWhite:1 alpha:.55],
       NSKernAttributeName : @(.25f)
       }];
    flashLabel.attributedText = attributedFlash;
    float duration = .7;
    [UIView animateKeyframesWithDuration:duration delay:0 options:0 animations:^{
        [UIView addKeyframeWithRelativeStartTime:0 relativeDuration:duration animations:^{
            [flash setTintColor:color];
        }];
        [UIView addKeyframeWithRelativeStartTime:0 relativeDuration:.7/duration animations:^{
            flash.transform = CGAffineTransformMakeRotation(M_PI);
        }];
    } completion:^(BOOL finished){
        flash.transform = CGAffineTransformIdentity;
    }];
}
-(void) usePhoto {
    if ([ALAssetsLibrary authorizationStatus] != ALAuthorizationStatusAuthorized) {
        NSLog(@"Do Not Have Right To Save to Photo Library");
    }
    //Save Image to Phone Album & save image
    UIImageWriteToSavedPhotosAlbum(takenPhoto.image, nil, nil, nil);
    //Save Image to Delegate
    [self.delegate saveImageToDatabase:takenPhoto.image];
    [self performSelector:@selector(dismissCamera) withObject:nil afterDelay:.4];
}
Some additional code showing the creation of the various camera elements used to capture a photo.
centerPoint = CGPointMake(self.view.frame.size.width/2, (cameraHolder.frame.size.height+50+self.view.frame.size.height)/2);
cameraImagePreview = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, cameraHolder.frame.size.width, cameraHolder.frame.size.width)];
[cameraHolder addSubview:cameraImagePreview];
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageTouch:)];
[cameraImagePreview addGestureRecognizer:tapGesture];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(imagePinch:)];
[cameraImagePreview addGestureRecognizer:pinchGesture];
float scaleForView = self.view.frame.size.width/720.0;
fullCameraFocusPoint = [[UIView alloc]initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, 1280*scaleForView)];
fullCameraFocusPoint.center = CGPointMake(cameraHolder.frame.size.width/2, (cameraHolder.frame.size.width/2)+50);
[self.view insertSubview:fullCameraFocusPoint atIndex:0];
takenPhoto = [[UIImageView alloc]initWithFrame:cameraHolder.frame];
takenPhoto.alpha = 0;
[self.view addSubview:takenPhoto];
//Add in filters
stillCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720 cameraPosition:AVCaptureDevicePositionBack];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
//Creating a square crop filter
cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.f, (720.0f/1280.0f)/2.0f, 1.f, (720.0f/1280.0f))];
//Create standard vignette filter
vignetteFilter = [[GPUImageVignetteFilter alloc] init]; //1
vignetteFilter.vignetteCenter = CGPointMake(.5, .5);
vignetteFilter.vignetteStart = 0.4f;
vignetteFilter.vignetteEnd = 1.08f;
//Add filters to photo
[cropFilter addTarget:vignetteFilter];
[stillCamera addTarget:cropFilter];
[vignetteFilter addTarget:cameraImagePreview];
[stillCamera startCameraCapture];

Implicit conversion from enumeration type 'CGImageAlphaInfo'

I've updated my project to iOS 7, and now I am getting this error when resizing an image added or taken within the app. Here is my code:
-(UIImage *)resizeImage:(UIImage *)anImage width:(int)width height:(int)height
{
    CGImageRef imageRef = [anImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}
The error I'm getting is this:
Implicit conversion from enumeration type 'CGImageAlphaInfo' (aka 'enum CGImageAlphaInfo') to different enumeration type 'CGBitmapInfo' (aka 'enum CGBitmapInfo')
I have inserted an explicit (CGBitmapInfo) cast before your alphaInfo variable: CGBitmapContextCreate expects a CGBitmapInfo, and the CGImageAlphaInfo values occupy its low bits, so the cast is safe and silences the warning.
Hope it solves your problem:
-(UIImage *)resizeImage:(UIImage *)anImage width:(int)width height:(int)height
{
    CGImageRef imageRef = [anImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 4 * width, CGImageGetColorSpace(imageRef), (CGBitmapInfo)alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}

Create RGB image from each channel

I have three files: one with only a red channel, one with only a green channel, and one with only a blue channel. Now I want to combine those three images into one, where each image becomes one color channel of the finished image.
How can I do this with Cocoa? I have a solution that works but is too slow:
NSBitmapImageRep *rRep = [[rImage representations] objectAtIndex:0];
NSBitmapImageRep *gRep = [[gImage representations] objectAtIndex:0];
NSBitmapImageRep *bRep = [[bImage representations] objectAtIndex:0];
NSBitmapImageRep *finalRep = [rRep copy];
for (NSUInteger i = 0; i < [rRep pixelsWide]; i++) {
    for (NSUInteger j = 0; j < [rRep pixelsHigh]; j++) {
        CGFloat r = [[rRep colorAtX:i y:j] redComponent];
        CGFloat g = [[gRep colorAtX:i y:j] greenComponent];
        CGFloat b = [[bRep colorAtX:i y:j] blueComponent];
        [finalRep setColor:[NSColor colorWithCalibratedRed:r green:g blue:b alpha:1.0] atX:i y:j];
    }
}
NSData *data = [finalRep representationUsingType:NSJPEGFileType properties:[NSDictionary dictionaryWithObject:[NSNumber numberWithDouble:0.7] forKey:NSImageCompressionFactor]];
[data writeToURL:[panel URL] atomically:YES];
The Accelerate.framework provides a function that combines three planar images into one destination: vImageConvert_Planar8toRGB888.
I haven't tried your approach, but the vImage-based method below is quite fast.
I was able to combine three (R, G, B) planes of a 1680x1050 image in ~0.1 s on my Mac. The actual conversion takes about a third of that time; the rest is setup and file I/O.
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    NSDate *start = [NSDate date];
    NSURL *redImageURL = [[NSBundle mainBundle] URLForImageResource:@"red"];
    NSURL *greenImageURL = [[NSBundle mainBundle] URLForImageResource:@"green"];
    NSURL *blueImageURL = [[NSBundle mainBundle] URLForImageResource:@"blue"];
    NSData *redImageData = [self newChannelDataFromImageAtURL:redImageURL];
    NSData *greenImageData = [self newChannelDataFromImageAtURL:greenImageURL];
    NSData *blueImageData = [self newChannelDataFromImageAtURL:blueImageURL];
    //We use our "Red" image to measure the dimensions. We assume that all images & the destination have the same size
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)redImageURL, NULL);
    NSDictionary *properties = (__bridge NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imageSource, 0, NULL);
    CGFloat width = [properties[(id)kCGImagePropertyPixelWidth] doubleValue];
    CGFloat height = [properties[(id)kCGImagePropertyPixelHeight] doubleValue];
    self.image = [self newImageWithSize:CGSizeMake(width, height) fromRedChannel:redImageData greenChannel:greenImageData blueChannel:blueImageData];
    NSLog(@"Combining 3 (R, G, B) planes of size %@ took: %fs", NSStringFromSize(CGSizeMake(width, height)), [[NSDate date] timeIntervalSinceDate:start]);
}
- (NSImage *)newImageWithSize:(CGSize)size fromRedChannel:(NSData *)redImageData greenChannel:(NSData *)greenImageData blueChannel:(NSData *)blueImageData
{
    vImage_Buffer redBuffer;
    redBuffer.data = (void *)redImageData.bytes;
    redBuffer.width = size.width;
    redBuffer.height = size.height;
    redBuffer.rowBytes = [redImageData length] / size.height;
    vImage_Buffer greenBuffer;
    greenBuffer.data = (void *)greenImageData.bytes;
    greenBuffer.width = size.width;
    greenBuffer.height = size.height;
    greenBuffer.rowBytes = [greenImageData length] / size.height;
    vImage_Buffer blueBuffer;
    blueBuffer.data = (void *)blueImageData.bytes;
    blueBuffer.width = size.width;
    blueBuffer.height = size.height;
    blueBuffer.rowBytes = [blueImageData length] / size.height;
    size_t destinationImageBytesLength = size.width * size.height * 3;
    const void *destinationImageBytes = valloc(destinationImageBytesLength);
    NSData *destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
    vImage_Buffer destinationBuffer;
    destinationBuffer.data = (void *)destinationImageData.bytes;
    destinationBuffer.width = size.width;
    destinationBuffer.height = size.height;
    destinationBuffer.rowBytes = [destinationImageData length] / size.height;
    vImage_Error result = vImageConvert_Planar8toRGB888(&redBuffer, &greenBuffer, &blueBuffer, &destinationBuffer, 0);
    NSImage *image = nil;
    if (result == kvImageNoError)
    {
        //TODO: If you need color matching, use an appropriate colorspace here
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
        CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big | kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
        CGColorSpaceRelease(colorSpace);
        CGDataProviderRelease(dataProvider);
        image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
        CGImageRelease(finalImageRef);
    }
    free((void *)destinationImageBytes);
    return image;
}
- (NSData *)newChannelDataFromImageAtURL:(NSURL *)imageURL
{
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
    if (imageSource == NULL) { return NULL; }
    CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
    CFRelease(imageSource);
    if (image == NULL) { return NULL; }
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image);
    CGFloat width = CGImageGetWidth(image);
    CGFloat height = CGImageGetHeight(image);
    size_t bytesPerRow = CGImageGetBytesPerRow(image);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(image);
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, bitmapInfo);
    NSData *data = NULL;
    if (NULL != bitmapContext)
    {
        CGContextDrawImage(bitmapContext, CGRectMake(0.0, 0.0, width, height), image);
        CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
        if (NULL != imageRef)
        {
            data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
        }
        CGImageRelease(imageRef);
        CGContextRelease(bitmapContext);
    }
    CGImageRelease(image);
    return data;
}
Your program is slow because it creates an enormous number of color objects.
Although your program could simply access the image reps' bitmapData directly, that would require it to know a lot about bitmap representations.
Before taking that approach, prefer to let Quartz do the heavy lifting: render each source image into a CGBitmapContext (e.g. using CGContextDrawImage(gtx, rect, img.CGImage)), then copy the rendered component values from each result over to a destination RGB bitmap.
If your inputs are not in multicomponent color models (e.g. grayscale), then render using the source color model to save a good deal of CPU time and memory.
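A minimal sketch of that idea, assuming all three sources are already CGImages of the same size; the helper name, the RGBX pixel layout, and taking one component from each rendered plane are my assumptions, not the answerer's code:
static CGImageRef CreateRGBImageFromChannels(CGImageRef rImg, CGImageRef gImg, CGImageRef bImg)
{
    size_t width = CGImageGetWidth(rImg);
    size_t height = CGImageGetHeight(rImg);
    size_t bytesPerRow = width * 4;
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

    // Render each source into its own RGBX bitmap so Quartz handles any format conversion.
    CGImageRef sources[3] = { rImg, gImg, bImg };
    uint8_t *planes[3];
    CGContextRef contexts[3];
    for (int i = 0; i < 3; i++) {
        planes[i] = calloc(height, bytesPerRow);
        contexts[i] = CGBitmapContextCreate(planes[i], width, height, 8, bytesPerRow, space, (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
        CGContextDrawImage(contexts[i], CGRectMake(0, 0, width, height), sources[i]);
    }

    // Copy one component from each rendered plane into the destination buffer.
    uint8_t *dest = calloc(height, bytesPerRow);
    for (size_t p = 0; p < width * height; p++) {
        dest[4 * p + 0] = planes[0][4 * p + 0]; // red from the red image
        dest[4 * p + 1] = planes[1][4 * p + 1]; // green from the green image
        dest[4 * p + 2] = planes[2][4 * p + 2]; // blue from the blue image
    }

    CGContextRef destContext = CGBitmapContextCreate(dest, width, height, 8, bytesPerRow, space, (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
    CGImageRef result = CGBitmapContextCreateImage(destContext);

    // Clean up intermediate contexts and buffers; the caller releases the returned CGImage.
    for (int i = 0; i < 3; i++) { CGContextRelease(contexts[i]); free(planes[i]); }
    CGContextRelease(destContext);
    free(dest);
    CGColorSpaceRelease(space);
    return result;
}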

PDF Creation in Cocoa

With the code below the PDF gets created and I am able to write text into it, but I am not able to render an NSView into the PDF.
CGContextRef aCgPDFContextRef = [self createPDFContext:CGRectMake(0, 0, 612, 892) path:(CFStringRef)filepath];
CGContextBeginPage(aCgPDFContextRef,nil);
//Draw view into PDF
NSView *aPageNote=[[NSView alloc]initWithFrame:CGRectMake(0,0, 612, 892)];
CGAffineTransform aCgAffTrans = CGAffineTransformIdentity;
aCgAffTrans = CGAffineTransformMakeTranslation(0,892);
aCgAffTrans = CGAffineTransformScale(aCgAffTrans, 1.0, -1.0);
CGContextConcatCTM(aCgPDFContextRef, aCgAffTrans);
NSString *strText = @"Test 123333333333333333";
[strText drawAtPoint:NSMakePoint(100, 200) withAttributes:[NSDictionary dictionaryWithObjectsAndKeys:[NSFont systemFontOfSize:40.0],NSFontAttributeName,nil]];//drawAtPoint:CGPointMake(200, 300) withFont:[NSFont systemFontOfSize:20]];
[aPageNote setWantsLayer:YES];
[aPageNote.layer renderInContext:aCgPDFContextRef];
CGContextEndPage(aCgPDFContextRef);
CGContextRelease (aCgPDFContextRef);
NSLog(#"Pdf Successfully Created");
The Method :
-(CGContextRef) createPDFContext:(CGRect)aCgRectinMediaBox path:(CFStringRef)aCfStrPath
{
    CGContextRef aCgContextRefNewPDF = NULL;
    CFURLRef aCfurlRefPDF;
    aCfurlRefPDF = CFURLCreateWithFileSystemPath(NULL, aCfStrPath, kCFURLPOSIXPathStyle, false);
    if (aCfurlRefPDF != NULL) {
        aCgContextRefNewPDF = CGPDFContextCreateWithURL(aCfurlRefPDF, &aCgRectinMediaBox, NULL);
        CFRelease(aCfurlRefPDF);
    }
    return aCgContextRefNewPDF;
}
EDIT: I am able to write text into the PDF using this code:
CGContextBeginPage(aCgPDFContextRef,nil);
NSString *strText1 = @"I am iOS Developer";
CGContextSelectFont (aCgPDFContextRef, "Helvetica", 24, kCGEncodingMacRoman);
CGContextSetTextDrawingMode (aCgPDFContextRef, kCGTextFill);
CGContextSetRGBFillColor (aCgPDFContextRef, 0, 0, 0, 1);
const char *text1 = [strText1 UTF8String];
CGContextShowTextAtPoint (aCgPDFContextRef, 50, 375, text1, strlen(text1));
CGContextEndPage(aCgPDFContextRef);
EDIT: Render the NSView and add it to the PDF as an image, like this:
//First render the NSView into an NSImage
NSImage *i = [[NSImage alloc] initWithData:[yourView dataWithPDFInsideRect:[yourView bounds]]];
//Now you can draw it into the PDF
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[i TIFFRepresentation], NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CGContextDrawImage(aCgPDFContextRef, [yourView bounds], imageRef);
//Release the intermediate image source and image when done
CGImageRelease(imageRef);
CFRelease(source);

Unable to call a method

I am trying to call the method below using [self changeColor]; and I am getting:
"No visible @interface for 'MyClassViewController' declares the selector 'changeColor'"
I did declare it like this in both the .h and the .m:
@interface changeColor
- (UIImage *) changeColor:(UIImage *)image;
@end
What am I doing wrong?
Please help.
-------------Code-------------
- (UIImage *) changeColor:(UIImage *)image
{
    UIGraphicsBeginImageContext(image.size);
    CGRect contextRect;
    contextRect.origin.x = 0.0f;
    contextRect.origin.y = 0.0f;
    contextRect.size = [image size];
    // Retrieve source image and begin image context
    CGSize itemImageSize = [image size];
    CGPoint itemImagePosition;
    itemImagePosition.x = ceilf((contextRect.size.width - itemImageSize.width) / 2);
    itemImagePosition.y = ceilf((contextRect.size.height - itemImageSize.height));
    UIGraphicsBeginImageContext(contextRect.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    // Setup shadow
    // Setup transparency layer and clip to mask
    CGContextBeginTransparencyLayer(c, NULL);
    CGContextScaleCTM(c, 1.0, -1.0);
    CGContextClipToMask(c, CGRectMake(itemImagePosition.x, -itemImagePosition.y, itemImageSize.width, -itemImageSize.height), [image CGImage]);
    CGContextSetRGBFillColor(c, 0, 0, 1, 1);
    NSLog(@"--------- METHOD DETECTED"); //>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> TEST
    /*
    switch (row)
    {
        case 0:
            CGContextSetRGBFillColor(c, 0, 0, 1, 1);
            break;
        default:
            CGContextSetRGBFillColor(c, 1, 0, 0., 1);
            break;
    }
    */
    contextRect.size.height = -contextRect.size.height;
    contextRect.size.height -= 15;
    // Fill and end the transparency layer
    CGContextFillRect(c, contextRect);
    CGContextEndTransparencyLayer(c);
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The problem is [self changeColor];. changeColor and changeColor: are two different selectors. If you define the method as taking one parameter, you have to call it with one, as in the sketch below.
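For illustration, a matching declaration and call; the someImage argument is just a placeholder. Note that the declaration also belongs on the view controller's own @interface (or a class extension), not in a separate @interface changeColor block.
// MyClassViewController.h (or a class extension in the .m)
@interface MyClassViewController : UIViewController
- (UIImage *)changeColor:(UIImage *)image;
@end

// Call site: the selector is changeColor:, so it must be called with an argument.
UIImage *tinted = [self changeColor:someImage];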
