How to convert a JPEG image to a PICT image on Mac using Cocoa

How do I convert a JPEG image to a PICT image using Cocoa? Some sample code is given below.
NSData *imgData = [NSData dataWithContentsOfFile:@"/var/root/Desktop/1.jpeg"];
NSPICTImageRep *imageRep = [NSPICTImageRep imageRepWithData:imgData];
NSData *data = [imageRep PICTRepresentation];
[data writeToFile:@"/var/root/Desktop/save.pict" atomically:NO];
This code does not work. Is there any alternate method that converts a JPEG image to a PICT image without AppleScript?

There are a couple of problems with your code.
#1) Are you certain of the location of that "1.jpeg" file?
#2) You're not looking at the error result of your "writeToFile". On my machine, I cannot write to anything inside the "/var/root" directory.
Once you fix up the source and destination paths, you should change your code to something like this:
NSData *imgData = [NSData dataWithContentsOfFile:@"/Users/anuj/Desktop/1.jpeg"];
NSPICTImageRep *imageRep = [NSPICTImageRep imageRepWithData:imgData];
NSData *data = [imageRep PICTRepresentation];
NSLog(@"my image data size is %ld", [data length]);
if ([data length] > 0)
{
    BOOL success = [data writeToFile:@"/Users/anuj/Desktop/save.pict" atomically:NO];
    if (success)
        NSLog(@"successfully wrote the file");
    else
        NSLog(@"did not write the file");
}
else
{
    NSLog(@"didn't convert the image to a Pict");
}
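As a side note on #2: if you want to know why a write fails, and not just whether it failed, NSData also has an NSError-based variant of the write method. A minimal sketch, reusing the data and path from the code above (NSDataWritingAtomic is the 10.6+ name; older SDKs call it NSAtomicWrite):
NSError *writeError = nil;
BOOL wrote = [data writeToFile:@"/Users/anuj/Desktop/save.pict"
                       options:NSDataWritingAtomic
                         error:&writeError];
if (!wrote)
    NSLog(@"write failed: %@", [writeError localizedDescription]);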

Related

AVAssetExportSession not exporting metadata

I'm trying to use AVAssetExportSession to change the metadata of a file, but any metadata I try to set doesn't seem to work. When I pass an empty array to [exportSession setMetadata:], the file gets written with its unedited metadata like it's supposed to, but as soon as I put an AVMetadataItem in the array, no metadata is written to the new file. Here is the code that I used:
//NSMutableArray *newArray = [NSMutableArray arrayWithArray:[exportSession metadata]];
AVMutableMetadataItem *addingNew = [[AVMutableMetadataItem alloc] init];
[addingNew setKeySpace:AVMetadataKeySpaceiTunes];
[addingNew setKey:AVMetadataiTunesMetadataKeyUserComment];
[addingNew setValue:@"This is my comment"];
NSArray *newArray = [NSArray arrayWithObject:addingNew];
NSURL *fileURL = [NSURL fileURLWithPath:outputFile];
[exportSession setMetadata:newArray];
[exportSession setOutputURL:fileURL];
[exportSession setOutputFileType:AVFileTypeMPEG4];
[exportSession setShouldOptimizeForNetworkUse:YES]; // NO doesn't work either
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    switch ([exportSession status])
    {
        case AVAssetExportSessionStatusCompleted:
            NSLog(@"Export success");
            break;
        case AVAssetExportSessionStatusFailed:
            NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
            break;
        case AVAssetExportSessionStatusCancelled:
            NSLog(@"Export canceled");
            break;
        default:
            break;
    }
}];
I have answered my own question. The file whose information I am changing is in MP4 format, so I set the output file type to MP4 as well. That wouldn't export the metadata; changing the setOutputFileType to AVFileTypeAppleM4V did the job just fine. Interestingly, the output file is still an MP4, not an M4V.
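In other words, going by the self-answer above, the only change needed from the code in the question appears to be the output file type:
// With the M4V container type the iTunes metadata is exported;
// the resulting file still plays as a regular MP4.
[exportSession setOutputFileType:AVFileTypeAppleM4V];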

NSImage + NSBitmapImageRep = Converting RAW image file from one format to another

I am trying to write a prototype to prove that RAW conversion from one format to another is possible. I have to convert a Nikon raw file, which is in the .NEF format, to Canon's .CR2 format. With the help of various posts, I create an NSBitmapImageRep from the original image's TIFF representation and use it to write the output file, which has a .CR2 extension.
It does work, but the only problem is that the input file is 21.5 MB while the output I am getting is 144.4 MB. Using NSTIFFCompressionPackBits instead gives me 142.1 MB.
I want to understand what is happening; I have tried the various compression enums available, but with no success.
Please help me understand it. This is the source code:
@interface NSImage (RawConversion)
- (void)saveAsCR2WithName:(NSString *)fileName;
@end

@implementation NSImage (RawConversion)
- (void)saveAsCR2WithName:(NSString *)fileName
{
    // Cache the reduced image
    NSData *imageData = [self TIFFRepresentation];
    NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
    // http://www.cocoabuilder.com/archive/cocoa/151789-nsbitmapimagerep-compressed-tiff-large-files.html
    NSDictionary *imageProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:NSTIFFCompressionJPEG], NSImageCompressionMethod,
                                [NSNumber numberWithFloat:1.0], NSImageCompressionFactor,
                                nil];
    imageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
    [imageData writeToFile:fileName atomically:NO];
}
@end
How could I get an output file that is in the CR2 format but roughly the size of the input file, with only the small variation required for a CR2 file?
Edit 1:
I made changes based on Peter's suggestion of using the CGImageDestinationAddImageFromSource method, but I am still getting the same result: the source NEF file is 21.5 MB, but the destination file after conversion is 144.4 MB.
Please review the code:
- (void)saveAsCR2WithCGImageMethodUsingName:(NSString *)inDestinationfileName withSourceFile:(NSString *)inSourceFileName
{
    CGImageSourceRef sourceFile = MyCreateCGImageSourceRefFromFile(inSourceFileName);
    CGImageDestinationRef destinationFile = createCGImageDestinationRefFromFile(inDestinationfileName);
    CGImageDestinationAddImageFromSource(destinationFile, sourceFile, 0, NULL);
    // https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
    CGImageDestinationFinalize(destinationFile);
}
CGImageSourceRef MyCreateCGImageSourceRefFromFile (NSString *path)
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];
    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;
    // Create the dictionary.
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((CFURLRef)url, myOptions);
    CFRelease(myOptions);
    // Make sure the image source exists before continuing.
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return NULL;
    }
    return myImageSource;
}
CGImageDestinationRef createCGImageDestinationRefFromFile (NSString *path)
{
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageDestinationRef myImageDestination;
    // https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
    float compression = 1.0; // Lossless compression if available.
    int orientation = 4;     // Origin is at bottom, left.
    CFStringRef myKeys[3];
    CFTypeRef myValues[3];
    CFDictionaryRef myOptions = NULL;
    myKeys[0] = kCGImagePropertyOrientation;
    myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
    myKeys[1] = kCGImagePropertyHasAlpha;
    myValues[1] = kCFBooleanTrue;
    myKeys[2] = kCGImageDestinationLossyCompressionQuality;
    myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys, (const void **)myValues, 3,
                                   &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    // https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/imageio_basics/ikpg_basics.html#//apple_ref/doc/uid/TP40005462-CH216-SW3
    CFStringRef destFileType = CFSTR("public.tiff");
    // CFStringRef destFileType = kUTTypeJPEG;
    CFArrayRef types = CGImageDestinationCopyTypeIdentifiers();
    CFShow(types);
    myImageDestination = CGImageDestinationCreateWithURL((CFURLRef)url, destFileType, 1, myOptions);
    return myImageDestination;
}
Edit 2: I used the second approach suggested by @Peter. This gives an interesting result. Its effect is the same as renaming the file in the Finder from something like "example_image.NEF" to "example_image.CR2". Surprisingly, what happens when converting both programmatically and in the Finder is that the 21.5 MB source file turns out to be 59 KB. This is without any compression set in the code. Please see the code and suggest:
- (void)convertNEFWithTiffIntermediate:(NSString *)inNEFFile toCR2:(NSString *)inCR2File
{
    NSData *fileData = [[NSData alloc] initWithContentsOfFile:inNEFFile];
    if (fileData)
    {
        NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:fileData];
        // [imageRep setCompression:NSTIFFCompressionNone factor:1.0];
        NSDictionary *imageProps = nil;
        NSData *destinationImageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
        [destinationImageData writeToFile:inCR2File atomically:NO];
    }
}
The first thing I would try doesn't involve NSImage or NSBitmapImageRep at all. Instead, I would create a CGImageSource for the source file and a CGImageDestination for the destination file, and use CGImageDestinationAddImageFromSource to transfer all of the images from A to B.
You're converting to TIFF twice in this code:
1. You create an NSImage, I assume from the source file.
2. You ask the NSImage for its TIFFRepresentation (TIFF conversion #1).
3. You create an NSBitmapImageRep from the first TIFF data.
4. You ask the NSBitmapImageRep to generate a second TIFF representation (TIFF conversion #2).
Consider creating an NSBitmapImageRep directly from the source data, and not using NSImage at all. You would then skip directly to step 4 to generate the output data.
(But I still would try CGImageDestinationAddImageFromSource first.)
Raw image files have their own (proprietary) representation.
For example, they may use 14 bits per component and mosaic patterns, which are not supported by your code.
I think you should use a lower-level API and really reverse-engineer the RAW format you are trying to save to.
I would start with DNG, which is relatively easy, as Adobe provides an SDK to write it.

Getting bitmap data from JPEG image using Cocoa

I need to extract the raw RGB bitmap data from a JPEG or PNG file, with all the bits in the file, not a window or color converted version.
I'm new to Cocoa, but it looks like I open an image using NSImage like this:
NSString *imageName = [[NSBundle mainBundle] pathForResource:@"/Users/me/Temp/oxberry.jpg" ofType:@"JPG"];
NSImage *tempImage = [[NSImage alloc] initWithContentsOfFile:imageName];
NSBitmapImageRep *imageRep = [[[NSBitmapImageRep alloc] initWithData:[tempImage TIFFRepresentation]] autorelease];
unsigned char *bytes = [imageRep bitmapData];
int bits = [imageRep bitsPerPixel];
Then, to get the bitmap data, there seem to be lots of options: NSBitmapImageRep, CGImage, etc.
What is the simplest approach? If there were a code snippet, that would be great.
Thanks!
You're on the right track. As you noticed, there are a lot of ways to do this.
Once you have an NSImage, you can create a bitmap representation and access its bytes directly. An easy way to get an NSBitmapImageRep is to do this:
NSBitmapImageRep* imageRep = [[[NSBitmapImageRep alloc] initWithData:[tempImage TIFFRepresentation]] autorelease];
unsigned char* bytes = [imageRep bitmapData];
int bitsPerPixel = [imageRep bitsPerPixel];
// etc
Going through the TIFFRepresentation step is safer than accessing the NSImage's representations directly.
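If you then need the individual RGB values, you can walk the buffer returned by bitmapData using the rep's layout information. A rough sketch, assuming a meshed (non-planar) rep with 8 bits per sample; a real implementation should check isPlanar, bitsPerSample, and bitmapFormat first:
NSInteger width = [imageRep pixelsWide];
NSInteger height = [imageRep pixelsHigh];
NSInteger bytesPerRow = [imageRep bytesPerRow];
NSInteger samplesPerPixel = [imageRep samplesPerPixel];
unsigned char *base = [imageRep bitmapData];
for (NSInteger y = 0; y < height; y++) {
    unsigned char *row = base + y * bytesPerRow;
    for (NSInteger x = 0; x < width; x++) {
        unsigned char *pixel = row + x * samplesPerPixel;
        unsigned char r = pixel[0];
        unsigned char g = pixel[1];
        unsigned char b = pixel[2];
        // Do something with r, g and b here.
    }
}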

Loading a GIF file into CGImage

Obviously, the CGImage creation functions for PNG and JPEG don't help for GIF files.
How is it possible (and easy?) to load a GIF file into a CGImage?
For a Mac, the easiest way is:
NSData *data = [NSData dataWithContentsOfFile:@"mygif.gif"];
CGImageRef myCGImage = [[NSBitmapImageRep imageRepWithData: data] CGImage];
Building on JG's answer, for reference.
A solid way to get the "name" of the file is:
NSString *fileName = [[NSBundle mainBundle] pathForResource:@"imageName" ofType:@"jpg"];
And then continue with JG's solution:
CGDataProviderRef dataProvider = CGDataProviderCreateWithFilename([fileName cStringUsingEncoding:NSUTF8StringEncoding]);
self.maskedImage = CGImageCreateWithJPEGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
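Another option, if you don't mind going through ImageIO instead of a data provider: CGImageSourceCreateImageAtIndex can decode a GIF straight into a CGImage. A minimal sketch, with a hypothetical path:
NSURL *gifURL = [NSURL fileURLWithPath:@"/tmp/mygif.gif"];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)gifURL, NULL);
CGImageRef gifImage = NULL;
if (source != NULL) {
    // Index 0 is the first frame of a (possibly animated) GIF.
    gifImage = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFRelease(source);
}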

NSImage from byte array

I'm trying to display an image in an NSImageView, with the image contained in a byte array. How can I do this? From what I understand, I need to convert my byte[] to an NSData object and feed that to an NSImage. Is this correct? How do I do it? I've tried casting, and that doesn't work, and there doesn't seem to be any conversion built in...
I have tried the following:
Casting:
NSData bytesAsMacVariable = (NSData) imageAsBytes;
Also tried
NSData bytesAsMacVariable = imageAsBytes as NSData;
Finally, tried to pass a byte[] as if it was a NSData.
NSImage imageToShow = new NSImage(imageAsBytes);
None of these will work, and as far as I can see, neither NSImage nor NSData has a member function that accepts byte[] for conversion...
You're casting to the object type, but you should cast to pointer-to-object type.
Try something more like
NSData *imageData = [NSData dataWithBytes:byteArray length:arrayLength];
NSImage *image = [[NSImage alloc] initWithData:imageData];
[imageView setImage:image];
[image release];
The pointers are very important.
You can use an NSMutableData, like this:
new NSImage (new NSMutableData (imageAsBytes));
