I have a structure which includes a pointer to a data set, which in this case is a 16-bit grayscale image. I want to convert this data to an NSImage so that I can display it, and then save it as a .TIF file. The route from the manuals appears to be something like:
(Create *myNSImData from frame->image, which is a pointer)
NSImage *TestImage = [[NSImage alloc] initWithData:myNSImData];
(display TestImage, save it, whatever else)
[TestImage release];
I am lost as to how to create the NSData object and ensure it contains the array of 16-bit data. Attempts to recast the pointer give warnings and no data. I could simply increment the pointers, transferring one byte at a time from frame->image to the data object, but I don't understand how to communicate the array structure to the data object. Any ideas?
Thanks
MORE ATTEMPTS USING YOUR SUGGESTION
I can convert this data to a .TIF file in the following manner:
for (uint32 row = 0; row < MaxHeight; row++)
{
    for (uint32 column = 0; column < MaxWidth; column++)
    {
        // copy the two bytes of each 16-bit pixel, swapping their order
        tempData = (uint8_t)*frame->image; // first byte
        frame->image++;
        buf[2 * column + 1] = (unsigned char)tempData;
        tempData = (uint8_t)*frame->image; // second byte
        frame->image++;
        buf[2 * column] = (unsigned char)tempData;
    }
    TIFFWriteScanline(tiffile, buf, row, 0);
}
With the .TIF file thus generated, I can create an NSImage and display it:
NSImage *TestImage = [[[NSImage alloc] initWithContentsOfFile:inFilePath] autorelease];
[viewWindow setImage: TestImage];
My question now becomes - can I create an NSData object that I can display in the same way? I have tried the following (product is the height*width of the image):
NSData *ReadImage = [[[NSData alloc] initWithBytes:frame->image length:2 * product] autorelease];
NSImage *NewImage = [[[NSImage alloc] initWithData:ReadImage] autorelease];
NSSize newSize;
newSize.height = MaxHeight; //height of the image
newSize.width = MaxWidth; //width of the image
[NewImage setSize:newSize];
[viewWindow setImage: NewImage];
When I try this, nothing displays. I have also tried creating an array of uint16_t containing the data and serving up a pointer to that - again, nothing displays. Any ideas? E.g., do I have to tell the NSData that I am using 2 bytes per pixel, or something like that?
Thanks, Monty Wood
To create an NSData object containing a block to which you have a pointer, you should use one of the three methods that start with initWithBytes:, or, to create an autoreleased NSData object, use one of the class methods that start with dataWithBytes:
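For instance, a sketch reusing frame->image and product from the question (the NoCopy variant avoids duplicating a large frame buffer):
NSData *copied = [NSData dataWithBytes:frame->image length:2 * product];
NSData *wrapped = [NSData dataWithBytesNoCopy:frame->image
                                       length:2 * product
                                 freeWhenDone:NO]; // caller keeps ownership of the buffer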
UPDATE: I think that if you want to create an NSImage directly from an NSData, the data needs to include the appropriate headers/magic numbers so that NSImage can figure out what the representation is. For raw image data, you should look at NSBitmapImageRep and the Images chapter of the Cocoa Drawing Guide.
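For example, a minimal sketch of wrapping the raw buffer in an NSBitmapImageRep, assuming frame->image, MaxWidth, MaxHeight, and viewWindow from your code, and treating outFilePath as a hypothetical destination path:
unsigned char *planes[1] = { (unsigned char *)frame->image };
NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:planes
                  pixelsWide:MaxWidth
                  pixelsHigh:MaxHeight
               bitsPerSample:16
             samplesPerPixel:1
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedWhiteColorSpace
                 bytesPerRow:2 * MaxWidth
                bitsPerPixel:16] autorelease];
// wrap the rep in an NSImage for display; if the result looks scrambled,
// the sample byte order may need swapping, as in the scanline loop above
NSImage *TestImage = [[[NSImage alloc] initWithSize:NSMakeSize(MaxWidth, MaxHeight)] autorelease];
[TestImage addRepresentation:rep];
[viewWindow setImage:TestImage];
// TIFFRepresentation then produces data suitable for writing a .TIF file:
[[rep TIFFRepresentation] writeToFile:outFilePath atomically:YES];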
I am working on an OSX app that does some pixel-level image manipulation. I am using the following code to access the pixel color components (RGBA) as regular bytes cast as uint8 pointers.
NSImage *image = self.iv.image;
NSRect imageRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&imageRect context:NULL hints:nil];
NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(cgImage)));
uint8 *pixels = (uint8 *)[data bytes];
At this point I apply some byte-level changes in:
for (int i = 0; i < [data length]; i += 4) { ... }
Changing this region of memory does not appear to have any effect on the original CGImageRef (which is at the time displayed in an NSImageView). I must do the following to see the image update accordingly:
CGImageRef newImageRef = CGImageCreate (width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault);
NSSize size = NSMakeSize(CGImageGetWidth(newImageRef),
CGImageGetHeight(newImageRef));
NSImage * newIm = [[NSImage alloc] initWithCGImage:newImageRef size:size];
self.iv.image = newIm;
In other words, the bytes I get back to modify are just a copy of the original bytes, presumably as a result of CGDataProviderCopyData(CGImageGetDataProvider(cgImage)).
My question is as follows: is there a way to access the underlying bytes of the CGImageRef directly, such that the image is updated on screen as I modify them?
No. CGImages are immutable. You can't change them once they are created.
In your code, the call to [data bytes] gives a pointer to const void. You have cast away the const which gets it to compile without warnings, but that's a violation of the design contract. Writing to the buffer backing the data provider is not legal and not guaranteed to work, even if you create a new CGImage from it.
I will also point out that the format of the data in the buffer may be quite different from what you were expecting. There's no good reason to expect the data to be 32 bits per pixel, RGBA vs. BGRA vs. ARGB vs. …, or anything.
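For example, you can query the image's actual layout instead of assuming it:
size_t bitsPerPixel = CGImageGetBitsPerPixel(cgImage);
size_t bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage); // rows may be padded
CGBitmapInfo info = CGImageGetBitmapInfo(cgImage);   // byte order and alpha placement
NSLog(@"%zu bpp, %zu bpc, %zu bytes/row, bitmapInfo 0x%x",
      bitsPerPixel, bitsPerComponent, bytesPerRow, (unsigned)info);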
I strongly recommend that you read the sections about the various image objects in the 10.6 AppKit release notes. Scroll down to "NSImage, CGImage, and CoreGraphics impedance matching" and read through all of the following image-related sections until you hit "NSComboBox". The section "NSBitmapImageRep: CoreGraphics impedance matching and performance notes" is one of the more important for your purposes.
Beyond what that says, you could just maintain a pixel buffer that you allocated yourself in whatever format you prefer. Then, when you want a CGImage of that, create it from the buffer, draw with it, and discard it. Any pixel manipulations would be done on that buffer.
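A minimal sketch of that buffer-you-own approach, assuming an 8-bit premultiplied RGBA layout and arbitrary dimensions, and reusing self.iv from your code:
size_t width = 256, height = 256, bytesPerRow = width * 4;
uint8_t *buffer = calloc(height, bytesPerRow); // you own this; mutate it freely
// ... write pixels into buffer ...
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                         cs, kCGImageAlphaPremultipliedLast);
CGImageRef snapshot = CGBitmapContextCreateImage(ctx); // immutable snapshot for drawing
self.iv.image = [[NSImage alloc] initWithCGImage:snapshot
                                            size:NSMakeSize(width, height)];
CGImageRelease(snapshot);
CGContextRelease(ctx); // buffer must outlive ctx; free(buffer) only after this
CGColorSpaceRelease(cs);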
I'm trying to read an image using OpenCV, get a part of it and show it in an NSImageView object.
Here is how I'm doing it.
cv::Mat im = cv::imread("/Users/maddev/Documents/R8.jpg");
cv::Mat reduced = im(cv::Rect(10, 10, 400, 400));
std::vector<unsigned char> data ;
unsigned char *ptr = reduced.data;
unsigned long size = reduced.total() * reduced.elemSize();
data.assign(ptr, ptr + size);
int cols = reduced.cols;
int rows = reduced.rows;
// to NSImage
unsigned char *pimage = &data[0];
NSBitmapImageRep *rep=[[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:&pimage
pixelsWide:cols
pixelsHigh:rows
bitsPerSample:8
samplesPerPixel:3
hasAlpha:NO
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bytesPerRow:cols*3
bitsPerPixel:24
];
NSImage *image = [[NSImage alloc] initWithCGImage:[rep CGImage] size:NSMakeSize(cols,rows)];
imageView.image = image;
This doesn't work; I get some garbage displayed.
But if I use the same approach on the im object, then everything works as expected.
Why can't reduced be displayed properly? Do I need to check anything else when creating the NSBitmapImageRep object?
Thx
P.S. Please ignore the vector stuff; this problem is part of a bigger solution, and this is the only way I can bring the image bytes into the main application from a cv::Mat picture.
I guess I'll answer my own question.
The solution was actually pretty easy: when a new cv::Mat is created from a region of interest, only the image header is copied, and it still points to the original image data. The cropped view is therefore not stored contiguously (each row keeps the stride of the full image), which leads to the wrong picture being displayed.
All I had to do was use the copyTo method when creating the cropped version, to actually copy the needed data from the big picture to the small one; then everything worked as needed.
Something like this:
cv::Mat reduced;
orig(roi).copyTo(reduced);
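For what it's worth, clone() forces the same deep, contiguous copy in one line:
cv::Mat reduced = im(cv::Rect(10, 10, 400, 400)).clone();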
I'm building a UITextView with text and images (subclassing NSTextStorage to display my content).
My text content contains image URLs.
My problem is that I need to download all the images if they're not cached.
So I want to first insert a placeholder image, download the image, and then replace the placeholder with the downloaded one.
Here's how I do it.
First, I format my text by replacing every image URL with this tag:
[IMG]url[/IMG]
Then I use a regex to get all these tags.
I test whether there's a cached image or not. If not, I extract all the URLs, download them, and cache them.
I've created an NSObject subclass, ImageCachingManager, and declared a delegate method that is called when an image has been downloaded:
@protocol ImageCachingManagerDelegate <NSObject>
- (void)managerDidCacheImage:(UIImage *)image forUrl:(NSString *)url;
@end
This way, I thought I could use the url passed to the delegate method to find the matching url in my NSTextStorage attributed string and replace the current NSTextAttachment image with the downloaded one.
But I don't know how to do that...
Thanks for your help!
I'm working on something very similar to this at the moment and think this might help. The code is very much alpha but hopefully it will get you to the next step - I'll step through:
Overall Cycle
1. Find your image tags in the full text piece using a regex or XPath - personally I find Hpple to be more powerful, but if your content is well structured and reliable, a regex is probably fine.
https://github.com/topfunky/hpple
2. Reduce the space of each match to 1 character and store that range. A text attachment occupies only 1 character of space within a text view, so it's best to reduce the match to a single character; otherwise, when you replace your first match of characters in a range with the first text attachment, the next range marker becomes out of date, which will lead to issues. Depending on how much processing you need to do on the text input during init, this is an important step. I have to do a lot of processing on the text, and the ranges change during that parsing, so I created an array of special characters that I know will never appear in the input and push these single characters into the reserved spaces. At the same time, I store the special character and the src of the image in an array of a very simple NSObject subclass that stores the SpecialChar and ImgSrc, plus has space for the NSRange; I find the special character again later in the process (because it has moved about since this point) and set the NSRange at the very end of processing. This may not be necessary in your case, but the principle is the same: you need a custom object with an NSRange (which will become a text attachment) and the image source.
3. Loop through this array to add placeholder image attachments to your attributed string. You can do this by adding a transparent image or a 'loading' image. You could also check your cache for existing images at this point and skip the placeholder if the image already exists in the cache.
4. Using your delegate, when the image is successfully downloaded, replace the current attachment with your new one by replacing the placeholder in the range you've already stored in your object: create an attributed string with the NSTextAttachment, then replace that range (a sketch of this step follows the sample code below).
Some sample code:
Steps 1 & 2:
specialCharsArray = [[NSArray alloc] initWithObjects:@"Û", @"±", @"¥", @"å", @"æ", @"Æ", @"Ç", @"Ø", @"õ", nil];
//using Hpple
NSString *allImagesXpathQueryString = @"//img/@src";
NSArray *imageArray = [bodyTextParser searchWithXPathQuery:allImagesXpathQueryString];
//
imageRanges = [[NSMutableArray alloc] init];
if ([imageArray count]) {
    int i = 0; // declared outside the loop so each image gets its own special character
    for (TFHppleElement *element in imageArray) {
        NSString *imgSource = [[[element children] objectAtIndex:0] content];
        NSString *replacementString = [specialCharsArray objectAtIndex:i];
        UIImage *srcUIImage = [UIImage imageNamed:imgSource];
        [srcUIImage setAccessibilityIdentifier:imgSource]; // only needed if you need the image filename later, as it's lost in a UIImage if stored directly
        // imagePlacement is the NSObject subclass that stores the range, replacement and image, as above
        imagePlacement *foundImage = [[imagePlacement alloc] initWithSrc:srcUIImage replacement:replacementString];
        [imageRanges addObject:foundImage];
        i++;
    }
}
Step 3:
- (void)insertImages {
    if ([imageRanges count]) {
        [self setScrollEnabled:NO]; // seems buggy with scrolling on
        for (imagePlacement *myImagePlacement in imageRanges) {
            NSMutableAttributedString *placeholderAttString = [[NSMutableAttributedString alloc] initWithAttributedString:self.attributedText];
            // creates a text attachment with a transparent placeholder image
            NSTextAttachment *attachment = [[NSTextAttachment alloc] init];
            // scales the image down to the ratio of the view's width - you probably don't need this
            CGSize scaleToView = myImagePlacement.imgSrc.size;
            scaleToView.width = self.frame.size.width;
            scaleToView.height = (self.frame.size.width / myImagePlacement.imgSrc.size.width) * myImagePlacement.imgSrc.size.height;
            attachment.image = [self imageWithColor:[UIColor clearColor] andSize:scaleToView];
            NSMutableAttributedString *imageAttrString = [[NSAttributedString attributedStringWithAttachment:attachment] mutableCopy];
            // swap the reserved special character for the placeholder attachment
            NSRange replaceRange = [[placeholderAttString string] rangeOfString:myImagePlacement.replacement];
            if (replaceRange.location != NSNotFound) {
                [placeholderAttString replaceCharactersInRange:replaceRange withAttributedString:imageAttrString];
            }
            [self setAttributedText:placeholderAttString];
        }
    }
    [self setScrollEnabled:YES];
}
- (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize) size {
CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
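Step 4, as a rough sketch only - it assumes the imagePlacement object exposes its stored NSRange as a range property and its special character as replacement, and that the image src was kept in the UIImage's accessibilityIdentifier as above:
- (void)managerDidCacheImage:(UIImage *)image forUrl:(NSString *)url {
    for (imagePlacement *placement in imageRanges) {
        if (![[placement.imgSrc accessibilityIdentifier] isEqualToString:url]) continue;
        // build the real attachment and swap it into the stored range
        NSTextAttachment *attachment = [[NSTextAttachment alloc] init];
        attachment.image = image;
        NSMutableAttributedString *updated = [self.attributedText mutableCopy];
        [updated replaceCharactersInRange:placement.range
                     withAttributedString:[NSAttributedString attributedStringWithAttachment:attachment]];
        self.attributedText = updated;
        break;
    }
}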
I am trying to write a prototype to prove that RAW conversion from one format to another is possible. I have to convert a Nikon raw file, which is in .NEF format, to Canon's .CR2 format. With the help of various posts, I create a BitmapImageRep from the original image's TIFF representation and use it to write the output file, which has a .CR2 extension.
It does work, but the only problem is that the input file is 21.5 MB while the output I am getting is 144.4 MB. Using NSTIFFCompressionPackBits instead gives me 142.1 MB.
I want to understand what is happening. I have tried the various compression enums available, but with no success.
Please help me understand it. This is the source code:
@interface NSImage (RawConversion)
- (void)saveAsCR2WithName:(NSString *)fileName;
@end
@implementation NSImage (RawConversion)
- (void) saveAsCR2WithName:(NSString*) fileName
{
// Cache the reduced image
NSData *imageData = [self TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
// http://www.cocoabuilder.com/archive/cocoa/151789-nsbitmapimagerep-compressed-tiff-large-files.html
NSDictionary *imageProps = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithInt:NSTIFFCompressionJPEG], NSImageCompressionMethod,
                            [NSNumber numberWithFloat:1.0], NSImageCompressionFactor,
                            nil];
imageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
[imageData writeToFile:fileName atomically:NO];
}
@end
How can I get an output file in CR2 format that is close to the size of the input file, with only the little variation expected of a CR2 file?
Edit 1:
I made changes based on Peter's suggestion to use the CGImageDestinationAddImageFromSource method, but I am still getting the same result: the source NEF file is 21.5 MB, but the destination file after conversion is 144.4 MB.
Please review the code:
-(void)saveAsCR2WithCGImageMethodUsingName:(NSString*)inDestinationfileName withSourceFile:(NSString*)inSourceFileName
{
CGImageSourceRef sourceFile = MyCreateCGImageSourceRefFromFile(inSourceFileName);
CGImageDestinationRef destinationFile = createCGImageDestinationRefFromFile(inDestinationfileName);
CGImageDestinationAddImageFromSource(destinationFile, sourceFile, 0, NULL);
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
CGImageDestinationFinalize(destinationFile);
}
CGImageSourceRef MyCreateCGImageSourceRefFromFile (NSString* path)
{
// Get the URL for the pathname passed to the function.
NSURL *url = [NSURL fileURLWithPath:path];
CGImageSourceRef myImageSource;
CFDictionaryRef myOptions = NULL;
CFStringRef myKeys[2];
CFTypeRef myValues[2];
// Set up options if you want them. The options here are for
// caching the image in a decoded form and for using floating-point
// values if the image format supports them.
myKeys[0] = kCGImageSourceShouldCache;
myValues[0] = (CFTypeRef)kCFBooleanTrue;
myKeys[1] = kCGImageSourceShouldAllowFloat;
myValues[1] = (CFTypeRef)kCFBooleanTrue;
// Create the dictionary
myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                               (const void **)myValues, 2,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
// Create an image source from the URL.
myImageSource = CGImageSourceCreateWithURL((CFURLRef)url, myOptions);
CFRelease(myOptions);
// Make sure the image source exists before continuing
if (myImageSource == NULL){
fprintf(stderr, "Image source is NULL.");
return NULL;
}
return myImageSource;
}
CGImageDestinationRef createCGImageDestinationRefFromFile (NSString *path)
{
NSURL *url = [NSURL fileURLWithPath:path];
CGImageDestinationRef myImageDestination;
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
float compression = 1.0; // Lossless compression if available.
int orientation = 4; // Origin is at bottom, left.
CFStringRef myKeys[3];
CFTypeRef myValues[3];
CFDictionaryRef myOptions = NULL;
myKeys[0] = kCGImagePropertyOrientation;
myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
myKeys[1] = kCGImagePropertyHasAlpha;
myValues[1] = kCFBooleanTrue;
myKeys[2] = kCGImageDestinationLossyCompressionQuality;
myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
myOptions = CFDictionaryCreate( NULL, (const void **)myKeys, (const void **)myValues, 3,
&kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/imageio_basics/ikpg_basics.html#//apple_ref/doc/uid/TP40005462-CH216-SW3
CFStringRef destFileType = CFSTR("public.tiff");
// CFStringRef destFileType = kUTTypeJPEG;
CFArrayRef types = CGImageDestinationCopyTypeIdentifiers();
CFShow(types);
myImageDestination = CGImageDestinationCreateWithURL((CFURLRef)url, destFileType, 1, myOptions);
return myImageDestination;
}
Edit 2: I used the second approach suggested by @Peter. This gives an interesting result. Its effect is the same as renaming the file in the Finder from something like "example_image.NEF" to "example_image.CR2". Surprisingly, what happens both programmatically and in the Finder is that the 21.5 MB source file turns into 59 KB. This is without any compression set in the code. Please see the code and suggest:
-(void)convertNEFWithTiffIntermediate:(NSString*)inNEFFile toCR2:(NSString*)inCR2File
{
NSData *fileData = [[NSData alloc] initWithContentsOfFile:inNEFFile];
if (fileData)
{
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:fileData];
// [imageRep setCompression:NSTIFFCompressionNone
// factor:1.0];
NSDictionary *imageProps = nil;
NSData *destinationImageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
[destinationImageData writeToFile:inCR2File atomically:NO];
}
}
The first thing I would try doesn't involve NSImage or NSBitmapImageRep at all. Instead, I would create a CGImageSource for the source file and a CGImageDestination for the destination file, and use CGImageDestinationAddImageFromSource to transfer all of the images from A to B.
You're converting to TIFF twice in this code:
1. You create an NSImage, I assume from the source file.
2. You ask the NSImage for its TIFFRepresentation (TIFF conversion #1).
3. You create an NSBitmapImageRep from the first TIFF data.
4. You ask the NSBitmapImageRep to generate a second TIFF representation (TIFF conversion #2).
Consider creating an NSBitmapImageRep directly from the source data, and not using NSImage at all. You would then skip directly to step 4 to generate the output data.
(But I still would try CGImageDestinationAddImageFromSource first.)
Raw image files have their own (proprietary) representation.
For example, they may use 14 bits per component and mosaic (color filter array) patterns, which are not supported by your code.
I think you should use a lower-level API and really reverse engineer the RAW format you are trying to save to.
I would start with DNG, which is relatively easy, as Adobe provides an SDK to write it.
I need to extract the raw RGB bitmap data from a JPEG or PNG file, with all the bits in the file, not a window or color converted version.
I'm new to Cocoa, but it looks like I open an image using NSImage like this:
NSString* imageName=[[NSBundle mainBundle] pathForResource:@"/Users/me/Temp/oxberry.jpg" ofType:@"JPG"];
NSImage* tempImage=[[NSImage alloc] initWithContentsOfFile:imageName];
NSBitmapImageRep* imageRep=[[[NSBitmapImageRep alloc] initWithData:[tempImage TIFFRepresentation]] autorelease];
unsigned char* bytes=[imageRep bitmapData];
int bits=[imageRep bitsPerPixel];
Then, to get the bitmap data, there seem to be lots of options: NSBitmapImageRep, CGImage, etc.
What is the simplest approach and if there was a code snippet, that would be great.
Thanks!
You're on the right track. As you noticed, there are a lot of ways to do this.
Once you have an NSImage, you can create a bitmap representation and access its bytes directly. An easy way to get an NSBitmapImageRep is to do this:
NSBitmapImageRep* imageRep = [[[NSBitmapImageRep alloc] initWithData:[tempImage TIFFRepresentation]] autorelease];
unsigned char* bytes = [imageRep bitmapData];
int bitsPerPixel = [imageRep bitsPerPixel];
// etc
Going through the TIFFRepresentation step is safer than accessing the NSImage's representations directly.
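From there, a short sketch of walking the pixels, assuming a meshed (non-planar) rep with 8 bits per sample; check samplesPerPixel rather than assuming RGB vs. RGBA, and note that rows can be padded:
NSInteger samplesPerPixel = [imageRep samplesPerPixel];
NSInteger bytesPerRow = [imageRep bytesPerRow]; // may exceed pixelsWide * samplesPerPixel
for (NSInteger y = 0; y < [imageRep pixelsHigh]; y++) {
    unsigned char *row = bytes + y * bytesPerRow;
    for (NSInteger x = 0; x < [imageRep pixelsWide]; x++) {
        unsigned char r = row[x * samplesPerPixel + 0];
        unsigned char g = row[x * samplesPerPixel + 1];
        unsigned char b = row[x * samplesPerPixel + 2];
        // ... use r, g, b here ...
    }
}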