Convert OpenCV part of image to NSImage - macos

I'm trying to read an image using OpenCV, get a part of it and show it in an NSImageView object.
Here is how I'm doing it:
cv::Mat im = cv::imread("/Users/maddev/Documents/R8.jpg");
cv::Mat reduced = im(cv::Rect(10, 10, 400, 400));
std::vector<unsigned char> data;
unsigned char *ptr = reduced.data;
unsigned long size = reduced.total() * reduced.elemSize();
data.assign(ptr, ptr + size);
int cols = reduced.cols;
int rows = reduced.rows;
// to NSImage
unsigned char *pimage = &data[0];
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:&pimage
                  pixelsWide:cols
                  pixelsHigh:rows
               bitsPerSample:8
             samplesPerPixel:3
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                 bytesPerRow:cols*3
                bitsPerPixel:24];
NSImage *image = [[NSImage alloc] initWithCGImage:[rep CGImage] size:NSMakeSize(cols,rows)];
imageView.image = image;
This doesn't work; I just get some garbage displayed.
But if I use the same approach on the im object, everything works as expected.
Why can't reduced be displayed properly? Do I need to check anything else when creating the NSBitmapImageRep object?
Thx
P.S. Please ignore the vector stuff; this problem is part of a bigger solution, and this is the only way I can bring the image bytes into the main application from a cv::Mat picture.

I guess I'll answer myself.
The solution was actually pretty easy: when a cv::Mat is created from a region of another, only the image header is copied; the data pointer still points into the original image, and each row is strided by the full width of the original, which leads to a wrong picture display.
All I had to do was use the copyTo method when creating the cropped version, to actually copy the needed data from the big picture to the small one, and then everything worked as needed.
Something like this:
cv::Mat reduced;
orig(roi).copyTo(reduced);
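For completeness, a minimal sketch of the fixed flow. The cv::cvtColor call is my own addition, not part of the original answer: cv::imread returns BGR-ordered pixels, so without it the rep above (which assumes RGB) shows red and blue swapped.
cv::Mat im = cv::imread("/Users/maddev/Documents/R8.jpg");
// clone() deep-copies the ROI into a contiguous buffer whose rows are
// exactly cols*3 bytes apart, matching the bytesPerRow:cols*3 assumption
cv::Mat reduced = im(cv::Rect(10, 10, 400, 400)).clone();
// assumption: convert OpenCV's BGR order to the RGB order the rep expects
cv::cvtColor(reduced, reduced, cv::COLOR_BGR2RGB);
CV_Assert(reduced.isContinuous()); // an un-cloned ROI view would fail this check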

Related

Access CGImageRef underlying bytes and modify?

I am working on an OSX app that does some pixel-level image manipulation. I am using the following code to access the pixel color components (RGBA) as regular bytes cast as uint8 pointers.
NSImage *image = self.iv.image;
NSRect imageRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&imageRect context:NULL hints:nil];
NSData *data = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(cgImage)));
uint8 *pixels = (uint8 *)[data bytes];
At this point I apply some byte-level changes in:
for (int i = 0; i < [data length]; i += 4) { ... }
Changing this region of memory does not appear to have any effect on the original CGImageRef (which is at the time displayed in an NSImageView). I must do the following to see the image update accordingly:
// (width, height, bitsPerComponent, etc. are presumably queried from the
// original image via CGImageGetWidth() and friends, and provider is
// recreated from the modified bytes)
CGImageRef newImageRef = CGImageCreate(width,
                                       height,
                                       bitsPerComponent,
                                       bitsPerPixel,
                                       bytesPerRow,
                                       colorspace,
                                       bitmapInfo,
                                       provider,
                                       NULL,
                                       false,
                                       kCGRenderingIntentDefault);
NSSize size = NSMakeSize(CGImageGetWidth(newImageRef),
                         CGImageGetHeight(newImageRef));
NSImage * newIm = [[NSImage alloc] initWithCGImage:newImageRef size:size];
self.iv.image = newIm;
In other words, the bytes I get back to modify are just a copy of the original bytes, presumably as a result of CGDataProviderCopyData(CGImageGetDataProvider(cgImage)).
My question is as follows. Is there is a way to access the underlying bytes of the CGImageRef directly such that when I modify them the image is updated on screen as I manipulate them?
No. CGImages are immutable. You can't change them once they are created.
In your code, the call to [data bytes] gives a pointer to const void. You have cast away the const which gets it to compile without warnings, but that's a violation of the design contract. Writing to the buffer backing the data provider is not legal and not guaranteed to work, even if you create a new CGImage from it.
I will also point out that the format of the data in the buffer may be quite different from what you were expecting. There's no good reason to expect the data to be 32 bits per pixel, RGBA vs. BGRA vs. ARGB vs. …, or anything.
I strongly recommend that you read the sections about the various image objects in the 10.6 AppKit release notes. Scroll down to "NSImage, CGImage, and CoreGraphics impedance matching" and read through all of the following image-related sections until you hit "NSComboBox". The section "NSBitmapImageRep: CoreGraphics impedance matching and performance notes" is one of the more important ones for your purposes.
Beyond what that says, you could just maintain a pixel buffer that you allocated yourself in whatever format you prefer. Then, when you want a CGImage of that, create it from the buffer, draw with it, and discard it. Any pixel manipulations would be done on that buffer.
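A minimal sketch of that approach, assuming 8-bit RGBA; the names pixels, width, and height are placeholders, not from the original question:
size_t width = 256, height = 256;
size_t bytesPerRow = width * 4;
uint8_t *pixels = calloc(height, bytesPerRow); // buffer you own and may mutate freely

// ... apply your byte-level changes to pixels here ...

// wrap the current contents in a short-lived CGImage whenever you want to draw
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef snapshot = CGBitmapContextCreateImage(ctx); // copies the pixels as of now
self.iv.image = [[NSImage alloc] initWithCGImage:snapshot
                                            size:NSMakeSize(width, height)];
CGImageRelease(snapshot);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
Repeat the wrap-and-assign step after each batch of edits; the buffer itself never needs to be copied back.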

Getting bitmap data from JPEG image using Cocoa

I need to extract the raw RGB bitmap data from a JPEG or PNG file, with all the bits in the file, not a window- or color-converted version.
I'm new to Cocoa, but it looks like I open an image using NSImage like this:
NSString* imageName=[[NSBundle mainBundle] pathForResource:@"/Users/me/Temp/oxberry.jpg" ofType:@"JPG"];
NSImage* tempImage=[[NSImage alloc] initWithContentsOfFile:imageName];
NSBitmapImageRep* imageRep=[[[NSBitmapImageRep alloc] initWithData:[tempImage TIFFRepresentation]] autorelease];
unsigned char* bytes=[imageRep bitmapData];
int bits=[imageRep bitsPerPixel];
Then, to get the bitmap data, there seem to be lots of options: NSBitmapImageRep, CGImage, etc.
What is the simplest approach and if there was a code snippet, that would be great.
Thanks!
You're on the right track. As you noticed, there are a lot of ways to do this.
Once you have an NSImage, you can create a bitmap representation and access its bytes directly. An easy way to get an NSBitmapImageRep is to do this:
NSBitmapImageRep* imageRep = [[[NSBitmapImageRep alloc] initWithData:[tempImage TIFFRepresentation]] autorelease];
unsigned char* bytes = [imageRep bitmapData];
int bitsPerPixel = [imageRep bitsPerPixel];
// etc
Going through the TIFFRepresentation step is safer than accessing the NSImage's representations directly.
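Once you have the rep, here is a minimal sketch of walking its pixels, assuming a meshed (non-planar) RGB or RGBA layout. Note that rows can be padded, so step by bytesPerRow rather than assuming the rows are tightly packed:
NSInteger samplesPerPixel = [imageRep samplesPerPixel];
NSInteger bytesPerRow = [imageRep bytesPerRow];
unsigned char* base = [imageRep bitmapData];

for (NSInteger y = 0; y < [imageRep pixelsHigh]; y++) {
    unsigned char* row = base + y * bytesPerRow;
    for (NSInteger x = 0; x < [imageRep pixelsWide]; x++) {
        unsigned char* px = row + x * samplesPerPixel;
        // px[0], px[1], px[2] are R, G, B; px[3] is alpha when present
    }
}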

imageWithCGImage not being released or is trapped by Cache similar to imageNamed, any work around for generating dynamic images?

I'm generating UIImages from a bit bucket, creating them on the fly and swapping the UIImageView's image. Is there a way to edit the UIImageView's image directly? (i.e., change the color of a specific pixel, without removing the UIImage from the UIImageView, and get it to redraw.)
Currently, I'm flushing the UIImage and using imageWithCGImage to make a new one, and assigning it to the UIImageView. This works and shows no memory leaks. But on the iPhone (3GS), after about 100 image replacements, it CRASHES. A caching issue? The memory total seems to be hitting the phone's limit if the cache isn't releasing; the Simulator, however, shows no memory growth with each image swap. It stays flatlined without leaks.
Note: the topologyImage array is the RGBA pixel bucket. The REF variables are not released; every attempt to do so crashes the next call. Without releasing them, Instruments reports no leaks.
CGColorSpaceRef colorSpaceRef=CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent=kCGRenderingIntentDefault;
CGDataProviderRef provider=CGDataProviderCreateWithData(NULL,topologyImage,(I*I*4),NULL);
CGImageRef imageRef=CGImageCreate(I,I,8,4*8,4*I,colorSpaceRef,bitmapInfo,provider,NULL,false,renderingIntent);
UIImage *img=[UIImage imageWithCGImage:imageRef];
if( IMG[NDXtopo].vw ) {
    [IMG[NDXtopo].vw setImage:img];
}
else {
    IMG[NDXtopo].vw=[[UIImageView alloc] initWithImage:img];
    [master.view addSubview:IMG[NDXtopo].vw];
}
Basically you should release your references, especially the CGImageRef, since imageWithCGImage doesn't take ownership of the CGImage but rather seems to copy the data internally.
The docs on this are quite unclear, but from what I have found in my testing if I don't release CGImageRefs and CGDataProviderRefs it will eventually cause the application to get memory warnings... and then crash.
Not sure why you would have a crash, but in doing a quick test with:
UIImageView *view = [[UIImageView alloc] init];
int I = 128;
unsigned char *topologyImage = malloc(I*I*4*sizeof(unsigned char));
for(int i=0; i<I*I*4; i++)
{
    topologyImage[i] = 100;
}
for(int i=0; i<1000; i++)
{
    CGColorSpaceRef colorSpaceRef=CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
    CGColorRenderingIntent renderingIntent=kCGRenderingIntentDefault;
    CGDataProviderRef provider=CGDataProviderCreateWithData(NULL,topologyImage,(I*I*4),NULL);
    CGImageRef imageRef=CGImageCreate(I,I,8,4*8,4*I,colorSpaceRef,bitmapInfo,provider,NULL,false,renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    UIImage *img=[UIImage imageWithCGImage:imageRef];
    view.image = img;
    CGImageRelease(imageRef);
}
free(topologyImage);
This seems to work just fine for me, so whatever is causing your crash must come from something outside of your example, such as how you got the image data into topologyImage.

converting a pointer to unsigned char to an NSImage

I have a structure which includes a pointer to a data set, which in this case is a 16-bit grayscale image. I want to convert this data to an NSImage so that I can display it, and then save it as a .TIF file. The route from the manuals appears to be something like:
(Create *myNSImData from frame->image, which is a pointer)
NSImage *TestImage = [[NSImage alloc] initWithData : myNSImData];
(display TestImage, save it, whatever else)
[TestImage release];
I am lost as to how to create the NSData object and ensure it contains the array of 16-bit data. Attempts to recast the pointer give warnings and no data. I could simply increment the pointers, transferring one byte at a time from frame->image to the data object, but I don't understand how to communicate the array structure to the data object. Any ideas?
Thanks
MORE ATTEMPTS USING YOUR SUGGESTION
I can convert this data to a .TIF file in the following manner:
for (uint32 row = 0; row < MaxHeight; row++)
{
    for (uint32 column = 0; column < MaxWidth; column++)
    {
        tempData = (uint8_t)*frame->image; // first byte
        frame->image++;
        buf[2 * column + 1] = (unsigned char) tempData;
        tempData = (uint8_t)*frame->image; // second byte
        frame->image++;
        buf[2 * column] = (unsigned char) tempData;
    }
    TIFFWriteScanline(tiffile, buf, row, 0);
}
With the .TIF file thus generated, I can create an NSImage and display it:
NSImage *TestImage = [[[NSImage alloc] initWithContentsOfFile:inFilePath] autorelease];
[viewWindow setImage: TestImage];
My question now becomes: can I create an NSData object that I can display in the same way? I have tried the following (product is the height*width of the image):
NSData *ReadImage = [[[NSData alloc] initWithBytes: frame->image length:2*product] autorelease] ;
NSImage *NewImage = [[[NSImage alloc] initWithData:ReadImage] autorelease];
NSSize newSize;
newSize.height = MaxHeight; //height of the image
newSize.width = MaxWidth; //width of the image
[NewImage setSize:newSize];
[viewWindow setImage: NewImage];
When I try this, nothing displays. I have also tried creating an array of uint16_t containing the data, and serving up a pointer to that; again, nothing displays. Any ideas? E.g., do I have to tell the NSData that I am using 2 bytes per pixel, or something like that? Thanks, Monty Wood
To create an NSData object containing a block to which you have a pointer, you should use one of the three methods that start with initWithBytes:, or, to create an autoreleased NSData object, use one of the class methods that start with dataWithBytes:
UPDATE: I think that if you want to create an NSImage directly from an NSData, the data needs to include the appropriate headers/magic numbers so that NSImage can figure out what the representation is. You should look at NSBitmapImageRep and the Images chapter of the Cocoa Drawing Guide for raw image data.
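For raw 16-bit grayscale pixels specifically, here is a minimal sketch along those lines, assuming the data is tightly packed and in the byte order the rep expects (if the result looks wrong, swap the bytes first, as the TIFF loop above does); MaxWidth and MaxHeight are the dimensions from the question:
unsigned char *planes[1] = { (unsigned char *)frame->image };
NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:planes
                  pixelsWide:MaxWidth
                  pixelsHigh:MaxHeight
               bitsPerSample:16
             samplesPerPixel:1
                    hasAlpha:NO
                    isPlanar:NO
              colorSpaceName:NSCalibratedWhiteColorSpace
                 bytesPerRow:MaxWidth * 2 // 2 bytes per pixel, no padding
                bitsPerPixel:16] autorelease];

NSImage *image = [[[NSImage alloc] initWithSize:NSMakeSize(MaxWidth, MaxHeight)] autorelease];
[image addRepresentation:rep];
[viewWindow setImage:image];
This bypasses initWithData: entirely; NSBitmapImageRep is told the pixel format explicitly, so no headers or magic numbers are needed.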

NSBitmapImageRep data Format as application icon image?

I have a char* array of data that was in RGBA and then moved to ARGB.
Bottom line: the application icon image I set looks totally messed up, and I can't put my finger on why.
//create a bitmap representation of the image data.
//The data is expected to be unsigned char**
NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:(unsigned char**)&dest
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                bitmapFormat:NSAlphaFirstBitmapFormat
                 bytesPerRow:bytesPerRow
                bitsPerPixel:32];
//allocate the image
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:bitmap];
if( image == NULL) {
printf("image is null\n");
fflush(stdout);
}
//set the icon image of the application
[NSApp setApplicationIconImage :image];
//tell the image to autorelease when done
[image autorelease];
What in these values is not right? The image looks very multicolored and pixelated, with transparent parts/lines as well.
EDIT: after changing bytesPerRow to width*4 (one scanline), this is the image I get.
The original image is just an orange square.
EDIT2: updated image and some of the parameters.
Thanks!
(screenshot: http://www.freeimagehosting.net/uploads/3793520d98.png)
It is only useful to specify 0 for bytesPerRow if you're also passing NULL for the data (and thus letting the rep allocate it itself). If you pass zero, you're asking the system to use the "best" bytesPerRow, which is not stable between architectures and OS versions. It isn't width*bytesPerPixel; it's padded out for alignment.
This is one thing that is wrong, at least.
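One way to sidestep the padding question entirely, sketched under the assumption that dest holds tightly packed ARGB rows of width*4 bytes: pass NULL for the planes so the rep allocates its own storage, then copy row by row using whatever bytesPerRow the rep actually chose.
NSBitmapImageRep *bitmap = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL // let the rep allocate its own buffer
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSDeviceRGBColorSpace
                bitmapFormat:NSAlphaFirstBitmapFormat
                 bytesPerRow:0 // 0 is allowed here because the planes are NULL
                bitsPerPixel:32] autorelease];

unsigned char *dst = [bitmap bitmapData];
NSInteger repBytesPerRow = [bitmap bytesPerRow];
for (NSInteger y = 0; y < height; y++) {
    memcpy(dst + y * repBytesPerRow, dest + y * width * 4, width * 4);
}
// then add the rep to an NSImage exactly as in the question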
