iOS 8 CGContextRef unsupported parameter combination

Anyone know how to update this code for iOS 8? I am getting this error message:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaPremultipliedFirst; 4294967289 bytes/row.
CGContextRef CreateBitmapContextFromSizeWithData(CGSize s, void *data)
{
    int w = s.width, h = s.height;
    int bitsPerComponent = 8;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int components = 4;
    int bytesPerRow = (w * bitsPerComponent * components + 7) / 8;
    CGContextRef result = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return result;
}

Bytes per row is calculated incorrectly in the above snippet.
To calculate the bytes per row, you can just take the width of your image and multiply it by the number of bytes per pixel, which is four in your case (four 8-bit components). Note that 4294967289 in the error message is -7 reinterpreted as an unsigned 32-bit value, so the computed int went negative before being passed to CGBitmapContextCreate.
int bytesPerRow = w * 4;
Be careful though: if data points to image data that is stored as plain RGB, you have three bytes per pixel. You will then also need to pass kCGImageAlphaNoneSkipFirst as the last parameter to CGBitmapContextCreate so the alpha channel is omitted.
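For reference, here is a corrected sketch of the function, based on the advice above (the early-out and the added kCGBitmapByteOrder32Little are my additions, not from the original answer; premultiplied-first BGRA is a combination CGBitmapContextCreate is known to accept on iOS):
CGContextRef CreateBitmapContextFromSizeWithData(CGSize s, void *data)
{
    if (s.width <= 0 || s.height <= 0)
        return NULL; // a bad size is exactly what produces the bogus bytes/row

    size_t w = (size_t)s.width, h = (size_t)s.height;
    size_t bytesPerPixel = 4;               // 4 components, 8 bits each
    size_t bytesPerRow = w * bytesPerPixel; // already a whole number of bytes

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef result = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, colorSpace,
                                                (CGBitmapInfo)kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpace);
    return result;
}
Computing the row size in size_t and directly in bytes removes both the signed overflow and the mixed bits/bytes arithmetic of the original formula.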

Related

CVPixelBuffer created from CGImage has different attributes

EDIT: I worked around this issue, but to rephrase my question - is it possible to create the CVPixelBuffer to match the CGImage? For example, I would like to have 16 bits per component instead of 8.
As the title says. For example, when I NSLog my buffer it has 4544 bytes per row, but when I NSLog the actual CGImage it has 9000. These are the logs:
buffer: <CVPixelBuffer 0x282b8b020 width=1125 height=2436 bytesPerRow=4544 pixelFormat=BGRA iosurface=0x0 attributes={ PixelFormatDescription = {
BitsPerBlock = 32;
BitsPerComponent = 8;
BlackBlock = {length = 4, bytes = 0x000000ff};
CGBitmapContextCompatibility = 1;
CGBitmapInfo = 8196;
CGImageCompatibility = 1;
ComponentRange = FullRange;
ContainsAlpha = 1;
ContainsGrayscale = 0;
ContainsRGB = 1;
ContainsYCbCr = 0;
FillExtendedPixelsCallback = {length = 24, bytes = 0x00000000000000008c3c1da7010000000000000000000000};
IOSurfaceCoreAnimationCompatibility = 1;
IOSurfaceOpenGLESFBOCompatibility = 1;
IOSurfaceOpenGLESTextureCompatibility = 1;
OpenGLESCompatibility = 1;
PixelFormat = 1111970369;
};
} propagatedAttachments={
} nonPropagatedAttachments={
}>
And the image:
image: <CGImage 0x1044625d0> (IP) <<CGColorSpace 0x283e9ea60> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)>
width = 1125, height = 2436, bpc = 16, bpp = 64, row bytes = 9000
kCGImageAlphaNoneSkipLast | kCGImageByteOrder16Little | kCGImagePixelFormatPacked
is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
This is the code I use:
CGImageRef cgImageRef = [image CGImage];
CGImageRetain(cgImageRef);

CVPixelBufferRef pixelBuffer = NULL;
size_t w = CGImageGetWidth(cgImageRef);
size_t h = CGImageGetHeight(cgImageRef);

CGDataProviderRef x = CGImageGetDataProvider(cgImageRef);
CGDataProviderRetain(x);
CFDataRef da = CGDataProviderCopyData(x);
CFIndex len = CFDataGetLength(da);
const uint8_t *src = CFDataGetBytePtr(da);

CVPixelBufferCreate(kCFAllocatorDefault,
                    w,
                    h,
                    kCVPixelFormatType_32BGRA,
                    nil,
                    &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *base = CVPixelBufferGetBaseAddress(pixelBuffer);

SNLog(@"buffer: %@", pixelBuffer);
SNLog(@"image: %@", cgImageRef);
I am later trying to go through the CGImage data and manually copy the bytes into the destination pixel buffer, but it crashes, and I assume it is because at some point I am trying to read from "after" the actual image.
I guess my initial question was not exactly clear, or perhaps I should have rephrased it slightly. The problem is more like - I would like to create my buffer to match the image, with the same number of bytes/bits per component.
I worked around this by looping through the buffer and individually picking the bytes I want from the image, basically ignoring some bits of each pixel.
In the end the image still looks fine; I suspect the increased number of bits is used for better quality on more capable screens, although I am not sure.
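For what it's worth, the row sizes in the logs add up: the CGImage is 64 bpp (8 bytes per pixel), so 1125 x 8 = 9000 row bytes, while the 32BGRA buffer needs only 1125 x 4 = 4500 bytes per row, which CoreVideo pads up to 4544 for alignment. Indexing rows with the wrong stride is a typical cause of the crash described above. Below is a minimal sketch of the byte-picking workaround, assuming the exact formats from the logs (16-bit little-endian RGBX source, 8-bit BGRA destination) and the variables from the snippet; it keeps only the high byte of each 16-bit component:
for (size_t row = 0; row < h; row++) {
    const uint8_t *srcRow = src + row * CGImageGetBytesPerRow(cgImageRef);
    uint8_t *dstRow = (uint8_t *)base + row * CVPixelBufferGetBytesPerRow(pixelBuffer);
    for (size_t col = 0; col < w; col++) {
        const uint8_t *s = srcRow + col * 8; // R G B X, 16-bit little-endian each
        uint8_t *d = dstRow + col * 4;       // B G R A, 8 bits each
        d[0] = s[5]; // blue:  high byte of the 16-bit blue component
        d[1] = s[3]; // green: high byte of the 16-bit green component
        d[2] = s[1]; // red:   high byte of the 16-bit red component
        d[3] = 0xFF; // source has no alpha (NoneSkipLast), write opaque
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
Note this drops the low byte of every component and ignores the Display P3 vs. sRGB difference, which matches the "looks fine" observation but is not color-exact.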

Count the number of black pixels using ByteBuffer javacv

I have used this code. I am new to javacv and I need to get the pixels one by one in a region and get the color of each pixel. Can someone please explain how to do this using the ByteBuffer? The byte buffer can read pixel by pixel, and I need to check whether each pixel is black or white.
Can anyone please look into this? I am really stuck here.
IplImage img = cvLoadImage("img\\ROI.jpg");
ByteBuffer buffer = img.getByteBuffer();
int blackCount = 0;
for (int y = 0; y < img.height(); y++) {
    for (int x = 0; x < img.width(); x++) {
        int index = y * img.widthStep() + x * img.nChannels();
        // Read each channel - the & 0xFF is needed to cast the
        // unsigned byte to an int. Channels are stored in BGR order.
        int b = buffer.get(index) & 0xFF;
        int g = buffer.get(index + 1) & 0xFF;
        int r = buffer.get(index + 2) & 0xFF;
        // Treat a pixel as black when all three channels are near zero;
        // the small threshold absorbs JPEG compression noise.
        if (b < 10 && g < 10 && r < 10) {
            blackCount++;
        }
    }
}
System.out.println("Black pixels: " + blackCount);

How to iterate through all pixels of an UIImage?

Hey guys, I am currently trying to iterate through all pixels of a UIImage, but the way I implemented it takes so much time, so I figured I implemented it the wrong way.
This is my method for getting the RGBA values of a pixel:
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    // Initialize the result array
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into a data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);   // width of the image
    NSUInteger height = CGImageGetHeight(imageRef); // height of the image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Get the raw data out of the image
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = rawData[byteIndex]     / 255.0;
        CGFloat green = rawData[byteIndex + 1] / 255.0;
        CGFloat blue  = rawData[byteIndex + 2] / 255.0;
        CGFloat alpha = rawData[byteIndex + 3] / 255.0;
        byteIndex += 4;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
And this is how I iterate over all the pixels (note the loop bounds: height for y, width for x):
for (NSUInteger y = 0; y < self.originalPictureHeight; y++) {
    for (NSUInteger x = 0; x < self.originalPictureWidth; x++) {
        NSArray *originalRGBA = [ComputerVisionHelperClass getRGBAsFromImage:self.originalPicture atX:(int)x andY:(int)y count:1];
        NSArray *referenceRGBA = [ComputerVisionHelperClass getRGBAsFromImage:self.referencePicture atX:(int)referenceIndexX andY:(int)referenceIndexY count:1];
        // Do something else ....
    }
}
Is there a faster way of getting all RGBA values of a UIImage instance?
For every pixel, you're generating a new copy of the entire image and then throwing it away. Yes, it would be much faster to get the data once and then process that byte array.
But it heavily depends on what is in "Do something else." There are many Core Image and vImage functions that can do image processing very quickly, but you may need to approach the problem differently; it depends on what you're doing.
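For illustration, here is a minimal sketch of that approach (the enumeratePixels helper and its block signature are made-up names, not part of the original code): the image is drawn into a raw RGBA buffer exactly once, and every pixel is then visited with plain pointer arithmetic instead of a method call plus a UIColor allocation per pixel.
void enumeratePixels(UIImage *image,
                     void (^block)(size_t x, size_t y, uint8_t r, uint8_t g, uint8_t b, uint8_t a))
{
    CGImageRef imageRef = [image CGImage];
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bytesPerRow = width * 4;

    // Draw the image into an RGBA8888 buffer once.
    unsigned char *rawData = (unsigned char *)calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Single pass over the buffer; no per-pixel allocations.
    for (size_t y = 0; y < height; y++) {
        const unsigned char *row = rawData + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            const unsigned char *p = row + x * 4;
            block(x, y, p[0], p[1], p[2], p[3]);
        }
    }
    free(rawData);
}
With this shape, whatever "Do something else" does can happen inside a single pass over the bytes.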

Taking snapshot of contents in CGL?

I want to create an image out of a Core OpenGL context.
I used the following code, but it creates a black image. So I guess I cannot use glReadPixels there? Any other suggestions, please?
int myDataLength = 320 * 480 * 4; // 320 x 480 pixels, 4 bytes each

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
for (int y = 0; y < 480; y++)
{
    for (int x = 0; x < 320 * 4; x++)
    {
        buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef image = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                 colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);
// PRINT image... It's black!!!!!!

CGDataProviderRelease(provider);
free(buffer);
free(buffer2);
Before you do a glReadPixels call you must do two things, as sketched below:
- set proper packing (see the glPixelStorei reference page)
- select the right buffer to read from with glReadBuffer (front after swapping, back before swapping; I recommend swapping first and then reading from the front buffer)
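A minimal sketch of those two steps, assuming a double-buffered context that has already swapped, and the 320 x 480 read size from the question:
// Pack rows tightly; this matters whenever a row is not a multiple of 4 bytes.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
// After the swap, the finished frame is in the front buffer.
glReadBuffer(GL_FRONT);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);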

Drawing RAW buffer to CGBitmapContext

I have a raw image buffer in the RGB format. I need to draw it into a CGContext so that I get a new buffer in the ARGB format. I accomplish this in the following way:
Create a data provider out of the raw buffer using CGDataProviderCreateWithData, then create an image out of the data provider with the API CGImageCreate, and finally draw this image into the CGBitmapContext using CGContextDrawImage.
Instead of creating an intermediate image, is there any way of writing the buffer directly to the CGContext so that I can avoid the image creation phase?
Thanks
If all you want is to take RGB data with no alpha component and turn it into ARGB data with full opacity (alpha = 1.0 at all points), why not just copy the data yourself into a new buffer?
// assuming 24-bit RGB (1 byte per color component)
unsigned char *rgb = /* ... */;
size_t rgb_bytes = /* ... */;

const size_t bpp_rgb  = 3; // bytes per pixel - rgb
const size_t bpp_argb = 4; // bytes per pixel - argb
const size_t npixels  = rgb_bytes / bpp_rgb;

unsigned char *argb = malloc(npixels * bpp_argb);
for (size_t i = 0; i < npixels; ++i) {
    const size_t argbi = bpp_argb * i;
    const size_t rgbi  = bpp_rgb * i;
    argb[argbi]     = 0xFF;          // alpha - full opacity
    argb[argbi + 1] = rgb[rgbi];     // r
    argb[argbi + 2] = rgb[rgbi + 1]; // g
    argb[argbi + 3] = rgb[rgbi + 2]; // b
}
If you are using a CGBitmapContext then you can get a pointer to the bitmap buffer using the CGBitmapContextGetData() function. You can then write your data directly to the buffer.
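A minimal sketch of that second approach (the w, h, and rgb variables are assumed here, not part of the original answer): let the context allocate its own ARGB backing store, then convert straight into it, so no intermediate CGImage is ever created.
// Passing NULL for data (and 0 for bytes per row) lets CG allocate the buffer.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// Write the converted pixels directly into the context's backing store.
unsigned char *dst = CGBitmapContextGetData(ctx);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(ctx); // rows may be padded
for (size_t y = 0; y < h; ++y) {
    const unsigned char *srcRow = rgb + y * w * 3;
    unsigned char *dstRow = dst + y * bytesPerRow;
    for (size_t x = 0; x < w; ++x) {
        dstRow[4 * x]     = 0xFF;              // a - full opacity
        dstRow[4 * x + 1] = srcRow[3 * x];     // r
        dstRow[4 * x + 2] = srcRow[3 * x + 1]; // g
        dstRow[4 * x + 3] = srcRow[3 * x + 2]; // b
    }
}
Since alpha is 255 everywhere, the premultiplied format needs no extra scaling of the color components.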
