Taking a snapshot of the contents of a CGL context? - macOS

I want to create an image from a Core OpenGL (CGL) context.
I used the following code, but it produces a black image. So I guess I cannot use glReadPixels there? Any other suggestions, please?
int myDataLength = 480 * 480 * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < 480; y++)
{
    for (int x = 0; x < 320 * 4; x++)
    {
        buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef image = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);
// PRINT image... It's black!!!!!!

CGDataProviderRelease(provider);
free(buffer);
free(buffer2);

Before you do a glReadPixels call, you must:
- set the proper packing (see the glPixelStorei reference page), and
- select the right buffer to read from with glReadBuffer (front after swapping, back before swapping; I recommend swapping first and then reading from the front buffer).
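For example, a minimal sketch of what would precede the glReadPixels call in the question (the 320x480 size and buffer variable are taken from the question; reading GL_FRONT assumes you call this after swapping buffers):

// Tightly pack rows so no row padding is assumed during readback.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
// Read from the front buffer, i.e. the frame that was just swapped in.
glReadBuffer(GL_FRONT);
// Make sure rendering has finished before reading the pixels back.
glFinish();
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);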

Related

Can't get BITMAPINFOHEADER data to display odd width bmp images correctly

I am trying to display a 24-bit uncompressed bitmap with an odd width using standard Win32 API calls, but it seems like I have a stride problem.
According to MSDN:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229%28v=vs.85%29.aspx
"For uncompressed RGB formats, the minimum stride is always the image width in bytes, rounded up to the nearest DWORD. You can use the following formula to calculate the stride:
stride = ((((biWidth * biBitCount) + 31) & ~31) >> 3)"
but this simply does not work for me, and below is the code:
void Init()
{
    pImage = ReadBMP("data\\bird.bmp");
    size_t imgSize = pImage->width * pImage->height * 3;

    BITMAPINFOHEADER bmih;
    bmih.biSize = sizeof(BITMAPINFOHEADER);
    bmih.biBitCount = 24;
    // This is probably where the bug is
    LONG stride = ((((pImage->width * bmih.biBitCount) + 31) & ~31) >> 3);
    //bmih.biWidth = pImage->width;
    bmih.biWidth = stride;
    bmih.biHeight = -((LONG)pImage->height);
    bmih.biPlanes = 1;
    bmih.biCompression = BI_RGB;
    bmih.biSizeImage = 0;
    bmih.biXPelsPerMeter = 1;
    bmih.biYPelsPerMeter = 1;
    bmih.biClrUsed = 0;
    bmih.biClrImportant = 0;

    BITMAPINFO dbmi;
    ZeroMemory(&dbmi, sizeof(dbmi));
    dbmi.bmiHeader = bmih;
    dbmi.bmiColors->rgbBlue = 0;
    dbmi.bmiColors->rgbGreen = 0;
    dbmi.bmiColors->rgbRed = 0;
    dbmi.bmiColors->rgbReserved = 0;

    HDC hdc = ::GetDC(NULL);

    mTestBMP = CreateDIBitmap(hdc,
                              &bmih,
                              CBM_INIT,
                              pImage->pSrc,
                              &dbmi,
                              DIB_RGB_COLORS);

    hdc = ::GetDC(NULL);
}
and here is the drawing function:
RawBMP *pImage;
HBITMAP mTestBMP;

void UpdateScreen(HDC srcHDC)
{
    if (pImage != nullptr && mTestBMP != 0x00)
    {
        HDC hdc = CreateCompatibleDC(srcHDC);
        SelectObject(hdc, mTestBMP);

        BitBlt(srcHDC,
               0,               // x
               0,               // y
               // I tried passing the stride here and it did not work either
               pImage->width,   // width of the image
               pImage->height,  // height
               hdc,
               0,               // x and
               0,               // y of upper left corner
               SRCCOPY);

        DeleteDC(hdc);
    }
}
If I pass the original image width (odd number) instead of the stride
LONG stride = ((((pImage->width * bmih.biBitCount) + 31) & ~31) >> 3);
//bmih.biWidth = stride;
bmih.biWidth = pImage->width;
the picture looks skewed (the comparison screenshots are not reproduced here), and if I pass the stride according to MSDN, then nothing shows up because the stride is too large.
Any clues? Thank you!
Thanks Jonathan for the solution. I need to copy the image row by row with the proper row padding. More or less, this is the code for 24-bit uncompressed images:
const uint32_t bitCount = 24;

// Round each row up to the next DWORD (4-byte) boundary; this covers any
// width whose raw row size (width * 3) is not already a multiple of 4.
LONG strideInBytes = ((((width * bitCount) + 31) & ~31) >> 3);

// allocate the new buffer
unsigned char *pBuffer = new unsigned char[strideInBytes * height];
memset(pBuffer, 0xaa, strideInBytes * height);

// Copy row by row
for (uint32_t yy = 0; yy < height; yy++)
{
    uint32_t rowSizeInBytes = width * 3;
    unsigned char *pDest = &pBuffer[yy * strideInBytes];
    unsigned char *pSrc  = &pData[yy * rowSizeInBytes];
    memcpy(pDest, pSrc, rowSizeInBytes);
}

rawBMP->pSrc   = pBuffer;
rawBMP->width  = width;
rawBMP->height = height;
rawBMP->stride = strideInBytes;
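As a quick sanity check of the MSDN formula, here is a small worked example (a width of 99 pixels is just an illustrative value):

// 24-bit image, width = 99 pixels:
//   raw row size = 99 * 3 = 297 bytes
//   stride       = ((99 * 24 + 31) & ~31) >> 3
//                = ((2376 + 31) & ~31) >> 3 = 2400 >> 3 = 300 bytes
// so each row carries 3 bytes of padding after the pixel data,
// which is exactly what the row-by-row copy above accounts for.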

How to draw into device context

I have a bitmap image in the form of an array of 32-bit integers (ARGB pixels: uint32 *mypixels), together with int width and int height. I need to output it to a printer.
I have the printer context: HDC hdcPrinter;
As I learned, I need first to create a compatible context:
HDC hdcMem = CreateCompatibleDC(hdcPrinter);
Then I need to create an HBITMAP object, select it into the compatible context, and render:
HBITMAP hBitmap = ...?
SelectObject(hdcMem, hBitmap);
BitBlt(hdcPrinter, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);
And finally clean up:
DeleteObject(hBitmap);
DeleteDC(hdcMem);
My question is how do I create an HBITMAP object and put mypixels into it?
I found two options:
HBITMAP hBitmap = CreateCompatibleBitmap(hdcPrinter, width, height);
Looks good, but how do mypixels get into this bitmap?
HBITMAP hBitmap = CreateDIBSection(hdcPrinter /*or hdcMem?*/, ...);
Will it work? Is it better than option 1?
This function creates a bitmap and sets it to an initial image.
It's a bit fiddly to access the bits directly, but it can be done.
HBITMAP MakeBitmap(unsigned char *rgba, int width, int height, VOID **buff)
{
    VOID *pvBits;          // pointer to DIB section
    HBITMAP answer;
    BITMAPINFO bmi;
    HDC hdc;
    int x, y;
    int red, green, blue, alpha;

    // setup bitmap info (zero it first so the unused fields are not garbage)
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = height;
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;         // four 8-bit components
    bmi.bmiHeader.biCompression = BI_RGB;
    bmi.bmiHeader.biSizeImage = width * height * 4;

    hdc = CreateCompatibleDC(GetDC(0));
    answer = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &pvBits, NULL, 0x0);

    for (y = 0; y < height; y++)
    {
        for (x = 0; x < width; x++)
        {
            red   = rgba[(y * width + x) * 4];
            green = rgba[(y * width + x) * 4 + 1];
            blue  = rgba[(y * width + x) * 4 + 2];
            alpha = rgba[(y * width + x) * 4 + 3];

            // pre-multiply the colour channels by alpha
            red   = (red * alpha) >> 8;
            green = (green * alpha) >> 8;
            blue  = (blue * alpha) >> 8;

            // DIB sections are bottom-up, so flip the row index
            ((UINT32 *)pvBits)[(height - y - 1) * width + x] =
                (alpha << 24) | (red << 16) | (green << 8) | blue;
        }
    }

    DeleteDC(hdc);
    *buff = pvBits;

    return answer;
}
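Putting this together with the outline in the question, a usage sketch might look like the following (hdcPrinter, mypixels, width and height are the variables from the question; note that MakeBitmap above expects RGBA byte order, so ARGB integers may need their channels reordered first):

VOID *bits = NULL;
HBITMAP hBitmap = MakeBitmap((unsigned char *)mypixels, width, height, &bits);

// Select the bitmap into a memory DC compatible with the printer.
HDC hdcMem = CreateCompatibleDC(hdcPrinter);
HGDIOBJ oldBmp = SelectObject(hdcMem, hBitmap);

// Copy the bitmap onto the printer page.
BitBlt(hdcPrinter, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);

// Clean up.
SelectObject(hdcMem, oldBmp);
DeleteDC(hdcMem);
DeleteObject(hBitmap);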

How to iterate through all pixels of a UIImage?

Hey guys, I am currently trying to iterate through all pixels of a UIImage, but the way I implemented it takes so much time that I think I implemented it the wrong way.
This is the method I use to get the RGBA values of a pixel:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count
{
    // Initializing the result array
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);        // width of our image
    NSUInteger height = CGImageGetHeight(imageRef);      // height of our image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Render the image into a raw RGBA buffer
    unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
And this is the code with which I iterate through all the pixels:
for (NSUInteger y = 0; y < self.originalPictureWidth; y++) {
    for (NSUInteger x = 0; x < self.originalPictureHeight; x++) {
        NSArray *originalRGBA  = [ComputerVisionHelperClass getRGBAsFromImage:self.originalPicture atX:(int)x andY:(int)y count:1];
        NSArray *referenceRGBA = [ComputerVisionHelperClass getRGBAsFromImage:self.referencePicture atX:(int)referenceIndexX andY:(int)referenceIndexY count:1];
        // Do something else ....
    }
}
Is there a faster way to get all RGBA values of a UIImage instance?
For every pixel, you're rendering a new copy of the image into a buffer and then throwing it away. Yes, it would be much faster to extract the data once and then process that byte array directly, as sketched below.
But it heavily depends on what "Do something else" is. There are many Core Image and vImage functions that can do image processing very quickly, but you may need to approach the problem differently; it depends on what you're doing.
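A minimal sketch of the "extract once, then iterate" approach, written against the Core Graphics C API (processPixel is a hypothetical callback standing in for "Do something else"; the buffer layout matches the RGBA8888 format the question's helper already uses):

#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

typedef void (*PixelFunc)(size_t x, size_t y,
                          unsigned char r, unsigned char g,
                          unsigned char b, unsigned char a);

// Render the image into one RGBA buffer, then walk that buffer once.
static void ForEachPixel(CGImageRef imageRef, PixelFunc processPixel)
{
    size_t width  = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bytesPerRow = width * 4;

    unsigned char *rawData = calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // One pass over the byte array instead of one bitmap context per pixel.
    for (size_t y = 0; y < height; y++) {
        const unsigned char *row = rawData + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            const unsigned char *p = row + x * 4;
            processPixel(x, y, p[0], p[1], p[2], p[3]);
        }
    }

    free(rawData);
}

From Objective-C you would call this with the CGImage backing the UIImage, e.g. ForEachPixel(self.originalPicture.CGImage, myPixelFunction).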

Export OpenGL ES video

Xcode has the ability to capture OpenGL ES frames from the iPad, and that's great! I would like to extend this functionality and capture an entire OpenGL ES movie of my application. Is there a way to do that?
If it's not possible using Xcode, how can I do it without much effort or big changes to my code? Thank you very much!
I use a very simple technique, which requires just a few lines of code.
You can capture each OpenGL frame into a UIImage with this code:
- (UIImage *)captureScreen {
    NSInteger dataLength = framebufferWidth * framebufferHeight * 4;

    // Allocate buffers.
    GLuint *buffer = (GLuint *)malloc(dataLength);
    GLuint *resultsBuffer = (GLuint *)malloc(dataLength);

    // Read data
    glReadPixels(0, 0, framebufferWidth, framebufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Flip vertically (OpenGL's origin is the bottom-left corner)
    for (int y = 0; y < framebufferHeight; y++) {
        for (int x = 0; x < framebufferWidth; x++) {
            resultsBuffer[x + y * framebufferWidth] = buffer[x + (framebufferHeight - 1 - y) * framebufferWidth];
        }
    }
    free(buffer);

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, resultsBuffer, dataLength, releaseScreenshotData);

    // prep the ingredients
    const int bitsPerComponent = 8;
    const int bitsPerPixel = 4 * bitsPerComponent;
    const int bytesPerRow = 4 * framebufferWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(framebufferWidth, framebufferHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    // then make the UIImage from that
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    return image;
}
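The code above hands a releaseScreenshotData callback to CGDataProviderCreateWithData so that resultsBuffer is freed once the provider no longer needs it. A minimal sketch of that callback (assuming the buffer was malloc'd as above) could be:

static void releaseScreenshotData(void *info, const void *data, size_t size) {
    // The provider owns the pixel buffer; free it when the provider is released.
    free((void *)data);
}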
Then you will capture each frame in your main loop:
- (void)onTimer {
    // Compute and render new frame
    [self update];

    // Recording
    if (recordingMode == RecordingModeMovie) {
        recordingFrameNum++;

        // Save frame
        UIImage *image = [self captureScreen];
        NSString *fileName = [NSString stringWithFormat:@"%d.jpg", (int)recordingFrameNum];
        [UIImageJPEGRepresentation(image, 1.0) writeToFile:[basePath stringByAppendingPathComponent:fileName] atomically:NO];
    }
}
At the end you will have lots of JPEG files, which can easily be converted into a movie with Time Lapse Assembler.
If you want a smooth 30 FPS movie, fix your simulation timestep to exactly 1/30.0 s per frame.
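A minimal sketch of that fixed-timestep idea (the names here are hypothetical; the point is only that the simulation advances by exactly one frame's worth of time per captured image rather than by wall-clock time):

// Advance the simulation by a fixed 1/30 s per captured frame so the
// exported movie plays back at a steady 30 FPS.
static const double kFrameDuration = 1.0 / 30.0;

static void StepSimulation(double *simulationTime) {
    *simulationTime += kFrameDuration;   // fixed step, not real elapsed time
    // ... update and render the scene for *simulationTime ...
}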

Decode values ignored in CGImageCreate

I am creating a monochrome image with the following code:
CGColorSpaceRef cgColorSpace = CGColorSpaceCreateDeviceGray();
CGImageRef cgImage = CGImageCreate(width, height, 1, 1, rowBytes, cgColorSpace, 0, dataProvider, decodeValues, NO, kCGRenderingIntentDefault);
where decodeValues is an array of two CGFloats equal to {0, 1}. This gives me a fine image, but apparently my data (which comes from a PDF image mask) is black-on-white instead of white-on-black. To invert the image, I tried setting decodeValues to {1, 0}, but this did not change anything at all. Actually, whatever nonsensical values I put into decodeValues, I get the same image.
Why is decodeValues ignored here? How do I invert black and white?
Here's some code for creating and drawing a mono image. It's the same as yours but with more context (and without the necessary cleanup):
size_t width = 200;
size_t height = 200;
size_t bitsPerComponent = 1;
size_t componentsPerPixel = 1;
size_t bitsPerPixel = bitsPerComponent * componentsPerPixel;
size_t bytesPerRow = (width * bitsPerPixel + 7)/8;
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGBitmapInfo bitmapInfo = kCGImageAlphaNone;
CGFloat decode[] = {0.0, 1.0};
size_t dataLength = bytesPerRow * height;
UInt32 *bitmap = malloc( dataLength );
memset( bitmap, 255, dataLength );
CGDataProviderRef dataProvider = CGDataProviderCreateWithData( NULL, bitmap, dataLength, NULL);
CGImageRef cgImage = CGImageCreate(width,
                                   height,
                                   bitsPerComponent,
                                   bitsPerPixel,
                                   bytesPerRow,
                                   colorspace,
                                   bitmapInfo,
                                   dataProvider,
                                   decode,
                                   false,
                                   kCGRenderingIntentDefault);

CGRect destRect = CGRectMake(0, 0, width, height);
CGContextDrawImage(context, destRect, cgImage);
If I change the decode array to CGFloat decode[] = {0.0, 0.0}; I always get a black image.
If you have tried that and it didn't have any effect (you say you get the same image whatever values you use), then either you aren't actually passing in the values you think you are, or you aren't actually examining the output of CGImageCreate.
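For reference, the decode array that inverts a 1-bit grayscale image simply maps the sample range the other way round; a minimal sketch, reusing the variables from the snippet above:

// {max, min} instead of {min, max}: a stored 0 decodes to white (1.0)
// and a stored 1 decodes to black (0.0), inverting the image.
CGFloat invertedDecode[] = { 1.0, 0.0 };

CGImageRef invertedImage = CGImageCreate(width, height,
                                         bitsPerComponent, bitsPerPixel, bytesPerRow,
                                         colorspace, bitmapInfo, dataProvider,
                                         invertedDecode, false,
                                         kCGRenderingIntentDefault);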
