CVPixelBuffer created from CGImage has different attributes

EDIT: I worked around this issue, but to rephrase my question: is it possible to create the CVPixelBuffer to match the CGImage? For example, I would like to have 16 bits per component instead of 8.
As the title says. For example, when I NSLog my buffer it has 4544 bytes per row, but when I NSLog the actual CGImage it has 9000. These are the logs:
buffer: <CVPixelBuffer 0x282b8b020 width=1125 height=2436 bytesPerRow=4544 pixelFormat=BGRA iosurface=0x0 attributes={ PixelFormatDescription = {
BitsPerBlock = 32;
BitsPerComponent = 8;
BlackBlock = {length = 4, bytes = 0x000000ff};
CGBitmapContextCompatibility = 1;
CGBitmapInfo = 8196;
CGImageCompatibility = 1;
ComponentRange = FullRange;
ContainsAlpha = 1;
ContainsGrayscale = 0;
ContainsRGB = 1;
ContainsYCbCr = 0;
FillExtendedPixelsCallback = {length = 24, bytes = 0x00000000000000008c3c1da7010000000000000000000000};
IOSurfaceCoreAnimationCompatibility = 1;
IOSurfaceOpenGLESFBOCompatibility = 1;
IOSurfaceOpenGLESTextureCompatibility = 1;
OpenGLESCompatibility = 1;
PixelFormat = 1111970369;
};
} propagatedAttachments={
} nonPropagatedAttachments={
}>
And the image
image: <CGImage 0x1044625d0> (IP) <<CGColorSpace 0x283e9ea60> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)>
width = 1125, height = 2436, bpc = 16, bpp = 64, row bytes = 9000
kCGImageAlphaNoneSkipLast | kCGImageByteOrder16Little | kCGImagePixelFormatPacked
is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
This is the code I use
CGImageRef cgImageRef = [image CGImage];
CGImageRetain(cgImageRef);
CVPixelBufferRef pixelBuffer = NULL;
size_t w = CGImageGetWidth(cgImageRef);
size_t h = CGImageGetHeight(cgImageRef);
CGDataProviderRef x = CGImageGetDataProvider(cgImageRef);
CGDataProviderRetain(x);
CFDataRef da = CGDataProviderCopyData(x);
CFIndex len = CFDataGetLength(da);
const uint8_t* src = CFDataGetBytePtr(da);
CVPixelBufferCreate(kCFAllocatorDefault,
w,
h,
kCVPixelFormatType_32BGRA,
nil,
&pixelBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
void* base = CVPixelBufferGetBaseAddress(pixelBuffer);
SNLog(#"buffer: %#", pixelBuffer);
SNLog(#"image: %#", cgImageRef);
I am later trying to go through the CGImage data and manually copy the bytes into the destination pixel buffer, but it crashes, and I assume it is because at some point I am trying to read from "after" the actual image.
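For what it's worth, the two logs above already hint at why a straight copy overruns: the CGImage rows are 1125 px × 8 bytes (16-bit RGBX) = 9000 bytes, while the pixel buffer rows are 1125 px × 4 bytes = 4500 bytes, padded by Core Video to 4544. Any copy therefore has to go row by row and convert pixel by pixel, honouring both strides. The following is only a sketch of that idea, continuing from the snippet above and assuming the formats shown in the logs (16-bit little-endian RGBX source, 32BGRA destination); it is not the original poster's code and it ignores the Display P3 colour space:

size_t srcBytesPerRow = CGImageGetBytesPerRow(cgImageRef);        // 9000 here: 1125 px * 8 bytes
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // 4544 here: 1125 px * 4 bytes plus padding

for (size_t y = 0; y < h; y++) {
    // source row: R, G, B, X as 16-bit little-endian values; destination row: 8-bit B, G, R, A
    const uint16_t *srcRow = (const uint16_t *)(src + y * srcBytesPerRow);
    uint8_t *dstRow = (uint8_t *)base + y * dstBytesPerRow;
    for (size_t x = 0; x < w; x++) {
        uint16_t r = srcRow[x * 4 + 0];
        uint16_t g = srcRow[x * 4 + 1];
        uint16_t b = srcRow[x * 4 + 2];
        // keep only the high byte of each 16-bit component
        dstRow[x * 4 + 0] = (uint8_t)(b >> 8);
        dstRow[x * 4 + 1] = (uint8_t)(g >> 8);
        dstRow[x * 4 + 2] = (uint8_t)(r >> 8);
        dstRow[x * 4 + 3] = 0xFF;
    }
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CFRelease(da);

The key point is to use CVPixelBufferGetBytesPerRow for the destination stride rather than width * 4, because Core Video pads rows for alignment (hence 4544 instead of 4500).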

I guess my initial question was not exactly clear, or perhaps I should have rephrased it slightly. The problem is more like: I would like to create my buffer to match the image, with the same number of bytes/bits per component.
I worked around this by looping through the buffer and picking the individual bytes I want from the image, basically ignoring some bits of each pixel.
In the end the image still looks fine; I suspect the increased number of bits is used for better quality on more capable screens, although I am not sure.
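As for the edited question, whether the buffer can be created to match the image: CVPixelBuffer.h does declare wider formats, for example kCVPixelFormatType_64ARGB (16 bits per component, big-endian samples), but support varies by device and by whatever consumes the buffer, and you would still have to reorder channels and swap endianness coming from the CGImage's little-endian RGBX layout. A minimal sketch, assuming the consumer accepts that format:

CVPixelBufferRef wideBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      w,
                                      h,
                                      kCVPixelFormatType_64ARGB, // 'b64a': 16-bit big-endian ARGB samples
                                      NULL,
                                      &wideBuffer);
if (status != kCVReturnSuccess || wideBuffer == NULL) {
    // the format is not supported here; fall back to kCVPixelFormatType_32BGRA
}

Even then, CVPixelBufferGetBytesPerRow(wideBuffer) may be larger than width * 8, so the copy still has to go row by row as in the sketch above.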

Related

iOS 8 CGContextRef unsupported parameter combination

Anyone know how to update this code for iOS 8? I am getting this error message:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaPremultipliedFirst; 4294967289 bytes/row.
CGContextRef CreateBitmapContenxtFromSizeWithData(CGSize s, void* data)
{
int w = s.width, h = s.height;
int bitsPerComponent = 8;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int components = 4;
int bytesPerRow = (w * bitsPerComponent * components + 7)/8;
CGContextRef result = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
return result;
}
Bytes per row is calculated incorrectly in the above snippet.
To calculate the bytes per row, you can just take the width of your image and multiply it by the number of bytes per pixel, which is four in your case.
int bytesPerRow = w * 4;
Be careful though: if data points to image data that is stored as RGB, you have three bytes per pixel. You will also need to pass kCGImageAlphaNoneSkipFirst (CGImageAlphaInfo.NoneSkipFirst in Swift) as the last parameter to CGBitmapContextCreate so that the alpha channel is ignored.
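Putting the answer together, a corrected version of the helper might look like the sketch below. It keeps the question's function name and assumes data holds four bytes per pixel; whether kCGImageAlphaNoneSkipFirst or a premultiplied mode is right depends on what data actually contains:

CGContextRef CreateBitmapContenxtFromSizeWithData(CGSize s, void* data)
{
    size_t w = (size_t)s.width, h = (size_t)s.height;
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = 4;                  // 8 bits per component, 4 components
    size_t bytesPerRow = w * bytesPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef result = CGBitmapContextCreate(data, w, h,
                                                bitsPerComponent, bytesPerRow,
                                                colorSpace,
                                                (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);
    return result;
}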

How to create bitmap from Surface (SharpDX)

I am new to DirectX and trying to use SharpDX to capture a screenshot using the Desktop Duplication API.
I am wondering if there is any easy way to create a bitmap that I can use on the CPU (i.e. save to a file, etc.).
I am using the following code to get the desktop screenshot:
var factory = new SharpDX.DXGI.Factory1();
var adapter = factory.Adapters1[0];
var output = adapter.Outputs[0];
var device = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware,
DeviceCreationFlags.BgraSupport |
DeviceCreationFlags.Debug);
var dev1 = device.QueryInterface<SharpDX.DXGI.Device1>();
var output1 = output.QueryInterface<Output1>();
var duplication = output1.DuplicateOutput(dev1);
OutputDuplicateFrameInformation frameInfo;
SharpDX.DXGI.Resource desktopResource;
duplication.AcquireNextFrame(50, out frameInfo, out desktopResource);
var desktopSurface = desktopResource.QueryInterface<Surface>();
Can anyone please give me some idea of how I can create a bitmap object from the desktopSurface (a DXGI.Surface instance)?
I've just completed this myself although I am not going to say much about this code!
public byte[] GetScreenData()
{
    // We want to copy the texture from the back buffer so
    // we don't hog it.
    Texture2DDescription desc = BackBuffer.Description;
    desc.CpuAccessFlags = CpuAccessFlags.Read;
    desc.Usage = ResourceUsage.Staging;
    desc.OptionFlags = ResourceOptionFlags.None;
    desc.BindFlags = BindFlags.None;
    byte[] data = null;
    using (var texture = new Texture2D(DeviceDirect3D, desc))
    {
        DeviceContextDirect3D.CopyResource(BackBuffer, texture);
        using (Surface surface = texture.QueryInterface<Surface>())
        {
            DataStream dataStream;
            var map = surface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);
            int lines = (int)(dataStream.Length / map.Pitch);
            data = new byte[surface.Description.Width * surface.Description.Height * 4];
            int dataCounter = 0;
            // width of the surface - 4 bytes per pixel.
            int actualWidth = surface.Description.Width * 4;
            for (int y = 0; y < lines; y++)
            {
                for (int x = 0; x < map.Pitch; x++)
                {
                    if (x < actualWidth)
                    {
                        data[dataCounter++] = dataStream.Read<byte>();
                    }
                    else
                    {
                        dataStream.Read<byte>();
                    }
                }
            }
            dataStream.Dispose();
            surface.Unmap();
        }
    }
    return data;
}
This will get you a byte[] which can then be used to generate a bitmap.
The following is how I saved it to a PNG image:
using (var stream = await file.OpenAsync( Windows.Storage.FileAccessMode.ReadWrite ))
{
BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
double dpi = DisplayProperties.LogicalDpi;
encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Straight,
(uint)width, (uint)height, dpi, dpi, pixelData);
encoder.BitmapTransform.ScaledWidth = (uint)newWidth;
encoder.BitmapTransform.ScaledHeight = (uint)newHeight;
await encoder.FlushAsync();
waiter.Set();
}
I know this was answered a while ago, and maybe you figured it out by now :3 but if someone else gets stuck I hope this helps!
The MSDN page for the Desktop Duplication API tells us the format of the image:
DXGI provides a surface that contains a current desktop image through the new IDXGIOutputDuplication::AcquireNextFrame method. The format of the desktop image is always DXGI_FORMAT_B8G8R8A8_UNORM no matter what the current display mode is.
You can use the Surface.Map(MapFlags, out DataStream) method to get access to the data on the CPU.
The code should look like* this:
DataStream dataStream;
desktopSurface.Map(MapFlags.Read, out dataStream);
for (int y = 0; y < desktopSurface.Description.Height; y++) {
    for (int x = 0; x < desktopSurface.Description.Width; x++) {
        // read a DXGI_FORMAT_B8G8R8A8_UNORM pixel:
        byte b = dataStream.Read<byte>();
        byte g = dataStream.Read<byte>();
        byte r = dataStream.Read<byte>();
        byte a = dataStream.Read<byte>();
        // color (r, g, b, a) and pixel position (x, y) are available
        // TODO: write to bitmap or process otherwise
    }
    // note: rows may be padded to the pitch returned by Map; if so, skip the
    // remaining bytes of each row here (the first answer above does this).
}
desktopSurface.Unmap();
*Disclaimer: I don't have a Windows 8 installation at hand, I'm only following the documentation. I hope this works :)

Taking snapshot of contents in CGL?

I want to create an image out of a Core OpenGL context.
I used the following code but it creates a black image, so I guess I cannot use glReadPixels there? Any other suggestions, please?
int myDataLength = 480 * 480 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < 480; y++)
{
for(int x = 0; x < 320 * 4; x++)
{
buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef image= CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);
//PRINT image... Its black!!!!!!
CGDataProviderRelease(provider);
free(buffer);
free(buffer2);
Before you do a glReadPixels call you must:
- set proper packing (see the glPixelStorei reference page), and
- select the right buffer to read from with glReadBuffer (front after swapping, back before swapping; I recommend swapping and reading from the front).
Both calls are shown in the sketch below.
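This is only a sketch of those two calls placed immediately before the read, assuming a double-buffered context and the 320 × 480 read from the question:

glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly pack rows so each row really is 320 * 4 bytes
glReadBuffer(GL_FRONT);                // after swapping, the finished frame is in the front buffer
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);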

Help please with CGBitmapContext and 16 bit images

I'd LOVE to know what I'm doing wrong here. I'm a bit of a newbie with CGImageRefs so any advice would help.
I'm trying to create a bitmap image that has as its pixel values a weighted sum of the pixels from another bitmap, and both bitmaps are 16 bits per channel. For some reason I had no trouble getting this to work with 8-bit images, but it fails miserably with 16-bit. My guess is that I'm just not setting things up correctly. I've tried using CGFloats, floats and UInt16s as the data types but nothing has worked. The input image has no alpha channel. The output image I get looks like colored snow.
relevant stuff from the header:
UInt16 *inBaseAddress;
UInt16 *outBaseAddress;
CGFloat inAlpha[5];
CGFloat inRed[5];
CGFloat inGreen[5];
CGFloat inBlue[5];
CGFloat alphaSum, redSum, greenSum, blueSum;
int shifts[5];
CGFloat weight[5];
CGFloat weightSum;
I create the context for the input bitmap (a CGImageRef created with CGImageSourceCreateImageAtIndex(source, 0, NULL)) using:
size_t width = CGImageGetWidth(inBitmap);
size_t height = CGImageGetHeight(inBitmap);
size_t bitmapBitsPerComponent = CGImageGetBitsPerComponent(inBitmap);
size_t bitmapBytesPerRow = (width * 4 * bitmapBitsPerComponent / 8);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(inImage);
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast;
CGContextRef inContext = CGBitmapContextCreate (NULL,width,height,bitmapBitsPerComponent,bitmapBytesPerRow,colorSpace,bitmapInfo);
The context for the output bitmap is created in the same way. I draw the inBitmap into the inContext using:
CGRect rect = {{0,0},{width,height}};
CGContextDrawImage(inContext, rect, inBitmap);
Then I initialize the inBaseAddress and outBaseAddress like so:
inBaseAddress = CGBitmapContextGetData(inContext);
outBaseAddress = CGBitmapContextGetData(outContext);
Then I fill the outBaseAddress with values from the inBaseAddress:
for (n = 0; n < 5; n++)
{
inRed[n] = inBaseAddress[inSpot + 0 + shifts[n]];
inGreen[n] = inBaseAddress[inSpot + 1 + shifts[n]];
inBlue[n] = inBaseAddress[inSpot + 2 + shifts[n]];
inAlpha[n] = inBaseAddress[inSpot + 3 + shifts[n]];
}
alphaSum = 0.0;
redSum = 0.0;
greenSum = 0.0;
blueSum = 0.0;
for (n = 0; n < 5; n++)
{
redSum += inRed[n] * weight[n];
greenSum += inGreen[n] * weight[n];
blueSum += inBlue[n] * weight[n];
alphaSum += inAlpha[n] * weight[n];
}
outBaseAddress[outSpot + 0] = (UInt16)roundf(redSum);
outBaseAddress[outSpot + 1] = (UInt16)roundf(greenSum);
outBaseAddress[outSpot + 2] = (UInt16)roundf(blueSum);
outBaseAddress[outSpot + 3] = (UInt16)roundf(alphaSum);
As a simple check I've tried:
outBaseAddress[outSpot + 0] = inBaseAddress[inSpot + 0];
outBaseAddress[outSpot + 1] = inBaseAddress[inSpot + 1];
outBaseAddress[outSpot + 2] = inBaseAddress[inSpot + 2];
outBaseAddress[outSpot + 3] = inBaseAddress[inSpot + 3];
which works and at least means that the contexts and pointers to the bitmap data are working.
Thanks for any input. This has been pretty frustrating since it worked just fine with 8bit images.
OK, I've got it figured out. I needed to set the bitmapInfo to kCGBitmapByteOrder16Little for the 16-bit images and to kCGBitmapByteOrder32Little for the 8-bit images. I'm a bit surprised by this actually, as I would have expected it to be the other way around (32Little for 16-bit and 16Little for 8-bit).
I also needed to declare the pointers to the bitmaps as UInt8* and UInt16* respectively. It also appears that I have to include an alpha channel in the bitmap context. I'm not sure why, but the context returned was always nil without it.
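For illustration, the setup that answer describes would look roughly like the sketch below. It reuses the names from the question; whether a particular alpha/byte-order combination is accepted varies by OS version, so treat it as a starting point rather than the poster's exact code:

size_t bitmapBitsPerComponent = 16;     // be explicit rather than copying it from the source image
size_t bitmapBytesPerRow = width * 4 * (bitmapBitsPerComponent / 8);
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder16Little | kCGImageAlphaPremultipliedLast; // alpha included, 16-bit little-endian
CGContextRef inContext = CGBitmapContextCreate(NULL, width, height,
                                               bitmapBitsPerComponent, bitmapBytesPerRow,
                                               colorSpace, bitmapInfo);
UInt16 *inBaseAddress = (UInt16 *)CGBitmapContextGetData(inContext);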
It sounds like a byte-ordering problem.
Have you checked that CGImageGetBitsPerComponent is returning 16? As a matter of style, if you're assuming you're creating a bitmap context with 16 bits per component (since you treat the data as UInt16*), you should explicitly set size_t bitmapBitsPerComponent = 16.
What is your shifts array for? It seems like the most likely place for error, since it's affecting the address you're reading from, but you don't explain it at all. Are the values in shifts multiples of 16?

Buffer write becomes slow after CGBitmapContextCreate

I have code something like this...
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixelArray, width, height, 8, 4 * width, colorSpace, kCGImageAlphaNoneSkipLast);
CGImageRef createdImage = CGBitmapContextCreateImage (ctx);
uiImage = [[UIImage imageWithCGImage:createdImage] retain];
The problem is that, once I create a CGImage and UIImage from the buffer (pixelArray), any write operation into the buffer becomes at least 4x slower. This happens only on the iPad device, not on the iPhone. Has anyone faced the same problem? What is going on here?
Here is the write operation code, and I call these in loops (setPixel)...
- (RGBA*) getPixel:(NSInteger)x y:(NSInteger)y {
// Bound the co-ordinates.
x = MIN(MAX(x, 0), width - 1);
y = MIN(MAX(y, 0), height - 1);
// yIndexes are pre populated
return (RGBA*)(&pixelArray[(x + yIndexes[y]) << 2]);
}
- (void) setPixel:(RGBA*)color x:(NSInteger)x y:(NSInteger)y {
// Bound the co-ordinates.
x = MIN(MAX(x, 0), _width);
y = MIN(MAX(y, 0), _height);
memcpy([self getPixel:x y:y], color, 3);
colorDirtyBit = YES;
}
I am not sure what is going wrong, but I believe it might be your write-operation code that differs in speed. Could you try a raw write operation without using those functions instead? e.g.
for(int i = 0; i < bufferlen; i++) {
pixelArray[i] = i; // or any arbitrary value
}
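If it helps to confirm the 4x difference, a rough way to compare is to time the same raw loop before and after creating the CGImage/UIImage. This is only a sketch using CFAbsoluteTimeGetCurrent and is not part of the original answer:

CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
for (int i = 0; i < bufferlen; i++) {
    pixelArray[i] = i;   // same raw write as above
}
NSLog(@"raw write took %.3f ms", (CFAbsoluteTimeGetCurrent() - start) * 1000.0);

// ...create the CGImage and UIImage as in the question, then run the identical
// loop again and compare the two timings.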
