Drawing RAW buffer to CGBitmapContext - cocoa

I have a raw image buffer in RGB format. I need to draw it into a CGContext so that I get a new buffer in ARGB format. Currently I accomplish this in the following way:
Create a data provider from the raw buffer using CGDataProviderCreateWithData, then create an image from the data provider with CGImageCreate.
Then I draw this image into the CGBitmapContext using CGContextDrawImage.
Instead of creating an intermediate image, is there any way to write the buffer directly to the CGContext so that I can avoid the image creation phase?
Thanks

If all you want is to take RGB data with no alpha component and turn it into ARGB data with full opacity (alpha = 1.0 at all points), why not just copy the data yourself into a new buffer?
// assuming 24-bit RGB (1 byte per color component)
unsigned char *rgb = /* ... */;
size_t rgb_bytes = /* ... */;
const size_t bpp_rgb = 3; // bytes per pixel - rgb
const size_t bpp_argb = 4; // bytes per pixel - argb
const size_t npixels = rgb_bytes / bpp_rgb;
unsigned char *argb = malloc(npixels * bpp_argb);
for (size_t i = 0; i < npixels; ++i) {
    const size_t argbi = bpp_argb * i;
    const size_t rgbi = bpp_rgb * i;
    argb[argbi] = 0xFF; // alpha - full opacity
    argb[argbi + 1] = rgb[rgbi]; // r
    argb[argbi + 2] = rgb[rgbi + 1]; // g
    argb[argbi + 3] = rgb[rgbi + 2]; // b
}

If you are using a CGBitmapContext then you can get a pointer to the bitmap buffer using the CGBitmapContextGetData() function. You can then write your data directly to the buffer.
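For example, a minimal sketch of that approach, reusing the rgb buffer from the answer above; ctx, w and h are illustrative names, and the context is assumed to be 8 bits per component ARGB:
// Write straight into the context's backing store; note that the
// context may pad each row, so honor its bytes-per-row value.
unsigned char *dst = CGBitmapContextGetData(ctx);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(ctx);
for (size_t row = 0; row < h; ++row) {
    unsigned char *out = dst + row * bytesPerRow;
    const unsigned char *in = rgb + row * w * 3;
    for (size_t col = 0; col < w; ++col) {
        out[4 * col] = 0xFF; // alpha - full opacity
        out[4 * col + 1] = in[3 * col]; // r
        out[4 * col + 2] = in[3 * col + 1]; // g
        out[4 * col + 3] = in[3 * col + 2]; // b
    }
}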

Related

How to flatten an image using OpenCV correctly for image processing and then convert it to Mat again?

I have an image, read using cv::imread. I have to flatten it so that I can use CUDA and the GPU to accelerate my image processing algorithms.
My problem: when I read my image, I can show it correctly using imshow. However, when I flatten it and convert it back to a Mat object to use with imshow, only part of my image is displayed. The size of the output image is also wrong, meaning that some data is really lost. What's the problem with my for loop?
// The problematic part of my code
// The Camera Man gray test image
const char* img_gray_name = "../../Test_Images/cameraman.tiff";
const char* img_blur_name = "../cameraman-blur.tiff";
const char* image_general_name = "cameraman_blur";
cv::Mat img = cv::imread(img_gray_name);
unsigned long int img_gray_size = img.rows * img.cols * sizeof(uchar);
uchar *h_img_in; // input image, converted to a flat array to be processed by the GPU
h_img_in = (uchar *)malloc(img_gray_size);
//*************** The bug should be here! ***************//
for (int i = 0; i < img.rows; ++i) {
    for (int j = 0; j < img.cols; ++j) {
        h_img_in[i*img.cols+j] = img.at<uchar>(i, j);
    }
}
Mat img_test;
img_test = Mat(cv::Size(img.cols, img.rows), CV_8U, h_img_in);
imwrite(img_blur_name, img_test);
// create image window named "camera man"
cv::namedWindow(image_general_name);
// show the image on window
cv::imshow(image_general_name, img_test);
P.S.: I also tested with a new 2D array instead of the 1D h_img_in; the result is the same. This means that something goes wrong with my usage of img.at<uchar>(i, j).
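One possible explanation, offered as a guess rather than a confirmed answer: cv::imread loads images as 3-channel BGR by default, even for grayscale files, so img_gray_size under-counts the real data and the single-channel img.at<uchar>(i, j) access walks the wrong memory layout. Forcing a single-channel read would make the flattening loop consistent:
cv::Mat img = cv::imread(img_gray_name, cv::IMREAD_GRAYSCALE);
// If img is continuous in memory, the loop can also be replaced by:
// memcpy(h_img_in, img.data, img.total() * img.elemSize());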

SaveDDSTextureToFile() saves a black texture instead of the expected one

I have created a red texture of DXGI format DXGI_FORMAT_R32_FLOAT. I have prepared a byte buffer of red pixels, 4 bytes per pixel. The byte buffer is copied into the texture using the device context's Map and Unmap functions, and after that I create a shader resource view. I get the resource back from the resource view and pass it to SaveDDSTextureToFile() to save the bitmap data in the DDS file format.
But when I save it to a DDS file to check, the result is a texture of the same size that is completely black. Where should I look to debug this?
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = static_cast<UINT>(renderTarget.width);
desc.Height = static_cast<UINT>(renderTarget.height);
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R32_FLOAT;
desc.Usage = D3D11_USAGE_DYNAMIC;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
...
SaveDDSTextureToFile(renderer->Context(), texture2D, L"D:\\RED.dds");
I created the red texture buffer as follows:
CImage m_cImage;
// create a test image
m_cImage.Create(w, -h, 8 * 4); // 8 bit * 4 channel => 32 bpp or 4 byte per pixel
auto hdc = m_cImage.GetDC();
Gdiplus::Graphics graphics(hdc);
// Create a SolidBrush object.
Gdiplus::SolidBrush redBrush(Gdiplus::Color::Red);
// Fill the rectangle.
Gdiplus::Status status = graphics.FillRectangle(&redBrush, 0, 0, w, h);
TRY_CONDITION(status == Gdiplus::Status::Ok);
....
// Then saved the m_cImage.GetBits() to bmp file using Gdiplus::Bitmap
// and my expected texture is found
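One observation, offered as a guess rather than a confirmed answer: DXGI_FORMAT_R32_FLOAT stores a single 32-bit float per pixel, while the GDI+ buffer above holds four 8-bit integer channels (BGRA), so those bytes reinterpreted as floats will not carry the intended red values and can easily come out black. Matching the texture format to the source data is worth trying:
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; // four 8-bit channels, matching CImage::GetBits()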

Can't isolate pixels from av_frame_copy_to_buffer

I'm trying to pull the YUV pixel data from an AVFrame, modify the pixels, and put it back into FFmpeg.
I'm currently using this to retrieve the YUV buffer
const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(base->format);
int baseSize = av_image_get_buffer_size(base->format, base->width, base->height, 32);
uint8_t *baseBuffer = (uint8_t*)malloc(baseSize);
av_image_copy_to_buffer(baseBuffer, baseSize, base->data, base->linesize, base->format, base->width, base->height, 32);
But I can't seem to correctly target pixels in that buffer. From the source code, the planes seem to be stacked on top of each other, which led me to attempt this:
int width = base->width;
int height = base->height;
int chroma2h = desc->log2_chroma_h;
int linesizeY = base->linesize[0];
int linesizeU = base->linesize[1];
int linesizeV = base->linesize[2];
int chromaHeight = (height + (1 << chroma2h) -1) >> chroma2h;
int x = 100;
int y = 100;
uint8_t *vY = baseBuffer;
uint8_t *vU = baseBuffer + (linesizeY * height);
uint8_t *vV = baseBuffer + (linesizeY * height) + (linesizeU * chromaHeight);
vY+= x + (y * linesizeY);
vU+= x + (y * linesizeU);
vV+= x + (y * linesizeV);
Using that, if I try to modify pixels in the range (300,300) to (400,400), I get a small box darker than the rest of the video, along with horizontal stripes of darkness across the video. The original color is still there, so I think I'm still touching the Y plane with all 3 pointers.
How can I actually hit the pixels I want to hit?
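One plausible fix, sketched under the assumption of 8-bit YUV 4:2:0 data with even width and height: with align = 32, av_image_copy_to_buffer() pads each destination row to a 32-byte multiple, so the packed buffer's row strides generally differ from base->linesize, and chroma coordinates must also be halved. Copying with align = 1 makes the layout predictable:
int size = av_image_get_buffer_size(base->format, base->width, base->height, 1);
uint8_t *buf = (uint8_t*)malloc(size);
av_image_copy_to_buffer(buf, size, base->data, base->linesize, base->format, base->width, base->height, 1);
uint8_t *pY = buf;
uint8_t *pU = pY + width * height;
uint8_t *pV = pU + (width / 2) * (height / 2);
// Pixel (x, y): note the halved coordinates and stride on the chroma planes
pY[y * width + x] = 255;
pU[(y / 2) * (width / 2) + x / 2] = 128;
pV[(y / 2) * (width / 2) + x / 2] = 128;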

iOS 8 CGContextRef unsupported parameter combination

Anyone know how to update this code for iOS 8? I am getting this error message:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaPremultipliedFirst; 4294967289 bytes/row.
CGContextRef CreateBitmapContenxtFromSizeWithData(CGSize s, void* data)
{
    int w = s.width, h = s.height;
    int bitsPerComponent = 8;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int components = 4;
    int bytesPerRow = (w * bitsPerComponent * components + 7)/8;
    CGContextRef result = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return result;
}
Bytes per row is calculated incorrectly in the above snippet.
To calculate the bytes per row, you can just take the width of your image and multiply it by the number of bytes per pixel, which seems to be four in your case.
int bytesPerRow = w * 4;
Be careful though: if data points to image data that is stored as RGB, you have three bytes per pixel. You would also need to pass the kCGImageAlphaNoneSkipFirst flag as the last parameter to CGBitmapContextCreate so the alpha channel is omitted.
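Putting the answer together, a corrected sketch of the original function, assuming data really holds 4 bytes per pixel (ARGB):
CGContextRef CreateBitmapContenxtFromSizeWithData(CGSize s, void* data)
{
    size_t w = (size_t)s.width, h = (size_t)s.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    size_t bytesPerRow = w * 4; // 4 bytes per pixel, no row padding
    CGContextRef result = CGBitmapContextCreate(data, w, h, 8, bytesPerRow, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return result;
}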

FFMPEG: Dumping YUV data into AVFrame structure

I'm trying to dump YUV420 data into FFmpeg's AVFrame structure. From the link below:
http://ffmpeg.org/doxygen/trunk/structAVFrame.html, I can see that I need to put my data into
data[AV_NUM_DATA_POINTERS]
using
linesize[AV_NUM_DATA_POINTERS].
The YUV data I'm trying to dump is YUV420 and the picture size is 416x240. So how do I dump/map this YUV data to the AVFrame structure's variables? I know that linesize represents the stride, i.e. I suppose the width of my picture. I have tried some combinations but do not get the output. I kindly request you to help me map the buffer. Thanks in advance.
An AVFrame can be interpreted as an AVPicture to fill the data and linesize fields. The easiest way to fill these fields is to use the avpicture_fill function.
How to fill in the AVFrame's Y, U and V buffers depends on your input data and what you want to do with the frame (do you want to write into the AVFrame and erase the initial data, or keep a copy?).
If the buffer is large enough (at least linesize[0] * height for Y data, linesize[1 or 2] * height/2 for U/V data), you can use the input buffers directly:
// Initialize the AVFrame
AVFrame* frame = avcodec_alloc_frame();
frame->width = width;
frame->height = height;
frame->format = AV_PIX_FMT_YUV420P;
// Initialize frame->linesize
avpicture_fill((AVPicture*)frame, NULL, frame->format, frame->width, frame->height);
// Set frame->data pointers manually
frame->data[0] = inputBufferY;
frame->data[1] = inputBufferU;
frame->data[2] = inputBufferV;
// Or if your Y, U, V buffers are contiguous and have the correct size, simply use:
// avpicture_fill((AVPicture*)frame, inputBufferYUV, frame->format, frame->width, frame->height);
If you want or need to manipulate a copy of the input data, you need to compute the required buffer size and copy the input data into it.
// Initialize the AVFrame
AVFrame* frame = avcodec_alloc_frame();
frame->width = width;
frame->height = height;
frame->format = AV_PIX_FMT_YUV420P;
// Allocate a buffer large enough for all data
int size = avpicture_get_size(frame->format, frame->width, frame->height);
uint8_t* buffer = (uint8_t*)av_malloc(size);
// Initialize frame->linesize and frame->data pointers
avpicture_fill((AVPicture*)frame, buffer, frame->format, frame->width, frame->height);
// Copy data from the 3 input buffers
memcpy(frame->data[0], inputBufferY, frame->linesize[0] * frame->height);
memcpy(frame->data[1], inputBufferU, frame->linesize[1] * frame->height / 2);
memcpy(frame->data[2], inputBufferV, frame->linesize[2] * frame->height / 2);
Once you are done with the AVFrame, do not forget to free it with av_frame_free, and to free any buffer allocated with av_malloc using av_free.
If you need the size of each plane (the chroma planes are vertically subsampled), a helper along these lines works:
static int ff_get_format_plane_size(enum AVPixelFormat fmt, int plane, int scanLine, int height)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
    if (desc)
    {
        int h = height;
        if (plane == 1 || plane == 2)
        {
            h = FF_CEIL_RSHIFT(height, desc->log2_chroma_h);
        }
        return h * scanLine;
    }
    else
        return AVERROR(EINVAL);
}
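Illustrative usage, reusing the YUV420P frame from the snippets above:
int ySize = ff_get_format_plane_size(AV_PIX_FMT_YUV420P, 0, frame->linesize[0], frame->height);
int uSize = ff_get_format_plane_size(AV_PIX_FMT_YUV420P, 1, frame->linesize[1], frame->height);
int vSize = ff_get_format_plane_size(AV_PIX_FMT_YUV420P, 2, frame->linesize[2], frame->height);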
