Convert 12-bit Bayer image to 8-bit RGB using OpenCV

I am trying to use OpenCV 2.3.1 to convert a 12-bit Bayer image to an 8-bit RGB image. This seems like it should be fairly straightforward using the cvCvtColor function, but the function throws an exception when I call it with this code:
int cvType = CV_MAKETYPE(CV_16U, 1);
cv::Mat bayerSource(height, width, cvType, sourceBuffer);
cv::Mat rgbDest(height, width, CV_8UC3);
cvCvtColor(&bayerSource, &rgbDest, CV_BayerBG2RGB);
I thought that I was running past the end of sourceBuffer, since the input data is 12-bit, and I had to pass in a 16-bit type because OpenCV doesn't have a 12-bit type. So I divided the width and height by 2, but cvCvtColor still threw an exception that didn't have any helpful information in it (the error message was "Unknown exception").
There was a similar question posted a few months ago that was never answered, but since my question deals more specifically with 12-bit Bayer data, I thought it was sufficiently distinct to merit a new question.
Thanks in advance.
Edit: I must be missing something, because I can't even get the cvCvtColor function to work on 8-bit data:
cv::Mat srcMat(100, 100, CV_8UC3);
const cv::Scalar val(255,0,0);
srcMat.setTo(val);
cv::Mat destMat(100, 100, CV_8UC3);
cvCvtColor(&srcMat, &destMat, CV_RGB2BGR);

I was able to convert my data to 8-bit RGB using the following code:
// Copy the data into an OpenCV Mat structure
cv::Mat bayer16BitMat(height, width, CV_16UC1, inputBuffer);
// Convert the Bayer data from 16-bit to 8-bit
cv::Mat bayer8BitMat = bayer16BitMat.clone();
// The 3rd parameter here scales the data by 1/16 so that it fits in 8 bits.
// Without it, convertTo() just seems to chop off the high order bits.
bayer8BitMat.convertTo(bayer8BitMat, CV_8UC1, 0.0625);
// Convert the Bayer data to 8-bit RGB
cv::Mat rgb8BitMat(height, width, CV_8UC3);
cv::cvtColor(bayer8BitMat, rgb8BitMat, CV_BayerGR2RGB);
I had mistakenly assumed that the 12-bit data I was getting from the camera was tightly packed, so that two 12-bit values were contained in 3 bytes. It turns out that each value was contained in 2 bytes, so I didn't have to do any unpacking to get my data into a 16-bit array that is supported by OpenCV.
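The 0.0625 factor is easier to see as a bit shift. A minimal Python sketch (an illustration of the scale factor, not the OpenCV call itself) of what multiplying by 1/16 does to each 12-bit sample:

```python
def scale_12bit_to_8bit(samples):
    """Scale 12-bit samples (stored in 16-bit words) down to 8 bits.

    Multiplying by 0.0625 (= 1/16) keeps the 8 high-order bits of each
    12-bit value, which is the same as shifting right by 4.
    """
    return [s >> 4 for s in samples]
```

With this scaling the full 12-bit range 0-4095 maps onto 0-255; plain truncation to 8 bits would instead wrap values above 255, which is the "chop off the high order bits" effect mentioned in the comment above.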
Edit: See petr's improved answer below, which converts to RGB before converting to 8 bits to avoid losing color information during the conversion.

Gillfish's answer technically works, but during the conversion it uses a smaller data type (CV_8UC1) than the input (CV_16UC1) and loses some color information.
I would suggest first decoding the Bayer encoding while staying at 16 bits per channel (from CV_16UC1 to CV_16UC3), and only then converting to CV_8UC3.
Gillfish's code, modified (assuming the camera delivers the image in 16-bit Bayer encoding):
// Copy the data into an OpenCV Mat structure
cv::Mat mat16uc1_bayer(height, width, CV_16UC1, inputBuffer);
// Decode the Bayer data to RGB but keep using 16 bits per channel
cv::Mat mat16uc3_rgb(height, width, CV_16UC3);
cv::cvtColor(mat16uc1_bayer, mat16uc3_rgb, cv::COLOR_BayerGR2RGB);
// Convert the 16-bit per channel RGB image to 8-bit per channel
cv::Mat mat8uc3_rgb(height, width, CV_8UC3);
mat16uc3_rgb.convertTo(mat8uc3_rgb, CV_8UC3, 1.0 / 256); // this could perhaps be done more efficiently by cropping bits
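The "cropping bits" idea in that comment can be sketched in plain Python: dividing a 16-bit channel value by 256 is the same as dropping its low byte. (This illustrates the scale factor only, not cvtColor or convertTo themselves.)

```python
def to_8bit(channels_16bit):
    """Convert 16-bit channel values to 8-bit by dropping the low byte,
    equivalent to the 1.0/256 factor passed to convertTo."""
    return [v >> 8 for v in channels_16bit]
```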

For anyone struggling with this: the above solution only works if your image actually comes in 16-bit. Otherwise, as already suggested in the comments, you should chop off the 4 least significant bits. I achieved that with the code below. It's not very clean, but it works.
unsigned short *image_12bit = (unsigned short *)data;
std::vector<unsigned char> out(rows * cols);
for (int i = 0; i < rows * cols; i++) {
    // Scale each 12-bit value [0, 4095] down to 8 bits [0, 255]
    out[i] = (unsigned char)((255.0 * image_12bit[i]) / (1 << 12));
}
cv::Mat bayer_image(rows, cols, CV_8UC1, (void *)out.data());
cv::cvtColor(bayer_image, *res, cv::COLOR_BayerGR2BGR);

Get RGBA values from decoded PNG image

The task at hand is to read a PNG image as bytes, get the RGBA values of each pixel, apply modifications and set new pixels in the image. So far this is the code I have:
import 'package:image/image.dart' as img;
final byteData = await rootBundle.load('assets/images/image.png');
img.Image image = img.Image.fromBytes(1536, 2048, byteData.buffer.asInt8List(), format: img.Format.rgba);
print(image.getPixelSafe(0, 0));
Applying it to this image, I get the integer value 1196314761. How do I convert this integer into RGBA or hex format?
If this is the wrong way to get a specific pixel, how can I do so?
First, you'll need to properly decode the image, which you are not currently doing. Use the Decoder class instead of trying to directly pass your encoded data to the Image constructor.
img.Image image = PngDecoder().decodeImage(byteData.buffer.asInt8List());
The package you're using returns the color as a Uint32 in #AABBGGRR order.
To get the colors you want individually, just extract each of those bytes from the Uint32. The following code does a bitwise & to obtain only the least significant byte. Bit shifts are done to get higher bytes.
int input = image.getPixelSafe(0, 0);
int red = input & 0xff;
int green = input >> 8 & 0xff;
int blue = input >> 8 * 2 & 0xff;
int alpha = input >> 8 * 3 & 0xff;
To get this as a hex string you can do red.toRadixString(16).
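As a sanity check, the extraction above can be mirrored in a small Python sketch (Dart's bitwise operators behave the same way). Notably, the value 1196314761 from the question is 0x474E5089, whose bytes are 0x89 0x50 0x4E 0x47, literally "\x89PNG", the start of the PNG file signature, which confirms the buffer was still encoded rather than decoded pixel data.

```python
def unpack_rgba(pixel):
    """Split a Uint32 pixel stored as #AABBGGRR into (r, g, b, a) bytes."""
    red = pixel & 0xFF
    green = (pixel >> 8) & 0xFF
    blue = (pixel >> 16) & 0xFF
    alpha = (pixel >> 24) & 0xFF
    return red, green, blue, alpha
```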

Monochrome image getting displayed as colored RGB image

The bitmap is constructed from pixel data (purely pixel data). The construction is done by properly setting the bitmap parameters such as height, width, bit count, etc. The bitmap is actually created with CreateDIBSection, and it is loaded onto a CStatic object that has Bitmap as a property.
The image is displayed with the proper width and content. The only difference is that the content is colored instead of grayscale. For example, for an image of a white H letter on a black background, a blue-colored H is displayed instead of a whitish one. Similar color changes apply to different images. Also, junk colored data sometimes appears, deviating from the original content of the image beyond just the color change.
Bitmap is a 16 bit bitmap.
Please see below for code used for creating BitMap.
HDC is the device context of the CStatic variable into which the created bitmap is loaded. I directly set the bitmap returned by the function below on this variable using SetBitmap. The CStatic variable also has Bitmap as one of its properties. See below for the function used to create the bitmap.
Function parameter definitions.
PixMapHeight = number of rows in pixel matrix.
PixMapWidth = number of columns in pixel matrix.
BitsPerPixel = The bits stored for one pixel.
pPixMapBits = Void pointer to pixel array.(raw pixel data only! 16 bit per pixel).
HBITMAP DoBitmapFromPixels(HDC Hdc, UINT PixMapWidth, UINT PixMapHeight, UINT BitsPerPixel, LPVOID pPixMapBits)
{
BITMAPINFO *bmpInfo = (BITMAPINFO *)malloc(sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 256);
memset(bmpInfo, 0, sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 256);
BITMAPINFOHEADER &bmpInfoHeader(bmpInfo->bmiHeader);
bmpInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
LONG lBmpSize = PixMapWidth * PixMapHeight * (BitsPerPixel / 8);
bmpInfoHeader.biWidth = PixMapWidth;
bmpInfoHeader.biHeight = -(static_cast<int>(PixMapHeight));
bmpInfoHeader.biPlanes = 1;
bmpInfoHeader.biBitCount = BitsPerPixel;
bmpInfoHeader.biCompression = BI_RGB;
bmpInfoHeader.biSizeImage = 0;
bmpInfoHeader.biClrUsed = 0;
bmpInfoHeader.biClrImportant = 0;
void *pPixelPtr = NULL;
HBITMAP hBitMap = CreateDIBSection(Hdc, bmpInfo, DIB_RGB_COLORS, &pPixelPtr, NULL, 0);
if (pPixMapBits != NULL)
{
BYTE *pbBits = (BYTE *)pPixMapBits;
BYTE *Pix = (BYTE *)pPixelPtr;
memcpy(Pix, pbBits + (lBmpSize * (CurrentFrame - 1)), lBmpSize);
}
free(bmpInfo);
return hBitMap;
}
The intended output is the figure on the left of the attached file, but I am getting a blue-toned image as on the right (never mind the scaling and exact-match issues; the image is only there to depict the problem).
Also, it would be very helpful to know how RGB values are stored in 16 bits!
You never actually said what format pPixMapBits is in, but I'm guessing that it contains 16-bit values where 0 represents black, 32768 represents gray, and 65535 represents white.
You are creating a BITMAPINFOHEADER with biBitCount = 16 and biCompression = BI_RGB. According to the documentation, if you set the fields that way, then:
Each WORD in the bitmap array represents a single pixel. The relative intensities of red, green, and blue are represented with five bits for each color component. The value for blue is in the least significant five bits, followed by five bits each for green and red. The most significant bit is not used.
This is not the same format as your source data, and you are doing no conversion, so you get junk. Note that the bitmap format you chose is capable of representing only 2^5 = 32 shades of gray, not 65536, so you will suffer loss of quality during the conversion.
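To make the mismatch concrete, here is a rough Python sketch of the conversion the quoted format requires: keep only the top 5 bits of each 16-bit grayscale sample and replicate them into the three 5-bit channels of a BI_RGB 16-bit WORD. (This illustrates the pixel layout only; it is not Win32 code.)

```python
def gray16_to_rgb555(sample):
    """Pack a 16-bit grayscale sample into an X1R5G5B5 WORD.

    Only the 5 most significant bits survive, so at most 32 gray levels.
    """
    v = sample >> 11                 # 16-bit sample -> 5-bit component
    return (v << 10) | (v << 5) | v  # R in bits 10-14, G in 5-9, B in 0-4
```

Feeding raw 16-bit grayscale WORDs to the display without this conversion is exactly what produces the tinted/junk output: the display interprets different bit ranges of each sample as different color channels.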

Storing floats in a texture in OpenGL ES

In WebGL, I am trying to create a texture with texels each consisting of 4 float values. Here I attempt to create a simple texture with one vec4 in it.
var textureData = new Float32Array(4);
var texture = gl.createTexture();
gl.activeTexture( gl.TEXTURE0 );
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(
// target, level, internal format, width, height
gl.TEXTURE_2D, 0, gl.RGBA, 1, 1,
// border, data format, data type, pixels
0, gl.RGBA, gl.FLOAT, textureData
);
My intent is to sample it in the shader using a sampler like so:
uniform sampler2D data;
...
vec4 retrieved = texture2D(data, vec2(0.0, 0.0));
However, I am getting an error during gl.texImage2D:
WebGL: INVALID_ENUM: texImage2D: invalid texture type
WebGL error INVALID_ENUM in texImage2D(TEXTURE_2D, 0, RGBA, 1, 1, 0, RGBA, FLOAT,
[object Float32Array])
Comparing the OpenGL ES spec and the OpenGL 3.3 spec for texImage2D, it seems like I am not allowed to use gl.FLOAT. In that case, how would I accomplish what I am trying to do?
You can create a byte array from your float array. Each float takes 4 bytes (a 32-bit float). This array can be put into a texture using the standard RGBA format with unsigned bytes. This creates a texture where each texel contains a single 32-bit floating-point number, which seems to be exactly what you want.
The only problem is that your floating-point value is split into 4 values when you retrieve it from the texture in your fragment shader. So what you are looking for is most likely "how to convert a vec4 into a single float".
You should note that what you are trying to do, an internal format of RGBA consisting of 32-bit floats, will not work as described: the texture will still be 32 bits per texel, so forcing floats into it would result in clamping or precision loss. And even if each texel did consist of four 32-bit floats, your shader would most likely treat them as lowp when sampling with texture2D at some point.
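The byte-reinterpretation this answer describes can be sketched in Python with the struct module: the four bytes of the IEEE-754 float become the four unsigned-byte RGBA channels of one texel, and reassembling them recovers the float exactly.

```python
import struct

def float_to_rgba_bytes(value):
    """Reinterpret one 32-bit float as four unsigned bytes (one per channel)."""
    return struct.unpack('4B', struct.pack('<f', value))

def rgba_bytes_to_float(r, g, b, a):
    """Reassemble the four channel bytes into the original float."""
    return struct.unpack('<f', bytes((r, g, b, a)))[0]
```

The second function is the CPU-side equivalent of the "convert a vec4 into a single float" step the answer says you would need in the shader.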
The solution to my problem is actually quite simple! I just needed to type
var float_texture_ext = gl.getExtension('OES_texture_float');
Now WebGL can use texture floats!
This MDN page tells us why:
Note: In WebGL, unlike in other GL APIs, extensions are only available if explicitly requested.

How to get raw frame data from AVFrame.data[] and AVFrame.linesize[] without specifying the pixel format?

I get the general idea that frame.data[] is interpreted depending on the video's pixel format (RGB or YUV). But is there any general way to get all the pixel data from the frame? I just want to compute a hash of the frame data, without interpreting it to display the image.
According to AVFrame.h:
uint8_t* AVFrame::data[AV_NUM_DATA_POINTERS]
pointer to the picture/channel planes.
int AVFrame::linesize[AV_NUM_DATA_POINTERS]
For video, size in bytes of each picture line.
Does this mean that if I just read linesize[i] bytes from each data[i], I get the full pixel information of the frame?
linesize[i] contains stride for the i-th plane.
To obtain the whole buffer, use the function from avcodec.h
/**
* Copy pixel data from an AVPicture into a buffer, always assume a
* linesize alignment of 1. */
int avpicture_layout(const AVPicture* src, enum AVPixelFormat pix_fmt,
int width, int height,
unsigned char *dest, int dest_size);
Use
int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
to calculate the required buffer size.
The avpicture_* API is deprecated. Now you can use av_image_copy_to_buffer() and av_image_get_buffer_size() to get the image buffer.
You can also avoid allocating a new buffer (as av_image_copy_to_buffer() does) by reading AVFrame::data[] directly; the size of each plane can be obtained from av_image_fill_plane_sizes(). Only do this if you clearly understand the pixel format.
Find more here: https://www.ffmpeg.org/doxygen/trunk/group__lavu__picture.html
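The difference between linesize and width is just per-row padding. A plain Python sketch (a hypothetical helper, not part of FFmpeg) of what the copy does for a single plane:

```python
def copy_plane(plane, linesize, width_bytes, height):
    """Copy a strided image plane into a contiguous buffer, dropping the
    (linesize - width_bytes) padding bytes at the end of each row."""
    out = bytearray()
    for row in range(height):
        start = row * linesize
        out += plane[start:start + width_bytes]
    return bytes(out)
```

This is also why hashing linesize[i] * height bytes of data[i] directly would be wrong: the hash would include uninitialized padding, so identical frames with different alignment could hash differently.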

Reading an Image (standard format png,jpeg etc) and writing the Image Data to a binary file using Objective C

I am pretty new to Objective C and working with Cocoa Framework. I want to read an image and then extract the image data (just pixel data and not the header) and then write the data to a binary file. I am kind of stuck with this, I was going through the methods of NSImage but I couldn't find a suitable one. Can anyone suggest me some other ways of doing this?
Cocoa-wise, the easiest approach is to use the NSBitmapImageRep class. Once initialized, with an NSData object for example, you can access the color value at any coordinate as an NSColor object using the -setColor:atX:y: and -colorAtX:y: methods. Note that if you call these methods in tight loops, you may suffer a performance hit from objc_msgSend. You could instead access the raw bitmap data as a C array via the -bitmapData method. When dealing with an RGB image, for example, the channel values are interleaved, so each pixel occupies 3 consecutive bytes.
For example:
color values: [R,G,B][R,G,B][R,G,B]
indices: [0,1,2, 3,4,5, 6,7,8]
To loop through each pixel in the image and extract the RGB components:
unsigned char *bitmapData = [bitmapRep bitmapData];
if ([bitmapRep samplesPerPixel] == 3) {
    NSInteger pixelCount = [bitmapRep pixelsWide] * [bitmapRep pixelsHigh];
    for (NSInteger i = 0; i < pixelCount; i++) {
        NSInteger base = i * 3;
        // these range from 0-255
        unsigned char red   = bitmapData[base + 0];
        unsigned char green = bitmapData[base + 1];
        unsigned char blue  = bitmapData[base + 2];
    }
}
