The task at hand is to read a PNG image as bytes, get the RGBA value of each pixel, apply modifications, and write new pixel values back to the image. So far, this is the code I have:
import 'package:image/image.dart' as img;
final byteData = await rootBundle.load('assets/images/image.png');
img.Image image = img.Image.fromBytes(1536, 2048, byteData.buffer.asInt8List(), format: img.Format.rgba);
print(image.getPixelSafe(0, 0));
Applying it to this image, I get the integer value 1196314761. How do I convert this integer into RGBA or hex format?
If this is the wrong way to get a specific pixel, how can I do so?
First, you'll need to properly decode the image, which you are not currently doing. Use a decoder class such as PngDecoder instead of passing your still-encoded data directly to the Image constructor.
img.Image image = img.PngDecoder().decodeImage(byteData.buffer.asUint8List());
The package you're using returns the color as a Uint32 in #AABBGGRR format.
To get the channels individually, extract each byte from the Uint32. The following code uses a bitwise & to keep only the least significant byte, with bit shifts to reach the higher bytes.
int input = image.getPixelSafe(0, 0);
int red = input & 0xff;           // lowest byte
int green = (input >> 8) & 0xff;  // second byte
int blue = (input >> 16) & 0xff;  // third byte
int alpha = (input >> 24) & 0xff; // highest byte
To format a channel as a hex string, you can use red.toRadixString(16).
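If you want the whole pixel as one hex string, a small helper along these lines works; rgbaToHex is just an illustrative name, not part of the image package:

// Sketch: format a packed #AABBGGRR pixel as an 'RRGGBBAA' hex string.
// rgbaToHex is an illustrative helper name, not part of the image package.
String rgbaToHex(int pixel) {
  final channels = [
    pixel & 0xff,         // red
    (pixel >> 8) & 0xff,  // green
    (pixel >> 16) & 0xff, // blue
    (pixel >> 24) & 0xff, // alpha
  ];
  return channels.map((c) => c.toRadixString(16).padLeft(2, '0')).join();
}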
What I am trying to do is calculate the temperature of a selected area in an image. My code:
M = imread('IR003609.BMP');
a = min(M(:)); % find the minimum temperature in the image
b = max(M(:)); % find the maximum temperature in the image
imshow(M, [a b]);
h = roipoly();
maskOfROI = h;
selectedValues = M(maskOfROI);
averageTemperature = mean(selectedValues)
maxTemperature = max(selectedValues)
minTemperature = min(selectedValues)
My image is this, with the mouth area selected:
The values it returns are these:
averageTemperature =
64.0393
maxTemperature =
uint8
255
minTemperature =
uint8
1
Now my questions are: is the program returning the correct temperature values (compared with the values seen in the image), or are these just raw pixel values, with emissivity unaccounted for? If the values are wrong, how could I fix them? Please help.
I see that the color bar is the hue channel of HSV, so I suggest converting to temperature along these lines: convert the image to HSV, take the first (hue) layer, then rescale it to fit 31-39 deg. The colors also appear to be flipped, so flip them upside down.
M = imread('jQLo5.jpg');
Mhsv = rgb2hsv(M);                                  % hue encodes the temperature
maxTemp = 39;
minTemp = 31;
Mtemp = (1-Mhsv(:,:,1))*(maxTemp-minTemp)+minTemp;  % flip the hue and rescale to 31-39 deg
figure;
imagesc(Mtemp)
colormap(flipud(hsv))
colorbar
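With the temperature map in hand, the ROI statistics from the question can be computed on Mtemp instead of on the raw pixel values. A sketch, reusing roipoly from the question:

% Sketch: recompute the ROI statistics on the calibrated temperature map.
h = roipoly();               % draw the region of interest on the current figure
selectedValues = Mtemp(h);   % temperatures inside the ROI, in degrees
averageTemperature = mean(selectedValues)
maxTemperature = max(selectedValues)
minTemperature = min(selectedValues)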
The bitmap is constructed from pixel data (purely pixel data). The construction is done by properly setting the bitmap parameters like height, width, bit count, etc. The bitmap is actually created with CreateDIBSection, and it is loaded onto a CStatic control that has the Bitmap property set.
The image is displayed with the proper width and content; the only difference is that the content is colored instead of grayscale. For example, for an image of a white letter H on a black background, a blue-colored H is displayed instead of a whitish one. Similar color changes apply to different images. Also, junk-colored data sometimes appears, deviating from the original content of the image beyond the color change.
The bitmap is a 16-bit bitmap.
Please see below for the code used to create the bitmap. Hdc is the device context of the CStatic control into which the created bitmap is loaded; I set the HBITMAP returned by the function below on that control using the SetBitmap function.
Function parameter definitions:
PixMapHeight = number of rows in the pixel matrix.
PixMapWidth = number of columns in the pixel matrix.
BitsPerPixel = the bits stored for one pixel.
pPixMapBits = void pointer to the pixel array (raw pixel data only, 16 bits per pixel).
HBITMAP DoBitmapFromPixels(HDC Hdc, UINT PixMapWidth, UINT PixMapHeight, UINT BitsPerPixel, LPVOID pPixMapBits)
{
BITMAPINFO *bmpInfo = (BITMAPINFO *)malloc(sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 256);
BITMAPINFOHEADER &bmpInfoHeader(bmpInfo->bmiHeader);
bmpInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
LONG lBmpSize = PixMapWidth * PixMapHeight * (BitsPerPixel / 8);
bmpInfoHeader.biWidth = PixMapWidth;
bmpInfoHeader.biHeight = -(static_cast<int>(PixMapHeight));
bmpInfoHeader.biPlanes = 1;
bmpInfoHeader.biBitCount = BitsPerPixel;
bmpInfoHeader.biCompression = BI_RGB;
bmpInfoHeader.biSizeImage = 0;
bmpInfoHeader.biClrUsed = 0;
bmpInfoHeader.biClrImportant = 0;
void *pPixelPtr = NULL;
HBITMAP hBitMap = CreateDIBSection(Hdc, bmpInfo, DIB_RGB_COLORS, &pPixelPtr, NULL, 0);
if (pPixMapBits != NULL)
{
BYTE* pbBits = (BYTE*)pPixMapBits;
BYTE *Pix = (BYTE *)pPixelPtr;
memcpy(Pix, ((BYTE*)pbBits + (lBmpSize * (CurrentFrame - 1))), lBmpSize); // CurrentFrame comes from the surrounding code
}
free(bmpInfo);
return hBitMap;
}
The intended output is the figure on the left of the attached file, but I am getting a blue-toned image like the one on the right (never mind the scaling and exact-match issues; the image is there to illustrate the problem).
It would also be very helpful to know how RGB values are stored in 16 bits!
You never actually said what format pPixMapBits is in, but I'm guessing that it contains 16-bit values where 0 represents black, 32768 represents gray, and 65535 represents white.
You are creating a BITMAPINFOHEADER with biBitCount = 16 and biCompression = BI_RGB. According to the documentation, if you set the fields that way, then:
Each WORD in the bitmap array represents a single pixel. The relative intensities of red, green, and blue are represented with five bits for each color component. The value for blue is in the least significant five bits, followed by five bits each for green and red. The most significant bit is not used.
This is not the same format as your source data, and you are doing no conversion, so you get junk. Note that the bitmap format you chose is capable of representing only 2^5 = 32 shades of gray, not 65536, so you will suffer loss of quality during the conversion.
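As a minimal sketch of the conversion this implies, assuming pPixMapBits really does hold 16-bit grayscale samples as guessed above, you could take the top 5 bits of each sample and replicate them into the red, green, and blue fields of each 5-5-5 WORD:

// Sketch: convert 16-bit grayscale samples (0 = black, 65535 = white)
// to the 5-5-5 RGB WORDs that BI_RGB at 16 bpp expects.
void GrayscaleToRgb555(const unsigned short* src, unsigned short* dst, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        unsigned short v = src[i] >> 11;      // keep the top 5 bits
        dst[i] = (v << 10) | (v << 5) | v;    // fill the R, G, and B fields
    }
}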
I know OpenCV only supports binary masks.
But I need to do an overlay where I have a grayscale mask that specifies transparency of the overlay.
E.g., if a pixel in the mask is 50% white, it should mean a cv::addWeighted operation for that pixel with alpha = beta = 0.5 and gamma = 0.0.
Now, if there is no OpenCV library function for this, what algorithm would you suggest as the most efficient?
I did something like this for a fix.
typedef double Mask_value_t;
typedef Mat_<Mask_value_t> Mask;
// Assumes `using namespace cv`; dst must be preallocated to src1's size and type.
void addMasked(const Mat& src1, const Mat& src2, const Mask& mask, Mat& dst)
{
MatConstIterator_<Vec3b> it1 = src1.begin<Vec3b>(), it1_end = src1.end<Vec3b>();
MatConstIterator_<Vec3b> it2 = src2.begin<Vec3b>();
MatConstIterator_<Mask_value_t> mask_it = mask.begin();
MatIterator_<Vec3b> dst_it = dst.begin<Vec3b>();
for(; it1 != it1_end; ++it1, ++it2, ++mask_it, ++dst_it)
*dst_it = (*it1) * (1.0-*mask_it) + (*it2) * (*mask_it);
}
I have not yet optimized this code or made it safe with assertions.
Working assumptions: all the Mats and the Mask are the same size, and the Mats are ordinary three-channel color images.
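A hypothetical usage sketch under those assumptions (the file names are illustrative):

// Sketch: blend two same-sized color images with a uniform 50% mask.
cv::Mat a = cv::imread("a.png");
cv::Mat b = cv::imread("b.png");
Mask mask(a.rows, a.cols);
mask.setTo(cv::Scalar::all(0.5));  // uniform 50/50 blend for the sketch
cv::Mat dst(a.size(), a.type());   // addMasked assumes dst is preallocated
addMasked(a, b, mask, dst);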
I had a similar problem, where I wanted to apply a PNG with transparency.
My solution was to use Mat expressions:
void AlphaBlend(const Mat& imgFore, Mat& imgDst, const Mat& alpha)
{
vector<Mat> vAlpha;
Mat imgAlpha3;
for(int i = 0; i < 3; i++) vAlpha.push_back(alpha);
merge(vAlpha, imgAlpha3);
Mat blend = imgFore.mul(imgAlpha3,1.0/255) +
imgDst.mul(Scalar::all(255)-imgAlpha3,1.0/255);
blend.copyTo(imgDst);
}
OpenCV supports RGBA images, which you can create by using mixChannels or the split and merge functions to combine your images with your grayscale mask. I hope this is what you are looking for!
Using this method you can combine your grayscale mask with your image like so:
cv::Mat gray_image, mask, rgba_image;
std::vector<cv::Mat> result;
cv::Mat image = cv::imread(image_path);
cv::split(image, result);
cv::cvtColor(image, gray_image, CV_BGR2GRAY);
cv::threshold(gray_image, mask, 128, 255, CV_THRESH_BINARY);
result.push_back(mask);
cv::merge(result, rgba_image);
cv::imwrite("rgba.png", rgba_image);
Keep in mind that you cannot view RGBA images using cv::imshow, as described in read-rgba-image-opencv, and you cannot save your image as JPEG since that format does not support transparency. It seems you can combine channels using cv::cvtColor, as shown in opencv-2-3-convert-mat-to-rgba-pixel-array.
I am trying to use OpenCV 2.3.1 to convert a 12-bit Bayer image to an 8-bit RGB image. This seems like it should be fairly straightforward using the cvCvtColor function, but the function throws an exception when I call it with this code:
int cvType = CV_MAKETYPE(CV_16U, 1);
cv::Mat bayerSource(height, width, cvType, sourceBuffer);
cv::Mat rgbDest(height, width, CV_8UC3);
cvCvtColor(&bayerSource, &rgbDest, CV_BayerBG2RGB);
I thought that I was running past the end of sourceBuffer, since the input data is 12-bit, and I had to pass in a 16-bit type because OpenCV doesn't have a 12-bit type. So I divided the width and height by 2, but cvCvtColor still threw an exception that didn't have any helpful information in it (the error message was "Unknown exception").
There was a similar question posted a few months ago that was never answered, but since my question deals more specifically with 12-bit Bayer data, I thought it was sufficiently distinct to merit a new question.
Thanks in advance.
Edit: I must be missing something, because I can't even get the cvCvtColor function to work on 8-bit data:
cv::Mat srcMat(100, 100, CV_8UC3);
const cv::Scalar val(255,0,0);
srcMat.setTo(val);
cv::Mat destMat(100, 100, CV_8UC3);
cvCvtColor(&srcMat, &destMat, CV_RGB2BGR);
I was able to convert my data to 8-bit RGB using the following code:
// Copy the data into an OpenCV Mat structure
cv::Mat bayer16BitMat(height, width, CV_16UC1, inputBuffer);
// Convert the Bayer data from 16-bit to to 8-bit
cv::Mat bayer8BitMat = bayer16BitMat.clone();
// The 3rd parameter here scales the data by 1/16 so that it fits in 8 bits.
// Without it, convertTo() just seems to chop off the high order bits.
bayer8BitMat.convertTo(bayer8BitMat, CV_8UC1, 0.0625);
// Convert the Bayer data to 8-bit RGB
cv::Mat rgb8BitMat(height, width, CV_8UC3);
cv::cvtColor(bayer8BitMat, rgb8BitMat, CV_BayerGR2RGB);
I had mistakenly assumed that the 12-bit data I was getting from the camera was tightly packed, so that two 12-bit values were contained in 3 bytes. It turns out that each value was contained in 2 bytes, so I didn't have to do any unpacking to get my data into a 16-bit array that is supported by OpenCV.
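For anyone whose camera does deliver tightly packed 12-bit data, here is a sketch of the unpacking; the byte layout is an assumption, so check your camera's documentation for the actual packing:

// Sketch: unpack two 12-bit samples from every 3 bytes into 16-bit values.
// Assumes b0 holds the low 8 bits of the first sample and the low nibble of
// b1 holds its high 4 bits; actual layouts vary by camera.
void Unpack12To16(const unsigned char* src, unsigned short* dst, size_t pairCount)
{
    for (size_t i = 0; i < pairCount; ++i) {
        const unsigned char* b = src + i * 3;
        dst[2 * i]     = (unsigned short)(b[0] | ((b[1] & 0x0F) << 8));
        dst[2 * i + 1] = (unsigned short)((b[1] >> 4) | (b[2] << 4));
    }
}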
Edit: See @petr's improved answer below, which converts to RGB before converting to 8 bits, to avoid losing color information during the conversion.
Gillfish's answer technically works, but during the conversion it uses a smaller data type (CV_8UC1) than the input (CV_16UC1) and loses some color information.
I would suggest first decoding the Bayer encoding while staying at 16 bits per channel (from CV_16UC1 to CV_16UC3), and only then converting to CV_8UC3.
Gillfish's code, modified (assuming the camera gives the image in 16-bit Bayer encoding):
// Copy the data into an OpenCV Mat structure
cv::Mat mat16uc1_bayer(height, width, CV_16UC1, inputBuffer);
// Decode the Bayer data to RGB but keep using 16 bits per channel
cv::Mat mat16uc3_rgb(height, width, CV_16UC3);
cv::cvtColor(mat16uc1_bayer, mat16uc3_rgb, cv::COLOR_BayerGR2RGB);
// Convert the 16-bit-per-channel RGB image to 8 bits per channel
cv::Mat mat8uc3_rgb(height, width, CV_8UC3);
mat16uc3_rgb.convertTo(mat8uc3_rgb, CV_8UC3, 1.0/256); // this could perhaps be done more effectively by cropping bits
For anyone struggling with this: the above solution only works if your image actually comes in at 16 bits. Otherwise, as already suggested in the comments, you should chop off the 4 least significant bits. I achieved that with the code below. It's not very clean, but it works.
unsigned short* image_12bit = (unsigned short*)data;
std::vector<unsigned char> out(rows * cols);
for(int i = 0; i < rows * cols; i++) {
    // scale 0..4095 down to 0..255
    out[i] = (unsigned char)((255.0 * image_12bit[i]) / (double)(1 << 12));
}
cv::Mat bayer_image(rows, cols, CV_8UC1, out.data());
cv::cvtColor(bayer_image, *res, cv::COLOR_BayerGR2BGR); // res is the destination Mat pointer from the surrounding code
I am pretty new to Objective-C and working with the Cocoa framework. I want to read an image, extract the image data (just the pixel data, not the header), and then write that data to a binary file. I am kind of stuck with this; I was going through the methods of NSImage but couldn't find a suitable one. Can anyone suggest some other ways of doing this?
Cocoa-wise, the easiest approach is to use the NSBitmapImageRep class. Once it is initialized, with an NSData object for example, you can access the color value at any coordinate as an NSColor object using the -setColor:atX:y: and -colorAtX:y: methods. Note that if you call these methods in tight loops, you may suffer a performance hit from objc_msgSend. You could instead access the raw bitmap data as a C array via the -bitmapData method. When dealing with an RGB image, for example, the channel values are interleaved, so each pixel occupies 3 consecutive bytes.
For example:
color values: [R,G,B][R,G,B][R,G,B]
indices: [0,1,2, 3,4,5, 6,7,8]
To loop through each pixel in the image and extract the RGB components:
unsigned char *bitmapData = [bitmapRep bitmapData];
if ([bitmapRep samplesPerPixel] == 3) {
    // Use the rep's pixel dimensions; [image size] is in points, not pixels.
    NSInteger pixelCount = [bitmapRep pixelsWide] * [bitmapRep pixelsHigh];
    for (NSInteger i = 0; i < pixelCount; i++) {
        int base = (i * 3);
        // these range from 0-255
        unsigned char red   = bitmapData[base + 0];
        unsigned char green = bitmapData[base + 1];
        unsigned char blue  = bitmapData[base + 2];
    }
}
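To round out the original question, here is a minimal sketch of reading an image and dumping only the pixel data to a binary file; the file paths are illustrative:

// Sketch: load an image, then write its raw pixel data to a binary file.
// The paths are illustrative.
NSData *fileData = [NSData dataWithContentsOfFile:@"/tmp/input.png"];
NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:fileData];
NSData *pixelData = [NSData dataWithBytes:[bitmapRep bitmapData]
                                   length:[bitmapRep bytesPerPlane]];
[pixelData writeToFile:@"/tmp/pixels.bin" atomically:YES];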