create HBITMAP with alpha channel from BMP files - winapi

I started with a PNG image. I then split the alpha channel into a grayscale BMP file and converted the PNG into a BMP.
I would like to load both BMP files and merge them to give an HBITMAP with an alpha channel:
HBITMAP splash = LoadBitmap(hInstance, MAKEINTRESOURCE(IDB_SPLASH));
HBITMAP splashMask = LoadBitmap(hInstance, MAKEINTRESOURCE(IDB_SPLASH_MASK));
HBITMAP splashAlpha = ....
I found an example of creating an HBITMAP directly from a PNG. It uses IStream and COM to do the import. I'd rather not pull in more dependencies. Surely there is a better way to do this?

If you're looking for minimal code, consider converting your PNGs into 32-bit bitmaps and drawing them using the AlphaBlend API.
BLENDFUNCTION fn;
ZeroMemory(&fn, sizeof(fn));
fn.BlendOp = AC_SRC_OVER;
fn.BlendFlags = 0;
fn.SourceConstantAlpha = 255;
fn.AlphaFormat = AC_SRC_ALPHA;
AlphaBlend(dstDC, dstX, dstY, dstW, dstH, hdcSrc, srcX, srcY, srcW, srcH, fn);
The gotcha is that hdcSrc should refer to a 32-bit BGRA image in which the alpha channel has been premultiplied into the BGR channels, i.e.
B = B * A / 255;
G = G * A / 255;
R = R * A / 255;
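Putting the pieces together for the original question, here is a minimal sketch, assuming both BMPs have identical dimensions and the mask BMP is grayscale; the helper name MergeColorAndMask is illustrative, not a standard API. It pulls both bitmaps out as 32-bit BGRA with GetDIBits, premultiplies the color channels by the mask value, and writes the result into a top-down 32-bit DIB section:
HBITMAP MergeColorAndMask(HBITMAP color, HBITMAP mask)
{
    // Requires <windows.h> and <vector>.
    BITMAP bm;
    GetObject(color, sizeof(bm), &bm);

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = bm.bmWidth;
    bmi.bmiHeader.biHeight = -bm.bmHeight;  // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(NULL);
    void* bits = NULL;
    HBITMAP result = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);

    // Fetch both sources as 32-bit BGRA so the loop below is format-agnostic.
    const int n = bm.bmWidth * bm.bmHeight;
    std::vector<BYTE> colorBits(n * 4), maskBits(n * 4);
    GetDIBits(hdc, color, 0, bm.bmHeight, colorBits.data(), &bmi, DIB_RGB_COLORS);
    GetDIBits(hdc, mask, 0, bm.bmHeight, maskBits.data(), &bmi, DIB_RGB_COLORS);
    ReleaseDC(NULL, hdc);

    BYTE* dst = (BYTE*)bits;
    for (int i = 0; i < n; ++i)
    {
        BYTE a = maskBits[i * 4];                         // grayscale: any channel
        dst[i * 4 + 0] = colorBits[i * 4 + 0] * a / 255;  // B premultiplied
        dst[i * 4 + 1] = colorBits[i * 4 + 1] * a / 255;  // G premultiplied
        dst[i * 4 + 2] = colorBits[i * 4 + 2] * a / 255;  // R premultiplied
        dst[i * 4 + 3] = a;                               // A
    }
    return result;
}
With that, HBITMAP splashAlpha = MergeColorAndMask(splash, splashMask); yields a bitmap you can select into a memory DC and draw with the AlphaBlend call above.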

How can I erase the edge of a watermark?

I have a source picture like this (the source image is too large to upload):
and I have a white-background logo like this:
I tried the following OpenCV code:
std::string file_name = "E:\\xxx\\IMG_0001.JPG";
cv::Mat image = cv::imread(file_name);
cv::Mat mask_not;
cv::Mat mask = cv::imread("E:\\xxx\\white_eva.jpg",0);
cv::Mat logo = cv::imread("E:\\xxx\\white_eva.jpg");
cv::bitwise_not(mask,mask_not);
cv::cvtColor(mask_not,mask_not,cv::COLOR_GRAY2BGR);
std::cout<<mask_not.type()<<std::endl;
cv::Mat imageROI;
imageROI = image(cv::Rect(image.cols-logo.cols-10,image.rows-logo.rows-10,logo.cols,logo.rows));
cv::imwrite("E:\\xxx\\imageROI.jpg",imageROI);
logo.copyTo(imageROI,mask_not);
cv::imwrite("E:\\xxx\\test.JPG",image);
The result looks like this:
As you can see from the result, there is a white edge around the logo. At first I thought the mask wasn't large enough to cover the whole logo, but as you can see, the whole outline of the logo shows, which confused me. The first question is: how can I erase the white edge of the watermark?
I tried adding morphology operations to your mask image; this is what I was able to achieve.
std::string file_name = "./image1.jpg";
cv::Mat image = cv::imread(file_name);
cv::Mat mask_not;
cv::Mat mask = cv::imread("./eva.jpg",0);
cv::Mat logo = cv::imread("./eva.jpg");
// MORPHOLOGY OPS HERE
cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT,
                                            cv::Size(5, 5),
                                            cv::Point(-1, -1));
for (int i = 0; i < 20; ++i) {
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, element);
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, element);
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, element);
    cv::medianBlur(mask, mask, 5);
}
cv::Mat element_dilate = cv::getStructuringElement(cv::MORPH_RECT,
                                                   cv::Size(5, 5),
                                                   cv::Point(-1, -1));
cv::dilate(mask, mask, element_dilate);
cv::bitwise_not(mask,mask_not);
cv::imshow("win", mask_not);
cv::waitKey(0);
cv::cvtColor(mask_not,mask_not,cv::COLOR_GRAY2BGR);
std::cout<<mask_not.type()<<std::endl;
cv::Mat imageROI;
imageROI = image(cv::Rect(image.cols-logo.cols-10,image.rows-logo.rows-10,logo.cols,logo.rows));
cv::imwrite("./imageROI.jpg",imageROI);
logo.copyTo(imageROI,mask_not);
cv::imwrite("./test.JPG",image);
Result image (I resized the images a bit, so you may need to change the kernel sizes in the morphology operations):

SaveDDSTextureToFile() saves a black texture instead of the expected one

I have created a red-colored texture of DXGI format DXGI_FORMAT_R32_FLOAT. I prepared a byte buffer of red pixels with 4 bytes per pixel. The byte buffer is copied in using the device context's Map and Unmap functions, and after that I created a shader resource view. I got the resource back from the resource view and passed it to SaveDDSTextureToFile() to save the bitmap data in the DDS file format.
But when I save it to a DDS file to check it, it saves a texture of the same size that is totally black. Where should I look to debug this?
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = static_cast<UINT>(renderTarget.width);
desc.Height = static_cast<UINT>(renderTarget.height);
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R32_FLOAT;
desc.Usage = D3D11_USAGE_DYNAMIC;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
...
SaveDDSTextureToFile(renderer->Context(), texture2D, L"D:\\RED.dds");
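For reference, here is a hedged sketch of the elided Map/Unmap copy step (byteBuffer is an illustrative name for the CPU-side pixel buffer; desc is the texture description above). A classic cause of black output is ignoring mapped.RowPitch, which can be larger than Width * 4, so the copy is done row by row:
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = context->Map(texture2D, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (SUCCEEDED(hr))
{
    const BYTE* src = byteBuffer;                     // 4 bytes per pixel
    BYTE* dst = static_cast<BYTE*>(mapped.pData);
    for (UINT y = 0; y < desc.Height; ++y)
    {
        memcpy(dst, src, desc.Width * sizeof(float)); // one row of R32_FLOAT
        src += desc.Width * 4;
        dst += mapped.RowPitch;                       // driver-chosen pitch
    }
    context->Unmap(texture2D, 0);
}
It is also worth double-checking what the bytes actually encode: DXGI_FORMAT_R32_FLOAT interprets each 4-byte pixel as an IEEE float, so 8-bit BGRA color bytes copied in unchanged will not decode to the intended intensities.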
I have created the red texture buffer by following:
CImage m_cImage;
// create a test image
m_cImage.Create(w, -h, 8 * 4); // 8 bit * 4 channel => 32 bpp or 4 byte per pixel
auto hdc = m_cImage.GetDC();
Gdiplus::Graphics graphics(hdc);
// Create a SolidBrush object.
Gdiplus::SolidBrush redBrush(Gdiplus::Color::Red);
// Fill the rectangle.
Gdiplus::Status status = graphics.FillRectangle(&redBrush, 0, 0, w, h);
TRY_CONDITION(status == Gdiplus::Status::Ok);
....
// Then saved the m_cImage.GetBits() to bmp file using Gdiplus::Bitmap
// and my expected texture is found

Monochrome image getting displayed as colored RGB image

The bitmap is constructed from pixel data (purely pixel data). The construction is done by properly setting the bitmap parameters such as height, width, bit count, etc. The bitmap is actually created with CreateDIBSection, and it is loaded into a CStatic object that has Bitmap as a property.
The image is displayed with the proper width and content, but the content is colored instead of grayscale. For example, for an image of a white letter H on a black background, instead of displaying it as whitish, a blue-colored H is displayed. Similar color changes apply for different images. Also, beyond the color change, junk colored data sometimes appears that deviates from the original content of the image.
The bitmap is a 16-bit bitmap.
Please see below for the code used to create the bitmap.
The HDC is the device context of the CStatic variable into which the created bitmap is loaded;
I set the HBITMAP returned by the function below directly on this variable using the SetBitmap function. The CStatic variable also has Bitmap as one of its properties. See below for the function used to create the bitmap.
Function parameter definitions:
PixMapHeight = number of rows in the pixel matrix.
PixMapWidth = number of columns in the pixel matrix.
BitsPerPixel = the number of bits stored per pixel.
pPixMapBits = void pointer to the pixel array (raw pixel data only, 16 bits per pixel).
HBITMAP DoBitmapFromPixels(HDC Hdc, UINT PixMapWidth, UINT PixMapHeight, UINT BitsPerPixel, LPVOID pPixMapBits)
{
    BITMAPINFO *bmpInfo = (BITMAPINFO *)malloc(sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 256);
    BITMAPINFOHEADER &bmpInfoHeader(bmpInfo->bmiHeader);
    bmpInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
    LONG lBmpSize = PixMapWidth * PixMapHeight * (BitsPerPixel / 8);
    bmpInfoHeader.biWidth = PixMapWidth;
    bmpInfoHeader.biHeight = -(static_cast<int>(PixMapHeight)); // negative = top-down DIB
    bmpInfoHeader.biPlanes = 1;
    bmpInfoHeader.biBitCount = BitsPerPixel;
    bmpInfoHeader.biCompression = BI_RGB;
    bmpInfoHeader.biSizeImage = 0;
    bmpInfoHeader.biClrUsed = 0;
    bmpInfoHeader.biClrImportant = 0;
    void *pPixelPtr = NULL;
    HBITMAP hBitMap = CreateDIBSection(Hdc, bmpInfo, DIB_RGB_COLORS, &pPixelPtr, NULL, 0);
    if (pPixMapBits != NULL)
    {
        BYTE *pbBits = (BYTE *)pPixMapBits;
        BYTE *Pix = (BYTE *)pPixelPtr;
        // CurrentFrame is defined elsewhere; it selects which frame of the source buffer to copy.
        memcpy(Pix, pbBits + lBmpSize * (CurrentFrame - 1), lBmpSize);
    }
    free(bmpInfo);
    return hBitMap;
}
The expected output is the figure on the left of the attached file, but I am getting a blue-toned image as on the right (never mind the scaling and exact-match issues; the image is only there to illustrate the problem).
It would also be very helpful to know how RGB values are stored in 16 bits!
You never actually said what format pPixMapBits is in, but I'm guessing that it contains 16-bit values where 0 represents black, 32768 represents gray, and 65535 represents white.
You are creating a BITMAPINFOHEADER with biBitCount = 16 and biCompression = BI_RGB. According to the documentation, if you set the fields that way, then:
Each WORD in the bitmap array represents a single pixel. The relative intensities of red, green, and blue are represented with five bits for each color component. The value for blue is in the least significant five bits, followed by five bits each for green and red. The most significant bit is not used.
This is not the same format as your source data, and you are doing no conversion, so you get junk. Note that the bitmap format you chose is capable of representing only 2^5 = 32 shades of gray, not 65536, so you will suffer loss of quality during the conversion.
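To make the required conversion concrete, here is a sketch under that assumption (16-bit grayscale source, 16-bit RGB555 destination); the function name is illustrative:
void GrayToRGB555(const WORD* srcGray, WORD* dstPix, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i)
    {
        WORD g5 = srcGray[i] >> 11;               // keep the top 5 of 16 bits
        dstPix[i] = (g5 << 10) | (g5 << 5) | g5;  // R = G = B => gray
    }
}
Alternatively, describe the DIB section with a format that can represent your data with less loss, such as 32 bits per pixel or 8 bits per pixel with a grayscale palette.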

Rendering Windows screenshot capture bitmap as DirectX texture

I'm making progress developing a '3D desktop' DirectX app that needs to display the current contents of a desktop window (e.g. "Calculator") as a 2D texture on a rectangular surface in DirectX 11. I'm so close, but really struggling with the screenshot BMP -> Texture2D step. I do have screenshot -> HBITMAP and DDS file -> rendered texture working successfully, but can't complete screenshot -> rendered texture.
So far I have working the 'capture the window as a screenshot' bit:
RECT user_window_rectangle;
HWND user_window = FindWindow(NULL, TEXT("Calculator"));
GetClientRect(user_window, &user_window_rectangle);
HDC hdcScreen = GetDC(NULL);
HDC hdc = CreateCompatibleDC(hdcScreen);
UINT screenshot_width = user_window_rectangle.right - user_window_rectangle.left;
UINT screenshot_height = user_window_rectangle.bottom - user_window_rectangle.top;
hbmp = CreateCompatibleBitmap(hdcScreen, screenshot_width, screenshot_height);
SelectObject(hdc, hbmp);
PrintWindow(user_window, hdc, PW_CLIENTONLY);
At this point I have the window bitmap referenced by HBITMAP hbmp.
Also working is my code to render a DDS file as a texture on a directx/3d rectangle:
ID3D11Device *dev;
ID3D11DeviceContext *dev_context;
...
dev_context->PSSetShaderResources(0, 1, &shader_resource_view);
dev_context->PSSetSamplers(0, 1, &tex_sampler_state);
...
DirectX::TexMetadata tex_metadata;
DirectX::ScratchImage image;
hr = LoadFromDDSFile(L"Earth.dds", DirectX::DDS_FLAGS_NONE, &tex_metadata, image);
hr = CreateShaderResourceView(dev, image.GetImages(), image.GetImageCount(), tex_metadata, &shader_resource_view);
Pixel shader is:
Texture2D ObjTexture;
SamplerState ObjSamplerState;
float4 PShader(float4 pos : SV_POSITION, float4 color : COLOR, float2 tex : TEXCOORD) : SV_TARGET
{
    return ObjTexture.Sample(ObjSamplerState, tex);
}
The sampler state (note that the zeroed filter field actually gives point, not linear, filtering) is:
D3D11_SAMPLER_DESC sampler_desc;
ZeroMemory(&sampler_desc, sizeof(sampler_desc));
sampler_desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.MinLOD = 0;
sampler_desc.MaxLOD = D3D11_FLOAT32_MAX;
hr = dev->CreateSamplerState(&sampler_desc, &tex_sampler_state);
Question: how do I replace the LoadFromDDSFile bit with some equivalent that takes the HBITMAP from the windows screencapture and ends up with it on the graphics card as ObjTexture ?
Below is my best shot at bridging from the screenshot HBITMAP hbmp to the shader resource screenshot_texture, but it gives a memory access violation from the graphics driver (I think due to my "data.pSysMem = &bmp.bmBits", but no idea really):
BITMAP bmp;
GetObject(hbmp, sizeof(BITMAP), (LPSTR)&bmp);
D3D11_TEXTURE2D_DESC screenshot_desc = CD3D11_TEXTURE2D_DESC(DXGI_FORMAT_R8G8B8A8_UNORM, bmp.bmWidth, bmp.bmHeight, 1,
1,
D3D11_BIND_SHADER_RESOURCE
);
int bytes_per_pixel = 4;
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = &bmp.bmBits; //pixel buffer
data.SysMemPitch = bytes_per_pixel * bmp.bmWidth;// line size in byte
data.SysMemSlicePitch = bytes_per_pixel * bmp.bmWidth * bmp.bmHeight;// total buffer size in byte
hr = dev->CreateTexture2D(
&screenshot_desc, //texture format
&data, // pixel buffer use to fill the texture
&screenshot_texture // created texture
);
SOLUTION:
The main issue was that trying to use &bmp.bmBits directly as a pixel buffer caused memory conflicts within the graphics driver; this was resolved by using malloc to allocate an appropriately sized block of memory to store the pixel data. Thanks to Chuck Walbourn for helping with my poking around in the dark to work out how the pixel data is actually stored (it was actually 32 bits per pixel by default). It's still possible, even likely, that some of the code is relying on luck to read the pixel data correctly, but it has been improved with Chuck's input.
My basic technique was:
FindWindow to get the client window on the desktop
CreateCompatibleBitmap, SelectObject, and PrintWindow to get an HBITMAP of the snapshot
malloc to allocate the correct amount of space for a (byte*)pixel buffer
GetDIBits to populate the (byte*)pixel buffer from the HBITMAP
CreateTexture2D to build the texture buffer
CreateShaderResourceView to map the texture to the graphics pixel shader
So, working code to screenshot a Windows desktop window and pass it as a texture to a Direct3D app is:
RECT user_window_rectangle;
HWND user_window = FindWindow(NULL, TEXT("Calculator")); //the window can't be min
if (user_window == NULL)
{
MessageBoxA(NULL, "Can't find Calculator", "Camvas", MB_OK);
return;
}
GetClientRect(user_window, &user_window_rectangle);
//create
HDC hdcScreen = GetDC(NULL);
HDC hdc = CreateCompatibleDC(hdcScreen);
UINT screenshot_width = user_window_rectangle.right - user_window_rectangle.left;
UINT screenshot_height = user_window_rectangle.bottom - user_window_rectangle.top;
hbmp = CreateCompatibleBitmap(hdcScreen, screenshot_width, screenshot_height);
SelectObject(hdc, hbmp);
//Print to memory hdc
PrintWindow(user_window, hdc, PW_CLIENTONLY);
BITMAPINFOHEADER bmih;
ZeroMemory(&bmih, sizeof(BITMAPINFOHEADER));
bmih.biSize = sizeof(BITMAPINFOHEADER);
bmih.biPlanes = 1;
bmih.biBitCount = 32;
bmih.biWidth = screenshot_width;
bmih.biHeight = -(LONG)screenshot_height; // negative height = top-down DIB
bmih.biCompression = BI_RGB;
bmih.biSizeImage = 0;
int bytes_per_pixel = bmih.biBitCount / 8;
BYTE *pixels = (BYTE*)malloc(bytes_per_pixel * screenshot_width * screenshot_height);
BITMAPINFO bmi = { 0 };
bmi.bmiHeader = bmih;
int row_count = GetDIBits(hdc, hbmp, 0, screenshot_height, pixels, &bmi, DIB_RGB_COLORS);
D3D11_TEXTURE2D_DESC screenshot_desc = CD3D11_TEXTURE2D_DESC(
DXGI_FORMAT_B8G8R8A8_UNORM, // format
screenshot_width, // width
screenshot_height, // height
1, // arraySize
1, // mipLevels
D3D11_BIND_SHADER_RESOURCE, // bindFlags
D3D11_USAGE_DYNAMIC, // usage
D3D11_CPU_ACCESS_WRITE, // cpuaccessFlags
1, // sampleCount
0, // sampleQuality
0 // miscFlags
);
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = pixels; // pixel buffer
data.SysMemPitch = bytes_per_pixel * screenshot_width;// line size in byte
data.SysMemSlicePitch = bytes_per_pixel * screenshot_width * screenshot_height;
hr = dev->CreateTexture2D(
&screenshot_desc, //texture format
&data, // pixel buffer use to fill the texture
&screenshot_texture // created texture
);
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = screenshot_desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = screenshot_desc.MipLevels;
dev->CreateShaderResourceView(screenshot_texture, &srvDesc, &shader_resource_view);
You are making a lot of assumptions here that the BITMAP returned is actually in 32-bit RGBA form. It is likely not in that format at all, and in any case you need to validate that bmPlanes is 1 and bmBitsPixel is 32 if you are assuming 4 bytes per pixel. You should read more about the BMP format.
BMPs use BGRA channel order, so you can use DXGI_FORMAT_B8G8R8A8_UNORM for the case of bmBitsPixel being 32.
Secondly, you need to derive pitch from bmWidthBytes and not bmWidth.
data.pSysMem = bmp.bmBits; // pixel buffer (bmBits is already a pointer)
data.SysMemPitch = bmp.bmWidthBytes;// line size in byte
data.SysMemSlicePitch = bmp.bmWidthBytes * bmp.bmHeight;// total buffer size in byte
If bmBitsPixel is 24, there is no DXGI format equivalent to that. You have to copy the data to a 32-bit format such as DXGI_FORMAT_B8G8R8X8_UNORM.
If bmBitsPixel is 15 or 16, you can use DXGI_FORMAT_B5G5R5A1_UNORM on a system with Direct3D 11.1, but remember that 16-bit DXGI formats are not always supported depending on the driver. Otherwise you'll have to convert this data to something else.
For bmBitsPixel values of 1, 2, 4, or 8 you have to convert them as there are no DXGI texture formats that are equivalent.
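As a sketch of the 24-bit case mentioned above, assuming BGR source rows with the given pitch; the helper name and layout assumptions are illustrative:
std::vector<BYTE> Expand24To32(const BYTE* src, int width, int height, int srcPitch)
{
    // Requires <windows.h> and <vector>. srcPitch would be bmWidthBytes.
    std::vector<BYTE> dst(width * height * 4);
    for (int y = 0; y < height; ++y)
    {
        const BYTE* s = src + y * srcPitch;
        BYTE* d = dst.data() + y * width * 4;
        for (int x = 0; x < width; ++x)
        {
            d[x * 4 + 0] = s[x * 3 + 0]; // B
            d[x * 4 + 1] = s[x * 3 + 1]; // G
            d[x * 4 + 2] = s[x * 3 + 2]; // R
            d[x * 4 + 3] = 0xFF;         // X: opaque filler
        }
    }
    return dst;
}
The resulting buffer can then be fed to CreateTexture2D as DXGI_FORMAT_B8G8R8X8_UNORM with SysMemPitch = width * 4.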

Which is the most efficient way to do alpha mask in opencv?

I know OpenCV only supports binary masks.
But I need to do an overlay where I have a grayscale mask that specifies transparency of the overlay.
E.g., if a pixel in the mask is 50% white, it should mean a cv::addWeighted operation for that pixel with alpha = beta = 0.5 and gamma = 0.0.
Now, if there is no OpenCV library function for this, what algorithm would you suggest as the most efficient?
I did something like this for a fix.
typedef double Mask_value_t;
typedef Mat_<Mask_value_t> Mask;
void cv::addMasked(const Mat& src1, const Mat& src2, const Mask& mask, Mat& dst)
{
    MatConstIterator_<Vec3b> it1 = src1.begin<Vec3b>(), it1_end = src1.end<Vec3b>();
    MatConstIterator_<Vec3b> it2 = src2.begin<Vec3b>();
    MatConstIterator_<Mask_value_t> mask_it = mask.begin();
    MatIterator_<Vec3b> dst_it = dst.begin<Vec3b>();
    for (; it1 != it1_end; ++it1, ++it2, ++mask_it, ++dst_it)
        *dst_it = (*it1) * (1.0 - *mask_it) + (*it2) * (*mask_it);
}
I have not yet optimized this code or made it safe with assertions.
Working assumptions: all Mats and the Mask are the same size, and the Mats are normal three-channel color images.
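For what it's worth, a short usage sketch of the helper above; the file names are placeholders, and dst must be pre-allocated because the function writes through iterators rather than creating the output:
cv::Mat bg = cv::imread("background.jpg");   // placeholder file names
cv::Mat fg = cv::imread("foreground.jpg");
cv::Mat gray = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);
Mask mask;                                   // Mat_<double>, values in [0, 1]
gray.convertTo(mask, CV_64F, 1.0 / 255.0);
cv::Mat dst = bg.clone();                    // same size/type as the sources
cv::addMasked(bg, fg, mask, dst);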
I had a similar problem, where I wanted to apply a PNG with transparency.
My solution used Mat expressions:
void AlphaBlend(const Mat& imgFore, Mat& imgDst, const Mat& alpha)
{
    vector<Mat> vAlpha;
    Mat imgAlpha3;
    for (int i = 0; i < 3; i++) vAlpha.push_back(alpha);
    merge(vAlpha, imgAlpha3);
    Mat blend = imgFore.mul(imgAlpha3, 1.0/255) +
                imgDst.mul(Scalar::all(255) - imgAlpha3, 1.0/255);
    blend.copyTo(imgDst);
}
OpenCV supports RGBA images, which you can create by using mixChannels or the split and merge functions to combine your images with your grayscale mask. I hope this is what you are looking for!
Using this method you can combine your grayscale mask with your image like so:
cv::Mat gray_image, mask, rgba_image;
std::vector<cv::Mat> result;
cv::Mat image = cv::imread(image_path);
cv::split(image, result);
cv::cvtColor(image, gray_image, cv::COLOR_BGR2GRAY);
cv::threshold(gray_image, mask, 128, 255, cv::THRESH_BINARY);
result.push_back(mask);
cv::merge(result, rgba_image);
imwrite("rgba.png", rgba_image);
Keep in mind that you cannot view RGBA images using cv::imshow, as described in read-rgba-image-opencv, and you cannot save your image as JPEG, since that format does not support transparency. It seems that you can also combine channels using cv::cvtColor, as shown in opencv-2-3-convert-mat-to-rgba-pixel-array.
