Can't get BITMAPINFOHEADER data to display odd-width BMP images correctly - winapi

I am trying to display a 24-bit uncompressed bitmap with an odd width using standard Win32 API calls, but it seems like I have a stride problem.
According to MSDN:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229%28v=vs.85%29.aspx
"For uncompressed RGB formats, the minimum stride is always the image width in bytes, rounded up to the nearest DWORD. You can use the following formula to calculate the stride:
stride = ((((biWidth * biBitCount) + 31) & ~31) >> 3)"
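(Worked through for an odd width, say 101 pixels at 24 bpp: ((101 * 24) + 31) & ~31 = 2432 bits, and 2432 >> 3 = 304 bytes per row, i.e. 303 bytes of pixel data plus one byte of DWORD padding.)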
but this simply does not work for me; below is the code:
void Init()
{
    pImage = ReadBMP("data\\bird.bmp");
    size_t imgSize = pImage->width * pImage->height * 3;

    BITMAPINFOHEADER bmih;
    bmih.biSize = sizeof(BITMAPINFOHEADER);
    bmih.biBitCount = 24;

    // This is probably where the bug is
    LONG stride = ((((pImage->width * bmih.biBitCount) + 31) & ~31) >> 3);
    //bmih.biWidth = pImage->width;
    bmih.biWidth = stride;

    bmih.biHeight = -((LONG)pImage->height);
    bmih.biPlanes = 1;
    bmih.biCompression = BI_RGB;
    bmih.biSizeImage = 0;
    bmih.biXPelsPerMeter = 1;
    bmih.biYPelsPerMeter = 1;
    bmih.biClrUsed = 0;
    bmih.biClrImportant = 0;

    BITMAPINFO dbmi;
    ZeroMemory(&dbmi, sizeof(dbmi));
    dbmi.bmiHeader = bmih;
    dbmi.bmiColors->rgbBlue = 0;
    dbmi.bmiColors->rgbGreen = 0;
    dbmi.bmiColors->rgbRed = 0;
    dbmi.bmiColors->rgbReserved = 0;

    HDC hdc = ::GetDC(NULL);
    mTestBMP = CreateDIBitmap(hdc,
                              &bmih,
                              CBM_INIT,
                              pImage->pSrc,
                              &dbmi,
                              DIB_RGB_COLORS);
    ::ReleaseDC(NULL, hdc);
}
and here is the drawing function:
RawBMP *pImage;
HBITMAP mTestBMP;

void UpdateScreen(HDC srcHDC)
{
    if (pImage != nullptr && mTestBMP != 0x00)
    {
        HDC hdc = CreateCompatibleDC(srcHDC);
        SelectObject(hdc, mTestBMP);
        BitBlt(srcHDC,
               0,              // x
               0,              // y
               // I tried passing the stride here and it did not work either
               pImage->width,  // width of the image
               pImage->height, // height
               hdc,
               0,              // x and
               0,              // y of upper left corner
               SRCCOPY);
        DeleteDC(hdc);
    }
}
If I pass the original image width (odd number) instead of the stride
LONG stride = ((((pImage->width * bmih.biBitCount) + 31) & ~31) >> 3);
//bmih.biWidth = stride;
bmih.biWidth = pImage->width;
the picture comes out skewed, and if I pass the stride according to MSDN, nothing shows up because the stride is too large.
Any clues? Thank you!

Thanks Jonathan for the solution. I need to copy row by row with the proper padding for odd-width images. More or less, this is the code for 24-bit uncompressed images:
const uint32_t bitCount = 24;
LONG strideInBytes;

// if the width is odd, then we need to add padding
if (width & 0x1)
{
    strideInBytes = ((((width * bitCount) + 31) & ~31) >> 3);
}
else
{
    strideInBytes = width * 3;
}

// allocate the new buffer
unsigned char *pBuffer = new unsigned char[strideInBytes * height];
memset(pBuffer, 0xaa, strideInBytes * height);

// copy row by row
for (uint32_t yy = 0; yy < height; yy++)
{
    uint32_t rowSizeInBytes = width * 3;
    unsigned char *pDest = &pBuffer[yy * strideInBytes];
    unsigned char *pSrc  = &pData[yy * rowSizeInBytes];
    memcpy(pDest, pSrc, rowSizeInBytes);
}

rawBMP->pSrc   = pBuffer;
rawBMP->width  = width;
rawBMP->height = height;
rawBMP->stride = strideInBytes;
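For reference, a rough sketch of how the padded buffer then feeds back into the original Init() code (same RawBMP fields as above; error handling omitted). The key point is that biWidth stays the pixel width while the rows themselves are DWORD-aligned:

BITMAPINFOHEADER bmih = {};
bmih.biSize = sizeof(BITMAPINFOHEADER);
bmih.biWidth = rawBMP->width;                       // pixel width, NOT the stride
bmih.biHeight = -((LONG)rawBMP->height);            // negative height = top-down DIB
bmih.biPlanes = 1;
bmih.biBitCount = 24;
bmih.biCompression = BI_RGB;
bmih.biSizeImage = rawBMP->stride * rawBMP->height; // padded size

BITMAPINFO dbmi = {};
dbmi.bmiHeader = bmih;

HDC hdc = ::GetDC(NULL);
HBITMAP bmp = CreateDIBitmap(hdc, &bmih, CBM_INIT,
                             rawBMP->pSrc,          // rows already padded to a DWORD boundary
                             &dbmi, DIB_RGB_COLORS);
::ReleaseDC(NULL, hdc);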

Related

Correct RGB values for AVFrame

I have to fill the ffmpeg AVFrame->data from the pixel data of a cairo surface. I have this code:
/* Image info and pixel data */
width  = cairo_image_surface_get_width( surface );
height = cairo_image_surface_get_height( surface );
stride = cairo_image_surface_get_stride( surface );
pix    = cairo_image_surface_get_data( surface );

for( row = 0; row < height; row++ )
{
    data = pix + row * stride;
    for( col = 0; col < width; col++ )
    {
        img->video_frame->data[0][row * img->video_frame->linesize[0] + col] = data[0];
        img->video_frame->data[1][row * img->video_frame->linesize[1] + col] = data[1];
        //img->video_frame->data[2][row * img->video_frame->linesize[2] + col] = data[2];
        data += 4;
    }
    img->video_frame->pts++;
}
But the colors in the exported video are wrong; the original heart is red. Can someone point me in the right direction? Sadly the encode.c example is useless, and on the Internet there is a lot of confusion about Y, Cb and Cr which I really don't understand. Please feel free to ask for more details. Many thanks.
You need to use libswscale to convert the source image data from RGB24 to YUV420P.
Something like:
int width  = cairo_image_surface_get_width( surface );
int height = cairo_image_surface_get_height( surface );
int stride = cairo_image_surface_get_stride( surface );
uint8_t *pix = cairo_image_surface_get_data( surface );

uint8_t *data[1] = { pix };
int linesize[1]  = { stride };

struct SwsContext *sws_ctx = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                            width, height, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
sws_scale(sws_ctx, data, linesize, 0, height,
          img->video_frame->data, img->video_frame->linesize);
sws_freeContext(sws_ctx);
See the example here: scaling_video
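If img->video_frame has not already been allocated with a YUV420P buffer, it needs one before sws_scale can write into it; a minimal sketch, assuming the frame is created right here rather than elsewhere in your code:

AVFrame *frame = av_frame_alloc();
frame->format = AV_PIX_FMT_YUV420P;
frame->width  = width;
frame->height = height;
av_frame_get_buffer(frame, 0);   // allocates frame->data[] and frame->linesize[]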

How to draw into device context

I have a bitmap image in the form of an array of 32-bit integers (ARGB pixels: uint32 *mypixels), plus int width and int height. I need to output them to a printer.
I have the printer context: HDC hdcPrinter;
As I learned, I need first to create a compatible context:
HDC hdcMem = CreateCompatibleDC(hdcPrinter);
Then I need to create an HBITMAP object, select it into the compatible context, and render:
HBITMAP hBitmap = ...?
SelectObject(hdcMem, hBitmap);
BitBlt(printerContext, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);
And finally clean up:
DeleteObject(hBitmap);
DeleteDC(hdcMem);
My question is how do I create an HBITMAP object and put mypixels into it?
I found two options:
HBITMAP hBitmap = CreateCompatibleBitmap(hdcPrinter, width, height);
Looks good, but how do mypixels get into this bitmap?
HBITMAP hBitmap = CreateDIBSection(hdcPrinter /*or hdcMem?*/, ...);
Will it work? Is it better than option 1?
This function creates a bitmap and sets it to an initial image.
It's a bit fiddly to access the bits directly, but it can be done.
HBITMAP MakeBitmap(unsigned char *rgba, int width, int height, VOID **buff)
{
    VOID *pvBits;      // pointer to DIB section bits
    HBITMAP answer;
    BITMAPINFO bmi;
    HDC screen, hdc;
    int x, y;
    int red, green, blue, alpha;

    // set up the bitmap info (zero it first so the unused fields are valid)
    memset(&bmi, 0, sizeof(bmi));
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = height;
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;        // four 8-bit components
    bmi.bmiHeader.biCompression = BI_RGB;
    bmi.bmiHeader.biSizeImage = width * height * 4;

    screen = GetDC(0);
    hdc = CreateCompatibleDC(screen);
    answer = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &pvBits, NULL, 0x0);

    for (y = 0; y < height; y++)
    {
        for (x = 0; x < width; x++)
        {
            red   = rgba[(y*width + x) * 4];
            green = rgba[(y*width + x) * 4 + 1];
            blue  = rgba[(y*width + x) * 4 + 2];
            alpha = rgba[(y*width + x) * 4 + 3];

            // premultiply the colour channels by alpha (>> 8 approximates / 255)
            red   = (red * alpha) >> 8;
            green = (green * alpha) >> 8;
            blue  = (blue * alpha) >> 8;

            // bottom-up DIB, so flip the row; pixel layout is 0xAARRGGBB
            ((UINT32 *)pvBits)[(height - y - 1) * width + x] =
                (alpha << 24) | (red << 16) | (green << 8) | blue;
        }
    }

    DeleteDC(hdc);
    ReleaseDC(0, screen);
    *buff = pvBits;
    return answer;
}
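Usage with the printing flow from the question might then look roughly like this (hdcPrinter assumed valid; note that MakeBitmap expects the bytes in R,G,B,A order, so the uint32 ARGB pixels may need their channels reordered first):

VOID *bits = NULL;
HBITMAP hBitmap = MakeBitmap((unsigned char *)mypixels, width, height, &bits);

HDC hdcMem = CreateCompatibleDC(hdcPrinter);
HGDIOBJ old = SelectObject(hdcMem, hBitmap);
BitBlt(hdcPrinter, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);
SelectObject(hdcMem, old);

DeleteDC(hdcMem);
DeleteObject(hBitmap);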

How do I create a Texture3D programmatically?

I am trying to create a Texture3D programmatically but I don't really understand how it is done. Should each slice of the texture be a subresource? This is what I am trying to do, but it is not working:
// Create texture3d
const int32 cWidth  = 6;
const int32 cHeight = 7;
const int32 cDepth  = 3;

D3D11_TEXTURE3D_DESC desc;
desc.Width = cWidth;
desc.Height = cHeight;
desc.MipLevels = 1;
desc.Depth = cDepth;
desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;

const uint32 bytesPerPixel = 4;
uint32 sliceSize = cWidth*cHeight*bytesPerPixel;
float tex3d[cWidth*cHeight*cDepth];
memset(tex3d, 0x00, sizeof(tex3d));

uint32 colorIndex = 0;
for (uint32 depthCount = 0; depthCount < cDepth; depthCount++)
{
    for (uint32 ii = 0; ii < cHeight; ii++)
    {
        for (uint32 jj = 0; jj < cWidth; jj++)
        {
            // Add some dummy color
            tex3d[colorIndex++] = 1.f;
            tex3d[colorIndex++] = 0.f;
            tex3d[colorIndex++] = 1.f;
            tex3d[colorIndex++] = 0.f;
        }
    }
}

D3D11_SUBRESOURCE_DATA initData[cDepth] = {0};
uint8 *pMem = (uint8*)tex3d;

// What do I pass here? Each slice?
for (uint32 depthCount = 0; depthCount < cDepth; depthCount++)
{
    initData[depthCount].pSysMem = static_cast<const void*>(pMem);
    initData[depthCount].SysMemPitch = static_cast<UINT>(sliceSize);      // not sure
    initData[depthCount].SysMemSlicePitch = static_cast<UINT>(sliceSize); // not sure
    pMem += sliceSize;
}

ID3D11Texture3D* tex = nullptr;
hr = m_d3dDevice->CreateTexture3D(&desc, &initData[0], &tex);

ID3D11RenderTargetView *pRTV = nullptr;
hr = m_d3dDevice->CreateRenderTargetView(tex, nullptr, &pRTV);
This creates the texture, but it gives me only 1 subresource. Should it be 3?
I looked at this article, but it refers to Texture2D:
D3D11: Creating a cube map from 6 images
If anyone has a snippet of code that works, I'd like to take a look.
Thanks!
In Direct3D, 3D textures are laid out such that sub-resources are mipmap levels. Each mipmap level contains 1/2 as many slices as the previous, but in this case you only have 1 mipmap LOD, so you will only have 1 subresource (containing 3 slices).
As for the pitch, SysMemPitch is the number of bytes between rows in each image slice (cWidth * bytesPerPixel assuming you tightly pack this). SysMemSlicePitch is the number of bytes between 2D slices (cWidth * cHeight * bytesPerPixel). Thus, the memory for each mipmap needs to be arranged as a series of 2D images with the same dimensions.
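In other words, a single D3D11_SUBRESOURCE_DATA describes all three slices of mip level 0. A rough sketch using the sizes from the question (note that R32G32B32A32_FLOAT is 16 bytes per texel, not 4):

const uint32 texelSize = 4 * sizeof(float);                   // R32G32B32A32_FLOAT
float texels[cWidth * cHeight * cDepth * 4] = {};             // 4 floats per texel

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem          = texels;
initData.SysMemPitch      = cWidth * texelSize;               // bytes from one row to the next
initData.SysMemSlicePitch = cWidth * cHeight * texelSize;     // bytes from one depth slice to the next

ID3D11Texture3D *tex = nullptr;
HRESULT hr = m_d3dDevice->CreateTexture3D(&desc, &initData, &tex);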

Create ARGB DIB

How do I create a DIB in ARGB format? I want to blit an image (that has some transparent parts in it) using this DIB.
I tried the following code but it's not working properly:
unsigned char *rawdata;        // filled with QImage raw data
unsigned char *buffer = NULL;

BITMAPINFO bmi;
memset(&bmi, 0, sizeof(bmi));
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;      /* width of your image buffer */
bmi.bmiHeader.biHeight = -height;   /* height of your image buffer */
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

HBITMAP g_dibbmp = CreateDIBSection(hDesktopDC, &bmi, DIB_RGB_COLORS, (void **)&buffer, 0, 0);
if (!buffer)
{
    /* ERROR */
    printf("ERROR DIB could not create buffer\n");
}
else
{
    printf("DIB created buffer successfully\n");
    memcpy(buffer, rawdata, sizeof(rawdata));   // note: sizeof(rawdata) is the size of the pointer, not of the image
}
Please help.
Regards,
Techtotie.
Here's a snippet I put together from pieces of working code. The main differences I see are setting the mask bits and using a memory section.
// assumes height and width passed in
int bpp = 32;                       // bits per pixel
int stride = (width * (bpp / 8));
unsigned int byteCount = (unsigned int)(stride * height);

HANDLE hMemSection = ::CreateFileMapping( INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, byteCount, NULL );
if (hMemSection == NULL)
    return false;

BITMAPV5HEADER bmh;
memset( &bmh, 0, sizeof( BITMAPV5HEADER ) );
bmh.bV5Size = sizeof( BITMAPV5HEADER );
bmh.bV5Width = width;
bmh.bV5Height = -height;            // negative height = top-down DIB
bmh.bV5Planes = 1;
bmh.bV5BitCount = 32;
bmh.bV5Compression = BI_RGB;
bmh.bV5AlphaMask = 0xFF000000;
bmh.bV5RedMask   = 0x00FF0000;
bmh.bV5GreenMask = 0x0000FF00;
bmh.bV5BlueMask  = 0x000000FF;

void *pBits = NULL;                 // receives a pointer to the DIB bits
HDC hdc = ::GetDC( NULL );
HBITMAP hDIB = ::CreateDIBSection( hdc, (BITMAPINFO *) &bmh, DIB_RGB_COLORS,
                                   &pBits, hMemSection, (DWORD) 0 );
::ReleaseDC( NULL, hdc );

// Much later, when done manipulating the bitmap
::CloseHandle( hMemSection );
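With pBits in hand, the raw ARGB data from the question could then be copied into the DIB section row by row (a sketch assuming the source rows are also width * 4 bytes wide, e.g. a QImage in Format_ARGB32):

unsigned char *dst = (unsigned char *) pBits;
for (int y = 0; y < height; y++)
{
    memcpy(dst + y * stride, rawdata + y * stride, stride);
}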
Thanks for your answer, but my problem got solved. It was not actually a problem with the DIB creation; it was due to the wrong API being used for blitting.
I was using BitBlt, but that API does not take care of the alpha channel. Instead I tried
TransparentBlt (see: http://msdn.microsoft.com/en-us/library/windows/desktop/dd145141(v=vs.85).aspx)
and it worked, as that API takes care of copying the alpha values from the source DC to the destination DC.

Taking a snapshot of contents in CGL?

I want to create an image out of a Core OpenGL (CGL) context.
I used the following code, but it creates a black image. So I guess I cannot use glReadPixels there? Any other suggestions, please?
int myDataLength = 480 * 480 * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < 480; y++)
{
    for(int x = 0; x < 320 * 4; x++)
    {
        buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef image = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);
// PRINT image... It's black!!!
CGDataProviderRelease(provider);
free(buffer);
free(buffer2);
Before you do a glReadPixels call you must:
1. set proper packing (see the glPixelStorei reference page), and
2. select the right buffer to read from with glReadBuffer (front after swapping, back before swapping; I recommend swapping first and reading from the front buffer).
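For example, in front of the GL_RGBA / GL_UNSIGNED_BYTE read-back from the question, that would look something like:

glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows, no alignment padding
glReadBuffer(GL_FRONT);                // read the front buffer (after swapping)
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);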
