I am getting very poor drawing performance with Win32 GDI. It takes too much time and needs improving. Please advise.
Here is what I do.
HDC dc = GetDC(wnd);
HDC memoryDc = CreateCompatibleDC(dc);
HBITMAP memoryMapBitmap = CreateCompatibleBitmap(dc, 400, 400);
HGDIOBJ originalBitmap = SelectObject(memoryDc, memoryMapBitmap);
Then, I draw in a for-loop as follows.
HBRUSH brush = (HBRUSH)GetStockObject(DC_BRUSH);
SetDCBrushColor(memoryDc, colorRef);
FillRect(memoryDc, &rect, brush);
And finally, I do a cleanup
SelectObject(memoryDc, originalBitmap);
DeleteDC(memoryDc);
ReleaseDC(wnd, dc);
Drawing takes a lot of time (several seconds). Is there a way to draw faster with Win32?
Thanks in advance!
It looks like I have solved it. Below is the solution with some comments.
I have a dialog defined in an RC file. There is a control in the dialog that displays a bitmap image:
CONTROL "", IDC_MEMORY_MAP, WC_STATIC, SS_BITMAP | SS_CENTERIMAGE | SS_SUNKEN, 9, 21, 271, 338, WS_EX_LEFT
At run time I need to create, draw, and display a bitmap:
HWND map = GetDlgItem(dlg, IDC_MEMORY_MAP);
HBITMAP bitmap = createMemoryMapBitmap(map);
bitmap = (HBITMAP)SendMessage(map, STM_SETIMAGE, IMAGE_BITMAP, (LPARAM)bitmap);
DeleteObject(bitmap); // (!) this is a very important line, otherwise old bitmap leaks
Code that finds out the size of the bitmap to create:
HBITMAP createMemoryMapBitmap(HWND map) {
RECT rect = {0, 0, 0, 0};
GetClientRect(map, &rect);
SIZE size = {rect.right - rect.left, rect.bottom - rect.top};
HDC dc = GetDC(map);
HBITMAP bitmap = doCreateMemoryMapBitmap(dc, &size);
ReleaseDC(map, dc);
return bitmap;
}
Finally, we actually create the bitmap and draw on it:
HBITMAP doCreateMemoryMapBitmap(HDC dc, LPSIZE bitmapSize) {
// create 24bpp bitmap in memory in order to draw fast
BITMAPINFO info;
memset(&info, 0, sizeof(info));
info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
info.bmiHeader.biWidth = bitmapSize->cx;
info.bmiHeader.biHeight = bitmapSize->cy;
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = 24;
info.bmiHeader.biCompression = BI_RGB;
void *pixels = NULL;
HBITMAP memoryBitmap = CreateDIBSection(dc, &info, DIB_RGB_COLORS, &pixels, NULL, 0);
HDC memoryDc = CreateCompatibleDC(dc); // (!) memoryDc is attached to current thread
HGDIOBJ originalDcBitmap = SelectObject(memoryDc, memoryBitmap);
// drawing code here
// perform windows gdi cleanup
SelectObject(memoryDc, originalDcBitmap); // restore original bitmap in memoryDC (optional step)
DeleteDC(memoryDc); // this releases memoryBitmap from memoryDC
return memoryBitmap;
}
The idea is to create a 24bpp bitmap in memory and draw on it. This way drawing is fast, as @IInspectable pointed out.
If the display is in an indexed color mode, e.g. 16 or 256 colors, the native Windows static control appears to be smart enough to convert the color depth automatically when displaying the bitmap.
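As an aside (my own illustration, not part of the original solution): since CreateDIBSection returns a pointer to the pixel memory, the drawing step can also write pixels directly instead of going through GDI calls. A minimal sketch of what could stand in for the "// drawing code here" comment above, assuming the pixels pointer and bitmapSize from doCreateMemoryMapBitmap and the 24bpp bottom-up DIB created there:
GdiFlush(); // make sure any pending GDI drawing on the DIB section has finished
int stride = ((bitmapSize->cx * 3) + 3) & ~3; // 24bpp scanlines are padded to a multiple of 4 bytes
BYTE *row = (BYTE *)pixels;
for (int y = 0; y < bitmapSize->cy; ++y, row += stride) {
    for (int x = 0; x < bitmapSize->cx; ++x) {
        row[x * 3 + 0] = 0x00; // blue
        row[x * 3 + 1] = 0x80; // green
        row[x * 3 + 2] = 0xFF; // red
    }
}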
I'm new to Win32 and there are some concepts I haven't fully grasped. For one, the difference between HDC and HWND. I understand (or think I understand) that they're handles to objects and that an HDC can be obtained from an HWND, like in the case of BeginPaint:
hdc = BeginPaint(hWnd, &ps);
But I don't fully understand what separates them, as certain functions take an HDC as a parameter and others take an HWND.
Secondly, I'm not sure what SelectObject and GetObject do. I think that they associate handles with various objects, like in the case of this bitmap drawing function:
BOOL DrawBitmap(HDC hDC, INT x, INT y, INT width, INT height, HBITMAP hBitmap, DWORD dwROP)
{
    HDC hDCBits;
    BITMAP Bitmap;
    BOOL bResult;
    // Create a memory DC compatible with the target DC
    hDCBits = CreateCompatibleDC(hDC);
    // Query the bitmap's attributes (width, height, ...) into a BITMAP struct
    GetObject(hBitmap, sizeof(BITMAP), (LPSTR)&Bitmap);
    // Attach the bitmap to the memory DC so it can act as the blit source
    SelectObject(hDCBits, hBitmap);
    // Copy (and stretch) from the memory DC into the target DC
    bResult = StretchBlt(hDC, x, y, width, height,
                         hDCBits, 0, 0, Bitmap.bmWidth, Bitmap.bmHeight, dwROP);
    DeleteDC(hDCBits);
    return bResult;
}
But despite all of this, I don't understand exactly what they do or how they work. Thanks in advance.
I have obtained a screenshot by doing the following:
GetDesktopWindow
GetDC
GetClientRect
CreateCompatibleBitmap
This gives me an HBITMAP, which I can optionally take into an HDC with:
CreateCompatibleDC
My goal was to end up with a uint8 byte array from either step 4 (CreateCompatibleBitmap) or step 5 (CreateCompatibleDC). Is this possible?
Thanks
You need to create a new DC with CreateCompatibleDC(), create a DIB (device-independent bitmap) for this DC with CreateDIBSection(), select the DIB into the new DC with SelectObject(), then copy from your original DC to the new DC with BitBlt(). The pointer returned by CreateDIBSection() points to the raw pixel data. This data is allocated by the system, which means you don't need to allocate it yourself, but it will be freed when you call DeleteObject() on the DIB.
Here is an example in C:
HDC hdcMemoryDC = CreateCompatibleDC(yourDC);
BITMAPINFO bmi;
memset(&bmi, 0, sizeof(BITMAPINFO));
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height; // top-down
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
HBITMAP hbmp;
COLORREF *pixelBuffer;
hbmp = CreateDIBSection( hdcMemoryDC, &bmi, DIB_RGB_COLORS, (VOID**)&pixelBuffer, NULL, 0 );
SelectObject( hdcMemoryDC, hbmp );
BitBlt( hdcMemoryDC, 0, 0, width, height, yourDC, 0, 0, SRCCOPY );
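To get the bytes themselves (the uint8 array you asked about), pixelBuffer can simply be treated as a BYTE pointer once the blit has completed. A minimal sketch of that last step (my addition, assuming the width, height, hdcMemoryDC, hbmp and pixelBuffer variables from the example above):
GdiFlush(); // make sure GDI has finished writing to the DIB section
BYTE *bytes = (BYTE *)pixelBuffer;             // BGRA order, 4 bytes per pixel
size_t byteCount = (size_t)width * height * 4; // 32bpp rows have no padding
// ... use bytes/byteCount here (copy them out, save them, etc.) ...
// Cleanup: deleting the DIB also frees the pixel memory, so copy the data first if you need it later.
DeleteDC(hdcMemoryDC);
DeleteObject(hbmp);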
I want to calculate the full size of a window before opening it. I'm using AdjustWindowRectEx() to achieve this. My code looks like this for a window with a client size of 640x480:
wrect.left = 0;
wrect.top = 0;
wrect.right = 640;
wrect.bottom = 480;
AdjustWindowRectEx(&wrect, WS_CAPTION|WS_SYSMENU|WS_MINIMIZEBOX, FALSE, 0);
This returns the following values:
left: -3
top: -22
right: 643
bottom: 483
However, when opening the window using CreateWindowEx() and passing
wrect.right - wrect.left
wrect.bottom - wrect.top
as the window size, the window's physical size actually turns out to be 656x515 pixels. Still, GetWindowRect() returns 646x505, i.e. the same dimensions as returned by AdjustWindowRectEx(), but as I said, when I take a screenshot of the desktop and measure the window in a paint program, its physical size is actually 656x515 pixels. Does anybody have an explanation for this?
The client size is fine (640x480), but the border size seems to be calculated wrongly, because the border uses more pixels than AdjustWindowRectEx() and GetWindowRect() report.
I'm on Windows 7.
EDIT:
Is this downvoted because the title of the question is misleading? As stated in the MSDN docs, WS_OVERLAPPED is not supported by AdjustWindowRectEx(). So is there any other way to calculate the dimensions of a WS_OVERLAPPED window? Using WS_OVERLAPPEDWINDOW instead is not a solution because it sets WS_THICKFRAME and WS_MAXIMIZEBOX, which I don't want.
Here is some test code now which shows the problem. You can see that the client size is alright but the window's physical size is larger than what is returned by GetWindowRect().
#include <stdio.h>
#include <windows.h>
#define CLASSNAME "Test"
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
return DefWindowProc(hwnd, uMsg, wParam, lParam);
}
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
WNDCLASSEX wcx;
RECT wrect;
HWND hWnd;
char tmpstr[256];
memset(&wcx, 0, sizeof(WNDCLASSEX));
wcx.cbSize = sizeof(WNDCLASSEX);
wcx.style = CS_HREDRAW|CS_VREDRAW;
wcx.lpfnWndProc = WindowProc;
wcx.hInstance = hInstance;
wcx.hCursor = LoadCursor(NULL, IDC_ARROW);
wcx.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH); // important! otherwise a borderless window resize will not be drawn correctly
wcx.lpszClassName = CLASSNAME;
RegisterClassEx(&wcx);
wrect.left = 0;
wrect.top = 0;
wrect.right = 640;
wrect.bottom = 480;
AdjustWindowRectEx(&wrect, WS_CAPTION|WS_SYSMENU|WS_MINIMIZEBOX, FALSE, 0);
hWnd = CreateWindowEx(0, CLASSNAME, "Test", WS_CAPTION|WS_SYSMENU|WS_MINIMIZEBOX|WS_OVERLAPPED, 0, 0, wrect.right - wrect.left, wrect.bottom - wrect.top, NULL, NULL, hInstance, NULL);
ShowWindow(hWnd, SW_SHOWNORMAL);
GetWindowRect(hWnd, &wrect);
sprintf(tmpstr, "%d %d %d %d\n", wrect.left, wrect.top, wrect.right, wrect.bottom);
AllocConsole();
WriteConsole(GetStdHandle(STD_OUTPUT_HANDLE), tmpstr, strlen(tmpstr), NULL, NULL);
GetClientRect(hWnd, &wrect);
sprintf(tmpstr, "%d %d %d %d\n", wrect.left, wrect.top, wrect.right, wrect.bottom);
WriteConsole(GetStdHandle(STD_OUTPUT_HANDLE), tmpstr, strlen(tmpstr), NULL, NULL);
Sleep(10000);
DestroyWindow(hWnd);
UnregisterClass(CLASSNAME, hInstance);
return 0;
}
The GetWindowRect API doesn't give the correct size:
Due to compatibility requirements, certain metrics are reported in such a way that they're not consistent with what is actually drawn on the screen when Aero Glass (more accurately, "Windows Vista Aero") is enabled. This sort of approach is needed in order to change the look of the system for the vast majority of apps for which this isn't an issue. However, there's been a recent change in the system which will be coming out in Vista RC1 that will return the correct rendered value from GetWindowRect() for executables that are linked with "winver = 6.0". This allows new and newly-linked applications to get the "correct" values from GetWindowRect().
GetWindowRect on non-resizable windows under Aero Glass
Instead, use DwmGetWindowAttribute to get the correct size:
DwmGetWindowAttribute(hWnd, DWMWA_EXTENDED_FRAME_BOUNDS, &wrect, sizeof(wrect));
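For completeness, here is a minimal sketch of how that call can be wrapped (my own illustration, assuming dwmapi.h and dwmapi.lib are available, i.e. Windows Vista or later):
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Returns the window size as actually rendered on screen (including the Aero frame).
BOOL GetRenderedWindowSize(HWND hWnd, int *width, int *height)
{
    RECT frame;
    if (FAILED(DwmGetWindowAttribute(hWnd, DWMWA_EXTENDED_FRAME_BOUNDS, &frame, sizeof(frame))))
        return FALSE;
    *width  = frame.right - frame.left;
    *height = frame.bottom - frame.top;
    return TRUE;
}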
If AdjustWindowRectEx() doesn't work, just create the window and adjust the size after it has been created.
// Create the window with a nearly correct size
RECT rect;
rect.left = 0;
rect.top = 0;
rect.right = 640;
rect.bottom = 480;
AdjustWindowRectEx(&rect, WS_CAPTION|WS_SYSMENU|WS_MINIMIZEBOX, FALSE, 0);
// try
hWnd = CreateWindowEx(0, CLASSNAME, "Test", WS_CAPTION|WS_SYSMENU|WS_MINIMIZEBOX|WS_OVERLAPPED, 0, 0, rect.right - rect.left, rect.bottom - rect.top, NULL, NULL, hInstance, NULL);
// Get the size that was really created and work out how much client area is still missing
RECT clientCreated;
GetClientRect(hWnd, &clientCreated);
RECT windowCreated;
GetWindowRect(hWnd, &windowCreated);
int width  = (windowCreated.right - windowCreated.left) + (640 - clientCreated.right);
int height = (windowCreated.bottom - windowCreated.top) + (480 - clientCreated.bottom);
// Resize to let the window fit the inner client area
SetWindowPos(hWnd, NULL, 0, 0, width, height, SWP_NOMOVE | SWP_NOZORDER);
// Show it.
ShowWindow(hWnd, SW_SHOWNORMAL);
Code is not tested. Hopefully I have no bugs or syntax errors in it.
I'm making progress developing a '3D desktop' DirectX app that needs to display the current contents of a desktop window (e.g. "Calculator") as a 2D texture on a rectangular surface in DirectX 11. I'm so close, but really struggling with the screenshot BMP -> Texture2D step. I do have screenshot -> HBITMAP and DDS file -> rendered texture working successfully, but can't complete screenshot -> rendered texture.
So far I have working the 'capture the window as a screenshot' bit:
RECT user_window_rectangle;
HWND user_window = FindWindow(NULL, TEXT("Calculator"));
GetClientRect(user_window, &user_window_rectangle);
HDC hdcScreen = GetDC(NULL);
HDC hdc = CreateCompatibleDC(hdcScreen);
UINT screenshot_width = user_window_rectangle.right - user_window_rectangle.left;
UINT screenshot_height = user_window_rectangle.bottom - user_window_rectangle.top;
hbmp = CreateCompatibleBitmap(hdcScreen, screenshot_width, screenshot_height);
SelectObject(hdc, hbmp);
PrintWindow(user_window, hdc, PW_CLIENTONLY);
At this point I have the window bitmap referenced by HBITMAP hbmp.
Also working is my code to render a DDS file as a texture on a directx/3d rectangle:
ID3D11Device *dev;
ID3D11DeviceContext *dev_context;
...
dev_context->PSSetShaderResources(0, 1, &shader_resource_view);
dev_context->PSSetSamplers(0, 1, &tex_sampler_state);
...
DirectX::TexMetadata tex_metadata;
DirectX::ScratchImage image;
hr = LoadFromDDSFile(L"Earth.dds", DirectX::DDS_FLAGS_NONE, &tex_metadata, image);
hr = CreateShaderResourceView(dev, image.GetImages(), image.GetImageCount(), tex_metadata, &shader_resource_view);
Pixel shader is:
Texture2D ObjTexture;
SamplerState ObjSamplerState;
float4 PShader(float4 pos : SV_POSITION, float4 color : COLOR, float2 tex : TEXCOORD) : SV_TARGET
{
    return ObjTexture.Sample( ObjSamplerState, tex );
}
The sampler state (with the Filter field left at its zero-initialized default) is:
D3D11_SAMPLER_DESC sampler_desc;
ZeroMemory(&sampler_desc, sizeof(sampler_desc));
sampler_desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampler_desc.MinLOD = 0;
sampler_desc.MaxLOD = D3D11_FLOAT32_MAX;
hr = dev->CreateSamplerState(&sampler_desc, &tex_sampler_state);
Question: how do I replace the LoadFromDDSFile bit with some equivalent that takes the HBITMAP from the windows screencapture and ends up with it on the graphics card as ObjTexture ?
Below is my best shot at bridging from the screenshot HBITMAP hbmp to the shader resource screenshot_texture, but it gives a memory access violation from the graphics driver (I think due to my "data.pSysMem = &bmp.bmBits", but no idea really):
GetObject(hbmp, sizeof(BITMAP), (LPSTR)&bmp)
D3D11_TEXTURE2D_DESC screenshot_desc = CD3D11_TEXTURE2D_DESC(DXGI_FORMAT_R8G8B8A8_UNORM, bmp.bmWidth, bmp.bmHeight, 1,
1,
D3D11_BIND_SHADER_RESOURCE
);
int bytes_per_pixel = 4;
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = &bmp.bmBits; //pixel buffer
data.SysMemPitch = bytes_per_pixel * bmp.bmWidth;// line size in byte
data.SysMemSlicePitch = bytes_per_pixel * bmp.bmWidth * bmp.bmHeight;// total buffer size in byte
hr = dev->CreateTexture2D(
&screenshot_desc, //texture format
&data, // pixel buffer use to fill the texture
&screenshot_texture // created texture
);
SOLUTION:
The main issue was that using &bmp.bmBits directly as a pixel buffer caused memory conflicts within the graphics driver; this was resolved by using malloc to allocate an appropriately sized block of memory to store the pixel data. Thanks to Chuck Walbourn for helping with my poking around in the dark to work out how the pixel data is actually stored (it was actually 32 bits/pixel by default). It's still possible/likely some of the code is relying on luck to read the pixel data correctly, but it's been improved with Chuck's input.
My basic technique was:
FindWindow to get the client window on the desktop
CreateCompatibleBitmap and SelectObject and PrintWindow to get a HBITMAP to the snapshot
malloc to allocate the correct amount of space for a (byte*)pixel buffer
GetDIBits to populate the (byte*)pixel buffer from the HBITMAP
CreateTexture2D to build the texture buffer
CreateShaderResourceView to map the texture to the graphics pixel shader
So working code to screenshot a windows desktop window and pass that as a texture to a direct3d app is:
RECT user_window_rectangle;
HWND user_window = FindWindow(NULL, TEXT("Calculator")); // the window can't be minimized
if (user_window == NULL)
{
MessageBoxA(NULL, "Can't find Calculator", "Camvas", MB_OK);
return;
}
GetClientRect(user_window, &user_window_rectangle);
//create
HDC hdcScreen = GetDC(NULL);
HDC hdc = CreateCompatibleDC(hdcScreen);
UINT screenshot_width = user_window_rectangle.right - user_window_rectangle.left;
UINT screenshot_height = user_window_rectangle.bottom - user_window_rectangle.top;
hbmp = CreateCompatibleBitmap(hdcScreen, screenshot_width, screenshot_height);
HGDIOBJ oldBmp = SelectObject(hdc, hbmp);
//Print to memory hdc
PrintWindow(user_window, hdc, PW_CLIENTONLY);
BITMAPINFOHEADER bmih;
ZeroMemory(&bmih, sizeof(BITMAPINFOHEADER));
bmih.biSize = sizeof(BITMAPINFOHEADER);
bmih.biPlanes = 1;
bmih.biBitCount = 32;
bmih.biWidth = screenshot_width;
bmih.biHeight = -(LONG)screenshot_height; // negative height = top-down DIB
bmih.biCompression = BI_RGB;
bmih.biSizeImage = 0;
int bytes_per_pixel = bmih.biBitCount / 8;
BYTE *pixels = (BYTE*)malloc(bytes_per_pixel * screenshot_width * screenshot_height);
BITMAPINFO bmi = { 0 };
bmi.bmiHeader = bmih;
SelectObject(hdc, oldBmp); // deselect hbmp before GetDIBits, as its documentation requires
int row_count = GetDIBits(hdc, hbmp, 0, screenshot_height, pixels, &bmi, DIB_RGB_COLORS);
D3D11_TEXTURE2D_DESC screenshot_desc = CD3D11_TEXTURE2D_DESC(
DXGI_FORMAT_B8G8R8A8_UNORM, // format
screenshot_width, // width
screenshot_height, // height
1, // arraySize
1, // mipLevels
D3D11_BIND_SHADER_RESOURCE, // bindFlags
D3D11_USAGE_DYNAMIC, // usage
D3D11_CPU_ACCESS_WRITE, // cpuaccessFlags
1, // sampleCount
0, // sampleQuality
0 // miscFlags
);
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = pixels; // pixel buffer filled by GetDIBits
data.SysMemPitch = bytes_per_pixel * screenshot_width;// line size in byte
data.SysMemSlicePitch = bytes_per_pixel * screenshot_width * screenshot_height;
hr = dev->CreateTexture2D(
&screenshot_desc, //texture format
&data, // pixel buffer use to fill the texture
&screenshot_texture // created texture
);
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = screenshot_desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = screenshot_desc.MipLevels;
dev->CreateShaderResourceView(screenshot_texture, &srvDesc, &shader_resource_view);
You are making a lot of assumptions here that the BITMAP returned is actually in 32-bit RGBA form. It is likely not at all in that format, and in any case you need to validate the contents of bmPlanes to be 1 and bmBitsPixel to be 32 if you are assuming it is 4-bytes per pixel. You should read more about the BMP format.
BMPs use BGRA order, so you can use DXGI_FORMAT_B8G8R8A8_UNORM for the case where bmBitsPixel is 32.
Secondly, you need to derive pitch from bmWidthBytes and not bmWidth.
data.pSysMem = bmp.bmBits;            // bmBits is already a pointer to the bits
data.SysMemPitch = bmp.bmWidthBytes;  // line size in bytes
data.SysMemSlicePitch = bmp.bmWidthBytes * bmp.bmHeight; // total buffer size in bytes
If bmBitsPixel is 24, there is no DXGI format equivalent to that. You have to copy the data to a 32-bit format such as DXGI_FORMAT_B8G8R8X8_UNORM.
If bmBitsPixel is 15 or 16, you can use DXGI_FORMAT_B5G5R5A1_UNORM on a system with Direct3D 11.1, but remember that 16-bit DXGI formats are not always supported depending on the driver. Otherwise you'll have to convert this data to something else.
For bmBitsPixel values of 1, 2, 4, or 8 you have to convert them as there are no DXGI texture formats that are equivalent.
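Putting those checks together, here is a minimal sketch of the validation and pitch handling described above (my own illustration; it assumes hbmp is the captured HBITMAP and only takes the fast path when the bits are directly accessible):
// Sketch only: validate the bitmap layout before assuming 4 bytes per pixel,
// and derive the pitch from bmWidthBytes rather than bmWidth.
BITMAP bmp = {};
GetObject(hbmp, sizeof(BITMAP), &bmp);
if (bmp.bmPlanes != 1 || bmp.bmBitsPixel != 32 || bmp.bmBits == NULL)
{
    // Not a 32bpp bitmap with accessible bits: convert first (e.g. with GetDIBits
    // into a 32bpp top-down buffer, as in the SOLUTION code above).
}
else
{
    D3D11_SUBRESOURCE_DATA data = {};
    data.pSysMem = bmp.bmBits;                               // BGRA pixels
    data.SysMemPitch = bmp.bmWidthBytes;                     // bytes per scanline
    data.SysMemSlicePitch = bmp.bmWidthBytes * bmp.bmHeight; // whole image
    // ... CreateTexture2D with DXGI_FORMAT_B8G8R8A8_UNORM as shown above ...
}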