Is there any way to poll the actual physical aspect ratio (or even the dimensions) of the display device (not the display mode resolution - the screen itself)? And would the method work correctly regardless of whether the correct driver is installed for the monitor? I'm looking for Win32 API calls that will work on all Win32 platforms.
This is the closest I could come up with, using GetDeviceCaps() to determine the aspect ratio of the device corresponding to the desktop DC. A snippet from my code...
HDC hDC = GetDC(NULL);
if (hDC != NULL)
{
    // HORZSIZE/VERTSIZE: physical width/height of the display, in millimeters
    float dw = (float)GetDeviceCaps(hDC, HORZSIZE);
    float dh = (float)GetDeviceCaps(hDC, VERTSIZE);
    ReleaseDC(NULL, hDC);
    // Equivalent of reducing a fraction
    if (dw > dh)
    {
        dw /= dh;
        dh = 1.0f;
    }
    else
    {
        dh /= dw;
        dw = 1.0f;
    }
    wcp.fAspectNumerator = dw;
    wcp.fAspectDenominator = dh;
}
I'm creating a generic SNES tilemap editor (similar to NES Screen Tool), which means I'm drawing a lot of 4bpp tiles. However, my graphics loop takes too long to run, even with CachedBitmaps, which can't have their palettes changed - and I may need to switch between 8 palettes. I can deal with the SNES format and size of things, but I'm struggling with the Windows side.
// basically the entire graphics drawing routine
case WM_PAINT: {
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    Gdiplus::Graphics graphics(hdc);
    graphics.Clear(ARGB1555toARGB8888(CGRAM[0])); // convert 1st 15-bit CGRAM color to 32-bit & clear bkgd
    // tileset2[i]->SetPalette(colorpalette); // called in tileset loading to test 1 palette
    for (uint16_t i = 0; i < 1024; i++) {
        tilesetX[i] = new Gdiplus::CachedBitmap(tileset2[i], &graphics);
    }
    /* struct SNES_Tile {
        uint16_t tileIndex : 10;
        uint16_t palette   : 3;
        uint16_t priority  : 1; // (irrelevant for this project)
        uint16_t horzFlip  : 1;
        uint16_t vertFlip  : 1;
    }; */
    // I can see each individual tile being drawn
    for (int y = 0; y < 32; y++) {
        for (int x = 0; x < 32; x++) {
            // assume tilemap is set to 32x32, and not 64x32 or 32x64 or 64x64
            // BG2[y * 32 + x] & 0x03FF : get tile index from VRAM and strip attributes
            // tilesetX[...] : get CachedBitmap to draw
            graphics.DrawCachedBitmap(tilesetX[BG2[y * 32 + x] & 0x03FF], x * BG2CHRSize, y * BG2CHRSize);
        }
    }
    EndPaint(hwnd, &ps);
    break;
}
I am early enough in my program that rewriting the entire graphics routine wouldn't be too much of a hassle.
Should I give up on GDI+ and switch to Direct2D or something else? Is there a faster way to draw 4bpp bitmaps without having to create a copy for each palette?
EDIT:
The reason my graphics drawing routine was so slow was that I was drawing directly to the screen. It is much faster to draw to a separate bitmap as a buffer, then draw the buffer to the screen.
Updating the tile's palette when drawing to the buffer results in perfectly reasonable speeds.
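A minimal sketch of that buffered approach (the buffer size and variable names here are illustrative, not from my actual code):
// Draw everything into an off-screen GDI+ bitmap first...
Gdiplus::Bitmap buffer(256, 256, PixelFormat32bppARGB); // 32x32 tiles of 8x8 pixels
Gdiplus::Graphics bufferGfx(&buffer);
bufferGfx.Clear(ARGB1555toARGB8888(CGRAM[0]));
// ... tile-drawing loop goes here, with SetPalette calls as needed ...

// ...then blit the finished buffer to the window in a single call.
Gdiplus::Graphics screen(hdc);
screen.DrawImage(&buffer, 0, 0);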
I've been going through these tutorials (only 2 links allowed for me): https://code.msdn.microsoft.com/Direct3D-Tutorial-Win32-829979ef
and reading through the Direct3D 11 Graphics Pipeline documentation: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882%28v=vs.85%29.aspx
I currently have a Pixel (aka. Fragment) Shader coded in HLSL, consisting of the following code:
//Pixel Shader input.
struct psInput
{
    float4 Position : SV_POSITION;
    float4 Color    : COLOR;
};

//Pixel (aka. Fragment) Shader.
float4 PS(psInput input) : SV_TARGET
{
    return input.Color;
}
What I (think I) would like to do is multisample and access nearby pixel data for each pixel in my Pixel Shader so that I can perform a sort of custom anti-aliasing like FXAA (http://developer.download.nvidia.com/assets/gamedev/files/sdk/11/FXAA_WhitePaper.pdf). From my understanding, I need to pass a texture to HLSL using PSSetShaderResources for each render, but beyond that I have no idea. So, my question is:
How do I send nearby pixel data to a Pixel-Shader in Direct3D 11?
Being able to do this kind of thing would also be extremely beneficial to my understanding of how C++ and HLSL interact with each other, beyond the standard "pass some float4s to the shader" that I find in tutorials. It seems that this is the most crucial aspect of D3D development, and yet I can't find very many examples of it online.
I've considered traditional MSAA (MultiSample Anti-Aliasing), but I can't find any information on how to do it successfully in D3D 11, beyond the fact that I need to be using a "BitBlt" (bit-block transfer) model swap chain first. (See the DXGI_SAMPLE_DESC member of DXGI_SWAP_CHAIN_DESC1 and DXGI_SWAP_CHAIN_DESC; only a count of 1 and a quality of 0 (no AA) will result in things being drawn.) Additionally, I would like to know how to perform the above for general understanding, in case I need it for other aspects of my project. Answers on how to perform MSAA in D3D 11 are welcome too, though.
Please use D3D 11 and HLSL code only.
To do custom anti-aliasing like FXAA, you'll need to render the scene to an offscreen render target (see the sketch after this list):
-Create an ID3D11Texture2D with the bind flags D3D11_BIND_RENDER_TARGET and D3D11_BIND_SHADER_RESOURCE.
-Create an ID3D11ShaderResourceView and an ID3D11RenderTargetView for the texture created in step 1.
-Render the scene to the ID3D11RenderTargetView created in step 2.
-Set the backbuffer as the render target and bind the ID3D11ShaderResourceView created in step 2 to the correct pixel shader slot.
-Render a fullscreen triangle covering the entire screen; you'll then be able to sample the texture containing the scene in the pixel shader (use the Load() function).
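A minimal sketch of that setup, assuming an existing device, context, width and height, with error handling omitted:
// Sketch only: an offscreen target that can also be read as a shader resource.
D3D11_TEXTURE2D_DESC td = {};
td.Width = width;
td.Height = height;
td.MipLevels = 1U;
td.ArraySize = 1U;
td.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
td.SampleDesc.Count = 1U;
td.Usage = D3D11_USAGE_DEFAULT;
td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* sceneTexture = nullptr;
ID3D11RenderTargetView* sceneRTV = nullptr;
ID3D11ShaderResourceView* sceneSRV = nullptr;
device->CreateTexture2D(&td, nullptr, &sceneTexture);
device->CreateRenderTargetView(sceneTexture, nullptr, &sceneRTV);
device->CreateShaderResourceView(sceneTexture, nullptr, &sceneSRV);

// Pass 1: render the scene into sceneRTV instead of the backbuffer.
// Pass 2: bind the backbuffer RTV again, expose the scene texture to the
// pixel shader, and draw a fullscreen triangle (3 vertices whose positions
// are generated in the vertex shader).
context->PSSetShaderResources(0U, 1U, &sceneSRV);
context->Draw(3U, 0U);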
When you tried traditional MSAA, did you remember to set MultisampleEnable in the rasterizer state?
And again I answer my own question, sort of (I never did use FXAA...). I am providing my answer here to be nice to those who are following in my footsteps.
It turns out I was missing the depth stencil view for MSAA. You want SampleCount to be 1U for disabled MSAA, 2U for 2x MSAA, 4U for 4x MSAA, 8U for 8x MSAA, etc. (Use ID3D11Device::CheckMultisampleQualityLevels to "probe" for viable MSAA levels...) You pretty much always want to use a quality level of 0U for disabled MSAA and 1U for enabled MSAA.
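For illustration, probing for 4x support might look like this (a sketch; D3DDevice and the format are taken from the code below):
// Ask the device how many quality levels 4x MSAA supports for this format.
UINT qualityLevels = 0U;
HRESULT hr = D3DDevice->CheckMultisampleQualityLevels(
    DXGI_FORMAT_R8G8B8A8_UNORM, 4U, &qualityLevels);
if (SUCCEEDED(hr) && qualityLevels > 0U)
{
    // 4x MSAA is viable; valid Quality values are 0 .. qualityLevels - 1.
}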
Below is my working MSAA code (you should be able to fill in the rest). Note that I used DXGI_FORMAT_D24_UNORM_S8_UINT and D3D11_DSV_DIMENSION_TEXTURE2DMS (for the multisampled case), that the Format values for the depth texture and depth stencil view are the same, and that the SampleCount and SampleQuality values are the same.
Good luck!
unsigned int SampleCount = 1U;
unsigned int SampleQuality = (SampleCount > 1U ? 1U : 0U);

//Create swap chain.
IDXGIFactory2* dxgiFactory2 = nullptr;
d3dResult = dxgiFactory->QueryInterface(__uuidof(IDXGIFactory2), reinterpret_cast<void**>(&dxgiFactory2));
if (dxgiFactory2)
{
    //DirectX 11.1 or later.
    d3dResult = D3DDevice->QueryInterface(__uuidof(ID3D11Device1), reinterpret_cast<void**>(&D3DDevice1));
    if (SUCCEEDED(d3dResult))
    {
        D3DDeviceContext->QueryInterface(__uuidof(ID3D11DeviceContext1), reinterpret_cast<void**>(&D3DDeviceContext1));
    }
    DXGI_SWAP_CHAIN_DESC1 swapChain;
    ZeroMemory(&swapChain, sizeof(swapChain));
    swapChain.Width = width;
    swapChain.Height = height;
    swapChain.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    swapChain.SampleDesc.Count = SampleCount;
    swapChain.SampleDesc.Quality = SampleQuality;
    swapChain.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChain.BufferCount = 2U;
    d3dResult = dxgiFactory2->CreateSwapChainForHwnd(D3DDevice, w32Window, &swapChain, nullptr, nullptr, &SwapChain1);
    if (SUCCEEDED(d3dResult))
    {
        d3dResult = SwapChain1->QueryInterface(__uuidof(IDXGISwapChain), reinterpret_cast<void**>(&SwapChain));
    }
    dxgiFactory2->Release();
}
else
{
    //DirectX 11.0.
    DXGI_SWAP_CHAIN_DESC swapChain;
    ZeroMemory(&swapChain, sizeof(swapChain));
    swapChain.BufferCount = 2U;
    swapChain.BufferDesc.Width = width;
    swapChain.BufferDesc.Height = height;
    swapChain.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    swapChain.BufferDesc.RefreshRate.Numerator = 60U;
    swapChain.BufferDesc.RefreshRate.Denominator = 1U;
    swapChain.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChain.OutputWindow = w32Window;
    swapChain.SampleDesc.Count = SampleCount;
    swapChain.SampleDesc.Quality = SampleQuality;
    swapChain.Windowed = true;
    d3dResult = dxgiFactory->CreateSwapChain(D3DDevice, &swapChain, &SwapChain);
}

//Disable Alt + Enter and Print Screen shortcuts.
dxgiFactory->MakeWindowAssociation(w32Window, DXGI_MWA_NO_PRINT_SCREEN | DXGI_MWA_NO_ALT_ENTER);
dxgiFactory->Release();
if (FAILED(d3dResult))
{
    return false;
}

//Create render target view.
ID3D11Texture2D* backBuffer = nullptr;
d3dResult = SwapChain->GetBuffer(0U, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer));
if (FAILED(d3dResult))
{
    return false;
}
d3dResult = D3DDevice->CreateRenderTargetView(backBuffer, nullptr, &RenderTargetView);
backBuffer->Release();
if (FAILED(d3dResult))
{
    return false;
}

//Create depth stencil texture.
ID3D11Texture2D* DepthStencilTexture = nullptr;
D3D11_TEXTURE2D_DESC depthTextureLayout;
ZeroMemory(&depthTextureLayout, sizeof(depthTextureLayout));
depthTextureLayout.Width = width;
depthTextureLayout.Height = height;
depthTextureLayout.MipLevels = 1U;
depthTextureLayout.ArraySize = 1U;
depthTextureLayout.Usage = D3D11_USAGE_DEFAULT;
depthTextureLayout.CPUAccessFlags = 0U;
depthTextureLayout.MiscFlags = 0U;
depthTextureLayout.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthTextureLayout.SampleDesc.Count = SampleCount;
depthTextureLayout.SampleDesc.Quality = SampleQuality;
depthTextureLayout.BindFlags = D3D11_BIND_DEPTH_STENCIL;
d3dResult = D3DDevice->CreateTexture2D(&depthTextureLayout, nullptr, &DepthStencilTexture);
if (FAILED(d3dResult))
{
    return false;
}

//Create depth stencil state.
D3D11_DEPTH_STENCIL_DESC depthStencilLayout;
depthStencilLayout.DepthEnable = true;
depthStencilLayout.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilLayout.DepthFunc = D3D11_COMPARISON_LESS;
depthStencilLayout.StencilEnable = true;
depthStencilLayout.StencilReadMask = 0xFF;
depthStencilLayout.StencilWriteMask = 0xFF;
depthStencilLayout.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthStencilLayout.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthStencilLayout.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthStencilLayout.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
ID3D11DepthStencilState* depthStencilState;
D3DDevice->CreateDepthStencilState(&depthStencilLayout, &depthStencilState);
D3DDeviceContext->OMSetDepthStencilState(depthStencilState, 1U);

//Create depth stencil view.
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewLayout;
ZeroMemory(&depthStencilViewLayout, sizeof(depthStencilViewLayout));
depthStencilViewLayout.Format = depthTextureLayout.Format;
//The view dimension must match the texture: TEXTURE2DMS when multisampled,
//plain TEXTURE2D when SampleCount is 1 (i.e. MSAA disabled).
depthStencilViewLayout.ViewDimension = (SampleCount > 1U) ? D3D11_DSV_DIMENSION_TEXTURE2DMS : D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewLayout.Texture2D.MipSlice = 0U;
d3dResult = D3DDevice->CreateDepthStencilView(DepthStencilTexture, &depthStencilViewLayout, &DepthStencilView);
DepthStencilTexture->Release();
if (FAILED(d3dResult))
{
    return false;
}

//Set output-merger render targets.
D3DDeviceContext->OMSetRenderTargets(1U, &RenderTargetView, DepthStencilView);
I timed a DDB drawing operation which uses multiple StretchBlt and StretchDIBits calls,
and I found that the time to complete increases or decreases proportionally with the destination window size.
With a 900x600 window it takes around 5 ms, but at 1920x1080 it takes as much as 55 ms (the source image is 1280x640).
It seems the Stretch... APIs don't use any hardware acceleration features.
The source image (actually a temporary drawing canvas) is created with CreateDIBSection, because I need the resulting (stretched and merged) bitmap's pixel data for every frame drawn.
Let's assume Windows GDI is hopeless. What, then, is a promising alternative?
I considered D3D, and D2D with the WIC method (write to a WIC bitmap, draw it with D2D, then read back pixel data from the WIC bitmap).
I planned to try the D2D-with-WIC method because I will need extensive text drawing features sometime soon.
But it seems WIC is not that promising: What is the most effective pixel format for WIC bitmap processing?
I've implemented the D2D + WIC routine today. The test results are really good.
With my previous GDI StretchDIBits version, it took 20-60 ms to draw a 1280x640 DDB into a 1920x1080 window. After switching to Direct2D + WIC, it usually takes under 5 ms, and the picture quality looks better too.
I used an ID2D1HwndRenderTarget together with a WicBitmapRenderTarget, because I need to read/write raw pixel data.
The HwndRenderTarget is only used for screen painting (WM_PAINT).
The main advantage of the HwndRenderTarget is that the destination window size doesn't affect drawing performance.
The WicBitmapRenderTarget is used as a temporary drawing canvas (like a memory DC in GDI drawing). We can create a WicBitmapRenderTarget over a WIC bitmap object (like a GDI DIBSection), and we can read/write raw pixel data from/to this WIC bitmap at any time. It's also very fast. As a side note, the somewhat similar D3D GetFrontBufferData call is really slow.
Actual pixel I/O is done through the IWICBitmap and IWICBitmapLock interfaces.
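For context, creating the WIC bitmap and its render target looks roughly like this (a sketch; wicFactory and d2dFactory are assumed to exist, and error handling is omitted):
// A 32bpp premultiplied-BGRA WIC bitmap (the format Direct2D expects),
// then a Direct2D render target that draws into it.
IWICBitmapPtr wicCanvas;
wicFactory->CreateBitmap(width, height,
                         GUID_WICPixelFormat32bppPBGRA,
                         WICBitmapCacheOnDemand,
                         &wicCanvas);
ID2D1RenderTarget* wicRenderTarget = nullptr;
d2dFactory->CreateWicBitmapRenderTarget(wicCanvas,
                                        D2D1::RenderTargetProperties(),
                                        &wicRenderTarget);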
Writing:
IWICBitmapPtr m_wicRemote;
...
const uint8* image = ...;
...
WICRect rcLock = { 0, 0, width, height };
IWICBitmapLockPtr wicLock;
hr = m_wicRemote->Lock(&rcLock, WICBitmapLockWrite, &wicLock);
if (SUCCEEDED(hr))
{
    UINT cbBufferSize = 0;
    BYTE* pv = NULL;
    hr = wicLock->GetDataPointer(&cbBufferSize, &pv);
    if (SUCCEEDED(hr))
    {
        memcpy(pv, image, cbBufferSize);
    }
}
m_wicRenderTarget->BeginDraw();
m_wicRenderTarget->SetTransform(D2D1::Matrix3x2F::Identity());
ID2D1BitmapPtr d2dBitmap;
hr = m_wicRenderTarget->CreateBitmapFromWicBitmap(m_wicRemote, &d2dBitmap.GetInterfacePtr());
if (SUCCEEDED(hr))
{
    float cw = (renderTargetSize.width / 2);
    float ch = renderTargetSize.height;
    float x, y, w, h;
    FitFrameToCenter(cw, ch, (float)width, (float)height, x, y, w, h);
    m_wicRenderTarget->DrawBitmap(d2dBitmap, D2D1::RectF(x, y, x + w, y + h));
}
m_wicRenderTarget->EndDraw();
Reading:
IWICBitmapPtr m_wicCanvas;
IWICBitmapLockPtr m_wicLockedData;
...
UINT width, height;
HRESULT hr = m_wicCanvas->GetSize(&width, &height);
if (SUCCEEDED(hr))
{
    WICRect rcLock = { 0, 0, width, height };
    hr = m_wicCanvas->Lock(&rcLock, WICBitmapLockRead, &m_wicLockedData);
    if (SUCCEEDED(hr))
    {
        UINT cbBufferSize = 0;
        BYTE* pv = NULL;
        hr = m_wicLockedData->GetDataPointer(&cbBufferSize, &pv);
        if (SUCCEEDED(hr))
        {
            return pv; // return data pointer
            // need to Release m_wicLockedData after reading is done
        }
    }
}
Drawing:
ID2D1HwndRenderTargetPtr m_renderTarget;
...
D2D1_SIZE_F renderTargetSize = m_renderTarget->GetSize();
m_renderTarget->BeginDraw();
m_renderTarget->SetTransform(D2D1::Matrix3x2F::Identity());
m_renderTarget->Clear(D2D1::ColorF(D2D1::ColorF::Black));
ID2D1BitmapPtr d2dBitmap;
hr = m_renderTarget->CreateBitmapFromWicBitmap(m_wicCanvas, &d2dBitmap.GetInterfacePtr());
if (SUCCEEDED(hr))
{
    UINT width, height;
    hr = m_wicCanvas->GetSize(&width, &height);
    if (SUCCEEDED(hr))
    {
        float x, y, w, h;
        FitFrameToCenter(renderTargetSize.width, renderTargetSize.height, (float)width, (float)height, x, y, w, h);
        m_renderTarget->DrawBitmap(d2dBitmap, D2D1::RectF(x, y, x + w, y + h));
    }
}
m_renderTarget->EndDraw();
In my opinion, the GDI Stretch... APIs are practically useless on Windows 7+ for performance-sensitive applications.
Also note that, unlike Direct3D, basic graphics operations such as text drawing and line drawing are really simple in Direct2D.
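For example, drawing a line with Direct2D is a one-liner (illustrative sketch; brush here would be an ID2D1SolidColorBrush created from the render target beforehand):
m_renderTarget->BeginDraw();
m_renderTarget->DrawLine(D2D1::Point2F(0.0f, 0.0f),     // start point
                         D2D1::Point2F(100.0f, 100.0f), // end point
                         brush,                          // assumed brush object
                         2.0f);                          // stroke width
m_renderTarget->EndDraw();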
I want to draw a lot of lines in the WM_PAINT message handler with the following code.
//DrawLine with double buffering
LRESULT CALLBACK CMyDoc::OnPaint(HWND hWnd, WPARAM wParam, LPARAM lParam)
{
    std::vector<Gdiplus::Point> points;
    std::vector<Gdiplus::Point>::iterator iter1, iter2;
    HDC hdc, hdcMem;
    HBITMAP hbmScreen, hbmOldBitmap;
    PAINTSTRUCT ps;
    RECT rect;
    hdc = BeginPaint(hWnd, &ps);
    //Create memory dc
    hdcMem = CreateCompatibleDC(hdc);
    GetClientRect(hWnd, &rect);
    hbmScreen = CreateCompatibleBitmap(hdc, rect.right, rect.bottom);
    hbmOldBitmap = (HBITMAP)SelectObject(hdcMem, hbmScreen);
    //Fill the rect with white
    FillRect(hdcMem, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));
    //Draw the lines
    Gdiplus::Graphics graphics(hdcMem);
    Gdiplus::Pen blackPen(Gdiplus::Color(255, 0, 0)); // note: (R,G,B) = (255,0,0) is actually red
    points = m_pPolyLine->GetPoints();
    for (iter1 = points.begin(); iter1 != points.end(); iter1++) {
        for (iter2 = iter1 + 1; iter2 != points.end(); iter2++)
            graphics.DrawLine(&blackPen, *iter1, *iter2);
    }
    //Copy the bitmap from memory dc to the real dc
    BitBlt(hdc, 0, 0, rect.right, rect.bottom, hdcMem, 0, 0, SRCCOPY);
    //Clean up
    SelectObject(hdcMem, hbmOldBitmap);
    DeleteObject(hbmScreen);
    DeleteDC(hdcMem);
    EndPaint(hWnd, &ps);
    return 0;
}
However, if the number of points exceeds 20, the client rect just flickers. I think the reason is that Gdiplus::Graphics::DrawLine is too slow.
Is there any method to solve the flicker problem?
Thanks.
The flickering may be caused by slow painting as well as by other things. In general, try to ensure the following:
Try not to rely on the WM_ERASEBKGND message, i.e. return non-zero, or specify NULL in WNDCLASS::hbrBackground if possible (see the sketch after this list). Often the paint method paints the whole background of the dirty region, and then there is no need to do the erasing.
If you need the erasing, it can often be optimized so that WM_ERASEBKGND returns non-zero, and the paint method then ensures the "erasing" by also painting areas not covered by the regular painted contents, if PAINTSTRUCT::fErase is set.
If reasonably possible, write the paint method so that it does not repaint the same pixels in one call. E.g. to make a blue rect with a red border, do not FillRect(red) and then repaint the inner part of it with FillRect(blue). Try to paint each pixel only once, as much as reasonably possible.
For complex controls/windows, the paint method may often be optimized to cheaply skip a lot of painting outside the dirty rect (PAINTSTRUCT::rcPaint) by properly organizing the control data.
When changing the control state, invalidate only the minimal required region of the control.
If it is not a top-level window, consider using CS_PARENTDC. If your paint method does not rely on the system setting the clipping rectangle to the client rect of the control, this class style will lead to somewhat better performance.
If you see the flickering on control/window resizing, consider not using CS_HREDRAW and CS_VREDRAW. Instead, invalidate the relevant parts of the control in WM_SIZE manually. This often allows invalidating only smaller parts of the control.
If you see the flickering on control scrolling, do not invalidate the whole client area; use ScrollWindow() and invalidate only the small area which exposes the new (scrolled-in) content.
If everything above fails, then use double buffering.
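A minimal sketch of the first two points (a window-procedure fragment; this assumes the WM_PAINT handler covers the entire dirty region):
case WM_ERASEBKGND:
    // Returning non-zero tells Windows the background was "erased",
    // so WM_PAINT becomes responsible for painting every pixel it dirties.
    return 1;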
Use a double buffer. This is a common problem with Win32 C++ applications, specifically with the OnPaint function and the DC facilities.
Here are a few links to help you check that everything is fine with YOUR implementation of double buffering: Flicker Free Drawing In MFC and the SO question "Reduce flicker with GDI+ and C++".
If your lines happen to extend outside the bounds of the DC (Graphics), Win32/GDI+ is painfully slow at clipping - up to two orders of magnitude slower than rolling your own clipping function. Here is some C# code that implements Liang-Barsky clipping; I scrounged it up from an old library that was originally in C++ 20 years ago, so it should be easy enough to port back.
If your lines can extend beyond the client rectangle, call ClipLine(rect, ...) on your points before handing them off to Graphics::DrawLine.
private static bool clipTest(double dp, double dq, ref double du1, ref double du2)
{
    double dr;
    if (dp < 0.0)
    {
        dr = dq / dp;
        if (dr > du2)
        {
            return false;
        }
        else if (dr > du1)
        {
            du1 = dr;
        }
    }
    else
    {
        if (dp > 0.0)
        {
            dr = dq / dp;
            if (dr < du1)
            {
                return false;
            }
            else if (dr < du2)
            {
                du2 = dr;
            }
        }
        else
        {
            if (dq < 0.0)
            {
                return false;
            }
        }
    }
    return true;
}

// DoubleRoundToInt is a rounding helper from the original library;
// (int)Math.Round(value) is an equivalent.
public static bool ClipLine(Rectangle clipRect, ref int x1, ref int y1, ref int x2, ref int y2)
{
    double dx1 = (double)x1;
    double dx2 = (double)x2;
    double dy1 = (double)y1;
    double dy2 = (double)y2;
    double du1 = 0;
    double du2 = 1;
    double deltaX = dx2 - dx1;
    double deltaY;
    if (clipTest(-deltaX, dx1 - clipRect.Left, ref du1, ref du2))
    {
        if (clipTest(deltaX, clipRect.Right - dx1, ref du1, ref du2))
        {
            deltaY = dy2 - dy1;
            if (clipTest(-deltaY, dy1 - clipRect.Top, ref du1, ref du2))
            {
                if (clipTest(deltaY, clipRect.Bottom - dy1, ref du1, ref du2))
                {
                    if (du2 < 1.0)
                    {
                        x2 = DoubleRoundToInt(dx1 + du2 * deltaX);
                        y2 = DoubleRoundToInt(dy1 + du2 * deltaY);
                    }
                    if (du1 > 0.0)
                    {
                        x1 = DoubleRoundToInt(dx1 + du1 * deltaX);
                        y1 = DoubleRoundToInt(dy1 + du1 * deltaY);
                    }
                    return x1 != x2 || y1 != y2;
                }
            }
        }
    }
    return false;
}
The problem was that I had not handled the WM_ERASEBKGND message myself.
I am trying to write a program to play a full screen PC game for fun (as an experiment in Computer Vision and Artificial Intelligence).
For this experiment I am assuming the game has no underlying API for AI players (nor is the source available) so I intend to process the visual information rendered by the game on the screen.
The game runs in full screen mode on a Win32 system (DirectX, I assume).
Currently I am using these Win32 functions:
#include <windows.h>
#include <cvaux.h>

class Screen {
public:
    HWND windowHandle;
    HDC windowContext;
    HBITMAP buffer;
    HDC bufferContext;
    CvSize size;
    uchar* bytes;
    int channels;

    Screen () {
        windowHandle = GetDesktopWindow();
        windowContext = GetWindowDC (windowHandle);
        size = cvSize (GetDeviceCaps (windowContext, HORZRES), GetDeviceCaps (windowContext, VERTRES));
        buffer = CreateCompatibleBitmap (windowContext, size.width, size.height);
        bufferContext = CreateCompatibleDC (windowContext);
        SelectObject (bufferContext, buffer);
        channels = 4;
        bytes = new uchar[size.width * size.height * channels];
    }

    ~Screen () {
        ReleaseDC(windowHandle, windowContext);
        DeleteDC(bufferContext);
        DeleteObject(buffer);
        delete[] bytes;
    }

    void CaptureScreen (IplImage* img) {
        BitBlt(bufferContext, 0, 0, size.width, size.height, windowContext, 0, 0, SRCCOPY);
        int n = size.width * size.height;
        int imgChannels = img->nChannels;
        GetBitmapBits (buffer, n * channels, bytes);
        // Copy the BGRA screen pixels into the IplImage, dropping the alpha byte.
        uchar* src = bytes;
        uchar* dest = (uchar*) img->imageData;
        uchar* end = dest + n * imgChannels;
        while (dest < end) {
            dest[0] = src[0];
            dest[1] = src[1];
            dest[2] = src[2];
            dest += imgChannels;
            src += channels;
        }
    }
};
The rate at which I can process frames using this approach is much too slow. Is there a better way to acquire screen frames?
As a general method, I would hook the buffer-flipping function calls so that the framebuffer can be captured on every frame. A hooking library makes installing the hook straightforward:
http://easyhook.codeplex.com/
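To illustrate the shape of such a hook (a conceptual sketch only, assuming a DXGI-based game; the actual install/trampoline plumbing would come from EasyHook, Detours, or similar):
#include <dxgi.h>

// Signature of IDXGISwapChain::Present.
typedef HRESULT (STDMETHODCALLTYPE *PresentFn)(IDXGISwapChain*, UINT, UINT);
PresentFn RealPresent = nullptr; // filled in when the hook is installed

HRESULT STDMETHODCALLTYPE HookedPresent(IDXGISwapChain* swapChain,
                                        UINT syncInterval, UINT flags)
{
    // The backbuffer still holds the finished frame at this point:
    // GetBuffer(0, ...), CopyResource to a staging texture, then Map it
    // to read the pixels before the frame is flipped to the screen.
    return RealPresent(swapChain, syncInterval, flags);
}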