ExtCreatePen and Windows 7 GDI - winapi

I create DIBPATTERN pens with the ExtCreatePen API to get custom pattern pens.
They successfully draw the desired lines on Windows XP,
but on Windows 7 (x64 in my case) they do not draw any lines; nothing changes on screen.
(Pens created more simply, for example CreatePen(PS_DOT, 1, 0), work fine.)
I found that calling SetROP2(hdc, R2_XORPEN) makes the subsequent line-drawing API calls draw something, but with an XOR operation. I don't want XOR drawing.
Here is my code to create the pen. It works without problems on Windows XP:
LOGBRUSH lb;
lb.lbStyle = BS_DIBPATTERN;
lb.lbColor = DIB_RGB_COLORS;
// packed DIB: header + 2-entry palette + 8 rows of 1bpp data (one DWORD per row)
int cb = sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 2 + 8*4;
HGLOBAL hg = GlobalAlloc(GMEM_MOVEABLE, cb);
BITMAPINFO* pbmi = (BITMAPINFO*) GlobalLock(hg);
ZeroMemory(pbmi, cb);
pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
pbmi->bmiHeader.biWidth = 8;
pbmi->bmiHeader.biHeight = 8;
pbmi->bmiHeader.biPlanes = 1;
pbmi->bmiHeader.biBitCount = 1;
pbmi->bmiHeader.biCompression = BI_RGB;
pbmi->bmiHeader.biSizeImage = 8;
pbmi->bmiHeader.biClrUsed = 2;
pbmi->bmiHeader.biClrImportant = 2;
// palette entry 0 stays black (zeroed above), entry 1 is white
pbmi->bmiColors[1].rgbBlue =
pbmi->bmiColors[1].rgbGreen =
pbmi->bmiColors[1].rgbRed = 0xFF;
// copy the 8 pattern rows right after the palette
DWORD* p = (DWORD*) &pbmi->bmiColors[2];
for(int k=0; k<8; k++) *p++ = patterns[k];
GlobalUnlock(hg);
lb.lbHatch = (LONG) hg;
s_aSelectionPens[i] = ExtCreatePen(PS_GEOMETRIC, 1, &lb, 0, NULL);
ASSERT(s_aSelectionPens[i]); // success on both XP and Win7
GlobalFree(hg);
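The pens are later selected into a DC and used with ordinary line-drawing calls; a minimal sketch (not my exact drawing code; hdc and the coordinates are placeholders):
// minimal usage sketch (placeholders, not the real drawing code)
HPEN hOldPen = (HPEN) SelectObject(hdc, s_aSelectionPens[i]);
MoveToEx(hdc, x1, y1, NULL);
LineTo(hdc, x2, y2);          // draws on XP, draws nothing on Win7 with Aero
SelectObject(hdc, hOldPen);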
Is it a bug only on my PC? Please check this problem.
Thank you.

This is a known bug with the Windows 7 GDI, though good luck getting Microsoft to acknowledge it.
http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/a70ab0d5-e404-4e5e-b510-892b0094caa3
-Noel

I will admit, I was dubious at first, but I compiled and ran your program, and it does indeed fail to draw the second line on Windows 7, but only in Aero mode.
By switching to Windows Basic or Classic mode, all four lines are drawn, as expected.
I can only assume that this is some kind of bad interaction between your custom pen and the new way Aero mode implements GDI calls. This seems like it might be a Microsoft bug; perhaps you can post this question on one of their message boards?

So you are creating an 8x8 black/white (monochrome) bitmap as a DIB, and then using that to create a pen. I see nothing wrong with this code. This definitely looks like a Windows bug, but there may be a workaround.
Try setting
pbmi->bmiHeader.biClrUsed = 0;
pbmi->bmiHeader.biClrImportant = 0;
In this context, setting the values to 0 should mean the same thing as setting them to 2, but 0 is the more standard choice when you are using the full palette. You still need two entries in your palette; 0 just means "full size based on biBitCount".
Also, each palette entry is an RGBQUAD, which means there is room for alpha, and your alpha is set to 0. That should be ignored, but maybe it isn't, so try setting the high byte of your two palette entries to 0xFF or 0x80.
Finally, it's possible that your palette is being ignored entirely, and Windows is using the BkMode, BkColor and TextColor of the destination DC for everything, so you need to make sure that they are set to values that you can see.
My guess is that this has something to do with alpha transparency, since GDI ignores alpha entirely, but Aero doesn't.
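Putting those suggestions together, the tweaks would look roughly like this (untested; it reuses the pbmi from your code, and hdc stands for whatever destination DC you draw into):
// untested sketch of the suggested workaround tweaks
pbmi->bmiHeader.biClrUsed      = 0;     // 0 = full palette implied by biBitCount
pbmi->bmiHeader.biClrImportant = 0;
pbmi->bmiColors[0].rgbReserved = 0xFF;  // force "opaque" alpha, in case Aero honors it
pbmi->bmiColors[1].rgbReserved = 0xFF;
// and on the destination DC, make sure the colors are actually visible:
SetBkMode(hdc, OPAQUE);
SetBkColor(hdc, RGB(255, 255, 255));
SetTextColor(hdc, RGB(0, 0, 0));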

Related

Direct2D fails when drawing a single-channel bitmap

I'm an experienced programmer specialized in Computer Graphics, mainly using Direct3D 9.0c, OpenGL and general algorithms. Currently, I am evaluating Direct2D as rendering technology for a professional application dealing with medical image data. As for rendering, it is a x64 desktop application in windowed mode (not fullscreen).
Already with my very initial steps I struggle with a task I thought would be a no-brainer: Rendering a single-channel bitmap on screen.
Running on a Windows 8.1 machine, I create an ID2D1DeviceContext with a Direct3D swap chain buffer surface as render target. The swap chain is created from a HWND and buffer format DXGI_FORMAT_B8G8R8A8_UNORM. Note: See also the code snippets at the end.
Afterwards, I create a bitmap with pixel format DXGI_FORMAT_R8_UNORM and alpha mode D2D1_ALPHA_MODE_IGNORE. When calling DrawBitmap(...) on the device context, a debug breakpoint is triggered with the debug message "D2D DEBUG ERROR - This operation is not compatible with the pixel format of the bitmap".
I know that this output is quite clear. Also, when changing the pixel format to DXGI_FORMAT_R8G8B8A8_UNORM with D2D1_ALPHA_MODE_IGNORE, everything works well and I see the bitmap rendered. However, I simply cannot believe that! Graphics cards have supported single-channel textures forever; every 3D graphics application can use them without thinking twice. That goes without saying.
I tried to find anything here and on Google, without success. The only hint I could find was the MSDN Direct2D page listing the supported pixel formats. The documentation suggests, by not mentioning it, that DXGI_FORMAT_R8_UNORM is indeed not supported as a bitmap format. I also found posts talking about alpha masks (using DXGI_FORMAT_A8_UNORM), but that's not what I'm after.
What am I missing that I can't convince Direct2D to create and draw a grayscale bitmap? Or is it really true that Direct2D doesn't support drawing R8 or R16 bitmaps?
Any help is really appreciated, as I don't know how to solve this. If I can't get these trivial basics to work, I think I'll have to stop digging deeper into Direct2D :-(.
And here are the relevant code snippets. Please note that they might not compile since I ported this on the fly from my C++/CLI code to plain C++. Also, I threw away all error checking and other noise:
Device, Device Context and Swap Chain Creation (D3D and Direct2D):
// Direct2D factory creation
D2D1_FACTORY_OPTIONS options = {};
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
ID2D1Factory1* d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &d2dFactory);
// Direct3D device creation
const auto type = D3D_DRIVER_TYPE_HARDWARE;
const auto flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
ID3D11Device* d3dDevice;
D3D11CreateDevice(nullptr, type, nullptr, flags, nullptr, 0, D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
// Direct2D device creation
IDXGIDevice* dxgiDevice;
d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
ID2D1Device* d2dDevice;
d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);
// Swap chain creation
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
IDXGIAdapter* dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);
IDXGIFactory2* dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void **>(&dxgiFactory));
IDXGISwapChain1* swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc, nullptr, nullptr, &swapChain);
// Direct2D device context creation
const auto contextOptions = D2D1_DEVICE_CONTEXT_OPTIONS_NONE; // renamed to avoid clashing with the factory options above
ID2D1DeviceContext* deviceContext;
d2dDevice->CreateDeviceContext(contextOptions, &deviceContext);
// create render target bitmap from swap chain
IDXGISurface* swapChainSurface;
swapChain->GetBuffer(0, __uuidof(swapChainSurface), reinterpret_cast<void **>(&swapChainSurface));
D2D1_BITMAP_PROPERTIES1 bitmapProperties;
bitmapProperties.dpiX = 0.0f;
bitmapProperties.dpiY = 0.0f;
bitmapProperties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
bitmapProperties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
bitmapProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
bitmapProperties.colorContext = nullptr;
ID2D1Bitmap1* swapChainBitmap = nullptr;
deviceContext->CreateBitmapFromDxgiSurface(swapChainSurface, &bitmapProperties, &swapChainBitmap);
// set swap chain bitmap as render target of D2D device context
deviceContext->SetTarget(swapChainBitmap);
D2D single-channel Bitmap Creation:
const D2D1_SIZE_U size = { 512, 512 };
const UINT32 pitch = 512;
D2D1_BITMAP_PROPERTIES1 d2dProperties;
ZeroMemory(&d2dProperties, sizeof(D2D1_BITMAP_PROPERTIES1));
d2dProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
d2dProperties.pixelFormat.format = DXGI_FORMAT_R8_UNORM;
char* sourceData = new char[512*512];
ID2D1Bitmap1* d2dBitmap;
deviceContext->CreateBitmap(size, sourceData, pitch, &d2dProperties, &d2dBitmap);
Bitmap drawing (FAILING):
deviceContext->BeginDraw();
D2D1_COLOR_F d2dColor = {};
deviceContext->Clear(d2dColor);
// THIS LINE FAILS WITH THE DEBUG BREAKPOINT IF SINGLE CHANNELED
deviceContext->DrawBitmap(d2dBitmap, nullptr, 1.0f, D2D1_INTERPOLATION_MODE_LINEAR, nullptr);
deviceContext->EndDraw();
swapChain->Present(1, 0);
From my limited experience, Direct2D does indeed seem very restricted.
Have you tried Direct2D effects (ID2D1Effect)? You can write your own (which seems comparatively complicated), or use one of the built-in effects (which is rather simple).
There is one called the color matrix effect (CLSID_D2D1ColorMatrix). It might work to feed it your DXGI_FORMAT_R8_UNORM bitmap (or DXGI_FORMAT_A8_UNORM; any single-channel format would do) as input (inputs to effects are ID2D1Image, and ID2D1Bitmap inherits from ID2D1Image), and then set D2D1_COLORMATRIX_PROP_COLOR_MATRIX to copy the input channel to all output channels. I have not tried it, though.
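A rough sketch of what that could look like (untested, and it assumes the R8 bitmap is accepted as an effect input; CLSID_D2D1ColorMatrix needs <d2d1effects.h> and d2d1.lib):
// Untested sketch: route the single-channel bitmap through a color matrix
// effect that copies the red channel to R, G and B and forces alpha to 1.
ID2D1Effect* colorMatrixEffect = nullptr;
deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &colorMatrixEffect);
colorMatrixEffect->SetInput(0, d2dBitmap);   // the DXGI_FORMAT_R8_UNORM bitmap
const D2D1_MATRIX_5X4_F matrix = D2D1::Matrix5x4F(
    1, 1, 1, 0,   // input R -> output R, G, B
    0, 0, 0, 0,   // input G (unused)
    0, 0, 0, 0,   // input B (unused)
    0, 0, 0, 0,   // input A (unused)
    0, 0, 0, 1);  // constant offset: output A = 1
colorMatrixEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, matrix);
deviceContext->BeginDraw();
deviceContext->DrawImage(colorMatrixEffect);
deviceContext->EndDraw();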

Getting the exact edit box dimensions for the given text

I need to be able to determine the size of the edit box according to the text I have, and a maximum width.
There are similar questions and answers, which suggest GetTextExtentPoint32 or DrawTextEx.
GetTextExtentPoint32 doesn't support multiline edit controls, so it doesn't fit.
DrawTextEx kind of works, but sometimes the edit box turns out to be larger than necessary, and, what's worse, sometimes it's too small.
Then there's EM_GETRECT and EM_GETMARGINS. I'm not sure whether I should use one of them, or maybe both.
What is the most accurate method for calculating the size? This stuff is more complicated than it should be... and I'd prefer not to resort to reading the source code of Wine or ReactOS.
Thanks.
Edit
Here's my code and a concrete example:
bool AutoSizeEditControl(CEdit window, LPCTSTR lpszString, int *pnWidth, int *pnHeight, int nMaxWidth = INT_MAX)
{
    CFontHandle pEdtFont = window.GetFont();
    if(!pEdtFont)
        return false;

    CClientDC oDC{ window };
    CFontHandle pOldFont = oDC.SelectFont(pEdtFont);
    CRect rc{ 0, 0, nMaxWidth, 0 };
    oDC.DrawTextEx((LPTSTR)lpszString, -1, &rc, DT_CALCRECT | DT_EDITCONTROL | DT_WORDBREAK);
    oDC.SelectFont(pOldFont);

    ::AdjustWindowRectEx(&rc, window.GetStyle(), (!(window.GetStyle() & WS_CHILD) && (window.GetMenu() != NULL)), window.GetExStyle());

    UINT nLeftMargin, nRightMargin;
    window.GetMargins(nLeftMargin, nRightMargin);

    if(pnWidth)
        *pnWidth = rc.Width() + nLeftMargin + nRightMargin;
    if(pnHeight)
        *pnHeight = rc.Height();
    return true;
}
I call it with nMaxWidth = 143 and the following text (below), and get nHeight = 153, nWidth = 95. But the numbers are too small for the text to fit, on both axes.
The text (two lines):
Shopping
https://encrypted.google.com/search?q=winapi+resize+edit+control+to+text+size&source=lnms&tbm=shop&sa=X&ved=0ahUKEwiMyNaWxZjLAhUiLZoKHQcoDqUQ_AUICigE
Edit 2
I found out that the word-wrapping algorithms of DrawTextEx and of the edit control are different. For example, the edit control wraps on ?, but DrawTextEx doesn't. What can be done about it?
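One direction I am experimenting with (only a sketch, not verified): instead of predicting the wrapping, let the edit control wrap the text itself and ask it for the line count, assuming the control has already been sized to the target width:
// Sketch: measure by asking the edit control itself (EM_GETLINECOUNT / EM_GETRECT).
// Assumes 'window' already has the target width; height can then be fixed up afterwards.
int MeasureWrappedHeight(CEdit window, LPCTSTR lpszString)
{
    window.SetWindowText(lpszString);           // the control applies its own wrapping
    int nLines = window.GetLineCount();         // EM_GETLINECOUNT

    CClientDC oDC{ window };
    CFontHandle pOldFont = oDC.SelectFont(window.GetFont());
    TEXTMETRIC tm;
    oDC.GetTextMetrics(&tm);                    // per-line height of the control's font
    oDC.SelectFont(pOldFont);

    CRect rcFmt, rcWnd;
    window.GetRect(&rcFmt);                     // EM_GETRECT: formatting rectangle
    window.GetWindowRect(&rcWnd);
    int nNonClient = rcWnd.Height() - rcFmt.Height();  // borders, margins, etc.

    return nLines * (tm.tmHeight + tm.tmExternalLeading) + nNonClient;
}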

How to detect whether Windows 10 buffer wrapping mode is currently enabled in the console

Is there any way to detect whether a console app is running with Windows 10's new features enabled?
This MSDN page shows that HKEY_CURRENT_USER\Console\ForceV2, HKEY_CURRENT_USER\Console\LineWrap and HKEY_CURRENT_USER\Console\{name}\LineWrap control it, but besides being less robust to parse, the registry may not reflect the current state: if the user switches to or from legacy mode, the change won't take effect until the console relaunches.
If I develop the app, I can do the check at startup. There could still be a race condition, though, which renders the registry check useless for any practical purpose. I am also curious what the solution would be for third-party console windows.
There seems to be no API for that, though I'd expect one to surface in a later SDK (maybe as additional hyper-extended flags in GetConsoleMode).
Meanwhile, the following is a quick hack which attempts to detect the resize-wider capability of the new console, based on checking the ptMaxTrackSize.x value returned by WM_GETMINMAXINFO.
The legacy console doesn't allow resizing the window wider than the screen buffer width, while the new one does. On the assumptions that (a) the console is running at full buffer width, i.e. has no horizontal scrollbar, and (b) it's not already stretched to the full/maximum screen width, it's fairly straightforward to check whether the window allows itself to be resized wider (new console) or not (legacy console). It should be noted that assumption (a) could technically be avoided by manually converting the buffer width from characters to pixels, rather than relying on GetWindowRect, but assumption (b) is pretty much unavoidable.
This is the code (disclaimer: quick-and-dirty proof of concept, no error checking, etc.).
#include <windows.h>
#include <stdio.h>

int main()
{
    // largest possible console size for the given font and desktop
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    COORD cd = GetLargestConsoleWindowSize(hOut);
    SHORT nScrMaxXch = cd.X,
          nScrMaxYch = cd.Y;

    // current and max console sizes for the given screen buffer
    CONSOLE_SCREEN_BUFFER_INFOEX csbix = { sizeof(csbix) };
    GetConsoleScreenBufferInfoEx(hOut, &csbix);
    SHORT nWndXch = csbix.srWindow.Right - csbix.srWindow.Left + 1,
          nWndYch = csbix.srWindow.Bottom - csbix.srWindow.Top + 1;
    SHORT nWndMaxXch = csbix.dwMaximumWindowSize.X,
          nWndMaxYch = csbix.dwMaximumWindowSize.Y;
    wprintf(L"chars: wnd-size %d %d, max-wnd-size %d %d, largest-size %d %d\n",
        nWndXch, nWndYch, nWndMaxXch, nWndMaxYch, nScrMaxXch, nScrMaxYch);

    // current window size in pixels
    HWND hWnd = GetConsoleWindow();
    RECT rc; GetWindowRect(hWnd, &rc);
    LONG nWndXpx = rc.right - rc.left,
         nWndYpx = rc.bottom - rc.top;

    // max window tracking size in pixels
    MINMAXINFO mmi = { 0 };
    SendMessage(hWnd, WM_GETMINMAXINFO, 0, (LPARAM)&mmi);
    LONG nWndMaxXpx = mmi.ptMaxTrackSize.x,
         nWndMaxYpx = mmi.ptMaxTrackSize.y;
    wprintf(L"pixels: wnd-size %ld %ld, max-tracking-size %ld %ld\n",
        nWndXpx, nWndYpx, nWndMaxXpx, nWndMaxYpx);

    if (nWndXch == nWndMaxXch   // full buffer width, no h-scrollbar
        && nWndXch < nScrMaxXch // not already stretched to full screen width
        && nWndMaxXpx > nWndXpx)// allowed to be resized wider
        wprintf(L"\n...most likely a Win10 console with ForceV2 enabled\n");
    return 0;
}
This is the output when run in a legacy console.
chars: wnd-size 80 25, max-wnd-size 80 71, largest-size 240 71
pixels: wnd-size 677 443, max-tracking-size 677 1179
This is the output when run in the new console.
chars: wnd-size 80 25, max-wnd-size 80 71, largest-size 239 71
pixels: wnd-size 677 443, max-tracking-size 1936 1186
...most likely a Win10 console with ForceV2 enabled

How is buffer data accessed in D3D10?

Basically, I'm trying to copy the front or back buffer to a texture, grab the 1x1 mipmap level of said texture, and then send the resulting color back to an Arduino to control my room's lighting. Everything else is up and running, and I've already gotten it to work via GetDC(NULL) and StretchBlt. But that ran at about 15 FPS, and the Windows GUI got choppy.
The downsampling is just DEMANDING to be done on a GPU.
In D3D9, it seemed like there was simply GetBackBuffer() or something like that, but I see nothing similar in D3D10. And I'm not even sure it would grab anything from the Windows GUI.
Questions:
- What function(s) would I use?
- Do I need to explicitly create a swap chain beforehand?
- Would this only capture data from other D3D programs?
Okay, here's where I'm at as far as creating the texture goes, but I'm not seeing anything that gets me back to textures:
//Create Texture
D3D10_TEXTURE2D_DESC tBufferDesc;
ID3D10Texture2D *tBuffer = NULL;
DXGI_SAMPLE_DESC iBufferSamples = {1,0};
tBufferDesc.Width = iScreenSizeX;
tBufferDesc.Height = iScreenSizeY;
tBufferDesc.MipLevels = 0;
tBufferDesc.ArraySize = 1;
tBufferDesc.Format = DXGI_FORMAT_B8G8R8A8_TYPELESS;
tBufferDesc.SampleDesc = iBufferSamples;
tBufferDesc.Usage = D3D10_USAGE_DEFAULT;
tBufferDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE | D3D10_BIND_RENDER_TARGET;
tBufferDesc.CPUAccessFlags = 0;
tBufferDesc.MiscFlags = D3D10_RESOURCE_MISC_GENERATE_MIPS;
HRESULT tBufferResult = pDevice->CreateTexture2D(&tBufferDesc, NULL , &tBuffer);
hrResult(tBufferResult);
ID3D10RenderTargetView *rtBuffer;
ID3D10Resource *rsBuffer;
pDevice->OMGetRenderTargets(1, &rtBuffer, NULL);
rtBuffer->GetResource(&rsBuffer);
rsBuffer->
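For what it's worth, this is roughly the kind of continuation I'm imagining (hypothetical and untested; srvBuffer and the staging-texture step are just placeholders, and it assumes the render target and tBuffer have matching size and compatible formats):
// Hypothetical, untested continuation: copy the current render target into
// tBuffer, generate its mip chain on the GPU, then read the 1x1 mip back.
D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc, sizeof(srvDesc));
srvDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;      // typed view of the typeless texture
srvDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = (UINT)-1;           // all mips down to 1x1
ID3D10ShaderResourceView *srvBuffer = NULL;
pDevice->CreateShaderResourceView(tBuffer, &srvDesc, &srvBuffer);

pDevice->CopyResource(tBuffer, rsBuffer);         // GPU copy of the render target
pDevice->GenerateMips(srvBuffer);                 // downsample on the GPU

// To read the 1x1 level on the CPU: copy it into a small staging texture
// (D3D10_USAGE_STAGING, CPUAccessFlags = D3D10_CPU_ACCESS_READ) with
// CopySubresourceRegion, then Map() that texture and read the pixel.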

WP7 zxing scan not reliable

I've printed a few short QR codes (like "HAEB16653") on a page using this algorithm:
private void CreateQRCodeFile(int size, string filename, string codecontent)
{
    QRCodeWriter writer = new QRCodeWriter();
    com.google.zxing.common.ByteMatrix matrix;
    matrix = writer.encode(codecontent, BarcodeFormat.QR_CODE, size, size, null);
    Bitmap img = new Bitmap(size, size);
    Color Color = Color.FromArgb(0, 0, 0);

    for (int y = 0; y < matrix.Height; ++y)
    {
        for (int x = 0; x < matrix.Width; ++x)
        {
            Color pixelColor = img.GetPixel(x, y);
            //Find the colour of the dot
            if (matrix.get_Renamed(x, y) == -1)
            {
                img.SetPixel(x, y, Color.White);
            }
            else
            {
                img.SetPixel(x, y, Color.Black);
            }
        }
    }
    img.Save(filename, ImageFormat.Png);
}
The printed barcodes work very well and fast with the integrated WP7 Bing scan & search.
When I try to scan the very same printed QR codes with Stéphanie Hertrich's sample app, scanning is very slow; most codes do not scan at all, or are only recognized when I slowly rotate the camera around.
How do I get my scanning to be as reliable as the integrated barcode recognition? I only need to scan QR codes, so I disabled all the other formats, but it still does not work most of the time.
Is there maybe some other barcode scanning library that works better?
The Silverlight port in Stéphanie Hertrich's sample app is very old. It seems to me that the project on CodePlex hasn't been maintained for more than a year. You should try one of the newer, maintained ports like ZXing.Net.
zxing works very well -- just try it on Android. I would not be surprised if it is what powers the Bing search.
The problems are likely in the port. Any non-Java port is at best old and incomplete. I also can't speak to the efficiency of the approach used in the sample you are looking at. For example, is it really binarizing the image from the APIs correctly? Also make sure it is not using TRY_HARDER mode.
There is no objective answer to this question...
My personal opinion is that the ZXing lib you tried (Stéphanie Hertrich's sample app) is the best you can get. As far as I know it is used on other platforms, too (e.g. Android).
When I tested the lib a few months ago, I had the impression it worked very reliably and quickly, but your circumstances may have been different (lighting, camera, angle, etc.).
