DXGI 1.5 DuplicateOutput1 fails with DXGI_ERROR_UNSUPPORTED (0x887a0004) - dxgi

For some reason DuplicateOutput1 fails where DuplicateOutput does not.
#include <D3D11.h>
#include <DXGI1_5.h>

int main() {
    ID3D11Device *device;
    D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_1 };
    D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, levels, ARRAYSIZE(levels), D3D11_SDK_VERSION, &device, NULL, NULL);

    IDXGIDevice *dxDevice;
    device->QueryInterface<IDXGIDevice>(&dxDevice);
    IDXGIAdapter *adapter;
    dxDevice->GetAdapter(&adapter);
    IDXGIOutput *output;
    adapter->EnumOutputs(0, &output);
    IDXGIOutput5 *output5;
    output->QueryInterface<IDXGIOutput5>(&output5);

    IDXGIOutputDuplication *outputDuplication;
    auto hr1 = output5->DuplicateOutput(device, &outputDuplication);
    // hr1 == S_OK here

    const DXGI_FORMAT formats[] = { DXGI_FORMAT_B8G8R8A8_UNORM };
    auto hr2 = output5->DuplicateOutput1(device, 0, ARRAYSIZE(formats), formats, &outputDuplication);
}
hr2 here is 0x887a0004: The specified device interface or feature level is not supported on this system.

I will post the answer from #weggo here, because I almost missed it!
For those who might stumble upon this in the future: calling
SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2)
allows DuplicateOutput1 to succeed. I have no idea why
DuplicateOutput1 checks the process DPI awareness, though.
I will just add that you have to disable DPI awareness in the project's manifest settings so that the SetProcessDpiAwarenessContext call can succeed :)
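A minimal sketch of that workaround (an assumption-laden example, not from the original answer; it needs Windows 10 1703+ for SetProcessDpiAwarenessContext, and the manifest must not already declare a DPI awareness level):
#include <Windows.h>  // SetProcessDpiAwarenessContext, Windows 10 1703+

int main()
{
    // Must run before D3D11CreateDevice / DuplicateOutput1. If the manifest already
    // declares DPI awareness, this call fails with ERROR_ACCESS_DENIED and
    // DuplicateOutput1 keeps returning DXGI_ERROR_UNSUPPORTED.
    if (!SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2))
    {
        // error handling, or fall back to plain DuplicateOutput
    }

    // ... device creation and output duplication as in the question ...
    return 0;
}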

This could happen if you run on a system with both an integrated graphics chip and a discrete GPU. See https://support.microsoft.com/en-us/kb/3019314:
unfortunately this issue occurs because the Desktop Duplication API does not support being run against the discrete GPU on a Microsoft Hybrid system. By design, the call fails together with error code DXGI_ERROR_UNSUPPORTED in such a scenario.
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
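If you cannot control which GPU the process runs on, a hedged sketch of the same idea in code is to create the D3D11 device explicitly on the lowest-power adapter, which on a hybrid system is usually the integrated GPU. The helper name below is hypothetical, and it assumes dxgi1_6.h plus Windows 10 1803+ for IDXGIFactory6::EnumAdapterByGpuPreference:
#include <d3d11.h>
#include <dxgi1_6.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

// Hypothetical helper: create the D3D11 device on the lowest-power adapter
// (usually the integrated GPU on a hybrid system) so desktop duplication is supported.
HRESULT CreateDeviceOnIntegratedGpu(ID3D11Device** device)
{
    IDXGIFactory6* factory = nullptr;
    HRESULT hr = CreateDXGIFactory1(__uuidof(IDXGIFactory6), reinterpret_cast<void**>(&factory));
    if (FAILED(hr))
        return hr;

    IDXGIAdapter1* adapter = nullptr;
    hr = factory->EnumAdapterByGpuPreference(0, DXGI_GPU_PREFERENCE_MINIMUM_POWER,
                                             __uuidof(IDXGIAdapter1), reinterpret_cast<void**>(&adapter));
    if (SUCCEEDED(hr))
    {
        // With an explicit adapter the driver type must be D3D_DRIVER_TYPE_UNKNOWN.
        hr = D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                               nullptr, 0, D3D11_SDK_VERSION, device, nullptr, nullptr);
        adapter->Release();
    }

    factory->Release();
    return hr;
}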

Related

rust imgui, how do you set it up?

I am trying to set up rust imgui for a custom renderer I am porting to rust.
I am stuck on two fronts, getting the peripheral callbacks, and the rendering.
In C++ the setup was fairly simple:
ImGuiContext* InitImgui(ModuleStorage::ModuleStorage& module, NECore::Gallery& gallery)
{
    ImGuiContext* imgui_context = ImGui::CreateContext();
    ImGuiIO& io = ImGui::GetIO();

    unsigned char* pixels;
    int width, height;
    io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height);

    CpuImage font_image(pixels, width, height, 4);
    uint font_id = gallery.StoreImage<CpuImage::GetImageData>(
        font_image, "__ImguiFont", NECore::ImageFormat::R8G8B8A8_UNORM);
    io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);

    ImGui_ImplGlfw_InitForVulkan(module.GetWindow().GetGLFWWindow(), true);

    imgui_shader = module.AddShader(
        {"./CommonShaders/imgui.vert",
         "./CommonShaders/imgui.frag"});

    return imgui_context;
}
30 lines of code and we have the initialization done.
Well, there are some issues in Rust: io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height); does not exist. I assume the equivalent is let font = fonts.build_rgba32_texture();
Assuming that's the case, the next issue is setting the texture id, which I cannot find anywhere in the docs or the source code.
io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);
That function does not exist in the Rust bindings, and ImGui_ImplGlfw_InitForVulkan is nowhere to be found either.
The examples at https://github.com/imgui-rs/imgui-rs/blob/main/imgui-examples/examples/support/mod.rs
seem to use pre-existing renderers and do not do a good job of showing how to integrate the tool into an existing renderer other than the ones the authors chose, which is baffling: one of the biggest selling points of imgui is how simple it is to integrate into pre-existing codebases.
I am at a loss, how do you bootstrap the library in Rust?

What's the etymology behind the type name ID3D10Blob for DirectX shaders?

What's the etymology behind the type name ID3D10Blob for DirectX shaders?
I'm reading http://www.directxtutorial.com/Lesson.aspx?lessonid=11-4-5 and trying to reason about the naming conventions in Windows.
I see ID3D10Blob as (I)(D3D10)(Blob).
I = interface?
D3D10 = direct3d 10?
Blob = ?
I've seen "Binary large object" online for Blob, but I'm not sure if that has the same meaning in this context.
What does the blob term mean?
TL;DR: It's just a simple ref-counted container for a variable-length blob of binary data used by the D3DCompiler COM interfaces.
The HLSL compiler produces a 'shader blob' which is just an opaque binary object. It has a size and data. It could be really anything, but in the world of "COM" objects, it was implemented for Windows Vista as ID3D10Blob with the introduction of Direct3D 10.
Historically, Direct3D 9 and earlier had a 'fixed-function' rendering pipeline which means you could use it without HLSL shaders. For Direct3D 10, the 'fixed-function' was removed, so HLSL was required to use it at all. Therefore, a version of the Direct3D HLSL Compiler was added to the OS.
The ID3DBlob interface is what's used for Direct3D 11 or Direct3D 12, but if you look at it closely, it's the same thing.
typedef ID3D10Blob ID3DBlob;
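As a small illustration (a hedged sketch, not part of the original answer; it assumes d3dcompiler.lib is linked and the entry point and shader profile are placeholders), the D3DCompiler API hands the bytecode back as an ID3DBlob, which is read through GetBufferPointer and GetBufferSize:
#include <Windows.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile an HLSL source string; the resulting bytecode arrives as an ID3DBlob.
ID3DBlob* CompileShader(const char* source, size_t length, const char* entryPoint, const char* profile)
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(source, length, nullptr, nullptr, nullptr,
                            entryPoint, profile, 0, 0, &bytecode, &errors);
    if (errors)
    {
        // Compiler warnings/errors come back as another blob, this time holding text.
        OutputDebugStringA(static_cast<const char*>(errors->GetBufferPointer()));
        errors->Release();
    }
    if (FAILED(hr))
        return nullptr;

    // The blob is nothing more than a pointer + size pair:
    //   bytecode->GetBufferPointer(), bytecode->GetBufferSize()
    return bytecode;
}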
The Direct3D API itself actually doesn't use this specific 'blob' interface. In a C++ STL world, you could use std::vector<uint8_t> as a shader blob:
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <vector>

inline std::vector<uint8_t> ReadData(_In_z_ const wchar_t* name)
{
    std::ifstream inFile(name, std::ios::in | std::ios::binary | std::ios::ate);
    if (!inFile)
        throw std::runtime_error("ReadData");

    std::streampos len = inFile.tellg();
    if (!inFile)
        throw std::runtime_error("ReadData");

    std::vector<uint8_t> blob;
    blob.resize(size_t(len));

    inFile.seekg(0, std::ios::beg);
    if (!inFile)
        throw std::runtime_error("ReadData");

    inFile.read(reinterpret_cast<char*>(blob.data()), len);
    if (!inFile)
        throw std::runtime_error("ReadData");

    inFile.close();
    return blob;
}
…
auto vertexShaderBlob = ReadData(L"VertexShader.cso");
ThrowIfFailed(
    device->CreateVertexShader(vertexShaderBlob.data(), vertexShaderBlob.size(),
                               nullptr, m_spVertexShader.ReleaseAndGetAddressOf()));

auto pixelShaderBlob = ReadData(L"PixelShader.cso");
ThrowIfFailed(
    device->CreatePixelShader(pixelShaderBlob.data(), pixelShaderBlob.size(),
                              nullptr, m_spPixelShader.ReleaseAndGetAddressOf()));
See Microsoft Docs and this blog post.

Direct2D fails when drawing a single-channel bitmap

I'm an experienced programmer specialized in Computer Graphics, mainly using Direct3D 9.0c, OpenGL and general algorithms. Currently, I am evaluating Direct2D as rendering technology for a professional application dealing with medical image data. As for rendering, it is an x64 desktop application in windowed mode (not fullscreen).
Already with my very initial steps I struggle with a task I thought would be a no-brainer: Rendering a single-channel bitmap on screen.
Running on a Windows 8.1 machine, I create an ID2D1DeviceContext with a Direct3D swap chain buffer surface as render target. The swap chain is created from a HWND and buffer format DXGI_FORMAT_B8G8R8A8_UNORM. Note: See also the code snippets at the end.
Afterwards, I create a bitmap with pixel format DXGI_FORMAT_R8_UNORM and alpha mode D2D1_ALPHA_MODE_IGNORE. When calling DrawBitmap(...) on the device context, a debug breakpoint is triggered with the debug message "D2D DEBUG ERROR - This operation is not compatible with the pixel format of the bitmap".
I know that this output is quite clear. Also, when changing the pixel format to DXGI_FORMAT_R8G8B8A8_UNORM with D2D1_ALPHA_MODE_IGNORE everything works well and I see the bitmap rendered. However, I simply cannot believe that! Graphics cards have supported single-channel textures for ages - every 3D graphics application can use them without a second thought.
I tried to find anything here and on Google, without success. The only hint I could find was the MSDN Direct2D page listing the supported pixel formats. The documentation suggests - by not mentioning it - that DXGI_FORMAT_R8_UNORM is indeed not supported as a bitmap format. I also found posts talking about alpha masks (using DXGI_FORMAT_A8_UNORM), but that's not what I'm after.
What am I missing that I can't convince Direct2D to create and draw a grayscale bitmap? Or is it really true that Direct2D doesn't support drawing of R8 or R16 bitmaps?
Any help is really appreciated as I don't know how to solve this. If I can't get these trivial basics to work, I think I'd have to stop digging deeper into Direct2D :-(.
And here are the code snippets of relevance. Please note that they might not compile since I ported this on the fly from my C++/CLI code to plain C++. Also, I threw away all error checking and other noise:
Device, Device Context and Swap Chain Creation (D3D and Direct2D):
// Direct2D factory creation
D2D1_FACTORY_OPTIONS options = {};
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
ID2D1Factory1* d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &d2dFactory);
// Direct3D device creation
const auto type = D3D_DRIVER_TYPE_HARDWARE;
const auto flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
ID3D11Device* d3dDevice;
D3D11CreateDevice(nullptr, type, nullptr, flags, nullptr, 0, D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
// Direct2D device creation
IDXGIDevice* dxgiDevice;
d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
ID2D1Device* d2dDevice;
d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);
// Swap chain creation
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
IDXGIAdapter* dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);
IDXGIFactory2* dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void **>(&dxgiFactory));
IDXGISwapChain1* swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc, nullptr, nullptr, &swapChain);
// Direct2D device context creation
const auto contextOptions = D2D1_DEVICE_CONTEXT_OPTIONS_NONE;
ID2D1DeviceContext* deviceContext;
d2dDevice->CreateDeviceContext(contextOptions, &deviceContext);
// create render target bitmap from swap chain
IDXGISurface* swapChainSurface;
swapChain->GetBuffer(0, __uuidof(swapChainSurface), reinterpret_cast<void **>(&swapChainSurface));
D2D1_BITMAP_PROPERTIES1 bitmapProperties;
bitmapProperties.dpiX = 0.0f;
bitmapProperties.dpiY = 0.0f;
bitmapProperties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
bitmapProperties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
bitmapProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
bitmapProperties.colorContext = nullptr;
ID2D1Bitmap1* swapChainBitmap = nullptr;
deviceContext->CreateBitmapFromDxgiSurface(swapChainSurface, &bitmapProperties, &swapChainBitmap);
// set swap chain bitmap as render target of D2D device context
deviceContext->SetTarget(swapChainBitmap);
D2D single-channel Bitmap Creation:
const D2D1_SIZE_U size = { 512, 512 };
const UINT32 pitch = 512;
D2D1_BITMAP_PROPERTIES1 d2dProperties;
ZeroMemory(&d2dProperties, sizeof(D2D1_BITMAP_PROPERTIES1));
d2dProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
d2dProperties.pixelFormat.format = DXGI_FORMAT_R8_UNORM;
char* sourceData = new char[512*512];
ID2D1Bitmap1* d2dBitmap;
deviceContext->CreateBitmap(size, sourceData, pitch, &d2dProperties, &d2dBitmap);
Bitmap drawing (FAILING):
deviceContext->BeginDraw();
D2D1_COLOR_F d2dColor = {};
deviceContext->Clear(d2dColor);
// THIS LINE FAILS WITH THE DEBUG BREAKPOINT IF SINGLE CHANNELED
deviceContext->DrawBitmap(d2dBitmap, nullptr, 1.0f, D2D1_INTERPOLATION_MODE_LINEAR, nullptr);
deviceContext->EndDraw();
swapChain->Present(1, 0);
From my little experience, Direct2D seems very limited, indeed.
Have you tried Direct2D effects (ID2D1Effect)? You can write your own [it seems comparatively complicated], or use one of the built-in effects [which is rather simple].
There is one called Color matrix effect (CLSID_D2D1ColorMatrix). It might work to have your DXGI_FORMAT_R8_UNORM (or DXGI_FORMAT_A8_UNORM, any single-channel format would do) as input (inputs to effects are ID2D1Image, and ID2D1Bitmap inherits from ID2D1Image). Then set D2D1_COLORMATRIX_PROP_COLOR_MATRIX to a matrix that copies the input channel to all output channels. I have not tried it, though; a rough sketch follows below.
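A minimal, untested sketch of that idea, assuming the deviceContext and the R8 d2dBitmap from the question's snippets, plus d2d1_1.h, d2d1effects.h and d2d1_1helper.h (link dxguid.lib if the effect CLSID is unresolved):
ID2D1Effect* colorMatrixEffect = nullptr;
deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &colorMatrixEffect);
colorMatrixEffect->SetInput(0, d2dBitmap);

// Rows are the input R, G, B, A channels plus a constant offset; columns are the output R, G, B, A.
// This broadcasts the red input channel to R, G and B and forces alpha to 1.
const D2D1_MATRIX_5X4_F grayscaleMatrix = D2D1::Matrix5x4F(
    1, 1, 1, 0,
    0, 0, 0, 0,
    0, 0, 0, 0,
    0, 0, 0, 0,
    0, 0, 0, 1);
colorMatrixEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, grayscaleMatrix);

deviceContext->BeginDraw();
deviceContext->DrawImage(colorMatrixEffect);
deviceContext->EndDraw();
swapChain->Present(1, 0);
colorMatrixEffect->Release();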

Using DEFAULT_GUI_FONT in high DPI Windows application

I have a Windows application which I want to look good at high DPI monitors. The application is using DEFAULT_GUI_FONT in lots of places, and the font created this way doesn't scale correctly.
Is there any simple way to fix this problem with not too much pain?
You need to get NONCLIENTMETRICS via SystemParametersInfo(SPI_GETNONCLIENTMETRICS, ...) and then use its LOGFONT data to create your own font. Alternatively, you can query SystemParametersInfo(SPI_GETICONTITLELOGFONT) and use that.
The recommended fonts for different purposes can be obtained from the NONCLIENTMETRICS structure.
For automatically DPI-scaled fonts (Windows 10 1607+, must be per-monitor DPI-aware):
// Your window's handle
HWND window;

// Get the DPI your window should scale to
UINT dpi = GetDpiForWindow(window);

// Obtain the recommended fonts, which are already correctly scaled for the current DPI
NONCLIENTMETRICSW non_client_metrics;
non_client_metrics.cbSize = sizeof(non_client_metrics);
if (!SystemParametersInfoForDpi(SPI_GETNONCLIENTMETRICS, sizeof(non_client_metrics), &non_client_metrics, 0, dpi))
{
    // Error handling
}

// Create the appropriate font(s)
HFONT message_font = CreateFontIndirectW(&non_client_metrics.lfMessageFont);
if (!message_font)
{
    // Error handling
}
For older Windows versions you can use the system-wide DPI and scale the font manually (Windows 7+, must be system DPI-aware):
// Your window's handle
HWND window;

// Obtain the recommended fonts (lfHeight will be overridden with a manually scaled value below)
NONCLIENTMETRICSW non_client_metrics;
non_client_metrics.cbSize = sizeof(non_client_metrics);
if (!SystemParametersInfoW(SPI_GETNONCLIENTMETRICS, sizeof(non_client_metrics), &non_client_metrics, 0))
{
    // Error handling
}

// Get the system-wide DPI
HDC hdc = GetDC(nullptr);
if (!hdc)
{
    // Error handling
}
UINT dpi = GetDeviceCaps(hdc, LOGPIXELSY);
ReleaseDC(nullptr, hdc);

// Scale the font(s)
constexpr UINT font_size = 12;
non_client_metrics.lfMessageFont.lfHeight = -static_cast<LONG>((font_size * dpi) / 72);

// Create the appropriate font(s)
HFONT message_font = CreateFontIndirectW(&non_client_metrics.lfMessageFont);
if (!message_font)
{
    // Error handling
}
NONCLIENTMETRICS has also many other fonts in it. Make sure to choose the right one for your purpose.
You should set the DPI-awareness level in your application manifest as described here for best compatibility.
WinForms in the .NET framework internally converts the DEFAULT_GUI_FONT (which is in fact used to get the default font for WinForms Forms and Controls in most situations) by scaling its height from pixels (which is the unit GDI fonts use natively) to Points (which is preferred by GDI+). Drawing text using points implies that the physical size of the rendered text depends on the monitor DPI setting.
System.Drawing.Font.SizeInPoints:
float emHeightInPoints;

IntPtr screenDC = UnsafeNativeMethods.GetDC(NativeMethods.NullHandleRef);
try {
    using (Graphics graphics = Graphics.FromHdcInternal(screenDC)) {
        float pixelsPerPoint = (float)(graphics.DpiY / 72.0);
        float lineSpacingInPixels = this.GetHeight(graphics);
        float emHeightInPixels = lineSpacingInPixels * FontFamily.GetEmHeight(Style) / FontFamily.GetLineSpacing(Style);
        emHeightInPoints = emHeightInPixels / pixelsPerPoint;
    }
}
finally {
    UnsafeNativeMethods.ReleaseDC(NativeMethods.NullHandleRef, new HandleRef(null, screenDC));
}

return emHeightInPoints;
Obviously you cannot use this directly as it's C#. But besides that, this article suggests that you should scale pixel dimensions assuming a 96 dpi design, and use GetDpiForWindow to determine the actual DPI. Note that the "72" in the formula above has nothing to do with the monitor DPI setting, it comes from the fact that .NET likes to use fonts specified in points rather than pixels (otherwise just scale the LOGFONT's height by DPIy/96).
This site suggests something similar, but with GetDpiForMonitor.
I cannot say for sure whether the general approach of manually scaling the font size according to some DPI-dependent factor is robust and future-proof for scaling fonts (it seems to be the way to go for scaling non-font GUI elements, though). However, since .NET basically also just calculates some magic factor based on some sort of DPI value, it's probably a pretty good guess.
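As a concrete illustration of that manual approach, here is a minimal sketch, assuming a LOGFONTW named lf whose lfHeight was designed against 96 DPI, a valid window handle hwnd, and Windows 10 1607+ for GetDpiForWindow:
// Scale a 96-DPI design font to the DPI of the monitor the window is currently on.
UINT dpi = GetDpiForWindow(hwnd);
LOGFONTW scaled = lf;
scaled.lfHeight = MulDiv(lf.lfHeight, static_cast<int>(dpi), 96);
HFONT font = CreateFontIndirectW(&scaled);   // cache this handle and DeleteObject() it when done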
Also, you'll want to cache that HFONT. HFONT/LOGFONT conversions are not negligible.
See also (references):
WinForms gets its default font using GetStockObject(DEFAULT_GUI_FONT) (there are a few exceptions, though, mostly obsolete):
IntPtr handle = UnsafeNativeMethods.GetStockObject(NativeMethods.DEFAULT_GUI_FONT);
try {
    Font fontInWorldUnits = null;

    // SECREVIEW : We know that we got the handle from the stock object,
    //           : so this is always safe.
    //
    IntSecurity.ObjectFromWin32Handle.Assert();
    try {
        fontInWorldUnits = Font.FromHfont(handle);
    }
    finally {
        CodeAccessPermission.RevertAssert();
    }

    try {
        defaultFont = FontInPoints(fontInWorldUnits);
    }
    finally {
        fontInWorldUnits.Dispose();
    }
}
catch (ArgumentException) {
}
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/SystemFonts.cs,355
The HFONT is converted to GDI+, and then the GDI+ font retrieved this way is transformed using FontInPoints:
private static Font FontInPoints(Font font) {
    return new Font(font.FontFamily, font.SizeInPoints, font.Style, GraphicsUnit.Point, font.GdiCharSet, font.GdiVerticalFont);
}
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/SystemFonts.cs,452
The content of the SizeInPoints getter is already listed above.
https://referencesource.microsoft.com/#System.Drawing/commonui/System/Drawing/Advanced/Font.cs,992

Resize Windows OnScreen keyboard programmatically

I wonder if it is possible to resize Windows OnScreen-keyboard in my program? What Windows methods to use for that?
Simply use the standard Win32 API.
I know this question is old, but the given answer is really short. To add value to this topic, I could not resist adding the following information:
You could do something like this: the SWP_NOMOVE flag makes SetWindowPos ignore iPosX and iPosY, so only the width and height change. I have not tested this code, though.
HWND hWndOSK = FindWindow("IPTip_Main_Window", NULL); // Only the class is known, the window has no name

int iPosX = 0;
int iPosY = 0;
int iWidth = 1000;
int iHeight = 600;

if (hWndOSK != NULL)
{
    // Window is up
    if (!SetWindowPos(hWndOSK, HWND_TOPMOST, iPosX, iPosY, iWidth, iHeight, SWP_NOMOVE))
    {
        // Something went wrong, do some error handling
    }
}
SetWindowPos: http://msdn.microsoft.com/en-us/library/ms633545.aspx
FindWindow: http://msdn.microsoft.com/en-us/library/windows/desktop/ms633499(v=vs.85).aspx
