When using CachedBitmaps in GDI+, there is graphical corruption if the Windows video "Hardware Acceleration" slider is lowered far enough that DirectDraw is disabled:
There are six levels of hardware acceleration:
Disable all accelerations
Disable all but basic accelerations. (Default on server machines)
Disable all DirectDraw and Direct3D accelerations, as well as all cursor and advanced accelerations
Disable all cursor and advanced drawing accelerations
Disable cursor and bitmap accelerations
All accelerations are enabled (Default on desktop machines)
If DirectDraw is disabled, then using DrawCachedBitmap in GDI+ results in graphical corruption. It's easy enough for me to fall back to the slower DrawImage() API when DirectDraw is not enabled - but I have to be able to detect that DirectDraw is disabled.
How can I programmatically check if DirectDraw is enabled?
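For reference, the fallback I have in mind looks roughly like this (just a sketch; ddrawEnabled is the hypothetical flag I still need a way to set):
#include <windows.h>
#include <gdiplus.h>
// Sketch of the intended fallback; "ddrawEnabled" would be filled in by
// whatever detection method ends up working.
void DrawMyBitmap(Gdiplus::Graphics& g, Gdiplus::CachedBitmap& cached,
                  Gdiplus::Image& original, bool ddrawEnabled)
{
    if (ddrawEnabled)
        g.DrawCachedBitmap(&cached, 0, 0);   // fast path
    else
        g.DrawImage(&original, 0, 0);        // slower, but safe without DirectDraw
}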
The question is: how does dxdiag do this?
See also
KB191660 - DirectDraw or Direct3D option is unavailable (archive)
If you download the latest DirectX SDK (I'm sure older SDKs have similar examples), there is an example of querying DxDiag info.
The example is located at (SDK Root)\Samples\C++\Misc\DxDiagReport
In dxdiaginfo.cpp, the methods of note are:
CDxDiagInfo::CDxDiagInfo
CDxDiagInfo::Init
CDxDiagInfo::QueryDxDiagViaDll
CDxDiagInfo::GetDisplayInfo
If you run the program it outputs a giant list of values. I think the value you're interested in is pDisplayInfo->m_szDDStatusEnglish.
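If you don't want to pull in the whole sample, the core of it is a COM query against the DxDiag provider. Here is a rough sketch of that path; the container and property names ("DxDiag_DisplayDevices", "0", "szDDStatusEnglish") are taken from the sample and should be treated as assumptions to verify, and COM is assumed to be initialized already:
#include <dxdiag.h>    // link with dxguid.lib
#include <objbase.h>

// Sketch: read the DirectDraw status string for the first display device.
IDxDiagProvider* provider = NULL;
HRESULT hr = CoCreateInstance(CLSID_DxDiagProvider, NULL, CLSCTX_INPROC_SERVER,
                              IID_IDxDiagProvider, (void**)&provider);

DXDIAG_INIT_PARAMS params = { 0 };
params.dwSize = sizeof(params);
params.dwDxDiagHeaderVersion = DXDIAG_DX9_SDK_VERSION;
params.bAllowWHQLChecks = FALSE;
hr = provider->Initialize(&params);

IDxDiagContainer *root = NULL, *displays = NULL, *device0 = NULL;
hr = provider->GetRootContainer(&root);
hr = root->GetChildContainer(L"DxDiag_DisplayDevices", &displays);
hr = displays->GetChildContainer(L"0", &device0);   // first display device

VARIANT var;
VariantInit(&var);
hr = device0->GetProp(L"szDDStatusEnglish", &var);  // e.g. "Enabled" / "Disabled"
// ...use var.bstrVal, then VariantClear(&var) and Release() everything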
You could check the registry for the acceleration slider value.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{'some hex string'}\0000\Acceleration.Level
You're probably going to have to loop through all of the subkeys under Video, as there is generally more than one entry.
Acceleration.Level Values
5 Disable all accelerations
4 Disable all but basic accelerations. (Default on server machines)
3 Disable all DirectDraw and Direct3D accelerations, as well as all cursor and advanced accelerations
2 Disable all cursor and advanced drawing accelerations
1 Disable cursor and bitmap accelerations
0 All accelerations are enabled (Default on desktop machines)
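For completeness, here is a rough C++ sketch of reading that value (the GUID subkey names vary per machine, so you have to enumerate them; treat the exact registry layout as an assumption to verify on your target OS):
#include <windows.h>

// Sketch: enumerate HKLM\SYSTEM\CurrentControlSet\Control\Video\{GUID}\0000
// and read Acceleration.Level (a missing value is treated as "fully enabled").
DWORD GetAccelerationLevel()
{
    HKEY videoKey;
    DWORD level = 0;   // default: all accelerations enabled
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                      L"SYSTEM\\CurrentControlSet\\Control\\Video",
                      0, KEY_READ, &videoKey) != ERROR_SUCCESS)
        return level;

    WCHAR guidName[256];
    for (DWORD i = 0; ; ++i)
    {
        DWORD nameLen = 256;
        if (RegEnumKeyExW(videoKey, i, guidName, &nameLen,
                          NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
            break;

        WCHAR subPath[512];
        wsprintfW(subPath, L"%s\\0000", guidName);

        HKEY deviceKey;
        if (RegOpenKeyExW(videoKey, subPath, 0, KEY_READ, &deviceKey) == ERROR_SUCCESS)
        {
            DWORD value = 0, size = sizeof(value), type = 0;
            if (RegQueryValueExW(deviceKey, L"Acceleration.Level", NULL, &type,
                                 (LPBYTE)&value, &size) == ERROR_SUCCESS
                && type == REG_DWORD && value > level)
            {
                level = value;   // keep the most restrictive level found
            }
            RegCloseKey(deviceKey);
        }
    }
    RegCloseKey(videoKey);
    return level;   // 3 or higher means DirectDraw is disabled (see table above)
}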
Update:
Here's an older thread about programmatically changing/checking the acceleration level:
http://www.autoitscript.com/forum/topic/61185-hardware-acceleration/
You could query an IDirectDraw interface and see what it does. I assume it will fail if hardware acceleration is turned off, but you might want to test GetCaps() or TestCooperativeLevel().
#include <objbase.h>
#include <ddraw.h>   // link with ddraw.lib and dxguid.lib

LPDIRECTDRAW7 lpdd7 = NULL; // DirectDraw 7.0 interface pointer
// first initialize COM, this will load the COM libraries
// if they aren't already loaded
if (FAILED(CoInitialize(NULL)))
{
// error
} // end if
// Create the DirectDraw object by using the
// CoCreateInstance() function
if (FAILED(CoCreateInstance(CLSID_DirectDraw7,
                            NULL, CLSCTX_ALL,
                            IID_IDirectDraw7,
                            (void**)&lpdd7)))
{
// error
}
// now before using the DirectDraw object, it must
// be initialized using the initialize method
if (FAILED(lpdd7->Initialize(NULL)))
{
// error
}
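// (Added sketch) As suggested above, you could also ask the driver for its
// capabilities once the object is initialized. DDCAPS_NOHARDWARE is the
// ddraw.h flag for "no hardware support" - treat this check as an assumption
// to verify on a machine with acceleration actually disabled.
DDCAPS ddcaps;
ZeroMemory(&ddcaps, sizeof(ddcaps));
ddcaps.dwSize = sizeof(ddcaps);
if (SUCCEEDED(lpdd7->GetCaps(&ddcaps, NULL)) &&
    (ddcaps.dwCaps & DDCAPS_NOHARDWARE))
{
    // no hardware acceleration available - fall back to DrawImage()
}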
lpdd7->Release();
lpdd7 = NULL; // set to NULL for safety
// now that we're done with COM, uninitialize it
CoUninitialize();
Unfortunately the DirectDraw docs are no longer included in the SDKs. You might need an older version to get the samples and header files.
Related
My application happens to be written in Python using pygame, which wraps SDL, but I'm imagining that this is probably a more-general question to do with the Windows API.
In some of my Python applications, I want pixel-for-pixel control under Windows 10 even at high resolutions. I want to be able to ensure, for example, that if my Surface Pro 3 has a native resolution of 2160x1440, then I can enter full-screen mode with those dimensions and present a full-screen image of exactly those dimensions.
The barrier to this is "DPI scaling". By default, under Windows' Settings -> Display, the value of "Change the size of text, apps, and other items" is "150% (Recommended)" and the result is that I only see 2/3 of my image. I have discovered how to fix this behaviour...
systemwide, by moving that slider down to 100% (but that's undesirable for most other applications)
just for python.exe and pythonw.exe, by going to those executables' "Properties" dialogs, Compatibility tab, and clicking "Disable display scaling on high DPI settings". I can do this for me alone, or for all users. I can also automate this process by setting the appropriate keys in the registry programmatically. Or via .exe.manifest files (which also seems to require a global setting change, to prefer external manifests, with possible side-effects on other applications).
My question is: can I do this from inside my program on a per-launch basis, before I open my graphics window? I, or anyone using my software, won't necessarily want this setting enabled for all Python applications ever—we might want it just when running particular Python programs. I'm imagining there might be a winapi call (or failing that something inside SDL, wrapped by pygame) that could achieve this, but so far my research is drawing a blank.
Here's the answer I was looking for, based on comments by IInspectable and andlabs (many thanks):
import ctypes
# Query DPI Awareness (Windows 10 and 8)
awareness = ctypes.c_int()
errorCode = ctypes.windll.shcore.GetProcessDpiAwareness(0, ctypes.byref(awareness))
print(awareness.value)
# Set DPI Awareness (Windows 10 and 8)
errorCode = ctypes.windll.shcore.SetProcessDpiAwareness(2)
# the argument is the awareness level, which can be 0, 1 or 2:
# for 1-to-1 pixel control I seem to need it to be non-zero (I'm using level 2)
# Set DPI Awareness (Windows 7 and Vista)
success = ctypes.windll.user32.SetProcessDPIAware()
# behaviour on later OSes is undefined, although when I run it on my Windows 10 machine, it seems to work with effects identical to SetProcessDpiAwareness(1)
The awareness levels are defined as follows:
typedef enum _PROCESS_DPI_AWARENESS {
PROCESS_DPI_UNAWARE = 0,
/* DPI unaware. This app does not scale for DPI changes and is
always assumed to have a scale factor of 100% (96 DPI). It
will be automatically scaled by the system on any other DPI
setting. */
PROCESS_SYSTEM_DPI_AWARE = 1,
/* System DPI aware. This app does not scale for DPI changes.
It will query for the DPI once and use that value for the
lifetime of the app. If the DPI changes, the app will not
adjust to the new DPI value. It will be automatically scaled
up or down by the system when the DPI changes from the system
value. */
PROCESS_PER_MONITOR_DPI_AWARE = 2
/* Per monitor DPI aware. This app checks for the DPI when it is
created and adjusts the scale factor whenever the DPI changes.
These applications are not automatically scaled by the system. */
} PROCESS_DPI_AWARENESS;
Level 2 sounds most appropriate for my goal although 1 will also work provided there's no change in system resolution / DPI scaling.
SetProcessDpiAwareness will fail with errorCode = -2147024891 = 0x80070005 = E_ACCESSDENIED if it has previously been called for the current process (and that includes being called by the system when the process is launched, due to a registry key or .manifest file)
I don't have any Deep-Color capable hardware attached to my computer, so I haven't been able to experiment myself, and searching online for "win32 deep-color" or "win32 10-bit color" (or 30-bit, 48-bit or 64-bit) yields nothing relevant or recent. The top result is still this NVIDIA PDF from 2009: https://www.nvidia.com/docs/IO/40049/TB-04701-001_v02_new.pdf - it describes using OpenGL and an NVIDIA API for displaying images with more than 8-bits per channel.
I understand how using OpenGL allows 30-bit color images to be displayed: it effectively bypasses the operating system and the OpenGL surface is rendered in deep-color on the GPU and sent directly to the monitor in an appropriate format over DisplayPort or HDMI.
But what options are there outside of OpenGL?
In Win32, after you create a window with CreateWindow, you render it by handling the WM_PAINT message and calling BeginPaint, which gives you a handle to a GDI device context, which cannot be more than 32 bpp (8 bits per channel).
While GDI ostensibly abstracts away implementation details of the rendering device, including color depth, it is impossible to specify, for example, a 10-bit-per-channel RGB value (the COLORREF type is hardcoded as a 32-bit DWORD with 8 bits per channel) - a leaky abstraction.
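To make the limitation concrete, here is the shape of a typical WM_PAINT handler; every colour that reaches GDI this way goes through an 8-bit-per-channel COLORREF (this is just an illustration of the API, not a workaround):
// inside the window procedure (hwnd is the WndProc parameter)
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);            // GDI device context

    // COLORREF packs 8 bits per channel into a DWORD (0x00BBGGRR);
    // there is simply no room to express a 10-bit channel value here.
    COLORREF c = RGB(255, 128, 0);
    HBRUSH brush = CreateSolidBrush(c);
    FillRect(hdc, &ps.rcPaint, brush);
    DeleteObject(brush);

    EndPaint(hwnd, &ps);
    return 0;
}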
Does this mean it is impossible to display 30bpp / Deep-color content in the Windows Desktop using a program handling WM_PAINT and that OpenGL is the only way?
And what would happen if you attempted to blit from an in-memory OpenGL rendering buffer back to the window surface? (i.e. what happens if you press PrintScreen while displaying a Deep-Color BluRay disc in a BD player, or displaying 30-bit content in Photoshop?)
During the development of my engine, I'm trying to implement a feature that enables hot-swapping between OpenGL and DirectX. Currently I'm testing on the Win32 platform, and came across the following problem:
I implemented both renderers (OpenGL 3.0 and Direct3D 11), and both work fine alone. The swapping mechanism is the following:
Destroy the current rendering context, and build up the new one. For example: release all DirectX objects, and then create an OpenGL context via WGL. I'm trying to implement this using only one window (HWND).
Swapping from OpenGL 3.0 to DirectX11 works. (After destroying OpenGL, DirectX renders fine)
Destroying OpenGL and then recreating OpenGL again, works. Same with DirectX.
When I try to swap from DirectX to OpenGL, the window stops displaying the newly drawn content and only shows the last DirectX frame.
To construct the OpenGL context I'm using WGL. The class for the window was created with the CS_OWNDC style. I'm using SwapBuffers to flip the window buffers. Before setting up the context, I use SetPixelFormat with the previously returned value from ChoosePixelFormat. The created context is version 3.0, ensured via wglCreateContextAttribsARB.
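For reference, a minimal sketch of that kind of setup (not my exact code; error handling omitted, and the ARB constants come from <GL/wglext.h>):
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // from the Khronos registry

// Minimal sketch of the WGL setup described above.
PIXELFORMATDESCRIPTOR pfd = { 0 };
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;

HDC hdc = GetDC(hwnd);                        // private DC thanks to CS_OWNDC
int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);            // note: allowed only once per window

// A temporary legacy context is needed to resolve the ARB entry point.
HGLRC temp = wglCreateContext(hdc);
wglMakeCurrent(hdc, temp);
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
wglMakeCurrent(hdc, ctx);
wglDeleteContext(temp);
// ...render... then SwapBuffers(hdc) each frame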
Additional information:
All of the DirectX references are released, this was checked by calling ReportLiveDeviceObjects and checking the return value of ID3D11Device1::Release (0). ID3D11DeviceContext1::ClearState and Flush were called to ensure object destruction.
None of the OpenGL methods report an error via glGetError; this is checked after every call. The same is true for all OS and WGL calls.
The OpenGL rendering calls are executing as expected, for example:
OpenGL rendering with 150 fps
Swap to DirectX, render with 60 fps (VSYNC)
Swap back to OpenGL, rendering again with 150 fps (not more)
There are other scenarios where OpenGL renders with more than 150 fps, so the rendering calls are executing properly.
My guess is that the flipping of the buffers somehow doesn't work; however, SwapBuffers returns TRUE anyway.
I tried using SaveDC and RestoreDC before and after using DirectX, but this did not lead to a solution.
Using wglSwapLayerBuffers instead of SwapBuffers gives no change.
Can I somehow restore the HWND, or HDC to the original state, or do you guys have any idea why this might happen?
Guess I posted my question too soon; however, this is how I solved it.
I dug around the documentation for DirectX, and for the function CreateSwapChainForHwnd I found the following:
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain.
I was using DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL in my swap chain descriptor, and this could mean that DirectX sets up a flip swap chain for the window, so when I try to use the window with OpenGL, swapping the buffers somehow fails.
The solution for this, is to not use FLIP mode for creating the swap chain:
DXGI_SWAP_CHAIN_DESC1 scd = {};                      // remaining fields set as usual
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;           // i.e. not a FLIP_* swap effect
scd.Scaling    = DXGI_SCALING_ASPECT_RATIO_STRETCH;
You have to set the Scaling to something other than DXGI_SCALING_NONE, or the creation will fail.
The interesting part is that DirectX still does not properly destroy the flip model on the window, although I did everything the documentation suggested (the ClearState and Flush calls).
See the Remarks section of CreateSwapChainForHwnd.
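For reference, the relevant part of the swap-chain creation now looks roughly like this (a sketch; dxgiFactory2, d3d11Device and hwnd stand in for the actual factory, device and window objects, headers <d3d11.h> and <dxgi1_2.h>):
// Sketch of the non-flip swap chain creation (device/factory creation omitted).
DXGI_SWAP_CHAIN_DESC1 scd = {};
scd.Width            = 0;                             // 0 = use the window client size
scd.Height           = 0;
scd.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
scd.SampleDesc.Count = 1;
scd.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scd.BufferCount      = 2;
scd.SwapEffect       = DXGI_SWAP_EFFECT_DISCARD;            // not a FLIP_* mode
scd.Scaling          = DXGI_SCALING_ASPECT_RATIO_STRETCH;   // anything but NONE

IDXGISwapChain1* swapChain = NULL;
HRESULT hr = dxgiFactory2->CreateSwapChainForHwnd(
    d3d11Device, hwnd, &scd, NULL, NULL, &swapChain);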
Edit: I found this question after some time. If anybody still has some idea, how to revert back to using GDI again instead of the DWM backbuffer, it is greatly appreciated.
I'm currently writing an OpenGL renderer and am part-way through writing some classes for enumerating display adaptors, devices and modes for use in drop-down lists.
I'm using EnumDisplayDevices to get the adaptors and then EnumDisplaySettings for each device, giving me bpp, width, height and refresh rate. However I'm not sure how to find out which modes are available full-screen (there doesn't appear to be a flag for it in the DEVMODE structure). Can I assume all modes listed can in-principle be instantiated full-screen?
As a follow up question, is this approach to device enumeration generally the best way of getting the required information on Windows?
OpenGL does not make this distinction between windowed and fullscreen mode. If you want an OpenGL program to be fullscreen, you just set the window to be top-level, borderless, without decoration, staying on top and at maximum size.
The above is actually a dumb question. By definition, windowed mode must use the current display settings. All other modes must be available full-screen (provided the OS supports them; e.g. 640x480 is not advisable on Vista/7).
Hmmph, not correct at all, and with an attitude too. There are a variety of functions that can be used.
SetPixelFormat, ChoosePixelFormat, ChangeDisplaySettings.
The PixelFormat functions will let you enumerate the available modes. ChangeDisplaySettings will allow you to set whatever screen mode (including bit depth) your app wants. Look them up on MSDN.
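As a concrete sketch of the EnumDisplaySettings / ChangeDisplaySettings route (error handling omitted; the chosen mode values are just examples):
#include <windows.h>
#include <stdio.h>

// Sketch: enumerate the display modes of the primary device, then switch to
// one of them "full-screen" via ChangeDisplaySettings.
DEVMODE dm = { 0 };
dm.dmSize = sizeof(dm);
for (DWORD i = 0; EnumDisplaySettings(NULL, i, &dm); ++i)
{
    printf("%lux%lu, %lu bpp, %lu Hz\n",
           dm.dmPelsWidth, dm.dmPelsHeight,
           dm.dmBitsPerPel, dm.dmDisplayFrequency);
}

// Switch to a chosen mode; the window itself is then made borderless and
// top-level to cover the screen, as described in the answer above.
DEVMODE chosen = { 0 };
chosen.dmSize       = sizeof(chosen);
chosen.dmPelsWidth  = 1280;
chosen.dmPelsHeight = 720;
chosen.dmBitsPerPel = 32;
chosen.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
if (ChangeDisplaySettings(&chosen, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL)
{
    // mode not supported - fall back or pick another entry from the list
}
// ChangeDisplaySettings(NULL, 0) restores the original mode when done.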
I'm calling this method:
http://msdn.microsoft.com/en-us/library/dd371264(VS.85).aspx
The call fails with E_NOINTERFACE. The documentation is especially unhelpful as to why this may happen. I've enabled all of the DirectX 11 debug stuff and that's the best I got. I know that I have a valid IDXGISurface1* (also tried IDXGISurface) and the other parameters are set correctly. Any ideas as to why this call may fail?
Edit:
I also am having problems creating D3D11 devices. If I pass nullptr as the IDXGIAdapter* argument in D3D11CreateDeviceAndSwapChain, it works fine, but if I enumerate the adapters myself and pass in a pointer (the only one returned), it fails with invalid argument. The MSDN documentation explicitly says that if nullptr is passed, then the system uses the first return from EnumAdapters1. I am running a DX11 system.
Direct2D only works when you create a Direct3D 10.1 device, but it can share surfaces with Direct3D 11. All you need to do is create both devices and render all of your Direct2D content to a texture that you share between them. I use this technique in my own applications to use Direct2D with Direct3D 11. It incurs a slight cost, but it is small and constant per frame.
A basic outline of the process you will need to use is:
Create your Direct3D 11 device like you do normally.
Create a texture with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX option in order to allow access to the IDXGIKeyedMutex interface.
Use GetSharedHandle to get a handle to the texture that can be shared among devices.
Create a Direct3D 10.1 device, ensuring that it is created on the same adapter.
Use OpenSharedResource function on the Direct3D 10.1 device to get a version of the texture for Direct3D 10.1.
Get access to the D3D10 KeyedMutex interface for the texture.
Use the Direct3D 10.1 version of the texture to create the RenderTarget using Direct2D.
When you want to render with D2D, use the keyed mutex to lock the texture for the D3D10 device. Then, acquire it in D3D11 and render the texture like you were probably already trying to do.
It's not trivial, but it works well, and it is the way that they intended you to interoperate between them. Windows 8 looks like it will introduce full D3D11 compatibility, so it will be just as simple as you expect.
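Here is a condensed sketch of those steps (error handling omitted; width, height, adapter, d3d11Device and d2dFactory are placeholders for objects you already have, and the exact flag and driver-type combinations should be verified against the headers):
#include <d3d11.h>
#include <d3d10_1.h>
#include <d2d1.h>
#include <d2d1helper.h>
// link with d3d11.lib, d3d10_1.lib, d2d1.lib, dxgi.lib

// 1. Shareable texture on the Direct3D 11 device.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
ID3D11Texture2D* tex11 = NULL;
d3d11Device->CreateTexture2D(&desc, NULL, &tex11);

// 2. Shared handle for the texture.
IDXGIResource* dxgiRes = NULL;
tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
HANDLE shared = NULL;
dxgiRes->GetSharedHandle(&shared);

// 3. Direct3D 10.1 device on the same adapter, opening the shared texture.
ID3D10Device1* d3d10Device = NULL;
D3D10CreateDevice1(adapter, D3D10_DRIVER_TYPE_HARDWARE, NULL,
                   D3D10_CREATE_DEVICE_BGRA_SUPPORT,
                   D3D10_FEATURE_LEVEL_10_1, D3D10_1_SDK_VERSION, &d3d10Device);
IDXGISurface* surface10 = NULL;
d3d10Device->OpenSharedResource(shared, __uuidof(IDXGISurface), (void**)&surface10);

// 4. Keyed mutexes on both sides, and a D2D render target on the 10.1 surface.
IDXGIKeyedMutex *mutex10 = NULL, *mutex11 = NULL;
surface10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex10);
tex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex11);

D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
ID2D1RenderTarget* d2dTarget = NULL;
d2dFactory->CreateDxgiSurfaceRenderTarget(surface10, &props, &d2dTarget);

// Per frame: acquire on the D3D10 side, draw D2D, release, then acquire on
// the D3D11 side and use tex11 like any other shader resource.
mutex10->AcquireSync(0, INFINITE);
d2dTarget->BeginDraw();
// ... Direct2D drawing ...
d2dTarget->EndDraw();
mutex10->ReleaseSync(1);

mutex11->AcquireSync(1, INFINITE);
// ... draw tex11 with Direct3D 11 ...
mutex11->ReleaseSync(0);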
Direct2D uses D3D10 devices, not D3D11 devices. The D3D11 device is probably what is being reported as lacking an interface by that E_NOINTERFACE.