D3D Device Failure While Screen Is Locked - windows

I have a problem caused by a failure in IDirect3D9::CreateDevice(). It fails when the following code is executed while the screen is locked under Windows 7. Due to requirements, I need to be able to create a device while the screen is locked.
I get a D3DERR_INVALIDCALL error when CreateDevice is called with the following parameters. I've experimented extensively with the HWND being used and double-checked that it is valid. I've also tried various tweaks to the presentation parameters, to no avail. Has anyone encountered this before, or does anyone have a better idea of what might be causing the invalid-call return?
Again, this failure only occurs with a locked screen; in any other tested state, the call succeeds.
D3DPRESENT_PARAMETERS pp;
ZeroMemory( &pp, sizeof(D3DPRESENT_PARAMETERS) );
pp.BackBufferFormat = D3DFMT_UNKNOWN;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
pp.Windowed = TRUE;

HWND focusWndHnd = GetConsoleWindow();
if ( focusWndHnd == NULL && pp.hDeviceWindow == NULL )  // pp.hDeviceWindow is still NULL (pp was zeroed above)
{
    focusWndHnd = ::GetDesktopWindow();
}

IDirect3DDevice9* pd3dDevice = NULL;
IDirect3D9* pD3D = Direct3DCreate9( D3D_SDK_VERSION );
HRESULT hr = pD3D->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_NULLREF, focusWndHnd,
                                 D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE,
                                 &pp, &pd3dDevice );

The legacy Direct3D 9 interface considers the 'secure desktop' to be a lost-device scenario. Using a WDDM-aware version of Direct3D (Direct3D9Ex, Direct3D 10.x, or Direct3D 11.x) avoids this problem.
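For illustration, a minimal sketch of the Direct3D9Ex path, reusing pp and focusWndHnd from the question (error handling omitted):

// Sketch: create the device through the WDDM-aware Direct3D9Ex path.
// Assumes the same D3DPRESENT_PARAMETERS pp and HWND focusWndHnd as above.
IDirect3D9Ex* pD3DEx = NULL;
HRESULT hr = Direct3DCreate9Ex( D3D_SDK_VERSION, &pD3DEx );
if ( SUCCEEDED(hr) )
{
    IDirect3DDevice9Ex* pd3dDeviceEx = NULL;
    hr = pD3DEx->CreateDeviceEx( D3DADAPTER_DEFAULT, D3DDEVTYPE_NULLREF, focusWndHnd,
                                 D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE,
                                 &pp, NULL /* windowed: no fullscreen display mode */,
                                 &pd3dDeviceEx );
}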

Could it be that you need a BackBufferFormat other than D3DFMT_UNKNOWN, since only windowed applications are allowed to use that value, as OJ stated here?

My memory is hazy, but I believe this is a known limitation ("by design") with D3D with respect to the lock screen (and running as a service).
Even if you could create the D3D device, you won't be able to draw on top of the lock screen. So you'll probably be better off designing your app such that it defers the D3D device creation until after the screen becomes unlocked.
Use WTSRegisterSessionNotification to register for notifications of when the screen becomes locked or unlocked.
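A sketch of that pattern (the window procedure below is hypothetical; register with NOTIFY_FOR_THIS_SESSION after creating your window and unregister before destroying it):

#include <windows.h>
#include <wtsapi32.h>   // link with Wtsapi32.lib

// Once after window creation:
//   WTSRegisterSessionNotification(hWnd, NOTIFY_FOR_THIS_SESSION);
// and WTSUnRegisterSessionNotification(hWnd) before the window is destroyed.

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_WTSSESSION_CHANGE)
    {
        switch (wParam)
        {
        case WTS_SESSION_LOCK:
            // session is now locked: defer or tear down the D3D device here
            break;
        case WTS_SESSION_UNLOCK:
            // session unlocked: safe to (re)create the D3D device here
            break;
        }
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}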

Thanks to Chuck Walbourn's answer I have solved a related issue: D3D device initialization fails as soon as the elevation-prompt secure desktop is active. In my case I received a D3DERR_NOTAVAILABLE error during the secure session. After replacing IDirect3D9* with IDirect3D9Ex* and Direct3DCreate9 with Direct3DCreate9Ex, initialization finished successfully!
Additionally, I should stress that Chuck's answer does not address Kent's answer directly, only a related issue: if I have understood it correctly, Kent's scenario refers to the WTS_SESSIONSTATE_LOCK session that is entered when the workstation is locked (Win+L). In Kent's scenario I haven't experienced a problem with D3D initialization in a locked session.

Related

Why is WebViewControlProcess.CreateWebViewControlAsync() never completing?

I’m trying to write some Rust code that uses Windows.Web.UI.Interop.WebViewControl (which is a Universal Windows Platform out-of-process wrapper expressly designed so Win32 apps can use EdgeHTML), and it’s all compiling, but not working properly at runtime.
The relevant code boils down to this, using the winit, winapi and winrt crates:
use winit::os::windows::WindowExt;
use winit::{EventsLoop, WindowBuilder};
use winapi::winrt::roapi::{RoInitialize, RO_INIT_SINGLETHREADED};
use winapi::shared::winerror::S_OK;
use winrt::{RtDefaultConstructible, RtAsyncOperation};
use winrt::windows::foundation::Rect;
use winrt::windows::web::ui::interop::WebViewControlProcess;

fn main() {
    assert!(unsafe { RoInitialize(RO_INIT_SINGLETHREADED) } == S_OK);

    let mut events_loop = EventsLoop::new();
    let window = WindowBuilder::new()
        .build(&events_loop)
        .unwrap();

    WebViewControlProcess::new()
        .create_web_view_control_async(
            window.get_hwnd() as usize as i64,
            Rect {
                X: 0.0,
                Y: 0.0,
                Width: 800.0,
                Height: 600.0,
            },
        )
        .expect("Creation call failed")
        .blocking_get()
        .expect("Creation async task failed")
        .expect("Creation produced None");
}
The WebViewControlProcess instantiation works, and the CreateWebViewControlAsync function does seem to care about the value it received as host_window_handle (pass it 0, or one off from the actual HWND value, and it complains). Yet the IAsyncOperation stays determinedly at AsyncStatus.Started (0), and so the blocking_get() call hangs indefinitely.
A full, runnable demonstration of the issue (with a bit more instrumentation).
I get the feeling that the WebViewControlProcess is at fault: its ProcessId is stuck at 0, and it doesn't look to have spawned any subprocess. The ProcessExited event does not seem to fire (I attached a handler to it immediately after instantiation; is there an opportunity for it to fire before that?). Calling Terminate() fails, as one might expect in such a situation, with E_FAIL.
Have I missed some sort of initialization for using Windows.Web.UI.Interop? Or is there some other reason why it’s not working?
It turned out that the problem was threading-related: the winit crate runs its event loop on a different thread, and I did not realise this; I had erroneously assumed winit to be a harmless abstraction, which it turned out not quite to be.
I discovered this when I minimised and ported a known-functioning C++ example, this time doing all the Win32 API calls manually rather than using winit, so that the translation was faithful. I got it to work, and discovered this:
The IAsyncOperation is fulfilled in the event loop, deep inside a DispatchMessageW call. That is when the Completed handler is called. Thus, for the operation to complete, you must run an event loop on the same thread. (An event loop on another thread doesn't do anything.) Otherwise, it stays in the Started state.
Fortunately, winit is already moving to a new event loop which operates on the same thread, with the Windows implementation having landed a few days ago; when I migrated my code to the eventloop-2.0 branch of winit, and to using the Completed handler instead of blocking_get(), it all started working.
I should clarify the winrt crate's blocking_get() call, which would normally be the obvious solution while prototyping: you can't use it in this case because it causes a deadlock. It blocks until the IAsyncOperation completes, but the IAsyncOperation will not complete until you process messages in the event loop (via DispatchMessageW), which will never happen because you're blocking the thread.
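For illustration, here is the shape of that requirement as a minimal Win32 (C++) pump; g_completed is a hypothetical flag set by the Completed handler registered on this same thread:

#include <windows.h>
#include <atomic>

// Hypothetical flag set from the IAsyncOperation's Completed handler,
// which must be registered on this same thread.
std::atomic<bool> g_completed{false};

// Pump messages on the creating thread until the operation completes.
// The Completed handler runs inside DispatchMessageW, so blocking the
// thread (as blocking_get() does) means it can never run: deadlock.
void PumpUntilCompleted()
{
    while (!g_completed.load())
    {
        // Sleep until something is posted, then drain the queue.
        MsgWaitForMultipleObjects(0, nullptr, FALSE, INFINITE, QS_ALLINPUT);
        MSG msg;
        while (PeekMessageW(&msg, nullptr, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);  // the completion callback fires in here
        }
    }
}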
Try initializing the WebViewControlProcess with winrt::init_apartment(); it may need a single-threaded apartment (according to this answer).
Note also this passage from the Microsoft Edge Developer Guide:
Lastly, power users might notice the appearance of the Desktop App Web Viewer (previously named Win32WebViewHost), an internal system app representing the Win32 WebView, in the following places:
● In the Windows 10 Action Center. The source of these notifications should be understood as from a WebView hosted from a Win32 app.
● In the device access settings UI (Settings->Privacy->Camera/Location/Microphone). Disabling any of these settings denies access from all WebViews hosted in Win32 apps.

Custom windows credential provider crashes with Exception code: 0xc0000374

I have developed a custom credential provider. This credential provider uses 1) a camera and 2) a facial-recognition SDK to match the user. Once the user is matched, the account name is populated and the CredentialsChanged signal is triggered. I customized the samplehardwareeventcredentialprovider sample to achieve this functionality. It works fine on a few machines (all Windows 10), but when I run it on another machine (a different brand), I get the following exception at random, which makes the screen go black and the login screen unstable. All the dependencies are in place, but it is not stable at all.
I have turned off the WinBio service and disabled many of the default credential providers, but I still face the same issue.
My Flow:
I initiate the facial-identification flow in the CSampleCredential::Initialize API and, once the user is identified, update the value of rgFieldStrings[SFI_USERNAME].
In the following method, after CSampleCredential::Initialize completes, I use the CSampleProvider::OnConnectStatusChanged method to trigger the login window. If everything works as expected, it launches the login window with the user name auto-populated. The entire flow works fine, but it is not stable on a few machines.
HRESULT CSampleProvider::SetUsageScenario(
    __in CREDENTIAL_PROVIDER_USAGE_SCENARIO cpus,
    __in DWORD dwFlags
    )
Am I doing something fundamentally wrong here?
Any pointers will be helpful! Thanks
I generated a local dump by following Steps to Catch a Simple "Crash Dump" of a Crashing Process.
Analyzing the dump made it evident that there was heap corruption. By mistake, a malloc allocation was made with a size of 4 bytes when it should have been 260 bytes. When memory beyond those 4 bytes was accessed, it triggered the random crash, depending on the input data.
Original code with the bug:
uint8_t* data = (uint8_t*)malloc(sizeof(MAX_PATH));  // sizeof(MAX_PATH) == sizeof(int) == 4 bytes, not 260
Fixed code:
uint8_t* data = (uint8_t*)malloc(MAX_PATH * sizeof(uint8_t));  // 260 bytes

drmDropMaster requires root privileges?

Pardon for the long introduction, but I haven't seen any other questions for this on SO.
I'm playing with DRM (Direct Rendering Manager, a wrapper for Linux kernel mode setting) and I'm having difficulty understanding a part of its design.
Basically, I can open a graphics card device in my virtual terminal, set up framebuffers, and change a connector and its CRTC just fine. This results in me being able to render to the VT in a lightweight graphics mode without the need for an X server (that's what KMS is about, and in fact the X server uses it underneath).
Then I wanted to implement graceful VT switching, so that when I hit Ctrl+Alt+F3 etc., I can see my other consoles. It turns out this is easy to do by calling ioctl() with stuff from linux/vt.h and handling some user signals.
But then I tried to switch from my graphics program to a running X server. Bzzt! It didn't work at all: the X server didn't draw anything. After some digging I found that in the Linux kernel, only one program may do kernel mode setting at a time. So what happens is this:
1. I switch from X to a virtual terminal
2. I run my program
3. The program enters graphics mode with drmOpen, drmModeSetCrtc, etc.
4. I switch back to X
5. X no longer has the privileges to restore its own mode.
Then I found this in the Wayland source code: drmDropMaster() and drmSetMaster(). These functions are supposed to release and regain the privilege to set modes, so that the X server can continue to work, and after switching back to my program, it can take over from there.
Finally the real question.
These functions require root privileges. This is the part I don't understand. I can mess with kernel modes, but I can't say "okay X11, I'm done playing, you can have the access back"? Why? Or should this work in theory, and am I just doing something wrong in my code (e.g. working with the wrong file descriptors, or whatever)?
If I try to run my program as a normal user, I get "permission denied". If I run it as root, it works fine - I can switch from X to my program and vice versa.
Why?
Yes, drmSetMaster and drmDropMaster require root privileges, because they allow you to do mode setting. Otherwise, any random application could display whatever it wanted on your screen. weston handles this through a setuid launcher program. The systemd people have also added functionality to systemd-logind (which runs as root) to make the drm{Set,Drop}Master calls for you; this is what enables recent X servers to run without root privileges. You could look into this if you don't mind depending on systemd.
Your post seems to suggest that you can successfully call drmModeSetCrtc without root privileges. That doesn't make sense to me. Are you sure?
It is up to display servers like X, weston, and whatever you're working on to call drmDropMaster before invoking the VT_RELDISP ioctl, so that the next session can call drmSetMaster successfully.
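A sketch of that hand-off, assuming signal-driven VT switching has already been set up via VT_SETMODE with mode VT_PROCESS; drm_fd and vt_fd are illustrative names for the open DRM device and TTY descriptors:

// Drop/set-master ordering around a VT switch.
#include <xf86drm.h>      // drmDropMaster, drmSetMaster (libdrm)
#include <linux/vt.h>     // VT_RELDISP, VT_ACKACQ
#include <sys/ioctl.h>

void on_release_vt(int drm_fd, int vt_fd)   // kernel asks us to leave our VT
{
    drmDropMaster(drm_fd);                  // give up mode-setting rights first
    ioctl(vt_fd, VT_RELDISP, 1);            // then allow the switch away
}

void on_acquire_vt(int drm_fd, int vt_fd)   // we are being switched back to
{
    ioctl(vt_fd, VT_RELDISP, VT_ACKACQ);    // acknowledge the acquisition
    drmSetMaster(drm_fd);                   // reclaim mode-setting rights
}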
Before digging into why it doesn't work, I had to understand how it works.
So, calling drmModeSetCrtc and drmSetMaster in libdrm really just issues an ioctl:
include/xf86drm.c
int drmSetMaster(int fd)
{
    return ioctl(fd, DRM_IOCTL_SET_MASTER, 0);
}
This is handled by the kernel. In my program, the most important functions that control the display are drmModeSetCrtc and drmModeAddFB; the rest is really just diagnostics. So let's see how they're handled by the kernel. It turns out there is a big table that maps ioctl events to their handlers:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
};
This table is used by drm_ioctl, of which the most interesting part is drm_ioctl_permit.
drivers/gpu/drm/drm_ioctl.c
long drm_ioctl(struct file *filp,
               unsigned int cmd, unsigned long arg)
{
    ...
    retcode = drm_ioctl_permit(ioctl->flags, file_priv);
    if (unlikely(retcode))
        goto err_i1;
    ...
}

static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
    /* ROOT_ONLY is only for CAP_SYS_ADMIN */
    if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
        return -EACCES;

    /* AUTH is only for authenticated or render client */
    if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
                 !file_priv->authenticated))
        return -EACCES;

    /* MASTER is only for master or control clients */
    if (unlikely((flags & DRM_MASTER) && !file_priv->is_master &&
                 !drm_is_control_client(file_priv)))
        return -EACCES;

    /* Control clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_CONTROL_ALLOW) &&
                 drm_is_control_client(file_priv)))
        return -EACCES;

    /* Render clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_RENDER_ALLOW) &&
                 drm_is_render_client(file_priv)))
        return -EACCES;

    return 0;
}
Everything makes sense so far. I can indeed call drmModeSetCrtc because I am the current DRM master. (I'm not sure why; it might be that X11 properly waives its rights once I switch to another VT, and perhaps that alone automatically makes me the new DRM master once I start issuing ioctls.)
Anyway, let's take a look at how the drmDropMaster and drmSetMaster ioctls are declared:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
    DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
    ...
};
What.
So my confusion was justified. I'm not doing anything wrong; things really are this way.
I'm under the impression that this is a serious kernel bug. Either I shouldn't be able to set the CRTC at all, or I should be able to drop/set master. In any case, revoking every non-root program's right to draw to the screen because
any random application could display whatever it wanted to your screen
is too aggressive. I, as the user, should have the freedom to control that without giving root access to the whole program and without depending on systemd, for example via chmod 0777 /dev/dri/card0 (or group management). As it is now, it looks to me like a lazy man's answer to proper permission management.
Thanks for writing this up. This is indeed the expected outcome; you don't need to look for a subtle bug in your code.
It's definitely intended that you can become the master implicitly. A developer wrote example code as the initial documentation for DRM, and it does not use SetMaster. And there is a comment in the source code (now drm_auth.c): "successfully became the device master (either through the SET_MASTER IOCTL, or implicitly through opening the primary device node when no one else is the current master that time)".
DRM_ROOT_ONLY is commented as
/**
* #DRM_ROOT_ONLY:
*
* Anything that could potentially wreak a master file descriptor needs
* to have this flag set. Current that's only for the SETMASTER and
* DROPMASTER ioctl, which e.g. logind can call to force a non-behaving
* master (display compositor) into compliance.
*
* This is equivalent to callers with the SYSADMIN capability.
*/
The above requires some clarification, IMO. The way logind forces a non-behaving master into compliance is not simply by calling SETMASTER for a different master; that would actually fail. First, it must call DROPMASTER on the non-behaving master. So logind relies on this permission check to make sure the non-behaving master cannot race logind and call SETMASTER first.
Equally, logind assumes the unprivileged user doesn't have permission to open the device node directly. I suspect the ability to implicitly become master on open() is some form of backwards compatibility.
Notice that if you could drop your master status, you couldn't use SETMASTER to get it back. This makes the point of doing so rather limited: you can't use it to implement the traditional switching back and forth between multiple graphics servers.
There is one way you can drop the master and get it back: close the fd, and re-open it when needed. It sounds to me like this matches how old-style X (pre-DRM?) worked; wasn't it possible to switch between multiple instances of the X server, with each of them having to completely take over the hardware? So you always had to start from scratch after a VT switch. This is not as good as being able to switch masters, though; logind says
/* On DRM devices we simply drop DRM-Master but keep it open.
* This allows the user to keep resources allocated. The
* CAP_SYS_ADMIN restriction to DRM-Master prevents users from
* circumventing this. */
As of Linux 5.8, drmDropMaster() no longer requires root privileges.
The relevant commit is 45bc3d26c ("drm: rework SET_MASTER and DROP_MASTER perm handling").
The source code comments provide a good summary of the old and new situation:
In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
from becoming master and/or failing to release it.
At the same time, the first client (for a given VT) is always master.
Thus in order for the ioctls to succeed, one had to explicitly run the
application as root or flip the setuid bit.
If the CAP_SYS_ADMIN was missing, no other client could become master...
EVER :-( Leading to a) the graphics session dying badly or b) a completely
locked session.
...
Here we implement the next best thing:
ensure the logind style of fd passing works unchanged, and
allow a client to drop/set master, iff it is/was master at a given point
in time.
...

OpenCL-GL Interop memory not in sync

I'm having trouble with OpenCL-GL shared memory.
I have an application that works in both Linux and Windows. The CL-GL sharing works in Linux, but not in Windows.
The Windows driver says that it supports sharing, and the examples from AMD work, so sharing itself should work. My code for creating the context in Windows is:
cl_context_properties properties[] = {
    CL_CONTEXT_PLATFORM, (cl_context_properties) platform_(),
    CL_WGL_HDC_KHR,      (cl_context_properties) wglGetCurrentDC(),
    CL_GL_CONTEXT_KHR,   (cl_context_properties) wglGetCurrentContext(),
    0
};

platform_.getDevices(CL_DEVICE_TYPE_GPU, &devices_);
context_ = cl::Context(devices_, properties, &CL::cl_error_callback, nullptr, &err);

err = clGetGLContextInfoKHR(properties, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR,
                            sizeof(device_id), &device_id, NULL);
context_device_ = cl::Device(device_id);

queue_ = cl::CommandQueue(context_, context_device_, 0, &err);
My problem is that the CL and GL views of a shared buffer do not hold the same data. I print them out (via memory mapping) and notice that they differ. Changing the data works from both CL and GL, but each change affects only that side's memory, not both (that is, both buffers seem intact, but they are not shared).
Also, clGetGLObjectInfo on the CL buffer returns the correct GL buffer.
Update: I have found that if I create the OpenCL context on the CPU, it works. This seems weird, as I'm not using integrated graphics and I don't believe the CPU is handling OpenGL. I'm using SDL to create the window; could that have something to do with this?
I have now confirmed that the OpenGL context is running on the GPU, so the problem lies elsewhere.
Update 2: OK, so this is weird. I tried again today and suddenly it works. As far as I know I didn't install any new drivers before I shut down the computer yesterday, so I don't know what could have brought this about.
Update 3: Right, I noticed that changing the number of particles made this work. When I allocate so many particles that the shared buffer is slightly above one MB, it suddenly starts to work.
I solved the problem.
The OpenGL buffer object must be created after the OpenCL context has been created; if it is created before, the OpenGL data can't be shared.
I use a Radeon HD 5670 with ATI Catalyst 12.10.
This may be an ATI driver problem, because the NVIDIA Computing SDK samples don't depend on this order.
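For illustration, a sketch of the ordering that worked; context, queue, and size are assumptions standing in for the already-created shared CL context, the command queue, and the buffer size in bytes:

#include <CL/cl_gl.h>
#include <GL/glew.h>   // or any GL loader that provides glGenBuffers etc.

// Assumed already created, in this order: a current GL context, then the
// shared CL context and queue (as in the question).
extern cl_context context;
extern cl_command_queue queue;

void create_shared_buffer(size_t size)
{
    // 1. Create the GL buffer only AFTER the CL context exists.
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)size, NULL, GL_DYNAMIC_DRAW);
    glFinish();  // make sure GL is done before CL touches the buffer

    // 2. Wrap the GL buffer as a CL memory object.
    cl_int err = CL_SUCCESS;
    cl_mem shared = clCreateFromGLBuffer(context, CL_MEM_READ_WRITE, vbo, &err);

    // 3. Acquire before CL use, release before GL uses it again.
    clEnqueueAcquireGLObjects(queue, 1, &shared, 0, NULL, NULL);
    // ... enqueue kernels that read/write `shared` ...
    clEnqueueReleaseGLObjects(queue, 1, &shared, 0, NULL, NULL);
    clFinish(queue);
}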

Is there any way to detect the monitor state in Windows (on or off)?

Does anyone know if there is an API to get the current monitor state (on or off) in Windows (XP/Vista/2000/2003)?
All of my searches seem to indicate there is no real way of doing this.
This thread tries to use GetDevicePowerState, which according to Microsoft's documentation does not work for display devices.
In Vista I can listen for GUID_MONITOR_POWER_ON, but I do not seem to get events when the monitor is turned off manually.
In XP I can hook WM_SYSCOMMAND / SC_MONITORPOWER and look for status 2, but this only works when the system itself triggers the power-off.
The WMI Win32_DesktopMonitor class does not seem to help either.
Edit: Here is a discussion on comp.os.ms-windows.programmer.win32 indicating there is no reliable way of doing this.
Anyone else have any other ideas?
GetDevicePowerState sometimes works for monitors. If it's present, you can open the \\.\LCD device; close it immediately after you've finished with it.
Essentially, you're out of luck: there is no reliable way to detect the monitor power state, short of writing a device driver and filtering all of the power IRPs up and down the display driver chain. And even that is not very reliable.
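For completeness, a sketch of the \\.\LCD check described above; the device may simply be absent on many systems, so a failure to open it should be treated as "unknown":

#include <windows.h>

// Returns 1/0 when the \\.\LCD device reports a power state,
// and -1 when the device is absent or the query fails.
int QueryLcdPowerState()
{
    HANDLE hLcd = CreateFileW(L"\\\\.\\LCD", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hLcd == INVALID_HANDLE_VALUE)
        return -1;                      // no such device on this system

    BOOL on = FALSE;
    BOOL ok = GetDevicePowerState(hLcd, &on);
    CloseHandle(hLcd);                  // close immediately, as noted above
    return ok ? (on ? 1 : 0) : -1;
}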
You could hook up a webcam, point it at your screen and do some analysis on the images you receive ;)
Before doing anything based on the monitor state, remember that users can use a machine through Remote Desktop or other systems that don't require a monitor connected to the machine, so don't turn off any visualization based on the monitor state.
You can't.
It looks like all the monitor power capabilities are tied to "power save mode".
After searching, I found code here that ties the SC_MONITORPOWER message to system values (post number 2).
I used that code to test whether the system values change when I manually switch off the monitor:
int main()
{
    for (; monitorOff() != 1; )
        Sleep(500);
    return 0;
}
The code never stops, no matter how long I leave my monitor switched off.
Here is the code of the monitorOff function:
int monitorOff()
{
    const GUID MonitorClassGuid =
        {0x4d36e96e, 0xe325, 0x11ce,
         {0xbf, 0xc1, 0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18}};

    list<DevData> monitors;
    ListDeviceClassData(&MonitorClassGuid, monitors);

    list<DevData>::iterator it = monitors.begin(),
                            it_end = monitors.end();
    for (; it != it_end; ++it)
    {
        if (it->PowerData.PD_MostRecentPowerState != PowerDeviceD0)
            return 1;
    }
    return 0;
}
Conclusion: when you manually switch off the monitor, you can't catch it from Windows (unless some unusual driver interface exposes it), because all the Windows capabilities are tied to "power save mode".
In Windows XP or later you can use the IMSVidDevice interface; see
http://msdn.microsoft.com/en-us/library/dd376775(VS.85).aspx
(I'm not sure whether this works on Server 2003.)
With Delphi code, you can detect invalid monitor geometry while standby is in progress:
i := 0;
ShowMessage('Monitor' + IntToStr(i) + ': ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Left) + ', ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Top) + ', ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Right) + ', ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Bottom));
Results:
Monitor geometry before standby:
Monitor0: 0, 0, 1600, 900
Monitor geometry while standby in Delphi 7:
Monitor0: 1637792, 4210405, 31266576, 1637696
Monitor geometry while standby in Delphi XE:
Monitor0: 4211194, 40, 1637668, 1637693
This is a really old post, but if it can help someone: I have found a solution for detecting whether a screen is available or not, the Connecting and Configuring Displays (CCD) API of Windows.
It's part of User32.dll, and the interesting functions are GetDisplayConfigBufferSizes and QueryDisplayConfig. They give us all the information that can be viewed in the display settings of Windows.
In particular, each PathInfo contains a TargetInfo structure that has a targetAvailable flag. This flag seems to be correctly updated in all the configurations I have tried so far.
This lets you know the state of every screen connected to the PC and set their configurations.
Here is a CCD wrapper for .NET.
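For illustration, a minimal native sketch of the same check (QDC_ALL_PATHS so inactive targets are included; error handling reduced to early returns):

#include <windows.h>
#include <vector>

// Returns true if any connected display target reports itself as available.
bool AnyTargetAvailable()
{
    UINT32 pathCount = 0, modeCount = 0;
    if (GetDisplayConfigBufferSizes(QDC_ALL_PATHS, &pathCount, &modeCount) != ERROR_SUCCESS)
        return false;

    std::vector<DISPLAYCONFIG_PATH_INFO> paths(pathCount);
    std::vector<DISPLAYCONFIG_MODE_INFO> modes(modeCount);
    if (QueryDisplayConfig(QDC_ALL_PATHS, &pathCount, paths.data(),
                           &modeCount, modes.data(), NULL) != ERROR_SUCCESS)
        return false;

    for (UINT32 i = 0; i < pathCount; ++i)
        if (paths[i].targetInfo.targetAvailable)   // the flag mentioned above
            return true;
    return false;
}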
If your monitor has some sort of built-in USB hub, you could try to use that to detect whether the monitor is off or on.
This will of course only work if the USB hub doesn't stay connected when the monitor is considered "off".
