I have developed a custom credential provider. This credential provider uses 1) the camera and 2) a facial SDK to match the user. Once the user is matched, the account name is populated and the CredentialsChanged signal is triggered. I customized SampleHardwareEventCredentialProvider to achieve this functionality. This works fine on a few machines (all Windows 10). When I try to execute it on another machine (a different brand), I randomly get the following exception, which makes the screen go black and the login screen unstable. All the dependencies are in place, but it is not stable at all.
I have turned off the WinBio service and disabled many of the default credential providers, but I face the same issue.
My Flow:
I initiate the facial identification flow in the CSampleCredential::Initialize method, and once the user is identified, I update the value of rgFieldStrings[SFI_USERNAME].
In the following method, after CSampleCredential::Initialize completes, I use the CSampleProvider::OnConnectStatusChanged method to trigger the login window. If everything works as expected, it launches the login window with the user name auto-populated. The entire flow works fine, but it is not stable on a few machines.
HRESULT CSampleProvider::SetUsageScenario(
    __in CREDENTIAL_PROVIDER_USAGE_SCENARIO cpus,
    __in DWORD dwFlags
    )
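For reference, here is a minimal sketch of the notification path as it looks in the stock sample; the member names (_pcpe, _upAdviseContext) follow SampleHardwareEventCredentialProvider and are assumptions about the customized code:

// Sketch modeled on the stock sample. _pcpe (ICredentialProviderEvents*)
// and _upAdviseContext are the values saved in ICredentialProvider::Advise.
void CSampleProvider::OnConnectStatusChanged()
{
    if (_pcpe != NULL)
    {
        // Ask LogonUI to re-enumerate credentials; it will call back into
        // GetCredentialCount/GetCredentialAt, so rgFieldStrings[SFI_USERNAME]
        // must already be fully populated before this fires.
        _pcpe->CredentialsChanged(_upAdviseContext);
    }
}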
Am I doing something fundamentally wrong here?
Any pointers will be helpful! Thanks
I generated a local dump by following Steps to Catch a Simple "Crash Dump" of a Crashing Process.
By analyzing the dump, it was evident that there was heap corruption. By mistake, a malloc allocation was made with a size of 4 bytes when it should have been 260 bytes: MAX_PATH is the integer literal 260, so sizeof(MAX_PATH) is sizeof(int), which is 4. When memory was accessed beyond this size, it triggered the random crash, depending on the input data.
Original code with bug:
uint8_t* data = (uint8_t*)malloc(sizeof(MAX_PATH)); // sizeof(260) == sizeof(int) == 4 bytes
Fixed code:
uint8_t* data = (uint8_t*)malloc(MAX_PATH*sizeof(uint8_t)); // 260 bytes
Pardon for the long introduction, but I haven't seen any other questions for this on SO.
I'm playing with DRM (Direct Rendering Manager, a wrapper for Linux kernel mode setting) and I'm having difficulty understanding a part of its design.
Basically, I can open a graphics card device in my virtual terminal, set up framebuffers, and change a connector and its CRTC just fine. This results in me being able to render to the VT in a lightweight graphics mode without needing an X server (that's what KMS is about, and in fact the X server uses it underneath).
Then I wanted to implement graceful VT switching, so that when I hit Ctrl+Alt+F3 etc., I can see my other consoles. It turns out this is easy to do by calling ioctl() with constants from linux/vt.h and handling some user signals, as sketched below.
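A minimal sketch of that setup, assuming vt_fd is the open tty of our VT (error handling omitted):

#include <signal.h>
#include <sys/ioctl.h>
#include <linux/vt.h>

static volatile sig_atomic_t vt_event;
static void on_vt_signal(int sig) { vt_event = sig; }

void setup_vt(int vt_fd)
{
    struct vt_mode mode = {};
    signal(SIGUSR1, on_vt_signal);   /* kernel asks us to release the VT */
    signal(SIGUSR2, on_vt_signal);   /* kernel tells us we got it back   */

    mode.mode   = VT_PROCESS;        /* switches need our cooperation */
    mode.relsig = SIGUSR1;
    mode.acqsig = SIGUSR2;
    ioctl(vt_fd, VT_SETMODE, &mode);
    /* On SIGUSR1: ioctl(vt_fd, VT_RELDISP, 1) to allow the switch.    */
    /* On SIGUSR2: ioctl(vt_fd, VT_RELDISP, VT_ACKACQ) to acknowledge. */
}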
But then I tried to switch from my graphics program to a running X server. Bzzt! It didn't work at all; the X server didn't draw anything. After some digging I found that in the Linux kernel, only one program at a time can do kernel mode setting. So what happens is this:
I switch from X to a virtual terminal
I run my program
This program enters graphic mode with drmOpen, drmModeSetCRTC etc.
I switch back to X
X no longer has the privileges to restore its own mode.
Then I found this in the Wayland source code: drmDropMaster() and drmSetMaster(). These functions are supposed to release and regain the privilege to set modes, so that the X server can continue to work, and after switching back to my program, it can take over from there.
Finally the real question.
These functions require root privileges. This is the part I don't understand. I can mess with kernel modes, but I can't say "okay X11, I'm done playing, I'm giving you the access now"? Why? Or should this work in theory, and I'm just doing something wrong in my code (e.g., working with the wrong file descriptors, or whatever)?
If I try to run my program as a normal user, I get "permission denied". If I run it as root, it works fine - I can switch from X to my program and vice versa.
Why?
Yes, drmSetMaster and drmDropMaster require root privileges because they allow you to do mode setting. Otherwise, any random application could display whatever it wanted to your screen. weston handles this through a setuid launcher program. The systemd people also added functionality to systemd-logind (which runs as root) to do the drm{Set,Drop}Master calls for you. This is what enables recent X servers to run without root privileges. You could look into this if you don't mind depending on systemd.
Your post seems to suggest that you can successfully call drmModeSetCRTC without root privileges. This doesn't make sense to me. Are you sure?
It is up to display servers like X, weston, and whatever you're working on to call drmDropMaster before invoking the VT_RELDISP ioctl, so that the next session can call drmSetMaster successfully.
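A rough sketch of that hand-off order, assuming vt_fd and drm_fd are already open (error handling omitted):

#include <sys/ioctl.h>
#include <linux/vt.h>
#include <xf86drm.h>

/* Release path: give up mode setting BEFORE letting the VT switch
 * proceed, so the next session's drmSetMaster() can succeed. */
void on_release_signal(int vt_fd, int drm_fd)
{
    drmDropMaster(drm_fd);
    ioctl(vt_fd, VT_RELDISP, 1);
}

/* Acquire path: acknowledge the switch back, then reclaim master
 * and restore the mode with drmModeSetCRTC(). */
void on_acquire_signal(int vt_fd, int drm_fd)
{
    ioctl(vt_fd, VT_RELDISP, VT_ACKACQ);
    drmSetMaster(drm_fd);
}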
Before digging into why it doesn't work, I had to understand how it works.
So, drmModeSetCRTC and drmSetMaster in libdrm in reality just call ioctl:
include/xf86drm.c
int drmSetMaster(int fd)
{
    return ioctl(fd, DRM_IOCTL_SET_MASTER, 0);
}
This is handled by the kernel. In my program, the most important functions that control the display are drmModeSetCRTC and drmModeAddFB; the rest is really just diagnostics. So let's see how they're handled by the kernel. It turns out there is a big table that maps ioctl events to their handlers:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
};
This is used by drm_ioctl, of which the most interesting part is drm_ioctl_permit.
drivers/gpu/drm/drm_ioctl.c
long drm_ioctl(struct file *filp,
               unsigned int cmd, unsigned long arg)
{
    ...
    retcode = drm_ioctl_permit(ioctl->flags, file_priv);
    if (unlikely(retcode))
        goto err_i1;
    ...
}
static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
    /* ROOT_ONLY is only for CAP_SYS_ADMIN */
    if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
        return -EACCES;
    /* AUTH is only for authenticated or render client */
    if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
                 !file_priv->authenticated))
        return -EACCES;
    /* MASTER is only for master or control clients */
    if (unlikely((flags & DRM_MASTER) && !file_priv->is_master &&
                 !drm_is_control_client(file_priv)))
        return -EACCES;
    /* Control clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_CONTROL_ALLOW) &&
                 drm_is_control_client(file_priv)))
        return -EACCES;
    /* Render clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_RENDER_ALLOW) &&
                 drm_is_render_client(file_priv)))
        return -EACCES;
    return 0;
}
Everything makes sense so far. I can indeed call drmModeSetCRTC because I am the current DRM master. (I'm not sure why; this might have to do with X11 properly waiving its rights once I switch to another VT. Perhaps this alone automatically makes me the new DRM master once I start messing with ioctl?)
Anyway, let's take a look at the drmDropMaster and drmSetMaster definitions:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
    DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
    ...
};
What.
So my confusion was justified: I'm not doing anything wrong; things really are this way.
I'm under the impression that this is a serious kernel bug. Either I shouldn't be able to set the CRTC at all, or I should be able to drop/set master. In any case, revoking every non-root program's right to draw to the screen because
any random application could display whatever it wanted to your screen
is too aggressive. I, as the user, should have the freedom to control that without giving root access to the whole program and without depending on systemd, for example via chmod 0777 /dev/dri/card0 (or group management). As it stands, it looks to me like a lazy man's answer to proper permission management.
Thanks for writing this up. This is indeed the expected outcome; you don't need to look for a subtle bug in your code.
It's definitely intended that you can become the master implicitly. A developer wrote example code as initial documentation for DRM, and it does not use SetMaster. And there is a comment in the source code (now drm_auth.c): "successfully became the device master (either through the SET_MASTER IOCTL, or implicitly through opening the primary device node when no one else is the current master that time)".
DRM_ROOT_ONLY is commented as
/**
* #DRM_ROOT_ONLY:
*
* Anything that could potentially wreak a master file descriptor needs
* to have this flag set. Current that's only for the SETMASTER and
* DROPMASTER ioctl, which e.g. logind can call to force a non-behaving
* master (display compositor) into compliance.
*
* This is equivalent to callers with the SYSADMIN capability.
*/
The above requires some clarification, IMO. The way logind forces a non-behaving master is not simply by calling SETMASTER for a different master; that would actually fail. First, it must call DROPMASTER on the non-behaving master. So logind relies on this permission check to make sure the non-behaving master cannot race logind and call SETMASTER first.
Equally, logind assumes the unprivileged user doesn't have permission to open the device node directly. I suspect the ability to implicitly become master on open() is some form of backwards compatibility.
Notice that if you could drop master, you couldn't use SETMASTER to get it back. This makes the point of doing so rather limited: you can't use it to implement the traditional switching back and forth between multiple graphics servers.
There is a way you can drop master and get it back: close the fd, and re-open it when needed. It sounds to me like this matches how old-style X (pre-DRM?) worked: wasn't it possible to switch between multiple instances of the X server, with each having to completely take over the hardware? So you always had to start from scratch after a VT switch. This is not as good as being able to switch masters, though; logind says
/* On DRM devices we simply drop DRM-Master but keep it open.
* This allows the user to keep resources allocated. The
* CAP_SYS_ADMIN restriction to DRM-Master prevents users from
* circumventing this. */
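A sketch of that close-and-reopen fallback under the pre-5.8 rules (the /dev/dri/card0 path is an assumption; error handling omitted):

#include <fcntl.h>
#include <unistd.h>

// Pre-5.8 fallback for an unprivileged client: drmSetMaster() is
// off-limits, but closing the node drops master implicitly, and the
// next open() becomes master if nobody else currently holds it.
// The cost is exactly what logind avoids: all allocations are lost.
int reopen_drm(int drm_fd)
{
    close(drm_fd);  // implicitly drops DRM master
    // ... the other display server runs while we are switched away ...
    return open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
}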
As of Linux 5.8, drmDropMaster() no longer requires root privileges.
The relevant commit is 45bc3d26c: drm: rework SET_MASTER and DROP_MASTER perm handling.
The source code comments provide a good summary of the old and new situation:
In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
from becoming master and/or failing to release it.
At the same time, the first client (for a given VT) is always master.
Thus in order for the ioctls to succeed, one had to explicitly run the
application as root or flip the setuid bit.
If the CAP_SYS_ADMIN was missing, no other client could become master...
EVER :-( Leading to a) the graphics session dying badly or b) a completely
locked session.
...
Here we implement the next best thing:
- ensure the logind style of fd passing works unchanged, and
- allow a client to drop/set master, iff it is/was master at a given point in time.
...
I have a driver that runs in the kernel of a Windows Embedded Compact 2013 system. The driver is loaded via the "Drivers\BuiltIn" registry key. It accesses a set of HW registers that are mapped with MmMapIoSpace.
The access to the hardware has some problems. That's why I would like to develop the hardware access in a user-mode program and debug the problems there. I created a program with VS2013 for that purpose. That's the way we used to go with Windows CE 5.0.
The driver maps the physical address into the process address space with MmMapIoSpace. My program should do the same or something similar. Unfortunately this doesn't work in my program: MmMapIoSpace returns NULL, with LastError=87 (invalid parameters). Even CreateStaticMapping returns NULL.
How can I access memory-mapped registers in WEC2013 without building a new platform image for each iteration?
MmMapIoSpace hasn't worked in applications since Windows CE 6.
You could instead create a driver which maps your HW registers into your user process. Your user process would then obtain this pointer via an IOCTL call to this driver.
We mapped some external memory into an application with this method.
VirtualAllocCopyEx() can create a mapping into a specified process.
Hope this helps. Greetings.
Corresponding to timmf's answer, I implemented this code in the driver's XXX_IOControl function:
PHYSICAL_ADDRESS PhysAddress = { 0 };
PhysAddress.LowPart = phys_address;
// Map the registers into kernel address space first.
PVOID pRegister = MmMapIoSpace(PhysAddress, phys_size, FALSE);
// The caller is the user process that issued the IOCTL.
HANDLE hCallerProcess = (HANDLE)GetCallerVMProcessId();
HANDLE hCurrentProcess = (HANDLE)GetCurrentProcessId();
// Duplicate the mapping into the caller's address space.
PVOID UserSpaceAddress = VirtualAllocCopyEx(hCurrentProcess, hCallerProcess,
                                            pRegister, phys_size, PAGE_NOCACHE);
This excerpt shows solutions for some pitfalls I met, and how I obtained all the parameters.
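For completeness, the application side might look like the following sketch; the IOCTL code and the "REG1:" device name are placeholders that must match whatever the driver actually registers:

#include <windows.h>
#include <winioctl.h>

// Hypothetical IOCTL; must agree with the driver's definition.
#define IOCTL_MAP_REGISTERS \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

volatile BYTE* MapHwRegisters()
{
    HANDLE hDrv = CreateFile(L"REG1:", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (hDrv == INVALID_HANDLE_VALUE)
        return NULL;

    // The driver's XXX_IOControl does MmMapIoSpace + VirtualAllocCopyEx
    // (as above) and writes the user-space address into the out buffer.
    PVOID pMapped = NULL;
    DWORD cbOut = 0;
    BOOL ok = DeviceIoControl(hDrv, IOCTL_MAP_REGISTERS,
                              NULL, 0, &pMapped, sizeof(pMapped),
                              &cbOut, NULL);
    CloseHandle(hDrv);
    return ok ? (volatile BYTE*)pMapped : NULL;
}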
I'm messing around with some driver development and I'm having an issue getting some of my code to work. I'm not sure if it's a quirk of the API that I'm unaware of, or what.
I have a user app that has created a named shared object under BaseNamedObjects with CreateFileMapping, MapViewOfFile, etc. I'm trying to read this shared object inside my driver code using ZwOpenSection and ZwMapViewOfSection.
The problem code is below:
char *sharedData = NULL;
SIZE_T vs = 256;
InitializeObjectAttributes(&myAttributes, &sectionName, OBJ_KERNEL_HANDLE, NULL, NULL);
ZwOpenSection(&sectionHandle, SECTION_MAP_READ, &myAttributes);
ZwMapViewOfSection(&sectionHandle, ZwGetCurrentProcess(), (PVOID *)sharedData, 0, 256,
                   NULL, &vs, ViewUnmap, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
The call to ZwOpenSection completes successfully and I get the object properly, but the second call fails. The status returned says it's an issue with the ninth parameter. I've tried every combination I could think of with nothing to show for it, so I'm not sure if it's an issue with a different parameter causing the 9th to be "incorrect", or if I'm missing something else.
Thanks.
Is the access permission with which the section was created the same as the one you have passed here?
MEM_COMMIT is not allowed in a direct call to this function. If you still want to commit and reserve pages, try calling VirtualAlloc(); otherwise just pass 0 for the ninth (AllocationType) parameter.
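Putting that together, a corrected call might look like the sketch below. Note two further issues in the snippet above that would also break the call: the first argument should be the section handle itself rather than its address, and the third must be the address of the receiving pointer (the snippet passes a NULL pointer value cast to PVOID*). ZwCurrentProcess() is the standard wdm.h macro for the current-process handle:

PVOID  sharedData = NULL;   // receives the mapped base address
SIZE_T viewSize   = 256;

NTSTATUS status = ZwMapViewOfSection(
    sectionHandle,          // HANDLE, not &sectionHandle
    ZwCurrentProcess(),
    &sharedData,            // PVOID*: address of the pointer
    0,                      // ZeroBits
    0,                      // CommitSize: section is already committed
    NULL,                   // SectionOffset: map from the start
    &viewSize,              // ViewSize, updated on return
    ViewUnmap,
    0,                      // AllocationType: no MEM_COMMIT/MEM_RESERVE
    PAGE_READONLY);         // must be compatible with SECTION_MAP_READ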
I have a problem caused by a failure in IDirect3D9::CreateDevice(). It fails when the following code is executed with a locked screen under Windows 7. Due to requirements, I need to be able to create a device while the screen is locked.
I get a D3DERR_INVALIDCALL error when CreateDevice is called with the following parameters. I've experimented extensively with the HWND being used and double-checked that it is valid. I've also tried various tweaks to the presentation parameters, to no avail. Has anyone encountered this before, or does anyone have a better idea of what might be causing the invalid-call return?
Again, this failure only occurs with a locked screen; in every other tested state, it succeeds.
D3DPRESENT_PARAMETERS pp;
ZeroMemory( &pp, sizeof(D3DPRESENT_PARAMETERS) );
pp.BackBufferFormat = D3DFMT_UNKNOWN;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
pp.Windowed = TRUE;

HWND focusWndHnd = GetConsoleWindow();
if ( focusWndHnd == NULL && pp.hDeviceWindow == NULL )
{
    focusWndHnd = ::GetDesktopWindow();
}

IDirect3DDevice9* pd3dDevice;
IDirect3D9* pD3D = Direct3DCreate9( D3D_SDK_VERSION );
HRESULT hr = pD3D->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_NULLREF, focusWndHnd,
                                 D3DCREATE_SOFTWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE,
                                 &pp, &pd3dDevice );
The legacy Direct3D 9 interface considers the 'secure desktop' to be a lost device scenario. Use of a WDDM aware version of Direct3D (Direct3D9Ex, Direct3D 10.x, or Direct3D 11.x) will avoid this problem.
Could it be that you need a value for BackBufferFormat other than D3DFMT_UNKNOWN, since only windowed apps allow that value, as OJ stated here?
My memory is hazy, but I believe this is a known limitation ("by design") with D3D with respect to the lock screen (and running as a service).
Even if you could create the D3D device, you won't be able to draw on top of the lock screen. So you'll probably be better off designing your app such that it defers the D3D device creation until after the screen becomes unlocked.
Use WTSRegisterSessionNotification to register for notifications of when the screen becomes locked or unlocked; a sketch follows.
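A minimal sketch of that deferred-creation approach (the window procedure name and structure are assumptions):

#include <windows.h>
#include <wtsapi32.h>
#pragma comment(lib, "wtsapi32.lib")

// Call WTSRegisterSessionNotification(hWnd, NOTIFY_FOR_THIS_SESSION)
// once after creating the window, and WTSUnRegisterSessionNotification
// on teardown; the lock/unlock transitions then arrive here.
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_WTSSESSION_CHANGE)
    {
        if (wParam == WTS_SESSION_UNLOCK)
        {
            // The desktop is available again: (re)create the D3D9
            // device here instead of while the screen is locked.
        }
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}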
Thanks to Chuck Walbourn's answer, I have solved my related issue where D3D device initialization fails as soon as the elevation prompt's secure desktop is active. In my case I received a D3DERR_NOTAVAILABLE error during the secure session. After replacing IDirect3D9* with IDirect3D9Ex* and Direct3DCreate9 with Direct3DCreate9Ex, initialization finished successfully!
Additionally, I have to stress that Chuck's answer does not refer to Kent's answer directly but to a related issue, since, if I have understood it correctly, Kent's scenario refers to the WTS_SESSIONSTATE_LOCK session that can be entered through Win+L. In Kent's case I haven't experienced a problem with the D3D initialization in a locked session.
I got an ERROR_INSUFFICIENT_BUFFER error when invoking FindNextUrlCacheEntry(). I then wanted to retrieve the failed entry again using an enlarged buffer, but I found that when I invoke FindNextUrlCacheEntry() again, I seem to retrieve the entry after the failed one. Is there any way to go back and retrieve the information for the entry that just failed?
I also observed the same behavior on XP. I am trying to clear the IE cache programmatically using the WinInet APIs. The code at the following MSDN link works perfectly fine on Win7/Vista but deletes cache files in batches (multiple runs) on XP. On debugging, I found that the API FindNextUrlCacheEntry gives different sizes for the same entry when executed multiple times.
MSDN Link: http://support.microsoft.com/kb/815718
Here is what I am doing:
First of all, I make a call to determine the size of the next URL entry:
fSuccess = FindNextUrlCacheEntry(hCacheHandle, 0, &cacheEntryInfoBufferSizeInitial); // cacheEntryInfoBufferSizeInitial = 0 at this point
The above call returns FALSE with the error ERROR_INSUFFICIENT_BUFFER and with the cacheEntryInfoBufferSizeInitial parameter set to the size of the buffer required to retrieve the cache entry, in bytes. After allocating the required size (cacheEntryInfoBufferSizeInitial), I call the same WinInet API again, expecting it to retrieve the entry successfully this time. But sometimes it fails. The cases in which the API fails again, even with the buffer size it itself reported, are those where it expects more bytes than it reported earlier. Most of the time the difference is a few bytes, but I have also seen cases where the difference is almost 4 to 5 times.
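For reference, here is the KB815718-style retry pattern written as a minimal standalone sketch (the function name and buffer sizes are mine). Per the documentation, the failed FindNextUrlCacheEntry call does not consume the entry, so retrying with the grown buffer should return the same entry; the XP behavior described above appears to contradict that:

#include <windows.h>
#include <wininet.h>
#include <stdlib.h>
#pragma comment(lib, "wininet.lib")

void EnumerateCache()
{
    DWORD cb = 4096;  // generous initial size
    LPINTERNET_CACHE_ENTRY_INFO pEntry =
        (LPINTERNET_CACHE_ENTRY_INFO)malloc(cb);
    if (!pEntry) return;

    DWORD cbEntry = cb;
    HANDLE hFind = FindFirstUrlCacheEntry(NULL, pEntry, &cbEntry);
    if (!hFind) { free(pEntry); return; }

    for (;;)
    {
        // ... use pEntry->lpszSourceUrlName here ...

        cbEntry = cb;
        if (!FindNextUrlCacheEntry(hFind, pEntry, &cbEntry))
        {
            if (GetLastError() == ERROR_INSUFFICIENT_BUFFER)
            {
                // Grow to the size just reported and retry; the failed
                // call is documented not to advance the enumeration.
                LPINTERNET_CACHE_ENTRY_INFO p =
                    (LPINTERNET_CACHE_ENTRY_INFO)realloc(pEntry, cbEntry);
                if (!p) break;
                pEntry = p;
                cb = cbEntry;
                continue;
            }
            break;  // ERROR_NO_MORE_ITEMS or a real failure
        }
    }

    FindCloseUrlCache(hFind);
    free(pEntry);
}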
For what it's worth this seems to be solved in Vista.