Shutting down Windows from kernel mode? - winapi

I'm trying to create a driver that will intercept a certain key sequence and perform a reboot from kernel mode in Windows, similarly to the REISUB key sequence in Linux.
I've created a keyboard hook just like Ctrl2Cap does, and I've tried calling NtShutdownSystem to reboot the system.
The handler does detect the key press, but the problem is that when it actually calls NtShutdownSystem, I get a BSOD with the ATTEMPTED_SWITCH_FROM_DPC error code.
I'm assuming this is because I can't shut down the system from an executing DPC, so I probably need to execute my code from somewhere else.
But I don't know where.
So the question is:
How can I shut down the system upon detecting the key sequence in kernel mode?

Ah, I figured out the answer.
Seems like ExQueueWorkItem does the trick: the work item's callback runs in a system worker thread at PASSIVE_LEVEL, where calling NtShutdownSystem is safe:
VOID NTAPI MyShutdownSystem(PVOID Context)
{
    UNREFERENCED_PARAMETER(Context);
    NtShutdownSystem(1); // 1 == ShutdownReboot (SHUTDOWN_ACTION)
}
// ... [code] ...
// Queue the shutdown to a system worker thread; the keyboard callback
// itself runs at DISPATCH_LEVEL, where NtShutdownSystem must not be called.
PWORK_QUEUE_ITEM pWorkItem =
    (PWORK_QUEUE_ITEM)ExAllocatePool(NonPagedPool, sizeof(WORK_QUEUE_ITEM));
if (pWorkItem != NULL) {
    ExInitializeWorkItem(pWorkItem, &MyShutdownSystem, NULL);
    ExQueueWorkItem(pWorkItem, DelayedWorkQueue);
}

Related

What does this error code from a mac kernel extension mean?

I was programming a kernel extension for a tun device on mac, and I use the proto_register_plumber API as follows:
err = proto_register_plumber(PF_INET, IFNET_FAMILY_TUN, method_attach, method_detach);
if (err) {
    printf("error code is : %d\n", err);
}
On one mac (10.13), it returns 17. What does that mean, and how can I fix it?
I read the API doc at https://developer.apple.com/documentation/kernel/1532491-proto_register_plumber?language=objc, but I did not find anything about what the error code means.
17 is almost certainly an errno, especially as this is from the BSD portion of the KPI. If you look in errno.h you will find that it corresponds to EEXIST:
#define EEXIST 17 /* File exists */
In the context of your API call, this probably means there already is something registered for the thing you're trying to register. I'm not familiar with the proto_register_plumber() function, but a very quick look at its source code reveals the following check near the start of the function, which appears to confirm my suspicion:
lck_mtx_lock(proto_family_mutex);
TAILQ_FOREACH(proto_family, &proto_family_head, proto_fam_next) {
    if (proto_family->proto_family == protocol_family &&
        proto_family->if_family == interface_family) {
        lck_mtx_unlock(proto_family_mutex);
        return (EEXIST);
    }
}
Could it be that:
You have previously registered the handler, unloaded your kext (which did not unregister it), and then reloaded your kext, trying to register it again? In this case, a reboot (and fixing your kext's stop function!) should fix it.
Another loaded kext already registers its own handler? If so, try unloading the likely candidates.
The xnu kernel already provides a default handler for this protocol family? Maybe you need to go about what you're trying to do in a different way.
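Whatever the cause, it's worth handling EEXIST explicitly and pairing every successful registration with an unregistration in the kext stop routine. A rough sketch (my illustration, not from the question; it assumes proto_unregister_plumber is the matching cleanup call and that method_attach/method_detach are the callbacks from the question):

#include <sys/errno.h>
#include <sys/socket.h>
#include <net/kpi_interface.h>
#include <net/kpi_protocol.h>
#include <libkern/libkern.h>   /* kernel printf */

/* Callbacks from the question, assumed to be defined elsewhere in the kext. */
extern errno_t method_attach(ifnet_t ifp, protocol_family_t protocol);
extern void    method_detach(ifnet_t ifp, protocol_family_t protocol);

static errno_t register_tun_plumber(void)
{
    errno_t err = proto_register_plumber(PF_INET, IFNET_FAMILY_TUN,
                                         method_attach, method_detach);
    if (err == EEXIST) {
        /* Something already owns (PF_INET, IFNET_FAMILY_TUN): a stale
         * registration from a previous load, another kext, or xnu itself. */
        printf("plumber already registered for this (proto, if) pair\n");
    }
    return err;
}

static void unregister_tun_plumber(void)
{
    /* Call this from the kext stop routine so reloading doesn't hit EEXIST. */
    proto_unregister_plumber(PF_INET, IFNET_FAMILY_TUN);
}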

drmDropMaster requires root privileges?

Pardon for the long introduction, but I haven't seen any other questions for this on SO.
I'm playing with DRM (Direct Rendering Manager, a wrapper for Linux kernel mode setting) and I'm having difficulty understanding a part of its design.
Basically, I can open a graphics card device in my virtual terminal, set up framebuffers, and change the connector and its CRTC just fine. This results in me being able to render to the VT in a lightweight graphics mode without the need for an X server (that's what KMS is about, and in fact the X server uses it underneath).
Then I wanted to implement graceful VT switching, so when I hit Ctrl+Alt+F3 etc., I can see my other consoles. It turns out this is easy to do by calling ioctl() with stuff from linux/vt.h and handling some user signals.
But then I tried to switch from my graphics program to a running X server. Bzzt! It didn't work at all; the X server didn't draw anything. After some digging I found that in the Linux kernel, only one program may do kernel mode setting at a time. So what happens is this:
I switch from X to a virtual terminal
I run my program
This program enters graphic mode with drmOpen, drmModeSetCRTC etc.
I switch back to X
X no longer has privileges to restore its own mode.
Then I found drmDropMaster() and drmSetMaster() in the Wayland source code. These functions are supposed to release and regain the privilege to set modes, so that the X server can continue to work, and after switching back to my program, it can take over from there.
Finally, the real question.
These functions require root privileges. This is the part I don't understand. I can mess with kernel mode setting, but I can't say "okay X11, I'm done playing, I'm giving you the access now"? Why? Or should this work in theory, and I'm just doing something wrong in my code (e.g. working with the wrong file descriptors, or whatever)?
If I try to run my program as a normal user, I get "permission denied". If I run it as root, it works fine - I can switch from X to my program and vice versa.
Why?
Yes, drmSetMaster and drmDropMaster require root privileges because they allow you to do mode setting. Otherwise, any random application could display whatever it wanted to your screen. weston handles this through a setuid launcher program. The systemd people also added functionality to systemd-logind (which runs as root) to do the drm{Set,Drop}Master calls for you. This is what enables recent X servers to run without root privileges. You could look into this if you don't mind depending on systemd.
Your post seems to suggest that you can successfully call drmModeSetCRTC without root privileges. This doesn't make sense to me. Are you sure?
It is up to display servers like X, weston, and whatever you're working on to call drmDropMaster before invoking the VT_RELDISP ioctl, so that the next session can call drmSetMaster successfully.
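To make that handshake concrete, here is a minimal sketch (my illustration, not from the X or weston sources; it assumes the VT was put into VT_PROCESS mode via VT_SETMODE with SIGUSR1 as both the release and the acquire signal, and that drm_fd and vt_fd are already open):

#include <signal.h>
#include <sys/ioctl.h>
#include <linux/vt.h>
#include <xf86drm.h>

static int drm_fd;   /* open handle to /dev/dri/card0 */
static int vt_fd;    /* open handle to our /dev/ttyN  */
static int active = 1;

static void vt_switch_handler(int sig)
{
    (void)sig;
    if (active) {
        /* Kernel wants the VT: give up master first, then allow the switch. */
        drmDropMaster(drm_fd);
        ioctl(vt_fd, VT_RELDISP, 1);
        active = 0;
    } else {
        /* We got the VT back: acknowledge, retake master, restore our mode. */
        ioctl(vt_fd, VT_RELDISP, VT_ACKACQ);
        drmSetMaster(drm_fd);
        /* ... drmModeSetCrtc(...) to restore our configuration ... */
        active = 1;
    }
}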
Before digging into why it doesn't work, I had to understand how it works.
So, calling drmModeSetCRTC and drmSetMaster in libdrm in reality just calls ioctl:
include/xf86drm.c
int drmSetMaster(int fd)
{
    return ioctl(fd, DRM_IOCTL_SET_MASTER, 0);
}
This is handled by the kernel. In my program, the most important functions that control the display are drmModeSetCRTC and drmModeAddFB; the rest is just diagnostics, really. So let's see how they're handled by the kernel. It turns out there is a big table that maps ioctl events to their handlers:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
};
This table is used by drm_ioctl, whose most interesting part is drm_ioctl_permit.
drivers/gpu/drm/drm_ioctl.c
long drm_ioctl(struct file *filp,
               unsigned int cmd, unsigned long arg)
{
    ...
    retcode = drm_ioctl_permit(ioctl->flags, file_priv);
    if (unlikely(retcode))
        goto err_i1;
    ...
}

static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
    /* ROOT_ONLY is only for CAP_SYS_ADMIN */
    if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
        return -EACCES;

    /* AUTH is only for authenticated or render client */
    if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
                 !file_priv->authenticated))
        return -EACCES;

    /* MASTER is only for master or control clients */
    if (unlikely((flags & DRM_MASTER) && !file_priv->is_master &&
                 !drm_is_control_client(file_priv)))
        return -EACCES;

    /* Control clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_CONTROL_ALLOW) &&
                 drm_is_control_client(file_priv)))
        return -EACCES;

    /* Render clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_RENDER_ALLOW) &&
                 drm_is_render_client(file_priv)))
        return -EACCES;

    return 0;
}
Everything makes sense so far. I can indeed call drmModeSetCRTC because I am the current DRM master. (I'm not sure why. It might be that X11 properly waives its rights once I switch to another VT, and that alone lets me automatically become the new DRM master once I start issuing ioctls?)
Anyway, let's take a look at the drmDropMaster and drmSetMaster definitions:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
    DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
    ...
};
What.
So my confusion was justified. I'm not doing anything wrong; things really are this way.
I'm under the impression that this is a serious kernel bug. Either I shouldn't be able to set the CRTC at all, or I should be able to drop/set master. In any case, revoking every non-root program's right to draw to the screen because
any random application could display whatever it wanted to your screen
is too aggressive. I, as the user, should have the freedom to control that without giving root access to the whole program and without depending on systemd, for example via chmod 0777 /dev/dri/card0 (or group management). As it is now, it looks to me like a lazy man's answer to proper permission management.
Thanks for writing this up. This is indeed the expected outcome; you don't need to look for a subtle bug in your code.
It's definitely intended that you can become the master implicitly. A dev wrote example code as initial documentation for DRM, and it does not use SetMaster. And there is a comment in the source code (now drm_auth.c) "successfully became the device master (either through the SET_MASTER IOCTL, or implicitly through opening the primary device node when no one else is the current master that time)".
DRM_ROOT_ONLY is commented as
/**
* #DRM_ROOT_ONLY:
*
* Anything that could potentially wreak a master file descriptor needs
* to have this flag set. Current that's only for the SETMASTER and
* DROPMASTER ioctl, which e.g. logind can call to force a non-behaving
* master (display compositor) into compliance.
*
* This is equivalent to callers with the SYSADMIN capability.
*/
The above requires some clarification IMO. The way logind forces a non-behaving master is not simply by calling SETMASTER for a different master - that would actually fail. First, it must call DROPMASTER on the non-behaving master. So logind is relying on this permission check, to make sure the non-behaving master cannot then race logind and call SETMASTER first.
Equally logind is assuming the unprivileged user doesn't have permission to open the device node directly. I would suspect the ability to implicitly become master on open() is some form of backwards compatibility.
Notice that if you could drop master without privileges, you couldn't use SETMASTER to get it back. This means the point of doing so is rather limited: you can't use it to implement the traditional switching back and forth between multiple graphics servers.
There is a way you can drop master and get it back: close the fd, and re-open it when needed. It sounds to me like this matches how old-style X (pre-DRM?) worked; wasn't it possible to switch between multiple instances of the X server, with each of them having to completely take over the hardware? So you always had to start from scratch after a VT switch. This is not as good as being able to switch masters, though; logind says
/* On DRM devices we simply drop DRM-Master but keep it open.
* This allows the user to keep resources allocated. The
* CAP_SYS_ADMIN restriction to DRM-Master prevents users from
* circumventing this. */
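For completeness, a rough sketch of the close-and-reopen workaround mentioned above (my illustration; it assumes the device node is /dev/dri/card0 and that its file permissions let you open it at all):

/* Without CAP_SYS_ADMIN, "dropping master" can be emulated by closing
 * the fd; reopening grants implicit master only if nobody else holds it,
 * and all framebuffers/CRTC state created on the old fd are lost. */
#include <fcntl.h>
#include <unistd.h>

static int release_gpu(int drm_fd)
{
    close(drm_fd);  /* implicitly drops master and frees our resources */
    return -1;      /* caller should forget the old fd */
}

static int reacquire_gpu(void)
{
    /* Becomes master implicitly only if no current master exists. */
    return open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
}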
As of Linux 5.8, drmDropMaster() no longer requires root privileges.
The relevant commit is 45bc3d26c: drm: rework SET_MASTER and DROP_MASTER perm handling.
The source code comments provide a good summary of the old and new situation:
In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
from becoming master and/or failing to release it.
At the same time, the first client (for a given VT) is always master.
Thus in order for the ioctls to succeed, one had to explicitly run the
application as root or flip the setuid bit.
If the CAP_SYS_ADMIN was missing, no other client could become master...
EVER :-( Leading to a) the graphics session dying badly or b) a completely
locked session.
...
Here we implement the next best thing:
ensure the logind style of fd passing works unchanged, and
allow a client to drop/set master, iff it is/was master at a given point in time.
...

What is the connection between IDT and IRQ?

I want to intercept some interrupts in the kernel, and just wrap the original function with some of my code. Mainly for learning purpose.
I already know how to intercept page-faults, and double-faults, through the IDT (Interrupt Descriptor Table), and it works just fine.
So now I want to intercept the RTC, which is at IRQ 8. I didn't find anything specific, but after reading some code I think that the IRQs are inside the IDT, and that they start at the 32nd entry (in code, IRQ = IDT+32;). So I ran an example where I changed the 40th entry of the IDT, and nothing happened. (Just in case, I ran it again changing entry 0x3a, even though I'm pretty sure the offset is in decimal; nothing happened either.)
So my questions:
Am I right that IRQ = IDT+32? (If not, where is the table that dispatches IRQs?)
Am I right to expect a kernel crash after intercepting the RTC? (I just redirect it to a function that prints 'Hello, World!')
In case it matters, my tests run inside a VM. I run Linux Mint on a 64-bit machine (guest & host). The host runs Windows 7.
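For reference, this is the conventional legacy-PIC layout the question assumes (my illustration; it only holds when the kernel programs the 8259 PICs with base vectors 0x20/0x28, which Linux does for the legacy-PIC case; with the IO-APIC, vectors are assigned differently):

/* Legacy 8259 remap (assumption: base vectors 0x20 and 0x28):
 *   master PIC: IRQ 0..7  -> IDT vectors 32..39
 *   slave  PIC: IRQ 8..15 -> IDT vectors 40..47
 * So the RTC (IRQ 8) would land at IDT entry 40 (0x28). */
static inline int irq_to_vector(int irq)
{
    return 32 + irq; /* valid for 0 <= irq <= 15 under this remap */
}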

SetWindowsHook Global not very Global

I'm playing around with SetWindowsHookEx; specifically, I would like to be able to find out about any window (on my desktop) that's been activated, via mouse or keyboard.
Reading through the MSDN docs for SetWindowsHookEx, it would appear that a WH_CBT hook would do the job. I've created a DLL and put all the code in there, which I control from a GUI app (which also handles the unhook).
BUT I only appear to be getting the activation code when I activate my GUI app; any other app I activate is ignored.
In my dll I have the setup code and the CBTProc like so:
LRESULT WINAPI CBTProc(int Code, WPARAM W, LPARAM L)
{
    if (Code < 0)
        return CallNextHookEx(NULL, Code, W, L);
    if (Code == HCBT_ACTIVATE) { // never get unless I activate my app
        HWND a = reinterpret_cast<HWND>(W);
        TRACE("this window was activated %d\n", a);
    }
    return CallNextHookEx(NULL, Code, W, L);
}

EXPORTED HHOOK WINAPI Setup(HWND MyWind)
{
    ...
    // gDllUInstance set in dllmain
    return SetWindowsHookEx(WH_CBT, CBTProc, gDllUInstance, 0);
}
All pretty simple stuff. I've tried moving the setup out of the DLL, but I still get the same effect.
It would appear that the DLL is getting loaded into other processes; I'm counting the number of DLL_PROCESS_ATTACHes I'm getting and can see it's going up (not very scientific, I know).
NOTE that this is 32-bit code running on a 32-bit OS - win2k3.
Are my expectations of the hooking mechanism wrong? Should I only be getting the activation of my app, or do I need a different type of hook?
EDIT: the trace function writes to a file, telling me what's sending me activations.
TIA.
Turns out it's working OK. As Hans points out, I'm just not seeing the output from the other processes in the debugger; if I put in some extra tracing code - one trace file per attached process, sketched below - I can see that things are working after all.
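A minimal illustration of that per-process tracing (the path and the Trace name are just placeholders I chose, not from the original code):

#include <windows.h>
#include <stdarg.h>
#include <stdio.h>

/* Each process that loads the hook DLL appends to its own log file, so
 * activity in other processes is visible even though their debug output
 * goes to their own debugger, not ours. */
static void Trace(const char *fmt, ...)
{
    char path[MAX_PATH];
    va_list args;
    FILE *f;

    sprintf(path, "C:\\temp\\cbthook_%lu.log", GetCurrentProcessId());
    f = fopen(path, "a");
    if (f == NULL)
        return;
    va_start(args, fmt);
    vfprintf(f, fmt, args);
    va_end(args);
    fclose(f);
}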
Many thanks for the replies.

Is there any way to detect the monitor state in Windows (on or off)?

Does anyone know if there is an API to get the current monitor state (on or off) in Windows (XP/Vista/2000/2003)?
All of my searches seem to indicate there is no real way of doing this.
This thread tries to use GetDevicePowerState which according to Microsoft's docs does not work for display devices.
In Vista I can listen to GUID_MONITOR_POWER_ON but I do not seem to get events when the monitor is turned off manually.
In XP I can hook into WM_SYSCOMMAND SC_MONITORPOWER, looking for status 2. This only works for situations where the system triggers the power off.
The WMI Win32_DesktopMonitor class does not seem to help out as well.
Edit: Here is a discussion on comp.os.ms-windows.programmer.win32 indicating there is no reliable way of doing this.
Anyone else have any other ideas?
GetDevicePowerState sometimes works for monitors. If it's present, you can open the \\.\LCD device. Close it immediately after you've finished with it.
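A rough sketch of that approach (my illustration; GetDevicePowerState is not documented for display devices, so treat the result as best-effort):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    BOOL on = FALSE;
    HANDLE h = CreateFileW(L"\\\\.\\LCD", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("\\\\.\\LCD device not present\n");
        return 1;
    }
    if (GetDevicePowerState(h, &on))
        printf("monitor reports: %s\n", on ? "on" : "off");
    CloseHandle(h); /* close it immediately, as suggested above */
    return 0;
}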
Essentially, you're out of luck—there is no reliable way to detect the monitor power state, short of writing a device driver and filtering all of the power IRPs up and down the display driver chain. And that's not very reliable either.
You could hook up a webcam, point it at your screen and do some analysis on the images you receive ;)
Before doing anything based on the monitor state, just remember that users can use a machine through Remote Desktop or other systems that don't require a monitor connected to the machine, so don't turn off any visualization based on the monitor state.
You can't.
It looks like all monitor power capabilities are tied to "power save mode".
After searching, I found code here that connects the SC_MONITORPOWER message to system values (post number 2).
I used the code to test whether the system values change when I manually switch off the monitor.
int main()
{
    for (; monitorOff() != 1; )
        Sleep(500);
    return 0;
}//main
And the code never stops, no matter how long I leave the monitor switched off.
Here is the code of the monitorOff function:
int monitorOff()
{
    const GUID MonitorClassGuid =
        {0x4d36e96e, 0xe325, 0x11ce,
         {0xbf, 0xc1, 0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18}};
    list<DevData> monitors;
    ListDeviceClassData(&MonitorClassGuid, monitors);

    list<DevData>::iterator it = monitors.begin(),
                            it_end = monitors.end();
    for (; it != it_end; ++it)
    {
        //it->PowerData.PD_PowerStateMapping
        if (it->PowerData.PD_MostRecentPowerState != PowerDeviceD0)
        {
            return 1;
        }
    }//for
    return 0;
}//monitorOff
Conclusion: when you manually switch off the monitor, you can't catch it through Windows (unless there is some unusual driver interface for this), because all Windows capabilities are tied to "power save mode".
In Windows XP or later you can use the IMSVidDevice Interface.
See
http://msdn.microsoft.com/en-us/library/dd376775(VS.85).aspx
(not sure if this works in Server 2003)
With Delphi code, you can detect invalid monitor geometry while standby is in progress:

i := 0;
// display the geometry string somehow, e.g.:
ShowMessage('Monitor'+IntToStr(i)+': '+
  IntToStr(Screen.Monitors[i].BoundsRect.Left)+', '+
  IntToStr(Screen.Monitors[i].BoundsRect.Top)+', '+
  IntToStr(Screen.Monitors[i].BoundsRect.Right)+', '+
  IntToStr(Screen.Monitors[i].BoundsRect.Bottom));
Results:
Monitor geometry before standby:
Monitor0: 0, 0, 1600, 900
Monitor geometry while standby in Delphi 7:
Monitor0: 1637792, 4210405, 31266576, 1637696
Monitor geometry while standby in Delphi XE:
Monitor0: 4211194, 40, 1637668, 1637693
This is a really old post, but if it can help someone: I have found a solution to detect whether a screen is available or not, the Connecting and Configuring Displays (CCD) API of Windows.
It's part of User32.dll, and the interesting functions are GetDisplayConfigBufferSizes and QueryDisplayConfig. They give us all the information that can be viewed in the display settings of Windows.
In particular, each path info contains a TargetInfo structure with a targetAvailable flag. This flag seems to be correctly updated on all the configurations I have tried so far.
This allows you to know the state of every screen connected to the PC and to set their configuration.
Here is a CCD wrapper for .NET.
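For reference, a minimal C sketch of the same check (my illustration; it enumerates all paths and prints each target's availability):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "user32.lib")

int main(void)
{
    UINT32 numPaths = 0, numModes = 0;
    DISPLAYCONFIG_PATH_INFO *paths;
    DISPLAYCONFIG_MODE_INFO *modes;

    if (GetDisplayConfigBufferSizes(QDC_ALL_PATHS,
                                    &numPaths, &numModes) != ERROR_SUCCESS)
        return 1;

    paths = malloc(numPaths * sizeof *paths);
    modes = malloc(numModes * sizeof *modes);
    if (paths && modes &&
        QueryDisplayConfig(QDC_ALL_PATHS, &numPaths, paths,
                           &numModes, modes, NULL) == ERROR_SUCCESS) {
        /* targetAvailable reflects whether a monitor is reachable on
         * this path, as shown in the Windows display settings. */
        for (UINT32 i = 0; i < numPaths; ++i)
            printf("path %u: targetAvailable=%d\n",
                   (unsigned)i, (int)paths[i].targetInfo.targetAvailable);
    }
    free(paths);
    free(modes);
    return 0;
}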
If your monitor has some sort of built-in USB hub, you could try and use that to detect if the monitor is off/on.
This will of course only work if the USB hub doesn't stay connected when the monitor is considered "off".
