Win32: How to crash?

I'm trying to figure out where Windows Error Reports are saved; I hit Send on some earlier today, but I forgot that I wanted to "view the details" so I could examine the memory minidumps.
But I cannot find where they are stored (and Google doesn't know).
So I want to write a dummy application that will crash, show the WER dialog, and let me click "view the details" so I can get to the folder where the dumps are saved.
How can I crash on Windows?
Edit: The reason I ask is that I've already tried overflowing the stack and floating-point division by zero. The stack overflow makes the app vanish, but no WER dialog pops up. Floating-point division by zero results in +INF, with no exception and no crash.

You guys are all so verbose! :-)
Here's a compact way to do it:
*((int*)0)=0;

Should be a good start:
#include <stdio.h>

int main(int argc, char* argv[])
{
    char *pointer = NULL;
    printf("crash please %c", *pointer);  /* dereferencing NULL faults here */
    return 0;
}

You are assuming the memory dumps are still around. Once they are sent, AFAIK the dumps are deleted from the machine.
The dumps themselves should be located somewhere under %TEMP%.
As far as crashing goes, that's not difficult; just do something that causes an access violation (a segfault, in Unix terms).

void crash(void)
{
    char* a = 0;
    *a = 0;   /* write through a null pointer: access violation */
}

Not sure if this will trigger the Error Reporting dialog, but you could try integer division by zero; unlike the floating-point case, it raises a structured exception.
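For example, a minimal sketch (my addition, not from the original answer); the volatile keeps the compiler from folding the division away at compile time:

#include <stdio.h>

int main(void)
{
    volatile int zero = 0;     /* volatile so the division isn't optimized out */
    printf("%d\n", 1 / zero);  /* raises EXCEPTION_INT_DIVIDE_BY_ZERO (0xC0000094) */
    return 0;
}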

The officially-supported ways to trigger a crash on purpose can be found here:
http://msdn.microsoft.com/en-us/library/ff545484(v=VS.85).aspx
Basically:
With USB keyboards, you must enable the keyboard-initiated crash in the registry. In the registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\kbdhid\Parameters, create a value named CrashOnCtrlScroll, and set it equal to a REG_DWORD value of 0x01.
Then:
You must restart the system for these settings to take effect. After this is completed, the keyboard crash can be initiated by using the following hotkey sequence: Hold down the rightmost CTRL key, and press the SCROLL LOCK key twice.
No programming necessary ;) No wheel reinvention here :)
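If you would rather flip the value from code than from regedit, here is a minimal sketch (my addition, run elevated; the key and value names come from the quoted MSDN text, and a reboot is still required afterwards):

#include <windows.h>

int main(void)
{
    HKEY key;
    DWORD value = 1;   /* CrashOnCtrlScroll = 0x01 */
    if (RegCreateKeyExA(HKEY_LOCAL_MACHINE,
            "System\\CurrentControlSet\\Services\\kbdhid\\Parameters",
            0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL) != ERROR_SUCCESS)
        return 1;
    LONG rc = RegSetValueExA(key, "CrashOnCtrlScroll", 0, REG_DWORD,
                             (const BYTE*)&value, sizeof(value));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}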

Interesting to know how to crash Windows. But why not have a look at
%allusersprofile%\Application Data\Microsoft\Dr Watson\
first? Look out for application-specific crash-data folders as well; I found e.g.
...\FirefoxPortable\Data\profile\minidumps\
and
...\OpenOfficePortable\Data\settings\user\crashdata\.

Related

Debugging an OnWndMsg exception (Windows C++): WM_TIMER message crash mentions IsWindowVisible

I have a crash that is caught in Sentry:
CWnd::IsWindowVisible
Unhandled
Fatal Error: EXCEPTION_ACCESS_VIOLATION_READ
I downloaded the minidump file (.dmp) and ran it through Visual Studio 2019. The crash is occurring in the method:
BOOL CWnd::OnWndMsg(UINT message, WPARAM wParam, LPARAM lParam, LRESULT* pResult)
I'm including a screenshot because one of my questions is: where exactly is the crash? Is the line in the screenshot really the crash site?
Using the debugger I can see that the message parameter coming into the method is 275, which is:
0x0113 (275) WM_TIMER
See for example: https://wiki.winehq.org/List_Of_Windows_Messages.
My questions:
1. Where exactly is the crash? I don't see how checking pResult for NULL and assigning lResult to *pResult could cause a crash of any kind, certainly not an EXCEPTION_ACCESS_VIOLATION_READ.
2. In anticipation of somebody stating that the exception is actually elsewhere...
a. How do I get the exact location/problem from the dump?
b. Does anyone recognize this type of problem with timers? It's not my code, but I'm already imagining such things as multiple stops of timers, or doing something to a window that no longer exists, based on the EXCEPTION_ACCESS_VIOLATION_READ.
3. Any clues/ideas how to proceed?
As always, if this is better asked in another group, or more clarification is needed, please let me know.
And Thanks!
EDIT - ADDITIONAL INFO
Note that Sentry says the program is actually crashing in IsWindowVisible. That certainly isn't visible when debugging the dump file (that I can see), and the point of crash shown by the screenshot doesn't show it, but it might make sense: the OnTimer routine certainly calls IsWindowVisible().
4. Can IsWindowVisible() crash with EXCEPTION_ACCESS_VIOLATION_READ?
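No answer is recorded here, but to illustrate the asker's own hypothesis ("doing something to a Window that no longer exists"), here is a hypothetical sketch; the names and structure are made up, not the actual code. CWnd::IsWindowVisible() reads m_hWnd from the CWnd object before calling ::IsWindowVisible(), so calling it through a dangling CWnd* reads freed memory, which would match an EXCEPTION_ACCESS_VIOLATION_READ:

// Hypothetical MFC timer handler; m_pStatusWnd is an assumed child-window
// pointer that may be deleted between timer ticks.
void CMyDlg::OnTimer(UINT_PTR nIDEvent)
{
    // This guard only works if m_pStatusWnd is set to NULL when the child
    // is destroyed; calling IsWindowVisible() through a dangling pointer
    // is exactly the kind of read access violation Sentry reports.
    if (m_pStatusWnd != NULL && ::IsWindow(m_pStatusWnd->GetSafeHwnd()))
    {
        if (m_pStatusWnd->IsWindowVisible())
        {
            // ... update the child window ...
        }
    }
    else
    {
        KillTimer(nIDEvent);   // stop ticking once the window is gone
    }
    CDialogEx::OnTimer(nIDEvent);
}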

GetAsyncKeyState from windows.h

I will use the following code to explain my question:
#include <Windows.h>
#include <iostream>

int main()
{
    bool toggle = false;
    while (true)
    {
        if (GetAsyncKeyState('C') & 0x8000)
        {
            toggle = !toggle;
            if (toggle) std::cout << "Pressed\n";
            else        std::cout << "Not pressed\n";
        }
    }
}
Testing, I see that
(GetAsyncKeyState('C') & 0x8000) // 0x8000 to see if the most significant bit is 1
has the same behavior as
(GetAsyncKeyState('C'))
However, to achieve the behavior I want, which is how any text input out there works (it waits about a second, and if you are still pressing the key, it starts repeating at a certain rate), I need to write
(GetAsyncKeyState('C') & 1)
The documentation says
The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
Can someone clarify this please?
MSDN tells you why on the same page you linked to!
Although the least significant bit of the return value indicates whether the key has been pressed since the last query, due to the pre-emptive multitasking nature of Windows, another application can call GetAsyncKeyState and receive the "recently pressed" bit instead of your application. The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
GetAsyncKeyState gives you "the interrupt-level state associated with the hardware", and that state is probably shared by all processes in the window station/session.
The low bit might be connected to the keyboard key-repeat delay you can set in Control Panel, but it does not really matter, because MSDN tells you not to rely on that bit.
GetAsyncKeyState is usually not the correct way to process keyboard input. Console applications should read stdin, or use the console API. GUI applications should use the WM_CHAR/WM_KEYDOWN/WM_KEYUP window messages.
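To make that last point concrete, here is a minimal sketch (my addition, not from the original answer) using the console API: key events delivered by ReadConsoleInput already have the system's repeat delay and repeat rate applied, which is exactly the "wait, then start repeating" behavior the question asks for:

#include <Windows.h>
#include <iostream>

int main()
{
    HANDLE in = GetStdHandle(STD_INPUT_HANDLE);
    INPUT_RECORD rec;
    DWORD read;
    while (ReadConsoleInput(in, &rec, 1, &read) && read == 1)
    {
        if (rec.EventType == KEY_EVENT && rec.Event.KeyEvent.bKeyDown &&
            rec.Event.KeyEvent.uChar.AsciiChar == 'c')
        {
            // Fires once on the initial press, then again at the keyboard
            // repeat rate while the key is held down.
            std::cout << "Pressed (repeat count "
                      << rec.Event.KeyEvent.wRepeatCount << ")\n";
        }
    }
}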

drmDropMaster requires root privileges?

Pardon the long introduction, but I haven't seen any other questions about this on SO.
I'm playing with DRM (Direct Rendering Manager, a wrapper for Linux kernel mode setting) and I'm having difficulty understanding a part of its design.
Basically, I can open a graphics card device in my virtual terminal, set up framebuffers, and change a connector and its CRTC just fine. This results in me being able to render to the VT in a lightweight graphics mode without needing an X server (that's what KMS is about, and in fact the X server uses it underneath).
Then I wanted to implement graceful VT switching, so that when I hit Ctrl+Alt+F3 etc., I can see my other consoles. Turns out it's easy to do by calling ioctl() with stuff from linux/vt.h and handling some user signals.
But then I tried to switch from my graphics program to a running X server. Bzzt! It didn't work: the X server didn't draw anything at all. After some digging I found that in the Linux kernel, only one program can do kernel mode setting. So what happens is this:
1. I switch from X to a virtual terminal
2. I run my program
3. The program enters graphics mode with drmOpen, drmModeSetCRTC etc.
4. I switch back to X
5. X no longer has the privileges to restore its own mode.
Then I found drmDropMaster() and drmSetMaster() in the Wayland source code. These functions are supposed to release and regain the privilege to set modes, so that the X server can continue to work, and after switching back to my program, it can take over from there.
Finally, the real question.
These functions require root privileges. This is the part I don't understand. I can mess with kernel modes, but I can't say "okay X11, I'm done playing, I'm giving you the access now"? Why? Or should this work in theory, and am I just doing something wrong in my code (e.g. working with the wrong file descriptors, or whatever)?
If I try to run my program as a normal user, I get "permission denied". If I run it as root, it works fine: I can switch from X to my program and vice versa.
Why?
Yes, drmSetMaster and drmDropMaster require root privileges, because they allow you to do mode setting. Otherwise, any random application could display whatever it wanted on your screen. weston handles this through a setuid launcher program. The systemd people also added functionality to systemd-logind (which runs as root) to make the drm{Set,Drop}Master calls for you; this is what enables recent X servers to run without root privileges. You could look into that if you don't mind depending on systemd.
Your post seems to suggest that you can successfully call drmModeSetCRTC without root privileges. This doesn't make sense to me. Are you sure?
It is up to display servers like X, weston, and whatever you're working on to call drmDropMaster before invoking the VT_RELDISP ioctl, so that the next session can call drmSetMaster successfully.
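Here is a rough sketch of that handshake (my addition; error handling and the actual mode-restore code are omitted, and the device paths and signal choices are assumptions):

#include <fcntl.h>
#include <signal.h>
#include <sys/ioctl.h>
#include <linux/vt.h>
#include <xf86drm.h>
#include <stdio.h>

static int drm_fd;   /* the DRM device node */
static int vt_fd;    /* our controlling VT */

static void on_release(int sig)   /* our VT is being switched away from */
{
    (void)sig;
    drmDropMaster(drm_fd);          /* let the next session set modes */
    ioctl(vt_fd, VT_RELDISP, 1);    /* acknowledge the switch */
}

static void on_acquire(int sig)   /* our VT is being switched back to */
{
    (void)sig;
    ioctl(vt_fd, VT_RELDISP, VT_ACKACQ);
    drmSetMaster(drm_fd);           /* the call that fails without privileges */
    /* ... restore our CRTC configuration here ... */
}

int main(void)
{
    struct vt_mode mode = {0};
    drm_fd = open("/dev/dri/card0", O_RDWR);
    vt_fd  = open("/dev/tty", O_RDWR);
    signal(SIGUSR1, on_release);
    signal(SIGUSR2, on_acquire);
    mode.mode   = VT_PROCESS;       /* we manage VT switching ourselves */
    mode.relsig = SIGUSR1;
    mode.acqsig = SIGUSR2;
    if (ioctl(vt_fd, VT_SETMODE, &mode) < 0)
        perror("VT_SETMODE");
    /* ... main loop: draw, wait for signals, etc. ... */
    return 0;
}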
Before digging into why it doesn't work, I had to understand how it works.
Calling drmSetMaster (and friends) in libdrm in reality just issues an ioctl:
include/xf86drm.c
int drmSetMaster(int fd)
{
    return ioctl(fd, DRM_IOCTL_SET_MASTER, 0);
}
This is handled by the kernel. In my program, the most important functions that control the display are drmModeSetCRTC and drmModeAddFB; the rest is really just diagnostics. So let's see how they're handled by the kernel. It turns out there is a big table that maps ioctl events to their handlers:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...
};
This table is used by drm_ioctl(), whose most interesting part is drm_ioctl_permit().
drivers/gpu/drm/drm_ioctl.c
long drm_ioctl(struct file *filp,
               unsigned int cmd, unsigned long arg)
{
    ...
    retcode = drm_ioctl_permit(ioctl->flags, file_priv);
    if (unlikely(retcode))
        goto err_i1;
    ...
}

static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
    /* ROOT_ONLY is only for CAP_SYS_ADMIN */
    if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
        return -EACCES;

    /* AUTH is only for authenticated or render client */
    if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
                 !file_priv->authenticated))
        return -EACCES;

    /* MASTER is only for master or control clients */
    if (unlikely((flags & DRM_MASTER) && !file_priv->is_master &&
                 !drm_is_control_client(file_priv)))
        return -EACCES;

    /* Control clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_CONTROL_ALLOW) &&
                 drm_is_control_client(file_priv)))
        return -EACCES;

    /* Render clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_RENDER_ALLOW) &&
                 drm_is_render_client(file_priv)))
        return -EACCES;

    return 0;
}
Everything makes sense so far. I can indeed call drmModeSetCRTC, because I am the current DRM master. (I'm not sure why; this might have to do with X11 properly waiving its rights once I switch to another VT. Perhaps that alone automatically makes me the new DRM master once I start issuing ioctls?)
Anyway, let's take a look at the drmDropMaster and drmSetMaster definitions:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
    DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
    ...
};
What.
So my confusion was justified. I'm not doing anything wrong; things really are this way.
I'm under the impression that this is a serious kernel bug. Either I shouldn't be able to set the CRTC at all, or I should be able to drop/set master. In any case, revoking every non-root program's right to draw to the screen because
any random application could display whatever it wanted to your screen
is too aggressive. I, as the user, should have the freedom to control that without giving root access to the whole program, and without depending on systemd, for example via chmod 0777 /dev/dri/card0 (or group management). As it is now, it looks to me like a lazy man's answer to proper permission management.
Thanks for writing this up. This is indeed the expected outcome; you don't need to look for a subtle bug in your code.
It's definitely intended that you can become the master implicitly. A developer wrote example code as the initial documentation for DRM, and it does not use SetMaster. And there is a comment in the source code (now drm_auth.c): "successfully became the device master (either through the SET_MASTER IOCTL, or implicitly through opening the primary device node when no one else is the current master that time)".
DRM_ROOT_ONLY is commented as
/**
 * #DRM_ROOT_ONLY:
 *
 * Anything that could potentially wreak a master file descriptor needs
 * to have this flag set. Current that's only for the SETMASTER and
 * DROPMASTER ioctl, which e.g. logind can call to force a non-behaving
 * master (display compositor) into compliance.
 *
 * This is equivalent to callers with the SYSADMIN capability.
 */
The above requires some clarification, IMO. The way logind forces a non-behaving master is not simply by calling SETMASTER for a different master; that would actually fail. First, it must call DROPMASTER on the non-behaving master. So logind is relying on this permission check to make sure the non-behaving master cannot race logind and call SETMASTER first.
Equally, logind is assuming the unprivileged user doesn't have permission to open the device node directly. I suspect the ability to implicitly become master on open() is some form of backwards compatibility.
Notice that if you could drop master, you couldn't use SETMASTER to get it back. This means the point of doing so is rather limited: you can't use it to implement the traditional switching back and forth between multiple graphics servers.
There is one way you can drop master and get it back: close the fd, and re-open it when needed (sketched below). This sounds like a match for how old-style X (pre-DRM?) worked: wasn't it possible to switch between multiple instances of the X server, each of which would completely take over the hardware? So you always had to start from scratch after a VT switch. This is not as good as being able to switch masters, though; logind says
/* On DRM devices we simply drop DRM-Master but keep it open.
 * This allows the user to keep resources allocated. The
 * CAP_SYS_ADMIN restriction to DRM-Master prevents users from
 * circumventing this. */
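Here is what that close-and-reopen fallback could look like (my sketch, with an assumed device path; all allocated resources such as framebuffers are lost across the cycle, which is exactly the drawback the logind comment describes):

#include <fcntl.h>
#include <unistd.h>

int release_display(int drm_fd)
{
    close(drm_fd);    /* closing the fd drops master implicitly */
    return -1;        /* the caller must forget the old fd */
}

int reacquire_display(void)
{
    /* reopening makes us master implicitly, iff nobody else is master */
    return open("/dev/dri/card0", O_RDWR);
}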
As of Linux 5.8, drmDropMaster() no longer requires root privileges.
The relevant commit is 45bc3d26c: drm: rework SET_MASTER and DROP_MASTER perm handling.
The source code comments provide a good summary of the old and new situation:
In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
from becoming master and/or failing to release it.
At the same time, the first client (for a given VT) is always master.
Thus in order for the ioctls to succeed, one had to explicitly run the
application as root or flip the setuid bit.
If the CAP_SYS_ADMIN was missing, no other client could become master...
EVER :-( Leading to a) the graphics session dying badly or b) a completely
locked session.
...
Here we implement the next best thing:
ensure the logind style of fd passing works unchanged, and
allow a client to drop/set master, iff it is/was master at a given point
in time.
...

SetWindowsHook Global not very Global

I'm playing around with SetWindowsHookEx; specifically, I would like to be able to find out about any window (on my desktop) that's been activated, via mouse or keyboard.
Reading through the MSDN docs for SetWindowsHookEx, it would appear that a WH_CBT hook would do the job. I've created a DLL and put all the code in there, which I control from a GUI app (which also handles the unhook).
BUT I only appear to be getting the activation code when I activate my GUI app; any other app I activate is ignored.
In my DLL I have the setup code and the CBTProc like so:
LRESULT WINAPI CBTProc(int Code, WPARAM W, LPARAM L)
{
    if (Code < 0)
        return CallNextHookEx(NULL, Code, W, L);
    if (Code == HCBT_ACTIVATE)  // never get here unless I activate my own app
    {
        HWND a = reinterpret_cast<HWND>(W);
        TRACE("this window was activated %p\n", a);
    }
    return CallNextHookEx(NULL, Code, W, L);
}

EXPORTED HHOOK WINAPI Setup(HWND MyWind)
{
    ...
    // gDllUInstance set in DllMain
    return SetWindowsHookEx(WH_CBT, CBTProc, gDllUInstance, 0);
}
All pretty simple stuff; I've tried moving the setup out of the DLL, but I still get the same effect.
The DLL does appear to be getting loaded into other processes; I'm counting the number of DLL_PROCESS_ATTACHes I'm getting and can see it going up (not very scientific, I know).
NOTE that this is 32-bit code running on a 32-bit OS: Win2k3.
Are my expectations of the hooking mechanism wrong? Should I only be getting the activation of my own app, or do I need a different type of hook?
EDIT: the trace function writes to a file, telling me what's sending me activations.
TIA.
Turns out it's working OK, as Hans points out; I'm just not seeing the output from the other processes in the debugger. If I put in some extra tracing code (one trace file per attached process) I can see that things are working after all.
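For reference, the per-process tracing could look something like this (a sketch with made-up names, not the actual code): each process the hook DLL is injected into appends to its own log file, so activity is visible even when the debugger only shows one process.

#include <windows.h>
#include <stdio.h>

void TraceToFile(const char* msg)
{
    char path[MAX_PATH];
    // One log file per attached process, e.g. C:\temp\cbthook_1234.log
    snprintf(path, sizeof(path), "C:\\temp\\cbthook_%lu.log",
             (unsigned long)GetCurrentProcessId());
    FILE* f = NULL;
    if (fopen_s(&f, path, "a") == 0 && f != NULL)
    {
        fprintf(f, "%s\n", msg);
        fclose(f);
    }
}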
Many thanks for the replies.

Windows Client graphics written off the window to upper-left of screen

I have a Windows WinMain() window in which I draw simple graphics: merely LineTo() and FillRect(). The rectangles move around. After about an hour, the output that used to go to the main window all of a sudden goes to the upper-left corner of my screen, as if client coordinates were being interpreted as screen coordinates. My GetDC()'s and ReleaseDC()'s seem to be balanced, and I even checked the return value from ReleaseDC() to make sure it is not 0 (per MSDN). Sometimes the output moves back to my main window. When I go to the debugger (VS 2010), my coordinates do not seem amiss, but output is going to the wrong place. I handle WM_PAINT, WM_CREATE, WM_TIMER, and a few others. I do not know how to debug this. Any help would be appreciated.
This has 'not checking return values' written all over it. That's pretty crucial in raw Win32 programming; almost every API function returns a boolean or a handle, where FALSE or NULL indicates failure. GetLastError() provides the error code.
A cheap way to check for this without modifying code is to use the debugger to look at the EAX register value after the API call: 0 indicates failure. In Visual Studio you can do so by using the @eax and @err pseudo-variables in the Watch window, respectively the function return value and the GetLastError value.
Things go bad once Windows starts failing API calls, probably because of a resource leak. You can see it with TaskMgr.exe, Processes tab: View + Select Columns, then tick Handles, USER Objects, and GDI Objects. It is usually the latter; restoring the device context and releasing drawing objects is very easy to fumble. You don't have to wait until it fails: a steadily climbing number in one of those columns is the giveaway. It goes belly-up when the value hits 10,000.
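To make the "easy to fumble" part concrete, here is a sketch (my addition, assuming the usual GetDC/SelectObject pattern rather than the asker's actual code) of an RAII wrapper that keeps DCs and selected objects balanced automatically:

#include <windows.h>

class ScopedDC
{
    HWND    m_hwnd;
    HDC     m_hdc;
    HGDIOBJ m_oldPen;
public:
    explicit ScopedDC(HWND hwnd, HPEN pen)
        : m_hwnd(hwnd), m_hdc(::GetDC(hwnd)), m_oldPen(NULL)
    {
        if (m_hdc && pen)
            m_oldPen = ::SelectObject(m_hdc, pen); // remember what to restore
    }
    ~ScopedDC()
    {
        if (m_hdc)
        {
            if (m_oldPen)
                ::SelectObject(m_hdc, m_oldPen);   // restore before release
            ::ReleaseDC(m_hwnd, m_hdc);            // always balanced
        }
    }
    HDC get() const { return m_hdc; }
};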
You must be calling GetDC(NULL) somewhere by mistake, which would get the DC for the entire desktop.
You could route all your GetDC calls through a wrapper function that asserts if the argument is NULL, to help track this down:
#include <assert.h>

HDC GetDCAssert(HWND hWnd)
{
    assert(hWnd);   // catch an accidental GetDC(NULL)
    return ::GetDC(hWnd);
}
