Minidump creates empty dump file - Windows

We have an in-proc crash handler which uses MiniDumpWriteDump() from DbgHelp to write a minidump in case of a process crash.
I know it's not the best way to do it; however, at the moment we have no other option.
The problem is: one particular executable always creates 0-byte dumps, while the same code works well for other processes. What could be the possible reason behind this behavior?

We had this problem from time to time with our minidumping code. In the end we changed it to spawn a lightweight secondary process on startup and used a simple MMF (memory-mapped file) to communicate with the dumper process when we needed a minidump generated.
We had all sorts of problems using MiniDumpWriteDump from within the process being dumped. Since the change to a dedicated dumping process, it's been very reliable.
If at all possible, I suggest you consider doing the same. It ended up not being that much work.
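For reference, a minimal sketch of what the dumper side of such a setup can look like. This is an illustration, not the answerer's actual code: the target PID, faulting thread id and the EXCEPTION_POINTERS address are assumed to arrive through the MMF, and all names are placeholders.

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

// Runs in the dedicated dumper process, not in the crashing one.
bool WriteDumpOfTarget(DWORD targetPid, DWORD faultingTid,
                       EXCEPTION_POINTERS* remoteExcPtrs, // address valid in the target
                       const wchar_t* dumpPath)
{
    HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                  FALSE, targetPid);
    if (!hProcess)
        return false;

    HANDLE hFile = CreateFileW(dumpPath, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) { CloseHandle(hProcess); return false; }

    MINIDUMP_EXCEPTION_INFORMATION mei = {};
    mei.ThreadId = faultingTid;
    mei.ExceptionPointers = remoteExcPtrs;
    mei.ClientPointers = TRUE;  // the pointers live in the target's address space

    BOOL ok = MiniDumpWriteDump(hProcess, targetPid, hFile, MiniDumpNormal,
                                remoteExcPtrs ? &mei : NULL, NULL, NULL);
    CloseHandle(hFile);
    CloseHandle(hProcess);
    return ok != FALSE;
}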

Related

Win32: Storing debug info to survive crash

I'd like to store some information somewhere for debugging purposes, in case my program crashes. On the next start I would read this information back. I could do this in the registry or file system, but that's not very fast. I could use shared memory and a second process, but I'd like to avoid running a second program all the time.
I'd like to know if there is a faster way. Is there any Windows mechanism (pipes, shared memory, whatever) that is fast and whose contents are not lost if the program crashes?
You may consider using the Application Event Log:
(C# code here, but you can do it unmanaged as well)
using System.Diagnostics;

string cs = "MyAppEvents";
if (!EventLog.SourceExists(cs))
    EventLog.CreateEventSource(cs, "Application");
EventLog.WriteEntry(cs, message, EventLogEntryType.Error);
Events written there are sent to a service, which may be fast enough.
The architecture of the event logging service is explained here:
http://oreilly.com/catalog/winlog/chapter/ch02.html
(it is old, but has not changed much)
EDIT: it HAS changed since the days of NT4 :)
Newer versions of Windows (2000 and later, i.e. NT5+) have a more efficient feature called Event Tracing for Windows (ETW). You can read more about it here: http://msdn.microsoft.com/en-us/magazine/cc163437.aspx
A reasonable implementation would use a named pipe between advapi32 in your process and the service, which on the same machine is backed by shared memory maps (in fact, there is a named pipe called "eventlog" among the others you can find on a Windows system).
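Since the answer mentions you can do it unmanaged as well, here is a sketch of the native equivalent using RegisterEventSource/ReportEvent. The source name is a placeholder mirroring the C# snippet above; without a registered message file, Event Viewer shows the raw string with a generic description.

#include <windows.h>
#pragma comment(lib, "advapi32.lib")

void LogDebugInfo(const wchar_t* message)
{
    // "MyAppEvents" is purely illustrative.
    HANDLE hLog = RegisterEventSourceW(NULL, L"MyAppEvents");
    if (!hLog)
        return;
    LPCWSTR strings[1] = { message };
    ReportEventW(hLog, EVENTLOG_ERROR_TYPE,
                 0,        // category
                 0,        // event id
                 NULL,     // user SID
                 1,        // number of strings
                 0,        // no binary data
                 strings, NULL);
    DeregisterEventSource(hLog);
}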

File operation functions return, but are not actually committed when Windows shuts down

I am working on an MFC application that can (among other things) be used to shut Windows down. When doing this, Windows of course sends WM_QUERYENDSESSION and WM_ENDSESSION to all applications, mine included. However, the problem is that my application, as part of some destructors, deletes certain files (with CFile::Remove) that have been used during execution. I have reason to believe the destructors are called when the application is closed by Windows, but that is hard to know for certain.
However, when Windows starts back up again, I occasionally notice that the files that were supposed to be deleted are still present. This does not happen consistently, even when the execution of the program is identical (I have a script for testing this). This leads me to think one of two things is happening: either a) the destructors are not consistently being called, or b) the Remove function returns, but the file is not actually deleted before Windows shuts down.
The only work-around I have found so far is to make the system wait roughly 10 seconds after my program has stopped before shutting down; then the files are properly deleted. This leads me to believe that b) may be the case.
I hope someone is able to help me with this problem.
Once your program returns from WM_ENDSESSION, Windows can terminate it at any time:
If the session is being ended, this parameter is TRUE; the session can end any time after all applications have returned from processing this message.
If the session ends quickly, then it may end before your destructors run. You must do all your cleanup before returning from WM_ENDSESSION, because there is no guarantee that you will get a chance to do it afterwards.
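In MFC terms, that means doing the deletion in the WM_ENDSESSION handler instead of relying on destructors. A sketch, assuming a CMainFrame with ON_WM_ENDSESSION() in its message map; the file path is a placeholder:

void CMainFrame::OnEndSession(BOOL bEnding)
{
    if (bEnding)
    {
        // The same cleanup the destructors would have done, but guaranteed
        // to run before Windows is allowed to kill the process.
        TRY
        {
            CFile::Remove(_T("C:\\MyApp\\work.tmp"));  // placeholder path
        }
        CATCH (CFileException, e)
        {
            // Nothing sensible to do this late; the file may already be gone.
        }
        END_CATCH
    }
    CFrameWnd::OnEndSession(bEnding);
}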
The problem here is that some versions of Windows report that file handling operations have completed before they actually have. This isn't a problem in normal operation, but if a shutdown is triggered, some pending operations, including file deletes, will be abandoned.
I would suggest that you cope with this by forcing your code to wait for a confirmed deletion of the files (have a process look for the files and raise an event when they've gone) before calling for system shutdown.
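A minimal version of that wait might look like the following; WaitUntilDeleted is a hypothetical helper, and the timeout and polling interval are arbitrary.

#include <windows.h>

// Returns true once the file is gone from the namespace, false on timeout.
// Note: GetFileAttributesW can also fail for other reasons (e.g. access
// denied), so a robust version should inspect GetLastError.
bool WaitUntilDeleted(const wchar_t* path, DWORD timeoutMs)
{
    const DWORD start = GetTickCount();
    while (GetFileAttributesW(path) != INVALID_FILE_ATTRIBUTES)
    {
        if (GetTickCount() - start > timeoutMs)
            return false;
        Sleep(100);
    }
    return true;
}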
If the system is properly shut down (not sudden power loss or the like), then all the cached data is flushed. In particular, this includes flushing the global file table (or whatever it's called in your file system), which should commit the file deletion.
So the problem seems to be that the user-mode code either doesn't call DeleteFile, or the call fails (for whatever reason).
Note that there are several ways an application (process) may exit, and destructors are not always called. There are automatic objects, which are destroyed in the context of their call stack, plus global/static objects, which are initialized and destroyed by the CRT init/cleanup code.
Below is a short summary of the ways a process can terminate, with the consequences:
All process threads exit conventionally (return from their thread procedure). The OS terminates the process once it has no threads left. All the destructors are executed.
Some threads either exit via ExitThread or are killed by TerminateThread. The automatic objects of those threads are not destructed.
The process exits via ExitProcess. Automatic objects are not destructed; global objects may be destructed (this happens if the CRT is used in a DLL).
The process is terminated by TerminateProcess. No destructors are called at all.
I suggest you check whether DeleteFile (or CFile::Remove, which wraps it) is actually called, and whether it succeeds. For instance, the delete will fail if you have the same file open twice for whatever reason.
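A quick way to check both is to look at the return value and GetLastError right at the point of deletion. A sketch; the helper name is illustrative:

#include <windows.h>
#include <stdio.h>

void RemoveWithDiagnostics(const wchar_t* path)
{
    if (DeleteFileW(path))
        return;
    // ERROR_SHARING_VIOLATION (32) typically means another handle to the
    // file is still open - e.g. the same file was opened twice.
    wchar_t buf[512];
    swprintf_s(buf, L"DeleteFile(%s) failed, error %lu\n", path, GetLastError());
    OutputDebugStringW(buf);
}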

ReadDirectoryChangesW and determining which process caused the change

How can I determine which processes are making changes to which files?
I did find this:
FileSystemWatcher: how to know which process made the change?
But I'm curious whether anything has changed lately. Is it possible yet to determine which process is making changes to the file system, either using ReadDirectoryChangesW or anything else? I'd prefer not to have to write or use a kernel driver.
Create a security audit on the files you want to track. The information will be recorded in the security event log.
While it may be possible to find out the process that changes a file using kernel drivers (for example, Process Monitor does this), there will always be a problem identifying the process when the folder is shared on the network and a process on another computer modifies the file over the network.
Even a kernel driver would in that case identify the network share process as the one accessing the file, not the process on the other computer.
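For the local case, setting up the audit programmatically amounts to adding an audit ACE to the file's SACL. A sketch, assuming "Audit object access" is enabled in the local security policy; writing a SACL requires SeSecurityPrivilege (an elevated process), and the function names below are illustrative.

#include <windows.h>
#include <aclapi.h>
#pragma comment(lib, "advapi32.lib")

static bool EnableSecurityPrivilege()
{
    HANDLE tok;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &tok))
        return false;
    TOKEN_PRIVILEGES tp = {};
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    LookupPrivilegeValueW(NULL, SE_SECURITY_NAME, &tp.Privileges[0].Luid);
    BOOL ok = AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
    DWORD err = GetLastError();
    CloseHandle(tok);
    return ok && err == ERROR_SUCCESS;
}

bool AuditWritesByEveryone(const wchar_t* path)
{
    if (!EnableSecurityPrivilege())
        return false;

    PSID everyone = NULL;
    SID_IDENTIFIER_AUTHORITY world = SECURITY_WORLD_SID_AUTHORITY;
    if (!AllocateAndInitializeSid(&world, 1, SECURITY_WORLD_RID,
                                  0, 0, 0, 0, 0, 0, 0, &everyone))
        return false;

    EXPLICIT_ACCESSW ea = {};
    ea.grfAccessPermissions = FILE_GENERIC_WRITE | DELETE;
    ea.grfAccessMode = SET_AUDIT_SUCCESS;   // log successful writes/deletes
    ea.grfInheritance = NO_INHERITANCE;
    ea.Trustee.TrusteeForm = TRUSTEE_IS_SID;
    ea.Trustee.ptstrName = (LPWSTR)everyone;

    PACL sacl = NULL;
    bool ok = SetEntriesInAclW(1, &ea, NULL, &sacl) == ERROR_SUCCESS
           && SetNamedSecurityInfoW((LPWSTR)path, SE_FILE_OBJECT,
                                    SACL_SECURITY_INFORMATION,
                                    NULL, NULL, NULL, sacl) == ERROR_SUCCESS;
    if (sacl) LocalFree(sacl);
    FreeSid(everyone);
    return ok;
}

The matching entries then appear in the Security event log, including the name and PID of the process that made the access.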
I can't seem to comment yet. I would be interested in your Python code that creates a security audit on files or paths. It's a bit of a shame if it messes with the system security event log, but you can't have everything! :-)
Up until this point, I have been using GetForegroundWindow at the time of the change to eventually get the associated process. It only works well for changes initiated by the user, but that is primarily what I've been interested in. Besides background processes, the only minor issue is that sometimes a process is spawned just to accomplish a task (like a batch file) and it is non-existent by the time you want to learn more about it (like what process spawned it). I imagine that is a problem even with a security audit, though.
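For completeness, the foreground-window heuristic is only a few lines when called from the change-notification handler; GuessForegroundPid is a hypothetical helper name.

#include <windows.h>

// Best-effort guess: return the PID of whichever process owned the
// foreground window when the change notification arrived. Returns 0
// if there is no usable foreground window.
DWORD GuessForegroundPid()
{
    HWND hwnd = GetForegroundWindow();
    if (!hwnd)
        return 0;
    DWORD pid = 0;
    GetWindowThreadProcessId(hwnd, &pid);
    return pid;
}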

How can I spawn a process on Windows and see which files it uses?

I would like to write a C++ function, on Microsoft Windows, that spawns a process and returns, in addition to the process's termination status, a list of all the files the process read or wrote. It should not require any cooperation from the spawned application.
For example, if the program spawned is the Visual Studio C++ compiler, the function would produce a list containing the source file the compiler opened, all header files it read, and the .OBJ file it created. If it also included things like the .DLL files the program loaded, that would be fine. But again, it should work regardless of the program spawned; the compiler is just an example.
A twist: if the process creates subprocesses, I need to monitor their file accesses as well.
A second twist: if the process tries to open a file, I would like to be able to make it wait until I can create that file—and only then let it resume and open the file. (I think this rules out ETW.)
I know this probably sounds like an ingredient for some horrible kludge. But if I can get this working, the end result will be really cool.
A second twist: if the process tries to open a file, I would like to be able to make it wait until I can create that file—and only then let it resume and open the file
You just put yourself into Hack City with that requirement - you're right that ETW would've been a far easier solution, but it also has no way to block the file call.
Basically, here's what you're going to have to do:
Create the process suspended
Create two named pipes in opposite directions whose names are well known (perhaps containing the PID of the process)
Hook LoadLibrary, and have the hook watch for kernel32 to get loaded
When kernel32 gets loaded, hook CreateFileW and CreateFileA - also hook CreateProcess (and its variants) and ShellExecute
When your CreateFile hook hits, you write the name to one of the named pipes, then execute a ReadFile on the other one until the parent process signals you to continue.
When your CreateProcess hook hits, you get to do the same dance all over again from inside the current process (remember that you can't have the parent process do the CreateProcess'ing, because it'll mess up inherited handles).
Start the child process.
Keep in mind that you'll be injecting code and making fixups to an in-memory image that may be a different bitness than yours (e.g. your app is 64-bit, but it's starting a 32-bit process), so you'll need both x86 and amd64 versions of your shim code to inject. I hope this lengthy diatribe has convinced you that this is actually an awful idea that is very difficult to get right, and that people who hook Win32 functions make Windows OS developers sad.
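For what it's worth, the scaffolding around that list (steps 1, 2 and 7) is the easy part. A sketch, with pipe names derived from the child PID as suggested above (the "filemon" prefix is illustrative) and the injection itself left out:

#include <windows.h>
#include <stdio.h>

bool LaunchSuspendedWithPipes(wchar_t* cmdLine)   // cmdLine must be writable
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    if (!CreateProcessW(NULL, cmdLine, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi))
        return false;

    // Well-known names containing the PID, so the injected shim can find them.
    wchar_t inName[64], outName[64];
    swprintf_s(inName,  L"\\\\.\\pipe\\filemon_in_%lu",  pi.dwProcessId);
    swprintf_s(outName, L"\\\\.\\pipe\\filemon_out_%lu", pi.dwProcessId);
    HANDLE hIn  = CreateNamedPipeW(inName,  PIPE_ACCESS_INBOUND,
                      PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                      1, 4096, 4096, 0, NULL);
    HANDLE hOut = CreateNamedPipeW(outName, PIPE_ACCESS_OUTBOUND,
                      PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                      1, 4096, 4096, 0, NULL);
    if (hIn == INVALID_HANDLE_VALUE || hOut == INVALID_HANDLE_VALUE)
        return false;   // cleanup omitted in this sketch

    // ... inject the hooking shim into the suspended child here ...

    ResumeThread(pi.hThread);   // finally start the child running
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return true;
}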

Does OpenProcess always write lock the file?

I want to call the Windows API OpenProcess function on another process running on the machine. Will this always cause the executable file of the process I am opening to be write-locked, or does it depend on the access rights I request?
Yes, it is a fundamental property of Windows. When an executable file gets loaded (EXE or DLL), Windows creates a memory-mapped view of the file. Chunks of code or data from the executable file get page-faulted into RAM, as needed to keep the program running. It works the other way around too: when Windows needs to make RAM available for another program, it throws chunks of mapped pages away, the ones that weren't used in a while. Those pages don't take up space in the paging file if they are code; they can be reloaded from the executable file.
Very efficient, and it's code that was written back when 16 megabytes of RAM was a luxury. The memory-mapped section keeps a write lock on the file. That is still useful in this day and age: it prevents malware from messing with the code of a running process.
The process's executable file is locked while the process is running; this doesn't have anything to do with OpenProcess. The file is unlocked when the process terminates.
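This is easy to see for yourself: try to open the image file of a running process for writing, independent of any OpenProcess call. The path below is a placeholder for some running executable.

#include <windows.h>
#include <stdio.h>

int wmain()
{
    // Placeholder: point this at the EXE of a process that is running.
    HANDLE h = CreateFileW(L"C:\\path\\to\\running.exe", GENERIC_WRITE, 0, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        // Expect ERROR_SHARING_VIOLATION (32) while the image is mapped.
        wprintf(L"open for write failed, error %lu\n", GetLastError());
    else
        CloseHandle(h);   // only reachable once the process has exited
    return 0;
}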
