I have multiple processes in an IPC system that I am designing. Each process creates a FileMapping and MapViewOfFile as its own memory area. Also, each process creates two semaphores that manage the FileMapping and MapViewOfFile that it already created.
To describe the problem I have, assume 3 processes:
ProcessA creates FileMapping, MapViewOfFile and the two semaphores.
ProcessB and ProcessC do the same as ProcessA.
ProcessA wants to send something to ProcessB. It connects to the FileMapping and MapViewOfFile and the two semaphores of ProcessB and sends whatever it wanted to send.
ProcessB wants to reopen its communication. It closes its FileMapping and MapViewOfFile and its two semaphores. It sends closing message to Processes A and C. When Processes A and C receive the closing message, they will close handles to FileMapping, MapViewOfFile and the two semaphores associated with ProcessB.
ProcessB closed all its handles and wants to reopen. When it tries to create MapViewOfFile it fails since Processes A and C have not closed the handles to ProcessB yet.
Right now (as a temporary fix), I am letting ProcessB sleep for 100ms when it is closing in order to give time for Processes A and C to close their handles to ProcessB. However, I want a solution that does not involve sleeping.
Is there a way for ProcessB to know when all references to its handles (FileMapping, MapViewOfFile and the two semaphores) are released by all other processes? If there is, how can I wait on it?
There's definitely no winapi function to do this. You can only try to open a handle to a (named) object and see if it fails with the appropriate reason.
So the simplest workaround is to retry what you are attempting in a loop, with Sleep(0) or something similar between attempts. You may also add an additional synchronization object for this specific purpose, e.g. an auto-reset named event which your ProcessB can open and wait on with WaitForSingleObject.
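As a rough sketch of that second idea (the object name and the peer count are invented for this example, and a named semaphore is used rather than an auto-reset event so that two acknowledgements arriving before ProcessB starts waiting cannot be lost):

    #include <windows.h>

    // ProcessB, before it tears down its FileMapping/semaphores:
    HANDLE hAcks = CreateSemaphoreW(nullptr, 0, 2, L"Global\\ProcessB_PeersClosed");

    // ... broadcast the closing message, close own handles ...

    // Wait for one acknowledgement per peer (A and C).
    for (int peers = 0; peers < 2; ++peers)
        WaitForSingleObject(hAcks, INFINITE);
    // Now it is safe to recreate the FileMapping, MapViewOfFile and semaphores.

    // ProcessA / ProcessC, after closing their handles to B's objects:
    HANDLE hAck = OpenSemaphoreW(SEMAPHORE_MODIFY_STATE, FALSE,
                                 L"Global\\ProcessB_PeersClosed");
    if (hAck) { ReleaseSemaphore(hAck, 1, nullptr); CloseHandle(hAck); }

ProcessB can keep hAcks open across the reopen, so the name does not have to be re-registered on each cycle.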
I'm reading Windows Internals (7th Edition), and they write about processes in Chapter 1:
Processes
[...] a Windows process comprises the following:
[...]
At least one thread of execution. Although an "empty" process is possible, it is (mostly) not useful.
What does "mostly" mean in this context? What could a process with no threads do, and how would that be useful?
EDIT: Also, in a 2015 talk, Mark Russinovich says that a process has "at least one thread" (19:12). Was that a generalization?
Disclaimer: I work for Microsoft.
I think the answer has come out in the comments. There seem to be at least two scenarios where a threadless process would be useful.
Scenario 1: capturing process snapshots
This is probably the most straightforward one. As RbMm commented, PssCaptureSnapshot can be called with the PSS_CAPTURE_VA_CLONE option to create a threadless (or "empty") process (using ZwCreateProcessEx, presumably to duplicate the target process's memory in kernel mode).
The primary use here would be for debugging, if a developer wanted to inspect a process's memory at a certain point in time.
Notably, Eryk Sun points out that an empty process is not necessary for inspecting handles (even though an empty process holds both its own memory space and handles), since there is already a way to inspect a process's handles without creating a new process or duplicating memory.
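For illustration, here is a minimal sketch of taking such a snapshot, assuming you already hold a sufficiently privileged handle to the target (error handling is omitted; the access rights shown are a simplification):

    #include <windows.h>
    #include <processsnapshot.h>

    void InspectClone(HANDLE hTarget)  // e.g. from OpenProcess(PROCESS_ALL_ACCESS, ...)
    {
        HPSS snapshot = nullptr;
        // PSS_CAPTURE_VA_CLONE asks for a threadless clone of the target's
        // virtual address space.
        if (PssCaptureSnapshot(hTarget, PSS_CAPTURE_VA_CLONE, 0, &snapshot)
                != ERROR_SUCCESS)
            return;

        PSS_VA_CLONE_INFORMATION clone = {};
        if (PssQuerySnapshot(snapshot, PSS_QUERY_VA_CLONE_INFORMATION,
                             &clone, sizeof(clone)) == ERROR_SUCCESS)
        {
            // clone.VaCloneHandle refers to the "empty" process;
            // ReadProcessMemory against it sees the captured memory.
        }

        PssFreeSnapshot(GetCurrentProcess(), snapshot);
    }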
Scenario 2: forking processes with specific inherited handles---safely
Raymond Chen explains another use for a threadless process: creating new "real" processes with inherited handles safely.
When a thread wants to create a new process (CreateProcess), there are several ways for it to pass handles to the new process:
Make a handle inheritable and CreateProcess with bInheritHandles = true.
Make a handle inheritable, add it to a PROC_THREAD_ATTRIBUTE_LIST, and pass that list to the CreateProcess call.
However, they offer conflicting guarantees that can cause problems when two threads want to create processes with different sets of inherited handles concurrently. As Raymond puts it in Why do people take a lock around CreateProcess calls?:
In order for a handle to be inherited, you not only have to put it in the PROC_THREAD_ATTRIBUTE_LIST, but you also must make the handle inheritable. This means that if another thread is not on board with the PROC_THREAD_ATTRIBUTE_LIST trick and does a straight CreateProcess with bInheritHandles = true, it will inadvertently inherit your handles.
You can use a threadless process to mitigate this. In general:
Create a threadless process.
DuplicateHandle all of the handles you want to capture into this new threadless process.
CreateProcess your new, real forked process, using the PROC_THREAD_ATTRIBUTE_LIST, but set the nominal parent process of this process to be the threadless process (with PROC_THREAD_ATTRIBUTE_PARENT_PROCESS).
You can now CreateProcess concurrently without worrying about other callers, and you can now close the duplicate handles and the empty process.
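A rough sketch of steps 2 and 3 follows, assuming you have already obtained hHelper, a handle to the empty helper process (the documented API offers no way to create one, so that part is not shown), and with made-up names throughout:

    #include <windows.h>

    bool LaunchWithHandle(HANDLE hHelper, HANDLE handleToPass, wchar_t *cmdLine)
    {
        // Duplicate the handle INTO the helper process, inheritable there only.
        HANDLE handleInHelper = nullptr;
        if (!DuplicateHandle(GetCurrentProcess(), handleToPass,
                             hHelper, &handleInHelper,
                             0, TRUE /* inheritable in the helper */,
                             DUPLICATE_SAME_ACCESS))
            return false;

        // Attribute list: nominal parent = helper, plus the explicit handle list.
        SIZE_T size = 0;
        InitializeProcThreadAttributeList(nullptr, 2, 0, &size);
        auto attrs = (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, size);
        InitializeProcThreadAttributeList(attrs, 2, 0, &size);
        UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_PARENT_PROCESS,
                                  &hHelper, sizeof(hHelper), nullptr, nullptr);
        UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_HANDLE_LIST,
                                  &handleInHelper, sizeof(handleInHelper), nullptr, nullptr);

        STARTUPINFOEXW si = {};
        si.StartupInfo.cb = sizeof(si);
        si.lpAttributeList = attrs;
        PROCESS_INFORMATION pi = {};

        BOOL ok = CreateProcessW(nullptr, cmdLine, nullptr, nullptr,
                                 TRUE,                        // inherit (from the helper)
                                 EXTENDED_STARTUPINFO_PRESENT,
                                 nullptr, nullptr, &si.StartupInfo, &pi);

        DeleteProcThreadAttributeList(attrs);
        HeapFree(GetProcessHeap(), 0, attrs);
        if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
        return ok != FALSE;
    }

Because the inheritable copy of the handle exists only in the helper's handle table, another thread in your process calling CreateProcess with bInheritHandles = true cannot pick it up by accident.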
There are 2 processes running on Windows. They communicate with each other through a named pipe. When one of them is ready to send a message, I want to notify the other process asynchronously, like a signal on Linux, so that the other process doesn't need to check the pipe continuously. Is there a mechanism similar to signals on Windows, or another way to solve my problem?
A direct signal mechanism which conceptually works the same way does not exist (one could probably simulate it with a thread injection hack, but don't even think about that). It is not much of a problem, since you can do otherwise.
Every waitable kernel object which can take a name such as an event or a semaphore can be accessed by different processes.
You can WaitForSingleObject on the synchronization primitive until the other process signals it. That would be a Unix-like readiness notification mechanism (not quite as elegant, but to the same effect).
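A minimal sketch of that pattern (the event name is made up; both processes just have to agree on it):

    #include <windows.h>

    // Sender: create (or open) the named event, write the message, then signal.
    HANDLE hReady = CreateEventW(nullptr, FALSE /* auto-reset */, FALSE,
                                 L"Local\\PipeMessageReady");
    // ... WriteFile the message into the pipe ...
    SetEvent(hReady);

    // Receiver: block until the sender signals, then read the pipe.
    HANDLE hWait = OpenEventW(SYNCHRONIZE, FALSE, L"Local\\PipeMessageReady");
    WaitForSingleObject(hWait, INFINITE);
    // ... ReadFile from the pipe now that data is known to be there ...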
However, that isn't even necessary. Named pipes (not true for anonymous pipes!) can be used with overlapped I/O, which means you can use ReadFileEx to initiate a read from the pipe, and it will linger there in the background until it can complete.
You can think of this kind of I/O as "fire and forget". Your process continues running while the read operation is blocked. When the read operation completes, it signals an event or posts a completion message to a completion port (which you can query) or posts an asynchronous procedure call ("APC", a more fancy name for "callback") to the thread that originally called it. That's as close to a "signal" as you can get under Windows.
Unluckily, APCs don't quite work as one would wish, since they only execute at well-defined points (when a thread is in an "alertable wait state", which you must enter explicitly by setting the alertable flag in a wait function or by calling NtTestAlert).
The likely reasoning behind the Windows designers' choice is that this is "safer", but it is also more annoying from a usability point of view. Alas, that is how it works.
Note that the overlapped I/O model is the exact opposite of the readiness notification system under e.g. Linux. Rather than asking the OS whether a descriptor is ready to be read, you tell the OS to read it, and you can have yourself be notified (or verify) whether this has completed.
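To make the overlapped variant concrete, here is a minimal sketch using ReadFileEx with an APC completion routine; it assumes hPipe was created or opened with FILE_FLAG_OVERLAPPED, and the names are made up:

    #include <windows.h>
    #include <stdio.h>

    static char g_buf[512];

    // Runs as an APC on the issuing thread, the next time that thread is in
    // an alertable wait state.
    static VOID CALLBACK OnReadDone(DWORD err, DWORD bytes, LPOVERLAPPED)
    {
        if (err == ERROR_SUCCESS)
            printf("received %lu bytes from the pipe\n", bytes);
    }

    void ReadPipeAsync(HANDLE hPipe)
    {
        OVERLAPPED ov = {};
        if (ReadFileEx(hPipe, g_buf, sizeof(g_buf), &ov, OnReadDone))
        {
            // The read is now pending in the background; do other work here.
            // Eventually enter an alertable wait so the APC can be delivered.
            SleepEx(INFINITE, TRUE /* alertable */);
        }
    }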
I need to get (or pipe) the output from a process that is already running, using the windows api.
Basically my application should allow the user to select a window to pipe the input from, and all input will be displayed in a console. I would also be looking on how to get a pipe on stderr later on.
Important: I did not start the process using CreateProcess() or otherwise. The process is already running, and all I have is the handle to the process (returned from GetWindowThreadProcessId()).
The cleanest way of doing this without causing any ill effects, such as those that may occur if you used the method Adam implied of swapping the existing stdout handle with your own, is to use hooking.
Inject a thread into the existing application and swap calls to WriteFile with an intercepted version that first gives you a copy of what's being written (filtered by handle, source, whatever), then passes it along to the real ::WriteFile with no harm done. Or you can intercept the call higher up by only swapping out printf or whichever call it is that the software is using (some experimentation needed, obviously).
HOWEVER, Adam is spot-on when he says this isn't what you want to do. This is a last resort, so think very, very carefully before going down this line!
Came across this article from MS while searching on the topic.
http://support.microsoft.com/kb/190351
The concept of piping input and output on Unix is trivial; there seems to be no great reason for it to be so complex on Windows. - Karl
Whatever you're trying to do, you're doing it wrong. If you're interacting with a program for which you have the source code, create a defined interface for your IPC: create a socket, a named pipe, windows messaging, shared memory segment, COM server, or whatever your preferred IPC mechanism is. Do not try to graft IPC onto a program that wasn't intending to do IPC.
You have no control over how that process's stdout was set up, and it is not yours to mess with. It was created by its parent process and handed off to the child, and from there on out, it's in control of the child. You don't go in and change the carpets in somebody else's house.
Do not even think of going into that process, trying to CloseHandle its stdout, and CreateFile a new stdout pointing to your pipe. That's a recipe for disaster and will result in quirky behavior and "impossible" crashes.
Even if you could do what you wanted to do, what would happen if two programs did this?
If I have a handle to some Windows process which has stopped (killed or just ended):
Will the handle (or better the memory behind it) be re-used for another process?
Or will GetExitCodeProcess() for example get the correct result forever from now on?
If 1. is true: How "long" would GetExitCodeProcess() work?
If 2. is true: Wouldn't that mean that I can bring down the OS by starting/killing new processes, since I create more and more handles (and the OS reserves memory for them)?
I'm a bit confused about the concept of handles.
Thank you in advance!
The handle indirectly points to a kernel object. As long as there are open handles, the object will be kept alive.
Will the handle (or better the memory behind it) be re-used for another process?
The numeric value of the handle (or however it is implemented) might get reused, but that doesn't mean it'll always point to the same thing. Just like process IDs.
Or will GetExitCodeProcess() for example get the correct result forever from now on?
No. When all handles to the process are closed, the process object is freed (along with its exit code). Note that a running process holds an implicit handle to itself. You can keep a handle open yourself, though, for as long as you need it.
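As a small illustration of holding your own handle (pid is assumed to be known; the access rights are trimmed to what these two calls need):

    #include <windows.h>

    DWORD WaitAndGetExitCode(DWORD pid)
    {
        DWORD exitCode = (DWORD)-1;
        HANDLE h = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION | SYNCHRONIZE,
                               FALSE, pid);
        if (h)
        {
            WaitForSingleObject(h, INFINITE);   // returns once the process ends
            GetExitCodeProcess(h, &exitCode);   // still valid: our handle keeps
                                                // the process object alive
            CloseHandle(h);                     // after the last handle is gone,
                                                // the object (and exit code) may be freed
        }
        return exitCode;
    }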
If 2. is true: Wouldn't that mean that I can bring down the OS with starting/killing new processes, since I create more and more handles (and the OS reserves memory for them)?
There are many ways to starve the system. It will either start heavily swapping or just fail to spawn a new process at some point.
Short answer:
GetExitCodeProcess works until you call CloseHandle, after which the process object will be released and may be reused.
Long answer:
See Cat Plus Plus's answer.
I had a leaking handle problem ("Not enough quota available to process this command.") in some inherited C# winforms code, so I went and used Sysinternals' Handle tool to track it down. Turns out it was Event Handles that were leaking, so I tried googling it (it took a couple of tries to find a query that didn't return "Did you mean: event handler?"). According to Junfeng Zhang, event handles are generated by the use of Monitor, and there may be some weird rules as far as event handle disposal and the synchronization primitives go.
I'm not entirely sure that the source of my leaking handles is simply long-lived objects calling lots of synchronization stuff, as this code also deals with HID interfaces and lots of Win32 marshaling and interop, and was not doing any synchronization that I was aware of. Either way, I'm just going to run this in windbg and start tracing down where the handles originate from, and also spend a lot of time learning this section of the code, but I had a very hard time finding information about what event handles are in the first place.
The msdn page for the event kernel object just links to the generic synchronization overview... so what are event handles, and how are they different from mutexes/semaphores/whatever?
The NT kernel uses event objects to allow signals to be transferred to entities that wait on the signal. A mutex and a semaphore are also waitable kernel objects (kernel dispatcher objects), but with different semantics. The only time I ever came across them was when waiting for I/O to complete in drivers.
So my theory on your problem is possibly a faulty driver, or are you relying on specialised hardware?
Edit: More info (from Windows Internals, 5th Edition, Chapter 3, System Mechanisms)
Some kernel dispatcher objects (e.g. mutex, semaphore) have the concept of ownership. When signalled, only one waiting thread is released and it takes ownership of the resource; the others have to continue to wait. Events are not owned, hence they are available to be reset by any thread.
Also there are three types of events:
Notification: when signalled, all waiting threads are released.
Synchronisation: when signalled, one waiting thread is released and the event is reset.
Keyed: when signalled, one waiting thread in the same process as the signaller is released.
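In Win32 terms, the first two correspond to manual-reset and auto-reset events (keyed events have no public user-mode API). A small sketch of the difference:

    #include <windows.h>

    // Notification event = manual-reset: stays signalled until explicitly
    // reset, so every waiting thread is released.
    HANDLE hNotify = CreateEventW(nullptr, TRUE /* manual reset */, FALSE, nullptr);

    // Synchronisation event = auto-reset: releasing one waiter resets it.
    HANDLE hSync = CreateEventW(nullptr, FALSE /* auto reset */, FALSE, nullptr);

    SetEvent(hNotify);   // wakes every thread blocked on hNotify
    SetEvent(hSync);     // wakes exactly one waiter, then goes non-signalled

    ResetEvent(hNotify); // manual-reset events stay signalled until this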
Another interesting thing that I've learned is that critical sections (the lock primitive in C#) are actually not kernel objects; rather, they are implemented on top of a keyed event, or a mutex or semaphore, as required.
If you're talking about kernel event objects, then an event handle is a handle (an integer value) that the system keeps for this object so that other code can reference it, i.e. keep a 'handle' on it.
Hope this helps!