Mutex owner state - winapi

I'm using the Windows mutex functions to make my application single-instance. How can I tell whether the mutex object, if it already exists, is actually 'owned', so that I can ignore an otherwise valid object left behind if the previous instance crashed?

Your main goal is to have a single instance of the application.
You can create the mutex without acquiring it (set bInitialOwner to FALSE) and use it purely as a label.
On startup, check whether the mutex already exists; if so, clean up (e.g. notify the existing process) and exit.
If it does not, create one, again without acquiring it.
For example:
HANDLE Mutex;
DWORD Error;

Mutex = CreateMutex(NULL, FALSE, TEXT("UniqueMutexName"));
Error = GetLastError();
if (Mutex != NULL && Error == ERROR_ALREADY_EXISTS)
{
    /* another instance running */
    CloseHandle(Mutex);
    ExitProcess(0);
}
else if (Mutex == NULL)
{
    /* different error */
    SetLastError(Error);
}
...
CloseHandle(Mutex);
If you want to check whether the mutex is owned, you can call WaitForSingleObject with a zero timeout:
switch (WaitForSingleObject(Mutex, 0))
{
case WAIT_ABANDONED:
    /* similar to the case below, but be careful with this one: if
     * the mutex protects shared data, that data may have been left corrupted */
case WAIT_OBJECT_0:
    /* it was not owned, and you have just acquired it */
    ReleaseMutex(Mutex);
    break;
case WAIT_TIMEOUT:
    /* already owned */
    break;
default:
    /* some error */
}
If the process was terminated or crashed without calling CloseHandle, the system closes the handle automatically. From the CreateMutex documentation:
Use the CloseHandle function to close the handle. The system closes
the handle automatically when the process terminates. The mutex object
is destroyed when its last handle has been closed.

Related

MFC: How to use MsgWaitForMultipleObjects() from the main thread to wait for multiple threads to complete that use SendMessage()?

I have a main thread that fires off several other threads to complete various items of work based on what the user chose from the main UI. Normally I'd use WaitForMultipleObjects() with bWaitAll set to TRUE. However, in this case those other threads log output to another window that uses a mutex to ensure the threads output one at a time. Part of that process uses SendMessage() to get the text size and send the text to the window, which will hang if I use WaitForMultipleObjects(), since the wait runs on the main UI thread. So I moved over to MsgWaitForMultipleObjects() with the QS_SENDMESSAGE flag; its problem is the logic for bWaitAll, which states it will only return when all objects are signaled AND an input event occurred (instead of returning when all objects are signaled OR an input event occurred). Had the logic been OR, this should have worked:
DWORD waitres = WAIT_FAILED;
while (1)
{
    MSG msg;
    while (::PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE)) {
        // MFC message pump
        if (!theApp.PumpMessage()) {
            // program end request
            // TO DO
        }
    }
    // MFC idle processing
    LONG lidlecount = 0;
    while (theApp.OnIdle(lidlecount++));
    // our wait
    waitres = ::MsgWaitForMultipleObjects(threadcount, threadhandles, TRUE, INFINITE, QS_SENDMESSAGE);
    // check if the wait ended due to a message
    if (waitres != WAIT_OBJECT_0 + threadcount) {
        // no, exit loop
        break;
    }
}
Rather than firing off a thread that then fires off the other threads, what is the correct way to handle this from the main thread? I thought about using bWaitAll FALSE and then calling WaitForMultipleObjects() with bWaitAll set to TRUE and dwMilliseconds set to 0 (or 1), checking the result to see whether everything has completed. If not, it would loop back to the top and call MsgWaitForMultipleObjects() again, which with bWaitAll FALSE could return right away if just one of the many threads has completed (say 1 thread of 10 completed: I can check as mentioned above whether all have completed, but when going back with bWaitAll FALSE it will just return immediately rather than wait).
So what is the proper way to handle waiting for multiple threads (that use SendMessage()) to complete in the main thread of an MFC application?
Thanks.
So what is the proper way to handle waiting for multiple threads to complete
You need to create a structure with a reference count and pass a pointer to it to every thread. It probably also makes sense to keep some common task data there, along with the HWND of a window in the main (GUI) thread. When a worker thread exits, it releases its reference on the object. When the last thread exits, the object is deleted and its destructor posts a message to that window in the main thread.
This way we do not need to store the thread handles (we can simply close them) or wait on multiple handles; instead we get a window message when all threads have finished the task.
Example code:
struct Task
{
    HWND _hwnd;
    LONG _dwRefCount = 1;
    // some common task data probably ..

    Task(HWND hwnd) : _hwnd(hwnd) {}

    ~Task() {
        PostMessageW(_hwnd, WM_USER, 0, 0); // WM_USER as demo only
    }

    void AddRef() {
        InterlockedIncrementNoFence(&_dwRefCount);
    }

    void Release() {
        if (!InterlockedDecrement(&_dwRefCount)) delete this;
    }
};

ULONG CALLBACK WorkThread(void* pTask)
{
    WCHAR sz[16];
    swprintf_s(sz, _countof(sz), L"%x", GetCurrentThreadId());
    MessageBoxW(0, L"working...", sz, MB_ICONINFORMATION | MB_OK);
    reinterpret_cast<Task*>(pTask)->Release();
    return 0;
}

void StartTask(HWND hwnd, ULONG n)
{
    if (Task* pTask = new Task(hwnd))
    {
        do
        {
            pTask->AddRef();
            if (HANDLE hThread = CreateThread(0, 0, WorkThread, pTask, 0, 0))
            {
                CloseHandle(hThread);
            }
            else
            {
                pTask->Release();
            }
        } while (--n);

        pTask->Release();
    }
}
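On the GUI side, the receiving end can be a plain message handler. Here is a minimal sketch, assuming a CFrameWnd-derived main window; CMainFrame and OnAllWorkersDone are made-up names, and WM_USER matches the demo above:
#include <afxwin.h>

class CMainFrame : public CFrameWnd
{
protected:
    afx_msg LRESULT OnAllWorkersDone(WPARAM, LPARAM);
    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CMainFrame, CFrameWnd)
    ON_MESSAGE(WM_USER, &CMainFrame::OnAllWorkersDone)   // WM_USER as in the demo above
END_MESSAGE_MAP()

LRESULT CMainFrame::OnAllWorkersDone(WPARAM, LPARAM)
{
    // Posted by the Task destructor from whichever worker released the last
    // reference; we are now back on the main thread's message pump, so it is
    // safe to update the UI here.
    AfxMessageBox(_T("All worker threads have finished."));
    return 0;
}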

Detecting named pipe disconnects with I/O completion

I have a question about the correct approach for detecting client disconnects using named pipes with I/O completion ports. We have a server that creates child processes with stdin/stdout redirected to named pipes. The pipes are opened OVERLAPPED.
We've seen that after the client issues CreateFile() the I/O completion port receives a packet with lpNumberOfBytes of zero -- which quite effectively indicates a connection from the client. But when the child process closes its stdin/stdout and exits, no similar event is generated.
We've come up with two approaches to detecting the named pipe disconnects:
1) periodically poll the process HANDLE of the child process to detect when the process has ended,
OR
2) create a separate thread that blocks in WaitForSingleObject() on the child process's HANDLE; when that handle becomes signaled the process has ended, and the thread then calls PostQueuedCompletionStatus() to post a packet with a prearranged COMPLETION_KEY to the I/O completion port (a sketch of this follows below).
Neither of these is difficult -- but I wanted to make sure I wasn't missing something obvious. Has anyone found an alternative way to be notified when a named pipe associated with an IOCP has been closed?
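For reference, option (2) might look roughly like the following sketch; the port handle, key value and function name are hypothetical, not from our code:
// Illustrative sketch of option (2): a watcher thread waits for the child
// process to exit and then posts a prearranged completion key to the IOCP.
// g_hIocp, CHILD_EXIT_KEY and WatchChildProc are hypothetical names.
#include <windows.h>

static HANDLE g_hIocp;                        // the port the pipe handles are associated with
static const ULONG_PTR CHILD_EXIT_KEY = 0xC0FFEE;

DWORD WINAPI WatchChildProc(LPVOID param)
{
    HANDLE hChild = (HANDLE)param;            // child process handle
    WaitForSingleObject(hChild, INFINITE);    // returns when the process ends
    // Wake an IOCP handler thread: zero bytes, prearranged key, no OVERLAPPED.
    PostQueuedCompletionStatus(g_hIocp, 0, CHILD_EXIT_KEY, NULL);
    CloseHandle(hChild);
    return 0;
}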
Ok, I discovered why the IOCP was not delivering disconnect packets, and it had to do with how I was testing the issue. We had developed a unit-test harness, and our unit test was acting as both server and client. When the child process ended, the child's write-pipe handle was still open in the unit test, and therefore the IOCP did not unblock any handler threads.
To run a pipe server effectively, you need to create a new thread and, within that thread, do the work of connecting to the pipe, creating the child process and waiting for the process to end. After the child ends, close the pipe handle; this causes the IOCP to dequeue a completion packet with lpNumberOfBytes set to zero.
Here is a sample of how we did this from a thread created with _beginthread().
void __cdecl childproc(void* p) {
    TCHAR* pipename = (TCHAR*)p;

    /* make sure pipe handle is "inheritable" */
    SECURITY_ATTRIBUTES sattr;
    sattr.nLength = sizeof(SECURITY_ATTRIBUTES);
    sattr.bInheritHandle = TRUE;
    sattr.lpSecurityDescriptor = NULL;

    HANDLE pipe = ::CreateFile(
        pipename,
        GENERIC_READ | GENERIC_WRITE,
        0,
        &sattr,
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL,
        NULL);
    if (pipe == INVALID_HANDLE_VALUE) {
        _tprintf(_T("connect to named pipe failed %ld\n"), GetLastError());
        _endthread();
    }

    /* redirect stdin/stdout/stderr to pipe */
    PROCESS_INFORMATION procinfo;
    STARTUPINFO startinfo;
    memset(&procinfo, 0, sizeof(procinfo));
    memset(&startinfo, 0, sizeof(startinfo));
    startinfo.cb = sizeof(startinfo);
    startinfo.hStdError = pipe;
    startinfo.hStdOutput = pipe;
    startinfo.hStdInput = pipe;
    startinfo.dwFlags |= STARTF_USESTDHANDLES;

    /* create child to do a simple "cmd.exe /c dir" */
    DWORD rc = ::CreateProcess(
        _T("C:\\Windows\\System32\\cmd.exe"),
        _T("C:\\Windows\\System32\\cmd.exe /C dir"),
        NULL,
        NULL,
        TRUE,
        0,
        NULL,
        NULL,
        &startinfo,
        &procinfo);
    if (rc == 0) {
        _tprintf(_T("cannot create child process: %ld\n"), GetLastError());
        _endthread();
    }

    if (::WaitForSingleObject(procinfo.hProcess, INFINITE) != WAIT_OBJECT_0) {
        _tprintf(_T("error waiting for child to end: %ld\n"), GetLastError());
    }

    /* cleanup */
    ::CloseHandle(procinfo.hProcess);
    ::CloseHandle(procinfo.hThread);
    ::CloseHandle(pipe);

    _endthread();
}

Mac OS X: How to handle iflt_detach() not completing in KEXT stop function

In my kext's stop() function, I call iflt_detach() to detach a registered iff filter. However, it appears that (for whatever reasons), the filter's detach() function may be called outside of the stop() function. In that case, what should I do in the stop function? I can't return KERN_SUCCESS since that would cause the KEXT to get unloaded with obvious side-effects for the delayed call to the detach() function.
The following snippet is from enetlognke.c and shows the stop() function:
kern_return_t com_dts_apple_kext_enetlognke_stop (kmod_info_t * ki, void * d)
{
    kern_return_t retval = KERN_FAILURE;  // default result, unless we know that we are
                                          // detached from the interface.

    if (gFilterRegistered == FALSE)
        return KERN_SUCCESS;

    if (gUnregisterProc_started == FALSE)
    {
        // only want to start the detach process once.
        iflt_detach(gEnetFilter);
        gUnregisterProc_started = TRUE;
    }

    if (gUnregisterProc_complete)
    {
        retval = KERN_SUCCESS;
    }
    else
    {
        el_printf("enetlognke_stop: incomplete\n");
    }

    if (retval == KERN_SUCCESS)
    {
        // Free KEXT resources
    }

    return retval;
}
gUnregisterProc_complete is set to TRUE from within this module's dispatch() function. So, if that function call is delayed (and gUnregisterProc_complete is FALSE), the stop function would veto the unload by returning KERN_FAILURE.
So, my questions are:
If KERN_FAILURE is returned, will the kernel call the KEXT's stop() function again? If not, what triggers a retry of the KEXT unload and the call to the stop() function?
Is KERN_FAILURE the correct code to return if the filter has not been detached?
Presumably, the detach function will in this case be called on another thread, once there are no threads left running your callbacks?
If so, this becomes a fairly straightforward thread synchronisation problem. Set up a flag variable, e.g. has_detached, and protect it with a recursive mutex.
In the stop function: lock the mutex before calling iflt_detach(). If, on return, the flag hasn't been set, sleep on the flag's address (releasing the mutex while you sleep) until the flag is set. Finally, unlock and return from the stop function.
At the very end of your detach function: lock the mutex, set the flag, send a wakeup to the potentially sleeping thread, and unlock. If the unlock call is in the tail position, there is no race condition between executing your detach function's code and unloading said code. (A rough sketch follows after the notes below.)
Effectively, this will block the unloading of the kext until your filter has fully detached.
Note: I haven't tried this in this particular case of network filters (I have yet to write a filter kext), but it's generally a pattern I've used a lot in other kexts.
Note 2: I say use a recursive lock to guard against deadlock in case your detach function does get called on the same thread while inside iflt_detach().
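A minimal sketch of that pattern using the XNU lock and msleep()/wakeup() KPIs might look like the following. This is an assumption-laden illustration, not tested in a filter kext: the names gDetachLock, gHasDetached, detach_sync_init, signal_detached and wait_for_detach are made up, gEnetFilter is the filter reference from the question's snippet, and the recursive-lock caveat from Note 2 is not handled here (a plain lck_mtx is used).
#include <mach/mach_types.h>
#include <kern/locks.h>
#include <sys/param.h>                  /* PZERO */
#include <sys/proc.h>                   /* msleep() */
#include <sys/systm.h>                  /* wakeup() */
#include <net/kpi_interfacefilter.h>

extern interface_filter_t gEnetFilter;  /* from the question's kext */

static lck_grp_t *gLockGrp;
static lck_mtx_t *gDetachLock;
static volatile int gHasDetached = 0;

static void detach_sync_init(void)
{
    gLockGrp    = lck_grp_alloc_init("enetlognke.sync", LCK_GRP_ATTR_NULL);
    gDetachLock = lck_mtx_alloc_init(gLockGrp, LCK_ATTR_NULL);
}

/* Called at the very end of the iff filter's detach callback. */
static void signal_detached(void)
{
    lck_mtx_lock(gDetachLock);
    gHasDetached = 1;
    wakeup((void *)&gHasDetached);      /* wake the thread sleeping in stop() */
    lck_mtx_unlock(gDetachLock);
}

/* Called from the kext's stop() routine; blocks until the filter has detached. */
static kern_return_t wait_for_detach(void)
{
    lck_mtx_lock(gDetachLock);
    iflt_detach(gEnetFilter);
    while (!gHasDetached) {
        /* msleep() drops the mutex while sleeping and re-acquires it on wakeup. */
        msleep((void *)&gHasDetached, gDetachLock, PZERO, "iffdetach", NULL);
    }
    lck_mtx_unlock(gDetachLock);
    return KERN_SUCCESS;
}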

inter-process condition variables in Windows

I know that I can use a condition variable to synchronize work between threads, but is there anything similar (like a condition variable) to synchronize work between processes? Thanks in advance.
Use a pair of named Semaphore objects, one to signal and one as a lock. Named sync objects on Windows are automatically inter-process, which takes care of that part of the job for you.
A class like this would do the trick.
#include <windows.h>
#include <limits>
#include <stdexcept>
#include <string>

class InterprocessCondVar {
private:
    HANDLE mSem;      // Used to signal waiters
    HANDLE mLock;     // Semaphore used as inter-process lock
    int    mWaiters;  // # current waiters
protected:
public:
    InterprocessCondVar(std::string name)
        : mSem(NULL), mLock(NULL), mWaiters(0)
    {
        // NOTE: You'll need a real "security attributes" pointer
        // for child processes to see the semaphore!
        // "CreateSemaphore" will do nothing but give you the handle if
        // the semaphore already exists.
        mSem = CreateSemaphore(NULL, 0, (std::numeric_limits<LONG>::max)(), name.c_str());
        std::string lockName = name + "_Lock";
        // The lock semaphore starts with a count of 1 so it is initially available.
        mLock = CreateSemaphore(NULL, 1, 1, lockName.c_str());
        if (!mSem || !mLock) {
            throw std::runtime_error("Semaphore create failed");
        }
    }

    virtual ~InterprocessCondVar() {
        CloseHandle(mSem);
        CloseHandle(mLock);
    }

    bool Signal();
    bool Broadcast();
    bool Wait(unsigned int waitTimeMs = INFINITE);
};
A genuine condition variable offers 3 calls:
1) "Signal()": Wake up ONE waiting thread
bool InterprocessCondVar::Signal() {
    WaitForSingleObject(mLock, INFINITE);            // Lock
    mWaiters--;                                      // Lower wait count
    bool result = ReleaseSemaphore(mSem, 1, NULL);   // Signal 1 waiter
    ReleaseSemaphore(mLock, 1, NULL);                // Unlock
    return result;
}
2) "Broadcast()": Wake up ALL threads
bool InterprocessCondVar::Broadcast() {
    WaitForSingleObject(mLock, INFINITE);                  // Lock
    bool result = ReleaseSemaphore(mSem, mWaiters, NULL);  // Signal all waiters
    mWaiters = 0;                                          // All waiters cleared
    ReleaseSemaphore(mLock, 1, NULL);                      // Unlock
    return result;
}
3) "Wait()": Wait for the signal
bool InterprocessCondVar::Wait(unsigned int waitTimeMs) {
    WaitForSingleObject(mLock, INFINITE);   // Lock
    mWaiters++;                             // Add to wait count
    ReleaseSemaphore(mLock, 1, NULL);       // Unlock
    // This must be outside the lock
    return (WaitForSingleObject(mSem, waitTimeMs) == WAIT_OBJECT_0);
}
This should ensure that Broadcast() ONLY wakes up threads & processes that are already waiting, not all future ones too. This is also a VERY heavyweight object. For CondVars that don't need to exist across processes I would create a different class w/ the same API, and use unnamed objects.
You could use a named semaphore or a named mutex. You could also share data between processes via shared memory.
For a project I'm working on I needed a condition variable and mutex implementation that can handle dead processes and won't cause other processes to end up in a deadlock in such a case. I implemented the mutex with the native named mutexes provided by the Win32 API, because they can indicate that a dead process owns the lock by returning WAIT_ABANDONED. The next issue was that I also needed a condition variable I could use across processes together with these mutexes. I started off with the suggestion from user3726672 but soon discovered that there are several cases in which the state of the counter variable and the state of the semaphore end up being invalid.
After doing some research, I found a paper by Microsoft Research which covers exactly this scenario: Implementing Condition Variables with Semaphores. It uses a separate semaphore for every single thread to solve the mentioned issues.
My final implementation uses a portion of shared memory in which I store a ring buffer of thread ids (the ids of the waiting threads). The processes then create and cache their own handle for every named semaphore/thread id they have not encountered yet. The signal/broadcast/wait functions are then quite straightforward and follow the idea of the solution proposed in the paper. Just remember to remove your thread id from the ring buffer if your wait operation fails or times out.
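As a rough illustration of the per-waiter naming scheme (a made-up sketch, not the actual implementation; the "_waiter_" suffix and the function name are hypothetical):
// Each waiting thread blocks on its own named semaphore, so a signaller that
// pops a thread id from the shared ring buffer can wake exactly that waiter.
#include <windows.h>
#include <string>

HANDLE OpenWaiterSemaphore(const std::wstring& condName, DWORD threadId)
{
    std::wstring name = condName + L"_waiter_" + std::to_wstring(threadId);
    // Creates the semaphore on first use; later calls (possibly from another
    // process) simply open the existing object and return a handle to it.
    return CreateSemaphoreW(NULL, 0, 1, name.c_str());
}
Wait() publishes GetCurrentThreadId() into the ring buffer and then blocks on its own semaphore; Signal() pops one id and releases that waiter's semaphore.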
For the Win32 implementation I recommend reading the following documents:
Semaphore Objects and Using Mutex Objects, as those describe the functions you'll need for the implementation.
Alternatives: boost::interprocess has some robust mutex emulation support, but it is based on spin locks and caused a very high CPU load on our embedded system, which was the final reason we looked into our own implementation.
#user3726672: Could you update your post to point to this post or to the referenced paper?
Best Regards,
Michael
Update:
I also had a look at an implementation for Linux/POSIX. It turns out pthreads already provides everything you'll need. Just put a pthread_cond_t and a pthread_mutex_t in some shared memory to share them with the other process, and initialize both with PTHREAD_PROCESS_SHARED. Also set PTHREAD_MUTEX_ROBUST on the mutex.
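A minimal sketch of that setup, assuming the two structures already live in a mapped shared-memory segment (error handling trimmed):
// Initialize a process-shared, robust mutex and a process-shared condition
// variable that both reside in shared memory. Returns 0 on success.
#include <pthread.h>

int init_shared_condvar(pthread_mutex_t *mtx, pthread_cond_t *cond)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;

    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&ma, PTHREAD_MUTEX_ROBUST);   /* survive owner death */
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);

    int rc = pthread_mutex_init(mtx, &ma);
    if (rc == 0)
        rc = pthread_cond_init(cond, &ca);

    pthread_mutexattr_destroy(&ma);
    pthread_condattr_destroy(&ca);
    return rc;
}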
Yes. You can use a (named) Mutex for that. Use CreateMutex to create one. You then wait for it (with functions like WaitForSingleObject), and release it when you're done with ReleaseMutex.
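A minimal sketch of that flow, using an arbitrary example name for the mutex:
// Both processes call this; the first call creates the named mutex and later
// calls (from either process) just open it. "MyAppSharedLock" is an arbitrary
// example name.
#include <windows.h>

void DoProtectedWork()
{
    HANDLE hMutex = CreateMutexW(NULL, FALSE, L"MyAppSharedLock");
    if (!hMutex) return;

    DWORD rc = WaitForSingleObject(hMutex, INFINITE);
    if (rc == WAIT_OBJECT_0 || rc == WAIT_ABANDONED) {
        /* ... touch the shared resource ... */
        ReleaseMutex(hMutex);
    }
    CloseHandle(hMutex);
}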
For reference, Boost.Interprocess (documentation for version 1.59) has condition variables and much more. Please note, however, that as of this writing, "Win32 synchronization is too basic".

Waiting for grandchild processes in Windows

Is it possible to wait for all processes launched by a child process in Windows? I can't modify the child or grandchild processes.
Specifically, here's what I want to do. My process launches uninstallA.exe. The process uninstallA.exe launches uninstallB.exe and immediately exits, and uninstallB.exe runs for a while. I'd like to wait for uninstallB.exe to exit so that I can know when the uninstall is finished.
Create a Job Object with CreateJobObject. Use CreateProcess to start UninstallA.exe in a suspended state. Assign that new process to your job object with AssignProcessToJobObject. Start UninstallA.exe running by calling ResumeThread on the handle of the thread you got back from CreateProcess.
Then the hard part: wait for the job object to complete its execution. Unfortunately, this is quite a bit more complex than anybody would reasonably hope for. The basic idea is that you create an I/O completion port, then you create the job object, associate it with the I/O completion port, and finally wait on the I/O completion port (getting its status with GetQueuedCompletionStatus). Raymond Chen has a demonstration (and an explanation of how this came about) on his blog.
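A rough sketch of those steps follows (error handling omitted; uninstallA.exe is the example from the question, and RunAndWaitForJob is a made-up name):
// Start uninstallA.exe suspended, put it in a job, and wait until the job
// reports that no processes remain in it.
#include <windows.h>

void RunAndWaitForJob()
{
    HANDLE hJob  = CreateJobObjectW(NULL, NULL);
    HANDLE hPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1);

    // Route job notifications to the completion port.
    JOBOBJECT_ASSOCIATE_COMPLETION_PORT assoc = { 0 };
    assoc.CompletionKey  = hJob;
    assoc.CompletionPort = hPort;
    SetInformationJobObject(hJob, JobObjectAssociateCompletionPortInformation,
                            &assoc, sizeof(assoc));

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"uninstallA.exe";
    // Create the process suspended so it cannot spawn children before it is in the job.
    CreateProcessW(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED, NULL, NULL, &si, &pi);
    AssignProcessToJobObject(hJob, pi.hProcess);
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    // JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO arrives once the last process in the
    // job (uninstallA.exe and anything it spawned) has exited.
    DWORD completionCode;
    ULONG_PTR completionKey;
    LPOVERLAPPED overlapped;
    while (GetQueuedCompletionStatus(hPort, &completionCode, &completionKey,
                                     &overlapped, INFINITE)) {
        if ((HANDLE)completionKey == hJob &&
            completionCode == JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO)
            break;
    }

    CloseHandle(hPort);
    CloseHandle(hJob);
}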
Here's a technique that, while not infallible, can be useful if for some reason you can't use a job object. The idea is to create an anonymous pipe and let the child process inherit the handle to the write end of the pipe.
Typically, grandchild processes will also inherit the write end of the pipe. In particular, processes launched by cmd.exe (e.g., from a batch file) will inherit handles.
Once the child process has exited, the parent process closes its handle to the write end of the pipe, and then attempts to read from the pipe. Since nobody is writing to the pipe, the read operation will block indefinitely. (Of course you can use threads or asynchronous I/O if you want to keep doing stuff while waiting for the grandchildren.)
When (and only when) the last handle to the write end of the pipe is closed, the write end of the pipe is automatically destroyed. This breaks the pipe and the read operation completes and reports an ERROR_BROKEN_PIPE failure.
I've been using this code (and earlier versions of the same code) in production for a number of years.
// pwatch.c
//
// Written in 2011 by Harry Johnston, University of Waikato, New Zealand.
// This code has been placed in the public domain. It may be freely
// used, modified, and distributed. However it is provided with no
// warranty, either express or implied.
//
// Launches a process with an inherited pipe handle,
// and doesn't exit until (a) the process has exited
// and (b) all instances of the pipe handle have been closed.
//
// This effectively waits for any child processes to exit,
// PROVIDED the child processes were created with handle
// inheritance enabled. This is usually but not always
// true.
//
// In particular if you launch a command shell (cmd.exe)
// any commands launched from that command shell will be
// waited on.
#include <windows.h>
#include <stdio.h>
void error(const wchar_t * message, DWORD err) {
    wchar_t msg[512];
    swprintf_s(msg, sizeof(msg)/sizeof(*msg), message, err);
    printf("pwatch: %ws\n", msg);
    MessageBox(NULL, msg, L"Error in pwatch utility", MB_OK | MB_ICONEXCLAMATION | MB_SYSTEMMODAL);
    ExitProcess(err);
}

int main(int argc, char ** argv) {
    LPWSTR lpCmdLine = GetCommandLine();
    wchar_t ch;
    DWORD dw, returncode;
    HANDLE piperead, pipewrite;
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    SECURITY_ATTRIBUTES sa;
    char buffer[1];

    /* skip over our own executable name in the command line */
    while (ch = *(lpCmdLine++)) {
        if (ch == '"') while (ch = *(lpCmdLine++)) if (ch == '"') break;
        if (ch == ' ') break;
    }
    while (*lpCmdLine == ' ') lpCmdLine++;

    sa.nLength = sizeof(sa);
    sa.bInheritHandle = TRUE;
    sa.lpSecurityDescriptor = NULL;
    if (!CreatePipe(&piperead, &pipewrite, &sa, 1)) error(L"Unable to create pipes: %u", GetLastError());

    GetStartupInfo(&si);
    if (!CreateProcess(NULL, lpCmdLine, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
        error(L"Error %u creating process.", GetLastError());

    if (WaitForSingleObject(pi.hProcess, INFINITE) == WAIT_FAILED) error(L"Error %u waiting for process.", GetLastError());
    if (!GetExitCodeProcess(pi.hProcess, &returncode)) error(L"Error %u getting exit code.", GetLastError());

    CloseHandle(pipewrite);
    if (ReadFile(piperead, buffer, 1, &dw, NULL)) {
        error(L"Unexpected data received from pipe; bug in application being watched?", ERROR_INVALID_HANDLE);
    }
    dw = GetLastError();
    if (dw != ERROR_BROKEN_PIPE) error(L"Unexpected error %u reading from pipe.", dw);

    return returncode;
}
There is not a generic way to wait for all grandchildren, but for your specific case you may be able to hack something together. You know you are looking for a specific process. I would first wait for uninstallA.exe to exit (using WaitForSingleObject), because at that point you know that uninstallB.exe has been started. Then use EnumProcesses and GetProcessImageFileName from PSAPI to find the running uninstallB.exe instance. If you don't find it, you know it has already finished; otherwise you can wait for it (see the sketch below).
An additional complication is that if you need to support versions of Windows older than XP you can't use GetProcessImageFileName, and for Windows NT you can't use PSAPI at all. For Windows 2000 you can use GetModuleFileNameEx, but it has some caveats that mean it might fail sometimes (check the docs). If you have to support NT, look up Toolhelp32.
Yes, this is super ugly.
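For illustration, the lookup step might look roughly like this sketch (not the poster's code; it does a crude match on the image name and assumes psapi.lib is linked):
// Returns a handle to a running uninstallB.exe instance, or NULL if none was
// found (i.e. it has presumably already finished). The caller waits on the
// returned handle and then closes it.
#include <windows.h>
#include <psapi.h>
#include <wchar.h>

HANDLE FindUninstallB(void)
{
    DWORD pids[4096], cbReturned = 0;
    if (!EnumProcesses(pids, sizeof(pids), &cbReturned)) return NULL;

    for (DWORD i = 0; i < cbReturned / sizeof(DWORD); ++i) {
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | SYNCHRONIZE, FALSE, pids[i]);
        if (!h) continue;

        wchar_t path[MAX_PATH];
        if (GetProcessImageFileNameW(h, path, MAX_PATH) &&
            wcsstr(path, L"uninstallB.exe"))   // crude, case-sensitive match
            return h;

        CloseHandle(h);
    }
    return NULL;
}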
Use a named mutex.
One possibility is to install Cygwin and then use the ps command to watch for the grandchild process to exit.
