Win32 Mutex not waiting - winapi

I am creating an application that implements inter process communication.
For this purpose I have set up a shared buffer, which seems to work fine.
Now, I need a way for the data-generating application (written in C++)
to tell the data-receiving application (written in FreePascal/Lazarus)
when it should read the data.
I was trying to use a mutex for this purpose. I do not have much experience with Windows API programming.
So, my problem is that in the FreePascal code below the mutex won't wait. I can call the TMutex.Wait() function and it doesn't return an error or anything, but it simply won't wait.
constructor TMutex.Create(sName: AnsiString);
begin
  sName := 'Local\Mutex' + sName;
  hMutex := CreateMutexA(
    nil,           // default security attributes
    True,          // initially not owned
    PChar(sName)); // named mutex
  if hMutex = 0 then
  begin
    raise Exception.Create('mutex creation failed');
  end;
end;
destructor TMutex.Destroy;
begin
  CloseHandle(hMutex);
end;

procedure TMutex.Wait;
begin
  if WaitForSingleObject(hMutex, INFINITE) <> 0 then
    ShowMessage('debug: wait returned something');
end;

procedure TMutex.Post;
begin
  ReleaseMutex(hMutex);
end;

It looks like your problem is at:
True, // initially not owned
You have things backwards -- True means it initially IS owned by the creating thread, so waiting on it from that thread will return immediately.
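As a minimal sketch of the fix (the question's constructor unchanged apart from the ownership flag):

constructor TMutex.Create(sName: AnsiString);
begin
  sName := 'Local\Mutex' + sName;
  hMutex := CreateMutexA(
    nil,           // default security attributes
    False,         // do NOT take initial ownership
    PChar(sName)); // named mutex
  if hMutex = 0 then
    raise Exception.Create('mutex creation failed');
end;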

You don't show us the code that calls the Wait method of TMutex. However, you have to know that a Win32 mutex is recursive: if a thread already owns the mutex, any further wait on it from that same thread is granted immediately, so the wait will never block. This is built into the mutex to avoid self-deadlocks.
Try acquiring the mutex from another thread; the wait should then block.
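For illustration, a small self-contained FreePascal sketch (not from the original post) showing both behaviours: the owning thread re-acquires without blocking, while a second thread really waits:

program MutexRecursionDemo;
{$mode objfpc}
uses
  Windows, Classes, SysUtils;

var
  hMutex: THandle;

type
  TWaiter = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TWaiter.Execute;
begin
  WriteLn('worker: waiting for the mutex...');
  WaitForSingleObject(hMutex, INFINITE);  // blocks: the main thread owns the mutex
  WriteLn('worker: acquired the mutex');
  ReleaseMutex(hMutex);
end;

begin
  hMutex := CreateMutex(nil, True, nil);  // main thread takes initial ownership
  WaitForSingleObject(hMutex, INFINITE);  // recursive acquire: returns immediately
  TWaiter.Create(False);
  Sleep(1000);                            // the worker stays blocked during this second
  ReleaseMutex(hMutex);                   // release once per successful wait
  ReleaseMutex(hMutex);
  Sleep(500);                             // give the worker time to print and exit
end.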

Related

How do I get the Windows SCM to restart my service when it fails?

I have some Windows services that I have written in Delphi and they generally work very well; however, on occasion I do get an exception thrown that can be considered fatal. When this happens the service is designed to stop.
My question is: how do I exit the service in such a way that the SCM will automatically try to restart it? (I have already set the recovery options for the service in the service manager.)
MSDN states:
A service is considered failed when it terminates without reporting a status of SERVICE_STOPPED to the service controller.
I have read this blog post, Using the Automatic Recovery Features of Windows Services, but I am not sure how to implement this in Delphi.
I have already tried the following:
Setting the ErrCode property of the TService to a non-zero value.
Setting the Stopped parameter of the ServiceStop event to False.
Raising an exception in the ServiceStop event handler.
EDIT 2013-08-06: added code example
Code now updated to show a working example.
Here is the code I'm using:
procedure TTestService.ServiceExecute(Sender: TService);
begin
  while not (Terminated or FFatalError) do
  begin
    ServiceThread.ProcessRequests(False);
    ReportStatus;
    Sleep(100);
  end;
  if FFatalError then
    Halt(1);
end;
FFatalError is a private Boolean field on the TTestService class. It is initialized to False on startup and is only set to True if the worker thread (started in the TTestService.ServiceStart event) terminates with a fatal exception.
Here is the OnTerminate event handler for the worker thread:
procedure TTestService.ThdTerm(Sender: TObject);
var
  E: Exception;
  Thread: TThread;
begin
  Thread := TThread(Sender);
  if Thread.FatalException <> nil then
  begin
    E := Exception(Thread.FatalException);
    GetExcCallStack(E);
    EventLog.LogError(Thread.ClassName + ': ID:' + IntToStr(Thread.ThreadID) +
      ' Stopped Unexpectedly!, ' + NEWLINE + E.ClassName + ': ' + E.Message);
    FFatalError := True;
  end;
end;
The SCM will restart your service if it fails, but all of the normal termination paths out of a Delphi service do not count as failure. If you could raise an unhandled exception from the main service thread, that would count as a failure.
However, I think the simplest way to force a process termination that is treated as a failure is to call ExitProcess. You could equally well call the Delphi RTL function Halt, which ultimately calls ExitProcess. However, since your process is probably in a bad state, I'd be inclined to go straight to ExitProcess.
As has already been commented, avoiding the exception in the first place would be the ideal solution.
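For illustration, a hedged sketch (a hypothetical variant of the asker's handler, not the original code) of going straight to ExitProcess from the worker thread's OnTerminate handler. Because no SERVICE_STOPPED status is reported, the SCM treats the exit as a failure and applies the configured recovery options:

procedure TTestService.ThdTerm(Sender: TObject);
begin
  if TThread(Sender).FatalException <> nil then
  begin
    EventLog.LogError('Worker thread stopped unexpectedly: ' +
      Exception(TThread(Sender).FatalException).Message);
    ExitProcess(1);  // hard exit, no SERVICE_STOPPED is reported to the SCM
  end;
end;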

INDY 10 TCP Server - Combine with non-thread-safe VCL code

VCL is not thread-safe. Therefore I guess it is not a good idea to write information to the GUI in the Indy 10 TCP server's Execute(...) function.
How do I send information from the server's Execute to the VCL?
I need to modify a TBitmap inside a TCPServer.Execute function. How do I make that thread-safe?
Write stuff to the VCL thread from Indy the same way you write stuff to the VCL thread from anywhere else. Common options include TThread.Synchronize and TThread.Queue.
Modifying a standalone TBitmap should not require synchronization with the main thread. You can modify it from any thread you want, as long as you do it from only one thread at a time. You can use standard synchronization objects such as critical sections and events to make sure only one thread uses it at a time.
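For example, a minimal sketch (the names BmpLock and SharedBmp are invented here) of guarding a shared TBitmap with a TCriticalSection so the Indy context thread and the main thread never touch it at the same time:

uses
  SyncObjs, Graphics;

var
  BmpLock: TCriticalSection;  // create once at startup, free at shutdown
  SharedBmp: TBitmap;

// Called from the TIdTCPServer OnExecute context thread
procedure UpdateBitmapFromServer;
begin
  BmpLock.Enter;
  try
    SharedBmp.Canvas.Pixels[0, 0] := clRed;  // any modification goes inside the lock
  finally
    BmpLock.Leave;
  end;
end;

// Called from the main (VCL) thread, e.g. before painting
procedure DrawSharedBitmap(Target: TCanvas);
begin
  BmpLock.Enter;
  try
    Target.Draw(0, 0, SharedBmp);
  finally
    BmpLock.Leave;
  end;
end;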
The best way to sync is by creating and using a TIdNotify descendant.
Define a TIdNotify descendant and a VCL proc type like this, with the appropriate private fields:
type
  TVclProc = procedure(aBMP: TBitmap) of object;

  TBmpNotify = class(TIdNotify)
  protected
    FBMP: TBitmap;
    FProc: TVclProc;
    procedure DoNotify; override;
  public
    constructor Create(aBMP: TBitmap; aProc: TVclProc); reintroduce;
    class procedure NewBMP(aBMP: TBitmap; aProc: TVclProc);
  end;
Then implement it like this:
{ TBmpNotify }

constructor TBmpNotify.Create(aBMP: TBitmap; aProc: TVclProc);
begin
  inherited Create;
  FBMP := aBMP;
  FProc := aProc;
end;

procedure TBmpNotify.DoNotify;
begin
  inherited;
  FProc(FBMP);
end;

class procedure TBmpNotify.NewBMP(aBMP: TBitmap; aProc: TVclProc);
begin
  with Create(aBMP, aProc) do
  begin
    Notify;
  end;
end;
Then, from the server's Execute(...), call it like this:
procedure TTCPServer.DoExecute(aContext: TIdContext);
var
  NewBMP: TBitmap;
begin
  // NewBMP should refer to the bitmap prepared for this context
  TBmpNotify.NewBMP(NewBMP, FVclBmpProc);
end;
Where FVclBmpProc is a private field pointing to a procedure on the form that matches the parameter signature of TVclProc. This field should be set via a property on the server object just after creation and before starting the server.
The method on the form is then free to use the bitmap it receives without fear of thread contention, deadlocks and the other nasties created by accessing VCL controls without synchronisation.
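A hypothetical wiring sketch (the property name, form and control names are invented here, assuming TTCPServer descends from TIdTCPServer and surfaces FVclBmpProc as a VclBmpProc property):

// On the form: a method matching the TVclProc signature
procedure TMainForm.ShowServerBitmap(aBMP: TBitmap);
begin
  imgPreview.Picture.Assign(aBMP);  // executed in the main thread via TIdNotify
end;

// Just after creating the server, before starting it:
procedure TMainForm.FormCreate(Sender: TObject);
begin
  FServer := TTCPServer.Create(Self);
  FServer.VclBmpProc := ShowServerBitmap;  // writes the private FVclBmpProc field
  FServer.Active := True;
end;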
One simple PostMessage (inside the thread) and handling the message (outside the thread) was enough to make UI updates...
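For completeness, a rough sketch of that PostMessage route (the message constant and names are illustrative only, not from the answer):

const
  WM_BITMAP_READY = WM_USER + 1;

type
  TForm1 = class(TForm)
  private
    procedure WMBitmapReady(var Msg: TMessage); message WM_BITMAP_READY;
  end;

procedure TForm1.WMBitmapReady(var Msg: TMessage);
begin
  Invalidate;  // runs in the main thread; safe to touch the VCL and the shared bitmap here
end;

// From the OnExecute context thread: notify only, touch nothing VCL-related
PostMessage(Form1.Handle, WM_BITMAP_READY, 0, 0);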

Is there a way to sleep unless a message is received?

I'm working in a service whose main loop looks like this:
while (fServer.ServerState = ssStarted) and (Self.Terminated = false) do
begin
  Self.ServiceThread.ProcessRequests(false);
  ProcessFiles;
  Sleep(3000);
end;
ProcessRequests is a lot like Application.ProcessMessages. I can't pass True to it because if I do, it blocks until a message is received from Windows and ProcessFiles won't run; ProcessFiles has to run continually. The Sleep is there to keep the CPU usage down.
This works just fine until I try to shut down the service from Windows's service management list. When I hit Stop, it sends a message and expects to get a response almost immediately, and if it's in the middle of that Sleep command, Windows will give me an error that the service didn't respond to the Stop command.
So what I need is to say "Sleep for 3000 or until you receive a message, whichever comes first." I'm sure there's an API for that, but I'm not sure what it is. Does anyone know?
This kind of stuff is hard to get right, so I usually start at the API documentation at MSDN.
The WaitForSingleObject documentation specifically directs you to MsgWaitForMultipleObjects for these kinds of situations:
Use caution when calling the wait functions and code that directly or indirectly creates windows. If a thread creates any windows, it must process messages. Message broadcasts are sent to all windows in the system. A thread that uses a wait function with no time-out interval may cause the system to become deadlocked. Two examples of code that indirectly creates windows are DDE and the CoInitialize function. Therefore, if you have a thread that creates windows, use MsgWaitForMultipleObjects or MsgWaitForMultipleObjectsEx, rather than WaitForSingleObject.
In MsgWaitForMultipleObjects, you have a dwWakeMask parameter specifying on which queued messages to return, and a table describing the masks you can use.
Edit because of comment by Warren P:
If your main loop can be continued because of a ReadFileEx, WriteFileEx or QueueUserAPC, then you can use SleepEx.
--jeroen
MsgWaitForMultipleObjects() is the way to go, i.e.:
while (fServer.ServerState = ssStarted) and (not Self.Terminated) do
begin
  ProcessFiles;
  if MsgWaitForMultipleObjects(0, nil, FALSE, 3000, QS_ALLINPUT) = WAIT_OBJECT_0 then
    Self.ServiceThread.ProcessRequests(false);
end;
If you want to call ProcessFiles() at 3-second intervals regardless of any messages arriving, then you can use a waitable timer for that, i.e.:
var
  iDue: TLargeInteger;
  hTimer: array[0..0] of THandle;
begin
  iDue := -30000000; // 3 second relative interval, in 100-nanosecond units
  hTimer[0] := CreateWaitableTimer(nil, False, nil);
  SetWaitableTimer(hTimer[0], iDue, 0, nil, nil, False);
  while (fServer.ServerState = ssStarted) and (not Self.Terminated) do
  begin
    // use a timeout interval so the loop conditions can still be checked periodically
    case MsgWaitForMultipleObjects(1, hTimer, False, 1000, QS_ALLINPUT) of
      WAIT_OBJECT_0:
        begin
          ProcessFiles;
          SetWaitableTimer(hTimer[0], iDue, 0, nil, nil, False);
        end;
      WAIT_OBJECT_0+1:
        Self.ServiceThread.ProcessRequests(false);
    end;
  end;
  CancelWaitableTimer(hTimer[0]);
  CloseHandle(hTimer[0]);
end;
Use a timer to run ProcessFiles instead of hacking it into the main application loop. Then ProcessFiles will run at the interval you want and the messages will be processed correctly, without taking 100% CPU.
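One possible reading of that suggestion, as a hedged sketch (hypothetical service and field names; it assumes the TTimer is created in the service thread so that ProcessRequests can dispatch its WM_TIMER messages):

procedure TMyService.ServiceStart(Sender: TService; var Started: Boolean);
begin
  FTimer := TTimer.Create(nil);          // hypothetical field of type TTimer (ExtCtrls)
  FTimer.Interval := 3000;
  FTimer.OnTimer := TimerTick;
  FTimer.Enabled := True;
  Started := True;
end;

procedure TMyService.TimerTick(Sender: TObject);
begin
  ProcessFiles;                          // fires roughly every 3 seconds
end;

procedure TMyService.ServiceExecute(Sender: TService);
begin
  while not Terminated do
    ServiceThread.ProcessRequests(True); // blocks on the queue; WM_TIMER and stop requests wake it
end;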
I used a TTimer in a multithreaded application with strange results, so now I use events.
while (fServer.ServerState = ssStarted) and (Self.Terminated = false) do
begin
  Self.ServiceThread.ProcessRequests(false);
  ProcessFiles;
  if ExitEvent.WaitFor(3000) <> wrTimeout then
    Exit;
end;
You create the event with
ExitEvent := TEvent.Create(nil, False, False, '');
Now the last thing is to fire the event in case of service stop. I think the Stop event of the service is the right place to put this.
ExitEvent.SetEvent;
I use this code for a cleanup thread in my DB connection pooling system, but it should work well in your case too.
You don't need to sleep for 3 full seconds to keep the CPU usage low. Even something like Sleep(500) should keep your usage pretty low: if there are no messages waiting to be processed, it should blow through the loop pretty quickly and hit the sleep again. If your loop takes a few ms to run, your thread still spends the vast majority of its time asleep.
That being said, your code may benefit from some refactoring. You say you don't want ProcessRequests to block waiting for a message? The only other thing in that loop is ProcessFiles. If that is dependent on the message being processed, then why can't it block? And if it's not dependent on the message being processed, can it be split onto another thread? (The previous suggestion of firing ProcessFiles via a timer is an excellent way to do this.)
Use a TEvent that you signal when the thread should wake up, then block on the TEvent (using WaitForMultipleObjects, as Jeroen says, if you have multiple events to wait on).
Is it not possible to move ProcessFiles to a separate thread? In your main thread you just wait for messages, and when the service is being terminated you terminate the ProcessFiles thread.
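A bare-bones sketch of that split (hypothetical class and field names, assuming ProcessFiles is callable from the worker): the worker thread runs ProcessFiles on its own schedule, while the service thread just pumps messages:

type
  TFileWorker = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TFileWorker.Execute;
begin
  while not Terminated do
  begin
    ProcessFiles;
    Sleep(3000);  // the service thread is no longer blocked by this
  end;
end;

// In the service:
procedure TMyService.ServiceStart(Sender: TService; var Started: Boolean);
begin
  FWorker := TFileWorker.Create(False);
  Started := True;
end;

procedure TMyService.ServiceStop(Sender: TService; var Stopped: Boolean);
begin
  FWorker.Terminate;
  FWorker.WaitFor;   // may take up to one Sleep interval to notice Terminated
  FWorker.Free;
  Stopped := True;
end;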

Why does WaitForSingleObject(INVALID_HANDLE_VALUE, INFINITE) block?

Why does
HANDLE mutexHandle = INVALID_HANDLE_VALUE;
WaitForSingleObject(mutexHandle, INFINITE);
block? It does not return with an error message. Checking the handle for INVALID_HANDLE_VALUE would be stupid for a mutex, as I would need a mutex for accessing the mutex handle...
BTW: It does return with WAIT_FAILED if the handle was closed.
From http://blogs.msdn.com/oldnewthing/archive/2004/03/02/82639.aspx:
Fourth, you have to be particularly careful with the INVALID_HANDLE_VALUE value: By coincidence, the value INVALID_HANDLE_VALUE happens to be numerically equal to the pseudohandle returned by GetCurrentProcess(). Many kernel functions accept pseudohandles, so if you mess up and accidentally call, say, WaitForSingleObject on a failed INVALID_HANDLE_VALUE handle, you will actually end up waiting on your own process. This wait will, of course, never complete, because a process is signalled when it exits, so you ended up waiting for yourself.
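A minimal guard sketch (written in Delphi to match the rest of this page; the mutex name is hypothetical): validate the handle before waiting and always check the wait result, so this mistake fails loudly instead of blocking forever:

var
  hMutex: THandle;
  Ret: DWORD;
begin
  hMutex := CreateMutex(nil, False, 'Local\MyMutex');
  if (hMutex = 0) or (hMutex = INVALID_HANDLE_VALUE) then
    RaiseLastOSError;  // never hand a suspect handle to WaitForSingleObject

  Ret := WaitForSingleObject(hMutex, INFINITE);
  if Ret = WAIT_FAILED then
    RaiseLastOSError;  // e.g. the handle was closed
  // Ret = WAIT_OBJECT_0 (or WAIT_ABANDONED): we now own the mutex
end;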

PostMessage occasionally loses a message

I wrote a multi-threaded Windows application where thread:
A – is a Windows form that handles user interaction and processes the data from B.
B – occasionally generates data and passes it to A.
A thread-safe queue is used to pass the data from thread B to A. The enqueue and dequeue functions are guarded using Windows critical section objects.
If the queue is empty when the enqueue function is called, the function will use PostMessage to tell A that there is data in the queue. The function checks to make sure the call to PostMessage executes successfully and repeatedly calls PostMessage if it does not (PostMessage has yet to fail).
This worked well for quite some time until one specific computer started to lose the occasional message. By lose I mean that PostMessage returns successfully in B, but A never receives the message. This causes the software to appear frozen.
I have already come up with a couple of acceptable workarounds. I am interested in knowing why Windows is losing these messages and why this is only happening on the one computer.
Here are the relevant portions of the code.
// Only called by B
procedure TSharedQueue.Enqueue(AItem: TSQItem);
var
  B: boolean;
begin
  EnterCriticalSection(FQueueLock);
  if FCount > 0 then
  begin
    FLast.FNext := AItem;
    FLast := AItem;
  end
  else
  begin
    FFirst := AItem;
    FLast := AItem;
  end;
  if (FCount = 0) or (FCount mod 10 = 0) then // just in case a message is lost
    repeat
      B := PostMessage(FConsumer, SQ_HAS_DATA, 0, 0);
      if not B then
        Sleep(1000); // this line of code has never been reached
    until B;
  Inc(FCount);
  LeaveCriticalSection(FQueueLock);
end;
// Only called by A
function TSharedQueue.Dequeue: TSQItem;
begin
  EnterCriticalSection(FQueueLock);
  if FCount > 0 then
  begin
    Result := FFirst;
    FFirst := FFirst.FNext;
    Result.FNext := nil;
    Dec(FCount);
  end
  else
    Result := nil;
  LeaveCriticalSection(FQueueLock);
end;
// procedure called when SQ_HAS_DATA is received
procedure TfrmMonitor.SQHasData(var AMessage: TMessage);
var
  Item: TSQItem;
begin
  while FMessageQueue.Count > 0 do
  begin
    Item := FMessageQueue.Dequeue;
    // use the Item somehow
  end;
end;
Is FCount also protected by FQueueLock? If not, then your problem lies with FCount being incremented after the posted message is already processed.
Here's what might be happening:
B enters critical section
B calls PostMessage
A receives the message but doesn't do anything since FCount is 0
B increments FCount
B leaves critical section
A sits there like a duck
A quick remedy would be to increment FCount before calling PostMessage.
Keep in mind that things can happen more quickly than one would expect (i.e. the message posted with PostMessage being caught and processed by another thread before you have a chance to increment FCount a few lines later), especially in a true multi-threaded environment (multiple CPUs). That's why I asked earlier whether the "problem machine" has multiple CPUs/cores.
An easy way to troubleshoot problems like these is to scaffold the code with additional logging to record every time you enter a method, enter/leave a critical section, etc. Then you can analyze the log to see the true order of events.
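A sketch of that quick remedy applied to the Enqueue method from the question (only the ordering changes; since the increment now happens first, the "queue was empty" test becomes FCount = 1):

procedure TSharedQueue.Enqueue(AItem: TSQItem);
var
  B: boolean;
begin
  EnterCriticalSection(FQueueLock);
  if FCount > 0 then
  begin
    FLast.FNext := AItem;
    FLast := AItem;
  end
  else
  begin
    FFirst := AItem;
    FLast := AItem;
  end;
  Inc(FCount); // count is updated before A can possibly react to the notification
  if (FCount = 1) or (FCount mod 10 = 0) then // FCount = 1 now means "queue was empty"
    repeat
      B := PostMessage(FConsumer, SQ_HAS_DATA, 0, 0);
      if not B then
        Sleep(1000);
    until B;
  LeaveCriticalSection(FQueueLock);
end;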
On a separate note, a nice little optimization that can be done in a producer/consumer scenario like this is to use two queues instead of one. When the consumer wakes up to process the full queue, you swap the full queue with an empty one and just lock/process the full queue while the new empty queue can be populated without the two threads trying to lock each other's queues. You'd still need some locking in the swapping of the two queues though.
If the queue is empty when the enqueue function is called, the function will use PostMessage to tell A that there is data in the queue.
Are you locking the message queue before checking the queue size and issuing the PostMessage? You may be experiencing a race condition where you check the queue and find it non-empty when in fact A is processing the very last message and is about to go idle.
To see if you're in fact experiencing a race condition and not a problem with PostMessage, you could switch to using an event. The worker thread (A) would wait on the event instead of waiting for a message. B would simply set that event instead of posting a message.
This worked well for quite some time until one specific computer started to lose the occasional message.
By any chance, is the number of CPUs or cores in this specific computer different from the others where you see no problem? Sometimes when you switch from a single-CPU machine to a machine with more than one physical CPU/core, new race conditions or deadlocks may arise.
Could there be a second instance unknowingly running and eating the messages, marking them as handled?
