Order of receiving Win32 Events - winapi

I have 3 different threads that use the Windows SetEvent API to set an event each.
DWORD thFunc1(LPVOID)
{
    ...
    SetEvent(event1);
    ...
}
DWORD thFunc2(LPVOID)
{
    ...
    SetEvent(event2);
    ...
}
DWORD thFunc3(LPVOID)
{
    ...
    SetEvent(event3);
    ...
}
A 4th thread is waiting on all of these events using WaitForMultipleObjects API.
DWORD thCatch(LPVOID)
{
    ...
    DWORD ret = WaitForMultipleObjects(3, arrHandles, FALSE, INFINITE);
    ...
}
My question is twofold:
If the 3 threads signal their events at almost the same time, are the events guaranteed to be received in the same order in which they were signaled?
If the answer to question 1 is NO, then is there any way that this can be achieved using Windows APIs?
Any inputs would be appreciated.

The WaitForMultipleObjects API call makes only a single promise for the case where more than one object has entered the signaled state:
If multiple objects become signaled, the function returns the index of the first handle in the array whose object was signaled.
The order in which multiple objects are signaled cannot be determined from the WaitForMultipleObjects call.
If the order in which events happened is important, you will have to record that information elsewhere. You could, for example, use a singly linked list with atomic push and pop operations and signal a single event when a new entry arrives; a rough sketch follows.
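For illustration only (none of this code is in the original post, and names like OrderedEntry and RecordAndSignal are made up), here is one way to do that with the Windows Interlocked Singly Linked List (SList) API plus a sequence counter:

#include <windows.h>
#include <malloc.h>   // _aligned_malloc / _aligned_free

struct OrderedEntry
{
    SLIST_ENTRY itemEntry;   // must be the first member
    LONG64      sequence;    // records the order in which events were raised
    DWORD       sourceId;    // which producer thread pushed this entry
};

static SLIST_HEADER    g_listHead;      // call InitializeSListHead(&g_listHead) once at start-up
static HANDLE          g_newEntryEvent; // auto-reset event from CreateEvent(NULL, FALSE, FALSE, NULL)
static volatile LONG64 g_sequence = 0;  // global ordering counter

// Producer side: call this instead of the bare SetEvent(eventN)
void RecordAndSignal(DWORD sourceId)
{
    OrderedEntry* e = (OrderedEntry*)_aligned_malloc(sizeof(OrderedEntry),
                                                     MEMORY_ALLOCATION_ALIGNMENT);
    e->sequence = InterlockedIncrement64(&g_sequence);  // stamp the signaling order
    e->sourceId = sourceId;
    InterlockedPushEntrySList(&g_listHead, &e->itemEntry);
    SetEvent(g_newEntryEvent);                          // wake the consumer
}

// Consumer side: replaces WaitForMultipleObjects on the three separate events
void ConsumeLoop()
{
    for (;;)
    {
        WaitForSingleObject(g_newEntryEvent, INFINITE);
        PSLIST_ENTRY item;
        while ((item = InterlockedPopEntrySList(&g_listHead)) != NULL)
        {
            OrderedEntry* e = CONTAINING_RECORD(item, OrderedEntry, itemEntry);
            // process e->sourceId here; e->sequence tells you exactly which
            // producer signaled first. Note the SList itself is LIFO, so
            // collect a batch and sort by sequence if strict arrival order
            // is required.
            _aligned_free(e);
        }
    }
}

The SList only gives you lock-free push and pop; it is the sequence number that actually records the order, since the wait functions alone cannot.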

Related

RxJS ShareReplay with retries every n-th second and no refCount

I'm trying to cache HTTP calls in the service so that all subsequent calls return the same response. This is fairly easy with shareReplay:
data = this.http.get(url).pipe(
  shareReplay(1)
);
But it doesn't work in the case of backend / network errors. shareReplay spams the backend with requests on any error when this Observable is bound to the view through the async pipe.
I tried with retryWhen etc., but the solution I got is untestable:
data = this.http.get(url).pipe(
  retryWhen(errors => errors.pipe(delay(10000))),
  shareReplay(1)
);
fakeAsync tests fail with a "1 timer(s) still in the queue" error because the delay timer has no end condition. I also don't want an endless timer hanging in the background - it should stop with the last subscription.
The behavior I would like:
Multicast - make only one subscription to the source even with many subscribers.
Do not count refs for successful queries - reuse the same result when the subscriber count goes to 0 and back to 1.
In case of error - retry every 10 seconds, but only while there are subscribers.
My 2 cents:
This code is for RxJS 6.4+ (v6.6 here).
To use a shared observable, you need to return the same observable instance to all the subscribers (otherwise you create an observable that has nothing to share).
Multicasting can be done with shareReplay, and you can replay the last emitted value (even after the last subscriber has unsubscribed) using the {refCount: false} option.
As long as there is no subscription, the observable does nothing: no request hits the server before the first subscriber.
beware:
If refCount is false, the source will not be unsubscribed, meaning that the inner ReplaySubject will still be subscribed to the source (and potentially run for ever).
Also:
A successfully completed source will stay cached in the shareReplayed observable forever, but an errored source can be retried.
The problem is that, using shareReplay, you have to choose between:
Always getting the last value even if the refCount went back to 0, at the price of possibly never-ending retries in case of error (remember, shareReplay with refCount set to false never unsubscribes).
Or keeping the default refCount: true, which means you won't get the cache benefit for the next "first subscriber". Conversely, the retry will also stop when no subscriber is left.
Here is a dummy example:
import { Observable } from 'rxjs';
import { delay, retryWhen, shareReplay } from 'rxjs/operators';

class MyServiceClass {
  private data: Observable<any>;

  // assuming you are injecting the http service
  constructor(private http: HttpService) {
    this.data = this.buildData("http://some/data");
  }

  // use this accessor to get the unique (shared) instance of the data observable
  public getData(): Observable<any> {
    return this.data;
  }

  private buildData(url: string): Observable<any> {
    return this.http.get(url).pipe(
      retryWhen(errors => errors.pipe(delay(10000))),
      shareReplay({ bufferSize: 1, refCount: false })
    );
  }
}
Now, in my opinion, to fix the flow you should prevent the retry from running forever, for instance by adding a maximum number of retries (e.g. by limiting the errors stream inside retryWhen with take, or by re-throwing after n attempts).

Avoiding deadlock in reentrant code C++11

I am working on refactoring some legacy code that suffers from deadlocks. There are two main root causes:
1) the same thread locking the same mutex multiple times, which should not be difficult to resolve, and
2) the code occasionally calls into user-defined functions which can re-enter the same code at the top level. I need to lock the mutex before calling the user-defined functions, but I might end up executing the same code again, which will result in a deadlock. So I need some mechanism to tell me that the mutex has already been locked and I should not lock it again. Any suggestions?
Here is a (very) brief summary of what the code does:
class TreeNode {
public:
    // Assign a new value to this tree node
    void set(const boost::any& value, boost::function<void (const TreeNode&)> validator) {
        boost::upgrade_lock<boost::shared_mutex> lock(mutexToRoot_);
        // call validator here
        boost::upgrade_to_unique_lock<boost::shared_mutex> ulock(lock);
        // set this TreeNode to value
    }

    // Retrieve the value of this tree node
    boost::any get() {
        boost::shared_lock<boost::shared_mutex> lock(mutexToRoot_);
        // get value for this tree node
    }

private:
    static boost::shared_mutex mutexToRoot_;
};
The problem is that the validator function can call into get(), which locks mutexToRoot_ on the same thread. I could modify mutexToRoot_ to be a recursive mutex but that would prevent other threads from reading the tree during get() operation, which is unwanted behavior.
Since C++11 you can use std::recursive_mutex, which allows the owning thread to call lock or try_lock again without blocking or reporting failure, whereas other threads block on lock (or receive false from try_lock) until the owning thread has called unlock as many times as it called lock/try_lock, as sketched below.
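A minimal sketch of that behaviour (the Tree class and validate() callback here are made up for illustration, not the poster's TreeNode):

#include <iostream>
#include <mutex>

class Tree {
public:
    void set(int value) {
        std::lock_guard<std::recursive_mutex> lock(mutex_);
        validate();        // the callback below re-enters get() on this same thread
        value_ = value;
    }
    int get() const {
        std::lock_guard<std::recursive_mutex> lock(mutex_);  // re-acquiring is fine: same owner
        return value_;
    }
private:
    void validate() const {
        std::cout << "current value: " << get() << '\n';     // would deadlock with a plain mutex
    }
    mutable std::recursive_mutex mutex_;
    int value_ = 0;
};

int main() {
    Tree t;
    t.set(42);             // no deadlock: the recursive mutex tracks its owning thread
    return 0;
}

Keep in mind, as the question already notes, that std::recursive_mutex is exclusive, so you give up the concurrent readers that boost::shared_mutex allows.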

Execution context in Event driven programming

I am reading about event-driven programming from the book Practical UML Statecharts in C/C++, 2nd Edition: Event-Driven Programming for Embedded Systems.
On page xxviii of the Introduction, the author says:
...the event-driven application must return control after handling
each event, so the execution context cannot be preserved in the
stack-based variables and the program counter as it is in a sequential
program. Instead, the event-driven application becomes a state
machine, or actually a set of collaborating state machines that
preserve the context from one event to the next in the static
variables.
I am unable to understand why the execution context cannot be preserved in the stack-based variables and the program counter once control is returned after handling the event.
Let's start with how the traditional sequential programming paradigm works. Suppose that you want to blink an LED on an embedded board. A common solution would be to write a program like this (e.g., see Arduino Blink tutorial):
while (1) {            /* RTOS task or a "superloop" */
    turn_LED_on();     /* turn the LED on  (computation) */
    delay(500);        /* wait for 500 ms  (polling or blocking) */
    turn_LED_off();    /* turn the LED off (computation) */
    delay(1000);       /* wait for 1000 ms (polling or blocking) */
}
The key point here is the delay() function, which waits in-line until the delay elapses. This waiting is called "blocking", because the calling program is blocked until delay() returns.
Please note that the Blinky program calls delay() in two different contexts: the first time after turn_LED_on() and the second time after turn_LED_off(). Each time, delay() returns to a different place in the code. This means that while the program is blocked, the information about the place in the code (the context of the call) is automatically preserved.
The trivial Blinky program is very simple, but in principle a blocking function, like delay(), could be called from other functions each with
complex if-else-while code. Still, delay() will be able to return to the exact point of the call, because the C programming language preserves the context of the call (in the call stack and the program counter).
But blocking makes the whole program unresponsive to any other events and therefore people came up with event-driven programming.
An event-driven program is structured around an event-loop. An example event-driven code could look like this:
while (1) {                  /* event-loop */
    Event *e = queue_get();  /* block when the event queue is empty */
    dispatch(e);             /* handle the event, cannot block! */
}
The main point is that the dispatch() "event-handler" function cannot call a blocking function like delay(). Instead, dispatch() can only perform some immediate action and must quickly return back to the event-loop. That way, the event-loop remains responsive at all times.
But, by returning, the dispatch() function removes its own stack frame from the call stack. So the call stack and program counter associated with calling dispatch() are always the same and are useless for "remembering" the execution context.
Instead, to blink the LED, the dispatch() function must rely on a variable (state) that remembers the state (on/off) of the LED. An example of how you could write such a dispatch() function is as follows:
static enum { OFF, ON } state = OFF;  /* start in the OFF state */
/* at start-up: timer_arm(1000);         arm a timer to generate the TIMEOUT event in 1000 ms */
void dispatch(Event *e) {
    switch (state) {
    case OFF:
        if (e->sig == TIMEOUT) {
            turn_LED_on();
            timer_arm(500);    /* next TIMEOUT in 500 ms */
            state = ON;        /* transition to the "ON" state */
        }
        break;
    case ON:
        if (e->sig == TIMEOUT) {
            turn_LED_off();
            timer_arm(1000);   /* next TIMEOUT in 1000 ms */
            state = OFF;       /* transition to the "OFF" state */
        }
        break;
    }
}
I hope you can see that dispatch() implements a state machine with states ON and OFF driven by one event TIMEOUT.

SetTimer returning non-zero but it's not the ID I supplied, and my callback never triggers

I have set up a message loop.
I call SetTimer like this:
SetTimer(null, 5, 1000, timerFunc_c);
The return value is a seemingly random number like 11422, and it never triggers my callback. If I set the timer like this:
SetTimer(msgWinHwnd, 5, 1000, timerFunc_c);
Then it returns 0, and GetMessage (with 0 for min and max) does trip with a WM_TIMER message; however, my callback is never called.
Do you know why, in the first situation, SetTimer does not return the ID I gave it? And why the callback never fires?
Thanks
This is documented behaviour for the SetTimer function:
nIDEvent [in]
Type: UINT_PTR
A nonzero timer identifier. If the hWnd parameter is NULL, and the nIDEvent does not match an existing timer, then it is ignored and a new timer ID is generated.
If your callback isn't ever called (it's hard to tell for sure from your question), check your GetMessage loop and make sure you're not specifying a window filter (e.g. you should be calling GetMessage(&msg, 0, ...); rather than GetMessage(&msg, msgWinHwnd, ...);). A minimal sketch follows.
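By way of illustration (a plain Win32 C++ sketch, not your actual code): keep the value SetTimer returns for a window-less timer, don't filter GetMessage by window, and let DispatchMessage invoke the callback.

#include <windows.h>

// TIMERPROC: DispatchMessage calls this for WM_TIMER messages that carry a callback
VOID CALLBACK timerFunc_c(HWND hwnd, UINT uMsg, UINT_PTR idEvent, DWORD dwTime)
{
    // idEvent is the value SetTimer returned, not the 5 passed in,
    // because the hWnd argument was NULL.
}

int main()
{
    // With hWnd == NULL the requested ID (5) is ignored and a new one is generated.
    UINT_PTR timerId = SetTimer(NULL, 5, 1000, timerFunc_c);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)   // NULL: no window filter, or thread messages are missed
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);                 // this is what ends up invoking timerFunc_c
    }

    KillTimer(NULL, timerId);                  // use the returned ID, not 5
    return 0;
}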

boost signals - How to control the lifetime of objects sent to subscribers? Smart pointers?

I am using boost::signals2 under Red Hat Enterprise Linux 5.3.
My signal creates a copy of the object and sends its pointer to subscribers. This was implemented for thread safety, to prevent the worker thread from updating a string property on the object at the same time it is being read (perhaps I should revisit the use of locks?).
Anyway, my concern is with multiple subscribers that dereference the pointer to the copied object on their own thread. How can I control object lifetime? How can I know all subscribers are done with the object and it is safe to delete the object?
typedef boost::signals2::signal< void (Parameter*) > signalParameterChanged_t;
signalParameterChanged_t m_signalParameterChanged;

// Worker Thread - Raises the signal
void Parameter::raiseParameterChangedSignal()
{
    Parameter* pParameterDeepCopied = new Parameter(*this);
    m_signalParameterChanged(pParameterDeepCopied);
}

// Read-Only Subscriber Thread(s) - GUI (and Event Logging thread) handles signal
void ClientGui::onDeviceParameterChangedHandler( Parameter* pParameter )
{
    cout << pParameter->toString() << endl;
    delete pParameter; // **** This only works for a single subscriber !!!
}
Thanks in advance for any tips or direction,
-Ed
If you really have to pass Parameter by pointer to your subscribers, then you should use boost::shared_ptr:
typedef boost::shared_ptr<Parameter> SharedParameterPtr;
typedef boost::signals2::signal< void (SharedParameterPtr) > signalParameterChanged_t;
signalParameterChanged_t m_signalParameterChanged;

// The signal source
void Parameter::raiseParameterChangedSignal()
{
    SharedParameterPtr pParameterDeepCopied(new Parameter(*this));
    m_signalParameterChanged(pParameterDeepCopied);
}

// The subscriber's handler
void ClientGui::onDeviceParameterChangedHandler( SharedParameterPtr pParameter )
{
    cout << pParameter->toString() << endl;
}
The shared parameter object sent to your subscribers will be automatically deleted when its reference count becomes zero (i.e. it goes out of scope in all the handlers).
Is Parameter really so heavyweight that you need to send it to your subscribers via pointer?
EDIT:
Please note that using shared_ptr takes care of lifetime management, but will not relieve you of the responsibility to make concurrent reads/writes to/from the shared parameter object thread-safe. You may well want to pass-by-copy to your subscribers for thread-safety reasons alone. In your question, it's not clear enough to me what goes on thread-wise, so I can't give you more specific recommendations.
Is the thread calling raiseParameterChangedSignal() the same as your GUI thread? Some GUI toolkits don't allow concurrent use of their API by multiple threads.
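If Parameter turns out to be cheap enough to copy, a self-contained sketch of the pass-by-copy alternative mentioned above (with a made-up, stripped-down Parameter, not your real class) could look like this:

#include <boost/signals2.hpp>
#include <iostream>
#include <string>

// Stand-in for the question's Parameter class (hypothetical, for illustration only).
struct Parameter
{
    std::string name;
    std::string toString() const { return name; }
};

// Pass Parameter by value: every slot works on its own copy, so there is
// nothing to delete and no shared object whose access needs coordinating.
typedef boost::signals2::signal<void (Parameter)> signalParameterChanged_t;

void guiHandler(Parameter p) { std::cout << "GUI: " << p.toString() << std::endl; }
void logHandler(Parameter p) { std::cout << "Log: " << p.toString() << std::endl; }

int main()
{
    signalParameterChanged_t sig;
    sig.connect(&guiHandler);   // e.g. the GUI subscriber
    sig.connect(&logHandler);   // e.g. the event-logging subscriber

    Parameter current;
    current.name = "speed=42";
    sig(current);               // each slot receives its own copy of 'current'
    return 0;
}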
