Reliable Asynchronous Handling of Domain Events - Go

In concurrent systems, domain events are typically handled asynchronously. In Go, a simple approach to asynchronous event handling can be implemented with channels, but the problem is that if something goes wrong while handling an event, or worse, the whole program crashes, the event is lost.
How can asynchronous domain events be handled reliably in a Go program? Specifically:
When an event handler fails, the event should not be purged from the event queue, so that it can be handled again at a later time.
If the whole program goes down, pending events have to be recovered and processed once it comes back up.

The first is relatively easy; you can have an error handler within the worker that re-queues the work in the event of an error.
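Here is a minimal sketch of that idea in Go: events arrive on a buffered channel, and the worker re-queues any event whose handler returns an error. The Event type, the failing handler, and the back-off are made up for illustration; note that this only protects against handler errors, not against the process dying.

package main

import (
    "fmt"
    "time"
)

// Event is a placeholder for a domain event; the real type is up to you.
type Event struct {
    ID      int
    Payload string
}

// worker handles events from the queue and re-queues any event whose
// handler returns an error, so it can be retried later.
func worker(queue chan Event, handle func(Event) error) {
    for ev := range queue {
        if err := handle(ev); err != nil {
            fmt.Printf("event %d failed (%v), re-queuing\n", ev.ID, err)
            // Re-queue from a separate goroutine so the worker never
            // blocks on a full queue; the sleep is a crude back-off.
            go func(ev Event) {
                time.Sleep(time.Second)
                queue <- ev
            }(ev)
            continue
        }
        fmt.Printf("event %d handled\n", ev.ID)
    }
}

func main() {
    queue := make(chan Event, 16)
    // Stand-in handler: even-numbered events always fail, to show the retry path.
    go worker(queue, func(ev Event) error {
        if ev.ID%2 == 0 {
            return fmt.Errorf("transient failure")
        }
        return nil
    })
    for i := 1; i <= 4; i++ {
        queue <- Event{ID: i, Payload: "hello"}
    }
    time.Sleep(3 * time.Second) // let the retries run; a real program would coordinate shutdown
}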
The second is much harder; your options are (a) roll your own bulletproof mechanism for writing events to disk and purging them, in a thread-safe way, once they're completed, or (b) use one of the many popular systems that have already proven reliable, e.g. RabbitMQ or Kafka, with the appropriate replication and redundancy to ensure the level of reliability you require. I would strongly recommend the latter.
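To illustrate option (b), here is a rough consumer sketch using the github.com/rabbitmq/amqp091-go client; the broker URL, queue name, and handler are placeholder assumptions, and details such as dead-lettering after repeated failures are left out. A durable queue plus manual acknowledgements means an event whose handler fails (or whose process dies mid-handling) is redelivered rather than lost.

package main

import (
    "log"

    amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
    // URL and queue name are placeholders for this sketch.
    conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    ch, err := conn.Channel()
    if err != nil {
        log.Fatal(err)
    }
    defer ch.Close()

    // Durable queue: it survives a broker restart (messages published as persistent survive too).
    q, err := ch.QueueDeclare("domain-events", true, false, false, false, nil)
    if err != nil {
        log.Fatal(err)
    }

    // autoAck=false: the broker keeps each message until we explicitly ack it.
    deliveries, err := ch.Consume(q.Name, "", false, false, false, false, nil)
    if err != nil {
        log.Fatal(err)
    }

    for d := range deliveries {
        if err := handleEvent(d.Body); err != nil {
            // Nack with requeue=true puts the event back on the queue
            // so it can be handled again later.
            d.Nack(false, true)
            continue
        }
        d.Ack(false)
    }
}

// handleEvent is a stand-in for the real domain event handler.
func handleEvent(body []byte) error {
    log.Printf("handling event: %s", body)
    return nil
}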

Related

ZeroMQ PUSH / PULL - how to know which events are pending in SEND BUFFER queue?

We have a service pair using the PUSH/PULL pattern of message communication. As mentioned in the docs, if the PULL service is down or not running, the sender will queue up to the high-water-mark number of events, and by default any .send() after that will block.
Now, while the app is in that blocking state, it could be killed or something else may happen, leading to the loss of the messages still in the queue.
I understand PUSH/PULL is not the best pattern if we want that kind of reliability, and we should probably use one of the other methods listed at https://zguide.zeromq.org/docs/chapter4/, but is there a way in the PUSH/PULL pattern to get a callback for the events still in the queue, say on app exit, periodic callbacks, or signals?
I also understand that I could use NOBLOCK, ZMQ_IMMEDIATE, or ZMQ_SNDTIMEO in such a situation, catch the error, and use application-level recovery (similar to the DLQ pattern), but I was looking for something available from the ZeroMQ library itself.
Q : "... how to know which events are pending in SEND BUFFER queue ?"
A :Well,having used ZeroMQ since v2.1, v3.x, till v4.x in 2022-Q1, there has never been a way, how a user-level code may interact with ZeroMQ internal queues and/or state(s) as there was no such method in c-API to do so.
Q : "... is there a way in PUSH/PULL method to get event call back on the events still on queue on say app exit/periodic callbacks/signals?"
A :Well, let's solve this by using a concurrently operated signalling-socket, for receiving POSACK-messages from "live"-clients, i.e. those, that can and do receive messages - thus being able to back-throttle messages for those, that did not respond in reasonable TAT. Using a mix of several, properly selected Scalable Formal Communications Patterns archetypes to work in cooperation, helps solve this "soft"-signalling control. Without an ambition to solve all details, a set of one-PUB.bind() / many-SUB.connect()-sockets for selectively directed payload-transport with subscription-based controls and one-PULL.bind() / many-PUSH.connect()-s for "soft"-control signalling of still-alive-heartbeats, traffic back-throttling and similar services

If nobody needs reliable messaging on transport level, how to implement reliable PubSub on business level?

This question is mostly out of curiosity. I read this article about WS-ReliableMessaging by Marc de Graauw some time ago and agreed that reliable messaging should be applied at the business level whenever possible.
Now, the question is this: he explains clearly what his approach is in a point-to-point setting, but I fail to see how you could implement reliable messaging at the business level in a publish/subscribe situation.
I will try to demonstrate the difference by showing commands (point-to-point) vs. events (publish/subscribe). Note that these examples are highly simplified.
Command: Transfer(uniqueId, amount, sourceAccount, recipientAccount)
If the account holder sends this transfer, he could wait for the confirmation MoneyTransferred (assuming this event contains a reference to the uniqueId of the Transfer command).
If the account holder doesn't receive the MoneyTransferred event within a given timeout period, he could send the same command again (assuming, of course, that the command processor is idempotent).
So I see how reliable messaging could work at the business level in a point-to-point fashion.
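For what it's worth, here is a rough Go sketch of that point-to-point pattern; the types, the channel-based transport, and the timeout are only for illustration. The sender retries the command with the same uniqueId until it sees the matching confirmation, and the processor deduplicates on that id.

package main

import (
    "fmt"
    "time"
)

// Transfer is the command; UniqueID is what makes retries safe.
type Transfer struct {
    UniqueID  string
    Amount    int
    Source    string
    Recipient string
}

// MoneyTransferred is the confirmation event, referencing the command's UniqueID.
type MoneyTransferred struct {
    UniqueID string
}

// processor is idempotent: a retried command with a known UniqueID is
// confirmed again but never applied twice.
func processor(cmds <-chan Transfer, events chan<- MoneyTransferred) {
    seen := map[string]bool{}
    for cmd := range cmds {
        if !seen[cmd.UniqueID] {
            seen[cmd.UniqueID] = true
            // ...apply the transfer exactly once here...
        }
        events <- MoneyTransferred{UniqueID: cmd.UniqueID}
    }
}

// sendReliably retries the same command until a matching confirmation arrives.
func sendReliably(cmd Transfer, cmds chan<- Transfer, events <-chan MoneyTransferred) {
    for {
        cmds <- cmd
        select {
        case ev := <-events:
            if ev.UniqueID == cmd.UniqueID {
                fmt.Println("confirmed:", ev.UniqueID)
                return
            }
        case <-time.After(500 * time.Millisecond):
            fmt.Println("timeout, retrying:", cmd.UniqueID)
        }
    }
}

func main() {
    cmds := make(chan Transfer, 8)
    events := make(chan MoneyTransferred, 8)
    go processor(cmds, events)
    sendReliably(Transfer{UniqueID: "tx-1", Amount: 100, Source: "A", Recipient: "B"}, cmds, events)
}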
Now, say the previous command succeeded and produced a MoneyTransferred event. Somewhere in the system we have an event processor (MoneyTransferEmailNotifier) that handles MoneyTransferred events and sends an email notification to the recipient of the transfer.
This MoneyTransferEmailNotifier is subscribed to MoneyTransferred events. But note that the system sending the MoneyTransferred event does not really care who the listeners are or how many there are. The whole point is the decoupling: I raise an event and don't care whether there are zero or 20 listeners subscribed to it.
At this point, if there is no reliable messaging (minimally at-least-once-delivery) provided by the infrastructure, how can we prevent the loss of the MoneyTransferred event? I do want the recipient to get his e-mail notification.
I fail to see how any real 'business-level' solution will resolve this.
(1) One of the solutions I can think of is explicitly subscribing to events at the 'business level' and thereby bypassing any infrastructure component. But aren't we then introducing infrastructure into our business?
(2) The other 'solution' would be by introducing a process manager that does something like this:
PM receives Transfer command
PM forwards Transfer command to the accounts subsystem
If successful, sends command SendEmailNotification(recipient) to the notification subsystem
This does seem to be the solution that DDD prescribes, correct? But doesn't this introduce more coupling?
What do you think?
Edit 2016-04-16
Maybe the root question is a little simpler: if you do not have an infrastructural component that ensures at-least-once or exactly-once delivery, how can you ensure (when you're on an at-most-once infrastructure) that the events you emit will be received?
Not all events need to be delivered, but many are key (like the example of sending the confirmation email).
This MoneyTransferEmailNotifier is subscribed to MoneyTransferred events. But note that the system sending the MoneyTransferred event does not really care who the listeners are or how many there are. The whole point is the decoupling: I raise an event and don't care whether there are zero or 20 listeners subscribed to it.
Your tangle, I believe, is here: the assumption that only the publish/subscribe middleware can deliver events to where they need to go.
Greg Young covers this in his talk on polyglot data (slides).
Summarizing: the pub/sub middleware is in the way. A pull-based model, where consumers retrieve data from a durable event store, gives you a reliable way to retrieve the messages from the store. So you pull the data from the store, and then use the business-level data to recognize work that has already been done.
For instance, upon retrieving the MoneyTransferred event with its business data, the process manager looks for an EmailSent event with matching business data. If the second event is found, the process manager knows that at least one copy of the email was successfully delivered, and no more work needs to be done.
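As a rough illustration of that pull-and-check loop, here is a sketch in Go; the event store interface, the event shapes, and the polling interval are invented for the example.

package notifier

import "time"

// Event is a minimal shape for stored events; real stores carry more metadata.
type Event struct {
    Type     string // e.g. "MoneyTransferred" or "EmailSent"
    Business string // business-level correlation data, e.g. the transfer's unique id
}

// EventStore is a hypothetical durable store that the process manager pulls from.
type EventStore interface {
    // ReadFrom returns all events appended at or after the given position.
    ReadFrom(position int) []Event
}

// Run pulls events from the store and sends an email for each MoneyTransferred
// event that has no matching EmailSent event yet. Because it re-reads from
// position 0 on startup, nothing is lost if the process was down for a while.
func Run(store EventStore, sendEmail func(business string)) {
    var history []Event
    for {
        batch := store.ReadFrom(len(history))
        for _, ev := range batch {
            history = append(history, ev)
            if ev.Type != "MoneyTransferred" {
                continue
            }
            if hasEmailSent(history, ev.Business) {
                continue // at least one copy of the email already went out
            }
            sendEmail(ev.Business)
            // a real notifier would append an EmailSent event to the store here
        }
        // A pushed notification (pub/sub) would only make this loop wake up sooner.
        time.Sleep(time.Second)
    }
}

func hasEmailSent(events []Event, business string) bool {
    for _, ev := range events {
        if ev.Type == "EmailSent" && ev.Business == business {
            return true
        }
    }
    return false
}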
The push-based models (pub/sub, UDP multicast) become latency optimizations: the arrival of the pushed message tells the subscriber to pull earlier than it normally would.
In the extreme push case, you pack into the pushed message enough information that the subscriber(s) can act upon it immediately, and trust that the idempotent handling of the message will prevent problems when the redundant copy of the message arrives on the slower channel.
If nobody needs reliable messaging on transport level, how to implement reliable PubSub on business level?
The original article does not state that "nobody needs reliable messaging on the transport level"; it states that the ordering of messages should be enforced at the business level because, in some cases, that ordering is an important characteristic of the business.
In any case, pub/sub is at the infrastructure level; you can't say that you implement pub/sub at the business level. It doesn't make sense.
But then how could you ensure once-only delivery at the business level? By using a Saga/Process manager; one of their important responsibilities is exactly that. You can combine that with idempotent Aggregates. Also, you could identify terms from the Ubiquitous Language that emphasize ordering, like transaction phase, and include them in your domain models (for example, as properties of the events).
If you do not have an infrastructural component that ensures at-least-once or exactly-once delivery, how can you ensure (when you're in an at-most-once infrastructure) that your emitted events will be received?
If you do not have at-least-once delivery, then you could use the first event, the one that initiates the whole process. I would use event polling and a Saga that ensures that every important step in the process is reached at the right moment.
In your case, as the sending of the email is an important business aspect, I would include it as a step in the process.

Long running processing in handlers with boost::asio

I'm designing a network server based on boost::asio. I need to perform long-running processing jobs in handlers, and I think this processing should be moved from the handlers to a separate thread pool where I would have better control (e.g. to prioritize tasks). The handlers would just enqueue a new task in a job queue.
There would also be a response queue from which responses are dequeued and sent back to the clients (clients send requests synchronously).
I wonder if this makes sense or if I am missing something.
The short answer is yes; the long answer is it depends. Generally speaking, if you want higher network throughput you should minimize the processing performed in the handlers and offload it to a thread. This is especially important if you have causality requirements for the data that you receive, since async_receive doesn't guarantee the execution order of handlers.
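The question is about boost::asio and C++, but the shape of the design (handlers only enqueue, a separate worker pool does the heavy processing, a response queue carries replies back) is language-agnostic. Here is a rough sketch in Go, this page's main language, with all the names invented for illustration.

package server

import "sync"

// Job is whatever a network handler parsed from a request.
type Job struct {
    ClientID string
    Payload  []byte
}

// Response is what eventually gets written back to the client.
type Response struct {
    ClientID string
    Result   []byte
}

// Pool decouples I/O handlers from long-running processing: handlers call
// Enqueue and return immediately; workers do the heavy lifting and push
// results onto the response queue.
type Pool struct {
    jobs      chan Job
    Responses chan Response
    wg        sync.WaitGroup
}

func NewPool(workers int, process func(Job) Response) *Pool {
    p := &Pool{
        jobs:      make(chan Job, 128),
        Responses: make(chan Response, 128),
    }
    for i := 0; i < workers; i++ {
        p.wg.Add(1)
        go func() {
            defer p.wg.Done()
            for job := range p.jobs {
                // Long-running work happens here, not in the I/O handler.
                p.Responses <- process(job)
            }
        }()
    }
    return p
}

// Enqueue is what an async read handler would call instead of doing the work itself.
func (p *Pool) Enqueue(j Job) { p.jobs <- j }

// Close stops the workers once all queued jobs are processed.
func (p *Pool) Close() {
    close(p.jobs)
    p.wg.Wait()
    close(p.Responses)
}

Prioritization, as mentioned in the question, could be added by giving the pool several job channels and having workers drain the higher-priority one first.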

What are alternatives to Win32 PulseEvent() function?

The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is “… unreliable and should not be used by new applications. Instead, use condition variables”. However, condition variables cannot be used across process boundaries like (named) events can.
I have a scenario that is cross-process, cross-runtime (native and managed code) in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is used (and set to signaled state) by the producer using this PulseEvent function when it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know if the event has any listeners, nor does it need to know if the event was successfully acted upon. On the other hand, I do not want any consumers to ever miss any events. In other words, the system needs to be perfectly reliable – but the producer does not need to know if that is the case or not. The scenario can be thought of as a “clock ticker” – i.e., the producer provides a semi-regular signal for zero or more consumers to count. And all consumers must have the correct count over any given period of time. No polling by consumers is allowed (performance reasons). The ticker is just a few milliseconds (20 or so, but not perfectly regular).
Raymond Chen (The Old New Thing) has a blog post pointing out the "fundamentally flawed" nature of the PulseEvent() function, but I do not see an alternative for my scenario from Chen or the posted comments.
Can anyone please suggest one?
Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads. And the solution needs to have high performance, in that consumers must be able to act within 10 ms of each event.
I think you're going to need something a little more complex to hit your reliability target.
My understanding of your problem is that you have one producer and an unknown number of consumers all of which are different processes. Each consumer can NEVER miss any events.
I'd like more clarification as to what missing an event means.
i) If a consumer has started to run and got to just before it waits on your notification method when an event occurs, should it process that event even though it wasn't quite ready at the point the notification was sent? (i.e. when is a consumer considered to be active: when it starts, or when it processes its first event?)
ii) Likewise, if the consumer is processing an event and the code that waits on the next notification hasn't yet begun its wait (I'm assuming a Wait -> Process -> Loop to Wait code structure), should it know that another event occurred whilst it was looping around?
I'd assume that i) is a "not really", as it's a race between process start-up and being "ready", and that ii) is a "yes"; that is, notifications are effectively queued per consumer once the consumer is present, and each consumer gets to consume all events that are produced whilst it's active and doesn't get to skip any.
So, what you're after is the ability to send a stream of notifications to a set of consumers where a consumer is guaranteed to act on all notifications in that stream from the point where it acts on the first to the point where it shuts down. i.e. if the producer produces the following stream of notifications
1 2 3 4 5 6 7 8 9 0
and consumer a) starts up and processes 3, it should also process 4-0
if consumer b) starts up and processes 5 but is shut down after 9 then it should have processed 5,6,7,8,9
if consumer c) was running when the notifications began it should have processed 1-0
etc.
Simply pulsing an event won't work. If a consumer is not actively waiting on the event when it is pulsed, it will miss the event, so we will fail if events are produced faster than we can loop around to wait on the event again.
Using a semaphore also won't work: if one consumer runs so much faster than another that it can loop back to the semaphore call before the other completes its processing, and another notification arrives within that time, then one consumer could process an event more than once and another could miss one. That is, you may well release 3 threads (if the producer knows there are 3 consumers), but you can't ensure that each consumer is released exactly once.
A ring buffer of events (tick counts) in shared memory, with each consumer knowing the value of the event it last processed and with consumers alerted via a pulsed event, should work, at the expense of some consumers sometimes being briefly out of sync with the ticks; that is, if they miss one they will catch up the next time they get pulsed. As long as the ring buffer is big enough that all consumers can process the events before the producer wraps around the buffer, you should be OK.
With the example above, if consumer d) misses the pulse for event 4 because it wasn't waiting on its event at the time, and it then settles into a wait, it will be woken when event 5 is produced; since its last processed count is 3, it will process 4 and 5 and then loop back to the event...
If this isn't good enough then I'd suggest something like PGM via sockets to give you a reliable multicast; the advantage of this would be that you could move your consumers off onto different machines...
The reason PulseEvent is "unreliable" is not so much because of anything wrong in the function itself, just that if your consumer doesn't happen to be waiting on the event at the exact moment that PulseEvent is called, it'll miss it.
In your scenario, I think the best solution is to keep the counter manually yourself. The producer thread keeps a count of the current "clock tick", and when a consumer thread starts up, it reads the current value of that counter. Then, instead of using PulseEvent, the producer increments the "clock ticks" counter and uses SetEvent to wake all threads waiting on the tick. When a consumer thread wakes up, it checks its own "clock tick" value against the producer's "clock ticks", so it knows how many ticks have elapsed. Just before it waits on the event again, it can check whether another tick has occurred.
I'm not sure if I described the above very well, but hopefully that gives you an idea :)
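As a rough in-process illustration of that last-seen-counter idea (sketched in Go rather than Win32, so it is only an analogue of the cross-process case): the producer bumps a shared counter and broadcasts, and each consumer remembers the last tick it handled and catches up when it wakes, so a missed wake-up loses nothing.

package ticks

import "sync"

// Ticker is an in-process analogue of the shared-counter idea: the producer
// increments a counter and broadcasts; consumers remember the last tick they
// processed and catch up when they wake, so a missed wake-up loses nothing.
type Ticker struct {
    mu    sync.Mutex
    cond  *sync.Cond
    count int
}

func NewTicker() *Ticker {
    t := &Ticker{}
    t.cond = sync.NewCond(&t.mu)
    return t
}

// Tick is called by the producer: bump the shared count and wake everyone
// (roughly analogous to SetEvent on a manual-reset event).
func (t *Ticker) Tick() {
    t.mu.Lock()
    t.count++
    t.mu.Unlock()
    t.cond.Broadcast()
}

// Consume runs a consumer that calls handle exactly once per tick,
// starting from the count observed when it was started.
func (t *Ticker) Consume(handle func(tick int)) {
    t.mu.Lock()
    last := t.count // a newly started consumer only counts ticks from now on
    for {
        for t.count == last {
            t.cond.Wait() // releases the lock while waiting
        }
        // Catch up on every tick that may have been "missed" while not waiting.
        for last < t.count {
            last++
            t.mu.Unlock()
            handle(last)
            t.mu.Lock()
        }
    }
}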
There are two inherent problems with PulseEvent:
if it's used with auto-reset events, it releases only one waiter.
threads might never be awakened if they happen to have been removed from the waiting queue by an APC at the moment of the PulseEvent.
An alternative is to broadcast a window message and have each listener create a top-level message-only window that listens for this particular message.
The main advantage of this approach is that you don't have to block your thread explicitly. The disadvantage of this approach is that your listeners have to be STA (can't have a message queue on an MTA thread).
The biggest problem with that approach is that the listener's processing of the event will be delayed by the amount of time it takes the message queue to get to that message.
You can also make sure you use manual-reset events (so that all waiting threads are awakened) and do SetEvent/ResetEvent with some small delay (say 150 ms) to give threads temporarily woken by an APC a better chance to pick up your event.
Of course, whether any of these alternative approaches will work for you depends on how often you need to fire your events and whether you need the listeners to process each event or just the last one they get.
If I understand your question correctly, it seems like you can simply use SetEvent. It will release one thread. Just make sure it is an auto-reset event.
If you need to allow multiple threads, you could use a named semaphore with CreateSemaphore. Each call to ReleaseSemaphore increases the count. If the count is 3, for example, and 3 threads wait on it, they will all run.
Events are more suitable for communication between the threads inside one process (unnamed events). As you have described, you have zero or more clients that need to read something of interest. I understand that the number of clients changes dynamically. In this case, the best choice is a named pipe.
Named Pipe is King
If you need to just send data to multiple processes, it's better to use named pipes, not events. Unlike auto-reset events, you don't need a separate pipe of your own for each of the client processes. Each named pipe has an associated server process and one or more (or even zero) associated client processes. When there are many clients, many instances of the same named pipe are automatically created by the operating system, one for each client. All instances of a named pipe share the same pipe name, but each instance has its own buffers and handles, and provides a separate conduit for client/server communication. The use of instances enables multiple pipe clients to use the same named pipe simultaneously. Any process can act as both a server for one pipe and a client for another pipe, and vice versa, making peer-to-peer communication possible.
If you use a named pipe, there is no need for events at all in your scenario, and the data will have guaranteed delivery no matter what happens to the processes; each process may experience long delays (e.g. due to swapping), but the data will finally be delivered as soon as possible, without any special involvement on your part.
On The Events
If you are still interested in the events -- the auto-reset event is king! ☺
The CreateEvent function has the bManualReset argument. If this parameter is TRUE, the function creates a manual-reset event object, which requires the use of the ResetEvent function to set the event state to non-signaled. This is not what you need. If this parameter is FALSE, the function creates an auto-reset event object, and the system automatically resets the event state to non-signaled after a single waiting thread has been released.
These auto-reset events are very reliable and easy to use.
If you wait for an auto-reset event object with WaitForMultipleObjects or WaitForSingleObject, it reliably resets the event upon exit from these wait functions.
So create events the following way:
EventHandle := CreateEvent(nil, FALSE, FALSE, nil); // no security attributes, bManualReset = FALSE (auto-reset), initially non-signaled, unnamed
Wait for the event from one thread and do SetEvent from another thread. This is very simple and very reliable.
Don't ever call ResetEvent (since the event resets automatically) or PulseEvent (since it is not reliable and is deprecated). Even Microsoft has admitted that PulseEvent should not be used. See https://msdn.microsoft.com/en-us/library/windows/desktop/ms684914(v=vs.85).aspx
This function is unreliable and should not be used, because only those threads will be notified that are in the "wait" state at the moment PulseEvent is called. If they are in any other state, they will not be notified, and you may never know for sure what the thread state is. A thread waiting on a synchronization object can be momentarily removed from the wait state by a kernel-mode Asynchronous Procedure Call, and then returned to the wait state after the APC is complete. If the call to PulseEvent occurs during the time when the thread has been removed from the wait state, the thread will not be released because PulseEvent releases only those threads that are waiting at the moment it is called.
You can find out more about the kernel-mode Asynchronous Procedure Calls at the following links:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms681951(v=vs.85).aspx
http://www.drdobbs.com/inside-nts-asynchronous-procedure-call/184416590
http://www.osronline.com/article.cfm?id=75
We have never used PulseEvent in our applications. As for auto-reset events, we have been using them since Windows NT 3.51 (although they appeared in the first 32-bit version of NT, 3.1) and they work very well.
Your Inter-Process Scenario
Unfortunately, your case is a little bit more complicated. You have multiple threads in multiple processes waiting for an event, and you have to make sure that all of those threads actually receive the notification. There is no reliable way to do this other than to create a separate event for each consumer. So you will need as many events as there are consumers. Besides that, you will need to keep a list of registered consumers, where each consumer has an associated event name. So, to notify all the consumers, you will have to call SetEvent in a loop over all the consumer events. This is a very fast, reliable, and cheap approach. Since you are using cross-process communication, the consumers will have to register and de-register their events via some other means of inter-process communication, like SendMessage. For example, when a consumer process registers itself with your main notifier process, it sends a SendMessage to your process to request a unique event name. You just increment a counter and return something like Event1, Event2, etc., creating events with those names, so the consumers will open the existing events. When a consumer de-registers, it closes the event handle that it opened earlier and sends another SendMessage to let you know that you should call CloseHandle on your side too, to finally release the event object. If the consumer process crashes, you will end up with a dangling event, since you will not know that you should call CloseHandle, but this should not be a problem: events are very fast and very cheap, and there is virtually no limit on kernel objects (the per-process limit on kernel handles is 2^24). If you are still concerned, you may do the opposite: the clients create the events and you open them. If an event won't open, then the client has crashed and you just remove it from the list.

I Need an Analogy: Triggers and Events

For another question, I'm running into a misconception that seems to arise here at SO occasionally. Some questioners seem to think that Triggers are to Databases as Events are to OOP.
Does anyone have a good analogy to explain why this is a flawed comparison, and the consequences of misapplying it?
EDIT:
Bill K. has hit it correctly, but maybe doesn't see the importance of the critical difference between the event and the callback function that strikes me, anyway. Triggers actually cause code to execute every time the event occurs; callbacks only occur when one has been registered for an event (which is not true for the vast majority of events); and even then, in most cases the callback's first action is to deregister itself (or at least the callback contains a qualification exit so it only executes once).
If you write a trigger, it will unfailingly execute every time the event occurs, because there's no way to register or deregister the code segment.
Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity).
There are exceptions and mitigations in both cases, but the basic patterns of triggers and callbacks are mostly opposite in intention and implementation. Often the distinction doesn't seem to have fully sunk in. (IMHO, YMMV). :D
They're not the same thing, but they're not unrelated.
In both cases, the mechanism can be described approximately as follows:
Some block of code declares "interest" for changes in state.
Your application affects some change.
The system runs the block of code in response to the change.
Perhaps a database trigger is more like a callback function that has registered interest in a specific event.
Here's an analogy: the event is a rubber ball that you throw. The trigger is a dog that chases after a thrown ball.
If there's some other difference that you have in mind that makes it "dangerous" (note: OP has edited this choice of word out of the question) to compare triggers and events, you can describe what you mean.
Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity).
Okay, I see what you mean more clearly. But I think it's in some ways subject to the implementation. I wouldn't assume an event handler has to deregister itself; it depends on the system you're using. A UNIX signal handler, for example, has to prevent itself from catching a new signal while it's already handling one. But a Java servlet inside a Tomcat container should be thread-safe because it may be called concurrently by multiple threads. They're both event handlers, of different kinds.
Event handlers may be synchronous or asynchronous. Can a handler in a publish/subscribe system read messages that were posted recently, but prior to the handler registering its interest? Or only messages posted concurrently?
There's another important reason to treat triggers as different from event handlers: I frequently recommend against doing anything in a trigger that affects state outside the database.
For example, sending an email, writing to a file, posting to a web service, or forking a process is inappropriate inside a trigger, if for no other reason than that the transaction that spawned the trigger may be rolled back, and you can't roll back those external effects. You may not even be using explicit transactions; but say you send an email in a BEFORE trigger, and then the operation fails because of a NOT NULL constraint or something.
Instead, all such work should be done by code in one's application, after one has confirmed that the SQL operation was successful and the transaction committed.
It's too bad that people keep trying to do inappropriate work inside a trigger. There are senior developers at MySQL who promote UDFs to read and write data in memcached. Wow -- I just noticed these have made it into the MySQL 6.0 product!! Shocking!
So here's another attempt at an analogy, comparing triggers and events to the process of a criminal trial:
A BEFORE trigger is an allegation.
An AFTER trigger is an indictment.
COMMIT is a conviction after a guilty verdict.
ROLLBACK is an acquittal after an innocent verdict.
You only want to put the perpetrator in prison after they are convicted.
Whereas an EVENT is the crime itself.
