Track causality in message queue - parallel-processing

I'm working on a system where I have to process multiple messages while keeping a partial order between them. I have a RabbitMQ queue with messages of two types: item create and item update.
Consider a queue with six messages:
CREATE1 CREATE2 UPDATE1 UPDATE2 UPDATE1 UPDATE1
If I process them one by one, everything is fine, but it's very slow because I have lots of messages.
If I read them into some buffer, I can process them in parallel, but then I might process an UPDATE1 for the first item before it has been created. Worse, the last update might be processed before the previous one and erase the latest item state.
I can add an extra field to the message, or publish it with an extra header, e.g. MESSAGE_ID:10, to make sure that all messages for one item share the same MESSAGE_ID. The problem is that I don't know what to do with it.
How can I read multiple items from the queue at once without breaking causality between messages?
The pseudocode that I imagine for this task could be:
const prefetchItemsCount = 20
let buffer = new List<Message>()
let seenIds = new Set()
foreach item in queue
    if seenIds.Contains(item.MessageId)
        break   // stop at the first repeated id to keep per-item order
    seenIds.Add(item.MessageId)
    buffer.Add(item)
    if buffer.Count == prefetchItemsCount
        break
return buffer
So in our example it would return the following batches:
CREATE1 CREATE2
UPDATE1 UPDATE2
UPDATE1
UPDATE1
which makes it almost twice as fast.
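A minimal C# sketch of what I mean, using an in-memory queue as a stand-in for the broker; processAsync and ack are hypothetical application callbacks, not RabbitMQ API:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public record Message(int MessageId, string Payload);

public static class BatchConsumer
{
    // Drains batches whose MessageIds are pairwise distinct, so every
    // message in a batch targets a different item and can run in parallel.
    public static async Task ConsumeAsync(
        Queue<Message> queue,
        Func<Message, Task> processAsync,   // hypothetical handler
        Action<Message> ack,                // hypothetical broker ack
        int prefetchItemsCount = 20)
    {
        while (queue.Count > 0)
        {
            var batch = new List<Message>();
            var seenIds = new HashSet<int>();

            // Stop at the first repeated id: taking it into the same
            // batch would let two messages for one item race each other.
            while (queue.Count > 0
                   && batch.Count < prefetchItemsCount
                   && seenIds.Add(queue.Peek().MessageId))
            {
                batch.Add(queue.Dequeue());
            }

            await Task.WhenAll(batch.Select(processAsync));
            foreach (var msg in batch)
                ack(msg);
        }
    }
}

On the six-message example this produces exactly the batches listed above.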

How can I read multiple items from the queue at once without breaking causality between messages?
Nice case, indeed.
Taken literally, the time-domain idea of "at once" works directly against what you call "causality".
The queue's ingress side is by definition purely serial: nothing happens at once; one message goes after another. Even if concurrent scheduling is exposed to external agents, the queue's internal management conforms to a strictly sequential ordering of message flow and delivery (see also time-stamping, persistence and related artefacts).
Causality also means a cause -> effect ordering of events, both in the abstract sense of the relation and in the real-time flow of how things actually happen, so it is practically an anti-pattern to "at once".
Last but not least, causality has to handle an additional concern: the latency between the cause side and the effect side of the series of events, which is often a finite-state automaton, typically with a much richer state space than just { 0 -> CREATE -> UPDATE [ -> UPDATE [...] ] -> }.
Result?
While one may "read" with some degree of concurrent scheduling, the FSA/causality conditions keep the post-processing of delivered event messages principally serial within each causal chain.
More requirements come if the messaging framework is broker-less and without guaranteed robustness against lost messages, message reordering, message authenticity or message content corruption.
That is where the devils start to dance against your attempts to build a consistent, robust, distributed transaction-processing FSA :o)

Related

EventStore Competing Consumer Ordering

I am in the process of scaling an application out horizontally, and have realised that read model updates (an external projection via an event handler) will need to be handled on a competing-consumer basis.
I initially assumed that I would need to ensure ordering, but this requirement is message dependent. In the case of shopping cart checkouts where I want to know totals, I can add the totals regardless of order: get the message, update the SQL database, and ACK the message.
I am now racking my brains to think of scenarios/messages where that would not hold, but I know such cases exist. Some extra clarity and examples would be immensely useful.
The questions I need help with are:
1. For what types of messages is ordering important, and how would this be resolved using the messages as-is?
2. How would we know which event to resubscribe from when processes join/leave? I can see possible timing issues where a subscription could be requested from a message that had just been processed by another process.
I see there is a Pinned consumer strategy for best-effort affinity of stream to subscriber, but this is not guaranteed. I could solve this by making a specific stream single-threaded, processing only those messages in order. Is it possible for a process to have multiple subscriptions to different streams?
To use your example of a shopping cart, ordering would be potentially important for the following events:
Add item
Update item count
Remove item
You might have sequences like A: 'Add item, remove item' or B: 'Add item, Update item count (to 2), Update item count (to 3)'. For A, if you process the remove before the add, obviously you're in trouble. For B, if you process two update item counts out of order, you'll end up with the wrong final count.
This is normally scaled out by using some kind of sharding scheme, where a subset of all aggregates is allocated to each shard. For Event Store, I believe this can be done by creating a user-defined projection using partitionBy to partition the stream into multiple streams (aka 'shards'). Then you need to allocate partitions/shards to processing nodes in some way. Some technologies are built around this approach to horizontal scaling (Kafka and Kinesis spring to mind).
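Not Event Store's actual projection syntax, but a minimal C# sketch of the allocation idea, assuming a fixed shard count: hash the partition key so every event for a given aggregate always lands on the same shard, where it is processed in order.

public static class Sharding
{
    // Maps a stream/aggregate id to a shard so that all events for one
    // aggregate go to the same node and are processed in order there.
    public static int ShardFor(string streamId, int shardCount)
    {
        // FNV-1a: a stable hash, unlike string.GetHashCode(), which is
        // randomized per process on modern .NET.
        uint hash = 2166136261;
        foreach (char c in streamId)
            hash = (hash ^ c) * 16777619;
        return (int)(hash % (uint)shardCount);
    }
}

ShardFor("cart-42", shardCount) always yields the same shard, so the add/update/remove events for that cart can never race each other across nodes.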

CQRS - out of order messages

Suppose we have three different services producing events, each publishing to its own event store.
Each of these services consumes the other services' events. This is because each service has to process the other services' events AND build its own projection. Each service runs on multiple instances.
The most straightforward way to do it (for me) was to put "something" in front of each ES which picks events and publishes them (pub/sub) to queues of every other service.
This is perfect because every service can subscribe to whichever topics it likes, while the event publisher does the job, and if a service is unavailable, events are still delivered. This seems to me to guarantee high scalability and availability.
My problem is the queue. I can't get an easily scalable queue that guarantees ordering of messages. What I have actually guarantees "slightly out of order" with at-least-once delivery: to be clear, it's AWS SQS.
So, the ordering problems are:
No order guaranteed across events from the same event stream.
No order guaranteed across events from the same ES.
No order guaranteed across events from different ES (different services).
I thought I could solve the first two problems just by keeping track of the "sequence number" of the events coming from each ES.
This would be done by tracking the last sequence number of each topic from which we consume events.
This should be easy both for reacting to events and for building our projection.
Then, when I pop an event from the queue, if eventSequenceNumber > previousAppliedEventSequenceNumber + 1, I re-enqueue it (or make it invisible for a certain time).
But it turns out that this solution destroys performance when events are produced at high rates (I can use a visibility timeout or other tricks; the result should be the same).
This is because while I'm expecting event 10 and set event 11 aside for a moment, I must also set aside all events (from that ES) with sequence numbers after 11, until event 11 shows up again and is effectively processed.
Other difficulties were:
where to keep track of the event's sequence number for building the projection;
how to keep track of the event's sequence number for building the projection so that, when applying it, I have a consistent lastSequenceNumber.
What am I missing?
P.S.: for the third problem, consider the following scenario. We have a UserService and a CartService. The CartService has a projection which, for each user, keeps track of the products in the cart. Each cart projection must also hold the user's name and other info coming from the UserCreated event published by the UserService. If UserCreated arrives after ProductAddedToCart, the normal flow requires throwing an exception because the user doesn't exist yet.
What am I missing?
You are missing flow -- consumers pull messages from sources, rather than having sources push the messages to the consumers.
When I wake up, I check my bookmark to find out which of your messages I read last, and then ask you if there have been any since. If there have, I retrieve them from you in order (think "document message"), also writing down the new bookmarks. Then I go back to sleep.
The primary purpose of push notifications is to interrupt the sleep period (thereby reducing latency).
With SQS acting as a queue, the idea is that you read all of the enqueued messages at once. If there are no gaps, you can order the collection and then start processing and acking. If there are gaps, you either wait (leaving the messages in the queue) or you go to the event store to fetch copies of the missing messages.
There's no magic -- if the message pipeline is promising "at least once" delivery, then the consumers must take steps to recognize duplicate messages as they arrive.
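A minimal sketch of that consumer-side bookkeeping, assuming each event carries a per-source sequence number starting at 1: stash what arrives early, drop duplicates, and release a contiguous run as soon as the gap closes.

using System.Collections.Generic;

public class Resequencer<TEvent>
{
    private readonly SortedDictionary<long, TEvent> _pending =
        new SortedDictionary<long, TEvent>();
    private long _nextExpected = 1;

    // Feed events in any order; get back the ones that are now safe to
    // apply, in sequence. Duplicates of already-applied events are dropped.
    public IEnumerable<TEvent> Accept(long sequence, TEvent @event)
    {
        if (sequence < _nextExpected)
            yield break;                 // duplicate of an applied event
        _pending[sequence] = @event;     // stash; overwriting is harmless

        // Drain the contiguous run starting at the expected number.
        while (_pending.TryGetValue(_nextExpected, out TEvent next))
        {
            _pending.Remove(_nextExpected);
            _nextExpected++;
            yield return next;
        }
    }
}

Feeding it sequences 1, 3, 2 releases event 1 immediately, holds event 3, then releases 2 and 3 together once the gap closes.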
If UserCreated arrives after ProductAddedToCart, the normal flow requires throwing an exception because the user doesn't exist yet.
Review Race Conditions Don't Exist, by Udi Dahan: "A microsecond difference in timing shouldn’t make a difference to core business behaviors."
The basic issue is assuming we can get messages IN ORDER...
This is a fallacy in distributed computing.
I suggest you design for no message ordering in your system.
As for your issues, try to use a UTC timestamp, created by the originator, in the message body/header, and work around that data point. Sequence numbers are going to fail unless you have a central deterministic sequence creator, which will be a non-scalable single point of failure.
Using sagas/state machines is a path that can help make sense of (business) event ordering.
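As a minimal sketch of working around that data point (the class and fields here are illustrative, not from the question): keep the originator's UTC timestamp next to the projected state and treat anything older as a no-op.

using System;

public class CartProjectionRow
{
    public string UserName { get; set; }
    public DateTime LastModifiedUtc { get; set; } = DateTime.MinValue;

    // Last-writer-wins on the originator's UTC timestamp: a stale update
    // that arrives late is simply ignored instead of erasing newer state.
    public void ApplyNameChange(string newName, DateTime producedUtc)
    {
        if (producedUtc <= LastModifiedUtc)
            return;
        UserName = newName;
        LastModifiedUtc = producedUtc;
    }
}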

Multi-Thread Processing in .NET

I already have a few ideas, but I'd like to hear some differing opinions and alternatives from everyone if possible.
I have a Windows console app that uses Exchange web services to connect to Exchange and download e-mail messages. The goal is to take each individual message object, extract metadata, parse attachments, etc. The app is checking the inbox every 60 seconds. I have no problems connecting to the inbox and getting the message objects. This is all good.
Here's where I am accepting input from you: When I get a message object, I immediately want to process the message and do all of the busy work explained above. I was considering a few different approaches to this:
Queuing the e-mail objects up in a table and processing them one-by-one.
Passing the e-mail object off to a local Windows service to do the busy work.
I don't think db queuing would be a good approach because, at times, multiple e-mail objects need to be processed. It's not fair if a low-priority e-mail with 30 attachments is processed before a high-priority e-mail with 5 attachments is processed. In other words, e-mails lower in the stack shouldn't need to wait in line to be processed. It's like waiting in line at the store with a single register for the bonehead in front of you to scan 100 items. It's just not fair. Same concept for my e-mail objects.
I'm somewhat unsure about the Windows service approach. However, I'm pretty confident that I could have an installed service listening, waiting on demand for an instruction to process a new e-mail. If I have 5 separate e-mail objects, can I make 5 separate calls to the Windows service and process without collisions?
I'm open to suggestions or alternative approaches. However, the solution must be presented using .NET technology stack.
One option is to do the processing in the console application. What you have looks like a standard producer-consumer problem with one producer (the thread that gets the emails) and multiple consumers. This is easily handled with BlockingCollection.
I'll assume that your message type (what you get from the mail server) is called MailMessage.
So you create a BlockingCollection<MailMessage> at class scope. I'll also assume that you have a timer that ticks every 60 seconds to gather messages and enqueue them:
private BlockingCollection<MailMessage> MailMessageQueue =
    new BlockingCollection<MailMessage>();

// Timer is created as a one-shot and re-initialized at each tick.
// This prevents the timer proc from being re-entered if it takes
// longer than 60 seconds to run.
System.Threading.Timer ProducerTimer = new System.Threading.Timer(
    TimerProc, null, TimeSpan.FromSeconds(60), TimeSpan.FromMilliseconds(-1));

void TimerProc(object state)
{
    var newMessages = GetMessagesFromServer();
    foreach (var msg in newMessages)
    {
        MailMessageQueue.Add(msg);
    }
    ProducerTimer.Change(TimeSpan.FromSeconds(60), TimeSpan.FromMilliseconds(-1));
}
Your consumer threads just read the queue:
void MessageProcessor()
{
    foreach (var msg in MailMessageQueue.GetConsumingEnumerable())
    {
        ProcessMessage(msg);
    }
}
The timer will cause the producer to run once per minute. To start the consumers (say you want two of them):
var t1 = Task.Factory.StartNew(MessageProcessor, TaskCreationOptions.LongRunning);
var t2 = Task.Factory.StartNew(MessageProcessor, TaskCreationOptions.LongRunning);
So you'll have two threads processing messages.
It makes no sense to have more processing threads than you have available CPU cores. The producer thread presumably won't require a lot of CPU resources, so you don't have to dedicate a thread to it. It'll just slow down message processing briefly whenever it's doing its thing.
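For example, a sketch of sizing the pool from the core count, in the same class scope as the sample above (consumerCount is illustrative):

// Leave one core of headroom for the producer timer and the rest of
// the process; always start at least one consumer.
int consumerCount = Math.Max(1, Environment.ProcessorCount - 1);
var consumers = Enumerable.Range(0, consumerCount)
    .Select(_ => Task.Factory.StartNew(
        MessageProcessor, TaskCreationOptions.LongRunning))
    .ToArray();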
I've skipped over some detail in the description above, particularly cancellation of the threads. When you want to stop the program, but let the consumers finish processing messages, just kill the producer timer and set the queue as complete for adding:
MailMessageQueue.CompleteAdding();
The consumers will empty the queue and exit. You'll of course want to wait for the tasks to complete (see Task.Wait).
If you want the ability to kill the consumers without emptying the queue, you'll need to look into Cancellation.
The default backing store for BlockingCollection is a ConcurrentQueue, which is a strict FIFO. If you want to prioritize things, you'll need to come up with a concurrent priority queue that implements the IProducerConsumerCollection interface. .NET doesn't have such a thing (or even a priority queue class), but a simple binary heap that uses locks to prevent concurrent access would suffice in your situation; you're not talking about hitting this thing very hard.
Of course you'd need some way to prioritize the messages. Probably sort by number of attachments so that messages with no attachments are processed quicker. Another option would be to have two separate queues: one for messages with 0 or 1 attachments, and a separate queue for those with lots of attachments. You could have one of your consumers dedicated to the 0 or 1 queue so that easy messages always have a good chance of being processed first, and the other consumers take from the 0 or 1 queue unless it's empty, and then take from the other queue. It would make your consumers a little more complicated, but not hugely so.
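A minimal sketch of that consumer, assuming easyQueue and bulkQueue (illustrative names) are the 0-or-1-attachment and many-attachment queues:

void PriorityMessageProcessor(
    BlockingCollection<MailMessage> easyQueue,
    BlockingCollection<MailMessage> bulkQueue)
{
    while (!easyQueue.IsCompleted || !bulkQueue.IsCompleted)
    {
        MailMessage msg;
        if (easyQueue.TryTake(out msg))            // easy messages first
            ProcessMessage(msg);
        else if (bulkQueue.TryTake(out msg, 100))  // brief wait on the rest
            ProcessMessage(msg);
        // Loop back to re-check the easy queue at least every 100 ms.
    }
}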
If you choose to move the message processing to a separate program, you'll need some way to persist the data from the producer to the consumer. There are many possible ways to do that, but I just don't see the advantage of it.
I'm somewhat a novice here, but it seems like an initial approach could be to have a separate high-priority queue. Every time a worker is available to obtain a new message, it could do something like:
If lowPriorityQueue.Count > 0 AndAlso
   DateTime.Now - lowPriorityQueue.Peek().AddedTime > maxWaitTime Then
    ' The oldest low-priority message has waited too long; process it
    ' so low-priority work is never starved.
    ProcessMessage(lowPriorityQueue.Dequeue())
ElseIf highPriorityQueue.Count > 0 Then
    ProcessMessage(highPriorityQueue.Dequeue())
ElseIf lowPriorityQueue.Count > 0 Then
    ProcessMessage(lowPriorityQueue.Dequeue())
End If
In a single thread, while you can still have one message blocking the others, higher priority messages could be processed sooner.
Depending on how fast most messages get processed, the application could create a new worker on a new thread if the queues are getting too big or too old.
Please tell me if I'm completely off-base here though.

When multiple MessageConsumers connect to the same queue (WebSphere MQ), how can message consumption be load-balanced?

I am using WebSphere MQ 7, and I have two clients connected to the same queue manager, consuming messages from the same queue with code like the following:
while (true) {
    TextMessage message = (TextMessage) consumer.receive(1000);
    if (message != null) {
        System.out.println("*********************" + message.getText());
    }
}
I found that only one client ever retrieves messages. Is there any way to load-balance message consumption across the two clients? Any configuration options on the MQ server side?
When managing queue handles, it is MUCH faster for WMQ to keep them in a stack rather than a FIFO queue. So if messages arrive on the queue slower than they can be processed, it is possible that an instance will process a message, issue another GET, and have its handle pushed back onto the top of the stack. The result is that only one instance sees messages in a low-volume use case.
In larger environments where there are many instances waiting on messages, it is possible that activity will round-robin amongst a portion of those instances while the other instances starve for messages. For example, with 10 GETters on the queue you may see three processing messages and 7 idle.
Although this is considerably faster for MQ, it is confusing to customers who are not aware of how it works internally and so they open PMRs asking this exact question. IBM had to choose among several alternatives:
Add several code paths: manage by stack for performance when fully loaded, versus manage by FIFO for apparent balancing when lightly loaded. This bloats the code, adds many new decision points that can introduce errors, and solves a problem that is one of perception rather than reliability or performance.
Educate the customers as to how it works. Of course, once you document it, then you can't change it. The way I found out about this was attending the "WMQ Internals" presentation at IMPACT. It's not in the Infocenter so IBM can change it, but it is available for customers.
Do nothing. Although this is the best result from the code design point of view, the behavior is counter-intuitive. Users need to understand why things do not behave as expected and will waste time trying to find the configuration that results in the desired behavior, or open a PMR.
I don't know for sure that it still works this way but I expect that it does. The way I used to test it was to put many messages on the queue at once and then see how they were distributed. If you drop about 50 messages on the queue in one unit of work, you should see a better distribution between the two instances.
How do you drop 50 messages on the queue at once? First, generate them with the applications turned off, or generate them into a spare queue. If you generated them in the target queue, use the Q program to move them to the spare queue. Now start the apps and make sure the queue's IPPROCS count equals however many instances of the app you started. Using Q again, copy all of the messages to the original queue in a single unit of work. Since they all become available on the queue at once, your two app instances should both immediately be passed a message. If you used copy instead of move, you can repeat this as often as required.
Your client is not doing much, so one instance can probably handle the full load. Try implementing a more realistic workload, or, simpler yet, put a Thread.sleep in the client.

data structures for scheduling workflow?

I'm wondering what kind(s) of data structures / algorithms might help facilitate handling the following situation; I'm not sure if I need a single FIFO, or a priority queue, or multiple FIFOs.
I have N objects that must proceed through a predefined workflow. Each object must complete step 1, then step 2, then step 3, then step 4, etc. Each step is either done quickly or involves a "wait" that depends on something external to finish (like the completion of a file operation or whatever). Each object maintains its own state. If I had to define an interface for these objects, it would be something like this (written below in pseudo-Java, but this question is language-agnostic):
public interface TaskObject
{
    public enum State { READY, WAITING, DONE };
    // READY   = ready to execute next step
    // WAITING = awaiting some external condition
    // DONE    = finished all steps

    public int getCurrentStep();
    // returns # of current step

    public int getEndStep();
    // returns # of the step which is the DONE case

    public State getState();
    // checks state and returns it.
    // multiple calls will always be identical,
    // except WAITING, which can transition to READY or DONE.

    public State executeStep();
    // if READY, executes next step and returns getState().
    // otherwise, returns getState().
}
I need to write a single-threaded scheduler that calls executeStep() on the "next" object. My problem is, I'm not sure exactly what technique I should use to determine what the "next" object is. I want it to be fair (first-come, first-served for objects not in the WAITING state).
My gut call is to have three FIFOs: READY, WAITING and DONE. In the beginning, all objects are placed in the READY queue, and the scheduler repeats a loop where it takes the first object off the READY queue, calls executeStep(), and places the object onto the queue appropriate to the result of executeStep(). Except that items in the WAITING queue need to be moved to the READY or DONE queue when their state changes.... argh!
Any advice?
If this has to be single-threaded, you can use a single FIFO queue for the ready and waiting objects and use your thread to process each object as it comes out. If its state changes to WAITING, simply stick it back into the queue and it will be reprocessed.
Something like (pseudocode):
var item = queue.getNextItem();
var state = item.executeStep();
if (state == WAITING)
    queue.addItem(item);
else if (state == DONE)
    // add to collection of done objects
Depending on the time executeStep takes to run, you may need to introduce a delay (a sleep, not a busy loop) to prevent tight polling. Ideally you would have the objects publish state-change events and do away with the polling altogether.
This is the kind of timeslicing approach that was commonplace in hardware and comms software before multithreading was widespread.
You don't have any way for the task object to notify you when it changes from WAITING to READY except polling it, so the WAITING and READY queues could really just be one. You can just loop around it calling executeStep() on each one in turn. If as a return value from executeStep() you receive DONE, then you remove it from that queue and stick it on the DONE queue and forget about it.
If you wanted to give "more priority" to READY objects, attempting to run through all possible READY objects before wasting any resources polling WAITING ones, you can maintain three queues as you said and only process the WAITING queue when there is nothing in the READY queue.
I personally would spend some effort to eliminate the polling of the state, and instead define an interface that the object could use to notify your scheduler when a state changes.
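For example, a minimal sketch of such a notification contract (names are illustrative; it assumes state-change callbacks arrive on the scheduler's own thread, otherwise the queue would need to be a concurrent one):

using System.Collections.Generic;

public enum TaskState { Ready, Waiting, Done }

public interface ISchedulable
{
    // Performs the next step if READY and returns the resulting state;
    // mirrors executeStep() in the interface above.
    TaskState ExecuteStep();
}

public class NotifyingScheduler
{
    private readonly Queue<ISchedulable> _ready = new Queue<ISchedulable>();

    // Task objects call this from their completion handlers when an
    // external wait finishes, instead of being polled.
    public void OnReady(ISchedulable task) => _ready.Enqueue(task);

    public void RunPending()
    {
        while (_ready.Count > 0)
        {
            var task = _ready.Dequeue();
            if (task.ExecuteStep() == TaskState.Ready)
                _ready.Enqueue(task);   // still runnable: go around again
            // WAITING tasks re-enter via OnReady; DONE tasks drop out.
        }
    }
}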
You might want to study the design of an operating system scheduler. Check out Linux and *BSD, for example.
Some pointers for the Linux scheduler: Inside the Linux scheduler and Understanding the Linux Kernel.
NOTE - this does not address your question of how to schedule, but I would use a separate state class that defines the states and transitions. The objects should not know which states they should go through; they can be informed of what "step" they are at, etc.
There are some patterns for that as well.
You should read up a little on operating systems - specifically the scheduler. Your example is a scaled-down version of that problem, and if you copy the relevant parts it should work great for you.
You can then add priority, etc.
The simplest technique that satisfies the requirements in your question is to repeatedly iterate over all TaskObjects calling executeStep() on each one.
This requires only one construct to hold the TaskObjects, and it can be any iterable structure, e.g. an array.
Since a TaskObject can transition from WAITING to READY asynchronously, you have to poll every TaskObject that you don't know is DONE.
The performance gained from not polling the DONE TaskObjects may be negligible. It depends on the processing load of calling executeStep() on a DONE TaskObject, which should be small.
A simple round-robin polling assures that once a READY TaskObject has executed a step, it will not execute another step until all other TaskObjects have had a chance to execute.
One obvious additional requirement is detecting when all TaskObjects are in the DONE state so you can stop processing.
To avoid polling DONE TaskObjects you will need to either maintain a flag for each one, or chain the TaskObjects in two queues: READY/WAITING and DONE.
If you store the TaskObjects in an array, make it an array of records, with members DoneFlag and TaskObject.
If for some reason you are storing the TaskObjects in a queue, with available enqueue() and dequeue() methods, then the overhead of two queues instead of one may be small.
-Al.
Take a look at this link.
Boost state machines vs uml
Boost has state machines. Why reinvent?
