Spring Integration priority channel with round-robin consumer

I am trying to implement a kind of priority channel with Spring Integration, but I am stuck and haven't found a solution on the web.
I would like to read multiple channels (6) alternately with a service activator. Each channel is for a priority level (CRITICAL, HIGHEST, HIGH, NORMAL, LOW, LOWEST). Messages come from RabbitMQ and are distributed to the correct channel by a router.
The problem is that I would like to create a service activator that reads the channels alternately, using a round robin based on time.
For example, CRITICAL should get 5 seconds of running time, then the service should switch to HIGHEST for 3 seconds, then to HIGH for 1 second, and so on.
Is it possible to do this properly with Spring Integration?
Maybe I'm not using the correct component for it?
Regards

The Priority Channel pattern works a bit differently.
It is a queue with sort support. When a new message arrives at the queue, it is sorted into the proper place according to its priority. It doesn't matter how the consumer of this channel works: the priority handling happens only in the channel. The consumer just polls messages from that queue in the order they have been arranged for it: CRITICAL first, then HIGHEST if no CRITICAL is present, and so on.
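To make that concrete, here is a minimal sketch of the standard PriorityChannel behaviour (one queue, sorted by the priority header); the payloads and priority values are only for illustration:

import org.springframework.integration.channel.PriorityChannel;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class PriorityChannelDemo {

    public static void main(String[] args) {
        // One queue; messages are ordered by the "priority" header as they arrive.
        PriorityChannel channel = new PriorityChannel();

        channel.send(MessageBuilder.withPayload("NORMAL work").setPriority(0).build());
        channel.send(MessageBuilder.withPayload("CRITICAL work").setPriority(5).build());

        // The consumer simply polls; the CRITICAL message comes out first,
        // even though it was sent second.
        Message<?> first = channel.receive();
        System.out.println(first.getPayload()); // prints "CRITICAL work"
    }
}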
On the other hand, if you distribute messages by priority to different channels, why not just have a separate Service Activator for each of those channels? Each priority would then be read by its own process.
There is no out-of-the-box solution based on a "time to run". It just doesn't seem like a good fit for a messaging architecture, although you might be able to implement it via a scheduled task's cancel() or Quartz to "perform a task until...".
UPDATE
Regarding time control, I think you can come up with a solution in which an infinite loop really start()s the different service activators and stop()s them after the appropriate scheduled time. All of those service activators should listen to different queue channels.
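A very rough sketch of that idea, assuming the six service activator endpoints are exposed as Lifecycle beans and are passed in, in priority order, together with their time slices (all names and durations here are illustrative only):

import java.time.Duration;
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.context.Lifecycle;

public class TimeSlicedConsumer implements Runnable {

    // Insertion order is preserved, so iterate CRITICAL -> HIGHEST -> HIGH -> ...
    private final Map<Lifecycle, Duration> timeSlices = new LinkedHashMap<>();

    public TimeSlicedConsumer(Map<Lifecycle, Duration> timeSlices) {
        this.timeSlices.putAll(timeSlices); // e.g. critical -> 5s, highest -> 3s, high -> 1s, ...
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                for (Map.Entry<Lifecycle, Duration> slice : this.timeSlices.entrySet()) {
                    Lifecycle endpoint = slice.getKey();
                    endpoint.start();                          // begin polling this priority's channel
                    Thread.sleep(slice.getValue().toMillis()); // let it run for its time slice
                    endpoint.stop();                           // hand the turn to the next priority
                }
            }
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}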

Related

Azure Queues - Functions - Message Visibility - Workers?

I have some questions regarding the capabilities of Azure Queues, Functions, and Workers. I'm not really sure how this works.
Scenario:
q-notifications is a queue in an Azure storage account.
f-process-notification is a function in Azure that is bound to q-notifications. Its job is to get the first message on the queue and process it.
In theory when a message is added to q-notifications, the function f-process-notification should be called.
Questions:
Does the triggered function replace the need to have workers? In other words, is f-process-notification called each time a message is placed in the queue?
Suppose I place a message on the queue that has a visibility timeout of 5 minutes. Basically I am queueing the message but it shouldn't be acted on until 5 minutes pass. Does the queue trigger f-process-notification immediately when the message is placed on the queue, or will it only trigger f-process-notification when the message becomes visible, i.e. 5 minutes after it is placed on the queue?
In Azure Functions, each Function App instance running your queue-triggered function will have its own listener for the target queue. It monitors the queue for new work using an exponential backoff strategy. When new items are added to the queue, the listener will pull multiple items off the queue (the batching behavior is configurable) and dispatch them in parallel to your function. If your function succeeds, the message is deleted; otherwise it remains on the queue to be reprocessed. To answer your question: yes, any visibility timeout you specify is respected. If a message is added with a 5-minute timeout, it will only be processed after that.
Regarding scale-out: when N instances of your Function App are running, they all cooperate in processing the queue. Each queue listener independently pulls batches of messages off the queue to process. In effect, the work is load balanced across the N instances. Exactly what you want :) Azure Functions implements all the complexities of the multiple-consumer/worker pattern for you behind the scenes.
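For reference, this is roughly what the binding looks like in a Java function (the function and queue names are taken from the question; the AzureWebJobsStorage connection setting is an assumption):

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.QueueTrigger;

public class ProcessNotification {

    // Invoked only once the message becomes visible, so a message enqueued with a
    // 5-minute visibility timeout is dispatched roughly 5 minutes after it is placed on the queue.
    @FunctionName("f-process-notification")
    public void run(
            @QueueTrigger(name = "message",
                          queueName = "q-notifications",
                          connection = "AzureWebJobsStorage") String message,
            final ExecutionContext context) {
        context.getLogger().info("Processing notification: " + message);
    }
}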
I typically use listener logic as opposed to triggers. The consumer(s) constantly monitor the queue for messages. If you have multiple consumers, for example 5 instances of the consuming code in different Azure worker roles processing the same bus/queue, the first consumer to get the message wins (they are "competing"). This provides a scaling scenario common in an SOA architecture.
This article describes some of the ways to defer processing.
http://markheath.net/post/defer-processing-azure-service-bus-message
good luck!

Spring Integration message processing partitioned by header information

I want to be able to process messages with Spring Integration in parallel. The messages come from multiple devices and we need to process messages from the same device in sequential order but the devices can be processed in multiple threads. There can be thousands of devices so I'm trying to figure out how to assign processor based on mod of the device ID using Spring Integration's semantics as much as possible. What approach should I be looking at?
It's difficult to generalize without knowing other requirements (transaction semantics etc) but probably the simplest approach would be a router sending messages to a number of QueueChannels using some kind of hash algorithm on the device id (so all messages for a particular device go to the same channel).
Then, have a single-threaded poller pulling messages from each queue.
EDIT: (response to comment)
Again, difficult to generalize, but...
See AbstractMessageRouter.determineTargetChannels() - a router actually returns a physical channel object (actually a list, but in most cases a list of 1). So, yes, you can create the QueueChannels programmatically and have the router return the appropriate one, based on the message.
Assuming you want all the messages to then be handled by the same downstream flow, you would also need to create a <bridge/> for each queue channel to bridge it to the input channel of the next component in the flow.
create a QueueChannel
create a BridgeHandler (set the outputChannel to the input channel of the next component)
create a PollingConsumer (constructor takes the channel and handler; set trigger etc)
start() the consumer.
All of this can be done in your custom router's initialization; then implement determineTargetChannels() to select the queue (see the sketch below).
Depending on the processing time for your events, I would generally recommend running the downstream flow on the poller thread rather than setting a taskExecutor to avoid issues with the next poll trying to schedule another task before this one's done. You might need to increase the default taskScheduler's pool size.
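Putting those steps together, a sketch of such a router might look like the following (the partition count, the "deviceId" header, and the poll interval are assumptions; it also assumes the router is declared as a bean, so a BeanFactory and a TaskScheduler are available):

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.endpoint.PollingConsumer;
import org.springframework.integration.handler.BridgeHandler;
import org.springframework.integration.router.AbstractMessageRouter;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.support.PeriodicTrigger;

public class DevicePartitionRouter extends AbstractMessageRouter {

    private static final int PARTITIONS = 8; // assumed partition count

    private final Map<Integer, MessageChannel> partitions = new ConcurrentHashMap<>();

    private final MessageChannel downstreamChannel;

    private final TaskScheduler taskScheduler;

    public DevicePartitionRouter(MessageChannel downstreamChannel, TaskScheduler taskScheduler) {
        this.downstreamChannel = downstreamChannel;
        this.taskScheduler = taskScheduler;
    }

    @Override
    protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
        // Assumes every message carries a numeric "deviceId" header.
        long deviceId = message.getHeaders().get("deviceId", Long.class);
        int partition = (int) (deviceId % PARTITIONS);
        return Collections.singletonList(
                this.partitions.computeIfAbsent(partition, this::createPartitionChannel));
    }

    private MessageChannel createPartitionChannel(int partition) {
        QueueChannel channel = new QueueChannel();

        BridgeHandler handler = new BridgeHandler();   // bridge to the shared downstream flow
        handler.setOutputChannel(this.downstreamChannel);
        handler.setBeanFactory(getBeanFactory());
        handler.afterPropertiesSet();

        // One single-threaded poller per partition keeps per-device ordering.
        PollingConsumer consumer = new PollingConsumer(channel, handler);
        consumer.setTrigger(new PeriodicTrigger(100)); // poll interval in ms; tune as needed
        consumer.setTaskScheduler(this.taskScheduler);
        consumer.setBeanFactory(getBeanFactory());
        consumer.afterPropertiesSet();
        consumer.start();

        return channel;
    }
}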

How does ActiveMQ prevent starvation of low priority messages?

I have implemented a priority queue in ActiveMQ. If the queue is being continuously flooded with the high priority messages, the low priority messages will never get processed. How does ActiveMQ handle such situations or how can this situation be avoided or handled?
ActiveMQ doesn't attempt to do anything to prevent this as it's up to you to solve it based on the needs of your application. If you have such a situation you might want to consider instead using a Queue per priority to allow for load balancing across the Queues.
Extending Tim Bish's answer, there are some features in ActiveMQ you can use to handle this situation.
You can set up a virtual destination to split high- and low-priority messages into separate queues, like this (inside a virtualDestinationInterceptor tag):
<virtualDestinations>
  <compositeQueue name="ALL">
    <forwardTo>
      <filteredDestination selector="JMSPriority &lt; 5" queue="LOW.PRIO"/>
      <filteredDestination selector="JMSPriority &gt; 4" queue="HIGH.PRIO"/>
    </forwardTo>
  </compositeQueue>
</virtualDestinations>
Then you can follow the alternative strategy presented here.
You put one consumer on the LOW.PRIO queue and multiple consumers on the HIGH.PRIO queue. The LOW.PRIO messages will still get handled, but with fewer threads than the high-priority messages.
You can also read messages directly from the "ALL" queue with the same selectors in your consumer application.
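For that last option, a consumer with a selector is just plain JMS; a sketch against a local broker (the URL and the listener body are placeholders):

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class HighPrioConsumer {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Read straight from the composite "ALL" queue, but only the high-priority slice.
        Queue allQueue = session.createQueue("ALL");
        MessageConsumer highPrio = session.createConsumer(allQueue, "JMSPriority > 4");
        highPrio.setMessageListener(message -> {
            // process the high-priority message here
        });
    }
}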

I have multiple queues and I want to set priorities for these queues. Is it possible in JMS?

Say I have 3 queues with priority 1, 2 & 3 respectively. I want my consumer to consume first from the queue with priority 1, then 2, and so on. If the higher-priority queue is empty, the consumer can consume from a lower-priority queue.
Is it possible to achieve this with JMS, ActiveMQ, or any other way? How?
You'd have to control that logic yourself using this method. To ActiveMQ, or any other JMS provider, you are just using another queue.
However, you can use a single queue with message priorities. There are a couple of different ways to do this, as described in the documentation.
If you want your consumer to be as simple as possible then have the broker figure out the priority. Otherwise you'll need to mess with multiple consumers or inefficient single consumer logic with selectors to consume.
In both cases, your producer will just need to be smart enough to set the JMSPriority header to whatever priority the logic says it should be.
The only downside really is the fact that you have a broker side config to set up for that queue specifically rather than everything being automatic.
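On the producer side, the JMSPriority header is set through the standard JMS API, either with MessageProducer.setPriority() or the long form of send(); a sketch (the broker URL and queue name are placeholders):

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PriorityProducer {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue queue = session.createQueue("work.queue");
        MessageProducer producer = session.createProducer(queue);

        Message message = session.createTextMessage("urgent job");
        // The 4-argument send() sets the JMSPriority header (0-9, where 9 is the most urgent).
        producer.send(message, DeliveryMode.PERSISTENT, 9, 0L);

        connection.close();
    }
}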

How can I monitor/manage queue in ZeroMQ?

First of all, I'm new to ZeroMQ and message queue systems, so what I'm trying to do may be solved through a different approach. I'm designing a messaging system that does the following:
Multiple clients connect to a broker and send the id of an item that needs to be processed. The client disconnects immediately and does not wait for a response.
The broker sends items to workers, one item per worker, to perform some processing. Each worker returns a signal that the processing was completed.
I have a rudimentary system setup which is processing requests/replies correctly, but I'd also like to be able to do the following:
Query the broker to see how many processes are actually running on the workers and how many are simply waiting to be run.
Have the broker ensure that only one process per id is running - if a duplicate id arrives and that item is not currently being processed by a worker, do not add it to the queue.
I'm using a poll setup with broker/dealer sockets. The code I'm using is very similar to this example from Ian Barber.
My first inclination (although I'm not sure how to implement it in zmq) is to have the broker keep track of the ids that have been received, and those that are actively being processed by workers. It seems that the broker forwards requests to workers immediately, regardless of whether or not they are available to actually run the processing. The workers then queue up the ids and process them in order. This isn't ideal since I'm looking to be able to monitor and control what is going on in the system centrally to achieve reliability.
Anyways, any hints, tips or examples of this type of setup would be greatly appreciated.
ZeroMQ is, in my opinion, best used in broker-less designs, for which the library is designed. If you want to monitor the number of items in a queue, or throughput, or whatever, you're going to have to build that into the application/device/producer yourself. Since you're new to messaging, that could get out of hand real quick. Given this, I'd suggest looking into RabbitMQ (or a similar broker), which would provide these services for you out of the box. If you do adopt RabbitMQ (or rather, AMQP), I'd suggest using a fanout exchange for the scenario you describe above.
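If you do go the RabbitMQ route, the broker tracks queue depth for you; a minimal sketch with the Java client (host, exchange, and queue names are assumptions, and whether a fanout exchange or a plain work queue fits best depends on how you want the work distributed):

import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ItemPublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // A fanout exchange copies every message to all queues bound to it.
            channel.exchangeDeclare("items", "fanout");
            channel.queueDeclare("items.work", true, false, false, null);
            channel.queueBind("items.work", "items", "");

            channel.basicPublish("items", "", null, "item-42".getBytes(StandardCharsets.UTF_8));

            // The broker reports how many messages are currently waiting in the queue.
            AMQP.Queue.DeclareOk ok = channel.queueDeclarePassive("items.work");
            System.out.println("messages waiting: " + ok.getMessageCount());
        }
    }
}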
The Python library for ZeroMQ seems to come with a pattern for dealing with this: http://zeromq.github.com/pyzmq/devices.html#monitoredqueue
