JMS - one queue and many receivers (consumers)

I have a JMS queue published by a third party.
I want to set up multiple consumers on different machines, with only one particular machine's consumer acknowledging messages on that queue. In short, if that particular machine's consumer does not receive the message, the message should not be removed from the queue.
Is this achievable?

Okay, you might have your reasons for this setup, and it's easy to achieve.
I would go with local session transactions. It is rather easy to commit or roll back the transaction according to some criteria, such as which server is consuming the message. If rolled back, the message will end up at the front of the queue again.
Sample code might look like this:
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;

public class MyConsumer implements MessageListener {

    private Session sess;

    public void init(Connection conn, Destination dest) throws JMSException {
        // Connection and destination come from JNDI, or some other method.
        // The acknowledge mode is ignored for a transacted session (first argument true).
        sess = conn.createSession(true, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer cons = sess.createConsumer(dest);
        cons.setMessageListener(this);
        conn.start();
    }

    @Override
    public void onMessage(Message msg) {
        // Do whatever with the message
        try {
            if (isThisTheSpecialServer()) {
                sess.commit();   // removes the message from the queue
            } else {
                sess.rollback(); // puts the message back on the queue
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private boolean isThisTheSpecialServer() {
        // figure out if this server should delete messages or not
        return false; // placeholder
    }
}
If you are doing this inside a Java EE container with JTA and you are using UserTransactions, you could just call UserTransaction.setRollbackOnly();
or, if you are using declarative transactions, you could just throw a RuntimeException to make the transaction fail and roll the message back to the queue once you have read the message and done your work. Note that database changes will roll back as well with this approach (if you are using JTA and not local JMS transactions).
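For illustration, a minimal sketch of that declarative (container-managed) variant inside an EJB MDB; the bean name, the thisServerShouldConsume() helper and the injected MessageDrivenContext are assumptions, not code from the original answer:

import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven
public class SelectiveMdb implements MessageListener {

    @Resource
    private MessageDrivenContext ctx;

    @Override
    public void onMessage(Message msg) {
        // ... read the message and do things ...
        if (!thisServerShouldConsume()) {
            // Mark the JTA transaction rollback-only: the container puts the message back
            // on the queue, and any database work in the same transaction rolls back too.
            ctx.setRollbackOnly();
            // Alternatively, throw a RuntimeException to get the same rollback behaviour.
        }
    }

    private boolean thisServerShouldConsume() {
        return false; // placeholder: decide whether this server should delete messages
    }
}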
UPDATE:
You should really do this using transactions, not acknowledgement.
A summary of this topic (for ActiveMQ, but written generally for JMS) can be found here: http://activemq.apache.org/should-i-use-transactions.html
I don't know if this behaviour is consistent across all JMS implementations, but for ActiveMQ, if you try to use a non-transacted session with Session.CLIENT_ACKNOWLEDGE, it will not really behave as you expect. A message that has been read but not acknowledged is still on the queue, but it will not get "released" and delivered to other JMS consumers until the connection to the first consumer is broken (i.e. connection.close(), a crash or similar).
Using local transactions, you can control this explicitly with session.commit() and session.rollback(). I see no real point in not using transactions. Acknowledgement is just there to guarantee delivery.
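For contrast, the non-transacted client-acknowledge variant discussed above would look roughly like this (illustrative sketch only; shouldConsumeHere() is a made-up helper):

// The approach advised against above.
Session sess = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer cons = sess.createConsumer(dest);
Message msg = cons.receive();
// ... process the message ...
if (shouldConsumeHere()) {
    msg.acknowledge(); // acknowledges this and all earlier messages received on this session
}
// If we never acknowledge, the message is not released to other consumers until this
// connection is closed (the ActiveMQ behaviour described above).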

Another way to look at this is in the case of a forwarding queue. You could apply it to your design by doing the following:
Create a consumer on the published queue from the third party.
This consumer has one job - distribute every message to other queues.
Create additional queues that your real subscribers will listen to.
Code your message listener to take each message and forward it to the various destinations.
Change each of your listeners to read from their specific queue.
By doing this, you ensure that every listener sees every message, every transaction works as expected, and you make no assumptions about how the message is being sent (for example, what if the publisher side is using AUTO_ACKNOWLEDGE?).
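A rough sketch of that distributor with plain JMS; the class and the queue-list wiring are illustrative, not from the original answer:

import java.util.ArrayList;
import java.util.List;
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class ForwardingConsumer implements MessageListener {

    private final Session sess;
    private final List<MessageProducer> targets = new ArrayList<MessageProducer>();

    public ForwardingConsumer(Connection conn, Destination source, List<Destination> subscriberQueues)
            throws JMSException {
        sess = conn.createSession(true, Session.SESSION_TRANSACTED);
        for (Destination d : subscriberQueues) {
            targets.add(sess.createProducer(d));
        }
        sess.createConsumer(source).setMessageListener(this);
        conn.start();
    }

    @Override
    public void onMessage(Message msg) {
        try {
            for (MessageProducer target : targets) {
                target.send(msg); // copy the message to every subscriber queue
            }
            sess.commit();        // only now is the message consumed from the published queue
        } catch (JMSException e) {
            try {
                sess.rollback();  // leave the original message on the published queue
            } catch (JMSException ignored) {
            }
            throw new RuntimeException(e);
        }
    }
}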

Related

Subscribe specific message in RMQ message queue

I have a RabbitMQ queue test-queue. In there I need to handle 3 separate messages (message-1, message-2, message-3) for 3 separate processes in 3 separate services.
I use @RabbitListener like below to access the messages:
@RabbitListener(queues = "test-queue")
public void getMessage1(Message message) {
    System.out.println(message);
}
but I need to access only a specific message, e.g. message-1, in this function.
Any pointers?
That's the wrong design for the AMQP protocol. You need to think about 3 different queues for those messages, with the correct bindings for them from a single exchange and the proper routing keys. Then you can easily have 3 consumers, one per queue.
My point is that the queue entity is a consumer responsibility. The producer just dumps a message into an exchange. So you dictate, from your consumer application, how you'd like to receive the produced messages.
As an aside, you can investigate the Spring Integration router pattern implementation if you really can't change your RabbitMQ structure: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#messaging-routing-chapter
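A sketch of that layout with Spring AMQP; the exchange, queue and routing-key names are made up for illustration:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class RabbitTopology {

    @Bean
    DirectExchange testExchange() {
        return new DirectExchange("test-exchange");
    }

    @Bean
    Queue queue1() {
        return new Queue("test-queue-1");
    }

    @Bean
    Binding binding1(Queue queue1, DirectExchange testExchange) {
        // Messages published to "test-exchange" with routing key "message-1" land only here.
        return BindingBuilder.bind(queue1).to(testExchange).with("message-1");
    }

    // ... repeat the queue/binding pair for "message-2" and "message-3" ...
}

@Component
class Message1Consumer {

    @RabbitListener(queues = "test-queue-1")
    public void getMessage1(Message message) {
        System.out.println(message); // sees only messages routed with "message-1"
    }
}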

Control consumption of multiple JMS queues

I can't find this information anywhere. I have two queues: @JmsListener(destination = "p1") and @JmsListener(destination = "p2"). How can I make sure I only process 1 message at a time, even though I am listening to 2 queues? And how do I configure which queue I poll first, i.e. after processing a message I want to poll p1 first? Or do weighted polling: p1 90%, p2 10%, etc.
Basically, I am asking how to implement priority processing of messages in Spring. I'm using SQS, which doesn't support priorities.
Use one of the JmsTemplate receive() or receiveAndConvert() methods instead of the message-driven model.
Use transactions if you want to ensure no message loss.
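A sketch of what that might look like with Spring's JmsTemplate, always checking p1 before p2; the @Scheduled delay, receive timeout and process() method are illustrative assumptions:

import org.springframework.jms.core.JmsTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PriorityPoller {

    private final JmsTemplate jmsTemplate;

    public PriorityPoller(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
        this.jmsTemplate.setReceiveTimeout(1000); // don't block forever on an empty queue
    }

    @Scheduled(fixedDelay = 100)
    public void pollOnce() {
        Object payload = jmsTemplate.receiveAndConvert("p1"); // always try p1 first
        if (payload == null) {
            payload = jmsTemplate.receiveAndConvert("p2");    // only then fall back to p2
        }
        if (payload != null) {
            process(payload); // exactly one message is processed at a time
        }
    }

    private void process(Object payload) {
        // handle the message
    }
}

(@EnableScheduling is assumed elsewhere in the configuration; for weighted polling you could, for example, check p2 first only on every tenth iteration.)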

Spring Batch Integration: Increase throughput when consuming data from jms

I work on a task that requires:
consuming data from JMS;
processing it;
loading it into a database.
As the documentation suggests:
I start with <int-jms:message-driven-channel-adapter channel="CHANNEL1" ... /> to send new JMS messages to the CHANNEL1 channel;
I apply a transformer that converts messages from the CHANNEL1 channel into a JobLaunchRequest for a job that inserts the data into the database, with the original JMS message's payload (see the sketch after this list);
The transformed messages go to the CHANNEL2 channel;
<batch-int:job-launching-gateway request-channel="CHANNEL2"/> starts a new job execution when a new message appears in the channel.
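A sketch of the transformer in step 2, with an assumed job bean and parameter names; it would be wired between CHANNEL1 and CHANNEL2 with <int:transformer input-channel="CHANNEL1" output-channel="CHANNEL2" ref="..."/>:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.integration.launch.JobLaunchRequest;
import org.springframework.integration.annotation.Transformer;

public class JobLaunchRequestTransformer {

    private final Job insertIntoDatabaseJob; // the job that writes the payload to the database

    public JobLaunchRequestTransformer(Job insertIntoDatabaseJob) {
        this.insertIntoDatabaseJob = insertIntoDatabaseJob;
    }

    @Transformer
    public JobLaunchRequest toRequest(String payload) {
        JobParameters params = new JobParametersBuilder()
                .addString("payload", payload)
                .addLong("timestamp", System.currentTimeMillis()) // make each launch a new job instance
                .toJobParameters();
        return new JobLaunchRequest(insertIntoDatabaseJob, params);
    }
}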
The problem is that I start a new database transaction each time a new JMS message is received.
The question: how should I handle such a flow? What is the common pattern for this?
UPDATE
I start the job for each message. One message contains one piece of data. If I resort to just using spring-batch, I will have to manage some sort of poller (correct me if I am wrong), but I would like to apply a message-driven approach like (either one):
Grace period: when a new message appears, I wait for 10 more messages, or start processing everything I have received 10 seconds after the first message arrived.
I simply read everything the JMS queue contains after I am notified that the queue contains a new message.
Of course, I would like the solution to be transactional; the order of message processing does not matter.
The BatchMessageListenerContainer can be used in your use case. It enables the batching of messages within a single transaction.
Note that this class is not part of the main framework; it's actually a test class, but you can use it if it fits your needs.
Hope this helps.

Need help to handle MDB Exception in two ways

I'm trying to handle two different types of problems while processing a message.
The first problem is if the remote database is down. In that case, processing should stop and the message should be retried later. This message should never go to a DLQ, and should keep being retried until the remote database is up.
The second problem is when there is a problem with the message. In that case, it should go to the DLQ.
How should I be structuring the following code?
@Override
public void onMessage(Message message) {
    try {
        // Do some processing
        messageProcessing(message);  // Should DLQ if message is bad
        // Save to the database
        putNamedLocation(message);   // <<--- Exception when external DB is down
    } catch (Exception e) {
        logger.error(e.getMessage());
        mdc.setRollbackOnly();       // mdc is the injected MessageDrivenContext
    }
}
Assuming you can detect bad messages definitively in the code body of the MDB, I would write the bad messages to the DLQ directly. This gives you a bit more freedom to perhaps categorize the error and optionally send different types of bad messages to different "DLQ-like" queues, and/or apply a time-to-live to DLQ'ed messages so that no-hope-of-ever-being-processed messages don't pile up in the queue forever. You can add @Resource-annotated instance variables to your MDB class referencing the ConnectionFactory and Queue references to support sending the messages to the target DLQ. The bottom line is: make sure you detect the error and DLQ the message yourself.
As for the DB being down, you can detect this by catching exceptions when acquiring a connection or writing your updates. In this case, clean up your resources and throw a RuntimeException. This will cause the message to be redelivered, but you will want to check the JMS configuration for two things:
Make sure the max-redelivery count is high enough, otherwise the count will tick over and the message will be DLQed eventually anyway.
If your JMS implementation supports it, add a redelivery delay to rejected messages to allow some time for the DB to come back up, otherwise your messages will endlessly spin in a deliver/reject loop.
To avoid #2 (which is tricky if your JMS implementation does not support a redelivery delay, like WebSphere MQ), you can use the JBoss JMX management interface for the MDB to stop (and later restart) delivery on the MDB. However, you can't do this inside the MDB in the same thread that is processing the message, because the MDB will wait for the message to complete processing, which it can't because it is waiting for the MDB to stop, which it can't because...[and so on]. So your best bet is to start some sort of sentry that polls the DB: when it finds it down, it stops the MDB, and when it finds it up again, it restarts it. See this question for a snippet on how to do that.
That last part should help deal with any unexpected exceptions resulting from message validations (i.e. the DB is fine, but for some reason the message is totally fubar, resulting in uncaught exceptions which cause the message to be redelivered). Since down-DB messages should not be redelivered more than a few times (on account of your sentry), you can check a message's redelivery count, and if it is ridiculously high then you know you have a poison message and you can ditch it, or DLQ it.
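Putting those two paths together, the onMessage body from the question might be restructured roughly like this (BadMessageException and sendToDlq() are hypothetical names for the detection and DLQ-forwarding pieces described above):

@Override
public void onMessage(Message message) {
    try {
        messageProcessing(message);   // assume this throws BadMessageException for bad input
        putNamedLocation(message);    // may fail if the external DB is down
    } catch (BadMessageException bad) {
        // The message itself is the problem: ship it to the DLQ ourselves (via the
        // @Resource-injected ConnectionFactory/Queue) and let the transaction commit,
        // so the message is not redelivered.
        logger.error("Bad message, sending to DLQ", bad);
        sendToDlq(message);
    } catch (Exception dbDown) {
        // The environment is the problem (e.g. the remote DB is down): force redelivery
        // so the message is retried later, ideally combined with a redelivery delay or
        // the sentry described above.
        logger.error("Processing failed, message will be redelivered", dbDown);
        throw new RuntimeException(dbDown);
    }
}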
Hope that's helpful.

JMS queue redelivery order in jboss

I send a Java object to a queue from a thread. The relevant MDB's onMessage is invoked with a message from the queue. In onMessage, I match a key present in the message with a key in a cache; if the key is not present, I throw a custom RuntimeException just to make the container redeliver the message. (I have another autonomous system that adds the key to the cache from the external system's response; it may be a little slow, by 3-5 seconds.)
In such a case, does the container add this unprocessed message to the end of the queue, or is it redelivered immediately? Is there a way to delay the redelivery time, assuming the queue is always filled with ~550 messages every second?
HornetQ currently has a redelivery delay feature, but all the subsequent messages are still delivered in the meantime.
There's a feature request in place to hold the whole queue for some time when a redelivery happens, but that has not been implemented yet.
But if you have multiple consumers on the queue, the order will be spread across your consumers anyway. You could use message grouping and add a sleep in your onMessage if the delivery count is greater than 1. The message grouping is there to guarantee that no other consumer (or another MDB instance) will receive the messages out of order.
Depending on how your application is built, and depending on your requirements, you may want to allow only a single instance of your MDB.
Also, look at consumer-window-size, where you can select no buffering on the client; that behaves better when you have multiple consumers or multiple MDB instances.
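A sketch of the sleep-on-redelivery idea using the standard JMSXDeliveryCount message property; the "key" property name and the 5-second wait are assumptions, while the cache check comes from the question:

@Override
public void onMessage(Message msg) {
    try {
        if (msg.getIntProperty("JMSXDeliveryCount") > 1) {
            // This is a redelivery: give the autonomous system a few seconds to
            // populate the cache before checking again.
            Thread.sleep(5000);
        }
        String key = msg.getStringProperty("key"); // however the key is carried in the message
        if (!cache.containsKey(key)) {
            throw new RuntimeException("Key not in cache yet, force another redelivery");
        }
        // ... process the message ...
    } catch (JMSException e) {
        throw new RuntimeException(e);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new RuntimeException(e);
    }
}

The redelivery delay itself is configured on the HornetQ side (the redelivery-delay address setting), not in the MDB code.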
