Need help handling MDB exceptions in two ways - JMS

I'm trying to handle two different types of problems while processing a message.
The first problem is when the remote database is down. In that case, processing should stop and the message should be retried later. It should never go to a DLQ, and redelivery should continue until the remote database is back up.
The second problem is when the message itself is bad. In that case, it should go to the DLQ.
How should I be structuring the following code?
@Override
public void onMessage(Message message) {
    try {
        // Do some processing
        messageProcessing(message); // Should DLQ if message is bad
        // Save to the database
        putNamedLocation(message); // <<--- Exception when external DB is down
    } catch (Exception e) {
        logger.error(e.getMessage());
        mdc.setRollbackOnly(); // mdc: the injected MessageDrivenContext
    }
}

Assuming you can detect bad messages definitively in the code body of the MDB, I would write the bad messages to the DLQ directly. This gives you a bit more freedom to categorize the error and optionally send different types of bad messages to different "DLQ-like" queues, and/or apply a time-to-live to DLQ'ed messages so that no-hope-of-ever-being-processed messages don't pile up in the queue forever. You can add @Resource-annotated instance variables to your MDB class referencing the ConnectionFactory and Queue to support sending messages to the target DLQ. The bottom line is: make sure you detect the error and DLQ the message yourself.
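A minimal sketch of that, assuming the factory and DLQ are available via JNDI (the names below are illustrative, not your actual bindings):

// Inside the MDB class; the JNDI names here are assumptions.
@Resource(mappedName = "java:/ConnectionFactory")
private ConnectionFactory connectionFactory;

@Resource(mappedName = "queue/myApp.DLQ")
private Queue badMessageDlq;

private void sendToDlq(Message message) throws JMSException {
    Connection conn = connectionFactory.createConnection();
    try {
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(badMessageDlq);
        producer.setTimeToLive(7 * 24 * 60 * 60 * 1000L); // optional TTL so hopeless messages expire
        producer.send(message);
    } finally {
        conn.close(); // closing the connection also closes the session and producer
    }
}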
As for the DB being down, you can detect this by catching exceptions when acquiring a connection or writing your updates. In that case, clean up your resources and throw a RuntimeException; this will cause the message to be redelivered (a sketch of the combined structure follows these two points). You will, however, want to check the JMS configuration for two things:
1. Make sure the max-redelivery count is high enough, otherwise the count will tick over and the message will be DLQ'ed eventually anyway.
2. If your JMS implementation supports it, add a redelivery delay to rejected messages to allow some time for the DB to come back up; otherwise your messages will endlessly spin in a deliver/reject loop.
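Putting the two cases together, a hedged sketch of how onMessage could be structured (sendToDlq is the illustrative helper above, and BadMessageException is an assumed application exception):

@Override
public void onMessage(Message message) {
    try {
        messageProcessing(message);  // throws BadMessageException if the message is bad
        putNamedLocation(message);   // throws if the external DB is down
    } catch (BadMessageException e) {
        logger.error("Bad message, sending to DLQ", e);
        try {
            sendToDlq(message);      // DLQ it ourselves; the container transaction then commits
        } catch (JMSException jmse) {
            throw new RuntimeException(jmse); // couldn't DLQ either: force redelivery
        }
    } catch (Exception e) {
        logger.error("Transient failure (DB down?), forcing redelivery", e);
        throw new RuntimeException(e); // rollback: the message will be redelivered
    }
}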
To avoid #2 (which is tricky if your JMS implementation does not support redelivery delay, like WebSphere MQ), you can use the JBoss JMX management interface for the MDB to stop (and later restart) delivery on the MDB. However, you can't do this inside the MDB in the same thread that is processing the message: the stop call waits for the message to complete processing, which it can't, because it is waiting for the MDB to stop, and so on. Your best bet is to start some sort of sentry that polls the DB; when it finds the DB down it stops the MDB, and when it finds it up again it restarts it. See this question for a snippet on how to do that.
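A rough sketch of such a sentry, assuming a JBoss-style MBean that exposes stopDelivery/startDelivery operations; the ObjectName below is an assumption, so check your server's JMX console for the real one:

// Sketch: poll the DB and toggle MDB delivery via JMX.
// The ObjectName and operation names are assumptions; verify them in the JMX console.
MBeanServer mbeanServer = MBeanServerFactory.findMBeanServer(null).get(0);
ObjectName mdbName = new ObjectName("jboss.j2ee:service=EJB3,name=MyMdb"); // assumption
AtomicBoolean stopped = new AtomicBoolean(false);
ScheduledExecutorService sentry = Executors.newSingleThreadScheduledExecutor();
sentry.scheduleWithFixedDelay(() -> {
    try {
        boolean dbUp = isDatabaseUp(); // e.g. a cheap "SELECT 1" on a pooled connection
        if (!dbUp && stopped.compareAndSet(false, true)) {
            mbeanServer.invoke(mdbName, "stopDelivery", null, null);
        } else if (dbUp && stopped.compareAndSet(true, false)) {
            mbeanServer.invoke(mdbName, "startDelivery", null, null);
        }
    } catch (Exception e) {
        logger.error("DB sentry check failed", e);
    }
}, 0, 30, TimeUnit.SECONDS);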
That last part should also help deal with any unexpected exceptions resulting from message validation (i.e. the DB is fine, but for some reason the message is totally fubar, resulting in uncaught exceptions that cause the message to be redelivered). Since down-DB messages should not be redelivered more than a few times (on account of your sentry), you can check a message's redelivery count, and if it is ridiculously high then you know you have a poison message and you can ditch it, or DLQ it.
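For the redelivery-count check, the JMS-defined JMSXDeliveryCount property is what most providers set; a short sketch (the threshold is illustrative):

// Early in onMessage(), inside the try block; JMSXDeliveryCount is
// JMS-defined, but provider support varies.
int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
if (deliveryCount > 50) { // illustrative "ridiculously high" threshold
    sendToDlq(message);   // poison message: ditch it or DLQ it
    return;
}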
Hope that's helpful.

Related

GCP PubSub Spring Boot receiving repeated messages

I need help with a problem with GCP Pub/Sub. I have a process that sends 100 messages with filters to Pub/Sub, and another application (in Spring Boot) receives these messages. When the Spring Boot application receives messages from Pub/Sub (not pull), it should process 100 messages, but during processing it receives more messages; at different times it receives different numbers of messages, sometimes 120, sometimes 140, and other times more than 200. I haven't found any solution to this. This is my code:
@Bean
@ServiceActivator(inputChannel = "pubsubInputChannel")
public MessageHandler messageReceiver() {
    return message -> {
        System.out.println("Message arrived! Payload: " + new String((byte[]) message.getPayload()));
        // other process of app (call other api)
        AckReplyConsumer consumer =
                (AckReplyConsumer) message.getHeaders().get(GcpPubSubHeaders.ACKNOWLEDGEMENT);
        consumer.ack();
    };
}
please help me!!!
Duplicate messages can happen for different reasons in Google Cloud Pub/Sub. One thing to keep in mind is that Cloud Pub/Sub offers at-least-once delivery, meaning that some amount of duplicates is always possible, so your application must be resilient to them. That many duplicates does seem a bit high, though. Duplicates generally happen for the following reasons:
1. Messages are being sent by the publisher more than once. This can happen if the publisher got disconnected from Cloud Pub/Sub and sent the same message again. If this type of duplication occurs, the messages will have different message IDs.
2. The subscriber is taking too long to acknowledge messages. In your code, you have // other process of app (call other api). How long does this process take? If it is longer than the deadline for acknowledging the message, the message will be redelivered. Keep in mind that if this other process requires locks to be grabbed for all messages, there could be contention with too many requests trying to get those locks at the same time, resulting in processing delays. By default, the ack deadline for a message is ten seconds. When using the Java client library, the deadline is automatically extended by the maxAckExtensionPeriod, which defaults to one hour. This property can be set in the DefaultSubscriberFactory for Spring as well (see the sketch after this list).
3. Messages are not acked at all. If an exception prevents the call to ack, or there is a deadlock resulting in that line of code never being reached, the message will be redelivered.
4. The use case is one of a large backlog of small messages. In this situation, client-side buffers are prone to fill up in a way that results in redelivery of messages.
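If the long automatic extension is the culprit, here is a minimal sketch of capping it. This assumes an older Spring Cloud GCP where DefaultSubscriberFactory takes a GcpProjectIdProvider (newer versions have a different constructor), and the five-minute value is just an example:

// Sketch: cap how long the client library keeps extending the ack deadline.
// Note the Duration here is org.threeten.bp.Duration, not java.time.Duration.
@Bean
public DefaultSubscriberFactory subscriberFactory(GcpProjectIdProvider projectIdProvider) {
    DefaultSubscriberFactory factory = new DefaultSubscriberFactory(projectIdProvider);
    factory.setMaxAckExtensionPeriod(Duration.ofMinutes(5)); // down from the one-hour default
    return factory;
}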

MassTransit 3.2.1 - Validation

I want to validate an incoming message, using FluentValidation in my case, and if validation fails it should return immediately. I looked into http://docs.masstransit-project.com/en/latest/usage/observers.html, and I like the idea of
public class ConsumeObserver : IConsumeObserver
{
    Task IConsumeObserver.PreConsume<T>(ConsumeContext<T> context)
    {
        // 1. Validate here
        // 2. If validation succeeds, go on to the consumer
        // 3. If it fails, exit with the result of the validation and don't go through the consumer
        return Task.CompletedTask;
    }

    Task IConsumeObserver.PostConsume<T>(ConsumeContext<T> context)
    {
        return Task.CompletedTask;
    }

    Task IConsumeObserver.ConsumeFault<T>(ConsumeContext<T> context, Exception exception)
    {
        return Task.CompletedTask;
    }
}
because I get the message already deserialized, so it is easy to use the validator. The problem is that I don't know how to return without going through the consumer and at the same time keep the validation errors.
Thank you.
Observers are typically meant to watch rather than take action, and that's the approach with observers in MassTransit. While you could throw an exception from the PreConsume method, which would cause the message to either retry or be sent to the error queue, it's not the most obvious behavior to developers down the road who may not understand why the message is failing.
Another approach would be to create a middleware component that can validate the message and, if it's not valid, perform a specific action on it (such as moving it to an invalid queue, dumping it to a log, or whatever) so that the message is removed from the queue. It's important to understand how this might impact the message producer.
For instance, if it was a request message, and the sender is waiting on a response, discarding the message means that no response will be received. The default behavior of a consumer that throws an exception is to propagate the fault back to the requestor, completing the cycle, so keep that in mind.
Another option is to just add the validation behavior to the consumer, using either an injected validation interface, or within the consumer itself. That way, the handling of the message is close to the consumer which improves code cohesion and makes it easy to see what is happening.
Ideally, validating at the message producer is the best option, since it keeps invalid messages from flooding the queue in the first place.
So, several choices, your requirements will dictate which makes the most sense.

Duplicate Events in Message Broker

In Message Broker (v8.0.0.0) we are using the event monitoring framework to drive our flow-level auditing. We're looking at three types of audit - start, end, and rollback - and the corresponding transaction.Start/End/Rollback events as defined by Message Broker are being used for this.
For rollback, within each flow we have a generic exception handler wired to the catch terminal of the input node; it does some processing and then throws an exception again. This means we get a rollback event from the broker and the original message is backed out to the DLQ.
However, for these cases, we are getting four events instead of the two expected (i.e. Start and Rollback). There is an extra Start event and an End event being generated.
I looked around and there's a possible duplicate of this issue in the MQSeries forums, where somebody suggested that this is because the message is being backed out. (Link at the end of the post.)
Can anybody suggest a mitigation/workaround? I looked at the event messages themselves and there's no way of distinguishing one from the other.
MQSeries Forum Thread
The extra Start and End events occur because it is actually the MQInput node that sends the message to the DLQ, not MQ itself.
Since the transaction Start event is raised before the node knows that it needs to DLQ the message, we also need a transaction End event to close the open transaction.
In fact, we do this work under its own MQ transaction so that the events correspond with the actual transaction boundaries; it is just that in your case the message never makes it into the flow on the last iteration.
It would be nice to distinguish between a successful transaction and one where we performed a backout, like we do with rollback events, but IIB does not currently allow for this.
I would suggest raising it as a requirement at the following URI:
https://www.ibm.com/developerworks/rfe/?PROD_ID=532

Camel JMS ensuring ordering when unsidelining from dead letter channel

I am using Camel to integrate with ActiveMQ JMS. I am receiving prices for products on this queue. I am using JMSXGroupID on productId to ensure ordering across a productId. Now if I fail to process a message, I move it to a dead letter queue. This could be because of a connection error on a dependent service or because of an error with the message itself.
In the case of the former, I would have to manually remove it from the DLQ and put it back into the JMS queue.
Now the problem is that I don't know whether any other message in that group has been received and processed or not, and hence unsidelining from the DLQ may disrupt the order. On the other hand, if I don't unsideline it and no other message has been received, the productId will not get the correct price.
One solution I have in mind is to use a fast key-value store (Redis) to store the last messageId or JMSTimestamp against a productId (message group), updated every time I dequeue a message (a rough sketch follows). Any other solution for this?
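Something like this, assuming Jedis and that camel-jms exposes the JMS headers (key layout and names are illustrative):

// Sketch: record the last processed JMSTimestamp per message group.
public class LastProcessedRecorder implements Processor {

    private final Jedis jedis = new Jedis("localhost"); // illustrative address

    @Override
    public void process(Exchange exchange) throws Exception {
        String productId = exchange.getIn().getHeader("JMSXGroupID", String.class);
        Long timestamp = exchange.getIn().getHeader("JMSTimestamp", Long.class);
        jedis.set("lastTs:" + productId, String.valueOf(timestamp));
    }
}

Before unsidelining a message from the DLQ, its JMSTimestamp could then be compared against this value to tell whether a later message in the same group has already been processed.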
Relying on message order in JMS is a risky business - at best.
The best thing to do is to make the receiver handle messages out of sequence as a special case (though it may take advantage of message order during normal operation).
You may also want to distinguish between two kinds of errors: poison messages and temporary connection problems - maybe even use two different error queues for them. In the case of a poison message (invalid payload, etc.), there is nothing you can really do about it except start a bug investigation. In such cases, you can probably send along "something else", such as a dummy message, so as not to interfere with the order.
For the issues with connection problems, you can use another strategy - ActiveMQ redelivery policies. If there is network trouble, it's usually no use trying to process the second message until the first has been handled. A redelivery policy ensures that (given you have a single consumer, that is). There is another question at SO where the poster actually has a solution to your problem and wants to avoid it. Read it. :)
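A minimal sketch of such a policy on the consumer's connection factory (the values are illustrative):

// Sketch: keep redelivering to the same consumer with backoff instead of DLQing quickly.
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
RedeliveryPolicy policy = factory.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(1000);   // 1s before the first redelivery
policy.setUseExponentialBackOff(true);
policy.setBackOffMultiplier(2.0);
policy.setMaximumRedeliveryDelay(60000);  // cap the delay at 1 minute
policy.setMaximumRedeliveries(RedeliveryPolicy.NO_MAXIMUM_REDELIVERIES); // never DLQ on its own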

JMS - one queue and many receivers (consumers)

I have a JMS queue published by a third party.
I want to set up multiple consumers on different machines, with only one particular machine's consumer acknowledging messages on that queue. In short, if that particular machine's consumer does not receive the message, the message should not be removed from the queue.
Is this achievable ?
Okay, you might have your reasons for this setup, and it's easy to achieve.
I would go with local session transactions. It is rather easy to commit or roll back a transaction according to some criteria, such as which server is consuming the message. If rolled back, the message will end up back on the queue.
Sample code might look like this:
public class MyConsumer implements MessageListener {

    private Session sess;

    public void init(Connection conn, Destination dest) throws JMSException {
        // Connection and destination from JNDI, or some other method.
        // The first argument 'true' makes the session transacted;
        // the acknowledge mode is then ignored.
        sess = conn.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer cons = sess.createConsumer(dest);
        cons.setMessageListener(this);
        conn.start();
    }

    @Override
    public void onMessage(Message msg) {
        try {
            // Do whatever with the message
            if (isThisTheSpecialServer()) {
                sess.commit();   // removes the message from the queue
            } else {
                sess.rollback(); // puts the message back on the queue
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private boolean isThisTheSpecialServer() {
        // Figure out if this server should delete messages or not
        return false; // placeholder
    }
}
If you are doing this inside a Java EE container with JTA and you are using UserTransaction, you could just call UserTransaction.setRollbackOnly();
or, if you are using declarative transactions, you could throw a RuntimeException to make the transaction fail and roll the message back to the queue once you have read the message and done your work. Note that database changes will roll back as well with this approach (if you are using JTA and not local JMS transactions).
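For the container-managed case, a minimal sketch (the bean is illustrative and activation config is omitted):

// Sketch: CMT message-driven bean that keeps the message on the queue
// unless this is the special server.
@MessageDriven
public class SpecialOnlyMdb implements MessageListener {

    @Resource
    private MessageDrivenContext ctx;

    @Override
    public void onMessage(Message msg) {
        // Do whatever with the message
        if (!isThisTheSpecialServer()) {
            ctx.setRollbackOnly(); // mark the JTA transaction rollback-only; the message is redelivered
        }
    }

    private boolean isThisTheSpecialServer() {
        return false; // placeholder
    }
}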
UPDATE:
You should really do this using transactions, not acknowledgement.
A summary of this topic (for ActiveMQ, but written generally for JMS) is found here.
http://activemq.apache.org/should-i-use-transactions.html
I don't know if this behaviour is consistent across all JMS implementations, but for ActiveMQ, if you try to use a non-transacted session with Session.CLIENT_ACKNOWLEDGE, it will not really behave as you expect. A message that has been read but not acknowledged is still on the queue, but will not get "released" and delivered to other JMS consumers until the connection to the first consumer is broken (i.e. connection.close(), a crash, or similar).
Using local transactions, you can control this explicitly with session.commit() and session.rollback(). I see no real point in not using transactions; acknowledgement is just there to guarantee delivery.
Another way to look at this is in the case of a forwarding queue. You could apply it to your design by doing the following (a condensed sketch follows the list):
1. Create a consumer on the queue published by the third party.
2. Give this consumer one job - distributing every message to other queues.
3. Create additional queues that your real subscribers will listen to.
4. Code your message listener to take each message and forward it to the various destinations.
5. Change each of your listeners to read from their specific queue.
By doing this, you ensure that every listener sees every message, every transaction works as expected, and you don't make any assumptions about how the message is being sent (for example, what if the publisher side is doing AUTO_ACKNOWLEDGE?).
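A condensed sketch of that distributor (queue names and the transacted-session choice are illustrative):

// Sketch: consume from the third-party queue and forward each message
// to the per-subscriber queues inside one local transaction.
public class Distributor implements MessageListener {

    private final Session session; // transacted session
    private final List<MessageProducer> producers = new ArrayList<>();

    public Distributor(Connection conn, List<String> targetQueues) throws JMSException {
        session = conn.createSession(true, Session.SESSION_TRANSACTED);
        for (String name : targetQueues) {
            producers.add(session.createProducer(session.createQueue(name)));
        }
        session.createConsumer(session.createQueue("thirdParty.published.queue")) // illustrative name
               .setMessageListener(this);
        conn.start();
    }

    @Override
    public void onMessage(Message msg) {
        try {
            for (MessageProducer producer : producers) {
                producer.send(msg); // fan out to every subscriber queue
            }
            session.commit();       // the consume and all forwards succeed or fail together
        } catch (JMSException e) {
            try { session.rollback(); } catch (JMSException ignored) { }
        }
    }
}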
