I am trying to understand the behavior when a message expires while it is being processed. I have the following flow set up in my test program:
queue.start -> sleepProcessor -> queue.end
The sleepProcessor takes the message from queue.start and sleeps for 5 seconds. I send a message to queue.start with a JMSExpiration of 1 second from System.currentTimeMillis(). I have set up dead letter queues for each queue, named DLQ.queue.start and DLQ.queue.end.
The behavior I see is that 1 message ends up in DLQ.queue.start and another message ends up in DLQ.queue.end.
How does 1 message become 2?
The test program, with source, can be found here:
http://s000.tinyupload.com/?file_id=04225732819763428273
I have included a Maven pom.xml, and the test program can be run with the following command:
mvn camel:run
OS: Linux 3.5.0 (Mint 14)
JVM: 1.6
ActiveMQ: 5.7.0
Camel: 2.8.2
Any insight would be greatly appreciated.
Thank you
For the discussion, I think it is useful to show all the important parts of your code.
Your route looks as follows:
from("jms:queue.start")
.transacted()
.process(sleepProcessor)
.to("jms:queue.end");
The headers are set as follows:
final Map<String, Object> headers = new HashMap<String, Object>();
final long expiration = System.currentTimeMillis() + 1000; // <-- start time plus 1 second!
headers.put("JMSExpiration", expiration);
Finally, the processing is started as follows:
final ProducerTemplate template = camelContext.createProducerTemplate();
template.sendBodyAndHeaders("jms:queue.start", "Test message", headers);
What happened during processing:
The route is started and the message is sent to jms:queue.start with the expiration time set to 1 second from now.
The jms:queue.start route receives the message and the sleepProcessor waits 5 seconds. Note that no acknowledgment is sent to the JMS broker yet (a message is only acknowledged at the end of the Camel route). During this time the message expires, and we see the first log entry produced by the DLQ processor.
Afterwards, the message is sent to jms:queue.end. Its JMSExpiration header shows that the expiration time (start time plus 1 second) has already passed, since roughly 5 seconds have elapsed, so the message is expired right away and we see the second log entry produced by the DLQ processor, which makes it look as if the message had been duplicated.
Conclusion:
The route consuming from jms:queue.start is not cancelled even if the JMS message expires during processing. Of course, the reason for this is that it is simply not possible/allowed for the JMS broker to cancel an ongoing message consumption. We have to live with that and react accordingly, for example in the DLQ processors (one possible reaction in the route itself is sketched below).
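One such reaction is simply not to forward a message whose expiration time has already passed. The following is only a sketch, not part of the original test program, and it assumes a Java 8-capable Camel version (on the 2.8.2 used in the post you would implement the Predicate as a class instead of a lambda):

import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class ExpiryAwareRoute extends RouteBuilder {

    // stand-in for the sleepProcessor of the original post
    private final Processor sleepProcessor = exchange -> Thread.sleep(5000);

    @Override
    public void configure() {
        from("jms:queue.start")
            .transacted()
            .process(sleepProcessor)
            // drop the message if its JMSExpiration is already in the past,
            // so only DLQ.queue.start ever sees the expired message
            .filter(exchange -> {
                Long expiration = exchange.getIn().getHeader("JMSExpiration", Long.class);
                // 0 (or a missing header) means "never expires"
                return expiration == null || expiration == 0
                        || expiration > System.currentTimeMillis();
            })
            .to("jms:queue.end");
    }
}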
Related
We have a PHP app that forwards messages from RabbitMQ to connected devices down a WebSocket connection (PHP AMQP pecl extension v1.7.1 & RabbitMQ 3.6.6).
Messages are consumed from an array of queues (1 per websocket connection), and are acknowledged by the consumer when we receive confirmation over the websocket that the message has been received (so we can requeue messages that are not delivered in an acceptable timeframe). This is done in a non-blocking fashion.
99% of the time, this works perfectly, but very occasionally we receive an error "RabbitMQ PRECONDITION_FAILED - unknown delivery tag ". This closes the channel. In my understanding, this exception is a result of one of the following conditions:
The message has already been acked or rejected.
An ack is attempted over a channel the message was not delivered on.
An ack is attempted after the message timeout (ttl) has expired.
We have implemented protections for each of the above cases, yet the problem continues.
I realise there are a number of implementation details that could impact this, but at a conceptual level, are there any other failure cases that we have not considered and should be handling? Or is there a better way of achieving the functionality described above?
"PRECONDITION_FAILED - unknown delivery tag" usually happens because of double ack-ing, ack-ing on wrong channels or ack-ing messages that should not be ack-ed.
So in this case you are trying to execute basic.ack twice, or basic.ack over another channel.
(Solution below)
Quoting Jan Grzegorowski from his blog:
If you are struggling with the 406 error message which is included in
title of this post you may be interested in reading the whole story.
Problem
I was using amqplib for connecting a NodeJS-based message processor with the
RabbitMQ broker. Everything seemed to be working fine, but from time to
time a 406 (PRECONDITION-FAILED) message showed up in the log:
"Error: Channel closed by server: 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - unknown delivery tag 1"
Solution
Keeping things simple:
You have to ACK messages in the same order as they arrive to your system
You can't ACK messages on a different channel than the one they arrived on
If you break either of these rules you will face the 406 (PRECONDITION-FAILED) error message.
Original answer
It can happen if you set the no-ack option of a consumer to true, which means you shouldn't call the ack function manually:
https://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume.no-ack
The solution: set the no-ack flag to false.
If you acknowledge the same message twice, you can get this error.
A variation of what they said above about acking it twice:
There is an "obscure" situation where you end up acking a message more than once: when you ack a message with the multiple parameter set to true, all previous unacknowledged messages up to the one you are acking are acked too.
So if you later try to ack one of the messages that was already "auto-acked" in this way, you are acking it a second time, hence the error. It is confusing, but hopefully clear after a few reads; see the sketch below.
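To make the multiple=true pitfall concrete, here is a small sketch using the RabbitMQ Java client (the queue name and connection settings are placeholders, not taken from the question):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class MultipleAckPitfall {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a local broker
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicConsume("my.queue", false, (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            // multiple = true acknowledges this delivery AND every earlier
            // unacknowledged delivery on this same channel.
            channel.basicAck(tag, true);
            // If some other code path later acks one of those earlier tags
            // individually, the broker no longer knows that tag and closes the
            // channel with "PRECONDITION_FAILED - unknown delivery tag".
        }, consumerTag -> { });
    }
}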
Make sure you have the correct application.properties:
If you use the RabbitTemplate without any channel configuration, use "simple":
spring.rabbitmq.listener.simple.acknowledge-mode=manual
In this case, if you use "direct" instead of "simple", you will still get the same error message. The "direct" variant looks like this:
spring.rabbitmq.listener.direct.acknowledge-mode=manual
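For completeness, a listener that matches acknowledge-mode=manual might look roughly like this (queue name, class and method names are assumptions, not from the question):

import java.io.IOException;

import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

@Component
public class ManualAckListener {

    @RabbitListener(queues = "my.queue")
    public void onMessage(Message message, Channel channel,
                          @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
        try {
            // ... process the message ...
            channel.basicAck(tag, false);        // ack exactly once, on the same channel
        } catch (Exception e) {
            channel.basicNack(tag, false, true); // requeue for another attempt
        }
    }
}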
I am trying to build a list of possible errors that can happen during the execution of the kafkaTemplate.send() method:
Errors related to the serialization process;
Network issues, or the broker is down;
Technical issues on the broker side, for example an acknowledgement not received from the broker, etc.
Now I need to find a way to handle all of these errors in the right way.
Based on the business requirements, in case of any exception I need to do the following:
Retry 3 times;
If all 3 retries have failed, log an appropriate message.
I found that the configuration property spring.kafka.producer.retries is available, and I believe it is exactly what I need.
But how can I configure a recovery method (a method that will be executed when all retries have failed)?
That spring.kafka.producer.retries property is probably not what you are looking for.
This auto-configuration property is mapped directly to ProducerConfig:
map.from(this::getRetries).to(properties.in(ProducerConfig.RETRIES_CONFIG));
and then we go and read docs for that ProducerConfig.RETRIES_CONFIG property:
private static final String RETRIES_DOC = "Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error."
+ " Note that this retry is no different than if the client resent the record upon receiving the error."
+ " Allowing retries without setting <code>" + MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION + "</code> to 1 will potentially change the"
+ " ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second"
+ " succeeds, then the records in the second batch may appear first. Note additionally that produce requests will be"
+ " failed before the number of retries has been exhausted if the timeout configured by"
+ " <code>" + DELIVERY_TIMEOUT_MS_CONFIG + "</code> expires first before successful acknowledgement. Users should generally"
+ " prefer to leave this config unset and instead use <code>" + DELIVERY_TIMEOUT_MS_CONFIG + "</code> to control"
+ " retry behavior.";
As you can see, spring-retry is not involved in the process at all; all the retries are done directly inside the Kafka client and its KafkaProducer infrastructure.
That is not all, though. Pay attention to the KafkaProducer.send() contract:
Future<RecordMetadata> send(ProducerRecord<K, V> record);
It returns a Future. If we take a closer look at the implementation, we see that there is a synchronous part (the topic metadata request and serialization) and an asynchronous part, where the record is enqueued into a batch for sending to the Kafka broker. The mentioned ProducerConfig.RETRIES_CONFIG has an effect only in that asynchronous part, in Sender.completeBatch().
I believe the Future is completed with an error when those internal retries are exhausted. So you should probably think about using a RetryTemplate manually in a service method around KafkaTemplate, to be able to control retries (and recovery, respectively) around the metadata request and serialization, which are really synchronous and blocking in the current call. The actual send can also be controlled with retries in that method, but only if you call Future.get() to block for a response or error from the Kafka client; see the sketch below.
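A rough sketch of that idea, assuming spring-retry is on the classpath and a KafkaTemplate<String, String> bean exists (the class and method names here are made up for illustration):

import java.util.concurrent.TimeUnit;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.retry.support.RetryTemplate;

public class RetryingSender {

    private final KafkaTemplate<String, String> kafkaTemplate;

    private final RetryTemplate retryTemplate = RetryTemplate.builder()
            .maxAttempts(3)       // the 3 attempts from the business requirements
            .fixedBackoff(1000)   // arbitrary 1-second pause between attempts
            .build();

    public RetryingSender(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendWithRetry(String topic, String payload) {
        retryTemplate.execute(
                context -> {
                    try {
                        // Blocking on the Future makes serialization, metadata and broker
                        // errors all surface here, so each failure triggers a retry.
                        kafkaTemplate.send(topic, payload).get(30, TimeUnit.SECONDS);
                        return null;
                    } catch (Exception e) {
                        throw new IllegalStateException("send failed", e);
                    }
                },
                context -> {
                    // Recovery callback: runs once, after all attempts have failed.
                    System.err.println("Giving up after " + context.getRetryCount()
                            + " attempts: " + context.getLastThrowable());
                    return null;
                });
    }
}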
I am using ActiveMQ for messaging, and there is a requirement that if a message is a duplicate then it should be handled by AMQ automatically.
For that I generate a unique message key and set it via the message post-processor.
The following is the code:
jmsTemplate.convertAndSend(dataQueue, event, messagePostProccessor -> {
    LocalDateTime dt = LocalDateTime.now();
    // second of the day (MILLI_OF_DAY divided by 1000)
    long ms = dt.get(ChronoField.MILLI_OF_DAY) / 1000;
    // key = resource + entity id + second of the day
    String messageUniqueId = event.getResource() + event.getEntityId() + ms;
    System.out.println("messageUniqueId : " + messageUniqueId);
    messagePostProccessor.setJMSMessageID(messageUniqueId);
    messagePostProccessor.setJMSCorrelationID(messageUniqueId);
    return messagePostProccessor;
});
As can be seen, the code generates a unique id and then sets it on the message via the post-processor.
Can someone help me with this? Is there any other configuration that I need to do?
A consumer can receive duplicate messages mainly for two reasons: a producer sent the same message more than once, or a consumer received the same message more than once.
Apache ActiveMQ Artemis includes powerful automatic duplicate message detection, filtering out messages sent more than once by a producer.
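If the broker is Artemis, that detection is driven by a special string property on the message. A sketch, assuming an existing JMS session and producer and reusing the question's key as the duplicate id:

// "_AMQ_DUPL_ID" is Artemis's duplicate-detection property
// (org.apache.activemq.artemis.api.core.Message.HDR_DUPLICATE_DETECTION_ID)
TextMessage message = session.createTextMessage("payload");
message.setStringProperty("_AMQ_DUPL_ID", event.getResource() + event.getEntityId());
producer.send(message);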
To prevent a consumer from processing the same message more than once, an idempotent consumer must be implemented; e.g. Apache Camel provides an Idempotent Consumer component that works with any JMS provider, see: http://camel.apache.org/idempotent-consumer.html
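A minimal sketch of such an idempotent consumer with Camel (route endpoints, the bean name and the in-memory repository are choices made for the example; import paths follow Camel 2.x):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class DedupRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:dataQueue")
            // skip any message whose correlation id has already been seen
            .idempotentConsumer(
                    header("JMSCorrelationID"),
                    MemoryIdempotentRepository.memoryIdempotentRepository(200))
            .to("bean:eventHandler");
    }
}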
I do have the following (multi-threaded) process in place:
Browse MQ queue (with lock) and get the next available message
Do something with it which might or might not fail
a. If successful, remove the message from the queue and start over, or b. if not successful, leave the message on the queue
My problem arises from the fact that my application could die unexpectedly between steps 2 and 3, in which case it would produce a duplicate message upon restart.
Is there a way to mark a message as 'dirty' or 'processing' on the queue (while or after reading it) with the mark persisting even if the application restarts?
I have tried to use the marks provided by MQ, but they do not survive a restart. Another possibility would be to move the message to a 'processing' queue, remove it on success, or move it back to the source queue on failure, but this requires a second queue and the code is no longer trivial.
Rough code example:
MQGetMessageOptions gmo = new MQGetMessageOptions();
// browse (non-destructively) the first message and lock it under the browse cursor
gmo.options = MQConstants.MQGMO_BROWSE_FIRST | MQConstants.MQGMO_LOCK;
MQMessage message = new MQMessage();
message.correlationId = MQC.MQCI_NONE;
message.messageId = MQC.MQMI_NONE;
queue.get(message, gmo);
boolean success = processMessage(message);
// Application gets killed here after successful message processing.
// Produces a duplicate after restart.
if (success) {
    // destructive get of the message under the cursor, i.e. remove it from the queue
    MQGetMessageOptions gmo2 = new MQGetMessageOptions();
    gmo2.options = MQConstants.MQGMO_MSG_UNDER_CURSOR;
    queue.get(new MQMessage(), gmo2);
}
Basically, I'd like to achieve this:
get message non-destructively from queue (only if not marked as "processing")
mark message as "processing" on queue
process message (including sending to some destination)
if successful, delete the message from the queue, or otherwise remove the "processing" state on the queue
If the application dies right after a successful third step 'process message', the message would still be marked as "processing" and would not be processed again (as it might already have been).
Note: I do not want this process to have any knowledge about the message processing (other than success).
Have you tried SYNCPOINT? A commit or backout kind of operation might help in this scenario.
Your solution is a horrible design. If you are updating a database, then why are you not using 2-phase commits (i.e. XA transactions)?
Just have your MQ admin set up the queue manager to use the resource manager of the particular database you are using; then it is as simple as:
Start transaction (2 phase commit)
Get message (destructive get NOT browse) from the queue
Update database
Commit transaction
Hence, everything in the transaction, MQGET and database update, will either be committed together or backed out together.
If your application were to crash, then the resource manager will automatically back out everything in the transaction.
Let's say you don't want to use a 2-phase commit, or you are not updating a database (you are updating a file, say); then you can use a single-phase UOW (Unit of Work), as sketched after the following steps:
Use MQGMO option of MQGMO_SYNCPOINT
Get message (destructive get NOT browse) from the queue
Update whatever you are updating
Issue MQCMIT
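A minimal sketch of those four steps with the IBM MQ classes for Java (the class name, the processing method and the error handling are placeholders):

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.MQConstants;

public class SyncpointConsumer {

    void consumeOne(MQQueueManager qMgr, MQQueue queue) throws MQException {
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        // MQGMO_SYNCPOINT: the destructive get becomes part of a unit of work
        gmo.options = MQConstants.MQGMO_SYNCPOINT | MQConstants.MQGMO_WAIT;
        gmo.waitInterval = 5000;

        MQMessage message = new MQMessage();
        try {
            queue.get(message, gmo);   // 1. get the message (destructive get, NOT a browse)
            process(message);          // 2. update whatever you are updating
            qMgr.commit();             // 3. MQCMIT: the get and the updates become permanent
        } catch (RuntimeException | MQException e) {
            qMgr.backout();            // MQBACK: the message goes back onto the queue
            throw e;
        }
    }

    private void process(MQMessage message) {
        // placeholder for the actual processing
    }
}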
Things to know about MQ:
If an application issues an MQDISC or ends normally, with current uncommitted operations, an implied MQCMIT is executed by IBM MQ, i.e. all operations done under SYNCPOINT are committed.
If an application ends abnormally, with current uncommitted operations, an implied MQBACK is executed by IBM MQ, i.e. all operations done under SYNCPOINT are rolled back.
In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once a message is received (save values in the database, etc.), and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing the message, the execution of the consumer is aborted abruptly.
The message then remains in the "in delivery" state. There are about 30 messages in this state on my production queue. I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue. But before doing that I want a way to read these messages so that they can be corrected and sent to the queue again to be processed. I have tried using QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a Transacted session, where once the message is processed, I am calling:
session.commit();
This sends the acknowledgement.
I am implementing Spring's org.springframework.jms.listener.SessionAwareMessageListener to receive messages and then to process them.
While processing the messages, I am using Spring Batch to insert some data into the database.
For a particular case, it tries to insert data that is too big for a column.
It throws an exception and the transaction is aborted.
Now, I have fixed my producer and consumer not to have such data, so that this case should not happen again.
But my question is what about the 30 "in delivery" state messages that are in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a TX session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // delivery is started on the JMS Connection, not the Session
Message msg = consumer.receive(...);
session.rollback(); // this will make the messages be redelivered
If you are using non-TX (auto-acknowledge):
// session here is auto-ack
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // again, delivery is started on the JMS Connection
// this means the message is ACKed as we receive it, doing auto-ACK
Message msg = consumer.receive(...);
// however, the consumer could still have a buffer of messages from the server...
// if you are not using the consumer any longer, close it
consumer.close(); // this will release the messages held in the client buffer
Alternatively you could also set consumerWindowSize=0 on the connectionFactory.
This is documented for 2.2.5, but it has not changed in later releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
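For example, if your application has a handle on the HornetQ JMS connection factory implementation (an assumption; the same setting can also be applied in the server-side connection factory configuration), the client buffer can be disabled like this:

// connectionFactory is assumed to be the factory your application already uses
HornetQConnectionFactory cf = (HornetQConnectionFactory) connectionFactory;
cf.setConsumerWindowSize(0); // no client-side buffering; messages stay on the server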
I'm covering all the possibilities I could think of, since you're not being specific about how you are consuming. If you provide more detail, I will be able to tell you more.
You can indeed read the messages in the queue using JMX (with, for example, JConsole).
In JBoss AS7 you can do it the following way:
MBeans>jboss.as>messaging>default>myJmsQueue>Operations
listMessagesAsJson
[edit]
Since 2.3.0 you have a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763