MQ queue MQTT_DEPTH trigger generating multiple trigger messages

We have an HA application which runs on a few servers pointing to the same queue manager. Because message sequencing is important, we use a semaphore queue to make sure no two application instances read messages from a business queue at the same time, even if there are multiple messages waiting to be processed on that queue.
The workflow is something like this:
The business queue has an MQTT_DEPTH trigger set on it, configured to fire when the queue depth equals one.
The business queue receives one or more messages.
The trigger fires when the queue depth becomes one and puts a trigger message onto an initiation queue.
Being a queue depth trigger, it switches itself OFF after delivering the trigger message to the initiation queue.
An MDB listening on that initiation queue receives the trigger message.
Because sequence is crucial for us, we put a limit on the initiation queue of a maximum queue depth of one. This way only one MDB can get a message at a time.
When the trigger message arrives, the MDB reads all messages from the business queue and, when done, sends a trigger-activation message to a dedicated queue.
Once this is done it commits everything. The trigger message is discarded, and at this point the trigger-activation message becomes available for processing.
A trigger-activating MDB gets the activation message and switches the queue depth trigger back ON on the business queue, so the whole process can kick off again.
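For illustration, here is a minimal sketch (Java EE, JMS 2.0) of what the trigger-handling MDB described above can look like. The queue names, the JNDI lookup and the error handling are hypothetical, not our production code:

```java
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.*;

// Hypothetical sketch of the trigger-handling MDB: drains the business
// queue, then asks for the trigger to be switched back on.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "INITIATION.QUEUE") // hypothetical name
})
public class TriggerMessageBean implements MessageListener {

    @Resource(lookup = "jms/ConnectionFactory") // hypothetical JNDI name
    private ConnectionFactory connectionFactory;

    @Override
    public void onMessage(Message triggerMessage) {
        // Runs in the container-managed transaction; everything below is
        // committed (or rolled back) together with the trigger message.
        try (JMSContext ctx = connectionFactory.createContext()) {
            Queue businessQueue = ctx.createQueue("BUSINESS.QUEUE");
            try (JMSConsumer consumer = ctx.createConsumer(businessQueue)) {
                Message m;
                // Drain every message currently waiting on the business queue.
                while ((m = consumer.receiveNoWait()) != null) {
                    process(m);
                }
            }
            // Ask the trigger-activating MDB to switch the depth trigger back ON.
            Queue activationQueue = ctx.createQueue("TRIGGER.ACTIVATE.QUEUE");
            ctx.createProducer().send(activationQueue, "ACTIVATE");
        }
    }

    private void process(Message m) { /* business logic */ }
}
```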
In most cases everything works fine. However, now and then we get messages in the dead letter queue, which causes support calls from our monitoring tools. Inspecting the messages in the DLQ, we noticed they are trigger messages, and the reason for being sent to the DLQ is that the initiation queue was full (remember our limit of a maximum of one message).
We had no explanation for how this is possible. The IBM documentation says triggering is switched off after the first trigger message is sent, but from what we are seeing it looks like, while the first trigger message is being processed and is thus in an uncommitted state, a second trigger message gets created and sent to the initiation queue. Unable to deliver it (because of the max depth = 1 limit), the queue manager sends it to the dead letter queue.
Any input on why this is possible and how to get around it would be very welcome. False support calls at 2:00 AM are certainly not fun.
Thank you in advance for your inputs.

The explanation for our problem was in the IBM MQ triggering documentation, which we should have read more carefully. The IBM note:
Note: If you stop and restart the queue manager, the TriggerInterval timer is reset. There is a small window during which it is possible to produce two trigger messages. The window exists when the trigger attribute of the queue is set to enabled at the same time as a message arrives and the queue was not previously empty (MQTT_FIRST) or had TriggerDepth or more messages (MQTT_DEPTH).
Once we had the explanation, the solution was simple. Rather than configuring the business queue to send the trigger message straight to the initiation queue, we configured it to send to a different queue with no maximum queue depth restriction set on it. Then we implemented another MDB to get these trigger messages and redirect them to the initiation queue. This MDB was implemented to swallow any exception caused by being unable to deliver a trigger message to the initiation queue because of the existing queue depth limitation.
In other words, with this new MDB we took the job of delivering the trigger message out of the queue manager's hands and into our own, and made the process ignore any delivery failure caused by the queue depth limit.
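A sketch of that forwarding MDB, assuming the provider surfaces the depth-limit violation as a send-time exception; the queue names are hypothetical:

```java
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.*;

// Hypothetical sketch: listens on the unrestricted staging queue and tries
// to pass each trigger message on to the depth-limited initiation queue,
// deliberately swallowing "queue full" failures.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "TRIGGER.STAGING.QUEUE") // hypothetical name
})
public class TriggerForwarderBean implements MessageListener {

    @Resource(lookup = "jms/ConnectionFactory") // hypothetical JNDI name
    private ConnectionFactory connectionFactory;

    @Override
    public void onMessage(Message triggerMessage) {
        try (JMSContext ctx = connectionFactory.createContext()) {
            Queue initiationQueue = ctx.createQueue("INITIATION.QUEUE");
            ctx.createProducer().send(initiationQueue, triggerMessage);
        } catch (JMSRuntimeException e) {
            // The initiation queue already holds its one allowed message, so
            // a duplicate trigger is redundant: drop it instead of letting it
            // bounce to the dead letter queue.
        }
    }
}
```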

Related

Requeue received SQS message through lambda

What works: Using the AWS SQS SDK for .NET, I'm able to receive a message batch and delete individual messages within the message visibility timeout window. I can also do nothing and effectively requeue the message, which then gets dead-lettered if it's requeued a configured number of times.
What doesn't work: I'm trying to do the same thing using a Lambda. I've created a trigger which works, meaning SQS triggers the Lambda, sending it a batch of messages. These messages seem to get deleted from the queue automatically when this happens. I have no control over deleting an individual message or requeuing it.
Throwing an exception in the Lambda seems to make all the messages in the batch remain on the queue. Is there a more elegant way to do this, and is there also a way to do it for individual messages instead of the entire batch?
When you use the SQS Lambda trigger, your messages are automatically deleted from the queue on successful processing.
If you want to, you can poll messages from SQS instead of having the messages trigger your Lambda. To do that, just don't configure a trigger for your Lambda function and have it execute every X amount of time via a CloudWatch Event. On every execution you then poll your SQS queue for new messages. I really don't see why you'd do this, though, since having the trigger and auto-deletion of messages is very handy. If you want to send failed messages to a DLQ, simply set the DLQ on your source SQS queue itself. You can then customise maxReceiveCount and, once this threshold is reached, messages will go to the configured DLQ.
From the docs:
If a message fails processing multiple times, Amazon SQS can send it to a dead letter queue. Configure a dead letter queue on your source queue to retain messages that failed processing for troubleshooting. Set the maxReceiveCount on the queue's redrive policy to at least 5 to avoid sending messages to the dead letter queue due to throttling.
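For reference, here is a sketch of attaching such a redrive policy to the source queue with the AWS SDK for Java v2; the queue URL and DLQ ARN are placeholders:

```java
import java.util.Map;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;
import software.amazon.awssdk.services.sqs.model.SetQueueAttributesRequest;

public class RedrivePolicyExample {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            // Attach a redrive policy to the SOURCE queue: after 5 failed
            // receives the message is moved to the dead letter queue.
            String redrivePolicy =
                "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dlq\","
              + "\"maxReceiveCount\":\"5\"}"; // placeholder ARN
            sqs.setQueueAttributes(SetQueueAttributesRequest.builder()
                .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-source-queue")
                .attributes(Map.of(QueueAttributeName.REDRIVE_POLICY, redrivePolicy))
                .build());
        }
    }
}
```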
Set the batch size to one and throw an exception if you want to requeue it.
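A minimal sketch of that approach with a Java Lambda handler; the processing logic is hypothetical:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

// With batch size 1, throwing makes Lambda leave that single message on the
// queue; after maxReceiveCount failed attempts SQS moves it to the DLQ.
public class SqsHandler implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            if (!process(message.getBody())) {
                // Failing the invocation prevents the automatic delete,
                // effectively requeueing the message.
                throw new RuntimeException("processing failed, requeue");
            }
        }
        return null; // success: Lambda deletes the message for us
    }

    private boolean process(String body) { /* hypothetical business logic */ return true; }
}
```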

Camel JMS Queue Polling and data recovery

Hi, I am new to Camel and have a design question related to JMS queues.
I am receiving sets of data. These data have a reference date and are sent every 15 minutes by a batch process.
I have to process the received data and forward them to another route.
If a given piece of data cannot be processed, I need to reprocess it, and I have to ensure it is processed before the next data set.
So I was thinking of creating a JMS route to receive the data before processing, then process it, then send it to another queue.
FTP --> Process data rows (A) --> JMS Queue --> Processor (B) --> direct:call
If processor B fails, I want the data to be processed before the next data set is sent by FTP (because the second data set may contain an update of the data in the first one).
So I was thinking of using a queue to make sure the data are always processed in the order they are received.
But my experience with JMS, without Camel, is that once a message is consumed from the queue, it is no longer on the queue.
Is that also the case with Camel?
In that case, do I have to retry processing the data, or put it back on the queue?
This "recovery" part is not clear to me and I'd like to understand the patterns that support it.
Many thanks for your help
Gilles
The part "once the object is consumed from the queue it is not in the queue anymore" is not fully correct. When you subscribe to a queue and get a message, you need to process it and send an acknowledgement back to the JMS broker. If the acknowledgement succeeds, the message is removed from the queue. But if the acknowledgement fails, or if your process dies and the connection to the broker breaks, the message is not removed from the queue and will be passed to another consumer.
Most JMS libraries default to a mode where the acknowledgement is sent as soon as the message is received by the consumer, but you can always change this mode and send the acknowledgement manually once your processing has finished successfully.
For Camel JMS (http://camel.apache.org/jms.html) you can use the endpoint option "acknowledgementModeName", which has several possible values, including:
AUTO_ACKNOWLEDGE (default) - the acknowledgement is sent right after the corresponding "from" in your route
CLIENT_ACKNOWLEDGE - lets the application control when the acknowledgement is sent; if no exception is thrown during exchange processing, the message is acknowledged and removed from the queue.
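A minimal sketch of a Camel route (Java DSL) using that option; the endpoint names are hypothetical, and a single consumer is assumed to preserve ordering:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: with CLIENT_ACKNOWLEDGE the message is only acknowledged (and
// removed from the queue) if the exchange completes without an exception,
// so a failure in the processor leaves it on the queue for redelivery.
public class DataSetRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:DATA.IN?acknowledgementModeName=CLIENT_ACKNOWLEDGE"
             + "&concurrentConsumers=1") // single consumer preserves ordering
            .process(exchange -> {
                // processor B: throw here to prevent the ack and force redelivery
            })
            .to("direct:call");
    }
}
```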

Removing a message that is being redelivered

I have a setup with an ActiveMQ broker and a single consumer. The consumer gets a message that it is not able to process because a service it depends on has a bug (once fixed it will be fine). So the message keeps being redelivered (consumer redelivery); we use JMS sessions. With our current configuration it will keep being redelivered every 10 minutes for 1 day. That obviously causes a problem, because other messages are not being consumed.
To solve this problem I accessed the queue through JMX and tried to delete the message, but it is not there. I guess it is cached on the consumer and not visible at the broker.
Is there any way to delete this message other than restarting the application?
Is it possible to configure the redelivery mechanism so that such a message (which eventually causes a livelock) is put at the end of the queue, so that other messages can be processed?
The 10 minutes for 1 day redelivery policy should stay as is.
I think you're right that the messages are stuck in the consumer's prefetch buffer, and I don't know of a way to delete them from there.
I'd change your redelivery policy to send to the DLQ after the second failure, with a much shorter interval between attempts, like 30 seconds, and I'd configure the DLQ strategy as an individualDeadLetterStrategy so you get a separate DLQ containing only messages from this particular queue. Then set up a consumer on that DLQ to move the messages to (the end of) the main queue whenever your reprocessing condition is met (whether that's after a certain delay, or based on reading some flag value from a database, or whatever). This consumer is where you'd implement the "every 10 minutes for 1 day" logic, instead of in the redelivery policy where you currently have it.
That will keep the garbage ones out of the main queue so they don't delay other messages from being consumed, but still ensure that they will be reprocessed later. And it will put them on the broker instead of in the consumer's prefetch buffer, where you can view and delete them.
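A sketch of such a DLQ consumer in plain JMS; the queue names and the readiness check are hypothetical:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch of the DLQ "mover": drain the per-queue DLQ and produce each
// message back onto the end of the main queue once the reprocessing
// condition is met. Queue names and readiness check are hypothetical.
public class DlqMover {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // Transacted session so consume + produce commit atomically.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer fromDlq =
                session.createConsumer(session.createQueue("DLQ.MAIN.QUEUE"));
            MessageProducer toMain =
                session.createProducer(session.createQueue("MAIN.QUEUE"));
            Message message;
            while ((message = fromDlq.receive(1000)) != null) {
                if (readyToReprocess(message)) {
                    toMain.send(message);   // lands at the end of the main queue
                    session.commit();
                } else {
                    session.rollback();     // leave it on the DLQ for now
                    break;                  // try again on the next run
                }
            }
        } finally {
            connection.close();
        }
    }

    private static boolean readyToReprocess(Message m) { /* delay or flag check */ return true; }
}
```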
The only way to get a message to the back of the queue is to produce it back onto the queue. Redelivery policies can only be configured down to the destination level on the connection factory.
Given that you already have a connection, it shouldn't be too hard to create a producer that can either move the given message to a DLQ or produce it back onto the queue when you run into that particular bug.
Setting jms.nonBlockingRedelivery=true on the connection factory resolved the problem. Now even if a message is being redelivered it does not block processing of other messages.
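For reference, the equivalent configuration in code on an ActiveMQConnectionFactory; the broker URL and the numbers are illustrative, not a recommendation:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

// Sketch of the settings discussed above: non-blocking redelivery so one
// poisonous message no longer stalls the rest, plus an explicit redelivery
// policy that hands off to the DLQ quickly.
public class FactoryConfig {
    public static ActiveMQConnectionFactory create() {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        // Equivalent of jms.nonBlockingRedelivery=true on the broker URL.
        factory.setNonBlockingRedelivery(true);

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(2);            // then off to the DLQ
        policy.setInitialRedeliveryDelay(30_000);    // 30 seconds
        policy.setRedeliveryDelay(30_000);
        return factory;
    }
}
```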

Blocking competing clients to take message from ActiveMQ

We have a JMS queue and multiple competing clients are reading from this queue.
Once a message is taken and successfully processed, we want to send the acknowledgement to delete it (i.e. CLIENT_ACKNOWLEDGE).
However, we want to make sure that if one client has picked up the message, another client cannot take it from the queue.
Does ActiveMQ provide this feature out of the box via some configuration?
Moreover:
If the message processing fails after the message has been picked up, so it cannot be acknowledged, we would like another client thread to pick up the message. Is that possible out of the box with configuration, maybe by specifying timeout values?
Regards,
JE
You need to take some time to understand the difference between a Topic and a Queue in order to see why the first question is not an issue: with a Queue, each message is dispatched to only one consumer.
For the second question it depends a bit on the ACK mode you are using and whether you process messages synchronously or asynchronously. Normally, for processing where you want to control redeliveries, you would do the work inside a transaction; if the processing fails, the message is redelivered when the TX is rolled back. ActiveMQ supports redelivery policies, both client side and broker side, that control how many times a message will be redelivered before it is sent to a DLQ.
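A minimal sketch of transaction-controlled consumption with a plain JMS client; the queue name and processing logic are placeholders:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: a message handed to one consumer is locked away from the others,
// and a rollback makes the broker redeliver it (and eventually DLQ it,
// per the redelivery policy).
public class TransactedConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer =
                session.createConsumer(session.createQueue("WORK.QUEUE"));
            Message message = consumer.receive(5000);
            if (message != null) {
                try {
                    process(message);       // hypothetical business logic
                    session.commit();       // message is removed from the queue
                } catch (Exception e) {
                    session.rollback();     // broker will redeliver the message
                }
            }
        } finally {
            connection.close();
        }
    }

    private static void process(Message m) throws Exception { }
}
```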

Oracle AQ same message is delivered twice

I created an AQ in Oracle and wrote 2 JMS consumers in Java to listen to the queue. I have observed that sometimes, if I produce messages into the queue, the count of messages dequeued from the queue is greater than the count enqueued. That means some messages are consumed twice.
I created the queue with the property multiple_consumers => FALSE,
and the JMS consumers are working in CLIENT_ACKNOWLEDGE mode.
Please help me understand the possible reasons for this behaviour and its solution, so that I can replicate the problem, fix the issue, and ensure that the number of messages enqueued equals the number dequeued when multiple JMS consumers listen to the same AQ.
Without having seen your code: CLIENT_ACKNOWLEDGE typically means you are sending acknowledgements manually. If you do not send an ack, the message won't get deleted and the broker will try to redeliver it at a later stage (for example when you restart the connection). This might be the cause of what you are seeing.
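A minimal sketch of CLIENT_ACKNOWLEDGE consumption with a manual ack; the connection factory and queue name are placeholders:

```java
import javax.jms.*;

// Sketch: the message is only removed from the queue once acknowledge()
// is called after successful processing; skipping the ack leads to
// redelivery, which can look like duplicate consumption.
public class ClientAckConsumer {
    public static void consumeOne(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer =
                session.createConsumer(session.createQueue("MY_AQ_QUEUE"));
            Message message = consumer.receive(5000);
            if (message != null) {
                process(message);       // hypothetical business logic
                message.acknowledge();  // without this, the broker redelivers
            }
        } finally {
            connection.close();
        }
    }

    private static void process(Message m) { }
}
```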
