Requeue received SQS message through lambda - aws-lambda

What works: Using the AWS SQS SDK for .NET, I'm able to receive a message batch and delete individual messages within the message visibility timeout window. I can also do nothing, which effectively requeues the message; it then gets dead-lettered if it's requeued a configured number of times.
What doesn't work: I'm trying to do the same thing using a Lambda. I've created a trigger, which works: SQS triggers the Lambda and sends it a batch of messages. These messages seem to get deleted from the queue automatically when this happens, so I have no control over deleting an individual message or requeuing it.
Throwing an exception in the Lambda seems to cause all the messages in the batch to remain in the queue. Is there a more elegant way to do this, and is there a way to do it for individual messages instead of the entire batch?

When you use the SQS Lambda trigger, your messages are automatically deleted from the queue upon successful processing.

If you want to, you can poll messages from SQS instead of having them trigger your Lambda. To do that, just don't configure a trigger for your Lambda function and have it execute every X amount of time via a CloudWatch Events rule, polling your SQS queue for new messages on every execution. I really don't see why you'd do that, though, since the trigger and auto-deletion of messages are very handy.

If you want to send failed messages to a DLQ, simply set the DLQ on your source SQS queue itself. You can then customise maxReceiveCount; once this threshold is reached, messages go to the configured DLQ.
From the docs:
If a message fails processing multiple times, Amazon SQS can send it to a dead letter queue. Configure a dead letter queue on your source queue to retain messages that failed processing for troubleshooting. Set the maxReceiveCount on the queue's redrive policy to at least 5 to avoid sending messages to the dead letter queue due to throttling.
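
For instance, attaching a redrive policy programmatically might look like the following sketch with the AWS SDK for Java v2 (the queue URL and DLQ ARN are placeholders; the same attributes can be set from the .NET SDK or the console):

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;
import software.amazon.awssdk.services.sqs.model.SetQueueAttributesRequest;

import java.util.Map;

public class ConfigureDlq {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            // Redrive policy: after 5 failed receives, SQS moves the message to the DLQ.
            String redrivePolicy =
                "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dlq\","
              + "\"maxReceiveCount\":\"5\"}";

            sqs.setQueueAttributes(SetQueueAttributesRequest.builder()
                .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-source-queue")
                .attributes(Map.of(QueueAttributeName.REDRIVE_POLICY, redrivePolicy))
                .build());
        }
    }
}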

Set the batch size to one and throw an exception if you want to requeue it.
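
As an illustration of that approach, a minimal Java handler might look like the sketch below (processMessage and its failure condition are hypothetical); with a batch size of one, throwing returns just that message to the queue:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

public class RequeueOnFailureHandler implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        // With batch size = 1, this loop sees a single message.
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            if (!processMessage(message.getBody())) {
                // Throwing keeps the message on the queue; SQS redelivers it after
                // the visibility timeout and dead-letters it once maxReceiveCount
                // is exceeded.
                throw new RuntimeException("Processing failed, requeueing message "
                        + message.getMessageId());
            }
        }
        return null;
    }

    private boolean processMessage(String body) {
        // Hypothetical business logic; returns false on failure.
        return body != null && !body.isEmpty();
    }
}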

Related

Difference between Mule's jms:consume and jms:listener

Using Mule 4.4 on-premises along with Apache ActiveMQ, I am trying to get a better understanding of how Mule handles messaging. I searched the internet but couldn't find any details about this.
I have a jms:listener:
<jms:listener doc:name="Listener" config-ref="JMS_Config" destination="Consumer.mine2.VirtualTopic.mine.test">
    <jms:consumer-type>
        <jms:queue-consumer />
    </jms:consumer-type>
</jms:listener>
and I have a jms:consume:
<jms:consume doc:name="Consume" config-ref="JMS_Config" destination="Consumer.mine1.VirtualTopic.mine.test">
    <jms:consumer-type>
        <jms:queue-consumer />
    </jms:consumer-type>
</jms:consume>
To me both seem to be doing the same job i.e. consume messages from a queue / topic. So why do we have these two components in Mule?
jms:listener is a source, so you can use it to trigger a flow whenever there is a new message in the queue.
jms:consume is an operation, so you can use it anywhere within a flow's execution, like an HTTP Request component that you put in the middle of a flow.
Both of them consume a message from the queue/topic. However, when you use a listener you are basically saying: "There is a queue; I do not know when a new message will come in, but whenever it comes I need to perform these actions."
When you use a consume operation, you are saying: "I am expecting some message soon, and with it I will do these actions."
Now in both cases a message may not come at all, and each has its own way of dealing with that. A listener, since it is a source, will simply not trigger the flow and will keep waiting. A consume will block your execution until a message arrives, or you can configure a timeout so it is not blocked forever.
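Under the hood, these map onto the two consumption styles of the JMS API that Mule's connector wraps. A rough plain-JMS sketch of the same distinction (broker URL is a placeholder, reusing the destination names from the question):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ListenerVsConsume {
    public static void main(String[] args) throws Exception {
        Connection connection =
            new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();

        // jms:listener style: a callback on its own session; the "flow" runs
        // whenever a message arrives and nothing blocks waiting for it.
        Session listenerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        listenerSession
            .createConsumer(listenerSession.createQueue("Consumer.mine2.VirtualTopic.mine.test"))
            .setMessageListener(message -> System.out.println("Triggered: " + message));

        // jms:consume style: pull one message at this exact point in the
        // execution, blocking up to 5 seconds (receive returns null on timeout).
        Session consumeSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = consumeSession
            .createConsumer(consumeSession.createQueue("Consumer.mine1.VirtualTopic.mine.test"));
        Message m = consumer.receive(5000);
        System.out.println(m != null ? "Consumed: " + m : "Timed out, no message yet");
        // In a real application, keep the connection open for the listener and
        // close it on shutdown.
    }
}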
A common use case is reprocessing messages from a DLQ. Generally when you use a queue, you also have a DLQ so that messages that failed during processing of the "main" queue can be sent to the DLQ and reprocessed later.
In this architecture, you will typically use the jms:listener only on the main queue for the processing of the messages. You will then have a separate flow with an http:listener, so that you can hit an HTTP endpoint whenever you are ready to reprocess the messages from the DLQ. This flow will consume all the messages from the DLQ (probably in a loop) and publish them back to the main queue.

Do messages get deleted from the queue after a read operation in IBM MQ?

I am using NiFi to get data from IBM MQ, and it is working fine. My question is: once a message is read from an MQ queue, does it get deleted from the queue? How can I just read messages from the queue without deleting them?
My question is once the message is read from an MQ queue, does it get deleted from the queue?
Yes, that is the default behavior.
How to just read messages from the queue without deleting them from the queue?
You use the option MQGMO_BROWSE_FIRST followed by MQGMO_BROWSE_NEXT on the MQGET API calls.
You can also open the queue for browse only, i.e. the MQOO_BROWSE option on the MQOPEN API call.
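If you are going through a JMS client rather than the MQI directly, the non-destructive equivalent is a QueueBrowser. A minimal sketch with the IBM MQ JMS classes (all connection details are placeholders):

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class BrowseQueue {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for your queue manager.
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mqhost");
        cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "DEV.APP.SVRCONN");
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        try (Connection conn = cf.createConnection()) {
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("DEV.QUEUE.1");

            // A QueueBrowser reads messages without removing them from the queue.
            QueueBrowser browser = session.createBrowser(queue);
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message m = (Message) messages.nextElement();
                System.out.println("Browsed (not consumed): " + m.getJMSMessageID());
            }
        }
    }
}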
It sounds as if you would like to use a "publish/subscribe" model rather than a "point-to-point" model.
From ActiveMQ:
Topics: In JMS a Topic implements publish and subscribe semantics. When you publish a message it goes to all the subscribers who are interested, so zero to many subscribers will receive a copy of the message. Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message.

Queues: A JMS Queue implements load balancer semantics. A single message will be received by exactly one consumer. If there are no consumers available at the time the message is sent it will be kept until a consumer is available that can process the message. If a consumer receives a message and does not acknowledge it before closing then the message will be redelivered to another consumer. A queue can have many consumers with messages load balanced across the available consumers.
If you have a queue, when a consumer consumes that message, it is removed from the queue so that the next consumer consumes the next message. With a topic, multiple consumers can be subscribed to that topic and retrieve the same message without being exclusive.
If neither of these works for you, I'm not sure what semantics you're looking for: a "queue" which doesn't delete the message when it is consumed would never let a consumer access any but the first message.

How can I delete Messages from a JMS Queue?

I have several jobs that each have multiple messages queued.
The messages for each job are randomly interleaved.
If a user decides to cancel a job I want to remove all the messages that are part of that job from the queue.
I have been able to use browse() to find all the messages to remove, but I haven't been able to figure out how to remove them.
I tried getting rid of them by using receiveSelected(), but it just hangs.
(I am using JmsTemplate.)
JMS does not define administration-type functions, such as deleting a message from the queue.
The programmatic way is to consume the message. Alternatively, there are messaging management tools that allow you to do this without programming.
There is no JMS API to remove a message. However, you can invoke the purge or removeMessage operations (or others, as per your requirement) on the MBean org.apache.activemq:type=Broker,brokerName=amq,destinationType=Queue,destinationName=testQ to delete messages.
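A rough JMX sketch of invoking those operations (the broker JMX URL, object name, and message ID are placeholders):

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.QueueViewMBean;

public class PurgeQueue {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName queueName = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=amq,"
                + "destinationType=Queue,destinationName=testQ");
            QueueViewMBean queue = MBeanServerInvocationHandler.newProxyInstance(
                mbs, queueName, QueueViewMBean.class, true);

            queue.removeMessage("ID:some-jms-message-id");  // delete one message by ID
            // queue.purge();                               // or delete everything
        }
    }
}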
You are on the right track. Consuming those messages using a selector is the way to go - such as with JmsTemplate receiveSelected.
If it "hangs", it likely means you have no matching messages on the queue. Can you identify your messages on some Property, such as JMSType or other StringProperty? Make sure you can and supply a JMS Selector.
I.e. if your jobs are initiated by user X, then set some property such as "initiatingUser" to "x". Then to consume all messages, use the selector initiatingUser='X'.
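A sketch of that approach with JmsTemplate (the queue name and the jobId property are hypothetical); setting a receive timeout avoids the indefinite blocking you observed:

import javax.jms.Message;
import org.springframework.jms.core.JmsTemplate;

public class JobMessageRemover {
    private final JmsTemplate jmsTemplate;

    public JobMessageRemover(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
        // Without a timeout, receiveSelected blocks indefinitely when nothing matches.
        this.jmsTemplate.setReceiveTimeout(1000);
    }

    // Drains (and thereby deletes) all messages belonging to one job.
    public int cancelJob(String jobId) {
        String selector = "jobId = '" + jobId + "'";  // assumes a jobId property set on send
        int removed = 0;
        Message message;
        while ((message = jmsTemplate.receiveSelected("jobs.queue", selector)) != null) {
            removed++;  // consuming a message removes it from the queue
        }
        return removed;
    }
}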

MQ queue MQTT_DEPTH trigger generating multiple trigger messages

We have an HA application which runs on a few servers pointing to the same queue manager. Because message sequencing is important, we use a semaphore queue to make sure no two apps are reading messages from a business queue, even if there are multiple messages waiting to be processed on that queue.
The workflow is something like the below:
The business queue has an MQTT_DEPTH trigger set on it to fire when the queue depth equals one.
The business queue receives one or multiple messages.
The trigger fires when the queue depth becomes one and puts a trigger message onto an initiation queue.
Being a queue depth trigger, it should switch itself OFF after delivering the trigger message to the initiation queue.
There is an MDB listening on that initiation queue which gets the trigger message.
Because sequence is crucial for us, we put a limit on the initiation queue of a maximum queue depth of one. This way only one MDB can get a message at a time.
When the trigger message arrives, the MDB goes and reads all messages from the business queue, and when done it sends a trigger-activate message to a dedicated queue.
Once this is done it commits everything. The trigger-generated message gets discarded. At this point the trigger-activation message becomes available for processing.
A trigger-activating MDB gets the activation message and switches the queue depth trigger back on on the business queue, so the whole process can kick off again.
In most cases all works fine. However, now and then we get some messages in the dead letter queue, which triggers support calls from our monitoring tools. Inspecting the messages in the DLQ, we noticed they are trigger-generated messages, and the reason for being sent to the DLQ is that the initiation queue was full (remember our limit of maximum one message).
We had no explanation for how this is possible. The IBM documentation says the triggering is switched off after sending the first trigger-generated message; however, from what we are experiencing, it looks like while the first trigger-generated message is being processed, and as such is in an uncommitted state, a second trigger-generated message gets created and sent to the initiation queue. Not being able to deliver it (because of the max depth = 1 limit), the queue manager sends it to the dead letter queue.
Any input about why this is possible and how to get around it would be very welcome. False support calls at 2:00 AM are for sure not funny.
Thank you in advance for your inputs.
The explanation for our problem was in the IBM MQ triggering documentation, which we should have read more carefully. From the IBM note:
Note: If you stop and restart the queue manager, the TriggerInterval timer is reset. There is a small window during which it is possible to produce two trigger messages. The window exists when the trigger attribute of the queue is set to enabled at the same time as a message arrives and the queue was not previously empty (MQTT_FIRST) or had TriggerDepth or more messages (MQTT_DEPTH).
Once we had the explanation, the solution was simple. Rather than configuring the business queue to send the trigger-generated message straight to the initiation queue, we configured it to send to a different queue with no maximum queue depth restriction set on it. Then we implemented another MDB to get these trigger-generated messages and redirect them to the initiation queue. This MDB was implemented to swallow any exception caused by being unable to deliver a trigger-generated message to the initiation queue because of the existing queue depth limitation.
In other words, with this new MDB we took the role of delivering the trigger-generated message out of the queue manager's hands and into our own, and made the process ignore any delivery failure caused by the queue depth limit.
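A rough sketch of what such a forwarding MDB could look like (destination names and resource wiring are illustrative and depend on your application server; assumes a JMS 2.0 container):

import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;

@MessageDriven(mappedName = "jms/TriggerStagingQueue")
public class TriggerForwarderMDB implements MessageListener {

    @Resource(lookup = "jms/ConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/InitiationQueue")
    private Queue initiationQueue;

    @Override
    public void onMessage(Message triggerMessage) {
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(initiationQueue, triggerMessage);
        } catch (RuntimeException e) {
            // Deliberately swallow the failure: if the initiation queue is at its
            // max depth of one, a trigger message is already waiting there, so
            // this duplicate can safely be discarded instead of dead-lettered.
        }
    }
}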

Removing a message that is being redelivered

I have a setup with an ActiveMQ broker and a single consumer. The consumer gets a message that it is not able to process, because a service it depends on has a bug (once fixed, it will be fine). So the message keeps being redelivered (consumer redelivery); we use JMS sessions. With our current configuration it will keep being redelivered every 10 minutes for 1 day. That obviously causes a problem, because other messages are not being consumed.
In order to solve this problem I have accessed the queue through JMX and tried to delete that message but it is not there. I guess it is cached on the consumer and not visible at the broker.
Is there any way to delete this message other than restarting the application?
Is it possible to configure the redelivery mechanism so that such a message (one that eventually causes a livelock) is put at the end of the queue so that other messages can be processed?
The 10 minutes for 1 day redelivery policy should stay as is.
I think you're right that the messages are stuck in the consumer's prefetch buffer, and I don't know of a way to delete them from there.
I'd change your redelivery policy to send to the DLQ after the second failure, with a much shorter interval between them, like 30 seconds, and I'd configure the DLQ strategy as an individualDeadLetterStrategy so you get a separate DLQ containing only messages from this particular queue. Then set up a consumer on this DLQ to move the messages to (the end of) the main queue whenever your reprocessing condition is met (whether that's after a certain delay, or based on reading some flag value from a database, or whatever). This consumer is where you'd implement "every 10 minutes for 1 day" logic, instead of in the redelivery policy where you currently have it.
That will keep the garbage ones out of the main queue so they don't delay other messages from being consumed, but still ensure that they will be reprocessed later. And it will put them on the broker instead of in the consumer's prefetch buffer, where you can view and delete them.
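A sketch of the client-side part of that policy change with the ActiveMQ API (values illustrative; the individualDeadLetterStrategy itself is broker-side configuration, not shown):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static ActiveMQConnectionFactory createFactory() {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(30_000);  // 30 seconds between attempts
        policy.setRedeliveryDelay(30_000);
        policy.setMaximumRedeliveries(2);          // then the broker dead-letters it
        return factory;
    }
}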
The only way to get it to the back of the queue is to produce it to the queue again. Redelivery policies can only be configured down to the destination level on the connection factory.
Given that you already have a connection, it shouldn't be too hard to create a producer that can either move the given message to a DLQ or produce it back to the queue when you run into that particular bug.
Setting jms.nonBlockingRedelivery=true on the connection factory resolved the problem. Now even if a message is being redelivered, it does not block the processing of other messages.
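
For reference, that flag is a client connection URI option (a minimal sketch):

import org.apache.activemq.ActiveMQConnectionFactory;

public class NonBlockingFactory {
    public static ActiveMQConnectionFactory create() {
        // Redeliveries are scheduled without holding up the other
        // prefetched messages on this consumer.
        return new ActiveMQConnectionFactory(
            "tcp://localhost:61616?jms.nonBlockingRedelivery=true");
    }
}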
