AWS SQS, DLQ in Spring Boot - spring-boot

How do I add a DLQ configuration to my SQS configuration? I'm not sure how to integrate a DLQ with my existing queue. I'm using AWS messaging and not JMS, so the annotation on my listener method is @SqsListener. I have a config class with the following:
@Bean
public SimpleMessageListenerContainer messageListenerContainer(AmazonSQSAsync amazonSQSAsync) {
    SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
    factory.setAmazonSqs(amazonSQSAsync);
    factory.setMaxNumberOfMessages(10);
    SimpleMessageListenerContainer simpleMessageListenerContainer = factory.createSimpleMessageListenerContainer();
    simpleMessageListenerContainer.setQueueStopTimeout(queueStopTimeout * 1000);
    simpleMessageListenerContainer.setMessageHandler(messageHandler(amazonSQSAsync));
    return simpleMessageListenerContainer;
}

@Bean
public QueueMessageHandler messageHandler(AmazonSQSAsync amazonSQSAsync) {
    QueueMessageHandlerFactory queueMessageHandlerFactory = new QueueMessageHandlerFactory();
    queueMessageHandlerFactory.setAmazonSqs(amazonSQSAsync);
    QueueMessageHandler messageHandler = queueMessageHandlerFactory.createQueueMessageHandler();
    return messageHandler;
}

@Bean
public AmazonSQSAsync awsSqsAsync() {
    AmazonSQSAsyncClient amazonSQSAsyncClient = new AmazonSQSAsyncClient(new DefaultAWSCredentialsProviderChain());
    amazonSQSAsyncClient.setRegion(Region.getRegion(Regions.fromName(region)));
    return new AmazonSQSBufferedAsyncClient(amazonSQSAsyncClient);
}
I couldn't find any documentation on how to configure retries correctly so that, once the retry threshold is exceeded, the message goes to a dead-letter queue.

If I am not mistaken, setting the maximum retries and the associated DLQ is done on the broker side, and is not configurable as part of the listener.
Then in your code, you will do something like:
@SqsListener(value = "MainQueue", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void receive(String message, @Header("SenderId") String senderId, Acknowledgment ack) throws IOException {
    ack.acknowledge();
}

@SqsListener(value = "DLQ-AssociatedWithMain")
public void receiveDlq(String message) throws IOException {
}
If a message is NOT acknowledged (i.e. never deleted from the queue), it will be redelivered until the queue's maxReceiveCount is exceeded, and then moved to the DLQ.
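If you manage the queues from code rather than the console, the redrive policy can be attached with the AmazonSQSAsync client you already have. A minimal sketch; mainQueueUrl, dlqUrl and the maxReceiveCount of 5 are placeholders for illustration, not values from the question:
// Sketch: look up the DLQ's ARN, then attach a RedrivePolicy to the main queue.
String dlqArn = amazonSQSAsync.getQueueAttributes(
        new GetQueueAttributesRequest(dlqUrl).withAttributeNames("QueueArn"))
        .getAttributes().get("QueueArn");

String redrivePolicy = "{\"maxReceiveCount\":\"5\",\"deadLetterTargetArn\":\"" + dlqArn + "\"}";

amazonSQSAsync.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl(mainQueueUrl)
        .withAttributes(Collections.singletonMap("RedrivePolicy", redrivePolicy)));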
=== Edited ===
The suggestions below for LocalStack are untested; however, LocalStack (free version) as of now is supposed to support the AWS CLI.
As such, if you look at the AWS CLI, you use aws sqs create-queue to create a queue and --attributes to specify the DLQ information, though I believe you must also create the DLQ itself before referencing its ARN.
create-queue
--queue-name <value>
[--attributes <value>]
[--tags <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
DLQ Attribute Details:
The following attributes apply only to dead-letter queues:
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter-queue.
RedriveAllowPolicy – The string that includes the parameters for the permissions for the dead-letter queue redrive permission and which source queues can specify dead-letter queues as a JSON object. The parameters are as follows:
redrivePermission – The permission type that defines which source queues can specify the current queue as the dead-letter queue. Valid values are:
allowAll – (Default) Any source queues in this Amazon Web Services account in the same Region can specify this queue as the dead-letter queue.
denyAll – No source queues can specify this queue as the dead-letter queue.
byQueue – Only queues specified by the sourceQueueArns parameter can specify this queue as the dead-letter queue.
sourceQueueArns – The Amazon Resource Names (ARN)s of the source queues that can specify this queue as the dead-letter queue and redrive messages. You can specify this parameter only when the redrivePermission parameter is set to byQueue . You can specify up to 10 source queue ARNs. To allow more than 10 source queues to specify dead-letter queues, set the redrivePermission parameter to allowAll .
https://docs.aws.amazon.com/cli/latest/reference/sqs/create-queue.html
In LocalStack SQS Documentation, they have an example of creating an SQS Queue:
awslocal sqs create-queue --queue-name sample-queue
{
    "QueueUrl": "http://localhost:4566/000000000000/sample-queue"
}
So just take this example: create your DLQ first, then create your main queue with --attributes pointing at the DLQ's ARN.
https://docs.localstack.cloud/aws/sqs/
Hope this helps guide you in the right direction,
=== Edited ===
Create Queue and DLQ using LocalStack:
Create DLQ First:
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name MyDLQ --region eu-west-1
Response:
{
    "QueueUrl": "http://localhost:4566/000000000000/MyDLQ"
}
Create an attributes.json file with the contents below:
{
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:eu-west-1:000000000000:MyDLQ\",\"maxReceiveCount\":\"1000\"}",
    "MessageRetentionPeriod": "259200"
}
Create your main queue pointing at the attributes.json file:
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name MyMainQueue --attributes file://attributes.json --region eu-west-1
Response:
{
    "QueueUrl": "http://localhost:4566/000000000000/MyMainQueue"
}
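If the Spring Boot app from the question should talk to these LocalStack queues, the AmazonSQSAsync bean can be pointed at the LocalStack endpoint. A sketch under the assumption that LocalStack runs locally exactly as in the commands above:
@Bean
public AmazonSQSAsync awsSqsAsync() {
    // Assumption: LocalStack on localhost:4566, queues created in eu-west-1 as above
    return AmazonSQSAsyncClientBuilder.standard()
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                    "http://localhost:4566", "eu-west-1"))
            .withCredentials(new DefaultAWSCredentialsProviderChain())
            .build();
}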

Related

AWS SNS Queue Policy

I was trying to create an SQS queue programmatically via the spring-cloud-starter-aws-messaging Maven dependency. I can create the queue and send and receive messages.
Next, I would like to set a queue policy so that only a particular user can consume the messages. This is for multi-tenancy support: each tenant will have its own queue, and I want to make sure a queue is not consumable by other tenants.
Here's the snippet to create a queue:
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_2)
        .build();

Map<String, String> queueAttributes = new HashMap<>();
queueAttributes.put("FifoQueue", "true");
queueAttributes.put("ContentBasedDeduplication", "true");

CreateQueueRequest createFifoQueueRequest = new CreateQueueRequest(queueName)
        .withAttributes(queueAttributes);
CreateQueueResult createQueueResult = sqs.createQueue(createFifoQueueRequest);
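A queue policy is just another queue attribute (Policy), so one way to restrict consumption is to attach an IAM policy document after creating the queue. A minimal sketch; the principal and queue ARNs are placeholders, not values from the question:
// Sketch: allow only one tenant principal to receive messages (placeholder ARNs).
String policyJson = "{"
        + "\"Version\":\"2012-10-17\","
        + "\"Statement\":[{"
        + "\"Effect\":\"Allow\","
        + "\"Principal\":{\"AWS\":\"arn:aws:iam::123456789012:user/tenant-a\"},"
        + "\"Action\":\"sqs:ReceiveMessage\","
        + "\"Resource\":\"arn:aws:sqs:us-east-2:123456789012:tenant-a-queue.fifo\""
        + "}]}";

sqs.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl(createQueueResult.getQueueUrl())
        .withAttributes(Collections.singletonMap("Policy", policyJson)));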

Spring JMS DefaultMessageListenerContainer Polling frequency

I am using DefaultMessageListenerContainer to consume messages from an ActiveMQ queue, as shown below. With this implementation, is there any polling mechanism? Does the listener poll the queue every second or so to see if there is a new message, or is the onMessage method invoked whenever a new message arrives in the queue? If it uses polling, how can I increase or decrease the polling frequency?
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setMessageListener(new MessageJmsListener());

public class MessageJmsListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        if (message instanceof TextMessage) {
            try {
                // process the message and create a record in the database
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
}
The container polls the JMS client, but the broker pushes messages to the client.
So, no, the container does not poll the queue directly.
If there are no messages in the queue, the container's receive call will time out after receiveTimeout and immediately re-poll, and it will get the next message as soon as the broker sends it.
The prefetch determines how many messages are sent to the consumer by the broker; so that might impact performance (but it's 1000 by default, I think, with recent ActiveMQ versions).
Setting the prefetch to 1 will give you the slowest delivery rate.
If you want to slow things down, you can add a Thread.sleep() in your listener.
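For completeness, both knobs mentioned above can be set in code. A sketch, assuming an ActiveMQ client and a broker at tcp://localhost:61616 (both assumptions, not from the question):
// Prefetch: how many messages the broker pushes to the consumer ahead of processing
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
connectionFactory.getPrefetchPolicy().setQueuePrefetch(1);

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setDestinationName("myQueue");
container.setMessageListener(new MessageJmsListener());
// receiveTimeout: how long each receive() call blocks before the container loops and polls again
container.setReceiveTimeout(1000);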

How to route messages to JMS MessageListener/Consumer when using Camel Resequencer?

Scenario 1:
I have Message producer and consumer and the flow of the application is as follows:
producer -> Queue -> Consumer
Scenario 2:
Now we have introduced Camel to re-sequence the messages.So The flow of application is as follows :
producer -> Queue1 -> Camel(Resequence) ->Queue2 -> consumer
Question:
Can we do Scenario 2 without using Queue2 in Camel? I want the messages to be consumed directly by the consumer after the Camel re-sequence step, so the application flow would be as follows:
producer -> Queue1 -> Camel(Resequence) -> consumer
To send a message:
jmsTemplate.convertAndSend("mailbox", new Email("info@example.com", "Hello"));
Camel re-sequence
from("jms:queue1").resequence(header("myprop")).batch().to("queue2");
PS: I have used message groups so that messages are consumed by specific consumers; the solution should maintain this as well.
In that case you wouldn't implement the JMS consumer yourself but delegate message consumption to Camel's JMS component - you already did that with from("jms:queue1").
The logic that you would invoke in your consumer in "scenario 2" would then be moved to a Camel processor:
from("jms:queue1")
.resequence(header("myprop")).batch()
.process(new MessageProcessor());
The Camel processor working with the received message:
public class MessageProcessor implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        Message in = exchange.getIn();
        Object body = in.getBody();
        // body contains the content of the received JMS message
        ...
    }
}
or shorter using Java 8 lambda syntax:
from("jms:queue1")
.resequence(header("myprop")).batch()
.process().message(message -> {
Object body = message.getBody();
// body contains the content of the received JMS message
...
});
This consumer-side logic should be transparent regarding message groups. The broker automatically chooses the right consumer owning a certain message group when dispatching the message; you don't need to worry about it in your consumer-side code.
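If it helps, message groups are driven by the JMSXGroupID header set when sending, so nothing changes on the consumer side. A minimal sketch on the producer, reusing the jmsTemplate call from the question (the group id value is a placeholder):
// Sketch: assign the message to a group; the broker dispatches all messages of a
// group to the same consumer. "tenant-42" is a placeholder group id.
jmsTemplate.convertAndSend("mailbox", new Email("info@example.com", "Hello"), message -> {
    message.setStringProperty("JMSXGroupID", "tenant-42");
    return message;
});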

Spring Cloud Stream with Kafka - message not being read after restarting the consumer

I have a micro service based application which reads messages from a Kafka topic. When the service is down, if there are any messages being written on the topic, I want the consumer to read those messages when it is up and running the next time. But I am missing all the messages when the service was down. How can I get the consumer to read the messages that were not read when the service was down?
I do get all the messages that are written to the topic while my microservice is up and running.
My application.properties:
spring.cloud.stream.bindings.input.destination=test
spring.cloud.stream.bindings.input.consumer.headerMode=raw
spring.cloud.stream.bindings.input.consumer.startOffset=latest
spring.cloud.stream.bindings.input.consumer.resetOffsets=true
spring.cloud.stream.bindings.input.consumer.instanceCount=3
spring.cloud.stream.bindings.input.consumer.autoCommitOffset=false
// this is my consumer code under my microservice root dir
@EnableBinding(Sink.class)
public class Consumer {

    @ServiceActivator(inputChannel = Sink.INPUT)
    public void consoleSink(Object payload) {
        logger.info("Type: " + payload.getClass() + " which is byte array");
        logger.info("Payload: " + new String((byte[]) payload));
    }
}
I appreciate any clue to fix this issue.
Setting the properties below helped me fix the issue (note in particular the explicit consumer group, which gives the consumer a durable identity on the topic so it can pick up where it left off after a restart).
spring.cloud.stream.bindings.input.destination=test
spring.cloud.stream.bindings.input.consumer.headerMode=raw
spring.cloud.stream.bindings.input.consumer.startOffset=latest
spring.cloud.stream.bindings.input.consumer.resetOffsets=true
spring.cloud.stream.bindings.input.consumer.instanceCount=3
spring.cloud.stream.bindings.input.consumer.autoCommitOffset=false
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset=false
spring.cloud.stream.kafka.binder.autoCreateTopics=false
spring.cloud.stream.bindings.input.group=testGroup50
spring.cloud.stream.bindings.input.partitioned=false
Thanks,
BR

Spring Cloud Stream dynamic channels

I am using Spring Cloud Stream and want to programmatically create and bind channels. My use case is that during application startup I receive the dynamic list of Kafka topics to subscribe to. How can I then create a channel for each topic?
I ran into a similar scenario recently, and below is my sample of creating SubscribableChannels dynamically.
ConsumerProperties consumerProperties = new ConsumerProperties();
consumerProperties.setMaxAttempts(1);
BindingProperties bindingProperties = new BindingProperties();
bindingProperties.setConsumer(consumerProperties);
bindingProperties.setDestination(retryTopic);
bindingProperties.setGroup(consumerGroup);
bindingServiceProperties.getBindings().put(consumerName, bindingProperties);
SubscribableChannel channel = (SubscribableChannel)bindingTargetFactory.createInput(consumerName);
beanFactory.registerSingleton(consumerName, channel);
channel = (SubscribableChannel)beanFactory.initializeBean(channel, consumerName);
bindingService.bindConsumer(channel, consumerName);
channel.subscribe(consumerMessageHandler);
I had to do something similar for the Camel Spring Cloud Stream component.
Perhaps the consumer code to bind a destination (really just a String indicating the channel name) would be useful to you?
In my case I only bind a single destination, however I don't imagine it being much different conceptually for multiple destinations.
Below is the gist of it:
@Override
protected void doStart() throws Exception {
    SubscribableChannel bindingTarget = createInputBindingTarget();
    bindingTarget.subscribe(message -> {
        // have your way with the received incoming message
    });

    endpoint.getBindingService().bindConsumer(bindingTarget,
            endpoint.getDestination());

    // at this point the binding is done
}

/**
 * Create a {@link SubscribableChannel} and register it in the
 * {@link org.springframework.context.ApplicationContext}
 */
private SubscribableChannel createInputBindingTarget() {
    SubscribableChannel channel = endpoint.getBindingTargetFactory()
            .createInputChannel(endpoint.getDestination());
    endpoint.getBeanFactory().registerSingleton(endpoint.getDestination(), channel);
    channel = (SubscribableChannel) endpoint.getBeanFactory().initializeBean(channel,
            endpoint.getDestination());
    return channel;
}
See here for the full source for more context.
I had a task where I did not know the topics in advance. I solved it by having one input channel which listens to all the topics I need.
https://docs.spring.io/spring-cloud-stream/docs/Brooklyn.RELEASE/reference/html/_configuration_options.html
Destination
The target destination of a channel on the bound middleware (e.g., the RabbitMQ exchange or Kafka topic). If the channel is bound as a consumer, it could be bound to multiple destinations and the destination names can be specified as comma-separated String values. If not set, the channel name is used instead.
So my configuration
spring:
  cloud:
    stream:
      default:
        consumer:
          concurrency: 2
          partitioned: true
      bindings:
        # inputs
        input:
          group: application_name_group
          destination: topic-1,topic-2
          content-type: application/json;charset=UTF-8
Then I defined one consumer which handles messages from all these topics.
@Component
@EnableBinding(Sink.class)
public class CommonConsumer {

    private final static Logger logger = LoggerFactory.getLogger(CommonConsumer.class);

    @StreamListener(target = Sink.INPUT)
    public void consumeMessage(final Message<Object> message) {
        logger.info("Received a message: \nmessage:\n{}", message.getPayload());
        // Here I define logic which handles messages depending on message headers and topic.
        // In my case I have configuration which forwards these messages to webhooks, so I need a mapping of topic name -> webhook URI.
    }
}
Note that in your case this may not be a solution. I needed to forward messages to webhooks, so I could have a configuration mapping.
I also thought about other ideas.
1) Use a Kafka client consumer directly, without Spring Cloud (see the sketch after this list).
2) Create a predefined number of inputs, for example 50.
input-1
input-2
...
input-50
And then have a configuration for some of these inputs.
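For idea 1, a minimal sketch with the plain Kafka client; the broker address, group id, deserializers and the dynamicTopicList variable are assumptions for illustration:
// Sketch: subscribe a plain KafkaConsumer to a topic list received at startup.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // assumption
props.put("group.id", "application_name_group");     // assumption
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(dynamicTopicList); // the list of topics received at startup

while (true) {
    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
        // handle each record, e.g. dispatch by record.topic()
    }
}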
Related discussions
Spring cloud stream to support routing messages dynamically
https://github.com/spring-cloud/spring-cloud-stream/issues/690
https://github.com/spring-cloud/spring-cloud-stream/issues/1089
We use Spring Cloud 2.1.1 RELEASE
MessageChannel messageChannel = createMessageChannel(channelName);
messageChannel.send(getMessageBuilder().apply(data));

public MessageChannel createMessageChannel(String channelName) {
    return (MessageChannel) applicationContext.getBean(channelName);
}

public Function<Object, Message<Object>> getMessageBuilder() {
    return payload -> MessageBuilder
            .withPayload(payload)
            .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
            .build();
}
For the incoming messages, you can explicitly use BinderAwareChannelResolver to dynamically resolve the destination. You can check this example, where the router sink uses the binder-aware channel resolver.
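A minimal sketch of using BinderAwareChannelResolver to resolve (and bind, if needed) a destination by name at runtime and publish to it; the destination name and the send method are placeholders for illustration:
// Sketch: resolve a channel for a destination name at runtime and publish to it.
@Autowired
private BinderAwareChannelResolver resolver;

public void send(String destination, Object payload) {
    // the destination is resolved to a bound MessageChannel on first use
    resolver.resolveDestination(destination)
            .send(MessageBuilder.withPayload(payload).build());
}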
