AWS SQS Queue Policy - spring-boot

I was trying to create an SQS queue programmatically via the spring-cloud-starter-aws-messaging Maven dependency. I can create the queue and send and receive messages.
The next thing I would like to do is set the queue policy so that only a particular user can consume the messages. This is for multi-tenancy support: each tenant will have its own queue, and I just want to make sure a queue is not consumable by other tenants.
Here's the snippet to create a queue:
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_2)
        .build();

// FIFO queues require the queue name to end with ".fifo"
Map<String, String> queueAttributes = new HashMap<>();
queueAttributes.put("FifoQueue", "true");
queueAttributes.put("ContentBasedDeduplication", "true");

CreateQueueRequest createFifoQueueRequest = new CreateQueueRequest(queueName)
        .withAttributes(queueAttributes);
CreateQueueResult createQueueResult = sqs.createQueue(createFifoQueueRequest);
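For context (this is not part of the original snippet): restricting who can consume a queue is typically done through the queue's access Policy attribute, which can be set after the queue is created. A minimal sketch, assuming the same AWS SDK v1 client as above; the tenant user ARN and queue ARN are placeholders:

// Placeholder ARNs for illustration only
String tenantUserArn = "arn:aws:iam::123456789012:user/tenant-a";
String queueArn = "arn:aws:sqs:us-east-2:123456789012:tenant-a-queue.fifo";

// IAM-style policy allowing only the tenant's principal to receive messages
String policyJson = "{"
        + "\"Version\":\"2012-10-17\","
        + "\"Statement\":[{"
        + "\"Effect\":\"Allow\","
        + "\"Principal\":{\"AWS\":\"" + tenantUserArn + "\"},"
        + "\"Action\":\"sqs:ReceiveMessage\","
        + "\"Resource\":\"" + queueArn + "\"}]}";

Map<String, String> policyAttribute = new HashMap<>();
policyAttribute.put("Policy", policyJson);
sqs.setQueueAttributes(new SetQueueAttributesRequest(
        createQueueResult.getQueueUrl(), policyAttribute));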

Related

AWS SQS, DLQ in spring boot

How do I add a DLQ configuration to my SQS configuration? I'm not sure how to integrate a DLQ with my existing queue. I'm using AWS messaging and not JMS, so my annotation would be @SqsListener for my listener method. I have a config class that has the following:
@Bean
public SimpleMessageListenerContainer messageListenerContainer(AmazonSQSAsync amazonSQSAsync) {
    SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
    factory.setAmazonSqs(amazonSQSAsync);
    factory.setMaxNumberOfMessages(10);
    SimpleMessageListenerContainer simpleMessageListenerContainer = factory.createSimpleMessageListenerContainer();
    simpleMessageListenerContainer.setQueueStopTimeout(queueStopTimeout * 1000);
    simpleMessageListenerContainer.setMessageHandler(messageHandler(amazonSQSAsync));
    return simpleMessageListenerContainer;
}

@Bean
public QueueMessageHandler messageHandler(AmazonSQSAsync amazonSQSAsync) {
    QueueMessageHandlerFactory queueMessageHandlerFactory = new QueueMessageHandlerFactory();
    queueMessageHandlerFactory.setAmazonSqs(amazonSQSAsync);
    QueueMessageHandler messageHandler = queueMessageHandlerFactory.createQueueMessageHandler();
    return messageHandler;
}

@Bean
public AmazonSQSAsync awsSqsAsync() {
    AmazonSQSAsyncClient amazonSQSAsyncClient = new AmazonSQSAsyncClient(new DefaultAWSCredentialsProviderChain());
    amazonSQSAsyncClient.setRegion(Region.getRegion(Regions.fromName(region)));
    return new AmazonSQSBufferedAsyncClient(amazonSQSAsyncClient);
}
I couldn't find the right documentation on configuring retries correctly so that, once the retries exceed the threshold, the message goes to a dead-letter queue.
If I am not mistaken, setting the maximum retries and the associated DLQ is done on the broker (queue) side, and is not configurable as part of the listener.
Then in your code, you will do something like:
@SqsListener(value = "MainQueue", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void receive(String message, @Header("SenderId") String senderId, Acknowledgment ack) throws IOException {
    ack.acknowledge();
}

@SqsListener(value = "DLQ-AssociatedWithMain")
public void receiveDlq(String message) throws IOException {
}
If the message is NOT acknowledged, it will be redelivered up to a configured maximum number of times and then sent to the DLQ.
=== Edited ===
The suggestions below for LocalStack are untested; however, LocalStack (free version) currently supports the AWS CLI:
If you look at the AWS CLI, you use aws sqs create-queue to create a queue, and --attributes to specify the DLQ information, though I believe you must also create the DLQ itself before referencing its ARN.
create-queue
--queue-name <value>
[--attributes <value>]
[--tags <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
DLQ Attribute Details:
The following attributes apply only to dead-letter queues:
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter queue.
RedriveAllowPolicy – The string that includes the parameters for the permissions for the dead-letter queue redrive permission and which source queues can specify dead-letter queues as a JSON object. The parameters are as follows:
redrivePermission – The permission type that defines which source queues can specify the current queue as the dead-letter queue. Valid values are:
allowAll – (Default) Any source queues in this Amazon Web Services account in the same Region can specify this queue as the dead-letter queue.
denyAll – No source queues can specify this queue as the dead-letter queue.
byQueue – Only queues specified by the sourceQueueArns parameter can specify this queue as the dead-letter queue.
sourceQueueArns – The Amazon Resource Names (ARNs) of the source queues that can specify this queue as the dead-letter queue and redrive messages. You can specify this parameter only when the redrivePermission parameter is set to byQueue. You can specify up to 10 source queue ARNs. To allow more than 10 source queues to specify dead-letter queues, set the redrivePermission parameter to allowAll.
https://docs.aws.amazon.com/cli/latest/reference/sqs/create-queue.html
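For example, a queue that should only be usable as a dead-letter queue by one specific source queue could be created with an attributes file like the following (untested suggestion; the ARN is a placeholder):
{
    "RedriveAllowPolicy": "{\"redrivePermission\":\"byQueue\",\"sourceQueueArns\":[\"arn:aws:sqs:eu-west-1:000000000000:MyMainQueue\"]}"
}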
In the LocalStack SQS documentation, there is an example of creating an SQS queue:
awslocal sqs create-queue --queue-name sample-queue
{
    "QueueUrl": "http://localhost:4566/000000000000/sample-queue"
}
So just take this example, create your DLQ, then create your queue with --attributes pointing to the DLQ ARN.
https://docs.localstack.cloud/aws/sqs/
Hope this helps guide you in the right direction,
=== Edited ===
Create Queue and DLQ using LocalStack:
Create DLQ First:
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name MyDLQ --region eu-west-1
Response:
{
    "QueueUrl": "http://localhost:4566/000000000000/MyDLQ"
}
Create an attributes.json file with the contents below:
{
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:eu-west-1:000000000000:MyDLQ\",\"maxReceiveCount\":\"1000\"}",
    "MessageRetentionPeriod": "259200"
}
Create your main queue pointing to the attributes.json file:
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name MyMainQueue --attributes file://attributes.json --region eu-west-1
Response:
{
    "QueueUrl": "http://localhost:4566/000000000000/MyMainQueue"
}
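If you would rather do this from the Spring Boot application itself instead of the CLI, the same redrive setup can be expressed with the AWS SDK. A minimal sketch, assuming the v1 SDK already used in the question; the queue names and maxReceiveCount are placeholders, and imports are omitted as in the snippets above:

AmazonSQS sqs = AmazonSQSClientBuilder.standard().withRegion(Regions.US_EAST_2).build();

// 1. Create the dead-letter queue first and look up its ARN
String dlqUrl = sqs.createQueue(new CreateQueueRequest("MyDLQ")).getQueueUrl();
String dlqArn = sqs.getQueueAttributes(
        new GetQueueAttributesRequest(dlqUrl).withAttributeNames("QueueArn"))
        .getAttributes().get("QueueArn");

// 2. Create the main queue with a RedrivePolicy pointing at the DLQ
Map<String, String> attributes = new HashMap<>();
attributes.put("RedrivePolicy",
        "{\"deadLetterTargetArn\":\"" + dlqArn + "\",\"maxReceiveCount\":\"5\"}");
sqs.createQueue(new CreateQueueRequest("MyMainQueue").withAttributes(attributes));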

Separate Kafka listener for each topic using annotations in spring kafka

My application is listening to multiple Kafka topics. Right now my listener looks like the one below, and my.topics in the property file contains the comma-separated list of Kafka topics:
@KafkaListener(topics = ["#{'\${my.topics}'.split(',')}"], groupId = "my.group", containerFactory = "myKafkaFactory")
fun genericMessageListener(myRequest: MyRequest, ack: Acknowledgment) {
    // do something with myRequest
    ack.acknowledge()
}
My ConcurrentKafkaListenerContainerFactory is
@Bean
fun myKafkaFactory(): ConcurrentKafkaListenerContainerFactory<String, MyRequest> {
    val factory = ConcurrentKafkaListenerContainerFactory<String, MyRequest>()
    factory.consumerFactory = DefaultKafkaConsumerFactory(configProps(), StringDeserializer(), MyRequestDeserializer())
    factory.containerProperties.ackMode = ContainerProperties.AckMode.MANUAL
    return factory
}
Is there a way I can dynamically create a separate consumer for each topic, so that when I add one more topic to the list in my.topics, Spring automatically creates a separate consumer for that topic?
The problem I am facing now is that if something goes wrong with one of the messages in any of the topics, messages in the other topics are also impacted.
At a high level, I am looking for something like this:
@Bean
fun myKafkaFactory(): ConcurrentKafkaListenerContainerFactory<String, MyRequest> {
    val factory = ConcurrentKafkaListenerContainerFactory<String, MyRequest>()
    factory.consumerFactory = DefaultKafkaConsumerFactory(configProps(), StringDeserializer(), MyRequestDeserializer())
    factory.containerProperties.ackMode = ContainerProperties.AckMode.MANUAL
    factory.isOneConsumerPerTopic(true)
    return factory
}
so that factory.isOneConsumerPerTopic(true) ensures a separate consumer is created for each topic in the array.
I did go through How to create separate Kafka listener for each topic dynamically in springboot?. I am looking for a slightly 'cleaner' solution. :)
You can add a custom PartitionAssignor to the Kafka consumer configuration.
Set the container concurrency to (at least) the number of topics and have your assignor assign the partitions of each topic to a specific consumer (or consumers), as in the sketch below.
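A minimal sketch of that wiring (in Java; TopicPerConsumerAssignor and numberOfTopics are placeholders for your own AbstractPartitionAssignor subclass and topic count, not existing classes):

// Consumer config: register the custom assignor alongside the usual properties
Map<String, Object> configProps = new HashMap<>();
configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
configProps.put(ConsumerConfig.GROUP_ID_CONFIG, "my.group");
configProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        TopicPerConsumerAssignor.class.getName()); // your AbstractPartitionAssignor subclass

ConcurrentKafkaListenerContainerFactory<String, MyRequest> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(
        configProps, new StringDeserializer(), new MyRequestDeserializer()));
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
factory.setConcurrency(numberOfTopics); // at least one consumer per topic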
/**
 * This interface is used to define custom partition assignment for use in
 * {@link org.apache.kafka.clients.consumer.KafkaConsumer}. Members of the consumer group subscribe
 * to the topics they are interested in and forward their subscriptions to a Kafka broker serving
 * as the group coordinator. The coordinator selects one member to perform the group assignment and
 * propagates the subscriptions of all members to it. Then {@link #assign(Cluster, Map)} is called
 * to perform the assignment and the results are forwarded back to each respective members
 *
 * In some cases, it is useful to forward additional metadata to the assignor in order to make
 * assignment decisions. For this, you can override {@link #subscription(Set)} and provide custom
 * userData in the returned Subscription. For example, to have a rack-aware assignor, an implementation
 * can use this user data to forward the rackId belonging to each member.
 */
public interface PartitionAssignor {
You could start with the AbstractPartitionAssignor.
See the spring-kafka documentation.
When listening to multiple topics, the default partition distribution may not be what you expect. For example, ...

Unable to send/receive messages in Multi Node Kafka cluster

I have a multi-node Kafka cluster, and I am able to create topics successfully, which is clear in the ZooKeeper logs. But I can't send/receive messages on some of the topics even though they are created.
Also, for some of the topics, I don't see the log directories created under /tmp/kafka-logs on any of the three Kafka brokers.
For example: if I created Topic1...Topic5, I'm able to send and receive messages only for topic3 and topic4. I have my producer and consumer running on node1.
Any idea if I'm doing anything wrong here?
On the producer side:
private Properties producerConfig() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "host1:9092,host2:9092,host3:9092");
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    return props;
}
On Consumer side:
private Properties createConsumerConfig(String zookeeper, String groupId) {
    Properties props = new Properties();
    props.put("zookeeper.connect", "host1:2181,host2:2181,host3:2181");
    props.put("group.id", groupId);
    props.put("auto.commit.enable", "false");
    props.put("auto.offset.reset", "smallest");
    return props;
}
Multi-node cluster setup:
I have used the following instructions to set up the multi-node cluster.
Host1 :: zk1,kafkabroker1
Host2 :: zk2,kafkabroker2
Host3 :: zk3,kafkabroker3
https://itblog.adrian.citu.name/2014/01/30/how-to-set-an-apache-kafka-multi-node-multi-broker-cluster/

Create queue runtime in Grails with RabbitMQ plugin

I have a system where external systems can subscribe to events generated by my system. The system is written in Grails 2, using the RabbitMQ plugin for internal messaging. The events are communicated to external systems via HTTP.
I would like to create a queue for each subscriber, to prevent a slow subscriber endpoint from slowing down messages to another subscriber. Subscriptions can occur at runtime, which is why defining the queues in the application config is not desirable.
How can I create a queue with a topic binding at runtime with the Grails RabbitMQ plugin?
As reading messages from RabbitMQ queues is directly coupled to services, a side problem to creating the queue at runtime could be having multiple instances of that Grails service. Any ideas?
I don't have a ready solution for you, but if you follow the code in the RabbitmqGrailsPlugin descriptor, especially the doWithSpring section,
you should be able to recreate the steps necessary to initialize a new queue and associated listener dynamically at runtime.
It then all comes down to passing the needed parameters, registering the necessary Spring beans, and starting the listeners.
To answer your second question, I think you can come up with some naming convention and create a new queue handler for each queue. An example of how to create Spring beans dynamically can be found here: dynamically declare beans
Just a short example of how I would quickly register a queue; it requires much more wiring, etc.:
def createQ(queueName) {
    def queuesConfig = {
        "${queueName}"(durable: true, autoDelete: false)
    }
    def queueBuilder = new RabbitQueueBuilder()
    queuesConfig.delegate = queueBuilder
    queuesConfig.resolveStrategy = Closure.DELEGATE_FIRST
    queuesConfig()

    queueBuilder.queues?.each { queue ->
        if (log.debugEnabled) {
            log.debug "Registering queue '${queue.name}'"
        }
        BeanDefinitionBuilder builder = BeanDefinitionBuilder.rootBeanDefinition(Queue.class)
        builder.addConstructorArgValue(queue.name)
        builder.addConstructorArgValue(Boolean.valueOf(queue.durable))
        builder.addConstructorArgValue(Boolean.valueOf(queue.exclusive))
        builder.addConstructorArgValue(Boolean.valueOf(queue.autoDelete))
        builder.addConstructorArgValue(queue.arguments)

        DefaultListableBeanFactory factory = (DefaultListableBeanFactory) grailsApplication.mainContext.getBeanFactory()
        factory.registerBeanDefinition("grails.rabbit.queue.${queue.name}", builder.getBeanDefinition())
    }
}
I ended up using Spring AMQP, which is used by the Grails RabbitMQ plugin. I removed some methods/arguments as they are not relevant to the sample:
class MyUpdater {
    void handleMessage(Object message) {
        String content = new String(message)
        // do whatever you need with the message
    }
}
import org.springframework.amqp.core.BindingBuilder
import org.springframework.amqp.core.Queue
import org.springframework.amqp.core.TopicExchange
import org.springframework.amqp.rabbit.core.RabbitAdmin
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter
import org.springframework.amqp.support.converter.SimpleMessageConverter
import org.springframework.amqp.rabbit.connection.ConnectionFactory
class ListenerInitiator {
    // autowired
    ConnectionFactory rabbitMQConnectionFactory

    protected void initiateListener() {
        RabbitAdmin admin = new RabbitAdmin(rabbitMQConnectionFactory)

        // normally passed to this method, moved to local vars for simplicity
        String queueName = "myQueueName"
        String routingKey = "#"
        String exchangeName = "myExchange"

        Queue queue = new Queue(queueName)
        admin.declareQueue(queue)
        TopicExchange exchange = new TopicExchange(exchangeName)
        admin.declareExchange(exchange)
        admin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(routingKey))

        // normally passed to this method, moved to local var for simplicity
        MyUpdater listener = new MyUpdater()

        SimpleMessageListenerContainer container =
                new SimpleMessageListenerContainer(rabbitMQConnectionFactory)
        MessageListenerAdapter adapter = new MessageListenerAdapter(listener)
        adapter.setMessageConverter(new SimpleMessageConverter())
        container.setMessageListener(adapter)
        container.setQueueNames(queueName)
        container.start()
    }
}

Topic not able to receive message

I have a non-durable topic client which is supposed to receive messages asynchronously using a listener.
When a message is published on the topic, I can see on the admin console that the message is published and consumed, but my client never receives it.
The client is able to establish the connection properly, as I can track it on the console.
Any suggestions?
EDIT:
Did some more analysis and found that the issue is with the API used for the connection.
I was able to listen to messages when I use the following code:
TopicConnection conn;  // obtained from a TopicConnectionFactory (omitted here)
TopicSession session = conn.createTopicSession(false, TopicSession.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic(monacoSubscriberEmsTopic);
conn.start();
tsubs = session.createSubscriber(topic);
tsubs.setMessageListener(listener);
But when I use the following code, it doesn't work:
DefaultMessageListenerContainer listenerContainer = createMessageListenerContainer();

private DefaultMessageListenerContainer createMessageListenerContainer() {
    DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
    listenerContainer.setClientId(clientID);
    listenerContainer.setDestinationName(destination);
    listenerContainer.setConnectionFactory(connectionFactory);
    listenerContainer.setConcurrentConsumers(minConsumerCount);
    listenerContainer.setMaxConcurrentConsumers(maxConsumerCount);
    listenerContainer.setPubSubDomain(true);
    listenerContainer.setSessionAcknowledgeModeName(sessionAcknowledgeMode);
    if (messageSelector != null)
        listenerContainer.setMessageSelector(messageSelector);
    listenerContainer.setSessionTransacted(true);
    return listenerContainer;
}

listenerContainer.initialize();
listenerContainer.start();
What is wrong with the second approach?
