I have a system where external systems can subscribe to events generated by my system. The system is written in Grails 2, using the RabbitMQ plugin for internal messaging. Events are delivered to the external systems via HTTP.
I would like to create a queue per subscriber to prevent a slow subscriber endpoint from slowing down message delivery to the other subscribers. Subscriptions can happen at runtime, which is why defining the queues in the application config is not desirable.
How can I create a queue with a topic binding at runtime with the Grails RabbitMQ plugin?
As reading messages from RabbitMQ queues is directly coupled to services, a side problem of creating queues at runtime is that I would need multiple instances of that Grails service. Any ideas?
I don't have a ready solution for you, but if you follow the code in the RabbitmqGrailsPlugin descriptor, especially the doWithSpring section, you should be able to recreate the steps necessary to initialize a new queue and its associated listener dynamically at runtime. It then comes down to passing the needed parameters, registering the necessary Spring beans and starting the listeners.
To answer your second question: I think you can come up with a naming convention and create a new queue handler for each queue. An example of how to create Spring beans dynamically can be found here: dynamically declare beans.
Here is a short example of how I would quickly register a queue; it still requires much more wiring:
import org.springframework.amqp.core.Queue
import org.springframework.beans.factory.support.BeanDefinitionBuilder
import org.springframework.beans.factory.support.DefaultListableBeanFactory
// RabbitQueueBuilder ships with the RabbitMQ plugin
import org.grails.rabbitmq.RabbitQueueBuilder

def createQ(String queueName) {
    // build the queue definition the same way the plugin does in doWithSpring
    def queuesConfig = {
        "${queueName}"(durable: true, autoDelete: false)
    }
    def queueBuilder = new RabbitQueueBuilder()
    queuesConfig.delegate = queueBuilder
    queuesConfig.resolveStrategy = Closure.DELEGATE_FIRST
    queuesConfig()

    queueBuilder.queues?.each { queue ->
        if (log.debugEnabled) {
            log.debug "Registering queue '${queue.name}'"
        }
        // register a Spring AMQP Queue bean definition for the new queue
        BeanDefinitionBuilder builder = BeanDefinitionBuilder.rootBeanDefinition(Queue.class)
        builder.addConstructorArgValue(queue.name)
        builder.addConstructorArgValue(Boolean.valueOf(queue.durable))
        builder.addConstructorArgValue(Boolean.valueOf(queue.exclusive))
        builder.addConstructorArgValue(Boolean.valueOf(queue.autoDelete))
        builder.addConstructorArgValue(queue.arguments)
        DefaultListableBeanFactory factory =
                (DefaultListableBeanFactory) grailsApplication.mainContext.getBeanFactory()
        factory.registerBeanDefinition("grails.rabbit.queue.${queue.name}", builder.getBeanDefinition())
    }
}
I ended up using Spring AMQP directly, which is what the Grails RabbitMQ plugin uses under the hood. I removed some methods/arguments as they are not relevant to the sample:
class MyUpdater {
    // invoked by the MessageListenerAdapter below; with SimpleMessageConverter
    // the converted payload arrives as a byte[] (or a String for text messages)
    void handleMessage(Object message) {
        String content = new String(message)
        // do whatever you need with the message
    }
}
import org.springframework.amqp.core.BindingBuilder
import org.springframework.amqp.core.Queue
import org.springframework.amqp.core.TopicExchange
import org.springframework.amqp.rabbit.core.RabbitAdmin
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter
import org.springframework.amqp.support.converter.SimpleMessageConverter
import org.springframework.amqp.rabbit.connection.ConnectionFactory
class ListenerInitiator {

    // autowired
    ConnectionFactory rabbitMQConnectionFactory

    protected void initiateListener() {
        RabbitAdmin admin = new RabbitAdmin(rabbitMQConnectionFactory)

        // normally passed to this method, moved to local vars for simplicity
        String queueName = "myQueueName"
        String routingKey = "#"
        String exchangeName = "myExchange"

        // declare the queue, the topic exchange and the binding between them
        Queue queue = new Queue(queueName)
        admin.declareQueue(queue)
        TopicExchange exchange = new TopicExchange(exchangeName)
        admin.declareExchange(exchange)
        admin.declareBinding(BindingBuilder.bind(queue).to(exchange).with(routingKey))

        // normally passed to this method, moved to local var for simplicity
        MyUpdater listener = new MyUpdater()

        // start a listener container for the new queue
        SimpleMessageListenerContainer container =
                new SimpleMessageListenerContainer(rabbitMQConnectionFactory)
        MessageListenerAdapter adapter = new MessageListenerAdapter(listener)
        adapter.setMessageConverter(new SimpleMessageConverter())
        container.setMessageListener(adapter)
        container.setQueueNames(queueName)
        container.start()
    }
}
Related
I am migrating a Spring Boot microservice that consumes data from 3 RabbitMQ queues on server A, saves it into Redis and finally produces messages into an exchange on a different RabbitMQ on server B, so these messages can be consumed by another microservice. This flow is working fine, but I would like to migrate it to Spring Cloud Stream using the RabbitMQ binder. All the Spring AMQP configuration is customised in the properties file; no Spring Boot auto-configuration property is used to create connections, queues, bindings, etc.
My first idea was setting up two bindings in Spring Cloud Stream, one connected to server A (consumer) and the other connected to server B (producer), and migrating the existing code to a Processor. I discarded it because it seems connection names cannot be set yet when multiple binders are used, I need to add several bindings to consume from server A's queues, and the bindingRoutingKey property does not support a list of values (I know it can be done programmatically, as explained here).
So I decided to refactor only the producer-related part of the code to use Spring Cloud Stream over RabbitMQ, so that the same microservice consumes from server A via Spring AMQP (original code) and produces into server B via Spring Cloud Stream.
The first issue I found was a NoUniqueBeanDefinitionException in Spring Cloud Stream, because an org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory bean was defined twice, under the names handlerMethodFactory and integrationMessageHandlerMethodFactory.
org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory' available: expected single matching bean but found 2: handlerMethodFactory,integrationMessageHandlerMethodFactory
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1144)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveBean(DefaultListableBeanFactory.java:411)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:344)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:337)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1123)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.injectAndPostProcessDependencies(StreamListenerAnnotationBeanPostProcessor.java:317)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.afterSingletonsInstantiated(StreamListenerAnnotationBeanPostProcessor.java:113)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:862)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:743)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:390)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1214)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1203)
It seems the former bean is created by Spring AMQP and the latter by Spring Cloud Stream, so I created my own primary bean:
@Bean
@Primary
public MessageHandlerMethodFactory messageHandlerMethodFactory() {
return new DefaultMessageHandlerMethodFactory();
}
Now the application is able to start, but the output channel is created by Spring Cloud Stream on server A instead of server B. It seems that the Spring Cloud Stream configuration is using the connection created by Spring AMQP instead of its own configuration.
The configuration of Spring AMQP is this:
@Bean
public SimpleRabbitListenerContainerFactory priceRabbitListenerContainerFactory(
ConnectionFactory consumerConnectionFactory) {
return
getSimpleRabbitListenerContainerFactory(
consumerConnectionFactory,
rabbitProperties.getConsumer().getListeners().get(LISTENER_A));
}
@Bean
public SimpleRabbitListenerContainerFactory maxbetRabbitListenerContainerFactory(
ConnectionFactory consumerConnectionFactory) {
return
getSimpleRabbitListenerContainerFactory(
consumerConnectionFactory,
rabbitProperties.getConsumer().getListeners().get(LISTENER_B));
}
@Bean
public ConnectionFactory consumerConnectionFactory() throws Exception {
return
new CachingConnectionFactory(
getRabbitConnectionFactoryBean(
rabbitProperties.getConsumer()
).getObject()
);
}
private SimpleRabbitListenerContainerFactory getSimpleRabbitListenerContainerFactory(
ConnectionFactory connectionFactory,
RabbitProperties.ListenerProperties listenerProperties) {
//return a SimpleRabbitListenerContainerFactory set up from external properties
}
/**
* Create the AMQ Admin.
*/
@Bean
public AmqpAdmin consumerAmqpAdmin(ConnectionFactory consumerConnectionFactory) {
return new RabbitAdmin(consumerConnectionFactory);
}
/**
* Create the map of available queues and declare them in the admin.
*/
@Bean
public Map<String, Queue> queues(AmqpAdmin consumerAmqpAdmin) {
return
rabbitProperties.getConsumer().getListeners().entrySet().stream()
.map(listenerEntry -> {
Queue queue =
QueueBuilder
.nonDurable(listenerEntry.getValue().getQueueName())
.autoDelete()
.build();
consumerAmqpAdmin.declareQueue(queue);
return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), queue);
}).collect(
Collectors.toMap(
AbstractMap.SimpleEntry::getKey,
AbstractMap.SimpleEntry::getValue
)
);
}
/**
* Create the map of available exchanges and declare them in the admin.
*/
@Bean
public Map<String, TopicExchange> exchanges(AmqpAdmin consumerAmqpAdmin) {
return
rabbitProperties.getConsumer().getListeners().entrySet().stream()
.map(listenerEntry -> {
TopicExchange exchange =
new TopicExchange(listenerEntry.getValue().getExchangeName());
consumerAmqpAdmin.declareExchange(exchange);
return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), exchange);
}).collect(
Collectors.toMap(
AbstractMap.SimpleEntry::getKey,
AbstractMap.SimpleEntry::getValue
)
);
}
/**
* Create the list of bindings and declare them in the admin.
*/
@Bean
public List<Binding> bindings(Map<String, Queue> queues, Map<String, TopicExchange> exchanges, AmqpAdmin consumerAmqpAdmin) {
return
rabbitProperties.getConsumer().getListeners().keySet().stream()
.map(listenerName -> {
Queue queue = queues.get(listenerName);
TopicExchange exchange = exchanges.get(listenerName);
return
rabbitProperties.getConsumer().getListeners().get(listenerName).getKeys().stream()
.map(bindingKey -> {
Binding binding = BindingBuilder.bind(queue).to(exchange).with(bindingKey);
consumerAmqpAdmin.declareBinding(binding);
return binding;
}).collect(Collectors.toList());
}).flatMap(Collection::stream)
.collect(Collectors.toList());
}
Message listeners are:
@RabbitListener(
queues="${consumer.listeners.LISTENER_A.queue-name}",
containerFactory = "priceRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
// call a service to process the message payload
}
@RabbitListener(
queues="${consumer.listeners.LISTENER_B.queue-name}",
containerFactory = "maxbetRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
// call a service to process the message payload
}
Properties:
#
# Server A config (Spring AMQP)
#
consumer.host=server-a
consumer.username=
consumer.password=
consumer.port=5671
consumer.ssl.enabled=true
consumer.ssl.algorithm=TLSv1.2
consumer.ssl.validate-server-certificate=false
consumer.connection-name=local:microservice-1
consumer.thread-factory.thread-group-name=server-a-consumer
consumer.thread-factory.thread-name-prefix=server-a-consumer-
# LISTENER_A configuration
consumer.listeners.LISTENER_A.queue-name=local.listenerA
consumer.listeners.LISTENER_A.exchange-name=exchangeA
consumer.listeners.LISTENER_A.keys[0]=*.1.*.*
consumer.listeners.LISTENER_A.keys[1]=*.3.*.*
consumer.listeners.LISTENER_A.keys[2]=*.6.*.*
consumer.listeners.LISTENER_A.keys[3]=*.8.*.*
consumer.listeners.LISTENER_A.keys[4]=*.9.*.*
consumer.listeners.LISTENER_A.initial-concurrency=5
consumer.listeners.LISTENER_A.maximum-concurrency=20
consumer.listeners.LISTENER_A.thread-name-prefix=listenerA-consumer-
# LISTENER_B configuration
consumer.listeners.LISTENER_B.queue-name=local.listenerB
consumer.listeners.LISTENER_B.exchange-name=exchangeB
consumer.listeners.LISTENER_B.keys[0]=*.1.*
consumer.listeners.LISTENER_B.keys[1]=*.3.*
consumer.listeners.LISTENER_B.keys[2]=*.6.*
consumer.listeners.LISTENER_B.initial-concurrency=5
consumer.listeners.LISTENER_B.maximum-concurrency=20
consumer.listeners.LISTENER_B.thread-name-prefix=listenerB-consumer-
#
# Server B config (Spring Cloud Stream)
#
spring.rabbitmq.host=server-b
spring.rabbitmq.port=5672
spring.rabbitmq.username=
spring.rabbitmq.password=
spring.cloud.stream.bindings.outbound.destination=microservice-out
spring.cloud.stream.bindings.outbound.group=default
spring.cloud.stream.rabbit.binder.connection-name-prefix=local:microservice
So my question is: is it possible, in the same Spring Boot application, to have code that consumes data from RabbitMQ via Spring AMQP and produces messages into a different server via Spring Cloud Stream RabbitMQ? If it is, could somebody tell me what I am doing wrong, please?
Spring AMQP version is the one provided by Boot version 2.1.7 (2.1.8-RELEASE) and Spring Cloud Stream version is the one provided by Spring Cloud train Greenwich.SR2 (2.1.3.RELEASE).
EDIT
I was able to make it work by configuring the binder via the multiple-binders configuration properties instead of the default single-binder ones. So with this configuration it works:
#
# Server B config (Spring Cloud Stream)
#
spring.cloud.stream.binders.transport-layer.type=rabbit
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.host=server-b
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.username=
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.password=
spring.cloud.stream.bindings.stream-output.destination=microservice-out
spring.cloud.stream.bindings.stream-output.group=default
Unfortunately it is not possible yet to set the connection name in a multiple-binders configuration: a custom ConnectionNameStrategy is ignored if there is a custom binder configuration.
Anyway, I still do not understand why the contexts seem to be "mixed" when using Spring AMQP and Spring Cloud Stream RabbitMQ together. It is still necessary to define a primary MessageHandlerMethodFactory bean for the implementation to work.
EDIT
I found out that the NoUniqueBeanDefinitionException was caused by the microservice itself, which was creating a ConditionalGenericConverter used by the Spring AMQP part to deserialize messages from server A.
I removed it and added some MessageConverters instead. Now the problem is solved and the @Primary bean is no longer necessary.
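For reference, a minimal sketch of what swapping in a message converter could look like; the post does not say which MessageConverters were actually used, so the Jackson JSON converter below is an assumption:
// Hypothetical sketch: registers a JSON MessageConverter for the Spring AMQP side
// instead of a custom ConditionalGenericConverter. The exact converter used in the
// original application is not shown in the post.
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpConverterConfig {

    @Bean
    public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    // ...and set it on each manually-built listener container factory:
    // factory.setMessageConverter(jackson2JsonMessageConverter());
}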
Unrelated, but
consumerAmqpAdmin.declareQueue(queue);
You should never communicate with the broker within a @Bean definition; it is too early in the application context lifecycle. It might work, but YMMV; also, if the broker is not available it will prevent your app from starting.
It's better to define beans of type Declarables containing the lists of queues, exchanges and bindings; the admin will automatically declare them when the connection is first opened successfully. See the reference manual.
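For illustration, a minimal sketch of that Declarables approach, reusing the LISTENER_A names from the properties above (a real version would build the collection from all configured listeners and routing keys):
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Declarables;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConsumerDeclarationsConfig {

    @Bean
    public Declarables consumerDeclarables() {
        Queue queue = QueueBuilder.nonDurable("local.listenerA").autoDelete().build();
        TopicExchange exchange = new TopicExchange("exchangeA");
        Binding binding = BindingBuilder.bind(queue).to(exchange).with("*.1.*.*");
        // no declareQueue/declareExchange/declareBinding calls here: the RabbitAdmin
        // finds the Declarables bean and declares everything on the first connection
        return new Declarables(queue, exchange, binding);
    }
}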
I have never seen the MessageHandlerMethodFactory problem; Spring AMQP declares no such bean. If you can provide a small sample app that exhibits the behavior, that would be useful.
I'll see if I can find a workaround for the connection name issue.
EDIT
I found a workaround for the connection name issue; it involves a bit of reflection, but it works. I suggest you open a new feature request against the binder to ask for a mechanism to set the connection name strategy when using multiple binders.
Anyway, here's the workaround...
import java.util.Map;
import java.util.Map.Entry;

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.handler.annotation.SendTo;

@SpringBootApplication
@EnableBinding(Processor.class)
public class So57725710Application {

    public static void main(String[] args) {
        SpringApplication.run(So57725710Application.class, args);
    }

    @Bean
    public Object connectionNameConfigurer(BinderFactory binderFactory) throws Exception {
        setConnectionName(binderFactory, "rabbit1", "myAppProducerSide");
        setConnectionName(binderFactory, "rabbit2", "myAppConsumerSide");
        return null;
    }

    private void setConnectionName(BinderFactory binderFactory, String binderName,
            String conName) throws Exception {

        binderFactory.getBinder(binderName, MessageChannel.class); // force creation
        @SuppressWarnings("unchecked")
        Map<String, Map.Entry<Binder<?, ?, ?>, ApplicationContext>> binders =
                (Map<String, Entry<Binder<?, ?, ?>, ApplicationContext>>) new DirectFieldAccessor(binderFactory)
                        .getPropertyValue("binderInstanceCache");
        binders.get(binderName)
                .getValue()
                .getBean(CachingConnectionFactory.class).setConnectionNameStrategy(queue -> conName);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String listen(String in) {
        System.out.println(in);
        return in.toUpperCase();
    }

}
and
spring.cloud.stream.binders.rabbit1.type=rabbit
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.output.destination=outDest
spring.cloud.stream.bindings.output.producer.required-groups=outQueue
spring.cloud.stream.bindings.output.binder=rabbit1
spring.cloud.stream.binders.rabbit2.type=rabbit
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.input.destination=inDest
spring.cloud.stream.bindings.input.group=default
spring.cloud.stream.bindings.input.binder=rabbit2
Related
I have a Spring Boot application that spins up consumers for a set of queues, and I want to be able to add queues to those consumers at run time.
I've installed the event exchange plugin (https://www.rabbitmq.com/event-exchange.html) and created a dedicated queue that is bound to the amq.rabbitmq.event exchange. I can see the events coming in when I declare the queues statically.
How would I accomplish this run-time magic? I've seen people use the property file, but I would prefer not to have to modify the property file at run time as I add more queues.
@Component
public class MessageConsumer {

    List<String> allQueues = new ArrayList<String>();

    public MessageConsumer() {
        allQueues.add("queue1");
        allQueues.add("queue2");
        allQueues.add("queue3");
    }

    @RabbitListener(id = "event", queues = {"custom-emp-queue-events"}) // create this queue in rabbitmq management, bound to the amq.rabbitmq.event exchange
    public void processQueueEvents(Message message) {
        // ... add the queue to the allQueues list on queue.created ...
    }

    @RabbitListener(id = "process", queues = allQueues.stream().toArray(String[]::new)) // this is where the "issue" is
    public void processMessageFromQueues(String messageAsJson) {
        // ... process message ...
    }
}
This can be done with a SpEL expression:
@RabbitListener(id = "process", queues = "#{messageConsumer.allQueues}")
But you have to add a public getter for allQueues.
See more info in the Reference Manual: https://docs.spring.io/spring-amqp/docs/2.1.3.RELEASE/reference/html/_reference.html#async-annotation-driven
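For example, a minimal sketch of the bean behind that SpEL expression, exposing the list through a public getter:
import java.util.ArrayList;
import java.util.List;

import org.springframework.stereotype.Component;

@Component
public class MessageConsumer {

    private final List<String> allQueues = new ArrayList<>();

    // required so the SpEL expression #{messageConsumer.allQueues} can resolve the property
    public List<String> getAllQueues() {
        return this.allQueues;
    }
}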
UPDATE
@Autowired
private RabbitListenerEndpointRegistry listenerEndpointRegistry;

@RabbitListener(id = "event", queues = {"custom-emp-queue-events"}) // create this queue in rabbitmq management, bound to the amq.rabbitmq.event exchange
public void processQueueEvents(Message message) {
    ((AbstractMessageListenerContainer) this.listenerEndpointRegistry.getListenerContainer("process")).addQueueNames(...);
}
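Putting the two pieces together, a hedged sketch of what that event listener could do inside the MessageConsumer class above; the assumption here (based on the event-exchange plugin) is that the event type is the routing key and the created queue's name is carried in the "name" header:
@RabbitListener(id = "event", queues = {"custom-emp-queue-events"})
public void processQueueEvents(Message message) {
    String eventType = message.getMessageProperties().getReceivedRoutingKey();
    if ("queue.created".equals(eventType)) {
        // assumed header name for the created queue, per the event-exchange plugin
        String queueName = (String) message.getMessageProperties().getHeaders().get("name");
        ((AbstractMessageListenerContainer) this.listenerEndpointRegistry
                .getListenerContainer("process")).addQueueNames(queueName);
    }
}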
I am planning to use Spring Cloud Stream for my project. I see that there is a built-in Trigger source application starter. What I want to do is use the Quartz job scheduler as the source app, to allow dynamic job schedules from the application. Is there a good sample of how to achieve this?
I found this: spring integration + cron + quartz in cluster?. That solution talks about getting a reference to the inbound channel adapter. I am using annotations to define the inbound channel adapter. How do I get a reference to this object so that I can do the start/stop described in the solution?
This is how I define the inbound channel adapter:
@Bean
@InboundChannelAdapter(autoStartup = "false", value = SourceChannel.CHANNEL_NAME, poller = @Poller(trigger = "fireOnceTrigger"))
public MessageSource<String> timerMessageSource() {
return new MessageSource<String>() {
public Message<String> receive() {
System.out.println("******************");
System.out.println("At the Source");
System.out.println("******************");
String value = "{\"value\":\"hi\"}";
System.out.println("Sending value: " + value);
return MessageBuilder.withPayload(value).setHeader(MessageHeaders.CONTENT_TYPE, "application/json").build();
}
};
}
The related issue on GitHub: https://github.com/spring-projects/spring-integration-java-dsl/issues/138
The algorithm to build a bean name for automatically created endpoints is:
* The MessageHandler (MessageSource) @Bean gets its own standard name from the method name or the name attribute on the @Bean. This works as if there were no messaging annotation on the @Bean method.
* The AbstractEndpoint bean name is generated with the pattern: [configurationComponentName].[methodName].[decapitalizedAnnotationClassShortName]. For example, the endpoint (SourcePollingChannelAdapter) for the consoleSource() definition above gets a bean name like: myFlowConfiguration.consoleSource.inboundChannelAdapter.
See Reference Manual for more information.
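So, with the annotation configuration shown in the question, you can look up the auto-created SourcePollingChannelAdapter by its generated bean name and start/stop it from your Quartz job. A minimal sketch, assuming the @InboundChannelAdapter above lives in a @Configuration class named SourceConfiguration (adjust the qualifier to your actual class and method names):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.integration.endpoint.SourcePollingChannelAdapter;
import org.springframework.stereotype.Component;

@Component
public class TriggerControl {

    // bean name follows the pattern above; "sourceConfiguration" is an assumed class name
    @Autowired
    @Qualifier("sourceConfiguration.timerMessageSource.inboundChannelAdapter")
    private SourcePollingChannelAdapter adapter;

    // call from the Quartz job: start() kicks the poller; with a fire-once trigger
    // it emits a single message, after which stop() can be called again
    public void fire() {
        adapter.start();
    }

    public void stop() {
        adapter.stop();
    }
}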
Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler(){
@Override
public void handle(String message){
//... business logic for the received message
}
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss how to dynamically "connect" my provided MessageHandler (from my API) to the incoming messages for the specified queue name.
In the config of the example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I went an unusual way: I set up an AWS client bean and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, which lets me programmatically poll (every 10 secs in my case) whichever queues I want to consume from.
I made an example that iterates over the queues defined in properties and consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
return AmazonSQSAsyncClientBuilder
.standard()
.withRegion(Regions.EU_WEST_1.getName())
.build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;
@Scheduled(fixedDelay = 10000)
public void poll() {
properties.getSqsQueues()
.forEach(queue -> {
val receiveMessageRequest = new ReceiveMessageRequest(queue)
.withWaitTimeSeconds(10)
.withMaxNumberOfMessages(10);
// reading the messages
val result = awsSqsClient.receiveMessage(receiveMessageRequest);
val sqsMessages = result.getMessages();
log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
// deleting the messages
sqsMessages.forEach(message -> {
val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
awsSqsClient.deleteMessage(deleteMessageRequest);
});
});
}
Just to clarify, in my case, I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which has the queues you have created in runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS SQS JMS documentation).
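For illustration, a minimal sketch of that JMS approach using the amazon-sqs-java-messaging-lib provider; the class name and error handling are assumptions, but the shape is close to the register(queueName, handler) API asked about above:
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class SqsJmsReceiver {

    private final Connection connection;

    public SqsJmsReceiver() throws Exception {
        SQSConnectionFactory factory = new SQSConnectionFactory(
                new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
        this.connection = factory.createConnection();
    }

    // dynamically attach a JMS MessageListener (the "message handler") to a queue
    public void register(String queueName, MessageListener handler) throws Exception {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        consumer.setMessageListener(handler);
        connection.start(); // start (or keep) asynchronous delivery running
    }
}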
When we do Spring and SQS we use spring-cloud-starter-aws-messaging. Then you just create a listener class:
@Component
public class MyListener {
@SqsListener(value = "myqueue")
public void listen(MyMessageType message) {
//process the message
}
}
I have two projects: one is the service project, the other is the consumer project.
The consumer project consumes the services of the other project, and the call should be async using JMS.
I installed the JMS plugin in both projects.
I have defined the jmsConnectionFactory in both projects as below in resources.groovy:
import org.springframework.jms.connection.SingleConnectionFactory
import org.apache.activemq.ActiveMQConnectionFactory
beans = {
jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) { brokerURL = 'vm://localhost' }
}
Note: both projects are, for now, on the same machine (i.e. localhost).
Now from the consumer's controller I am making a call to the service in the ServiceProvider project:
jmsService.send(service:'serviceProvider', params.body)
In ServiceProvider the service is defined as follows:
import grails.plugin.jms.*
class ServiceProviderService {
def jmsService
static transactional = true
static exposes = ['jms1']
def createMessage(msg) {
print "Called1"
sleep(2000) // slow it down
return null
}
}
Now when the controller submits the call to the service, it gets submitted successfully but doesn't reach the actual service.
I also tried:
jmsService.send(app: "ServiceProvider", service: "serviceProvider", method: "createMessage", msg, "standard", null)
Update
Now I have installed the ActiveMQ plugin in the service provider to make it an embedded broker (the JMS plugin is already there)
and created a service:
package serviceprovider
class HelloService {
boolean transactional = false
static exposes = ['jms']
static destination = "queue.notification"
def onMessage(it){
println "GOT MESSAGE: $it"
}
def sayHello(String message){
println "hello"+message
}
}
resources.groovy in both projects is now:
import org.springframework.jms.connection.SingleConnectionFactory
import org.apache.activemq.ActiveMQConnectionFactory
beans = {
jmsConnectionFactory(org.apache.activemq.ActiveMQConnectionFactory) { brokerURL = 'tcp://127.0.0.1:61616' }
}
From the consumer's controller I am calling this service like below:
jmsService.send(app:'queue.notification',service:'hello',method: 'sayHello', params.body)
The call gets submitted, but the method is never actually called!
The in-VM ActiveMQ config (vm://localhost) works only within a single JVM. If your two projects run in separate JVMs, try setting up an external ActiveMQ broker.
If you are using separate processes, then you need to use a different transport than VM (it's for a single JVM only). Also, is one of your processes starting a broker? If not, then one of them should embed the broker (or run it externally) and expose it over a transport (like TCP)...
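For illustration, a minimal sketch of an embedded ActiveMQ broker exposed over TCP (plain ActiveMQ API; the class name and the non-persistent setting are assumptions), matching the tcp://127.0.0.1:61616 brokerURL used in resources.groovy above:
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {

    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);                  // in-memory only, fine for local testing
        broker.addConnector("tcp://127.0.0.1:61616"); // transport both apps can reach
        broker.start();
        System.out.println("Broker listening on tcp://127.0.0.1:61616");
        broker.waitUntilStopped();
    }
}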