So creating a subscriber using
pubSubTemplate.subscribeAndConvert(subs, message -> {
...
is very concise.
Is it possible, though, to set the ack mode when creating subscribers this way?
Using channel adapters (which are less concise, in my opinion, and the reason I am exploring the subscribeAndConvert option), as described at https://cloud.google.com/pubsub/docs/spring#receiving-messages-using-channel-adapters, I can do it, e.g.:
adapter.setAckMode(AckMode.MANUAL);
There is also a Spring Cloud Stream property available for this:
spring.cloud.stream.gcp.pubsub.default.consumer.ack-mode: AUTO_ACK
Thanks!
The pubSubSubscriberTemplate.subscribeAndConvert method has no parameter for setting the ack mode (e.g. AckMode.AUTO).
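For what it's worth, here is a minimal sketch of how acking works with that approach (assuming the three-argument subscribeAndConvert overload from Spring Cloud GCP; the Order type and process(...) call are hypothetical). The received message is itself acknowledgeable, so you ack or nack it explicitly instead of configuring an AckMode:

Subscriber subscriber = pubSubTemplate.subscribeAndConvert(
        "orders-subscription",
        (ConvertedBasicAcknowledgeablePubsubMessage<Order> message) -> {
            process(message.getPayload()); // hypothetical business logic
            message.ack();                 // or message.nack() on failure
        },
        Order.class);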
In Spring Integration, you can bind an input channel to a Pub/Sub subscription using the PubSubInboundChannelAdapter.
As you mentioned, with the PubSubInboundChannelAdapter you can set the acknowledgement mode to auto via adapter.setAckMode(AckMode.AUTO).
Example code:
@Bean
public MessageChannel orderRequestInputChannel() {
    return MessageChannels.direct().get();
}

@Bean
public PubSubInboundChannelAdapter orderRequestChannelAdapter(
        @Qualifier("orderRequestInputChannel") MessageChannel inputChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter =
            new PubSubInboundChannelAdapter(pubSubTemplate, "orders-subscription");
    adapter.setOutputChannel(inputChannel);
    adapter.setPayloadType(Order.class);
    adapter.setAckMode(AckMode.AUTO);
    return adapter;
}
You can set the ACK mode of the consumer endpoint in application.properties as:
spring.cloud.stream.gcp.pubsub.bindings.{CONSUMER_NAME}.consumer.ack-mode=AUTO_ACK
Related
I am using spring-cloud-stream-binder-kafka and have implemented stateful retry using DefaultErrorHandler. I found that by enabling deliveryAttemptHeader in the container properties I can access the retry count (deliveryAttempt) from the message headers, but I am not able to enable it.
I tried to set the value to true as below:
@Bean
public ContainerCustomizer<String, Message, ConcurrentMessageListenerContainer<String, Message>> containerCustomizer(
        ConcurrentKafkaListenerContainerFactory<String, Message> factory) {
    ContainerCustomizer<String, Message, ConcurrentMessageListenerContainer<String, Message>> custCustomizer =
            container -> container.getContainerProperties().setDeliveryAttemptHeader(true);
    factory.setContainerCustomizer(custCustomizer);
    return custCustomizer;
}
With this configuration, when I start the application and debug KafkaMessageListenerContainer.java#L1078, I still see that deliveryAttemptHeader is disabled; the ContainerCustomizer instance I created is never called either.
spring-cloud-stream does not use Boot's container factory; use its ListenerContainerCustomizer instead.
https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_advanced_consumer_configuration
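Something along these lines should work (a sketch against the Kafka binder; the exact container generics may vary by binder version):

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> containerCustomizer() {
    // Spring Cloud Stream applies this to the listener containers it creates
    // for each binding, which is where the header needs to be enabled.
    return (container, destinationName, group) ->
            container.getContainerProperties().setDeliveryAttemptHeader(true);
}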
I am migrating a Spring Boot microservice that consumes data from 3 RabbitMQ queues on server A, saves it into Redis and finally produces messages into an exchange on a different RabbitMQ on server B, so these messages can be consumed by another microservice. This flow works fine, but I would like to migrate it to Spring Cloud Stream using the RabbitMQ binder. All Spring AMQP configuration is customised in the properties file; no Spring Boot property is used to create connections, queues, bindings, etc.
My first idea was to set up two bindings in Spring Cloud Stream, one connected to server A (consumer) and the other connected to server B (producer), and migrate the existing code to a Processor. I discarded it because it seems connection names cannot be set yet if multiple binders are used, I need several bindings to consume from server A's queues, and the bindingRoutingKey property does not support a list of values (I know it can be done programmatically as explained here).
So I decided to refactor only the producer-related part of the code to use Spring Cloud Stream over RabbitMQ, so the same microservice consumes from server A via Spring AMQP (original code) and produces into server B via Spring Cloud Stream.
The first issue I found was a NoUniqueBeanDefinitionException in Spring Cloud Stream, because an org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory bean was defined twice, under the names handlerMethodFactory and integrationMessageHandlerMethodFactory:
org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory' available: expected single matching bean but found 2: handlerMethodFactory,integrationMessageHandlerMethodFactory
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1144)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveBean(DefaultListableBeanFactory.java:411)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:344)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:337)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1123)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.injectAndPostProcessDependencies(StreamListenerAnnotationBeanPostProcessor.java:317)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.afterSingletonsInstantiated(StreamListenerAnnotationBeanPostProcessor.java:113)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:862)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:743)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:390)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1214)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1203)
It seems the former bean is created by Spring AMQP and the latter by Spring Cloud Stream, so I created my own primary bean:
@Bean
@Primary
public MessageHandlerMethodFactory messageHandlerMethodFactory() {
    return new DefaultMessageHandlerMethodFactory();
}
Now the application is able to start, but the output channel is created by Spring Cloud Stream on server A instead of server B. It seems that Spring Cloud Stream is using the connection created by Spring AMQP instead of its own configuration.
The Spring AMQP configuration is this:
@Bean
public SimpleRabbitListenerContainerFactory priceRabbitListenerContainerFactory(
        ConnectionFactory consumerConnectionFactory) {
    return getSimpleRabbitListenerContainerFactory(
            consumerConnectionFactory,
            rabbitProperties.getConsumer().getListeners().get(LISTENER_A));
}

@Bean
public SimpleRabbitListenerContainerFactory maxbetRabbitListenerContainerFactory(
        ConnectionFactory consumerConnectionFactory) {
    return getSimpleRabbitListenerContainerFactory(
            consumerConnectionFactory,
            rabbitProperties.getConsumer().getListeners().get(LISTENER_B));
}

@Bean
public ConnectionFactory consumerConnectionFactory() throws Exception {
    return new CachingConnectionFactory(
            getRabbitConnectionFactoryBean(rabbitProperties.getConsumer()).getObject());
}

private SimpleRabbitListenerContainerFactory getSimpleRabbitListenerContainerFactory(
        ConnectionFactory connectionFactory,
        RabbitProperties.ListenerProperties listenerProperties) {
    // return a SimpleRabbitListenerContainerFactory set up from external properties
}
/**
 * Create the AMQP admin.
 */
@Bean
public AmqpAdmin consumerAmqpAdmin(ConnectionFactory consumerConnectionFactory) {
    return new RabbitAdmin(consumerConnectionFactory);
}

/**
 * Create the map of available queues and declare them in the admin.
 */
@Bean
public Map<String, Queue> queues(AmqpAdmin consumerAmqpAdmin) {
    return rabbitProperties.getConsumer().getListeners().entrySet().stream()
            .map(listenerEntry -> {
                Queue queue = QueueBuilder
                        .nonDurable(listenerEntry.getValue().getQueueName())
                        .autoDelete()
                        .build();
                consumerAmqpAdmin.declareQueue(queue);
                return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), queue);
            })
            .collect(Collectors.toMap(
                    AbstractMap.SimpleEntry::getKey,
                    AbstractMap.SimpleEntry::getValue));
}

/**
 * Create the map of available exchanges and declare them in the admin.
 */
@Bean
public Map<String, TopicExchange> exchanges(AmqpAdmin consumerAmqpAdmin) {
    return rabbitProperties.getConsumer().getListeners().entrySet().stream()
            .map(listenerEntry -> {
                TopicExchange exchange =
                        new TopicExchange(listenerEntry.getValue().getExchangeName());
                consumerAmqpAdmin.declareExchange(exchange);
                return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), exchange);
            })
            .collect(Collectors.toMap(
                    AbstractMap.SimpleEntry::getKey,
                    AbstractMap.SimpleEntry::getValue));
}

/**
 * Create the list of bindings and declare them in the admin.
 */
@Bean
public List<Binding> bindings(Map<String, Queue> queues, Map<String, TopicExchange> exchanges,
        AmqpAdmin consumerAmqpAdmin) {
    return rabbitProperties.getConsumer().getListeners().keySet().stream()
            .map(listenerName -> {
                Queue queue = queues.get(listenerName);
                TopicExchange exchange = exchanges.get(listenerName);
                return rabbitProperties.getConsumer().getListeners().get(listenerName)
                        .getKeys().stream()
                        .map(bindingKey -> {
                            Binding binding =
                                    BindingBuilder.bind(queue).to(exchange).with(bindingKey);
                            consumerAmqpAdmin.declareBinding(binding);
                            return binding;
                        })
                        .collect(Collectors.toList());
            })
            .flatMap(Collection::stream)
            .collect(Collectors.toList());
}
Message listeners are:
@RabbitListener(
        queues = "${consumer.listeners.LISTENER_A.queue-name}",
        containerFactory = "priceRabbitListenerContainerFactory")
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
    // call a service to process the message payload
}

@RabbitListener(
        queues = "${consumer.listeners.LISTENER_B.queue-name}",
        containerFactory = "maxbetRabbitListenerContainerFactory")
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
    // call a service to process the message payload
}
Properties:
#
# Server A config (Spring AMQP)
#
consumer.host=server-a
consumer.username=
consumer.password=
consumer.port=5671
consumer.ssl.enabled=true
consumer.ssl.algorithm=TLSv1.2
consumer.ssl.validate-server-certificate=false
consumer.connection-name=local:microservice-1
consumer.thread-factory.thread-group-name=server-a-consumer
consumer.thread-factory.thread-name-prefix=server-a-consumer-
# LISTENER_A configuration
consumer.listeners.LISTENER_A.queue-name=local.listenerA
consumer.listeners.LISTENER_A.exchange-name=exchangeA
consumer.listeners.LISTENER_A.keys[0]=*.1.*.*
consumer.listeners.LISTENER_A.keys[1]=*.3.*.*
consumer.listeners.LISTENER_A.keys[2]=*.6.*.*
consumer.listeners.LISTENER_A.keys[3]=*.8.*.*
consumer.listeners.LISTENER_A.keys[4]=*.9.*.*
consumer.listeners.LISTENER_A.initial-concurrency=5
consumer.listeners.LISTENER_A.maximum-concurrency=20
consumer.listeners.LISTENER_A.thread-name-prefix=listenerA-consumer-
# LISTENER_B configuration
consumer.listeners.LISTENER_B.queue-name=local.listenerB
consumer.listeners.LISTENER_B.exchange-name=exchangeB
consumer.listeners.LISTENER_B.keys[0]=*.1.*
consumer.listeners.LISTENER_B.keys[1]=*.3.*
consumer.listeners.LISTENER_B.keys[2]=*.6.*
consumer.listeners.LISTENER_B.initial-concurrency=5
consumer.listeners.LISTENER_B.maximum-concurrency=20
consumer.listeners.LISTENER_B.thread-name-prefix=listenerB-consumer-
#
# Server B config (Spring Cloud Stream)
#
spring.rabbitmq.host=server-b
spring.rabbitmq.port=5672
spring.rabbitmq.username=
spring.rabbitmq.password=
spring.cloud.stream.bindings.outbound.destination=microservice-out
spring.cloud.stream.bindings.outbound.group=default
spring.cloud.stream.rabbit.binder.connection-name-prefix=local:microservice
So my question is: is it possible to use in the same Spring Boot application code that consumes data from RabbitMQ via Spring AMQP and produces messages into a different server via Spring Cloud Stream RabbitMQ? If it is, could somebody tell me what I am doing wrong, please?
The Spring AMQP version is the one provided by Boot 2.1.7 (2.1.8.RELEASE) and the Spring Cloud Stream version is the one provided by the Spring Cloud release train Greenwich.SR2 (2.1.3.RELEASE).
EDIT
I was able to make it work by configuring the binder via the multiple-binder configuration properties instead of the default single-binder ones. So with this configuration it works:
#
# Server B config (Spring Cloud Stream)
#
spring.cloud.stream.binders.transport-layer.type=rabbit
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.host=server-b
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.username=
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.password=
spring.cloud.stream.bindings.stream-output.destination=microservice-out
spring.cloud.stream.bindings.stream-output.group=default
Unfortunately it is not yet possible to set the connection name in a multiple-binder configuration: a custom ConnectionNameStrategy is ignored if there is a custom binder configuration.
Anyway, I still do not understand why the contexts seem to get "mixed" when using Spring AMQP and Spring Cloud Stream RabbitMQ together. It is still necessary to define a primary MessageHandlerMethodFactory bean for the implementation to work.
EDIT
I found out that the NoUniqueBeanDefinitionException was caused by the microservice itself, which was creating a ConditionalGenericConverter used by the Spring AMQP part to deserialize messages from server A.
I removed it and added some MessageConverters instead. Now the problem is solved and the @Primary bean is no longer necessary.
Unrelated, but
consumerAmqpAdmin.declareQueue(queue);
You should never communicate with the broker within a @Bean definition; it is too early in the application context lifecycle. It might work, but YMMV; also, if the broker is not available, it will prevent your app from starting.
It's better to define beans of type Declarables containing the lists of queues, exchanges, and bindings; the admin will then automatically declare them when the connection is first opened successfully. See the reference manual.
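For example, a sketch using the LISTENER_A names from the properties above; nothing is sent to the broker until the first connection is opened:

@Bean
public Declarables listenerADeclarables() {
    Queue queue = QueueBuilder.nonDurable("local.listenerA").autoDelete().build();
    TopicExchange exchange = new TopicExchange("exchangeA");
    // The admin declares these automatically on the first successful connection.
    return new Declarables(
            queue,
            exchange,
            BindingBuilder.bind(queue).to(exchange).with("*.1.*.*"));
}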
I have never seen the MessageHandlerMethodFactory problem; Spring AMQP declares no such bean. If you can provide a small sample app that exhibits the behavior, that would be useful.
I'll see if I can find a work around to the connection name issue.
EDIT
I found a work around to the connection name issue; it involves a bit of reflection but it works. I suggest you open a new feature request against the binder to request a mechanism to set the connection name strategy when using multiple binders.
Anyway; here's the work around...
@SpringBootApplication
@EnableBinding(Processor.class)
public class So57725710Application {

    public static void main(String[] args) {
        SpringApplication.run(So57725710Application.class, args);
    }

    @Bean
    public Object connectionNameConfigurer(BinderFactory binderFactory) throws Exception {
        setConnectionName(binderFactory, "rabbit1", "myAppProducerSide");
        setConnectionName(binderFactory, "rabbit2", "myAppConsumerSide");
        return null;
    }

    private void setConnectionName(BinderFactory binderFactory, String binderName,
            String conName) throws Exception {
        binderFactory.getBinder(binderName, MessageChannel.class); // force creation
        @SuppressWarnings("unchecked")
        Map<String, Map.Entry<Binder<?, ?, ?>, ApplicationContext>> binders =
                (Map<String, Map.Entry<Binder<?, ?, ?>, ApplicationContext>>) new DirectFieldAccessor(binderFactory)
                        .getPropertyValue("binderInstanceCache");
        binders.get(binderName)
                .getValue()
                .getBean(CachingConnectionFactory.class)
                .setConnectionNameStrategy(queue -> conName);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String listen(String in) {
        System.out.println(in);
        return in.toUpperCase();
    }

}
and
spring.cloud.stream.binders.rabbit1.type=rabbit
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.output.destination=outDest
spring.cloud.stream.bindings.output.producer.required-groups=outQueue
spring.cloud.stream.bindings.output.binder=rabbit1
spring.cloud.stream.binders.rabbit2.type=rabbit
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.input.destination=inDest
spring.cloud.stream.bindings.input.group=default
spring.cloud.stream.bindings.input.binder=rabbit2
I'm new to the Spring Framework and, indeed, I'm learning and using Spring Boot. Recently, in the app I'm developing, I got Quartz Scheduler working, and now I want to make Spring Integration work there: an FTP connection to a server to write and read files from.
What I want is really simple (I've been able to do it in a previous Java application). I've got two Quartz jobs scheduled to fire at different times daily: one of them reads a file from an FTP server and the other one writes a file to an FTP server.
I'll detail what I've developed so far.
@SpringBootApplication
@ImportResource("classpath:ws-config.xml")
@EnableIntegration
@EnableScheduling
public class MyApp extends SpringBootServletInitializer {

    @Autowired
    private Configuration configuration;

    //...

    @Bean
    public DefaultFtpsSessionFactory myFtpsSessionFactory() {
        DefaultFtpsSessionFactory sess = new DefaultFtpsSessionFactory();
        Ftp ftp = configuration.getFtp();
        sess.setHost(ftp.getServer());
        sess.setPort(ftp.getPort());
        sess.setUsername(ftp.getUsername());
        sess.setPassword(ftp.getPassword());
        return sess;
    }
}
I've named the following class FtpGateway:
@Component
public class FtpGateway {

    @Autowired
    private DefaultFtpsSessionFactory sess;

    public void sendFile() {
        // todo
    }

    public void readFile() {
        // todo
    }
}
I'm reading this documentation to learn how to do so. Spring Integration's FTP support seems to be event driven, so I don't know how I can execute either sendFile() or readFile() from my jobs when the trigger is fired at an exact time.
The documentation tells me something about using an Inbound Channel Adapter (to read files from an FTP server?), an Outbound Channel Adapter (to write files to an FTP server?) and an Outbound Gateway (to do what?):
Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also provides convenient namespace-based configuration options for defining these client components.
So it's still not clear to me how to proceed.
Please, could anybody give me a hint?
Thank you!
EDIT:
Thank you @M. Deinum. First, I'll try a simple task: read a file from the FTP server; the poller will run every 5 seconds. This is what I've added:
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
    FtpInboundFileSynchronizer fileSynchronizer =
            new FtpInboundFileSynchronizer(myFtpsSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/Entrada");
    fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.csv"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> ftpMessageSource() {
    FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(configuration.getDirFicherosDescargados()));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    return source;
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
    return new MessageHandler() {
        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            Object payload = message.getPayload();
            if (payload instanceof File) {
                File f = (File) payload;
                System.out.println(f.getName());
            } else {
                System.out.println(message.getPayload());
            }
        }
    };
}
Then, when the app is running, I put a new CSV file into the "Entrada" remote folder, but the handler() method isn't run after 5 seconds... Am I doing something wrong?
Please add @Scheduled(fixedDelay = 5000) over your poller method.
You could use Spring Batch with a tasklet. It is far easier to configure the beans, cron times, and input sources with the existing interfaces provided by Spring:
https://www.baeldung.com/introduction-to-spring-batch
The example above shows both annotation-based and XML-based configuration; you can use either.
Another benefit is that you can make use of listeners and parallel steps. The framework can also be used in a Reader - Processor - Writer manner.
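A minimal tasklet sketch (the step name and body are hypothetical; the article above covers the surrounding job configuration):

@Bean
public Step ftpTransferStep(StepBuilderFactory steps) {
    return steps.get("ftpTransferStep")
            .tasklet((contribution, chunkContext) -> {
                // read from / write to the FTP server here
                return RepeatStatus.FINISHED;
            })
            .build();
}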
I am using Spring Cloud Stream and want to programmatically create and bind channels. My use case is that during application startup I receive the dynamic list of Kafka topics to subscribe to. How can I then create a channel for each topic?
I ran into a similar scenario recently; below is my sample of creating subscribable channels dynamically.
// Configure the binding (consumer properties, destination, group)...
ConsumerProperties consumerProperties = new ConsumerProperties();
consumerProperties.setMaxAttempts(1);
BindingProperties bindingProperties = new BindingProperties();
bindingProperties.setConsumer(consumerProperties);
bindingProperties.setDestination(retryTopic);
bindingProperties.setGroup(consumerGroup);
bindingServiceProperties.getBindings().put(consumerName, bindingProperties);

// ...then create the channel, register it as a bean, bind it, and subscribe.
SubscribableChannel channel = (SubscribableChannel) bindingTargetFactory.createInput(consumerName);
beanFactory.registerSingleton(consumerName, channel);
channel = (SubscribableChannel) beanFactory.initializeBean(channel, consumerName);
bindingService.bindConsumer(channel, consumerName);
channel.subscribe(consumerMessageHandler);
I had to do something similar for the Camel Spring Cloud Stream component.
Perhaps the consumer code that binds a destination (really just a String indicating the channel name) would be useful to you?
In my case I only bind a single destination, however I don't imagine it being much different conceptually for multiple destinations.
Below is the gist of it:
@Override
protected void doStart() throws Exception {
    SubscribableChannel bindingTarget = createInputBindingTarget();
    bindingTarget.subscribe(message -> {
        // have your way with the received incoming message
    });
    endpoint.getBindingService().bindConsumer(bindingTarget, endpoint.getDestination());
    // at this point the binding is done
}

/**
 * Create a {@link SubscribableChannel} and register it in the
 * {@link org.springframework.context.ApplicationContext}.
 */
private SubscribableChannel createInputBindingTarget() {
    SubscribableChannel channel = endpoint.getBindingTargetFactory()
            .createInputChannel(endpoint.getDestination());
    endpoint.getBeanFactory().registerSingleton(endpoint.getDestination(), channel);
    channel = (SubscribableChannel) endpoint.getBeanFactory()
            .initializeBean(channel, endpoint.getDestination());
    return channel;
}
See here for the full source for more context.
I had a task where I did not know the topics in advance. I solved it by having one input channel which listens to all the topics I need.
https://docs.spring.io/spring-cloud-stream/docs/Brooklyn.RELEASE/reference/html/_configuration_options.html
Destination
The target destination of a channel on the bound middleware (e.g., the RabbitMQ exchange or Kafka topic). If the channel is bound as a consumer, it could be bound to multiple destinations and the destination names can be specified as comma-separated String values. If not set, the channel name is used instead.
So my configuration
spring:
  cloud:
    stream:
      default:
        consumer:
          concurrency: 2
          partitioned: true
      bindings:
        # inputs
        input:
          group: application_name_group
          destination: topic-1,topic-2
          content-type: application/json;charset=UTF-8
Then I defined one consumer which handles messages from all these topics.
@Component
@EnableBinding(Sink.class)
public class CommonConsumer {

    private final static Logger logger = LoggerFactory.getLogger(CommonConsumer.class);

    @StreamListener(target = Sink.INPUT)
    public void consumeMessage(final Message<Object> message) {
        logger.info("Received a message: \nmessage:\n{}", message.getPayload());
        // Here I define logic which handles messages depending on message headers and topic.
        // In my case I have configuration which forwards these messages to webhooks,
        // so I need a mapping of topic name -> webhook URI.
    }
}
Note: in your case this may not be a solution. I needed to forward messages to webhooks, so I could have a configuration mapping.
I also thought about other ideas:
1) Use a Kafka client consumer without Spring Cloud.
2) Create a predefined number of inputs, for example 50:
input-1
input-2
...
input-50
And then have configuration for some of these inputs.
Related discussions
Spring cloud stream to support routing messages dynamically
https://github.com/spring-cloud/spring-cloud-stream/issues/690
https://github.com/spring-cloud/spring-cloud-stream/issues/1089
We use Spring Cloud 2.1.1.RELEASE:
MessageChannel messageChannel = createMessageChannel(channelName);
messageChannel.send(getMessageBuilder().apply(data));

public MessageChannel createMessageChannel(String channelName) {
    return (MessageChannel) applicationContext.getBean(channelName);
}

public Function<Object, Message<Object>> getMessageBuilder() {
    return payload -> MessageBuilder
            .withPayload(payload)
            .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
            .build();
}
For incoming messages, you can explicitly use BinderAwareChannelResolver to dynamically resolve the destination. You can check this example where the router sink uses the binder-aware channel resolver.
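A rough sketch of that resolver-based approach (assuming the resolver is injected and producer properties for the destinations are in place):

@Autowired
private BinderAwareChannelResolver resolver;

public void sendTo(String destination, Object payload) {
    // The resolver creates and binds an output channel on first use of a
    // destination name, so destinations can be chosen at runtime.
    MessageChannel channel = resolver.resolveDestination(destination);
    channel.send(MessageBuilder.withPayload(payload).build());
}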
Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue, which is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" some functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided MessageHandler (from my API) to the incoming messages for the specified queue name.
In the configuration example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences, good or bad, using the JMS protocol has here?
I am facing the same issue.
I am trying an unusual approach: I set up an AWS client bean at build time, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation so I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I wrote an example that iterates over the queues defined in properties and consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void pool() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one per tenant, with each queue URL passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal holding the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
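For reference, a rough sketch of that JMS route (assuming the amazon-sqs-java-messaging-lib dependency; the queue name is chosen at runtime, which is what the register(...) API from the question needs):

SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
        new ProviderConfiguration(),
        AmazonSQSClientBuilder.standard().withRegion(Regions.EU_WEST_1).build());
SQSConnection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("a-queue-name"));
consumer.setMessageListener(message -> {
    // delegate to the user-registered MessageHandler here
});
connection.start();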
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}