Goal
I would like to send a message to a topic which I will later process with a client application. For this purpose I use Spring Boot and Spring Integration Java DSL with its JMS module. As the message broker I use a native ActiveMQ Artemis.
Here is my setup
DemoApplication.java
@SpringBootApplication
public class DemoApplication {
private static final Logger logger = LoggerFactory.getLogger(DemoApplication.class);
public interface StarGate {
void sendHello(String helloText);
}
@Autowired
private ConnectionFactory connectionFactory;
@Bean
public IntegrationFlow mainFlow() {
return IntegrationFlows
.from(StarGate.class)
.handle(Jms.outboundAdapter(connectionFactory)
.configureJmsTemplate(jmsTemplateSpec -> jmsTemplateSpec
.deliveryPersistent(true)
.pubSubDomain(true)
.sessionTransacted(true)
.sessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE)
.explicitQosEnabled(true)
)
.destination(new ActiveMQTopic("wormhole")))
.get();
}
public static void main(String[] args) {
ConfigurableApplicationContext context = SpringApplication.run(DemoApplication.class, args);
StarGate stargate = context.getBean(StarGate.class);
stargate.sendHello("Jaffa, kree!");
logger.info("Hello message sent.");
}
}
application.properties
spring.artemis.mode=native
spring.artemis.host=localhost
spring.artemis.port=61616
spring.artemis.user=artemis
spring.artemis.password=simetraehcapa
spring.jms.pub-sub-domain=true
spring.jms.template.delivery-mode=persistent
spring.jms.template.qos-enabled=true
spring.jms.listener.acknowledge-mode=client
logging.level.org.springframework=INFO
build.gradle (the important parts)
springBootVersion = '2.0.2.RELEASE'
dependencies {
compile('org.springframework.boot:spring-boot-starter-artemis')
compile('org.springframework.boot:spring-boot-starter-integration')
compile('org.springframework.integration:spring-integration-jms')
testCompile('org.springframework.boot:spring-boot-starter-test')
}
As the ActiveMQ Artemis server I use the vromero/artemis (2.6.0) Docker image with its default configuration.
The Problem
On the producer side the message appears to be sent successfully, but on the message broker side the message is missing. The address is created but the queue is not.
The name of the topic will be dynamic in the future, so I'm not allowed to create the topic manually in broker.xml. I rely on the automatic-queue-creation feature of Artemis.
Why is message sending not working in this case?
Nerd note: I'm aware that Star Gates are basically connected via wormholes in a point-to-point manner but for the sake of the question let's ignore this fact.
When a message is sent to a topic and auto-creation is enabled for both addresses and queues, only the address will be created, not a queue. If a queue were created automatically and the message were put into it, that would violate the semantics of a topic. A subscription queue on a topic address is only created in response to a subscriber. Therefore, you need a subscriber on the topic before you send the message, otherwise the message will be dropped (in accordance with topic semantics).
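For illustration (this bean is not part of the original question), a minimal sketch of a subscriber flow that could be added to DemoApplication so a subscription queue exists on the wormhole address before sendHello() is called; the handler is just a logger:
@Bean
public IntegrationFlow wormholeSubscriberFlow() {
    // Hypothetical subscriber: its listener container triggers creation of a
    // subscription queue on the "wormhole" topic, so subsequent sends are not dropped.
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                    .destination(new ActiveMQTopic("wormhole")))
            .handle(message -> logger.info("Received from wormhole: {}", message.getPayload()))
            .get();
}
Alternatively, a durable subscription or a queue created on the address up front (for example via the broker's management facilities) achieves the same effect; otherwise any message sent before the first subscriber exists is simply discarded.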
Related
I'm using Spring Boot along with @JmsListener to retrieve IBM MQ messages from multiple queues within the same queue manager. So far I can get messages without any issues. But there could be scenarios where I have to stop consuming messages from one of these queues temporarily. It doesn't have to be dynamic.
I'm not using any custom ConnectionFactory methods. When needed, I would like to make config changes in application.properties to disable consumption from that particular queue and restart the process. Is this possible? I can't find any specific info for this scenario. Would appreciate any suggestions. TIA.
@Component
public class MyJmsListener {
#JmsListener(destination = "{ibm.mq.queue.queue01}")
public void handleQueue01(String message) {
System.out.println("received: "+message);
}
#JmsListener(destination = "{ibm.mq.queue.queue02}")
public void handleQueue02(String message) {
System.out.println("received: "+message);
}
}
application.properties
ibm.mq.queue.queue01=IBM.QUEUE01
ibm.mq.queue.queue02=IBM.QUEUE02
If you give each @JmsListener an id property, you can start and stop them individually using the JmsListenerEndpointRegistry bean.
registry.getListenerContainer(id).stop();
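A minimal sketch of that approach, assuming the property names from the question; the listener ids and the enabled flag are made-up examples:
@Component
public class MyJmsListener {

    @JmsListener(id = "queue01Listener", destination = "${ibm.mq.queue.queue01}")
    public void handleQueue01(String message) {
        System.out.println("received: " + message);
    }

    @JmsListener(id = "queue02Listener", destination = "${ibm.mq.queue.queue02}")
    public void handleQueue02(String message) {
        System.out.println("received: " + message);
    }
}

@Component
public class ListenerToggle {

    // Hypothetical switch, e.g. ibm.mq.queue.queue01.enabled=false in application.properties
    @Value("${ibm.mq.queue.queue01.enabled:true}")
    private boolean queue01Enabled;

    @Autowired
    private JmsListenerEndpointRegistry registry;

    @EventListener(ApplicationReadyEvent.class)
    public void adjustListeners() {
        if (!queue01Enabled) {
            registry.getListenerContainer("queue01Listener").stop();
        }
    }
}
Note that with this sketch the container starts and is then stopped right after startup; if even that brief window matters, the inverse approach is a container factory with autoStartup=false plus a conditional start().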
I want to send something to a Kafka topic in a producer-only transaction (not in a read-write process) using an output channel.
I read the documentation and another question on Stack Overflow (Spring cloud stream kafka transactions in producer side).
The problem is that I need to set a unique transactionIdPrefix per node.
Any suggestion how to do it?
Here is one way...
@Component
class TxIdCustomizer implements EnvironmentAware {
@Override
public void setEnvironment(Environment environment) {
Properties properties = new Properties();
properties.setProperty("spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix",
UUID.randomUUID().toString());
((StandardEnvironment) environment).getPropertySources()
.addLast(new PropertiesPropertySource("txId", properties));
}
}
I am migrating a Spring Boot microservice that consumes data from 3 RabbitMQ queues on server A, saves it into Redis and finally produces messages into an exchange in a different RabbitMQ on server B so these messages can be consumed by another microservice. This flow is working fine but I would like to migrate it to Spring Cloud Stream using the RabbitMQ binder. All Spring AMQP configuration is customised in the properties file and no spring property is used to create connections, queues, bindings, etc...
My first idea was to set up two bindings in Spring Cloud Stream, one connected to server A (consumer) and the other connected to server B (producer), and migrate the existing code to a Processor, but I discarded it because it seems connection names cannot yet be set if multiple binders are used, I need several bindings to consume from server A's queues, and the bindingRoutingKey property does not support a list of values (I know it can be done programmatically as explained here).
So I decided to only refactor the part of code related to the producer to use Spring Cloud Stream over RabbitMQ so the same microservice should consume via Spring AMQP from server A (original code) and should produce into server B via Spring Cloud Stream.
The first issue I found was a NoUniqueBeanDefinitionException in Spring Cloud Stream because an org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory bean was defined twice, with the names handlerMethodFactory and integrationMessageHandlerMethodFactory.
org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory' available: expected single matching bean but found 2: handlerMethodFactory,integrationMessageHandlerMethodFactory
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1144)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveBean(DefaultListableBeanFactory.java:411)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:344)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:337)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1123)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.injectAndPostProcessDependencies(StreamListenerAnnotationBeanPostProcessor.java:317)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.afterSingletonsInstantiated(StreamListenerAnnotationBeanPostProcessor.java:113)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:862)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:743)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:390)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1214)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1203)
It seems the former bean is created by Spring AMQP and the latter by Spring Cloud Stream so I created my own primary bean:
@Bean
@Primary
public MessageHandlerMethodFactory messageHandlerMethodFactory() {
return new DefaultMessageHandlerMethodFactory();
}
Now the application is able to start, but the output channel is created by Spring Cloud Stream on server A instead of server B. It seems the Spring Cloud Stream configuration is using the connection created by Spring AMQP instead of its own configuration.
The configuration of Spring AMQP is this:
@Bean
public SimpleRabbitListenerContainerFactory priceRabbitListenerContainerFactory(
ConnectionFactory consumerConnectionFactory) {
return
getSimpleRabbitListenerContainerFactory(
consumerConnectionFactory,
rabbitProperties.getConsumer().getListeners().get(LISTENER_A));
}
@Bean
public SimpleRabbitListenerContainerFactory maxbetRabbitListenerContainerFactory(
ConnectionFactory consumerConnectionFactory) {
return
getSimpleRabbitListenerContainerFactory(
consumerConnectionFactory,
rabbitProperties.getConsumer().getListeners().get(LISTENER_B));
}
@Bean
public ConnectionFactory consumerConnectionFactory() throws Exception {
return
new CachingConnectionFactory(
getRabbitConnectionFactoryBean(
rabbitProperties.getConsumer()
).getObject()
);
}
private SimpleRabbitListenerContainerFactory getSimpleRabbitListenerContainerFactory(
ConnectionFactory connectionFactory,
RabbitProperties.ListenerProperties listenerProperties) {
//return a SimpleRabbitListenerContainerFactory set up from external properties
}
/**
* Create the AMQ Admin.
*/
@Bean
public AmqpAdmin consumerAmqpAdmin(ConnectionFactory consumerConnectionFactory) {
return new RabbitAdmin(consumerConnectionFactory);
}
/**
* Create the map of available queues and declare them in the admin.
*/
@Bean
public Map<String, Queue> queues(AmqpAdmin consumerAmqpAdmin) {
return
rabbitProperties.getConsumer().getListeners().entrySet().stream()
.map(listenerEntry -> {
Queue queue =
QueueBuilder
.nonDurable(listenerEntry.getValue().getQueueName())
.autoDelete()
.build();
consumerAmqpAdmin.declareQueue(queue);
return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), queue);
}).collect(
Collectors.toMap(
AbstractMap.SimpleEntry::getKey,
AbstractMap.SimpleEntry::getValue
)
);
}
/**
* Create the map of available exchanges and declare them in the admin.
*/
@Bean
public Map<String, TopicExchange> exchanges(AmqpAdmin consumerAmqpAdmin) {
return
rabbitProperties.getConsumer().getListeners().entrySet().stream()
.map(listenerEntry -> {
TopicExchange exchange =
new TopicExchange(listenerEntry.getValue().getExchangeName());
consumerAmqpAdmin.declareExchange(exchange);
return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), exchange);
}).collect(
Collectors.toMap(
AbstractMap.SimpleEntry::getKey,
AbstractMap.SimpleEntry::getValue
)
);
}
/**
* Create the list of bindings and declare them in the admin.
*/
@Bean
public List<Binding> bindings(Map<String, Queue> queues, Map<String, TopicExchange> exchanges, AmqpAdmin consumerAmqpAdmin) {
return
rabbitProperties.getConsumer().getListeners().keySet().stream()
.map(listenerName -> {
Queue queue = queues.get(listenerName);
TopicExchange exchange = exchanges.get(listenerName);
return
rabbitProperties.getConsumer().getListeners().get(listenerName).getKeys().stream()
.map(bindingKey -> {
Binding binding = BindingBuilder.bind(queue).to(exchange).with(bindingKey);
consumerAmqpAdmin.declareBinding(binding);
return binding;
}).collect(Collectors.toList());
}).flatMap(Collection::stream)
.collect(Collectors.toList());
}
Message listeners are:
@RabbitListener(
queues="${consumer.listeners.LISTENER_A.queue-name}",
containerFactory = "priceRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
// call a service to process the message payload
}
@RabbitListener(
queues="${consumer.listeners.LISTENER_B.queue-name}",
containerFactory = "maxbetRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
// call a service to process the message payload
}
Properties:
#
# Server A config (Spring AMQP)
#
consumer.host=server-a
consumer.username=
consumer.password=
consumer.port=5671
consumer.ssl.enabled=true
consumer.ssl.algorithm=TLSv1.2
consumer.ssl.validate-server-certificate=false
consumer.connection-name=local:microservice-1
consumer.thread-factory.thread-group-name=server-a-consumer
consumer.thread-factory.thread-name-prefix=server-a-consumer-
# LISTENER_A configuration
consumer.listeners.LISTENER_A.queue-name=local.listenerA
consumer.listeners.LISTENER_A.exchange-name=exchangeA
consumer.listeners.LISTENER_A.keys[0]=*.1.*.*
consumer.listeners.LISTENER_A.keys[1]=*.3.*.*
consumer.listeners.LISTENER_A.keys[2]=*.6.*.*
consumer.listeners.LISTENER_A.keys[3]=*.8.*.*
consumer.listeners.LISTENER_A.keys[4]=*.9.*.*
consumer.listeners.LISTENER_A.initial-concurrency=5
consumer.listeners.LISTENER_A.maximum-concurrency=20
consumer.listeners.LISTENER_A.thread-name-prefix=listenerA-consumer-
# LISTENER_B configuration
consumer.listeners.LISTENER_B.queue-name=local.listenerB
consumer.listeners.LISTENER_B.exchange-name=exchangeB
consumer.listeners.LISTENER_B.keys[0]=*.1.*
consumer.listeners.LISTENER_B.keys[1]=*.3.*
consumer.listeners.LISTENER_B.keys[2]=*.6.*
consumer.listeners.LISTENER_B.initial-concurrency=5
consumer.listeners.LISTENER_B.maximum-concurrency=20
consumer.listeners.LISTENER_B.thread-name-prefix=listenerB-consumer-
#
# Server B config (Spring Cloud Stream)
#
spring.rabbitmq.host=server-b
spring.rabbitmq.port=5672
spring.rabbitmq.username=
spring.rabbitmq.password=
spring.cloud.stream.bindings.outbound.destination=microservice-out
spring.cloud.stream.bindings.outbound.group=default
spring.cloud.stream.rabbit.binder.connection-name-prefix=local:microservice
So my question is: is it possible to use in the same Spring Boot application code that consumes data from RabbitMQ via Spring AMQP and produces messages into a different server via Spring Cloud Stream RabbitMQ? If it is, could somebody tell me what I am doing wrong, please?
Spring AMQP version is the one provided by Boot 2.1.7 (2.1.8.RELEASE) and Spring Cloud Stream version is the one provided by Spring Cloud train Greenwich.SR2 (2.1.3.RELEASE).
EDIT
I was able to make it work by configuring the binder via the multiple-binders configuration properties instead of the default ones. So with this configuration it works:
#
# Server B config (Spring Cloud Stream)
#
spring.cloud.stream.binders.transport-layer.type=rabbit
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.host=server-b
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.username=
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.password=
spring.cloud.stream.bindings.stream-output.destination=microservice-out
spring.cloud.stream.bindings.stream-output.group=default
Unfortunately it is not yet possible to set the connection name in a multiple-binders configuration: a custom ConnectionNameStrategy is ignored if there is a custom binder configuration.
Anyway, I still do not understand why the contexts seem to be "mixed" when using Spring AMQP and Spring Cloud Stream RabbitMQ. It is still necessary to set a primary MessageHandlerMethodFactory bean in order for the implementation to work.
EDIT
I found out that the NoUniqueBeanDefinitionException was caused by the microservice itself creating a ConditionalGenericConverter, used by the Spring AMQP part to deserialize messages from server A.
I removed it and added some MessageConverters instead. Now the problem is solved and the @Primary bean is no longer necessary.
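A minimal sketch of that kind of replacement, assuming the payload from server A is JSON; the concrete converter depends on the real payload format:
@Bean
public MessageConverter consumerMessageConverter() {
    // Assumption: JSON payloads; set this converter on the
    // SimpleRabbitListenerContainerFactory instead of registering a
    // custom ConditionalGenericConverter.
    return new Jackson2JsonMessageConverter();
}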
Unrelated, but
consumerAmqpAdmin.declareQueue(queue);
You should never communicate with the broker within a @Bean definition; it is too early in the application context's lifecycle. It might work but YMMV; also, if the broker is not available it will prevent your app from starting.
It's better to define beans of type Declarables containing the lists of queues, exchanges, and bindings; the admin will automatically declare them when the connection is first opened successfully. See the reference manual.
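A minimal sketch of that approach, reusing one queue/exchange/key combination from the properties above (the real beans would be built by iterating over rabbitProperties just like the existing code):
@Bean
public Declarables consumerTopology() {
    // The RabbitAdmin declares these on the broker when the connection is
    // first opened, instead of talking to the broker inside the @Bean method.
    Queue queue = QueueBuilder.nonDurable("local.listenerA").autoDelete().build();
    TopicExchange exchange = new TopicExchange("exchangeA");
    return new Declarables(
            queue,
            exchange,
            BindingBuilder.bind(queue).to(exchange).with("*.1.*.*"));
}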
I have never seen the MessageHandlerMethodFactory problem; Spring AMQP declares no such bean. If you can provide a small sample app that exhibits the behavior, that would be useful.
I'll see if I can find a work around to the connection name issue.
EDIT
I found a work around to the connection name issue; it involves a bit of reflection but it works. I suggest you open a new feature request against the binder to request a mechanism to set the connection name strategy when using multiple binders.
Anyway; here's the work around...
@SpringBootApplication
@EnableBinding(Processor.class)
public class So57725710Application {
public static void main(String[] args) {
SpringApplication.run(So57725710Application.class, args);
}
@Bean
public Object connectionNameConfigurer(BinderFactory binderFactory) throws Exception {
setConnectionName(binderFactory, "rabbit1", "myAppProducerSide");
setConnectionName(binderFactory, "rabbit2", "myAppConsumerSide");
return null;
}
private void setConnectionName(BinderFactory binderFactory, String binderName,
String conName) throws Exception {
binderFactory.getBinder(binderName, MessageChannel.class); // force creation
@SuppressWarnings("unchecked")
Map<String, Map.Entry<Binder<?, ?, ?>, ApplicationContext>> binders =
(Map<String, Entry<Binder<?, ?, ?>, ApplicationContext>>) new DirectFieldAccessor(binderFactory)
.getPropertyValue("binderInstanceCache");
binders.get(binderName)
.getValue()
.getBean(CachingConnectionFactory.class).setConnectionNameStrategy(queue -> conName);
}
@StreamListener(Processor.INPUT)
@SendTo(Processor.OUTPUT)
public String listen(String in) {
System.out.println(in);
return in.toUpperCase();
}
}
and
spring.cloud.stream.binders.rabbit1.type=rabbit
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.output.destination=outDest
spring.cloud.stream.bindings.output.producer.required-groups=outQueue
spring.cloud.stream.bindings.output.binder=rabbit1
spring.cloud.stream.binders.rabbit2.type=rabbit
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.input.destination=inDest
spring.cloud.stream.bindings.input.group=default
spring.cloud.stream.bindings.input.binder=rabbit2
Please refer to system diagram attached.
ISSUE: When I try to post a message to the input channel, the code tries to connect to the DB and throws an exception that it is unable to connect.
Code inside 5 -> Read from a channel, apply Business Logic (empty for now) and send the response to another channel.
@Bean
public IntegrationFlow sendToBusinessLogictoNotifyExternalSystem() {
return IntegrationFlows
.from("CommonChannelName")
.handle("Business Logic Class name") // Business Logic empty for now
.channel("QueuetoAnotherSystem")
.get();
}
I have written the JUnit test for 5 as given below:
@Autowired
PublishSubscribeChannel CommonChannelName;
@Autowired
PollableChannel QueuetoAnotherSystem;
@Test
public void sendToBusinessLogictoNotifyExternalSystem() {
Message<?> message = (Message<?>) MessageBuilder.withPayload("World")
.setHeader(MessageHeaders.REPLY_CHANNEL, QueuetoAnotherSystem).build();
this.CommonChannelName.send((org.springframework.messaging.Message<?>) message);
Message<?> receive = QueuetoAnotherSystem.receive(5000);
assertNotNull(receive);
assertEquals("World", receive.getPayload());
}
ISSUE: As you can see from the system diagram, my code also has a DB connection on a different flow.
When I try to post a message to the producer channel, the code tries to connect to the DB and throws an exception that it is unable to connect.
I do not want this to happen, because the JUnit test should never depend on the DB and should run anywhere, anytime.
How do I fix this exception?
NOTE: Not sure if it matters, but the application is a Spring Boot application. I have used Spring Integration inside the code to read from and write to queues.
Since the common channel is a publish/subscribe channel, the message goes to both flows.
If this is a follow-up to this question/answer, you can prevent the DB flow from being invoked by calling stop() on the sendToDb flow (as long as you set ignoreFailures to true on the pub/sub channel, like I suggested there).
((Lifecycle) sendToDb).stop();
JUNIT TEST CASE - UPDATED:
@Autowired
PublishSubscribeChannel CommonChannelName;
@Autowired
PollableChannel QueuetoAnotherSystem;
@Autowired
SendResponsetoDBConfig sendResponsetoDBConfig;
@Test
public void sendToBusinessLogictoNotifyExternalSystem() {
Lifecycle flowToDB = ((Lifecycle) sendResponsetoDBConfig.sendToDb());
flowToDB.stop();
Message<?> message = (Message<?>) MessageBuilder.withPayload("World")
.setHeader(MessageHeaders.REPLY_CHANNEL, QueuetoAnotherSystem).build();
this.CommonChannelName.send((org.springframework.messaging.Message<?>) message);
Message<?> receive = QueuetoAnotherSystem.receive(5000);
assertNotNull(receive);
assertEquals("World", receive.getPayload());
}
CODE FOR 4: The flow that handles message to DB
public class SendResponsetoDBConfig {
@Bean
public IntegrationFlow sendToDb() {
System.out.println("******************* Inside SendResponsetoDBConfig.sendToDb ***********");
return IntegrationFlows
.from("Common Channel Name")
.handle("DAO Impl to store into DB")
.get();
}
}
NOTE: ******************* Inside SendResponsetoDBConfig.sendToDb *********** never gets printed.
I'm trying to configure a simple Spring Cloud Stream application with RabbitMQ. The code I use is mostly taken from spring-cloud-stream-samples.
I have an entry point:
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
and a simple message producer from the example:
@EnableBinding(Source.class)
public class SourceModuleDefinition {
private String format = "yyyy-MM-dd HH:mm:ss";
@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "${fixedDelay}", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
return () -> new GenericMessage<>(new SimpleDateFormat(this.format).format(new Date()));
}
}
Additionally, here is application.yml configuration:
fixedDelay: 5000
spring:
cloud:
stream:
bindings:
output:
destination: test
When I run the example, it connects to Rabbit and creates an exchange called test. But my problem is that it doesn't create a queue and binding automatically. I can see traffic going through Rabbit, but all my messages are then gone, while I need them to stay in some queue until they are read by a consumer.
Maybe I misunderstand something, but from all the topics I read, it seems like Spring Cloud Stream should create a queue and a binding automatically. If not, how do I configure it so my messages are persisted?
I'm using Spring Cloud Brixton.SR5 and Spring Boot 1.4.0.RELEASE.
A queue would be created as soon as you start a consumer application.
In the case of RabbitMQ, we have separate queues for each consumer group and we cannot know all groups beforehand. If you want the queues created automatically for consumer groups that are known in advance, you can use the requiredGroups property of the producer. This ensures that messages are persisted until a consumer from that group is started.
See details here: http://docs.spring.io/spring-cloud-stream/docs/Brooklyn.BUILD-SNAPSHOT/reference/htmlsingle/#_producer_properties
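With the configuration from the question, that could look like this (the group name testGroup is just an example, assuming a version where requiredGroups is available, as in the Brooklyn docs linked above):
fixedDelay: 5000
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: test
          producer:
            requiredGroups: testGroup
This makes the binder declare and bind the test.testGroup queue at startup, so messages are retained even before a consumer is running.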
You will need a consumer in order to have a queue created.
Here you can find an example of a producer and a consumer using RabbitMQ:
http://ignaciosuay.com/how-to-implement-asyncronous-communication-between-microservices-using-spring-cloud-stream/