Failed to start bean 'org.springframework.amqp.rabbit.config.internalRabbitListenerEndpointRegistry' - spring-boot

I have a simple Spring Boot application with a RabbitMQ sender and a receiver. I want to write receiver tests in which a RabbitMQ Docker instance runs as a JUnit class rule (RabbitContainerRule), a message is sent with RabbitTemplate, and the test verifies that the receiver gets the same message. But I am getting the following exception:
Caused by: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.amqp.rabbit.config.internalRabbitListenerEndpointRegistry'; nested exception is org.springframework.amqp.AmqpIllegalStateException: Fatal exception on listener startup
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178)
Caused by: org.springframework.amqp.rabbit.listener.QueuesNotAvailableException: Cannot prepare queue for listener. Either the queue doesn't exist or the broker will not allow us to use it.
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:599)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1424)
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'my-message-queue' in vhost '/', class-id=50, method-id=10)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:66)
If I create the queue manually (by stopping at a breakpoint) in the Docker instance via the admin console, my test passes.
Also, if I run the application manually against the Docker RabbitMQ instance, it creates the queue successfully. So what is preventing the queue from being created in the test?
I am using spring-amqp 1.7.4.RELEASE.
Receiver code:
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "my-message-queue", durable = "true",
                arguments = {
                        @Argument(name = "x-dead-letter-exchange", value = "my-message-exchange-dead-letter"),
                        @Argument(name = "x-dead-letter-routing-key", value = "my-message-queue")}),
        exchange = @Exchange(value = "my-message-exchange", type = "topic", durable = "true"),
        key = "my-message-rk")
)
public void handleMessage(MyMessage message) {
    MESSAGE_LOG.info("Receiving message: " + message);
}
Also, I am not creating any @Bean for my-message-queue in my configuration; I rely on the @RabbitListener annotation to declare it for me. I am, however, creating ConnectionFactory, RabbitTemplate and SimpleRabbitListenerContainerFactory beans in my config.

@EnableRabbit is required on some @Configuration class so that your application context processes @RabbitListener annotations.
For the application to declare the queues, exchanges and the bindings between them automatically, a RabbitAdmin bean must also be present in the configuration.
See Reference Manual for more information: https://docs.spring.io/spring-amqp/docs/2.0.0.RELEASE/reference/html/
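For example, a minimal sketch (class and bean names are illustrative, not from the original post) that enables the listener annotations and adds the RabbitAdmin which declares the queue, exchange and binding defined on the @RabbitListener:

import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableRabbit
public class RabbitConfig {

    // Without a RabbitAdmin bean the queue/exchange/binding declared on the
    // @RabbitListener are never created on the broker, which produces the
    // 404 NOT_FOUND error from the question.
    @Bean
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        return new RabbitAdmin(connectionFactory);
    }
}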

The class where you are declaring your queue beans should be annotated with @Configuration; otherwise Spring will not be able to create the queues at startup.
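Alternatively, a rough sketch (bean names are illustrative) of declaring the queue, exchange and binding explicitly in such a @Configuration class, so the RabbitAdmin creates them at startup without relying on the @QueueBinding on the listener:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class QueueDeclarationConfig {

    @Bean
    public Queue myMessageQueue() {
        // the x-dead-letter-* arguments from the original @Queue can be added
        // via QueueBuilder.withArgument(); omitted here for brevity
        return new Queue("my-message-queue", true);
    }

    @Bean
    public TopicExchange myMessageExchange() {
        return new TopicExchange("my-message-exchange", true, false);
    }

    @Bean
    public Binding myMessageBinding(Queue myMessageQueue, TopicExchange myMessageExchange) {
        return BindingBuilder.bind(myMessageQueue).to(myMessageExchange).with("my-message-rk");
    }
}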

Related

Pass multiple queue names to Spring JMSListener

How can I pass multiple queue names to my @JmsListener?
@JmsListener(destination = "#{'${solace.jms.queueNames}'.split(',')}")
In my property file:
solace.jms.queueNames=q1,q2,q3
But when I start the Spring Boot app I get the below error:
.s.j.l.DefaultMessageListenerContainer : Setup of JMS message listener invoker failed for destination '[Ljava.lang.String;@1b30a54e' - trying to recover. Cause: Error creating consumer - internal error (Queue name "[Ljava.lang.String;@1b30a54e" contains illegal character [;])
How to resolve it?
You can define multiple @JmsListener annotations:
@JmsListener(destination = "${solace.jms.queueNames[0]}")
@JmsListener(destination = "${solace.jms.queueNames[1]}")
@JmsListener(destination = "${solace.jms.queueNames[2]}")
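If the property must stay as a single comma-separated value (as in the question), another option - a sketch, not taken from the original answer - is to keep the SpEL split expression but index into the resulting array, so each listener still receives exactly one queue name:

import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class MultiQueueListener {

    // Each listener gets one element of the split array, avoiding the
    // "[Ljava.lang.String;@..." destination name from the error above.
    @JmsListener(destination = "#{'${solace.jms.queueNames}'.split(',')[0]}")
    public void onQueueOne(String message) {
        System.out.println("q1: " + message);
    }

    @JmsListener(destination = "#{'${solace.jms.queueNames}'.split(',')[1]}")
    public void onQueueTwo(String message) {
        System.out.println("q2: " + message);
    }

    @JmsListener(destination = "#{'${solace.jms.queueNames}'.split(',')[2]}")
    public void onQueueThree(String message) {
        System.out.println("q3: " + message);
    }
}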

Spring Boot app not starting when Kafka is not up

I have a Spring Boot app that has a Kafka consumer and producer in it. There's also a bean to create a topic.
e.g.
@KafkaListener(topics = "myTopic")
public void doSomething() {
    // do something on receipt of the message
}

@Bean
public NewTopic topic() {
    return TopicBuilder.name("myTopic")
            .partitions(2)
            .
Both my Spring Boot app and Kafka start up in Docker in Kubernetes. Sometimes the Spring Boot app starts before the Kafka pod is up and therefore fails to start because the consumer cannot connect (see stack trace).
Is there a way for my application to start up in a resilient manner? For example, the consumer should cope with Kafka not being there at startup or while the app is running.
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:827)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:629)
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:207)
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:193)
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:167)
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:141)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:607)
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:329)
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:340)
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:176)
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:340)
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:312)
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:257)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
... 59 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:88)
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:735)
You can set autoStartup = "false" on the listener and start it yourself later (using the KafkaListenerEndpointRegistry; give the listener an id so you can get a reference to its container from the registry).
If the broker is not available, the KafkaAdmin won't create the topic; you will also need to call KafkaAdmin.initialize():
/**
 * Call this method to check/add topics; this might be needed if the broker was not
 * available when the application context was initialized, and
 * {@link #setFatalIfBrokerNotAvailable(boolean) fatalIfBrokerNotAvailable} is false,
 * or {@link #setAutoCreate(boolean) autoCreate} was set to false.
 * @return true if successful.
 * @see #setFatalIfBrokerNotAvailable(boolean)
 * @see #setAutoCreate(boolean)
 */
public final boolean initialize() {
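Putting both suggestions together, a rough sketch (the listener id, class name and the trigger for calling startListener() are illustrative, not prescribed by the answer):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.stereotype.Component;

@Component
public class ResilientKafkaConsumer {

    private final KafkaListenerEndpointRegistry registry;
    private final KafkaAdmin kafkaAdmin;

    public ResilientKafkaConsumer(KafkaListenerEndpointRegistry registry, KafkaAdmin kafkaAdmin) {
        this.registry = registry;
        this.kafkaAdmin = kafkaAdmin;
    }

    // autoStartup = "false": the container is not started during context startup,
    // so a missing broker no longer prevents the application from coming up.
    @KafkaListener(id = "myListener", topics = "myTopic", autoStartup = "false")
    public void doSomething(String message) {
        // do something on receipt of the message
    }

    // Call this once the broker is reachable, e.g. from a scheduled retry task.
    public void startListener() {
        if (kafkaAdmin.initialize()) { // checks/creates the NewTopic beans, e.g. "myTopic"
            registry.getListenerContainer("myListener").start();
        }
    }
}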

RabbitHandler to create consumer and retry on Fatal Exception in Spring for queue on listening to RabbitMQ

I am using Spring AMQP RabbitHandler and have written the following code:
@RabbitListener(queues = "#{testQueue.name}")
public class Tut4Receiver {

    @RabbitHandler
    public void receiveMessage(String message) {
        System.out.println("Message received " + message);
    }
}
The Queue is defined like:-
@Bean
public Queue testQueue() {
    return new AnonymousQueue();
}
I am using separate code to initialize the Connection Factory.
My question: if RabbitMQ is down for some time, the container keeps retrying to create a consumer, but only when it receives a ConnectionRefused error. Suppose instead that the user does not yet exist in RabbitMQ and will only be created after a gap; the container then receives a fatal error from RabbitMQ and never retries, and the result is that the auto-delete queue ends up declared on RabbitMQ without any consumers.
Stack Trace:
SimpleMessageListenerContainer] [SimpleAsyncTaskExecutor-11] [|] [|||] Consumer received fatal exception on startup
org.springframework.amqp.rabbit.listener.exception.FatalListenerStartupException: Authentication failure
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:476)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1280)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:65)
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:309)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:547)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$1.createConnection(ConnectionFactoryUtils.java:90)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.doGetTransactionalResourceHolder(ConnectionFactoryUtils.java:140)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactoryUtils.java:76)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:472)
... 2 common frames omitted
Caused by: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:339)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:813)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:767)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:887)
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:300)
SimpleMessageListenerContainer] [SimpleAsyncTaskExecutor-11] [|] [|||] Stopping container from aborted consumer
[|] [|||] Waiting for workers to finish.
[|] [|||] Successfully waited for workers to finish.
Any way to retry even on fatal exceptions like when the user does not exist?
Authentication failures are considered fatal by default and not retried.
You can override this behavior by setting a property on the listener container (possibleAuthenticationFailureFatal). The property is not available as a boot property so you have to override boot's container factory...
@Bean(name = "rabbitListenerContainerFactory")
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    // treat authentication failures as non-fatal so the container keeps retrying
    factory.setContainerCustomizer(smlc -> smlc.setPossibleAuthenticationFailureFatal(false));
    return factory;
}

Rabbit MQ Connection Factory Connecting to Cluster with 2 nodes But separate queue names

Below is the RabbitMQ setup configured by my middleware team.
A cluster has 2 nodes, and each node has one queue, say Node1 --> Sample.Q1 and Node2 --> Sample.Q2.
The queues are configured to each take 50% of the load on the RabbitMQ side.
Basically Sample.Q1 and Sample.Q2 receive the same type of messages, but the two queues are created on separate hosts for resilience and high availability.
I asked them to use the same queue name across the nodes, but my middleware team confirmed that they cannot create duplicate queues on the same cluster.
My question is how Spring Boot supports creating a connection factory and a Rabbit listener for this setup.
I have the configuration below, but it is not working; I know it is not correct.
@Bean
public CachingConnectionFactory subscriberConnectionFactory() {
    CachingConnectionFactory subsCachingConnectionFactory = new CachingConnectionFactory();
    subsCachingConnectionFactory.setAddresses(rabbitMqConfig.getSubscriberAddresses()); // host1:port, host2:port
    subsCachingConnectionFactory.setUsername(rabbitMqConfig.getSubscriberUsername());
    subsCachingConnectionFactory.setPassword(rabbitMqConfig.getSubscriberPassword());
    subsCachingConnectionFactory.setVirtualHost(rabbitMqConfig.getVhost());
    subsCachingConnectionFactory.setConnectionNameStrategy(f -> "subscriberConnection");
    return subsCachingConnectionFactory;
}

@RabbitListener(id = "messageListener", queues = "#{rabbitMqConfig.getSubscriberQueueName()}", containerFactory = "queueListenerContainer")
public void receiveMessage(Message message, Channel channel, @Header("id") String messageId,
        @Header("amqp_deliveryTag") Long deliveryTag) {
    LOGGER.info(" Message:" + message.toString());
}
Queues are configured like Sample.Q1, Sample.Q2.
But this is not working.
----------
Error Log:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'sample.q1,Hydra.clone.q2' in vhost 'Sample.services', class-id=50, method-id=10)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:66)
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:36)
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:494)
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:288)
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:138)
... 14 common frames omitted
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'Sample.q1,Sample.q2' in vhost 'Sample.services', class-id=50, method-id=10)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:516)
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:346)
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:178)
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:111)
at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:670)
at com.rabbitmq.client.impl.AMQConnection.access$300(AMQConnection.java:48)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:597)
... 1 common frames omitted
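No answer is quoted here, but the NOT_FOUND error shows that the comma-joined property value was passed to the container as a single literal queue name. As a sketch only (reusing the rabbitMqConfig bean and listener from the question, and assuming getSubscriberQueueName() returns the comma-separated names), splitting the value in SpEL hands the listener two separate queue names instead:

@RabbitListener(id = "messageListener",
        queues = "#{rabbitMqConfig.getSubscriberQueueName().split(',')}",
        containerFactory = "queueListenerContainer")
public void receiveMessage(Message message, Channel channel, @Header("id") String messageId,
        @Header("amqp_deliveryTag") Long deliveryTag) {
    LOGGER.info(" Message:" + message.toString());
}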

Spring Boot with Embedded Mongo : Cannot assign requested address: JVM_Bind

I am trying to set up a JUnit test for a Spring Boot app with embedded Mongo & Kafka:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
        classes = {AccountingApplication.class})
@DataMongoTest
public class BaseEmbeddedTest {

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true);

    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    public void emptyTest() {
    }
}
src/test/resources/application.yml :-
spring:
  data:
    mongodb:
      port: 0
  kafka:
    bootstrap-servers: ${spring.embedded.kafka.brokers}
PROBLEM
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [de.flapdoodle.embed.mongo.config.IMongodConfig]: Factory method 'embeddedMongoConfiguration' threw exception; nested exception is java.net.BindException: Cannot assign requested address: JVM_Bind
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 140 more
Caused by: java.net.BindException: Cannot assign requested address: JVM_Bind
at java.net.DualStackPlainSocketImpl.bind0(Native Method)
at java.net.DualStackPlainSocketImpl.socketBind(DualStackPlainSocketImpl.java:106)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:190)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at de.flapdoodle.embed.process.runtime.Network.getFreeServerPort(Network.java:80)
at org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration.embeddedMongoConfiguration(EmbeddedMongoAutoConfiguration.java:147)
What am I doing wrong here ?
Version:-
dependencyManagementPluginVersion = '1.0.3.RELEASE'
springBootVersion = '1.5.6.RELEASE'
springCloudVersion = 'Dalston.SR2'
projectVersion = '0.0.1-SNAPSHOT'
javaVersion = 1.8
kotlinVersion = '1.1.4'
The @DataMongoTest annotation causes Spring Boot to create an embedded Mongo instance. The exception message tells us that the embedded Mongo instance cannot start because there is already a process running on the port it is trying to use.
The embedded Mongo instance is configured by EmbeddedMongoAutoConfiguration and the strategy applied by Spring Boot - for port allocation - is as follows:
if configured Mongo port > 0 then
    use the configured port
else
    assign a random port
end
So I suspect that your test context is configured with a non-zero value for spring.data.mongodb.port. I know you posted your application.yml, which implies that you are - correctly - assigning a zero value to spring.data.mongodb.port, but if you put a breakpoint inside the EmbeddedMongoAutoConfiguration constructor and peek inside the properties parameter, I think you'll see that the actual value in use by that configuration class is not zero. If the port value passed to EmbeddedMongoAutoConfiguration is actually zero and you are still getting the JVM_Bind error, that would imply that the call Network.getFreeServerPort(this.getHost()) is not returning a free port, which seems unlikely.
In order to fix this issue: as long as you configure your test context with spring.data.mongodb.port=0 then the embedded Mongo instance will be assigned a random port and this random port will be made known to other aspects of your Spring context (such as your MongoTemplate) which need to talk to that Mongo instance.
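As a quick way to verify this, here is a sketch (the test class name is illustrative, and @TestPropertySource is not part of the original post) that pins the port explicitly on the test context, so no other property source can override it:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.data.mongo.DataMongoTest;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.test.context.TestPropertySource;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@DataMongoTest
// spring.data.mongodb.port=0 means "pick a random free port" for the embedded Mongo
@TestPropertySource(properties = "spring.data.mongodb.port=0")
public class EmbeddedMongoPortTest {

    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    public void contextLoads() {
    }
}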
