Spring Boot app not starting when Kafka is not up - spring-boot

I have a Spring Boot app that has a Kafka consumer and producer in it. There's also a bean to create a topic.
e.g.
@KafkaListener(topics = "myTopic")
public void doSomething() {
    // do something on receipt of the message
}

@Bean
public NewTopic topic() {
    return TopicBuilder.name("myTopic")
            .partitions(2)
            .build();
}
Both my Spring Boot app and Kafka start up in Docker in Kubernetes. Sometimes the Spring Boot app starts before the Kafka pod is up and therefore fails to start, as the consumer cannot connect (see stack trace).
Is there a way for my application to start up in a resilient manner? For example, the consumer should cope with Kafka not being available at startup or while the app is running.
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:827)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:629)
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:207)
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:193)
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:167)
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:141)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:607)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:329)
    at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:340)
    at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:176)
    at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:340)
    at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:312)
    at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:257)
    at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
    ... 59 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:88)
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:735)

You can set autoStartup = "false" on the listener and start it yourself using the KafkaListenerEndpointRegistry (give the listener an id so you can get a reference to its container from the registry).
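For example, the listener could be declared like this (the id value and String payload are illustrative, not taken from the question):
@KafkaListener(id = "myTopicListener", topics = "myTopic", autoStartup = "false")
public void doSomething(String message) {
    // do something on receipt of the message
}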
If the broker is not available, the KafkaAdmin won't create the topic; you will also need to call KafkaAdmin.initialize():
/**
 * Call this method to check/add topics; this might be needed if the broker was not
 * available when the application context was initialized, and
 * {@link #setFatalIfBrokerNotAvailable(boolean) fatalIfBrokerNotAvailable} is false,
 * or {@link #setAutoCreate(boolean) autoCreate} was set to false.
 * @return true if successful.
 * @see #setFatalIfBrokerNotAvailable(boolean)
 * @see #setAutoCreate(boolean)
 */
public final boolean initialize() {
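A minimal sketch that puts both pieces together; it assumes fatalIfBrokerNotAvailable is left false (the default), so initialize() returns false rather than throwing while the broker is down. The listener id and retry interval are illustrative, not from the original question.
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.stereotype.Component;

@Component
public class KafkaStartupManager implements ApplicationRunner {

    private final KafkaAdmin kafkaAdmin;
    private final KafkaListenerEndpointRegistry registry;

    public KafkaStartupManager(KafkaAdmin kafkaAdmin, KafkaListenerEndpointRegistry registry) {
        this.kafkaAdmin = kafkaAdmin;
        this.registry = registry;
    }

    @Override
    public void run(ApplicationArguments args) {
        // Retry in a separate thread so application startup is not blocked while Kafka is down.
        new Thread(() -> {
            while (!kafkaAdmin.initialize()) { // checks/creates the NewTopic beans once the broker is reachable
                try {
                    Thread.sleep(5_000); // hypothetical retry interval
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            // Start the listener container that was left stopped because autoStartup = "false".
            registry.getListenerContainer("myTopicListener").start();
        }, "kafka-startup").start();
    }
}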

Related

Pass multiple queue names to Spring JMSListener

How can I pass multiple queue names to my JmsListener?
@JmsListener(destination = "#{'${solace.jms.queueNames}'.split(',')}")
In my property file:
solace.jms.queueNames=q1,q2,q3
But when I start the Spring Boot app I get the below error:
.s.j.l.DefaultMessageListenerContainer : Setup of JMS message listener invoker failed for destination '[Ljava.lang.String;@1b30a54e' - trying to recover. Cause: Error creating consumer - internal error (Queue name "[Ljava.lang.String;@1b30a54e" contains illegal character [;])
How to resolve it?
You can define multiple @JmsListener annotations (it is a repeatable annotation) on one listener method, one per queue, e.g. (method name is illustrative):
@JmsListener(destination = "${solace.jms.queueNames[0]}")
@JmsListener(destination = "${solace.jms.queueNames[1]}")
@JmsListener(destination = "${solace.jms.queueNames[2]}")
public void receiveMessage(String message) {
    // handle messages from any of the configured queues
}
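Note that indexed placeholders such as ${solace.jms.queueNames[0]} resolve against property keys that literally contain the index, so this approach assumes the property file is written in list form rather than as a single comma-separated value (a sketch of the assumed layout, not from the original question):
solace.jms.queueNames[0]=q1
solace.jms.queueNames[1]=q2
solace.jms.queueNames[2]=q3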

Spring Cloud Stream Kafka Binder KafkaTransactionManager results in a cycle in application context

I am setting up a basic Spring Cloud Stream producer with Kafka. The intent is to accept an HTTP POST, save the result of the post to a database with Spring Data JPA, and write the result to a Kafka topic using the Spring Cloud Stream Kafka Binder. I am following the latest binder documentation on how to set up a KafkaTransactionManager, but this code results in an error on application startup.
***************************
APPLICATION FAILED TO START
***************************
Description:
The dependencies of some of the beans in the application context form a cycle:
┌─────┐
| kafkaTransactionManager defined in com.example.tx.Application
↑ ↓
| org.springframework.boot.autoconfigure.kafka.KafkaAnnotationDrivenConfiguration
└─────┘
I have the following bean defined in my Application class, which is the same as in the documentation.
@Bean
public KafkaTransactionManager kafkaTransactionManager(BinderFactory binders) {
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null, MessageChannel.class))
            .getTransactionalProducerFactory();
    KafkaTransactionManager tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix("tx-test");
    return tm;
}
It seems that calling getBinder causes Spring to create the context again. How can I resolve this circular dependency?
Dependencies: Spring Boot parent 2.4.6; Spring Cloud BOM 2020.0.3
Something must have changed in one of the layers; here is a workaround:
@Bean
SmartInitializingSingleton ktmProvider(BinderFactory binders, GenericApplicationContext context) {
    return () -> {
        context.registerBean("kafkaTransactionManager", KafkaTransactionManager.class,
                ((KafkaMessageChannelBinder) binders.getBinder(null, MessageChannel.class))
                        .getTransactionalProducerFactory());
        context.getBean(KafkaTransactionManager.class).setTransactionIdPrefix("tx-test");
    };
}
That is, wait for the other beans to be created before registering and configuring the transaction manager.

RabbitHandler to create consumer and retry on Fatal Exception in Spring for queue on listening to RabbitMQ

I am using Spring AMQP RabbitHandler and have written the following code:
@RabbitListener(queues = "#{testQueue.name}")
public class Tut4Receiver {

    @RabbitHandler
    public void receiveMessage(String message) {
        System.out.println("Message received " + message);
    }
}
The Queue is defined like:
@Bean
public Queue testQueue() {
    return new AnonymousQueue();
}
I am using separate code to initialize the Connection Factory.
My question: if RabbitMQ is down for some time, the container keeps retrying to create a consumer, but only when it receives a ConnectionRefused error. Suppose, however, that the user does not yet exist in RabbitMQ and will only be created after a delay; the container then receives a fatal error from RabbitMQ and never retries, with the result that the auto-delete queue is created on RabbitMQ without any consumers.
Stack Trace:
SimpleMessageListenerContainer] [SimpleAsyncTaskExecutor-11] [|] [|||] Consumer received fatal exception on startup
org.springframework.amqp.rabbit.listener.exception.FatalListenerStartupException: Authentication failure
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:476)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1280)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:65)
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:309)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:547)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils$1.createConnection(ConnectionFactoryUtils.java:90)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.doGetTransactionalResourceHolder(ConnectionFactoryUtils.java:140)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactoryUtils.java:76)
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:472)
... 2 common frames omitted
Caused by: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:339)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:813)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:767)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:887)
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:300)
SimpleMessageListenerContainer] [SimpleAsyncTaskExecutor-11] [|] [|||] Stopping container from aborted consumer
[|] [|||] Waiting for workers to finish.
[|] [|||] Successfully waited for workers to finish.
Any way to retry even on fatal exceptions like when the user does not exist?
Authentication failures are considered fatal by default and not retried.
You can override this behavior by setting a property on the listener container (possibleAuthenticationFailureFatal). The property is not available as a boot property so you have to override boot's container factory...
@Bean(name = "rabbitListenerContainerFactory")
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setContainerConfigurer(smlc -> smlc.setPossibleAuthenticationFailureFatal(false));
    return factory;
}

Failed to start bean 'org.springframework.amqp.rabbit.config.internalRabbitListenerEndpointRegistry'

I have a simple Spring Boot application with a Rabbit sender and a receiver. I want to write some receiver tests where I run a RabbitMQ Docker instance as a JUnit class rule (RabbitContainerRule) and then send a message using RabbitTemplate, and the test verifies that the receiver receives the same message. But I am getting the following exception:
Caused by: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.amqp.rabbit.config.internalRabbitListenerEndpointRegistry'; nested exception is org.springframework.amqp.AmqpIllegalStateException: Fatal exception on listener startup
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178)
Caused by: org.springframework.amqp.rabbit.listener.QueuesNotAvailableException: Cannot prepare queue for listener. Either the queue doesn't exist or the broker will not allow us to use it.
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:599)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1424)
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'my-message-queue' in vhost '/', class-id=50, method-id=10)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:66)
If I create the queue manually (by stopping at a breakpoint) in the Docker instance using the admin console, my test passes.
Also, if I test it manually against the Docker Rabbit instance, my Spring Boot application creates the queue successfully. So what is causing it not to be created in the test?
I am using spring-amqp 1.7.4.RELEASE.
Receiver code:
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "my-message-queue", durable = "true",
                arguments = {
                        @Argument(name = "x-dead-letter-exchange", value = "my-message-exchange-dead-letter"),
                        @Argument(name = "x-dead-letter-routing-key", value = "my-message-queue")}),
        exchange = @Exchange(value = "my-message-exchange", type = "topic", durable = "true"),
        key = "my-message-rk")
)
public void handleMessage(MyMessage message) {
    MESSAGE_LOG.info("Receiving message: " + message);
}
Also, I am not creating any @Bean for my-message-queue in my configuration and rely on @RabbitListener to create it for me. But I am creating ConnectionFactory, RabbitTemplate and SimpleRabbitListenerContainerFactory beans in my config.
The @EnableRabbit annotation is necessary on some @Configuration class to let your application context parse @RabbitListener.
To let the application create queues, exchanges and the bindings between them automatically, a RabbitAdmin bean must be present in the configuration.
See the Reference Manual for more information: https://docs.spring.io/spring-amqp/docs/2.0.0.RELEASE/reference/html/
The class where you are building your queues should be annotated with @Configuration; otherwise, Spring will not be able to create the queues at startup.
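A minimal configuration sketch along those lines (the class name is illustrative; the ConnectionFactory is assumed to be defined elsewhere, as described in the question):
import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableRabbit
public class RabbitConfig {

    @Bean
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        // RabbitAdmin declares the queues, exchanges and bindings defined by @RabbitListener at startup
        return new RabbitAdmin(connectionFactory);
    }
}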

Spring Boot with Embedded Mongo : Cannot assign requested address: JVM_Bind

I am trying to set up a JUnit test for a Spring Boot app with embedded Mongo and Kafka:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
        classes = {AccountingApplication.class})
@DataMongoTest
public class BaseEmbeddedTest {

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true);

    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    public void emptyTest() {
    }
}
src/test/resources/application.yml:
spring:
  data:
    mongodb:
      port: 0
  kafka:
    bootstrap-servers: ${spring.embedded.kafka.brokers}
PROBLEM
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [de.flapdoodle.embed.mongo.config.IMongodConfig]: Factory method 'embeddedMongoConfiguration' threw exception; nested exception is java.net.BindException: Cannot assign requested address: JVM_Bind
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 140 more
Caused by: java.net.BindException: Cannot assign requested address: JVM_Bind
at java.net.DualStackPlainSocketImpl.bind0(Native Method)
at java.net.DualStackPlainSocketImpl.socketBind(DualStackPlainSocketImpl.java:106)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:190)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at de.flapdoodle.embed.process.runtime.Network.getFreeServerPort(Network.java:80)
at org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration.embeddedMongoConfiguration(EmbeddedMongoAutoConfiguration.java:147)
What am I doing wrong here ?
Version:-
dependencyManagementPluginVersion = '1.0.3.RELEASE'
springBootVersion = '1.5.6.RELEASE'
springCloudVersion = 'Dalston.SR2'
projectVersion = '0.0.1-SNAPSHOT'
javaVersion = 1.8
kotlinVersion = '1.1.4'
The @DataMongoTest annotation causes Spring Boot to create an embedded Mongo instance. The exception message tells us that the embedded Mongo instance cannot start because there is already a process running on the port it is trying to use.
The embedded Mongo instance is configured by EmbeddedMongoAutoConfiguration and the strategy applied by Spring Boot - for port allocation - is as follows:
if configured Mongo port > 0 then
use the configured port
else
assign a random port
end
So, I suspect that your test context is configured with a non-zero value for spring.data.mongodb.port. I know you posted your application.yml, which implies that you are - correctly - assigning a zero value to spring.data.mongodb.port, but if you put a breakpoint inside the EmbeddedMongoAutoConfiguration constructor and peek inside the properties parameter, I think you'll see that the actual value in use by that configuration class is not zero. If the port value passed to EmbeddedMongoAutoConfiguration is actually zero but you are still getting the JVM_Bind error, then that implies that the call Network.getFreeServerPort(this.getHost()) is not returning a free port, which seems unlikely.
In order to fix this issue: as long as you configure your test context with spring.data.mongodb.port=0 then the embedded Mongo instance will be assigned a random port and this random port will be made known to other aspects of your Spring context (such as your MongoTemplate) which need to talk to that Mongo instance.
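If in doubt, the zero port can also be forced directly on the test class so it takes precedence over other property sources. A sketch, assuming the same versions as in the question; the class name and the use of @TestPropertySource are illustrative, not from the original post:
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.data.mongo.DataMongoTest;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.test.context.TestPropertySource;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@DataMongoTest
@TestPropertySource(properties = "spring.data.mongodb.port=0") // takes precedence over application.yml
public class EmbeddedMongoPortTest {

    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    public void contextLoads() {
        // the embedded Mongo instance should have started on a random free port
    }
}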
