Spring Boot + Kafka: handle if Kafka is not available

I'm trying to push some non-critical data into Kafka, but I'd still like the application to run without Kafka. However, my Spring Boot app (with spring-kafka 2.7.2) now won't start normally and is stuck in a loop of:
[AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
Basically this blocks all HTTP requests to my app, because I guess the startup process cannot complete until it connects to Kafka. After maybe a minute I get the "Tomcat started xx" message, so now the application runs, I guess?
Except when anything now calls a Kafka-related service, it again goes into a loop of:
org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
This again goes on for 60 seconds, then it stops, and I get an internal server error.
I'd like to simply treat Kafka as an optional service: even though in production it will be up constantly, that might not be true for testing or development servers, developer machines, etc. Basically an "if it is running, push this data to this topic; if it's not running, fine" situation.
Is there a way to configure Spring with Kafka so it does not block everything like this? Ideally this should all run in a background thread anyway; it is very odd that Kafka would block the main thread of a request.
Thanks

I have experienced the same in one of my projects, so I enabled/disabled the Kafka service class through @ConditionalOnExpression like below.
@Service
@ConditionalOnExpression(value = "'${kafka.enabled}'.equalsIgnoreCase('true')")
public class KafkaPosting {
    ....
}
Note that you need to add the property (kafka.enabled) to your application.properties file.
Moreover, if you are autowiring the Kafka bean into your service class, don't forget to make the injection optional:
@Autowired(required = false)
KafkaClient kafkaClient;
This bypasses Kafka based on the configured property.
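Complementing that, here is a minimal sketch of keeping the send itself from blocking the request thread; the class name is illustrative, and it assumes spring-kafka 2.7 with the producer's max.block.ms capped (e.g. spring.kafka.producer.properties.max.block.ms=2000 in application.properties):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OptionalKafkaPublisher {

    private static final Logger log = LoggerFactory.getLogger(OptionalKafkaPublisher.class);

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OptionalKafkaPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(String topic, String payload) {
        // send() returns a ListenableFuture; failures are logged instead of
        // propagated, so an unreachable broker cannot fail the HTTP request.
        kafkaTemplate.send(topic, payload).addCallback(
                result -> log.debug("Delivered to {}", topic),
                ex -> log.warn("Kafka unavailable, dropping message: {}", ex.getMessage()));
    }
}

With this, a missing broker costs at most max.block.ms per send and the failure is only logged, which matches the "if it's not running, fine" requirement.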

Related

Apache Camel - IDLE AMQP1.0 Connection

I have Apache Camel with a Spring application. The application acts as a bridge between two AMQP destinations: it consumes messages from one broker and publishes them to the other. Communication is done both ways over the AMQP 1.0 protocol.
Problem
I am facing an idle connection issue. After a few days of operation, the consumers stop receiving messages unless the application is restarted. Moreover, I am not able to get any ERROR logs. The issue goes away after a restart of the application.
My expectation is that, similar to Spring JMS, Apache Camel should retry connecting the consumers. Kindly guide me if I need to configure something in Camel to perform reconnection attempts and do proper logging.
Camel Route Configuration
cmlCntxt.addRoutes(new RouteBuilder() {
    public void configure() {
        from("incomingOne:queue:" + inQueueOne)
            .to("outGoingBroker:queue:" + outQueueOne)
            .transform(body().append("\n\n"));
        from("inQueueTwo:queue:" + inQueueTwo)
            .to("outGoingBroker:" + outQueueTwo)
            .transform(body().append("\n\n"));
    }
});
Moreover, I do not have control of the brokers at either end and am unable to check why my consumers are not receiving messages. That is why I expect Camel's ERROR logs to be informative enough for me to debug whether the issue is connectivity or something else.
Try configuring the jms.requestTimeout property on your remote URI. By default, the requestTimeout is indefinite, so in case of any issue it can get stuck forever.
Also try using failover to connect to the broker, and enable debug logging in the application.
If you are still facing the issue, kindly edit the question with broker details.
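To illustrate both suggestions, here is a sketch assuming the Qpid JMS client behind Camel's AMQP component; the broker hosts, port, timeout value, and component name are placeholders:

import org.apache.camel.CamelContext;
import org.apache.camel.component.amqp.AMQPComponent;

public final class AmqpComponentSetup {

    public static void register(CamelContext cmlCntxt) {
        // failover:(...) makes the client reconnect on its own, and
        // jms.requestTimeout bounds requests that are otherwise indefinite.
        String uri = "failover:(amqp://broker1:5672,amqp://broker2:5672)"
                + "?jms.requestTimeout=30000";
        cmlCntxt.addComponent("incomingOne", AMQPComponent.amqpComponent(uri));
    }
}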

Kafka Cannot Configure Topics on Application Startup, but Later Can Communicate

We have a spring boot application using spring-kafka (2.2.5.RELEASE) that always gets this error when starting up:
Could not configure topics
org.springframework.kafka.KafkaException: Timed out waiting to get existing
topics; nested exception is java.util.concurrent.TimeoutException
However, the application continues to startup:
org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1]
INFO o.s.k.l.KafkaMessageListenerContainer - partitions revoked: []
INFO o.s.k.l.KafkaMessageListenerContainer - partitions assigned: [my-reply-topic-1]
INFO o.s.k.l.KafkaMessageListenerContainer - partitions assigned: [my-request-topic-0]
INFO o.s.b.w.e.tomcat.TomcatWebServer -
Tomcat started on port(s): 8080 (http) with context path ''
At this point, the application interacts with Kafka as expected.
We like to keep our logs clean, so we would like to understand why this exception is thrown. It is also a bit confusing because, when we move to a different environment where networking has not been established between the application and the Kafka broker(s), we get the same error, but there the application does not function. Having the same exception occur both when there is truly a problem and when it can be ignored is irksome when trying to troubleshoot connectivity issues.
Is there a way, on application startup, to determine whether connectivity has been established with Kafka rather than just waiting for a timeout message (which may be a red herring anyway)?
If the topic(s) already exist, remove any NewTopic beans from the application context and the KafkaAdmin won't try to connect to the broker at all.
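For environments where that is conditional, a sketch of gating the NewTopic bean behind a property (the property name app.kafka.create-topics and the topic settings are made up for illustration):

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TopicConfig {

    // Without any NewTopic beans in the context, KafkaAdmin has nothing to
    // do at startup and never contacts the broker.
    @Bean
    @ConditionalOnProperty(name = "app.kafka.create-topics", havingValue = "true")
    public NewTopic myRequestTopic() {
        return new NewTopic("my-request-topic", 1, (short) 1);
    }
}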

Managing JMS Message Containers on Application Startup and Shutdown

Currently, we have four JMS listener containers that are started during application start. They all connect through Apache ZooKeeper and are started manually. This becomes problematic when a connection to ZooKeeper cannot be established: the (Wicket) application cannot start, even though the JMS listeners do not need to be active for the application to be used. They simply need to listen for messages in the background and save them, and a cron job will process them in batches.
Goals:
1. Allow the application to start without being prevented by message containers that cannot connect.
2. After the application starts, start the message listeners.
3. If the connection to one or any of the message listeners goes down, it should attempt to reconnect automatically.
4. On application shutdown (such as Tomcat being shut down), the application should stop the message listeners and the cron job that processes the saved messages.
5. Make all of this testable (as in, be able to write integration tests for this setup).
Current Setup:
Spring Boot 1.5.6
Apache ZooKeeper 3.4.6
Apache ActiveMQ 5.7
Wicket 7.7.0
Work done so far:
Defined a class that implements ApplicationListener<ApplicationReadyEvent>.
Set the autoStartup property of each DefaultMessageListenerContainer to false and started each container in onApplicationEvent in a separate thread.
Questions:
Is it necessary to start each message container in its own thread? This seems to be overkill, but the way the "start" process works is that the DefaultMessageListenerContainer is built for that listener and then started. There is a UI component that a user can use to start/stop the message listeners if need be, and if they are started sequentially in one thread, the latter three message containers could still be null while the first one is waiting to connect on startup.
How do I accomplish goals 4 and 5?
Of course, any comments on whether I am on the right track would be helpful.
If you do not start them in a custom thread, the whole application cannot fully start. It is not just Wicket: the servlet container won't change the application state from STARTING to STARTED because of the blocking request to ZooKeeper.
Another option is to use a non-blocking request to ZooKeeper, but this is done by the JMS client (ActiveMQ), so you need to check whether it is supported in their docs (both ActiveMQ's and ZooKeeper's). I haven't used those in several years, so I cannot help you more.
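A sketch of the start/stop wiring for goals 2 and 4, assuming the containers are Spring beans with autoStartup=false (class and bean names are illustrative):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class JmsListenerLifecycle {

    private final List<DefaultMessageListenerContainer> containers;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public JmsListenerLifecycle(List<DefaultMessageListenerContainer> containers) {
        this.containers = containers;
    }

    // Runs once the application is ready, off the main thread, so a blocked
    // ZooKeeper connection cannot hold up startup (goals 1 and 2).
    @EventListener(ApplicationReadyEvent.class)
    public void startListeners() {
        executor.submit(() -> containers.forEach(DefaultMessageListenerContainer::start));
    }

    // Stops the listeners when the context shuts down, e.g. on Tomcat
    // shutdown (goal 4).
    @EventListener(ContextClosedEvent.class)
    public void stopListeners() {
        containers.forEach(DefaultMessageListenerContainer::stop);
        executor.shutdownNow();
    }
}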

Messages published to all consumers with the same consumer group in a spring-cloud-stream project

I have my ZooKeeper and 3 Kafka brokers running locally.
I started one producer and one consumer, and I can see the consumer consuming messages.
I then started three consumers with the same consumer group name (on different ports, since it's a Spring Boot project). What I found is that all the consumers are now consuming (receiving) the messages. But I expect the messages to be load-balanced, i.e. not repeated across the consumers. I don't know what the problem is.
Here is my property file:
spring.cloud.stream.bindings.input.destination=timerTopicLocal
spring.cloud.stream.kafka.binder.zkNodes=localhost
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.bindings.input.group=timerGroup
Here the group is timerGroup.
consumer code : https://github.com/codecentric/edmp-sample-stream-sink
producer code : https://github.com/codecentric/edmp-sample-stream-source
Can you please update the dependencies to Camden.RELEASE (and start using Kafka 0.9+)? In Brixton.RELEASE, Kafka consumers were 0.8-based and required passing instanceIndex/instanceCount as properties in order to distribute partitions correctly.
In Camden.RELEASE we are using the Kafka 0.9+ consumer client, which does load balancing in the way you are expecting (we also support static partition allocation via instanceIndex/instanceCount, but I suspect this is not what you want). I can go into more detail on how to configure this with Brixton, but I guess an upgrade should be a much easier path.
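For reference, if you do stay on Brixton, the static allocation mentioned above is set with per-instance properties along these lines (a sketch; each instance needs its own index):

# Brixton-era static partition allocation: three instances, each with a
# distinct index (0, 1 and 2 respectively).
spring.cloud.stream.instanceCount=3
spring.cloud.stream.instanceIndex=0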

Webapp hangs when ActiveMQ broker is running

I have a strange problem with my Spring webapp (running on a local Jetty), which connects to a locally running ActiveMQ broker for JMS functionality.
As soon as I start the broker, the application becomes incredibly slow; e.g. the startup of the ApplicationContext with an active broker takes forever (> 10 min, I did not yet wait long enough for it to complete). If I start the broker after the webapp (i.e. after the ApplicationContext was loaded), it runs, but in a very, very slow way (requests which usually take < 1 s take > 30 s). All operations take longer, even the ones without JMS involved. When I run the application without an ActiveMQ broker, everything runs smoothly (except the JMS-related stuff, of course ;-) )
Here's what I tried so far:
Updated the ActiveMQ version to 5.10.1
Used a standalone ActiveMQ instead of the Maven plugin
Moved the broker from a separate JVM (via the ActiveMQ Maven plugin, connection via JNDI lookup in the Jetty config) into the same JVM (started via Spring config, without JNDI)
Changed the ActiveMQ transport from tcp to vm
Tried several ActiveMQ settings (alwaysSyncSend, alwaysSessionAsync, producerWindowSize)
Tried CachingConnectionFactory and PooledConnectionFactory
When analyzing a thread dump (jstack), I see many ActiveMQ threads sleeping on a monitor, which looks like this:
"ActiveMQ VMTransport: vm://localhost#0-3" daemon prio=6 tid=0x000000000b1a3000 nid=0x1840 waiting on condition [0x00000000177df000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000f786d670> (a java.util.concurrent.SynchronousQueue$TransferStack)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:955)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
at java.lang.Thread.run(Thread.java:662)
Any help is greatly appreciated!
I found the cause of the issue and was able to fix it:
We were passing a transaction manager to the AbstractMessageListenerContainer. While in production an XA transaction manager is in use, in the local Jetty environment only a JpaTransactionManager is used. Apparently JMS was waiting forever for an XA transaction to be committed, which never happens in the local environment.
By overriding the bean definition of the AbstractMessageListenerContainer for the local environment, setting no transaction manager but using sessionTransacted="true" instead, everything works fine.
I got the idea that it might be related to transaction handling from enabling ActiveMQ logging. With it I saw that something was wrong with the transaction (transactionContext.getTransactionId() returned null).
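A sketch of what that local override might look like, assuming a DefaultMessageListenerContainer; the profile name, destination, and bean wiring are illustrative:

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
@Profile("local")
public class LocalJmsConfig {

    // Local override: no transactionManager is set; the JMS session itself
    // is transacted, so nothing waits on an XA commit that never arrives.
    @Bean
    public DefaultMessageListenerContainer listenerContainer(
            ConnectionFactory connectionFactory, MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("some.queue"); // placeholder
        container.setMessageListener(listener);
        container.setSessionTransacted(true);
        return container;
    }
}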
