Quarkus Kafka: How to configure the number of retry attempts if we are not able to connect to the Kafka Broker?

I am working on a Quarkus application and intend to use Kafka to receive messages; however, I want to stop the application if it cannot reach the Kafka broker after retrying a certain number of times. The default behaviour is to retry the connection indefinitely. The SmallRye Reactive Messaging Kafka documentation says we can use kafka.retry-attempts or mp.messaging.incoming.[channel-name].retry-attempts to configure the number of retries. I have tried both, but the application still keeps retrying.
Has anyone faced a similar issue, or can someone help me with a resolution?
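For reference, the configuration I tried looks roughly like this in application.properties (the channel and topic names, and the attempt count, are placeholders):

mp.messaging.incoming.my-channel.connector=smallrye-kafka
mp.messaging.incoming.my-channel.topic=my-topic
# per-channel retry settings
mp.messaging.incoming.my-channel.retry=true
mp.messaging.incoming.my-channel.retry-attempts=5
# or the same setting applied globally to all channels
kafka.retry-attempts=5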

Related

RabbitMQ on Kubernetes: Unacked messages in queue

We are having an issue with RabbitMQ that happens when we deploy the application to production; we are not able to reproduce it in our development environment.
We have a microservices architecture with multiple Spring Boot applications deployed on Kubernetes, with an autoscaler that reacts to usage. We notice that after some time a number of unacked messages appear in a queue; the count of unacked messages keeps growing, and eventually RabbitMQ seems to stop working.
Is there something we can check in order to identify the problem?

How to restart a Kubernetes pod when the logs show a RabbitMQ connectivity issue

I have a Spring Boot 2 standalone application (not a REST service) which connects to RabbitMQ and processes messages. The application is deployed in Kubernetes. It works well, but when RabbitMQ stays down a little longer I see a 60-second heartbeat exception in the logs and eventually the connection is dropped, even if RabbitMQ comes back up after some time:
Automatic retry connection to broker by spring-rabbitmq
https://www.rabbitmq.com/heartbeats.html
I tried to manage the issue above by increasing the number of retries, as described in https://stackoverflow.com/questions/45385119/how-configure-timeouts-retries-or-max-attempts-in-differents-queues-with-spring,
but the issue still occurs once the retries are exhausted.
How can I reboot (or delete and recreate) the pod from Kubernetes when I see this issue in the logs?
The easiest way is to use Actuator, which exposes an /actuator/health endpoint. (Note that recent versions also add /actuator/health/liveness and /actuator/health/readiness.)
You can point the livenessProbe property of your Kubernetes deployment at that endpoint; the pod will then be restarted automatically when necessary. You can also parameterize what counts as "down" for your application if needed.
See the docs:
Kubernetes liveness probe
Spring actuator health
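As a minimal sketch (path, port, and timings are illustrative, not taken from the question), the probe in the deployment spec could look like this:

livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3

On Spring Boot 2.3+ the liveness/readiness health groups are enabled automatically when running on Kubernetes; elsewhere you may need to set management.endpoint.health.probes.enabled=true.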

Apache Camel - IDLE AMQP1.0 Connection

I have an Apache Camel with Spring application. The application acts as a bridge between two AMQP destinations: it consumes messages from one broker and publishes them to the other. Communication in both directions uses the AMQP 1.0 protocol.
Problem
I am facing an idle connection issue. After a few days of operation, the consumers stop receiving messages unless the application is restarted. Moreover, I do not get any ERROR logs. The issue goes away after a restart of the application.
My expectation is that, similar to Spring JMS, Apache Camel should retry connecting the consumers. Kindly guide me on whether I need to configure something in Camel to perform reconnection attempts and do proper logging.
Camel route configuration
cmlCntxt.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("incomingOne:queue:" + inQueueOne)
            .to("outGoingBroker:queue:" + outQueueOne)
            .transform(body().append("\n\n"));
        from("inQueueTwo:queue:" + inQueueTwo)
            .to("outGoingBroker:" + outQueueTwo)
            .transform(body().append("\n\n"));
    }
});
Moreover, I do not have control of the brokers at either end and am unable to check why my consumers are not receiving messages. That is why I am hoping Camel's ERROR logs can be informative enough for me to debug the issue, whether it is connectivity or something else.
Try configuring the jms.requestTimeout property on your remote URI. By default, the request timeout is indefinite, so in case of any issue the client may be stuck forever.
Also try using failover to connect to the broker, and enable debug logging in the application.
If you are still facing the issue, kindly edit the question with broker details.
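As a rough sketch of that suggestion, assuming the Qpid JMS client is used for the AMQP 1.0 connection (hosts, timeout, and attempt count are placeholders; option names follow the Qpid JMS configuration docs):

import org.apache.camel.component.amqp.AMQPComponent;
import org.apache.qpid.jms.JmsConnectionFactory;

// Failover URI with a bounded request timeout and a capped number of reconnect attempts.
String remoteUri = "failover:(amqp://broker1:5672,amqp://broker2:5672)"
        + "?jms.requestTimeout=30000&failover.maxReconnectAttempts=10";
JmsConnectionFactory connectionFactory = new JmsConnectionFactory(remoteUri);

// Register the factory on the AMQP component that backs the "incomingOne" endpoints.
AMQPComponent amqp = new AMQPComponent();
amqp.setConnectionFactory(connectionFactory);
cmlCntxt.addComponent("incomingOne", amqp);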

How should you handle the retry of sending a JMS message from your application to ActiveMQ if the ActiveMQ server is down?

So, using JMS and ActiveMQ, I can be sure that a message sent from my Spring Boot application using JmsTemplate will reach its destination application even if that destination application is down at the time I send the message to ActiveMQ; when the destination application starts up, it grabs the message from the queue. Great!
However.
What happens if my Spring Boot application tries to send a JMS message to a queue on the ActiveMQ server, but the ActiveMQ server is down at that point or the network is down and I get a connection refused exception?
What is the recommended way to make sure my application keeps trying to resend the message to ActiveMQ until it succeeds? Is this something I have to develop in my application myself? Are there any nifty Spring tools or annotations which do this for me? Any advice on best practice, or on how I should handle this scenario?
You can try Spring Retry. It has lots of fine-grained controls:
http://www.baeldung.com/spring-retry
https://github.com/spring-projects/spring-retry
If it is critical that you don't lose this message, you will want to save it to some alternative persistent store (e.g. filesystem, local mq server) along with whatever retry code you come up with. But for those occasional network glitches or a very temporary mq shutdown/restart, Spring-Retry alone should do the trick.
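For example, a minimal sketch with Spring Retry, assuming spring-retry is on the classpath and @EnableRetry is declared on a configuration class (queue name, attempt count, and backoff values are illustrative):

import org.springframework.jms.core.JmsTemplate;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class OrderSender {

    private final JmsTemplate jmsTemplate;

    public OrderSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Retry the send with exponential backoff if the broker is unreachable.
    @Retryable(maxAttempts = 5, backoff = @Backoff(delay = 2000, multiplier = 2))
    public void send(String payload) {
        jmsTemplate.convertAndSend("orders.queue", payload);
    }

    // Invoked once all attempts are exhausted; park the message somewhere safe instead of losing it.
    @Recover
    public void recover(Exception e, String payload) {
        // e.g. write the payload to a file or a local broker for later replay
    }
}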
A couple of approaches I can think of:
1. You can set up another ActiveMQ as a fallback. You don't have to change anything in your code, just change your broker URL from
activemq.broker.url=tcp://amq01.blah.blah.com:61616
to
activemq.broker.url=failover:(tcp://amq01.blah.blah.com:61616,tcp://amq02.blah.blah.com:61616)?randomize=false
The rest is taken care of automatically, i.e. when one of the brokers is down, the messages are sent to the other.
2. Another approach is to send to an internal queue (like a SEDA or direct endpoint) when ActiveMQ is down and read from there.
Adding failover to the URL is one appropriate way.
Another reasonable way is to make sure ActiveMQ is always online; ActiveMQ has a master-slave mode (http://activemq.apache.org/masterslave.html) for high availability.

Messages published to all consumers with the same consumer group in a spring-cloud-stream project

I have ZooKeeper and three Kafka brokers running locally.
I started one producer and one consumer, and I can see the consumer consuming messages.
I then started three consumers with the same consumer group name (on different ports, since it's a Spring Boot project), but what I found is that all the consumers are now receiving every message. I expected the messages to be load-balanced across the consumers, with no message delivered to more than one of them. I don't know what the problem is.
Here is my property file
spring.cloud.stream.bindings.input.destination=timerTopicLocal
spring.cloud.stream.kafka.binder.zkNodes=localhost
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.bindings.input.group=timerGroup
Here the group is timerGroup.
consumer code : https://github.com/codecentric/edmp-sample-stream-sink
producer code : https://github.com/codecentric/edmp-sample-stream-source
Can you please update your dependencies to Camden.RELEASE (and start using Kafka 0.9+)? In Brixton.RELEASE, the Kafka consumers were 0.8-based and required passing instanceIndex/instanceCount as properties in order to distribute partitions correctly.
In Camden.RELEASE we use the Kafka 0.9+ consumer client, which does load balancing the way you are expecting (we also support static partition allocation via instanceIndex/instanceCount, but I suspect that is not what you want). I can go into more detail on how to configure this with Brixton, but I think an upgrade would be a much easier path.
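For reference, the static partition allocation mentioned above is driven by properties along these lines (values are illustrative; each instance gets its own index):

spring.cloud.stream.instanceCount=3
spring.cloud.stream.instanceIndex=0
spring.cloud.stream.bindings.input.group=timerGroup

With Camden.RELEASE and the 0.9+ consumer, the group property alone is enough for partitions to be balanced across the consumers.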
