Redis messages are not consumed by Spring Boot application after Redis restart - spring-boot

I first started the Redis server and then started the application. As soon as messages appear in the Redis stream, they are consumed properly. However, if the Redis server is restarted while the application is already running, the application does not consume new messages from the Redis server.
Could someone please assist with this?
Is there any additional configuration needed to continue processing messages after a Redis server restart?
I'm using StreamMessageListenerContainer with a consumer group.
@Bean
public Subscription listener(RedisStreamConsumer streamListener, RedisConnectionFactory redisConnectionFactory) throws InterruptedException {
    StreamMessageListenerContainer<String, MapRecord<String, Object, String>> listenerContainer =
            StreamMessageListenerContainer.create(redisTemplate().getConnectionFactory(),
                    StreamMessageListenerContainer.StreamMessageListenerContainerOptions.builder()
                            .hashKeySerializer(new StringRedisSerializer())
                            .hashValueSerializer(new StringRedisSerializer())
                            .pollTimeout(Duration.ofMillis(100))
                            .build());
    Subscription subscription = listenerContainer.receive(Consumer.from(groupName, consumerName),
            StreamOffset.create(consumerstreamName, ReadOffset.lastConsumed()), streamListener);
    subscription.await(Duration.ofSeconds(2));
    listenerContainer.start();
    return subscription;
}
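One thing worth checking (a sketch, not a confirmed fix): with the receive(...) variant above, the container cancels the subscription by default when a poll fails, for example while the Redis server is restarting. Registering the same listener via a StreamReadRequest with cancelOnError lets the container keep polling; the variable names below are the ones from the bean above, and the error handler body is only illustrative.
// Replaces the listenerContainer.receive(...) call above; keeps polling after poll errors.
StreamMessageListenerContainer.ConsumerStreamReadRequest<String> readRequest =
        StreamMessageListenerContainer.StreamReadRequest
                .builder(StreamOffset.create(consumerstreamName, ReadOffset.lastConsumed()))
                .consumer(Consumer.from(groupName, consumerName))
                .cancelOnError(t -> false)                                          // do not cancel the subscription on errors
                .errorHandler(t -> System.err.println("Stream poll failed: " + t))  // illustrative only
                .build();
Subscription subscription = listenerContainer.register(readRequest, streamListener);
Note also that if Redis comes back without its persisted data, the stream and consumer group are gone, so the group may need to be re-created (XGROUP CREATE) before reads succeed again.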

Related

Spring Boot & RabbitMQ: How to detect when the application has re-connected to the broker

I have a spring boot application acting as a producer and sending messages via RabbitMQ.
In case the broker is down, is there some kind of listener that will detect when the broker is up again in order to retry sending the failed messages?
Thanks!
Not if you are only producing messages. You could add a @Scheduled method (and @EnableScheduling) that attempts to create a connection every so often and you can add a ConnectionListener to the connection factory which will be called when the new connection is opened.
Since the connection is shared, it won't hurt anything having this scheduled task running when the connection is already open...
@Scheduled(fixedDelay = 10000)
public void tryConn() {
    try {
        this.cachingConnectionFactory.createConnection().close();
    }
    catch (Exception e) {
    }
}
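And a sketch of the ConnectionListener mentioned above (resendFailedMessages() is a hypothetical method that replays whatever the producer buffered while the broker was down):
this.cachingConnectionFactory.addConnectionListener(new ConnectionListener() {

    @Override
    public void onCreate(Connection connection) {
        // called when a new connection is opened, e.g. after the broker comes back up
        resendFailedMessages();
    }

    @Override
    public void onClose(Connection connection) {
        // no-op
    }
});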
On the consumer side, the listener container will keep retrying.

Spring Data RabbitMQ @RabbitListener suddenly stopped consuming enqueued messages

I am using Spring Boot 2.1.1.RELEASE and spring-boot-starter-amqp. The @RabbitListener stopped consuming messages although it was working fine; when I restarted the consumer API, it started working fine again.
Also, the management UI shows no consumers on this queue.
Java 1.8
RabbitMQ 3.7.11 cluster (3 nodes)
Kubernetes
No exceptions on the Java client side or the RabbitMQ server side.
Heartbeats and keepalive with default values
I tried to re-synchronise the queue via rabbitmqctl, but it is still not working.
@Component
public class Receiver {

    Logger logger = Logger.getLogger(Receiver.class);

    @RabbitListener(queues = "test")
    public void recievedMessage(String msg) {
        logger.info("Recieved Message: " + msg);
    }
}
The most common cause, by far, for problems like this is that the container thread is "stuck" somewhere in user code - either in the listener, or code called by the listener; e.g. a deadlock.
First step is to take a thread dump the next time it happens to see what the listener container thread(s) are doing.
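For reference, a thread dump can be taken externally with jstack, or from inside the JVM with the standard ThreadMXBean API; a minimal sketch (note that ThreadInfo.toString() truncates deep stack traces, so jstack is usually preferable):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public final class ThreadDumper {

    // Print all thread stacks; look for the listener container threads
    // (their names usually contain the listener id or "container").
    public static void dump() {
        for (ThreadInfo info : ManagementFactory.getThreadMXBean().dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}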

Connection refused with Redis master/slave and Spring Boot

I have a master-slave setup on ports 3824 (master) and 3825 (slave). However, when I shut down the master, the read operation gives a connection refused exception. Below is my configuration. How can I ensure that even if I kill the master, I'm still reading from the slave? Where did I go wrong?
@Bean
public RedisConnectionFactory redisFactory() {
    LettuceClientConfiguration config = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.SLAVE_PREFERRED)
            .build();
    RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("localhost", 3825);
    LettuceConnectionFactory fact = new LettuceConnectionFactory(serverConfig, config);
    return fact;
}
To support high availability within your application, you might need to use Redis Sentinel.
Redis Sentinel, when passed to RedisSentinelConfiguration, acts as a bridge between your application and the Redis master-slave nodes running as a group of servers.
It will mainly act as a configuration provider. If a failover occurs, Sentinels will report the new address.
Spring Data Redis Sentinel Support:
/**
 * Lettuce
 */
@Bean
public RedisConnectionFactory lettuceConnectionFactory() {
    RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
            .master("mymaster")
            .sentinel("127.0.0.1", 26379)
            .sentinel("127.0.0.1", 26380);
    return new LettuceConnectionFactory(sentinelConfig);
}
Upon a master failure, when a slave is promoted to master, all write requests will be routed to the newly elected master.
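To keep the read preference from the original configuration, the sentinel setup can also be combined with a LettuceClientConfiguration; a sketch (ReadFrom.REPLICA_PREFERRED assumes a recent Lettuce version, older ones use ReadFrom.SLAVE_PREFERRED):
@Bean
public RedisConnectionFactory lettuceConnectionFactory() {
    RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
            .master("mymaster")
            .sentinel("127.0.0.1", 26379)
            .sentinel("127.0.0.1", 26380);
    // Route reads to replicas when one is available, falling back to the master.
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.REPLICA_PREFERRED)
            .build();
    return new LettuceConnectionFactory(sentinelConfig, clientConfig);
}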

Kafka, Spring Kafka and redelivering old messages

I use Kafka and Spring Boot with Spring Kafka. After abnormal application termination and then restart, my application started receiving old, already processed messages from the Kafka topic.
What may be the reason for this, and how can I find and resolve the issue?
My Kafka properties:
spring.kafka.bootstrap-servers=${kafka.host}:${kafka.port}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.group-id=postfenix
spring.kafka.consumer.enable-auto-commit=false
My Spring Kafka factory and listener:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Post> postKafkaListenerContainerFactory(KafkaProperties kafkaProperties) {
    ConcurrentKafkaListenerContainerFactory<String, Post> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    factory.setConsumerFactory(postConsumerFactory(kafkaProperties));
    return factory;
}

@KafkaListener(topics = "${kafka.topic.post.send}", containerFactory = "postKafkaListenerContainerFactory")
public void sendPost(ConsumerRecord<String, Post> consumerRecord, Acknowledgment ack) {
    Post post = consumerRecord.value();
    // do some logic here
    ack.acknowledge();
}
When using Kafka, the clients need to commit offsets themselves. This is in contrast to other message brokers, such as AMQP brokers, where the broker keeps track of the messages a client has already received.
In your case, you do not commit offsets automatically and therefore Kafka expects you to commit them manually (because of this setting: spring.kafka.consumer.enable-auto-commit=false). If you do not commit offsets manually in your program, the behaviour you describe is pretty much the expected one: Kafka simply does not know which messages your program processed successfully. Each time you restart your program, Kafka sees that it has not committed any offsets yet and applies the strategy you provide in spring.kafka.consumer.auto-offset-reset=earliest, which means starting from the first message in the topic.
If this is all new to you, I suggest reading this Kafka documentation and this Spring documentation, because Kafka is quite different from other message brokers.
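As a concrete illustration of the above, a minimal sketch of a consumer factory with auto-commit disabled (the asker's postConsumerFactory is not shown, so the deserializers and the use of KafkaProperties here are assumptions):
@Bean
public ConsumerFactory<String, Post> postConsumerFactory(KafkaProperties kafkaProperties) {
    Map<String, Object> props = kafkaProperties.buildConsumerProperties();
    // With enable-auto-commit=false and AckMode.MANUAL, the listener container
    // commits the offset only when Acknowledgment.acknowledge() is called,
    // and those committed offsets are what survive a restart.
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(),
            new JsonDeserializer<>(Post.class));
}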

Losing JMS Messages with Spring JMS and ActiveMQ when application server is suddenly stopped

I have a Spring JMS application that has a JMS listener that connects to an ActiveMQ queue on application startup. This JMS listener is part of an application that takes a message, enriches it with content, and then delivers it to a topic on the same ActiveMQ broker.
The sessionTransacted property is set to true. I'm not performing any database transactions, so I do not have @Transactional set anywhere. From what I've read, the sessionTransacted property sets a local transaction around the JMS listener's receive method and therefore it will not pull the message off the queue until the transaction is complete. I've tested this using a local ActiveMQ instance and my local Tomcat container, and it worked as expected.
However, when I deploy to our PERF environment and rerun the same test, I notice that the message that was in flight when the server was shut down is pulled from the queue prior to completing the receive method.
What I would like to know is whether there is anything obvious that I should be looking for. Are there certain JMS headers that would cause this behaviour to occur? Please let me know if there is any more information that I can provide.
I'm using Spring 4.1.2.RELEASE with Apache ActiveMQ 5.8.0, on a Tomcat 7 container running Java 8.
UPDATE - Adding my Java JMS Configurations. Please note that I substituted what I had in my PERF properties file into the relevant areas for clarity.
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() throws Throwable {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setMaxMessagesPerTask(-1);
    factory.setConcurrency("1");
    factory.setSessionTransacted(Boolean.TRUE);
    return factory;
}

@Bean
public CachingConnectionFactory connectionFactory() {
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setInitialRedeliveryDelay(1000);
    redeliveryPolicy.setRedeliveryDelay(1000);
    redeliveryPolicy.setMaximumRedeliveries(6);
    redeliveryPolicy.setUseExponentialBackOff(Boolean.TRUE);
    redeliveryPolicy.setBackOffMultiplier(5);

    ActiveMQConnectionFactory activeMQ = new ActiveMQConnectionFactory(
            environment.getProperty("queue.username"),
            environment.getProperty("queue.password"),
            environment.getProperty("jms.broker.endpoint"));
    activeMQ.setRedeliveryPolicy(redeliveryPolicy);
    activeMQ.setPrefetchPolicy(prefetchPolicy());

    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(activeMQ);
    cachingConnectionFactory.setCacheConsumers(Boolean.FALSE);
    cachingConnectionFactory.setSessionCacheSize(1);
    return cachingConnectionFactory;
}

@Bean
public JmsMessagingTemplate jmsMessagingTemplate() {
    ActiveMQTopic activeMQ = new ActiveMQTopic(environment.getProperty("queue.out"));
    JmsMessagingTemplate jmsMessagingTemplate = new JmsMessagingTemplate(connectionFactory());
    jmsMessagingTemplate.setDefaultDestination(activeMQ);
    return jmsMessagingTemplate;
}

protected ActiveMQPrefetchPolicy prefetchPolicy() {
    ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
    int prefetchValue = 0;
    prefetchPolicy.setQueuePrefetch(prefetchValue);
    return prefetchPolicy;
}
Thanks,
Juan
It turns out that there were different versions of our application deployed in our PERF environment. Once the application was updated, it worked as expected.
