Can I define a retry mechanism for a RabbitMQ message producer in Spring Boot?

I have the following RabbitMQ setup in the application.yml of my Spring Boot app, which can consume (receive) messages:
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: admin
    password: password
    listener:
      simple:
        retry:
          enabled: true
          initial-interval: 3s
          max-interval: 10s
          multiplier: 2
          max-attempts: 3
I want to create a different Spring Boot app that only sends messages.
My questions:
Is it possible to define a retry setup for message sending?
If yes, is it configured the same way as in my example? I ask because the property is named listener:
spring.rabbitmq.listener...
Thank you!

See the Boot documentation about the template.retry properties:
spring.rabbitmq.template.retry.enabled (default: false)
Whether publishing retries are enabled.
spring.rabbitmq.template.retry.initial-interval (default: 1000ms)
Duration between the first and second attempt to deliver a message.
spring.rabbitmq.template.retry.max-attempts (default: 3)
Maximum number of attempts to deliver a message.
spring.rabbitmq.template.retry.max-interval (default: 10000ms)
Maximum duration between attempts.
spring.rabbitmq.template.retry.multiplier (default: 1.0)
Multiplier to apply to the previous retry interval.
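On the producer side these map one-to-one onto the listener retry block from the question. A sketch of the sending app's application.yml, simply reusing the question's values:

```yaml
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: admin
    password: password
    template:
      retry:
        enabled: true
        initial-interval: 3s
        max-interval: 10s
        multiplier: 2
        max-attempts: 3
```

Note that these retries cover failures to reach the broker (e.g. connection refused); they do not confirm that the broker actually routed the message, which is what publisher confirms are for.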

Related

Spring Cloud Stream + RabbitMQ - Consuming existing messages in queue

I have a RabbitMQ message broker running on a server, against which I'm trying to configure a producer and a consumer using Spring Cloud Stream. My producer creates messages in a queue every second, while my consumer reads them at the same rate. However, if I stop my consumer while the producer keeps pushing messages, the restarted consumer is unable to retrieve the messages created while it was down; it picks up only the messages produced after it was restarted. How can I make my consumer consume the existing messages in the queue when it starts?
Here are my Consumer properties:
cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        consumer:
          auto-bind-dlq: true
          republishToDlq: true
          maxAttempts: 5
And my Producer properties:
cloud:
  stream:
    bindings:
      output:
        destination: spring-cloud-stream-demo
Appreciate any help!
You need to add a group to the consumer (input) binding; otherwise it will bind an anonymous, auto-delete queue to the exchange.
With a group, a permanent, durable queue is bound instead.
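A minimal sketch of the consumer binding with a group added (the group name demo-group is made up for the example):

```yaml
cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        group: demo-group
        consumer:
          auto-bind-dlq: true
          republishToDlq: true
          maxAttempts: 5
```

With the group in place, the durable queue (named destination.group, here spring-cloud-stream-demo.demo-group) survives consumer restarts and buffers messages while the consumer is down.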

Spring Cloud Stream RabbitMQ - consumer retries 3 times for all exceptions

I am trying to stop the retries for my Spring Cloud Stream application completely, or to retry only for specific exceptions.
I have tried several ways to stop it from retrying 3 times. I set the following configuration, but it still retries:
spring.cloud.stream.bindings.input.consumer.max-attempts=1
spring.cloud.stream.rabbit.bindings.input.consumer.requeue-rejected=true
To retry only for specific exceptions I used the configuration below, but it didn't work out either; it still retries for exceptions I don't want it to:
cloud:
  stream:
    rabbit:
      bindings:
        input:
          consumer:
            autoBindDlq: true
            republishToDlq: true
            prefix: local-
            maxAttempts: 3
            backOffInitialInterval: 1000
            backOffMaxInterval: 10000
            backOffMultiplier: 2.0
            defaultRetryable: true
            retryableExceptions:
              com.ss.*: true
              java.lang.*: false
Any idea on how to make it not retry, or retry only for specific exceptions?
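For reference, the back-off properties above follow Spring Retry's exponential back-off: each wait is the previous wait times backOffMultiplier, capped at backOffMaxInterval, and maxAttempts counts total delivery attempts (so there are maxAttempts - 1 waits). A small self-contained sketch of that arithmetic, not Spring's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class BackoffSchedule {
    // Computes the wait times between redelivery attempts for an exponential
    // back-off: each interval is the previous one times the multiplier,
    // capped at maxInterval. maxAttempts total attempts means maxAttempts - 1 waits.
    static List<Long> intervals(long initialMs, double multiplier, long maxMs, int maxAttempts) {
        List<Long> waits = new ArrayList<>();
        long interval = initialMs;
        for (int attempt = 1; attempt < maxAttempts; attempt++) {
            waits.add(interval);
            interval = Math.min((long) (interval * multiplier), maxMs);
        }
        return waits;
    }

    public static void main(String[] args) {
        // Values from the configuration above: 1000 ms initial, x2.0, 10000 ms cap, 3 attempts
        System.out.println(intervals(1000, 2.0, 10000, 3)); // prints [1000, 2000]
    }
}
```

With the question's values the waits are 1000 ms and 2000 ms; a maxAttempts of 1 yields an empty schedule, i.e. no retries at all.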

Spring Boot Eureka - Faster offline detection

I am using Spring Boot with Eureka and it works really well. But for the past few hours I have been trying to detect offline Eureka instances/clients more quickly, and I found no good documentation about Eureka's configuration properties. I'm not even sure it's possible, because Eureka seems to presume that clients send their updates every 30 seconds.
I started by deactivating self-preservation mode, increasing the speed of renewals and interval updates, and lowering expiration durations, but my Eureka server still needs two minutes to discover its offline clients.
After changing the renewal percent threshold, the Eureka server never removed offline clients at all.
Is there any way to detect offline Eureka clients more quickly?
Server configuration:
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: false
    eviction-interval-timer-in-ms: 10000
    response-cache-update-interval-ms: 5000
Client configuration:
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8761/eureka}
    healthcheck:
      enabled: true
  instance:
    lease-renewal-interval-in-seconds: 5
    lease-expiration-duration-in-seconds: 15
Edit:
Even the health check URL is not called more often; it is still called every 30 seconds.
Did you check your Ribbon configuration? Ribbon can cache the server configuration upfront during startup and make use of it when the Eureka server is down. Please check whether Ribbon is enabled in the Eureka client app.
The correct option to change the schedule was:
eureka.client.instance-info-replication-interval-seconds
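In YAML form, for example (the 10-second value is only an illustration, not a recommendation):

```yaml
eureka:
  client:
    instance-info-replication-interval-seconds: 10
```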

Spring Cloud Streaming - Separate Connection for Producer & Consumer

I have a Spring Cloud Stream transformer application using RabbitMQ. It reads from a Rabbit queue, does some transformation, and writes to a Rabbit exchange. I have my application deployed to PCF and am binding to a Rabbit service.
This works fine, but now I need a separate connection for consuming and producing messages. (I want to read from the Rabbit queue using one connection, and write to the Rabbit exchange using a different one.) How would I configure this? Is it possible to bind my application to 2 different Rabbit services, using one as the producer and one as the consumer?
Well, starting with version 1.3 the Rabbit Binder indeed creates a separate ConnectionFactory for producers: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#_rabbitmq_binder
Starting with version 1.3, the RabbitMessageChannelBinder creates an internal ConnectionFactory copy for the non-transactional producers to avoid deadlocks on consumers when shared, cached connections are blocked because of a Memory Alarm on the broker.
So maybe that is already enough for you as is, after upgrading to Spring Cloud Stream Ditmars.
UPDATE
How would I go about configuring this internal ConnectionFactory copy with different connection properties?
No, that's a different story. What you need is called multi-binder support: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#multiple-binders
You should declare several blocks for different connection factories:
spring.cloud.stream.bindings.input.binder=rabbit1
spring.cloud.stream.bindings.output.binder=rabbit2
...
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: foo
          binder: rabbit1
        output:
          destination: bar
          binder: rabbit2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host1>
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host2>

Can I enrol one microservice at 2 different Eureka servers?

I don't know whether it is possible or not. I want to know whether I can enrol one microservice with two different Eureka servers at once.
I have one microservice, let's say API-GATEWAY.
I want to enrol it on two different Eureka registry servers, running on 8761 and 8762.
For that I wrote:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/ , http://localhost:8762/eureka/
    registry-fetch-interval-seconds: 1000
  instance:
    hostname: api-gateway
    prefer-ip-address: true
    lease-renewal-interval-in-seconds: 5000000
    lease-expiration-duration-in-seconds: 5000000
First of all, tell me: is it possible or not?
If yes, what properties should I use to achieve this?
You can have only one defaultZone. You need peer replication, so that your 8761/eureka is synchronized with 8762/eureka. When the application is registered with 8761, it is then also available in 8762. To do this, see @spencergibb's answer here.
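A minimal sketch of peer replication between the two registries, assuming both run on localhost (each server lists the other as its defaultZone):

```yaml
# Server on port 8761
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8762/eureka/

# Server on port 8762
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
```

The client then keeps a single defaultZone pointing at either peer, and its registration is replicated to the other.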
