I am trying to stop the retries for my Spring Cloud Stream application completely, or retry only for specific exceptions.
I have tried several ways to completely stop it from retrying 3 times. I set the following configuration, but even then it retries:
spring.cloud.stream.bindings.input.consumer.max-attempts=1
spring.cloud.stream.rabbit.bindings.input.consumer.requeue-rejected=true
To retry only for specific exceptions I used the configuration below, but it didn't work out; it still retries for exceptions I don't want it to:
cloud:
  stream:
    rabbit:
      bindings:
        input:
          consumer:
            autoBindDlq: true
            republishToDlq: true
            prefix: local-
            maxAttempts: 3
            backOffInitialInterval: 1000
            backOffMaxInterval: 10000
            backOffMultiplier: 2.0
            defaultRetryable: true
            retryableExceptions:
              com.ss.*: true
              java.lang.*: false
Any idea on how to make it not retry at all, or retry only for specific exceptions?
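For what it's worth, below is a minimal sketch of both variants, based on the documented Spring Cloud Stream consumer properties (the exception class name com.ss.MyRetryableException is a hypothetical placeholder). Two things stand out in the configuration above: maxAttempts, defaultRetryable, and retryableExceptions are common consumer properties, so they belong under spring.cloud.stream.bindings rather than the RabbitMQ-specific rabbit.bindings section, and requeue-rejected=true asks the broker to redeliver rejected messages, which can look like retrying even when maxAttempts is 1:

spring:
  cloud:
    stream:
      bindings:
        input:
          consumer:
            maxAttempts: 1                # a single attempt disables retry entirely
            # or, to retry only for selected exceptions:
            # maxAttempts: 3
            # defaultRetryable: false     # nothing retries unless listed below
            # retryableExceptions:
            #   # keys are fully-qualified class names; wildcard keys like com.ss.* are not resolved
            #   com.ss.MyRetryableException: true   # hypothetical class name
      rabbit:
        bindings:
          input:
            consumer:
              requeue-rejected: false     # keep the broker from redelivering failed deliveries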
I have the following RabbitMQ setup in the application.yml of my Spring Boot app, which consumes (receives) messages:
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: admin
    password: password
    listener:
      simple:
        retry:
          enabled: true
          initial-interval: 3s
          max-interval: 10s
          multiplier: 2
          max-attempts: 3
I want to create a different Spring Boot app which only sends messages.
My questions:
Is it possible to define a retry setup for message sending?
If yes, is it configured the same way as my example shows, given that those properties sit under listener (spring.rabbitmq.listener...)?
Thank you!
See the Spring Boot documentation about the template.retry properties:

Property                                          Default   Description
spring.rabbitmq.template.retry.enabled            false     Whether publishing retries are enabled.
spring.rabbitmq.template.retry.initial-interval   1000ms    Duration between the first and second attempt to deliver a message.
spring.rabbitmq.template.retry.max-attempts       3         Maximum number of attempts to deliver a message.
spring.rabbitmq.template.retry.max-interval       10000ms   Maximum duration between attempts.
spring.rabbitmq.template.retry.multiplier         1.0       Multiplier to apply to the previous retry interval.
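So yes, retry can be configured for message sending as well, but under spring.rabbitmq.template rather than spring.rabbitmq.listener. A minimal sketch mirroring the listener example above (the interval values are illustrative, not recommendations):

spring:
  rabbitmq:
    template:
      retry:
        enabled: true            # publishing retries are off by default
        initial-interval: 3s
        max-interval: 10s
        multiplier: 2
        max-attempts: 3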
I have a RabbitMQ message broker running on a server, for which I'm trying to configure a Producer and a Consumer using Spring Cloud Stream. My Producer creates messages in a queue every second, and my Consumer reads them at the same rate. However, if I stop my Consumer while the Producer keeps pushing messages, when I restart the Consumer it is unable to retrieve the messages created during the time it was down; it only picks up the messages produced from the time it was started. How can I make my Consumer consume the existing messages in the queue when it starts?
Here are my Consumer properties:
cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        consumer:
          auto-bind-dlq: true
          republishToDlq: true
          maxAttempts: 5
And my Producer properties:
cloud:
  stream:
    bindings:
      output:
        destination: spring-cloud-stream-demo
Appreciate any help!
You need to add a group to the consumer (input) binding; otherwise it binds an anonymous, auto-delete queue to the exchange.
With a group, a permanent, durable queue is bound instead.
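A minimal sketch of the change (the group name demo-group is just an example):

cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        # a named group creates a durable queue that outlives the consumer,
        # so messages published while it is down are delivered when it restarts
        group: demo-group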
I have a Spring Boot application using Spring Kafka. We have created a consumer which consumes messages from 4 topics. These topics do not have multiple partitions. The issue I am facing is random behavior: for any one of the topics, the offset stops advancing and my consumer keeps consuming the same messages from that topic again and again, until we manually move the offset to latest. Below is the YAML configuration I have:
spring:
  kafka:
    consumer:
      bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS}
      group-id: group_id
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

kafka:
  consumer:
    allTopicList: user.topic,student.topic,class.topic,teachers.topic
As it is a Spring Boot application, the default offset reset is latest.
What am I doing wrong here? Please help me understand.
What version are you using?
You should set
spring:
  kafka:
    consumer:
      enable-auto-commit: false
The listener container will more reliably commit the offsets.
You should also consider
ack-mode: RECORD
and the container will commit the offset for each successfully processed record (default is BATCH).
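Putting both suggestions together in YAML (note that ack-mode is a listener-container property, so it goes under spring.kafka.listener, not spring.kafka.consumer):

spring:
  kafka:
    consumer:
      enable-auto-commit: false   # let the listener container manage offset commits
    listener:
      ack-mode: RECORD            # commit the offset after each successfully processed record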
I have configured Zuul with 2 instances using Ribbon (without Eureka), as below:
zuul.retryable=true
zuul.routes.simple-ms-app.serviceId=client
client.ribbon.listOfServers=http://localhost:7788,http://localhost:8877
When both the instances 7788 & 8877 are up and running, everything goes fine.
When the first instance in the listOfServers is down, the request ends with the below error:
com.netflix.zuul.exception.ZuulException: Forwarding error
I am using the below version configuration:
spring-boot: 2.0.7.RELEASE
spring-cloud: Finchley.SR2
If anyone has faced a similar issue and managed to figure out a solution, please share it here.
Thank you.
By default, Zuul throws an exception (instead of returning a 503/404) when the upstream service is not available. This behavior has been discussed in detail in the "Zuul swallows 503 exceptions from upstream microservices" GitHub thread.
To handle this case and configure Zuul to retry on the (current and next) available instances, you need to do two things:
Extend ErrorFilter and handle the exception with custom behavior
Configure retry for Zuul
To extend ErrorFilter, provide custom logic that returns a 404 or 503 status code. Some approaches to dealing with this exception are explained in this SO thread: Customizing Zuul Exception. A minimal sketch is shown below.
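For illustration, here is a minimal sketch of such an error filter against the Finchley-era Zuul API (the 503 status, filter order, and response body are assumptions to adapt, not a definitive implementation):

import javax.servlet.http.HttpServletResponse;

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.stereotype.Component;

@Component
public class CustomErrorFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "error"; // runs when another filter has thrown
    }

    @Override
    public int filterOrder() {
        return -1; // before SendErrorFilter, which runs at order 0
    }

    @Override
    public boolean shouldFilter() {
        // only act when there is an exception to translate
        return RequestContext.getCurrentContext().getThrowable() != null;
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        // map the forwarding error to a 503 instead of propagating the exception
        ctx.setResponseStatusCode(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        ctx.setResponseBody("Upstream service unavailable");
        // clear the stored throwable so the default error handling does not run again
        ctx.remove("throwable");
        return null;
    }
}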
Retry in Zuul can be configured using the following application properties:
zuul:
  retryable: true

ribbon:
  MaxAutoRetries: 1
  MaxAutoRetriesNextServer: 3
  OkToRetryOnAllOperations: true

yourApplication:
  ribbon:
    listOfServers: instance-1-url, instance-2-url
Please note that Spring Retry is a required dependency for retry support in Zuul.
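If it is not already on the classpath, the coordinates are org.springframework.retry:spring-retry; in Maven (the version is usually managed by Spring Boot):

<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
</dependency>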
I have a Spring Cloud Stream transformer application using RabbitMQ. It reads from a Rabbit queue, does some transformation, and writes to a Rabbit exchange. I have my application deployed to PCF and am binding to a Rabbit service.
This works fine, but now I need separate connections for consuming and producing the messages (I want to read from the Rabbit queue over one connection and write to the Rabbit exchange over a different one). How would I configure this? Is it possible to bind my application to 2 different Rabbit services, using one as the producer and one as the consumer?
Well, starting with version 1.3, the Rabbit binder indeed creates a separate ConnectionFactory for producers: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#_rabbitmq_binder
Starting with version 1.3, the RabbitMessageChannelBinder creates an internal ConnectionFactory copy for the non-transactional producers to avoid dead locks on consumers when shared, cached connections are blocked because of Memory Alarm on Broker.
So maybe that is already enough for you as-is, after upgrading to Spring Cloud Stream Ditmars.
UPDATE
How would I go about configuring this internal ConnectionFactory copy with different connection properties?
No, that's a different story. What you need is called multi-binder support: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#multiple-binders
You should declare several blocks for different connection factories:
spring.cloud.stream.bindings.input.binder=rabbit1
spring.cloud.stream.bindings.output.binder=rabbit2
...
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: foo
          binder: rabbit1
        output:
          destination: bar
          binder: rabbit2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host1>
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host2>