Spring Cloud Stream + RabbitMQ - Consuming existing messages in queue - spring-boot

I have a RabbitMQ message broker running on a server, to which I'm trying to configure a Producer and a Consumer using Spring Cloud Stream. My Producer creates messages in a queue every second, while my Consumer reads them at the same rate. However, if I stop my Consumer while the Producer keeps pushing messages, the restarted Consumer is unable to retrieve the messages created during the period it was down, only picking up the messages produced from the time it was started. How can I make my Consumer consume the messages already in the queue when it starts?
Here are my Consumer properties:
cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        consumer:
          auto-bind-dlq: true
          republishToDlq: true
          maxAttempts: 5
And my Producer properties:
cloud:
  stream:
    bindings:
      output:
        destination: spring-cloud-stream-demo
Appreciate any help!

You need to add a group to the consumer (input) binding; otherwise it binds an anonymous, auto-delete queue to the exchange.
With a group, a permanent, durable queue is bound instead.
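For example (the group name demo-group is just an illustration; any stable name will do):

cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        group: demo-group
        consumer:
          auto-bind-dlq: true
          republishToDlq: true
          maxAttempts: 5

With this in place, the binder declares a durable queue named spring-cloud-stream-demo.demo-group, and messages published while the consumer is down accumulate there until it restarts.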

Related

RabbitMQ queue gets deleted immediately after creation. Why?

I'm trying to deploy Spring Boot microservice applications that produce and consume data using RabbitMQ on a Kubernetes cluster in Azure AKS.
When I run the producer application and produce a message to the queue through Postman, I get a 200 OK response, but no queues appear in the RabbitMQ management UI, and in the RabbitMQ container logs I see the error below:
o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'employeeexchange' in vhost '/', class-id=60, method-id=40)
Not able to figure out what I'm doing wrong.
If you have any idea (or need any kind of additional information), let me know.
You can use a bean like the one below to create a queue (the queue name employeequeue is illustrative):

@Bean
Queue queue() {
    // Queue(name, durable, exclusive, autoDelete)
    return new Queue("employeequeue", true, false, false);
}

Parameters:
name - the name of the queue.
durable - true if we are declaring a durable queue (the queue will survive a server restart).
exclusive - false if we are not declaring an exclusive queue (an exclusive queue can only be used by the declarer's connection).
autoDelete - false if the server should not delete the queue when it is no longer in use.
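Note that the error above complains about a missing exchange (employeeexchange), so the exchange and a binding to the queue very likely need declaring too. A minimal sketch using Spring AMQP, where the exchange type (direct) and the routing key (employee) are assumptions; match whatever the producer actually uses:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    Queue queue() {
        // durable, non-exclusive, not auto-delete
        return new Queue("employeequeue", true, false, false);
    }

    @Bean
    DirectExchange exchange() {
        // must match the exchange name the producer publishes to
        return new DirectExchange("employeeexchange");
    }

    @Bean
    Binding binding(Queue queue, DirectExchange exchange) {
        // routing key "employee" is an assumption
        return BindingBuilder.bind(queue).to(exchange).with("employee");
    }
}

Spring Boot's auto-configured RabbitAdmin declares these beans on the broker when the first connection is opened.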

Spring Kafka consuming old messages which are already consumed by the consumer

I have a Spring Boot application using Spring Kafka. We have created a consumer which consumes messages from 4 topics. These topics don't have any partitions. The issue I am facing is random behavior: in any one of the topics the offset stops advancing and my consumer keeps consuming the same messages from that topic again and again, until we manually move the offset to latest. Below is the YAML configuration I have:
spring:
  kafka:
    consumer:
      bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS}
      group-id: group_id
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

kafka:
  consumer:
    allTopicList: user.topic,student.topic,class.topic,teachers.topic
As it is a Spring Boot application, the default offset reset is latest.
What am I doing wrong here? Please help me understand.
What version are you using?
You should set

...consumer:
  enable-auto-commit: false

The listener container will commit the offsets more reliably.
You should also consider

ack-mode: RECORD

so that the container commits the offset for each successfully processed record (the default is BATCH).
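Putting both together in application.yml, a minimal sketch (spring.kafka.consumer and spring.kafka.listener are the standard Spring Boot paths for these two properties):

spring:
  kafka:
    consumer:
      enable-auto-commit: false
    listener:
      ack-mode: RECORD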

how to pause consumers from consuming messages in spring cloud stream

I ran into a situation where I want to pause consumers from consuming messages through my Spring Boot application, without losing messages from the queue. I am using AMQP and Spring Cloud Stream.
https://github.com/spring-cloud/spring-cloud-stream/blob/master/docs/src/main/asciidoc/spring-cloud-stream.adoc#binding_visualization_control states that only Kafka has this option.
Can I do something to fix this for the rabbit-mq binder?
Pause is required for Kafka because if you stop() the container, the broker will rebalance and give the partitions to another instance. With RabbitMQ, the queue remains when the container is stop()ped so the issue doesn't arise there (as long as you specify a group).
You can't stop() the container for an anonymous consumer (no group) because such consumers use an auto-delete queue. There is no way to pause such consumers (the AMQP protocol has no such mechanism).
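With a group in place, though, you can stop and restart the binding through the actuator bindings endpoint from the linked document; a sketch, assuming the binding is named input, the actuator runs on port 8080, and the bindings endpoint is exposed:

# stop consuming; messages stay in the durable queue
curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/input

# resume; the consumer picks up everything that accumulated
curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/input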

Spring Boot and Kafka: Broker disconnected

I have setup a Spring Boot application to receive Kafka messages from an existing and working Kafka producer. The setup is standard, and based on the following: https://www.codenotfound.com/spring-kafka-consumer-producer-example.html
Messages are not received, and the following is continually displayed in the console:
WARN org.apache.kafka.clients.NetworkClient : Bootstrap broker <hostname>:9092 disconnected
In addition, the following debug message is logged:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The console message is discussed in the following link:
https://community.hortonworks.com/content/supportkb/150148/errorwarn-bootstrap-broker-6668-disconnected-orgap.html
The logged message is discussed here:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Error-when-sending-message-to-topic-in-Kafka/td-p/41440
Very likely, the timeout will not happen when the first issue is resolved.
The solution given for the console message is to explicitly pass --security-protocol SSL as an argument to the producer or consumer command.
Given that I am listening on an existing Kafka broker and topic, no settings can be changed there. Any changes must be on the Spring Boot side.
Is it possible to configure application.yml so that --security-protocol SSL is passed as an argument to the consumer? Also, has anyone experienced this before, and is there another way to resolve the issue using the configuration options available in Spring Boot and Spring Kafka?
Thanks
See the documentation.
Scroll down to Kafka. Arbitrary Kafka properties can be set using
spring:
  kafka:
    properties:
      security.protocol: SSL

This applies to both the consumer and the producer (and the admin in 2.0).
In the upcoming 2.0 release (currently RC1), there is also

spring:
  kafka:
    consumer:
      properties:
        some.property: foo

for properties that apply only to consumers (and similarly for producers and admins).
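So, once on 2.0, something like this sketch should set the protocol for the consumer alone (on earlier versions, use the shared spring.kafka.properties form shown above):

spring:
  kafka:
    consumer:
      properties:
        security.protocol: SSL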

Spring Cloud Streaming - Separate Connection for Producer & Consumer

I have a Spring Cloud Streaming transformer application using RabbitMQ. It is reading from a Rabbit queue, doing some transformation, and writing to a Rabbit exchange. I have my application deployed to PCF and am binding to a Rabbit service.
This works fine, but now I need a separate connection for consuming and producing messages. (I want to read from the Rabbit queue using one connection, and write to a Rabbit exchange using a different connection.) How would I configure this? Is it possible to bind my application to 2 different Rabbit services, using one as the producer and one as the consumer?
Well, starting with version 1.3, the Rabbit Binder indeed creates a separate ConnectionFactory for producers: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#_rabbitmq_binder
Starting with version 1.3, the RabbitMessageChannelBinder creates an internal ConnectionFactory copy for the non-transactional producers to avoid dead locks on consumers when shared, cached connections are blocked because of Memory Alarm on Broker.
So, maybe that is just enough for you as is after upgrading to Spring Cloud Stream Ditmars.
UPDATE
How would I go about configuring this internal ConnectionFactory copy with different connection properties?
No, that's a different story. What you need is called multi-binder support: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#multiple-binders
You should declare several blocks for different connection factories:
spring.cloud.stream.bindings.input.binder=rabbit1
spring.cloud.stream.bindings.output.binder=rabbit2
...
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: foo
          binder: rabbit1
        output:
          destination: bar
          binder: rabbit2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host1>
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host2>
