I have a Spring Cloud Stream transformer application using RabbitMQ. It reads from a Rabbit queue, does some transformation, and writes to a Rabbit exchange. The application is deployed to PCF and bound to a Rabbit service.
This works fine, but now I need separate connections for consuming and producing messages (I want to read from the Rabbit queue using one connection, and write to the Rabbit exchange using a different connection). How would I configure this? Is it possible to bind my application to two different Rabbit services, using one for the producer and one for the consumer?
Well, starting with version 1.3, the Rabbit binder indeed creates a separate ConnectionFactory for producers: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#_rabbitmq_binder
Starting with version 1.3, the RabbitMessageChannelBinder creates an internal ConnectionFactory copy for the non-transactional producers to avoid dead locks on consumers when shared, cached connections are blocked because of Memory Alarm on Broker.
So perhaps that is already enough for you as-is, after upgrading to Spring Cloud Stream Ditmars.
UPDATE
How would I go about configuring this internal ConnectionFactory copy with different connection properties?
No, that's a different story. What you need is called multi-binder support: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.RELEASE/reference/htmlsingle/#multiple-binders
You should declare several blocks for different connection factories:
spring.cloud.stream.bindings.input.binder=rabbit1
spring.cloud.stream.bindings.output.binder=rabbit2
...
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: foo
          binder: rabbit1
        output:
          destination: bar
          binder: rabbit2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host1>
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host2>
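If the two brokers also need different credentials, for example two separate Rabbit service instances on PCF, those can be supplied per binder environment as well. A minimal sketch, where the hosts, ports, and credentials are placeholders you would replace with the values from each service binding:

spring:
  cloud:
    stream:
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: <host1>:5672
                username: <user1>
                password: <password1>
                virtual-host: <vhost1>
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: <host2>:5672
                username: <user2>
                password: <password2>
                virtual-host: <vhost2>

Each environment block is just regular Spring Boot spring.rabbitmq configuration scoped to that one binder.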
Related
I'm working on a project for a large company with millions of users. We are attempting to convert their REST-based architecture to an event-based architecture. The current architecture involves a service, we'll call it Service-A, that makes 7 REST calls when a user logs in.
Rather than calling out to the 7 services for that data when the user logs in, we want to modify those 7 services to produce events when there are updates to the data. Then we will have Service-A listen to 7 different Kafka topics and save those updates to the database.
It is a Java Spring Boot application. We are using AWS MSK to host our Kafka cluster and AWS Glue for the schema registry. I can configure my consumer in Service-A to listen to 7 topics, but I don't know how to get Service-A to check 7 different schemas when consuming a message from one of those 7 topics.
So far, the only configuration I've found for the Kafka consumer is one property that takes one schema name.
Here is my config YAML:
spring:
  kafka:
    listener:
      ack-mode: manual_immediate
    consumer:
      enable-auto-commit: false
      group-id: my-group
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        spring.json.trusted.packages: com.app.somepackage.domain
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: com.amazonaws.services.schemaregistry.deserializers.avro.AWSKafkaAvroDeserializer
      auto-offset-reset: earliest
    bootstrap-servers: <my-msk-url>
    properties:
      region: us-west-2
      schemaName: my-first-schema
      registry.name: my-registry-name
      avroRecordType: SPECIFIC_RECORD
I have a RabbitMQ message broker running on a server, to which I'm trying to configure a Producer and a Consumer using Spring Cloud Stream. My Producer creates messages in a queue every second, while my Consumer reads them at the same rate. However, if I stop my Consumer while the Producer keeps pushing messages, then when I restart the Consumer it is unable to retrieve the messages created while it was down, and it only picks up the messages produced after it started. How can I make my Consumer consume the messages already in the queue when it starts?
Here are my Consumer properties:
cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        consumer:
          auto-bind-dlq: true
          republishToDlq: true
          maxAttempts: 5
And my Producer properties:
cloud:
  stream:
    bindings:
      output:
        destination: spring-cloud-stream-demo
Appreciate any help!
You need to add a group to the consumer (input) binding; otherwise it will bind an anonymous, auto-delete queue to the exchange.
With a group, a permanent, durable queue is bound instead.
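For example, keeping your other settings and adding a group (the group name below is just a placeholder; choose a stable name per logical consumer):

cloud:
  stream:
    bindings:
      input:
        destination: spring-cloud-stream-demo
        group: demo-consumer-group
        consumer:
          auto-bind-dlq: true
          republishToDlq: true
          maxAttempts: 5

The queue is then named <destination>.<group> (here spring-cloud-stream-demo.demo-consumer-group) and survives consumer restarts, so messages published while the consumer is down are delivered when it comes back up.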
I have a Spring Boot application using Spring Kafka. We have created a consumer that consumes messages from 4 topics. These topics each have only a single partition. The issue I am facing is random behavior: for any one of these topics, the offset stops advancing and my consumer keeps consuming the same messages from that topic again and again, until we manually move the offset to the latest. Below is the YAML configuration I have:
spring:
  kafka:
    consumer:
      bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS}
      group-id: group_id
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

kafka:
  consumer:
    allTopicList: user.topic,student.topic,class.topic,teachers.topic
As it is a Spring Boot application, the default offset reset is latest.
What am I doing wrong here? Please help me understand.
What version are you using?
You should set
...consumer:
  enable-auto-commit: false
The listener container will more reliably commit the offsets.
You should also consider
ack-mode: RECORD
and the container will commit the offset for each successfully processed record (default is BATCH).
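Putting both suggestions together, a sketch of how that could look alongside your existing settings:

spring:
  kafka:
    consumer:
      bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS}
      group-id: group_id
      enable-auto-commit: false
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    listener:
      ack-mode: RECORD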
I have set up a Spring Boot application to receive Kafka messages from an existing and working Kafka producer. The setup is standard, and based on the following: https://www.codenotfound.com/spring-kafka-consumer-producer-example.html
Messages are not received, and the following is continually displayed in the console:
WARN org.apache.kafka.clients.NetworkClient : Bootstrap broker <hostname>:9092 disconnected
In addition, the following debug message is logged:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The console message is discussed in the following link:
https://community.hortonworks.com/content/supportkb/150148/errorwarn-bootstrap-broker-6668-disconnected-orgap.html
The logged message is discussed here:
https://community.cloudera.com/t5/Data-Ingestion-Integration/Error-when-sending-message-to-topic-in-Kafka/td-p/41440
Very likely, the timeout will not happen when the first issue is resolved.
The solution given for the console message is to explicitly pass --security-protocol SSL as an argument to the producer or consumer command.
Given that I am listening on an existing Kafka broker and topic, no settings can be changed there. Any changes must be on the Spring Boot side.
Is it possible to configure application.yml so that --security-protocol SSL is passed as an argument to the consumer? Also, has anyone experienced this before, and is there another way to resolve the issue using the configuration options available in Spring Boot and Spring Kafka?
Thanks
See the documentation.
Scroll down to Kafka. Arbitrary Kafka properties can be set using
spring:
  kafka:
    properties:
      security.protocol: SSL
This applies to the consumer and producer (and the admin in 2.0).
In the upcoming 2.0 release (currently RC1), there is also
spring:
  kafka:
    consumer:
      properties:
        some.property: foo
for properties that only apply to consumers (and similarly for producers and admins).
We have a Mesos/Marathon cluster in place and a Spring Boot application with Spring Cloud Stream that consumes a topic from Kafka, and we now want to integrate Kafka with the Mesos cluster. For this we want to install the Kafka Mesos Framework.
Right now we have the application.yml configuration like this:
---
spring:
  profiles: local-docker
  cloud:
    stream:
      kafka:
        binder:
          zk-nodes: 192.168.88.188
          brokers: 192.168.88.188
....
Once we have installed the Kafka Mesos Framework, how can we connect to Kafka from Spring Cloud Stream? Or, more specifically, what will the configuration look like?
The configuration properties look good. Do you have the host addresses correct?
For more info on the kafka binder config properties, you can refer here:
https://github.com/spring-cloud/spring-cloud-stream/blob/master/spring-cloud-stream-docs/src/main/asciidoc/spring-cloud-stream-overview.adoc#kafka-specific-settings
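As a rough sketch, once the Mesos-scheduled brokers are reachable, the binder only needs their advertised addresses; the hosts and ports below are placeholders for whatever the Kafka Mesos Framework exposes in your cluster:

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: <broker-1-host>:9092,<broker-2-host>:9092
          zk-nodes: <zookeeper-host>:2181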