I want to set the acks=all property for the producer in my Spring Cloud Stream Kafka application.
I have tried the following:
spring.cloud.stream.kafka.binder.requiredAcks=all
and
spring.cloud.stream.kafka.streams.binder.configuration=all
and
spring.cloud.stream.kafka.streams.bindings.<channel>.producer.configuration.requiredAcks=all
Unfortunately, nothing works for me.
Can you please help me understand how to set these kinds of properties at the application level or for a specific producer/consumer?
The configuration below is used only by the Kafka binder (not the Kafka Streams one). It sets the acks property of the producer instance.
spring.cloud.stream.kafka.binder.requiredAcks
To configure a Kafka Streams instance, properties must be prefixed with spring.cloud.stream.kafka.streams.binder (see the Spring Cloud Stream configuration documentation).
Within Kafka Streams, producer properties can be overridden by adding the producer. prefix (see Configuring a Streams Application). So to configure producer acks you should define the following property:
spring.cloud.stream.kafka.streams.binder.configuration.producer.acks=all
Note that if you are building a stateful Kafka Streams application, it is highly recommended to enable the exactly_once semantics.
This can be configured with:
spring.cloud.stream.kafka.streams.binder.configuration.processing.guarantee=exactly_once
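For reference, here is a minimal sketch of what those two binder properties translate to in plain Kafka Streams configuration; the application id and bootstrap servers are illustrative, not from the question:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsAcksConfig {

    public static Properties streamsProperties() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        // producerPrefix("acks") yields the key "producer.acks", the same key that the
        // spring.cloud.stream.kafka.streams.binder.configuration.producer.acks property sets
        props.put(StreamsConfig.producerPrefix(ProducerConfig.ACKS_CONFIG), "all");
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }
}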
Related
I'm experiencing a problem. I have a Spring Boot application that consumes from one topic and produces to another topic.
The topic the application consumes from is on-premises; the topic it produces to is in AWS.
Is there a way to specify a bootstrap server and schema registry for each topic?
My application.properties has the following properties:
spring.kafka.bootstrap-servers=localhost:32202
spring.kafka.properties.schema.registry.url=127.0.0.1:8082
The problem here is that these properties apply to both the consumer and the producer.
I need to specify one bootstrap server for the consumer and another for the producer, and likewise one schema registry for the consumer and another for the producer.
I don't know if the following is the best way to deal with this problem:
spring.kafka.consumer.bootstrap-servers=consumer-localhost:32202
spring.kafka.consumer.schema.registry.url=consumer-127.0.0.1:8082
spring.kafka.producer.bootstrap-servers=producer-localhost:10010
spring.kafka.producer.schema.registry.url=producer-127.0.0.1:9090
Thanks in advance!
See the Spring Boot documentation.
The properties supported by auto configuration are shown in the “Integration Properties” section of the Appendix. Note that, for the most part, these properties (hyphenated or camelCase) map directly to the Apache Kafka dotted properties. See the Apache Kafka documentation for details.
The first few of these properties apply to all components (producers, consumers, admins, and streams) but can be specified at the component level if you wish to use different values. Apache Kafka designates properties with an importance of HIGH, MEDIUM, or LOW. Spring Boot auto-configuration supports all HIGH importance properties, some selected MEDIUM and LOW properties, and any properties that do not have a default value.
Only a subset of the properties supported by Kafka are available directly through the KafkaProperties class. If you wish to configure the producer or consumer with additional properties that are not directly supported, use the following properties:
spring.kafka.properties[prop.one]=first
spring.kafka.admin.properties[prop.two]=second
spring.kafka.consumer.properties[prop.three]=third
spring.kafka.producer.properties[prop.four]=fourth
spring.kafka.streams.properties[prop.five]=fifth
This sets the common prop.one Kafka property to first (applies to producers, consumers, and admins), the prop.two admin property to second, the prop.three consumer property to third, the prop.four producer property to fourth, and the prop.five streams property to fifth.
So, in your case:
spring.kafka.consumer.bootstrap-servers=consumer-localhost:32202
spring.kafka.consumer.properties.schema.registry.url=consumer-127.0.0.1:8082
spring.kafka.producer.bootstrap-servers=producer-localhost:10010
spring.kafka.producer.properties.schema.registry.url=producer-127.0.0.1:9090
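If you are using Avro, the (de)serializer classes can likewise be set per component; a sketch using Confluent's classes, only needed if you are not already configuring them elsewhere:

spring.kafka.consumer.value-deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
spring.kafka.producer.value-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer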
For managing the headers of messages produced/consumed by the Kafka binder there is the KafkaHeaderMapper interface, whose implementation can be configured as a bean with the following property: spring.cloud.stream.kafka.binder.headerMapperBeanName.
Is there something similar for the Kafka Streams binder in Spring Cloud Stream? My intention is to be able to control how message headers are serialized/deserialized, or included/excluded, on stream input and output. Does anyone know how to achieve this?
There is no equivalent for streams. Spring is only involved in setting up the infrastructure/topology; it is not involved in runtime record processing.
You can, however, use a custom serializer/deserializer and manipulate the headers there.
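For example, a minimal sketch of a header-manipulating serializer, assuming String values; the delegate and the header name internal-trace-id are illustrative, not from the question:

import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class HeaderFilteringSerializer implements Serializer<String> {

    private final StringSerializer delegate = new StringSerializer();

    @Override
    public byte[] serialize(String topic, String data) {
        return delegate.serialize(topic, data);
    }

    @Override
    public byte[] serialize(String topic, Headers headers, String data) {
        // Drop a header you do not want on outgoing records (the name is hypothetical)
        headers.remove("internal-trace-id");
        return delegate.serialize(topic, headers, data);
    }
}

The Headers-aware serialize overload runs on every outgoing record, which is where headers can be inspected, added, or filtered; a deserializer can do the same on the inbound side.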
I have a Reactor-based Spring Boot Kafka stream processing app that I am writing integration tests for, using Spring's @EmbeddedKafka broker. It works great: it overrides the bootstrap broker URLs configured on my reactive processor's consumer and producer. What I haven't figured out yet is how to deal with the schema registry for my processor when testing.
I'm using Confluent's KafkaAvroSerializer and KafkaAvroDeserializer classes, and I just have the schema.registry.url field configured in my Spring app configs to be injected into the Kafka properties. I'm using Confluent's MockSchemaRegistryClient for the test producer and consumer, but what I need is a way to inject this mock client into the actual consumer and producer in my stream processor code, and I see no way to do that. It almost seems like I need an embedded version of the schema registry to point them to, like the embedded broker. Our build pipeline does not support spinning up containers, otherwise I'd use Docker or Testcontainers.
Has anyone else solved this already? Any help or suggestions appreciated.
I managed to figure this out. If you use a URL that begins with mock:// for your test's SerDes, and you override the schema.registry.url property in the @SpringBootTest annotation with the same mock URL, then your processor's consumer and producer will also pick up and use this mock schema registry client, and everything just works!
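A minimal sketch of that setup; the topic names and the mock scope test-scope are illustrative, and the property key assumes the schema.registry.url is injected via Spring Boot's spring.kafka.properties passthrough:

import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;

@EmbeddedKafka(partitions = 1, topics = { "input-topic", "output-topic" })
@SpringBootTest(properties = {
        "spring.kafka.properties.schema.registry.url=mock://test-scope"
})
class ProcessorIntegrationTest {

    // Any KafkaAvroSerializer/KafkaAvroDeserializer created by the application, and any
    // test producer/consumer configured with the same mock:// URL, will share the same
    // in-memory mock schema registry for the "test-scope" scope.
}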
I have a use case where I want to get the underlying Kafka producer (KafkaTemplate) in a Spring Cloud Stream application. While navigating the code I stumbled upon KafkaProducerMessageHandler, which has a getKafkaTemplate method. However, it fails to auto-wire.
Also, if I directly auto-wire KafkaTemplate, the template is initialized with default properties and ignores the broker under the binder key of the Spring Cloud Stream configuration.
How can I access the underlying KafkaTemplate or a producer/consumer in a Spring Cloud Stream app?
EDIT: Actually my Spring Cloud Stream app has multiple Kafka binders, and I want to get the KafkaTemplate or Kafka producer corresponding to each binder. Is that possible somehow?
It's not entirely clear why you would need to do that, but you can capture the KafkaTemplates by adding a ProducerMessageHandlerCustomizer @Bean to the application context.
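For example, a minimal sketch that captures one template per producer binding; the map, class, and method names are illustrative:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class KafkaTemplateCapturer {

    private final Map<String, KafkaTemplate<?, ?>> templates = new ConcurrentHashMap<>();

    @Bean
    public ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> captureTemplates() {
        // Called once per producer binding; destinationName identifies the binding,
        // which is what lets you tell multiple binders' templates apart
        return (handler, destinationName) ->
                templates.put(destinationName, handler.getKafkaTemplate());
    }

    public KafkaTemplate<?, ?> templateFor(String destinationName) {
        return templates.get(destinationName);
    }
}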
I am new to Spring-Integration.
My use case is:
Listen to a RabbitMQ queue/topic, get the message, process it, and send it to another message broker (most likely another RabbitMQ instance).
Expected load: 5000 messages/sec
In application.properties we can set the configuration for only one host.
How can I use Spring Integration between two message brokers?
All the examples that I see are for one message broker. Any pointers on getting started with two message brokers and Spring Integration would be appreciated.
Regards,
Mahesh
Since you mention an application.properties, it sounds like you use Spring Boot with its auto-configuration feature. This is a very important detail in your question, because Spring Boot is opinionated about auto-configuration, and you really can have only one broker connection configuration auto-configured. If you would like to have another, similar connection in the same application, you have to forget that auto-configuration feature. You can still use the mentioned application.properties, but you have to manage the beans manually.
Since you talk about a RabbitMQ connection, you need to exclude RabbitAutoConfiguration and manage all the required beans manually:
@SpringBootApplication(exclude = RabbitAutoConfiguration.class)
You can still use @EnableConfigurationProperties(RabbitProperties.class) on one of your @Configuration classes to be able to inject that RabbitProperties instance and populate the respective CachingConnectionFactory. For the second broker you can introduce your own @ConfigurationProperties, or just configure everything manually, reading properties via @Value. See more about manual connection factory configuration in the Spring AMQP reference manual: https://docs.spring.io/spring-amqp/docs/2.2.1.RELEASE/reference/html/#connections
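A minimal sketch of that manual setup, assuming the first broker keeps the standard spring.rabbitmq.* properties and the second is described by custom second.rabbitmq.* keys (the property keys and bean names are illustrative):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.amqp.RabbitProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableConfigurationProperties(RabbitProperties.class)
public class TwoBrokersConfig {

    // First broker: reuse the standard spring.rabbitmq.* properties via RabbitProperties
    @Bean
    public CachingConnectionFactory consumerConnectionFactory(RabbitProperties props) {
        CachingConnectionFactory cf = new CachingConnectionFactory(props.getHost(), props.getPort());
        cf.setUsername(props.getUsername());
        cf.setPassword(props.getPassword());
        return cf;
    }

    // Second broker: custom properties, read manually via @Value
    @Bean
    public CachingConnectionFactory producerConnectionFactory(
            @Value("${second.rabbitmq.host}") String host,
            @Value("${second.rabbitmq.port:5672}") int port) {
        return new CachingConnectionFactory(host, port);
    }
}

Each Spring Integration adapter (inbound from one broker, outbound to the other) is then wired to the corresponding connection factory.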