I have a use case where I need to get hold of the underlying Kafka producer (KafkaTemplate) in a Spring Cloud Stream application. While navigating the code I stumbled upon KafkaProducerMessageHandler, which has a getKafkaTemplate method; however, it fails to auto-wire.
Also, if I auto-wire a KafkaTemplate directly, the template is initialized with default properties and ignores the broker set under the binder key of the SCSt configuration.
How can I access the underlying KafkaTemplate or a producer/consumer in a Spring Cloud Stream app?
EDIT: Actually my SCSt app has multiple Kafka binders and I want to get the KafkaTemplate or Kafka producer corresponding to each binder. Is that possible somehow?
It's not entirely clear why you would need to do that, but you can capture the KafkaTemplates by adding a ProducerMessageHandlerCustomizer @Bean to the application context.
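A minimal sketch of that idea (the holder type and bean names are made up for illustration; the customizer callback receives each producer message handler along with the name of the destination it serves):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class TemplateCaptureConfig {

    // Illustrative holder type (not part of any framework API): destination name -> template.
    public static class KafkaTemplateHolder {

        private final Map<String, KafkaTemplate<?, ?>> templates = new ConcurrentHashMap<>();

        void put(String destination, KafkaTemplate<?, ?> template) {
            templates.put(destination, template);
        }

        public KafkaTemplate<?, ?> get(String destination) {
            return templates.get(destination);
        }
    }

    @Bean
    public KafkaTemplateHolder kafkaTemplateHolder() {
        return new KafkaTemplateHolder();
    }

    @Bean
    public ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> templateCapturingCustomizer(
            KafkaTemplateHolder holder) {
        // The customizer runs once per output binding, so an app with multiple
        // Kafka binders ends up with one captured template per destination.
        return (handler, destinationName) ->
                holder.put(destinationName, handler.getKafkaTemplate());
    }
}
```

Anywhere else in the app you can then inject the holder and look up the template for the binding you are interested in.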
Related
For managing the headers of messages produced/consumed by the Kafka binder there is the KafkaHeaderMapper interface, whose implementation can be registered as a bean and configured with the following property: spring.cloud.stream.kafka.binder.headerMapperBeanName.
Is there something similar for the Kafka Streams binder in Spring Cloud Stream? My intention is to be able to control how message headers are serialized/deserialized, or to include/exclude them, on stream input and output. Does anyone know how to achieve this?
There is no equivalent for the Kafka Streams binder; Spring is only involved in setting up the infrastructure/topology, not in the runtime record processing.
You can, however, use a custom serializer/deserializer and manipulate the headers there.
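For example, a delegating serializer can drop a header before the record hits the wire (the header name and String payload type here are just for illustration); the header-aware serialize overload is the hook Kafka gives you:

```java
import java.util.Map;

import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.StringSerializer;

// Delegating serializer that strips an internal header before the record is written.
public class HeaderFilteringSerializer implements Serializer<String> {

    private final Serializer<String> delegate = new StringSerializer();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        delegate.configure(configs, isKey);
    }

    @Override
    public byte[] serialize(String topic, String data) {
        return delegate.serialize(topic, data);
    }

    @Override
    public byte[] serialize(String topic, Headers headers, String data) {
        headers.remove("x-internal-trace"); // hypothetical header you don't want on the wire
        return delegate.serialize(topic, headers, data);
    }

    @Override
    public void close() {
        delegate.close();
    }
}
```

A deserializer can do the mirror image on the way in, and the pair can be wrapped with Serdes.serdeFrom(serializer, deserializer) and used as the serde for the stream's input/output.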
I have a Reactor-based Spring Boot Kafka stream-processing app that I am writing integration tests for. I am using Spring's @EmbeddedKafka broker. It works great: I have it overriding the bootstrap broker URLs that get configured on my reactive processor's consumer and publisher. What I haven't figured out yet is how to deal with the schema registry for my processor when testing.

I'm using Confluent's KafkaAvroSerializer and KafkaAvroDeserializer classes and just have the schema.registry.url field configured in my Spring app configs to be injected into the Kafka properties. I'm using Confluent's MockSchemaRegistryClient for the test producer and consumer, but what I need is a way to inject this mock client into the actual consumer and producer in my stream-processor code, and I see no way to do that. It almost seems like I need something closer to an embedded version of the schema registry to point them at, like the embedded broker. Our build pipeline does not support spinning up containers, otherwise I'd use Docker or Testcontainers. Has anyone else solved this already? Any help or suggestions appreciated.
I managed to figure this out. If you use a URL that begins with mock:// for your test's SerDes, and you override the schema.registry.url property in the @SpringBootTest annotation with the same mock URL, then your processor's consumer and producer will also pick up and use this mock schema registry client, and everything just works!
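As a rough sketch of how that looks (the topic name and the test-registry scope name are arbitrary, and the property key must match however your app feeds schema.registry.url into its Kafka properties):

```java
import java.util.Map;

import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;

// Both the processor (via the property override) and the test's own SerDes point
// at the same mock:// scope, so they share one in-memory schema registry.
@SpringBootTest(properties = "schema.registry.url=mock://test-registry")
@EmbeddedKafka(partitions = 1, topics = "orders")
class ProcessorIntegrationTest {

    private KafkaAvroSerializer testValueSerializer() {
        KafkaAvroSerializer serializer = new KafkaAvroSerializer();
        serializer.configure(Map.of("schema.registry.url", "mock://test-registry"), false);
        return serializer;
    }

    // ... tests that publish with testValueSerializer() and assert on the processor's output
}
```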
I started developing a Spring Cloud Stream project. I can successfully receive messages from Kafka through the @StreamListener annotation. Before sending the message to any output channel, I have to transform the payload by calling an external service or making a DB call. I don't want to call the external service or DB method from the same @StreamListener method. My question is: can we create internal channels (like a Spring Integration DSL flow) in Spring Cloud Stream?
Yes, you can. In Spring Cloud Stream, a channel is only bound to the binder when you declare a binding for it; any other channels remain internal and behave like ordinary Spring Integration channels.
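A minimal sketch of that idea (ExternalService and its enrich method are made-up names): only the Processor input and output channels are bound to Kafka by the binder, while the transform step between them is an ordinary in-memory Spring Integration step.

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
@EnableBinding(Processor.class)
public class InternalChannelFlowConfig {

    // Made-up collaborator representing the external service / DB call.
    public interface ExternalService {
        Object enrich(Object payload);
    }

    @Bean
    public IntegrationFlow enrichmentFlow(ExternalService externalService) {
        return IntegrationFlows.from(Processor.INPUT)      // channel bound to Kafka by the binder
                .transform(externalService::enrich)         // internal step, never touches the broker
                .channel(Processor.OUTPUT)                   // bound output channel
                .get();
    }
}
```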
What benefits does Spring's KafkaTemplate provide?
I have tried the existing Producer/Consumer API that ships with Kafka. It is very simple to use, so why use KafkaTemplate?
KafkaTemplate internally uses a Kafka producer, so you could use the Kafka API directly instead. The benefit of KafkaTemplate is that it provides a variety of methods for sending messages to a Kafka topic; you can compare the two APIs here:
https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
https://docs.spring.io/spring-kafka/api/org/springframework/kafka/core/KafkaTemplate.html
You can see that KafkaTemplate provides many additional ways of sending data to Kafka topics through its various send methods, while other calls are the same as the Kafka API and are simply forwarded from KafkaTemplate to KafkaProducer.
It's up to the developer what to use. If you find KafkaTemplate easier because you don't have to create a ProducerRecord yourself, a simple send method will do all the work for you.
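For illustration, roughly how the same send looks with each (broker address and topic are placeholders):

```java
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

public class SendComparison {

    public static void main(String[] args) {
        Map<String, Object> props = Map.of(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // Plain Kafka client: you build the ProducerRecord and manage the producer lifecycle.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "key-1", "payload"));
        }

        // KafkaTemplate: the ProducerFactory carries the configuration and the
        // overloaded send(topic, key, value) builds the record for you.
        KafkaTemplate<String, String> template =
                new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
        template.send("orders", "key-1", "payload");
        template.flush();
    }
}
```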
At a high level, the benefit is that you can externalize your producer/consumer properties more easily and just focus on the record-processing logic.
Plus, Spring is integrated with lots of other components.
Note: other options exist as well, such as Reactor Kafka, Alpakka, Apache Camel, SmallRye Reactive Messaging, Vert.x... but they all wrap the same Kafka API.
So I'd say you're (marginally) trading efficiency for convenience.
I am using Spring, Spring WebSocket, and STOMP for my application, with RabbitMQ as the broker. I need to log all messages going through RabbitMQ to PostgreSQL tables.
I know that I can write a @MessageMapping handler in Spring and log there, but my problem is that some clients talk to RabbitMQ directly over the MQTT protocol, which Spring does not support yet (https://jira.spring.io/browse/SPR-12581), while browser clients talk to RabbitMQ through Spring using the STOMP protocol.
RabbitMQ lets you trace all messages using the Firehose tracer. How do I properly listen to the amq.rabbitmq.trace topic from Spring? Or do I need to write a separate Java app as a consumer?
Spring AMQP is for you!
You bind a custom queue to the amq.rabbitmq.trace exchange with an appropriate pattern (e.g. publish.#) and configure a SimpleMessageListenerContainer to receive messages from that queue.
It can even be done with a pretty simple config: @EnableRabbit and a @RabbitListener on some POJO method. Either way, a Binding @Bean must be there to attach your queue to that exchange.
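A minimal sketch, assuming a Spring Boot app with spring-boot-starter-amqp (queue and bean names are made up); note that the Firehose itself still has to be switched on with rabbitmqctl trace_on:

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableRabbit
public class FirehoseTraceConfig {

    @Bean
    public Queue traceQueue() {
        return new Queue("firehose.trace", true);
    }

    @Bean
    public Binding traceBinding() {
        // amq.rabbitmq.trace already exists on the broker, so only the queue and the
        // binding are declared here; "publish.#" captures every published message.
        return BindingBuilder.bind(traceQueue())
                .to(new TopicExchange("amq.rabbitmq.trace"))
                .with("publish.#");
    }

    @RabbitListener(queues = "firehose.trace")
    public void onTrace(Message message) {
        // persist message.getMessageProperties() and message.getBody() to PostgreSQL here
    }
}
```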