Spring Sleuth KafkaTemplate injection - spring-boot

I have a project with Spring Boot 2.1.9, Spring Kafka, and Spring Sleuth (2.1.6).
Everything was going well until I reached the point of tracing messages to/from Kafka.
Kafka messaging is done through:

    kafkaTemplate.send(uri, pojo)

And here I realized that nothing is being injected into the Kafka messages: I debugged KafkaTemplate's doSend and printed the message received by my @KafkaListener (looking for the keys from brave...KafkaKeys), and never saw any sign of tracing.
As I understand from the docs, this instrumentation is enabled by default (and note that I have not disabled "messaging" or "integration").
I tried registering a custom bean implementing Propagation.Setter just to see whether it is actually called, and it never was.
Additional note: I found that the plain org.apache.kafka...KafkaProducer (from spring-kafka 2.2.9) is used instead of Sleuth's traced one.
What am I missing?
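For reference, one way to check whether the instrumentation is active is to dump the record headers on the consumer side; a traced producer adds B3 propagation headers (e.g. b3 or X-B3-TraceId, depending on version). A minimal diagnostic sketch, where the topic name demo-topic is just a placeholder:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.header.Header;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class HeaderDumpListener {

        // If tracing is wired up, B3 headers should appear alongside the payload
        @KafkaListener(topics = "demo-topic")
        public void listen(ConsumerRecord<String, String> record) {
            for (Header header : record.headers()) {
                System.out.printf("%s = %s%n", header.key(), new String(header.value()));
            }
        }
    }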

Related

Spring Sleuth implementation for Apache Kafka Producer and Consumer

I am not using the Spring Kafka module to produce and consume messages. Instead, I am using the Apache client library with producer and consumer implementations. Since I am not using Spring Kafka, the Spring Sleuth auto-configuration is not applied and no traces are generated. I have referred to https://docs.spring.io/spring-cloud-sleuth/docs/current-SNAPSHOT/reference/html/integrations.html but I cannot find any documentation on how to apply Spring Sleuth to code that uses third-party libraries.
If you're not using Spring, then don't add it to non-Spring code.
The Kafka traces in Sleuth are implemented using Brave, so you can use the Brave Kafka Interceptor or Brave's kafka-clients instrumentation directly.
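A minimal sketch of that approach, using Brave's kafka-clients wrapper (brave-instrumentation-kafka-clients); the service name, broker address, and topic below are placeholder assumptions:

    import brave.Tracing;
    import brave.kafka.clients.KafkaTracing;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TracedProducerExample {
        public static void main(String[] args) {
            Tracing tracing = Tracing.newBuilder().localServiceName("my-producer").build();
            KafkaTracing kafkaTracing = KafkaTracing.create(tracing);

            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // Wrap the plain Apache producer so trace headers are injected on send
            Producer<String, String> producer =
                    kafkaTracing.producer(new KafkaProducer<>(props));
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
            producer.close();
            tracing.close();
        }
    }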

Spring Reactive Stack with Spring for Apache Kafka

In a few words:
I'm trying to decide between using the default Spring for Apache Kafka stack (KafkaTemplate) or the pair ReactiveKafkaProducerTemplate and ReactiveKafkaConsumerTemplate for my Reactor-based application.
Some more context:
At the company I work for, we're developing a high-availability application that publishes a set of requests directly to a Kafka broker. Since this is an API-centric application expecting to receive a few million requests per week, we decided to go with a stack based on Project Reactor with Spring WebFlux and Kotlin.
After doing some digging, I discovered that Spring for Apache Kafka has a simple wrapper around the Reactor Kafka implementation, but this wrapper lacks a lot of the functionality present in the default KafkaTemplate mentioned before, things like: a metrics binder out of the box (for Prometheus integration), associated factories, extensive documentation, auto-configuration, etc.
I'm trying to understand what I'm really giving up by using the default implementation in favor of the reactive one. Am I giving up back-pressure functionality? Am I sacrificing the reactive stack present in my application? Will this take a toll in the future? Does anyone have experience working with a reactive stack alongside a non-reactive solution?
I also have a few concerns regarding the DLT flow facilitated by the default implementation, such as the SeekToCurrentErrorHandler strategy.
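For reference, a minimal sketch of what the reactive wrapper's send path looks like (the broker address and topic are placeholder assumptions); note that nothing is produced until something subscribes, which is where the back pressure comes from:

    import java.util.Map;
    import org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;
    import reactor.kafka.sender.SenderOptions;

    public class ReactiveSendSketch {
        public static void main(String[] args) {
            SenderOptions<String, String> options = SenderOptions.create(Map.<String, Object>of(
                    "bootstrap.servers", "localhost:9092", // placeholder broker
                    "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                    "value.serializer", "org.apache.kafka.common.serialization.StringSerializer"));

            ReactiveKafkaProducerTemplate<String, String> template =
                    new ReactiveKafkaProducerTemplate<>(options);

            // send(...) returns a Mono; the record is only produced on subscription
            template.send("demo-topic", "payload")
                    .doOnSuccess(result -> System.out.println(result.recordMetadata()))
                    .block(); // blocking only for this standalone demo
        }
    }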

Get underlying low-level Kafka consumers and producers in Spring Cloud Stream

I have a use case where I want to get at the underlying Kafka producer (KafkaTemplate) in a Spring Cloud Stream application. While navigating the code I stumbled upon KafkaProducerMessageHandler, which has a getKafkaTemplate method. However, it fails to auto-wire.
Also, if I auto-wire KafkaTemplate directly, the template is initialized with default properties and ignores the brokers configured under the binder key of the Spring Cloud Stream configuration.
How can I access the underlying KafkaTemplate or a producer/consumer in a Spring Cloud Stream app?
EDIT: Actually my app has multiple Kafka binders, and I want to get the KafkaTemplate or Kafka producer corresponding to each binder. Is that somehow possible?
It's not entirely clear why you would need to do that, but you can capture the KafkaTemplates by adding a ProducerMessageHandlerCustomizer @Bean to the application context.
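A sketch of that idea (package names as of Spring Cloud Stream 3.x; verify against your version). The customizer runs once per output binding, so with multiple binders each destination's own template gets captured:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.cloud.stream.config.ProducerMessageHandlerCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
    import org.springframework.kafka.core.KafkaTemplate;

    @Configuration
    public class TemplateCaptureConfig {

        // destinationName -> the template actually used by that binding
        private final Map<String, KafkaTemplate<?, ?>> templates = new ConcurrentHashMap<>();

        @Bean
        public ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> captureTemplates() {
            return (handler, destinationName) ->
                    templates.put(destinationName, handler.getKafkaTemplate());
        }

        public Map<String, KafkaTemplate<?, ?>> getTemplates() {
            return templates;
        }
    }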

JAX-RS and AMQP Zipkin integration

I've been roaming the depths of the internet, but I find myself unsatisfied by the examples I've found so far. Can someone point me to, or show me, a good starting point for integrating Zipkin tracing with JAX-RS clients and AMQP clients?
My scenario is quite simple, and I'd expect this task to be trivial, tbh. We have a microservices-based architecture, and it's time we start tracing our requests to get a global perspective of our inter-service dependencies and what the requests actually look like (we do have metrics, but I want more!). The communication is done via auto-generated JAX-RS clients, and we use RabbitTemplate for messaging.
I've seen Brave integrations with JAX-RS, but they are a bit simplistic. My Zipkin server is a Spring Boot mini app using stream-rabbit, so Zipkin data is sent over RabbitMQ.
Thanks in advance.
After some discussion with Marcin Grzejszczak and Adrian Cole (Zipkin and Sleuth creators/active developers), I ended up creating a Jersey filter that acts as a bridge between Sleuth and Brave. Regarding AMQP integration, I added a new @StreamListener with a condition for Zipkin-format spans (using headers). Messages sent to the Sleuth exchange in Zipkin format are then valid and consumed by the listener. For JavaScript (zipkin-js), I ended up creating a new AMQP logger that sends Zipkin spans to a given exchange. If someone ends up reading this and needs more detail, you're welcome to reach out to me.
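For illustration only, the header-conditional listener could look roughly like this; the binding, header name, and value are hypothetical, not the poster's actual code:

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Sink;

    @EnableBinding(Sink.class)
    public class ZipkinSpanListener {

        // Only messages whose headers mark them as zipkin-format spans land here
        @StreamListener(target = Sink.INPUT, condition = "headers['format'] == 'zipkin'")
        public void onZipkinSpan(String spanJson) {
            System.out.println("zipkin span: " + spanJson); // forward to the collector here
        }
    }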

Spring Cloud Contract and plain Spring AMQP

We are using plain Spring AMQP in our Spring Boot projects.
We want to make sure that our message consumers can test against real messages rather than static test messages.
So our producers could generate message snippets in a test phase that are picked up by the consumer tests, making sure each consumer tests against the latest message version and revealing when a change in the producer breaks a consumer.
It seems like Spring Cloud Contract does exactly that. So, is there a way to integrate Spring Cloud Contract with Spring AMQP? Any hints on which direction to go would be highly appreciated.
Actually, we don't support it out of the box, but you can set it up yourself. In the autogenerated tests we use an interface to receive and send messages, so you could implement your own class that uses Spring AMQP. The same goes for the consumer side (the stub runner). What you would need to do is implement and register a bean of type org.springframework.cloud.contract.verifier.messaging.MessageVerifier for both producer and consumer. This should work because what we do in the autogenerated tests is @Inject MessageVerifier, so if you register your own bean it will be used.
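A rough sketch of such a bean, backed by Spring AMQP's RabbitTemplate; the method set follows the MessageVerifier interface as of Spring Cloud Contract 1.x/2.x, so check the exact signatures for your version:

    import java.util.Map;
    import java.util.concurrent.TimeUnit;
    import org.springframework.amqp.core.Message;
    import org.springframework.amqp.core.MessageBuilder;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.cloud.contract.verifier.messaging.MessageVerifier;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class AmqpMessageVerifierConfig {

        @Bean
        public MessageVerifier<Message> amqpMessageVerifier(RabbitTemplate rabbitTemplate) {
            return new MessageVerifier<Message>() {

                @Override
                public void send(Message message, String destination) {
                    rabbitTemplate.send(destination, message);
                }

                @Override
                public <T> void send(T payload, Map<String, Object> headers, String destination) {
                    // Naive payload conversion; a real setup would use a MessageConverter
                    MessageBuilder builder = MessageBuilder.withBody(payload.toString().getBytes());
                    headers.forEach((k, v) -> builder.setHeader(k, v));
                    rabbitTemplate.send(destination, builder.build());
                }

                @Override
                public Message receive(String destination, long timeout, TimeUnit timeUnit) {
                    return rabbitTemplate.receive(destination, timeUnit.toMillis(timeout));
                }

                @Override
                public Message receive(String destination) {
                    return receive(destination, 5, TimeUnit.SECONDS);
                }
            };
        }
    }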
UPDATE:
As @Mathias mentioned, AMQP support is now available in Spring Cloud Contract: https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html#_stub_runner_spring_amqp
