Can I use Zipkin, Sleuth, MDC, ELK, and Jaeger together? (spring-boot)

I've read many articles, and I gather that I just need to include the starters in Spring Boot.
Can anyone sort this out: does Sleuth create the MDC (Mapped Diagnostic Context)? Does Sleuth create the trace ID that Zipkin uses? Can I see this ID in Kibana, or do I need to use the Zipkin API? Are there best practices for using all of these together? Does Jaeger replace both Zipkin and Sleuth, or how does it fit in?

Yes, you can, and I have shown that numerous times during my presentations (https://toomuchcoding.com/talks); we also describe it extensively in the documentation (https://docs.spring.io/spring-cloud-sleuth/docs/current/reference/html/). Sleuth sets up your logging pattern, which you can then parse and visualize using the ELK stack. Sleuth takes care of tracing context propagation and can send the spans to a span storage (e.g. Zipkin or Jaeger). Sleuth does take care of updating the MDC for you. Please always read the documentation and the project page before filing a question.
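For illustration, here is a minimal sketch of what that gives you (the controller and endpoint are hypothetical; the MDC keys "traceId"/"spanId" assume Sleuth 3.x, while older versions used the "X-B3-*" names):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class TraceDemoController {

        private static final Logger log = LoggerFactory.getLogger(TraceDemoController.class);

        @GetMapping("/hello")
        public String hello() {
            // With spring-cloud-starter-sleuth on the classpath, Sleuth has
            // already populated the MDC for this traced request; the same ids
            // appear in the default log pattern, which is what ELK parses.
            log.info("traceId={} spanId={}", MDC.get("traceId"), MDC.get("spanId"));
            return "hello";
        }
    }

These are the IDs you can search for in Kibana, and the same trace ID is what Zipkin's UI keys on.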

Related

How to specify which paths should be traced by Sleuth with Zipkin?

I'm looking for information on how to specify which endpoints should be traced by Sleuth. So far, all I've found in the Sleuth docs is information on how to specify which endpoints shouldn't be traced. I'm going in the opposite direction, so is there any property or config that controls this?
Spring Cloud Sleuth pushes the sampling decision down to the tracer implementation. To do this you need to create either a SamplerFunction or a Sampler; please see the docs (a rough sketch follows the links):
Sleuth: Sampling
Sleuth: Brave Sampling
Brave: Sampling
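As a rough sketch of such a sampler (the class and the path prefix are illustrative, and how you register it depends on your Sleuth version, so check the "Sampling" sections linked above):

    import brave.http.HttpRequest;
    import brave.sampler.SamplerFunction;

    // Hypothetical path-based sampler: only /api/** requests are eligible for
    // tracing; every other endpoint is never sampled.
    public class ApiOnlySampler implements SamplerFunction<HttpRequest> {

        @Override
        public Boolean trySample(HttpRequest request) {
            String path = request.path();
            if (path != null && path.startsWith("/api")) {
                return null; // make no decision: defer to the configured probability
            }
            return false; // never trace anything else
        }
    }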

Put trace ID for each message in Kafka listener

How can I use KafkaListenerAnnotationBeanPostProcessor to add a trace ID to each message coming to the consumer?
I want to avoid calling ThreadContext.put() in each of the listeners. What's the best practice for this purpose? Is there a better way of doing it without using KafkaListenerAnnotationBeanPostProcessor?
Also, I cannot use Sleuth because it's causing issues with my application. Any help would be appreciated.
Most tracing libraries (including Spring Cloud Sleuth, which internally uses Brave as its tracing library) use a ThreadLocal to store the tracing context. So it would be simpler to use a ThreadLocal yourself if all you want is a tracing ID. Of course, in that case you would need to propagate the tracing ID yourself. You can use Spring AOP to apply the ThreadLocal trace-ID generation/cleanup logic in a non-invasive way, as sketched below. Another approach is to integrate the Brave library yourself, as detailed in Brave's supported instrumentation options for Kafka clients.
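As a rough sketch of the Spring AOP approach (everything here is illustrative: the MDC key name, the use of SLF4J's MDC rather than Log4j2's ThreadContext, and the pointcut on @KafkaListener; note that this only tags logs locally and does not propagate the ID anywhere):

    import java.util.UUID;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;
    import org.slf4j.MDC;
    import org.springframework.stereotype.Component;

    // Generates a fresh trace id per consumed message and stores it in the
    // ThreadLocal-backed MDC, so every log line in the listener carries it
    // without touching the listeners themselves.
    @Aspect
    @Component
    public class KafkaListenerTraceIdAspect {

        @Around("@annotation(org.springframework.kafka.annotation.KafkaListener)")
        public Object withTraceId(ProceedingJoinPoint pjp) throws Throwable {
            MDC.put("traceId", UUID.randomUUID().toString());
            try {
                return pjp.proceed();
            } finally {
                MDC.remove("traceId"); // always clean up the ThreadLocal state
            }
        }
    }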

Difference between Zipkin and Elastic Stack (ELK)?

Spring Cloud Sleuth is used for creating trace IDs (unique to a request across services) and span IDs (one per unit of work). My understanding is that the Zipkin server is used to get a collective visualization of these logs across services. But I know and have used the ELK stack, which serves essentially the same function: we can group requests with the same trace ID for visualization using ELK. Yet I see people implementing distributed tracing with Sleuth and ELK along with Zipkin, as in these examples (Link1, Link2). Why do we need Zipkin if ELK already handles log collection and visualization? What am I missing?

Merge Spring Boot actuator and Micrometer data on one endpoint

I have a number of applications that are using the Spring Boot Actuator to publish metrics to the /metrics endpoint.
I have some other applications that are also using Micrometer to publish metrics to a /prometheus endpoint.
And finally, I have a cloud provider that will only allow me to pull metrics from a single endpoint. They have many ready-made Grafana dashboards, but most are targeted at the Actuator variable names, while some are targeted at the Micrometer variable names.
Micrometer puts out the same data, but it uses different names than Actuator, e.g. "jvm_memory" instead of "mem".
I would really like to find a way to merge both of these data sources so that they dump data to a single endpoint, and all of my Grafana dashboards would just work with all of the applications.
But I'm at a loss as to the best way to do this. Is there a way to tell Micrometer to use /metrics as a data source so that any time it is polled it will include those metrics?
Any thoughts are greatly appreciated.
The best solution probably depends on the complexity of your dashboards. You might just configure a set of gauges that report the values under the old names and then only use the Micrometer scrape endpoint. For example:
    import io.micrometer.core.instrument.Gauge;
    import io.micrometer.core.instrument.Tags;
    import io.micrometer.core.instrument.binder.MeterBinder;
    import org.springframework.context.annotation.Bean;

    // Re-publish the rolled-up JVM memory usage under the old Actuator name.
    @Bean
    public MeterBinder mapToOldNames() {
        return r -> {
            r.gauge("mem", Tags.empty(), r, r2 -> r2.find("jvm.memory.used").gauges()
                    .stream().mapToDouble(Gauge::value).sum());
        };
    }
Notice how in this case we are taking a metric that is dimensional in Micrometer (split across the different areas of heap/non-heap memory) and rolling it up into a single gauge to match the old name.
For Spring Boot 1.5 you could do something like the Prometheus `simpleclient_spring_boot` does.
You collect the PublicMetrics instances from the actuator-metrics context and expose/register them as gauges/counters in the Micrometer MeterRegistry. This in turn exposes those Actuator metrics on your Prometheus scrape endpoint.
I assume you'd filter out the non-functional metrics that duplicate the Micrometer ones, so the main candidates to carry over are functional/business metrics. But if you have the chance to actually migrate the code to Micrometer, I'd say that's the better approach.
I haven't tried this; I just remembered having seen the concept.
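To make the idea concrete, here's a rough, untested sketch (the class name and wiring are hypothetical; only metrics that exist when the bridge runs get registered, and each gauge re-reads its source on every scrape):

    import java.util.Collection;

    import io.micrometer.core.instrument.MeterRegistry;
    import org.springframework.boot.actuate.endpoint.PublicMetrics;

    // Registers one Micrometer gauge per Spring Boot 1.5 actuator metric so
    // the old names show up on the Micrometer/Prometheus scrape endpoint.
    public class PublicMetricsBridge {

        public PublicMetricsBridge(Collection<PublicMetrics> sources, MeterRegistry registry) {
            for (PublicMetrics source : sources) {
                source.metrics().forEach(metric -> {
                    String name = metric.getName();
                    registry.gauge(name, source, s -> currentValue(s, name));
                });
            }
        }

        private static double currentValue(PublicMetrics source, String name) {
            return source.metrics().stream()
                    .filter(m -> name.equals(m.getName()))
                    .mapToDouble(m -> m.getValue().doubleValue())
                    .findFirst()
                    .orElse(Double.NaN);
        }
    }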

JAX-RS and AMQP Zipkin integration

I've been roaming the depths of the internet, but I find myself unsatisfied by the examples I've found so far. Can someone point me to, or show me, a good starting point for integrating Zipkin tracing with JAX-RS clients and AMQP clients?
My scenario is quite simple, and I'd expect this task to be trivial, tbh. We have a microservices-based architecture, and it's time we start tracing our requests to get a global perspective of our inter-service dependencies and what the requests actually look like (we do have metrics, but I want more!). The communication is done via auto-generated JAX-RS clients, and we use RabbitTemplate for messaging.
I've seen Brave integrations with JAX-RS, but they are a bit simplistic. My Zipkin server is a Spring Boot mini app using stream-rabbit, so Zipkin data is sent over RabbitMQ.
Thanks in advance.
After some discussion with Marcin Grzejszczak and Adrian Cole (Zipkin and Sleuth creators/active developers), I ended up creating a Jersey filter that acts as a bridge between Sleuth and Brave. Regarding AMQP integration, I added a new @StreamListener with a condition for Zipkin-format spans (using headers); messages sent to the Sleuth exchange in Zipkin format are then valid and consumed by that listener. For JavaScript (zipkin-js), I ended up creating a new AMQP logger that sends Zipkin spans to a designated exchange. If someone ends up reading this and needs more detail, you're welcome to reach out to me.
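For anyone looking for a starting point, a rough sketch of such a conditional listener might look like this (the "format" header and its value are made up; the condition attribute itself is standard Spring Cloud Stream SpEL):

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Sink;

    @EnableBinding(Sink.class)
    public class ZipkinSpanStreamListener {

        // Only messages carrying the (hypothetical) header format=zipkin land
        // here; other messages on the channel go to other listeners.
        @StreamListener(target = Sink.INPUT, condition = "headers['format'] == 'zipkin'")
        public void zipkinSpans(String spanJson) {
            // hand the Zipkin-format span over to the collector / span storage
        }
    }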