Spring Sleuth Zipkin Extra Field Propagation

I am new to distributed logging and I need help with the propagation of extra fields across HTTP requests and messaging requests.
Currently I am able to propagate the traceId and spanId, but I also need a correlationId to be propagated across all the microservices.
spring:
  sleuth:
    baggage:
      correlation-fields:
        - x-correlation-id
      remote-fields:
        - x-correlation-id
logback.xml
%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(%5p [${appName},%X{traceId:-},%X{parentId:-},%X{spanId:-},%X{x-correlation-id:-}]) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%t]){faint} %clr(%logger{20}:%line){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
I am a bit curious about how to pass the correlation id to other services.
In case the message starts from Service A -
Service A (Message Started,CorrelationID-123) -> ServiceB (CorrelationID-123) -> ServiceC(CorrelationID-123)
In case it starts from Service B -
Service B (Message Started,CorrelationID-123) -> ServiceA (CorrelationID-123) -> ServiceC(CorrelationID-123)
How will the correlation id be passed to Kafka messages?
How will the correlation id be passed to HTTP requests?
Is it possible to use the existing traceId from another service?

I think what you call correlationId is in fact the traceId. If you are new to distributed tracing, I highly recommend reading the docs of Spring Cloud Sleuth: the introduction section will give you a basic understanding, while the propagation section will tell you, well, how your fields are propagated across services.
I also recommend this talk: Distributed Tracing: Latency Analysis for Your Microservices - Grzejszczak, Krishna.
To answer your exact questions:
How will the correlation id be passed to Kafka messages?
Kafka has headers; I assume the fields are propagated through Kafka headers.
How will the correlation id be passed to HTTP requests?
Through HTTP Headers.
Is it possible to use the existing traceId from another service?
Not just possible, Sleuth does this for you out of the box. If there is a traceId in the incoming request/message/event/etc., Sleuth will not create a new one but will use it (see the docs I linked above).
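To make the mechanics concrete: both the HTTP and the Kafka cases boil down to the same pattern, i.e. copy the field into the outgoing carrier's headers, and on the receiving side prefer the incoming value over generating a new one. Sleuth does all of this for you; the self-contained toy below (class and method names are made up for illustration, a real carrier would be HTTP headers or Kafka record headers) just shows the decision:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class CorrelationPropagation {
    static final String HEADER = "x-correlation-id";

    // Outgoing side: copy the current correlation id into the carrier
    // (HTTP headers, Kafka record headers, ...) before sending.
    static Map<String, String> inject(String correlationId) {
        Map<String, String> carrier = new HashMap<>();
        carrier.put(HEADER, correlationId);
        return carrier;
    }

    // Incoming side: reuse the id from the carrier if present,
    // otherwise start a new one (the "message started here" case).
    static String extract(Map<String, String> carrier) {
        return carrier.getOrDefault(HEADER, UUID.randomUUID().toString());
    }

    public static void main(String[] args) {
        // Service A starts the message with id 123 ...
        Map<String, String> carrier = inject("123");
        // ... Service B receives it and keeps the same id.
        System.out.println(extract(carrier)); // prints 123
    }
}
```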

Related

OpenTelemetry: Context propagation using messaging (Artemis)

I wrote some microservices using Quarkus that communicate via Artemis. Now I want to add OpenTelemetry for tracing purposes.
What I already tried is to call service B from service A using HTTP/REST. Here the trace id from service A is automatically added to the header of the HTTP request and used in service B. So this works fine. In Jaeger I can see the correlation.
But how can this be achieved using Artemis as the messaging system? Do I have to (manually) add the trace id from service A into the message and read it in service B to somehow set up the context (I don't know whether this is possible)? Or is there an automatic mechanism like there is for HTTP requests?
I would appreciate any assistance.
I have to mention at this point that I have little experience with tracing so far.
There is no Quarkus or Quarkiverse extension or SmallRye lib that provides integration between Artemis and OpenTelemetry yet.
Also, the OpenTelemetry messaging spec is being worked on at the moment, because the correct way to correlate sent messages, received messages and services is still under definition at the OTel spec level.
However, I had exactly the same problem as you and wrote manual instrumentation that you can use as inspiration: quarkus-observability-demo-activemq
It will correlate the sending service as the parent of the receiving end.
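For manual instrumentation, the value you copy into a message property is typically the W3C traceparent header, whose format is version-traceId-spanId-flags. A small self-contained sketch of building and reading it (the helper names here are made up; in real code you would let OpenTelemetry's TextMapPropagator do the inject/extract against the message properties):

```java
public class TraceParent {
    // Build a W3C traceparent value: 00-<32 hex traceId>-<16 hex spanId>-<flags>.
    static String format(String traceId, String spanId, boolean sampled) {
        return "00-" + traceId + "-" + spanId + (sampled ? "-01" : "-00");
    }

    // Receiving side: pull the trace id back out of the propagated value.
    static String traceId(String traceparent) {
        return traceparent.split("-")[1];
    }

    public static void main(String[] args) {
        String tp = format("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7", true);
        System.out.println(tp);          // the full traceparent string
        System.out.println(traceId(tp)); // the trace id shared by both services
    }
}
```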

Propagate traceId from Microservice and Kafka

I am using Spring Sleuth to generate logs that give me a traceId and spanId. I want to propagate these to a Kafka topic, maybe in the headers, and from there to another microservice that will consume them from the headers of that topic, so that traces are consistent for the same payload. Are there any properties I can configure to make this work?
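No answer is recorded for this one here, but for what it's worth: with Spring Cloud Sleuth, Kafka header propagation is usually a configuration switch rather than code. A minimal sketch, using the Sleuth 2.x property names (in Sleuth 3.x the messaging instrumentation is, I believe, enabled by default):

```yaml
spring:
  sleuth:
    messaging:
      kafka:
        enabled: true
```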

Data propagation between downstream request/response and upstream request/response interception

I'm new to Quarkus and need to create a POC which replicates our current Java servlet filters and client interceptors, which use ThreadLocal maps and AspectJ to propagate and examine data between downstream and upstream requests/responses across multiple async calls.
Where can I find example code in Quarkus which does something similar?

Conditional propagation of span ids using Spring Sleuth

I'm working on a project with tens of services, using Spring Sleuth and Zipkin, but I was wondering if there is any way to conditionally propagate logs to the Zipkin server.
Actually, it would be perfect if the log was propagated only when the distributed transaction failed (like when using a saga pattern). The thing is, we have a huge workload (millions of requests per hour) and we are interested only in failed requests.
You can't propagate logs to Zipkin; you can publish Spans.
Depending on your needs, you can use a SamplerFunction, a Sampler or a SpanHandler, see this answer: https://stackoverflow.com/a/69981877/971735
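To sketch the SpanHandler route: the real implementation would extend brave.handler.SpanHandler and override end(...), returning false to drop a span before it reaches Zipkin. The self-contained toy below (with a made-up Span stand-in instead of Brave's MutableSpan) just shows the keep-only-failures decision:

```java
import java.util.List;
import java.util.Map;

public class FailedOnlyExport {
    // Stand-in for Brave's MutableSpan; only the error tag matters here.
    record Span(String name, Map<String, String> tags) {
        boolean failed() {
            return tags.containsKey("error");
        }
    }

    // Mirrors SpanHandler.end(): return true to export, false to drop.
    static boolean shouldExport(Span span) {
        return span.failed();
    }

    public static void main(String[] args) {
        List<Span> spans = List.of(
                new Span("ok-call", Map.of()),
                new Span("bad-call", Map.of("error", "500 from downstream")));
        for (Span s : spans) {
            System.out.println(s.name() + " exported=" + shouldExport(s));
        }
    }
}
```

Note that dropping spans this way still pays the cost of recording them; a Sampler or SamplerFunction avoids that cost but must decide up front, before knowing whether the request will fail.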

Spring Boot Micro Service Tracing Options

I have the below requirements; is there any open source library that will cover all of them?
1. We are building a distributed microservice architecture with Spring Boot, which includes more than 100 microservices.
2. There is a lot of inter-microservice communication involved in achieving a single transaction.
3. We want to trace every microservice call, and the trace should provide the following information:
a. Transaction ID/Trace ID
b. Back-end transaction status - HTTP status for REST, and likewise for SOAP.
c. Time taken for that call.
d. Request and response payload.
Currently we are achieving this using an in-house tracing framework. Is there any open source project that will handle all this without any coding from the developer? I know we have a few options with Spring Boot: Cloud Zipkin, Sleuth etc. Do these handle the above requirements?
My project has similar requirements to yours. IMHO, Spring Cloud Sleuth + Zipkin work well in my case.
For all inter-microservice communication we are using Kafka, and Spring Cloud Sleuth + Zipkin has no problem tracing all the calls, from REST -> Kafka -> more Kafka -> REST.
To enable Kafka tracing, simply add:
spring:
  sleuth:
    propagation-keys: some-key
    sampler:
      probability: 1
    messaging:
      kafka:
        enabled: true
We are also using Azure Application Insights to do centralized logging, which is well integrated with Spring Cloud.
Hope the above gives you some confidence in using Sleuth + Zipkin.