I'm looking for information about how to specify which endpoints should be traced by Sleuth. So far, all I've found in the Sleuth docs is information about how to specify which endpoints shouldn't be traced. I want to go in the opposite direction, so is there any property or config that controls this?
Spring Cloud Sleuth pushes the sampling decision down to the tracer implementation; you need to create either a SamplerFunction or a Sampler to do this. Please see the docs:
Sleuth: Sampling
Sleuth: Brave Sampling
Brave: Sampling
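For instance, with the Brave tracer you can build a rule-based sampler that always samples the endpoints you care about, while everything else falls back to the default sampling decision (a minimal sketch; the path /api/orders is just an example, and the exact class/matcher names and how the function is registered with Sleuth depend on your Brave/Sleuth versions, so check the docs linked above):

import brave.http.HttpRequest;
import brave.http.HttpRuleSampler;
import brave.sampler.Sampler;
import brave.sampler.SamplerFunction;

import static brave.http.HttpRequestMatchers.pathStartsWith;

public class TraceOnlySelectedEndpoints {

    // Requests under /api/orders are always sampled; requests that match no rule
    // defer to the default sampler (e.g. the probability sampler).
    SamplerFunction<HttpRequest> serverSampler = HttpRuleSampler.newBuilder()
            .putRule(pathStartsWith("/api/orders"), Sampler.ALWAYS_SAMPLE)
            .build();
}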
I've read many articles. I figured out that I just need to include the starters in Spring Boot :)
Can anyone sort this out: does Sleuth create the MDC (Mapped Diagnostic Context)? Does Sleuth create the record ID that Zipkin uses? Can I see this ID in Kibana, or do I need to use the Zipkin API? Are there best practices for using all of this together? Does Jaeger replace both Zipkin and Sleuth, or how does it fit in?
Yes you can, and I have shown that numerous times during my presentations (https://toomuchcoding.com/talks), and we describe it extensively in the documentation (https://docs.spring.io/spring-cloud-sleuth/docs/current/reference/html/). Sleuth will set up your logging pattern, which you can then parse and visualize using the ELK stack. Sleuth takes care of tracing context propagation and can send the spans to a span storage (e.g. Zipkin or Jaeger). Sleuth does take care of updating the MDC for you. Please always read the documentation and the project page before filing a question.
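As a small illustration (a sketch assuming a recent Sleuth version; older versions used the X-B3-TraceId/X-B3-SpanId MDC keys instead of traceId/spanId), the IDs Sleuth puts into the MDC can be read directly in your own code, and the same IDs show up in the default log pattern as [app-name,traceId,spanId]:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoController {

    private static final Logger log = LoggerFactory.getLogger(DemoController.class);

    @GetMapping("/demo")
    public String demo() {
        // Sleuth has already populated the MDC for the current request,
        // so the same trace ID you see here is what you can search for in Kibana.
        log.info("traceId from MDC = {}", MDC.get("traceId"));
        return "ok";
    }
}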
In one of my recent interviews at Sapient, the interviewer asked a few questions:
Q1: How to find which microservice is slow, if your query goes to multiple services?
Q2. How to use logs in microservices, and what information would you display in the logs?
If anybody has an answer, please explain.
Thanks in advance.
Broadly, for the first one you can follow the circuit breaker pattern: you put timeouts on the called methods, so that if they don't respond within a threshold, fallback methods are used to return some mock object of the kind of data expected from the called method.
There are frameworks for this in Spring, such as Resilience4j or Hystrix.
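For example, with Resilience4j's Spring Boot annotations a call to another service can get a time limit and a fallback that returns a stand-in value (a minimal sketch; the service, method, and instance names are made up for illustration):

import java.util.concurrent.CompletableFuture;

import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import io.github.resilience4j.timelimiter.annotation.TimeLimiter;
import org.springframework.stereotype.Service;

@Service
public class InventoryClient {

    // If the downstream call breaches the configured time limit or the breaker is open,
    // the fallback below is invoked instead of propagating the failure to the caller.
    @CircuitBreaker(name = "inventory", fallbackMethod = "stockFallback")
    @TimeLimiter(name = "inventory")
    public CompletableFuture<Integer> getStock(String sku) {
        return CompletableFuture.supplyAsync(() -> callInventoryService(sku));
    }

    private CompletableFuture<Integer> stockFallback(String sku, Throwable cause) {
        // Return a neutral "mock" value so callers can degrade gracefully.
        return CompletableFuture.completedFuture(0);
    }

    private int callInventoryService(String sku) {
        // Placeholder for the real remote call (REST client, Feign, etc.).
        return 42;
    }
}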
For logging you can use distributed tracing, e.g. via Zipkin (an offering in Spring Cloud again). What gets logged for your application is purely your choice.
And if you are dealing with Kubernetes-based environments, you can also use Jaeger for distributed tracing, and Istio can be used for the service mesh and circuit breaking.
Hope this turns out to be useful!
I have a number of applications that are using the Spring Boot Actuator to publish metrics to the /metrics endpoint.
I have some other applications that are also using Micrometer to publish metrics to a /prometheus endpoint.
And finally, I have a cloud provider that will only allow me to pull metrics from a single endpoint. They have many pre-built Grafana dashboards, but most are targeted at the Actuator variable names. Some are targeted at the Micrometer variable names.
Micrometer puts out the same data, but it uses different names than Actuator, e.g. "jvm_memory" instead of "mem".
I would really like to find a way to merge both of these data sources so that they dump data to a single endpoint, and all of my Grafana dashboards would just work with all of the applications.
But I'm at a loss as to the best way to do this. Is there a way to tell Micrometer to use /metrics as a datasource so that any time it is polled it will include those?
Any thoughts are greatly appreciated.
The best solution probably depends on the complexity of your dashboard. You might just configure a set of gauges to report the value under a different name and then only use the Micrometer scrape endpoint. For example:
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.context.annotation.Bean;

@Bean
public MeterBinder mapToOldNames() {
    return r -> {
        // Re-publish the dimensioned "jvm.memory.used" gauges under the old Actuator name "mem".
        r.gauge("mem", Tags.empty(), r, r2 -> r2.find("jvm.memory.used").gauges()
                .stream().mapToDouble(Gauge::value).sum());
    };
}
Notice how in this case we take the memory gauges that are dimensioned in Micrometer (across the different heap/non-heap memory areas) and roll them up into a single gauge to match the old name.
For Spring Boot 1.5 you could do something like what the Prometheus `simpleclient_spring_boot` module does.
You collect the PublicMetrics from the actuator-metrics context and expose/register them as Gauges/Counters in the Micrometer MeterRegistry. This in turn will expose those Actuator metrics under your Prometheus scrape endpoint.
I assume you'd filter out the non-functional metrics that are duplicates of the Micrometer ones, so the only thing left to take over would be functional/business metrics. But if you have the chance to actually migrate the code to Micrometer, I'd say that's the better approach.
I haven't tried this, just remembered I had seen this concept.
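A rough, untested sketch of that bridge idea (it assumes a Spring Boot 1.5 app where the old Actuator PublicMetrics beans and a Micrometer MeterRegistry are both available; the class name PublicMetricsBridge is made up for illustration):

import java.util.Collection;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;
import org.springframework.boot.actuate.endpoint.PublicMetrics;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.stereotype.Component;

@Component
public class PublicMetricsBridge {

    public PublicMetricsBridge(Collection<PublicMetrics> publicMetrics, MeterRegistry registry) {
        for (PublicMetrics source : publicMetrics) {
            for (Metric<?> metric : source.metrics()) {
                String name = metric.getName();
                // Register a gauge under the original Actuator name; on every scrape it
                // re-reads the current value from the PublicMetrics source.
                registry.gauge(name, Tags.empty(), source, s -> currentValue(s, name));
            }
        }
    }

    private static double currentValue(PublicMetrics source, String name) {
        return source.metrics().stream()
                .filter(m -> name.equals(m.getName()))
                .mapToDouble(m -> m.getValue().doubleValue())
                .findFirst()
                .orElse(Double.NaN);
    }
}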
I've been roaming the depths of the internet, but I find myself unsatisfied by the examples I've found so far. Can someone point me to, or show me, a good starting point for integrating Zipkin tracing with JAX-RS clients and AMQP clients?
My scenario is quite simple and I'd expect this task to be trivial, to be honest. We have a microservices-based architecture, and it's time we start tracing our requests to get a global perspective of our inter-service dependencies and what the requests actually look like (we do have metrics, but I want more!). The communication is done via auto-generated JAX-RS clients, and we use RabbitTemplate for messaging.
I've seen Brave integrations with JAX-RS, but they are a bit simplistic. My Zipkin server is a Spring Boot mini app using stream-rabbit, so Zipkin data is sent over RabbitMQ.
Thanks in advance.
After some discussion with Marcin Grzejszczak and Adrian Cole (Zipkin and Sleuth creators/active developers), I ended up creating a Jersey filter that acts as a bridge between Sleuth and Brave. Regarding AMQP integration, I added a new @StreamListener with a conditional for Zipkin-format spans (using headers). Sending Zipkin-format messages to the Sleuth exchange is then valid, and they are consumed by the listener. For JavaScript (zipkin-js), I ended up creating a new AMQP logger that sends Zipkin spans to a given exchange. If someone ends up reading this and needs more detail, you're welcome to reach out to me.
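For anyone looking for a starting point on the filter part, a heavily simplified sketch of a JAX-RS filter that opens and closes a Brave span per request might look like the following. This is not the exact filter described above; the class name, property key, and span naming are illustrative only, and it assumes Sleuth exposes a brave.Tracer bean you can inject:

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

import brave.Span;
import brave.Tracer;

@Provider
public class BraveBridgeFilter implements ContainerRequestFilter, ContainerResponseFilter {

    private final Tracer tracer; // the Brave Tracer bean auto-configured by Sleuth

    public BraveBridgeFilter(Tracer tracer) {
        this.tracer = tracer;
    }

    @Override
    public void filter(ContainerRequestContext request) {
        // Start a span for the incoming request and stash it so the response filter can close it.
        Span span = tracer.nextSpan()
                .name(request.getMethod() + " " + request.getUriInfo().getPath())
                .start();
        request.setProperty("brave.span", span);
    }

    @Override
    public void filter(ContainerRequestContext request, ContainerResponseContext response) {
        Object span = request.getProperty("brave.span");
        if (span instanceof Span) {
            ((Span) span).finish();
        }
    }
}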
I have the requirements below; is there an open source library that covers all of them?
1. We are building a distributed microservice architecture with Spring Boot, which includes more than 100 microservices.
2. There is a lot of inter-microservice communication involved in completing a single transaction.
3. We want to trace every microservice call, and the trace should provide the following information:
a. Transaction ID/Trace ID
b. Backend transaction status (HTTP status for REST, and likewise for SOAP)
c. Time taken for that call
d. Request and response payload
Currently we are achieving this with a home-grown tracing framework. Is there any open source project that will handle all of this without any coding from the developer? I know we have a few options with Spring Boot and Spring Cloud, such as Zipkin and Sleuth; do these handle the above requirements?
My project has similar requirements to yours. IMHO, Spring-cloud-sleuth + Zipkin work well in my case.
For all inter-microservice communication we use Kafka, and Spring Cloud Sleuth + Zipkin has no problem tracing all the calls, from REST -> Kafka -> more Kafka -> REST.
To enable Kafka tracing, simply add:
spring:
  sleuth:
    propagation-keys: some-key
    sampler:
      probability: 1
    messaging:
      kafka:
        enabled: true
We are also using Azure ApplicationInsights to do centralized logging, which is well integrated with Spring Cloud.
Hope the above gives you some confidence in using Sleuth + Zipkin.