Difference between Zipkin and Elastic Stack (ELK)? - microservices

Spring Cloud Sleuth is used to create traceIds (unique to a request across services) and spanIds (the same for one unit of work). My understanding is that the Zipkin server is used to get a collective visualization of these logs across services. But I know and have used the ELK stack, which serves essentially the same purpose: we can group requests with the same traceId for visualization using ELK. Yet I see people implementing distributed tracing with Sleuth and ELK along with Zipkin, as in these examples (Link1, Link2). Why do we need Zipkin if ELK already covers log collection and visualization? What am I missing?
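For context, a Sleuth-instrumented service embeds those IDs in every log line, which is what ELK ends up grouping on. The bracketed section below follows Sleuth's default [application,traceId,spanId,exportable] pattern; the concrete values are made up for illustration:

    2019-05-20 10:15:30.123  INFO [order-service,4bf92f3577b34da6,cd42ae3577b34da6,true] 7211 --- [nio-8080-exec-1] c.e.OrderController : order received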

Related

Is there any Elasticsearch appender for directly sending (storing) Spring Boot application logs to Elasticsearch without using the ELK stack?

We are planning to store our (Spring Boot) application logs in Elasticsearch. I am aware of the ELK stack, which uses Filebeat + Logstash to collect and process the logs.
What is desired: an appender in logback.xml that sends the logs directly to Elasticsearch. The basic idea is an appender like the file appenders, with the difference that the target for storing logs is Elasticsearch. At the same time, we want it to work asynchronously. FYI, we are using slf4j with the logback implementation for logging.
More specifically: we want to remove the intermediaries (Logstash or Beats), as they need extra infrastructure and may bring unwanted overhead. And having the logs sent to Elasticsearch asynchronously would be really great (so that the application does not suffer latency due to logging).
What I have already tried:
Send Spring Boot logs directly to Logstash. But it seems of little use, since it internally uses file appenders and the logs are then sent to Logstash.
Is there any such appender available? Or maybe there is some workaround.
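Not an authoritative answer, but a sketch of one way to do this: the community com.internetitem:logback-elasticsearch-appender artifact posts log events straight to Elasticsearch's bulk API, and plain logback's AsyncAppender provides the asynchronous behaviour. The class name and the url/index properties below are assumptions based on that project's README, so verify them against the version you pull in:

    <configuration>
      <!-- Community appender that writes log events straight to the
           Elasticsearch _bulk endpoint (no Logstash/Beats in between) -->
      <appender name="ELASTIC" class="com.internetitem.logback.elasticsearch.ElasticsearchAppender">
        <url>http://localhost:9200/_bulk</url>
        <index>app-logs-%date{yyyy-MM-dd}</index>
      </appender>

      <!-- Standard logback async wrapper: logging calls return immediately
           and a background thread drains the queue; neverBlock drops events
           instead of stalling the application when the queue fills up -->
      <appender name="ASYNC_ELASTIC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="ELASTIC"/>
        <queueSize>512</queueSize>
        <neverBlock>true</neverBlock>
      </appender>

      <root level="INFO">
        <appender-ref ref="ASYNC_ELASTIC"/>
      </root>
    </configuration>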

Can I use Zipkin, Sleuth, MDC, ELK, and Jaeger together?

I have really read many articles. I figured out that I just need to include some starters in Spring Boot.
Can anyone sort this out: does Sleuth create the MDC (Mapped Diagnostic Context)? Does Sleuth create the ID that Zipkin uses? Can I see this ID in Kibana, or do I need to use the Zipkin API? Are there best practices for using all of this together? Does Jaeger substitute for both Zipkin and Sleuth, or how do they relate?
Yes you can, and I have shown that numerous times during my presentations (https://toomuchcoding.com/talks), and we describe it extensively in the documentation (https://docs.spring.io/spring-cloud-sleuth/docs/current/reference/html/). Sleuth sets up your logging pattern, which you can then parse and visualize using the ELK stack. Sleuth takes care of tracing-context propagation and can send the spans to a span storage (e.g. Zipkin or Jaeger). Sleuth does take care of updating the MDC for you. Please always read the documentation and the project page before filing a question.
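To make the "can I see this ID in Kibana" part concrete: because Sleuth puts the identifiers into the MDC, any logback pattern (or JSON encoder) that prints MDC entries carries them into whatever ELK ingests. A minimal sketch, assuming the traceId/spanId MDC key names used by recent Sleuth versions (older versions used X-B3-TraceId-style keys, so check yours):

    <!-- %X{...} pulls the Sleuth-managed entries out of the MDC;
         the :- suffix prints an empty string when no trace is active -->
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level [%X{traceId:-},%X{spanId:-}] %logger{36} - %msg%n</pattern>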

How to export Jaeger metrics for multiple microservices from Elasticsearch to Prometheus

We are able to get latency metrics for multiple microservices using Jaeger. Currently Jaeger stores this data in Elasticsearch.
My use case is to get the application latency from Elasticsearch into Prometheus.
Is there any way to read Jaeger's Elasticsearch data? I have already used elasticsearch-prometheus-exporter, but it only exports cluster details of ES.
The prometheus-es-exporter provides a way to create metrics from queries.
For further details, see prometheus-es-exporter#query-metrics.
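For illustration, a sketch of such a query-based metric. The INI section format and the key names (QueryIntervalSecs, QueryIndices, QueryJson) follow my reading of the prometheus-es-exporter README, and the jaeger-span-* index pattern and duration field are assumptions about how Jaeger stores spans, so double-check both:

    [query_jaeger_latency]
    QueryIntervalSecs = 15
    ; Jaeger typically writes spans to date-suffixed jaeger-span-* indices
    QueryIndices = jaeger-span-*
    ; average span duration; the exporter exposes each aggregation as a gauge
    QueryJson = {"size": 0, "query": {"match_all": {}}, "aggs": {"avg_duration": {"avg": {"field": "duration"}}}}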

How do I pull Elasticsearch metrics into Prometheus using the elasticsearch_exporter?

I have installed Prometheus into a Kubernetes cluster using the stable Helm chart. We run Elasticsearch, and I want to scrape metrics from it and then create alerts based on events.
I have installed the elasticsearch exporter via Helm, but nowhere can I find how to then import these metrics into Prometheus.
There is some config I am missing, such as creating a scrape job. Any help is much appreciated.
I connected to the elasticsearch exporter and can see it exposing metrics.
If you're using an elasticsearch exporter, it should come with some documentation. There is more than one solution out there, and you didn't specify which one you're using. In my opinion, it would be best to start from a tutorial like this one, which explains the whole process step by step. As you can read there:
Metrics collection in Prometheus follows the pull model. That means Prometheus is responsible for getting metrics from the services it monitors. This process is known as scraping. The Prometheus server scrapes the defined service endpoints, collects the metrics and stores them in a local database.
which means you need to configure Prometheus to scrape metrics exposed by the elasticsearch exporter you chose.
The official Prometheus documentation will also be a great source of knowledge and a good starting point.
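As a concrete (if hedged) illustration, the relevant piece of prometheus.yml is just a scrape job pointing at the exporter. The hostname below is made up, and 9114 is the default port of the commonly used elasticsearch_exporter, so adjust both to your setup:

    scrape_configs:
      - job_name: 'elasticsearch'
        scrape_interval: 30s
        static_configs:
          # replace with the host:port where your exporter actually listens
          - targets: ['elasticsearch-exporter:9114']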
EDIT:
If you run your Elasticsearch instance on a Kubernetes cluster, you should use the Service Discovery mechanism rather than static configs. More on <kubernetes_sd_config> can be found here.
There are five different types of Kubernetes service discovery you can use with Prometheus: node, endpoints, service, pod, and ingress. The one you most probably need in your case is endpoints. Prometheus uses the Kubernetes API to discover targets. Below you have some examples:
https://blog.sebastian-daschner.com/entries/prometheus-kubernetes-discovery
https://raw.githubusercontent.com/prometheus/prometheus/master/documentation/examples/prometheus-kubernetes.yml
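And a rough sketch of the endpoints-based variant. The prometheus.io/scrape annotation convention used here is common but not built into Prometheus, so adapt the relabeling to however your exporter's Service is actually labeled:

    scrape_configs:
      - job_name: 'elasticsearch-exporter'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          # keep only endpoints whose Service is annotated prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"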

Jaeger with ElasticSearch

I have created a microservice-based architecture using Spring Boot and deployed the application on a Kubernetes/Istio platform.
The different microservices communicate with each other using either JMS (ActiveMQ) or REST API.
I am getting traces for the REST communication in Istio's Jaeger, but the JMS-based communication is missing from Jaeger.
I am using Elasticsearch to store my application logs.
Is it possible to use the same Elasticsearch as a backend (DB) for Jaeger?
If so, I would store tracing-specific logs in Elasticsearch and query them in the Jaeger UI.
I believe you can reuse Elasticsearch for multiple purposes; each use would get its own set of indices, so the separation is good.
From https://www.jaegertracing.io/docs/1.11/deployment/:
Collectors require a persistent storage backend. Cassandra and Elasticsearch are the primary supported storage backends
Tying the networking all together, here is a docker-compose example:
How to configure Jaeger with elasticsearch?
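For a rough idea of the wiring (the image tags and ports are illustrative; SPAN_STORAGE_TYPE and ES_SERVER_URLS are Jaeger's documented environment variables for Elasticsearch storage, but verify them against the Jaeger version you deploy):

    version: "3"
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.0
        environment:
          - discovery.type=single-node   # dev-only single-node cluster

      jaeger:
        image: jaegertracing/all-in-one:1.11
        environment:
          - SPAN_STORAGE_TYPE=elasticsearch       # use ES instead of in-memory storage
          - ES_SERVER_URLS=http://elasticsearch:9200
        ports:
          - "16686:16686"   # Jaeger UI
        depends_on:
          - elasticsearch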
While this isn't exactly what you asked, it sounds like what you're trying to achieve is seeing traces for your JMS calls in Jaeger. If that is the case, you could use an OpenTracing solution for JMS or ActiveMQ to report tracing data directly to Jaeger. Here's one potential solution I found with a quick Google search. There may be others.
https://github.com/opentracing-contrib/java-jms
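A minimal sketch of how such a wrapper is typically used. The TracingMessageProducer class name and constructor are taken from my reading of the opentracing-contrib java-jms README and may not match your version exactly, so treat them as assumptions to verify:

    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import io.opentracing.Tracer;
    // class name per the java-jms README (verify against your version)
    import io.opentracing.contrib.jms2.TracingMessageProducer;

    public class TracedSender {

        private final Tracer tracer; // e.g. a Jaeger tracer

        public TracedSender(Tracer tracer) {
            this.tracer = tracer;
        }

        public void send(Session session, MessageProducer producer, String payload) throws Exception {
            // Wrapping the plain JMS producer makes every send() create a span
            // and inject the trace context into the message headers, so the
            // consuming side can continue the same trace.
            TracingMessageProducer tracingProducer = new TracingMessageProducer(producer, tracer);
            TextMessage message = session.createTextMessage(payload);
            tracingProducer.send(message);
        }
    }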
