Visualization of data sent by Actuator - spring

Is there any way to visualize data sent by the Actuator, putting aside Grafana and Prometheus?
I am trying to find something that collects data directly from the Actuator and presents it visually in real time (one application, not two).

Make Zipkin (or any open-tracing framework) work with existing "trace id"

A bit of background:
We have around 10 Spring Boot microservices, which communicate with each other via Kafka. The logs of each microservice are sent to Kibana, and in case of any errors, we have to sift through the Kibana logs.
The good thing is: at the start of any flow, a message-id is generated by one of our microservices, and that is propagated to all the others as part of the message transfer (which happens through Kafka), so we can search for the message-id in the logs and see the footprint of that flow across all our microservices.
The bad part: having to sift through tons of logs to get a basic idea of where things broke and why.
Now the Question:
So I was wondering if we can have some distributed tracing implemented, maybe through Zipkin (or some other open-tracing framework), that can work with the message-id our ecosystem already produces, instead of generating a new one?
Thank you for your time :)
I'm not entirely sure if that's what you mean, but you can use Jaeger (https://www.jaegertracing.io/), which checks whether a trace id already exists in the invocation metadata and, if it does, generates a child span under it. Call diagrams are then generated based on all the trace ids.
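As a rough sketch (not Jaeger's own Kafka instrumentation, just the plain OpenTracing API; the span name and handler below are placeholders), continuing a trace carried in Kafka record headers looks roughly like this:

```java
import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.propagation.Format;
import io.opentracing.propagation.TextMapAdapter;
import io.opentracing.util.GlobalTracer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class TracedKafkaHandler {

    // Continue the trace carried in the Kafka record headers instead of starting a new one.
    public void handle(ConsumerRecord<String, String> record) {
        Tracer tracer = GlobalTracer.get();

        // Copy the record headers into a plain map so the tracer can read the trace context.
        Map<String, String> carrier = new HashMap<>();
        for (Header header : record.headers()) {
            carrier.put(header.key(), new String(header.value(), StandardCharsets.UTF_8));
        }

        // Extract the parent context; this is null if the producer sent no tracing headers.
        SpanContext parent = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(carrier));

        Span span = (parent == null
                ? tracer.buildSpan("process-message")
                : tracer.buildSpan("process-message").asChildOf(parent))
                .start();
        try {
            // ... existing message handling goes here ...
        } finally {
            span.finish();
        }
    }
}
```

Note that Jaeger expects trace ids in its own format, so if you want your existing message-id to stay searchable, a common compromise is to attach it to the span as a tag rather than reuse it as the trace id itself.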

Pushing metrics data to Prometheus

I am configuring Prometheus to access Spring Boot metrics data. For some of the metrics, Prometheus's pull mechanism is fine, but for some custom metrics I would prefer a push-based mechanism.
Does Prometheus allow to push metrics data?
No.
Prometheus is very opinionated, and one of its design decisions is to disallow push as a mechanism into Prometheus itself.
The way around this is to push into an intermediate store and let Prometheus scrape the data from there. This isn't fun, and there are considerations around how quickly you want to drain your data and how to pass data into Prometheus with timestamps -- I've had to override the Prometheus client library for this.
https://github.com/prometheus/pushgateway
Prometheus provides its own intermediary for this, the Pushgateway linked above, which looks like what you want, but it has odd semantics around expiring pushed metrics (it never expires them; it only overwrites their value when a new datapoint arrives with the same labels).
They make it very clear that they don't want it used for pushed metrics.
All in all, you can hack something together to get something close to push events.
But you're much better off embracing the pull model than fighting it.
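If you do go the Pushgateway route anyway, a minimal sketch with the Prometheus Java simpleclient looks roughly like this (the gateway address, metric name, and job name are made up):

```java
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.PushGateway;

public class PushGatewayExample {

    public static void main(String[] args) throws Exception {
        CollectorRegistry registry = new CollectorRegistry();

        // A custom metric that is pushed instead of being exposed for scraping.
        Gauge recordsProcessed = Gauge.build()
                .name("batch_records_processed")
                .help("Records processed by the last batch run.")
                .register(registry);
        recordsProcessed.set(1234);

        // Push the registry contents to the Pushgateway; Prometheus then scrapes the gateway.
        PushGateway gateway = new PushGateway("localhost:9091");
        gateway.pushAdd(registry, "batch_job");
    }
}
```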
Prometheus has recently added support for the push model. It is called the remote write receiver.
Link: https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver
From what I understand, it only accepts POST requests with protocol buffers and snappy compression.
Prometheus doesn't support the push model. If you need a Prometheus-like monitoring system that supports both the pull model and the push model, then try VictoriaMetrics - the project I work on:
It supports scraping of Prometheus metrics - see these docs.
It supports data ingestion (aka push model) in Prometheus text exposition format - see these docs.
It supports other popular data ingestion formats such as InfluxDB line protocol, Graphite, OpenTSDB, DataDog, CSV and JSON - see these docs.
In addition to this, VictoriaMetrics provides the Prometheus querying API and a PromQL-like query language - MetricsQL - so it can be used as a drop-in replacement for Prometheus in most cases.
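As a rough sketch of the push model mentioned above (assuming a single-node VictoriaMetrics on its default port 8428 and the Prometheus-format import endpoint from the docs above), pushing a sample could look like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VictoriaMetricsPush {

    public static void main(String[] args) throws Exception {
        // One sample in Prometheus text exposition format: metric_name{labels} value
        String sample = "batch_records_processed{job=\"batch_job\"} 1234\n";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8428/api/v1/import/prometheus"))
                .POST(HttpRequest.BodyPublishers.ofString(sample))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
    }
}
```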

Measuring HTTP Performance using Micrometer in Spring Boot Application

I am developing a Spring Boot 2 application with Micrometer for reporting metrics. One piece of functionality is sending large amounts of data to a RESTful web service.
I would like to measure the amount of data sent and the time taken to complete the request. Using the Timer metric gives me the time as well as the number of times the request is made. But how can I include the bytes transferred in the same metric as well? My Grafana dashboard is supposed to plot the amount of data transferred and the time taken to accomplish it.
I looked at Counter and Gauges but they don't look like the right fit for what I am trying to do. Is there a way to add a custom field to the Grafana metric?
You'd use a DistributionSummary for that. See here and here.
Regarding instrumentation, you'd currently have to instrument your Controllers manually or wire an Aspect around them.
IIRC at least the Tomcat metrics provide some data-in and data-out metrics, but not down to the path level.
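A minimal sketch of such manual instrumentation, pairing a Timer with a DistributionSummary (the metric names and the sendPayload call are placeholders):

```java
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

public class UploadMetrics {

    private final Timer uploadTimer;
    private final DistributionSummary payloadSize;

    public UploadMetrics(MeterRegistry registry) {
        // Time taken per upload request.
        this.uploadTimer = Timer.builder("upload.duration")
                .description("Time taken to send data to the REST service")
                .register(registry);
        // Bytes sent per upload request.
        this.payloadSize = DistributionSummary.builder("upload.payload.size")
                .baseUnit("bytes")
                .description("Size of the payload sent to the REST service")
                .register(registry);
    }

    public void send(byte[] payload) {
        // Record both the duration and the number of bytes for the same request.
        uploadTimer.record(() -> sendPayload(payload));
        payloadSize.record(payload.length);
    }

    private void sendPayload(byte[] payload) {
        // ... the actual call to the RESTful web service (placeholder) ...
    }
}
```

In Grafana you can then plot both series side by side, or divide one by the other for a throughput panel.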

Showing HTTP Request API latency using the Spring Boot Micrometer metrics

We use Prometheus to scrape Spring Boot 2.0.0 metrics and then persist them in InfluxDB.
We then use Grafana to visualize them from InfluxDB.
Our micrometer dependencies are
micrometer-core
micrometer-registry-prometheus
I want to be able to show a latency metric for our REST APIs.
From our Prometheus scraper I can see these metrics are generated for HTTP requests.
http_server_requests_seconds_count
http_server_requests_seconds_sum
http_server_requests_seconds_max
I understand from the micrometer documentation, https://micrometer.io/docs/concepts#_client_side, that latency can be done by combining 2 of the above generated metrics: totalTime / count.
However, our data source is InfluxDB, which does not support combining measurements (https://docs.influxdata.com/influxdb/v1.7/troubleshooting/frequently-asked-questions/#how-do-i-query-data-across-measurements),
so I am unable to implement that function in InfluxDB.
Do I need to provide my own implementation of this latency metric in the Spring Boot component, or is there an easier way I can achieve this?
You can essentially join your measurements in Kapacitor, another component of the InfluxData TICK stack.
It's pretty simple with a JoinNode, possibly followed by an Eval to calculate what you want right in place. There are plenty of examples of this in the documentation.
The real problem is different, though: you've unnecessarily over-engineered your solution, and moreover, you're trying to combine two products that have the same purpose but take different approaches to it. How smart is that?
You're already scraping things with Prometheus? Fine! Stay with it and do the math there, it's simple (e.g. rate(http_server_requests_seconds_sum[5m]) / rate(http_server_requests_seconds_count[5m])). And Grafana works with Prometheus too, right out of the box!
You want to have your data in Influx (I can understand that, it's certainly more advanced)?
Fine! Micrometer can send it right to Influx out of the box - and in at least two ways!
Personally, I don't see any reason to do what you propose to do - can you share one?
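For example, one of those ways is wiring an InfluxMeterRegistry from micrometer-registry-influx programmatically; a minimal sketch (the URI and database name are assumptions; in a Spring Boot app you could instead just set the management.metrics.export.influx.* properties):

```java
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.influx.InfluxConfig;
import io.micrometer.influx.InfluxMeterRegistry;

public class InfluxRegistryExample {

    public static MeterRegistry influxRegistry() {
        InfluxConfig config = new InfluxConfig() {
            @Override
            public String uri() {
                return "http://localhost:8086"; // assumed InfluxDB address
            }

            @Override
            public String db() {
                return "metrics"; // assumed database name
            }

            @Override
            public String get(String key) {
                return null; // fall back to the defaults for everything else
            }
        };
        // Micrometer pushes all registered meters straight to InfluxDB on a fixed step interval.
        return new InfluxMeterRegistry(config, Clock.SYSTEM);
    }
}
```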

ElasticSearch: Jest vs Rest vs TransportClient vs NodeClient

I have gone through the official documentation at https://www.elastic.co/blog/found-interfacing-elasticsearch-picking-client
But it does not give any benchmarks or performance numbers to help choose among the clients. And I am finding it non-trivial to set up a TransportClient or a NodeClient, because the documentation for those is also really sparse, with little to no examples.
So if someone has already done some benchmarking on choosing a client, I would really appreciate it, so that I can focus more on tuning an established client rather than evaluating which client to choose.
Our application is a write-heavy application and we plan to have a 50-shard, 50-replica ES cluster for that.
All those clients are fine for querying and they all have their pros and cons (below list is not exhaustive):
A Node client provides a single hop into the cluster, but since it also becomes part of the cluster it can induce too much chatter within the cluster.
A Transport client is not part of the cluster, hence requires a two-hop roundtrip, and communicates with a single node at a time in a round-robin fashion (from the list provided during its construction).
Jest is basically the missing client for the ES REST interface (see the sketch after this list).
If you feel like you don't need all that Jest has to offer and simply want to interact with a few endpoints, you might as well create your own REST client by using the Spring RestTemplate, Apache HttpClient, etc.
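For illustration, a minimal Jest indexing sketch (the host, index name, and document are made up):

```java
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;
import io.searchbox.core.DocumentResult;
import io.searchbox.core.Index;

import java.util.HashMap;
import java.util.Map;

public class JestExample {

    public static void main(String[] args) throws Exception {
        // Build a Jest client pointing at the ES REST endpoint.
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(new HttpClientConfig.Builder("http://localhost:9200")
                .multiThreaded(true)
                .build());
        JestClient client = factory.getObject();

        // Index a single document into an assumed "events" index.
        Map<String, Object> document = new HashMap<>();
        document.put("message", "hello");
        document.put("level", "INFO");

        Index index = new Index.Builder(document).index("events").type("event").build();
        DocumentResult result = client.execute(index);
        System.out.println("Indexed: " + result.isSucceeded());
    }
}
```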
If you're going to have a write-heavy application I suggest you don't even use any of those clients at all. The main reason is that they are all synchronous in nature and if any component of your architecture or the network were to fail for some reason, then you'd lose data, and that might not be an option for you.
If you have plenty of data to ingest, you normally go the asynchronous way, i.e. storing your data in a temporary (yet durable) queue (Kafka, Redis, JMS, etc) and then let another process stream it to ES. There are many ways to do that, but a very simple one is to use Logstash for that.
Whether you decide to store your data in Kafka or JMS or Redis, you can then let Logstash consume your data and stream it to ES, i.e. you let Logstash worry about the heavy write part, which it does very well. That can be achieved very easily with
a kafka or redis or stomp input
a few filters to massage your data
an elasticsearch output to forward the resulting data to ES via the bulk endpoint.
With that kind of well-tuned setup, you can handle very heavy write loads without needing to worry about which client you want to use and how you need to tune it. The question is still open for querying, though. But since the write part is paramount in your case, you need to make it solid, and the only serious way is to go asynchronous and let a well-developed and tested ETL (such as Logstash, fluentd, etc.) do it for you.
UPDATE
It is worth noting that as of ES 5.0, there will be a new Java REST client available.
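For reference, a minimal sketch with that low-level REST client (the host and endpoint are assumptions, and the exact method signatures have shifted in later versions, where you pass a Request object instead):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class LowLevelRestClientExample {

    public static void main(String[] args) throws Exception {
        // Low-level REST client shipped alongside Elasticsearch 5.x.
        RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
        try {
            // Simple GET against the cluster root to check connectivity.
            Response response = client.performRequest("GET", "/");
            System.out.println(EntityUtils.toString(response.getEntity()));
        } finally {
            client.close();
        }
    }
}
```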
