We use Prometheus to scrape Spring Boot 2.0.0 metrics and then persist them in InfluxDB.
We then use Grafana to visualize them from InfluxDB.
Our Micrometer dependencies are:
- micrometer-core
- micrometer-registry-prometheus
I want to be able to show a latency metric for our REST APIs.
From our Prometheus scraper I can see these metrics are generated for HTTP requests:
- http_server_requests_seconds_count
- http_server_requests_seconds_sum
- http_server_requests_seconds_max
I understand from the Micrometer documentation, https://micrometer.io/docs/concepts#_client_side, that latency can be computed by combining two of the metrics above: totalTime / count.
However, our data source is InfluxDB, which does not support combining measurements (https://docs.influxdata.com/influxdb/v1.7/troubleshooting/frequently-asked-questions/#how-do-i-query-data-across-measurements),
so I am unable to implement that calculation in InfluxDB.
Do I need to provide my own implementation of this latency metric in the Spring Boot component, or is there an easier way to achieve this?
You can essentially join your measurements in Kapacitor, another component of the InfluxData TICK stack.
It's pretty simple with a JoinNode, possibly followed by an Eval node to calculate what you want right in place. There are plenty of examples of this in the documentation.
The real problem is different, though: you've unnecessarily overengineered your solution, and moreover, you're trying to combine two products that serve the same purpose but take different approaches to it. How smart is that?
You're already scraping things with Prometheus? Fine! Stay with it and do the math there; it's simple - for example, rate(http_server_requests_seconds_sum[5m]) / rate(http_server_requests_seconds_count[5m]) gives you the average request latency over the last five minutes. And Grafana works with Prometheus too, right out of the box!
You want to have your data in Influx (I can understand that; it's certainly more advanced)?
Fine! Micrometer can send metrics straight to Influx out of the box - and in at least two ways!
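For reference, here is a minimal sketch of one of those ways, using the micrometer-registry-influx module directly; the URI and database name below are placeholder assumptions:

```java
import io.micrometer.core.instrument.Clock;
import io.micrometer.influx.InfluxConfig;
import io.micrometer.influx.InfluxMeterRegistry;

public class InfluxRegistryExample {
    public static void main(String[] args) {
        // Minimal InfluxConfig: anything not overridden falls back to Micrometer's defaults.
        InfluxConfig config = new InfluxConfig() {
            @Override
            public String uri() { return "http://localhost:8086"; } // assumed local InfluxDB
            @Override
            public String db() { return "metrics"; }                // hypothetical database name
            @Override
            public String get(String key) { return null; }          // use defaults for everything else
        };
        // Micrometer pushes all registered meters to InfluxDB on a fixed interval (1 minute by default).
        InfluxMeterRegistry registry = new InfluxMeterRegistry(config, Clock.SYSTEM);
        registry.counter("example.counter").increment();
    }
}
```

(In a Spring Boot app you'd typically just add the dependency and set the management.metrics.export.influx.* properties instead of wiring the registry by hand.)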
Personally, I don't see any reason to do what you propose to do - can you share one?
I am using Spring Boot and Spring Cloud for a microservices architecture, with various things like an API Gateway, distributed config, Zipkin + Sleuth, and cloud/12-factor methodologies, where we have a single DB server sharing the same schema but the tables are private to each service.
Now I am looking at the options below. Note: the response object is nested and returns data as a hierarchy.
1. Can we ask the downstream system to develop an API that accepts a list of CustomerIds and returns the response in one go?
2. Or can we simply call the same API multiple times, passing a single CustomerId each time, and collect the responses?
Please advise for both a complex response set and a simple response set: which would be better, keeping performance and microservices in mind?
I would go with option 1. This may be less RESTful, but it is more performant, especially if the list of CustomerIds is large. Following standards is certainly good, but sometimes the use case requires us to bend the standards a bit so that the system remains useful.
With option 2 you will most probably "waste" more time on the HTTP connection "dance" than on your actual use case of getting the data. Imagine having to call the same downstream service 50 times if you need to retrieve data for 50 CustomerIds.
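For illustration, a minimal sketch of what the option 1 batch endpoint could look like; the path, DTO, and service names here are hypothetical:

```java
import java.util.List;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical response DTO; in reality this would be the nested hierarchy mentioned above.
class CustomerResponse { }

// Hypothetical service that fetches many customers in a single downstream call.
interface CustomerService {
    List<CustomerResponse> findByIds(List<String> customerIds);
}

@RestController
public class CustomerBatchController {

    private final CustomerService customerService;

    public CustomerBatchController(CustomerService customerService) {
        this.customerService = customerService;
    }

    // One round trip for N customers instead of N round trips of one customer each.
    @PostMapping("/customers/batch")
    public List<CustomerResponse> getCustomers(@RequestBody List<String> customerIds) {
        return customerService.findByIds(customerIds);
    }
}
```

Note the POST verb: a GET with 50 IDs in the query string can hit URL length limits, which is one of the usual "bend the standard" trade-offs here.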
Good evening,
I'm a student at the University of Rome Tor Vergata. I'm currently working on my master's thesis, which involves the use of Linkerd.
Very briefly, the thesis is about implementing a fully distributed root cause localization system for microservices architectures.
In the metrics collection phase I'm facing an issue with Linkerd, since I'm not using Prometheus but manually scraping metrics from the proxies through the /metrics endpoint.
I can't understand how or when Linkerd's proxies reset the various metrics they collect.
Does anybody know if they have a timer? Or is there a way to make them reset their metrics after scraping?
Thanks in advance for any help anyone will give me.
The metrics are stored in memory by the Linkerd proxy as soon as the proxy process starts running.
Most of the metrics are buckets for histograms whose main purpose is to view the data over time, so there isn't a way to reset them and they don't reset themselves.
You could write Prometheus queries to select windows of time where you would reset the metrics, or you could restart the containers and write queries to filter the metrics on the newer workloads.
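Since the counters are cumulative and never reset, a scraper that bypasses Prometheus has to compute the deltas between successive scrapes itself. A minimal sketch of that idea (all names here are illustrative, not part of Linkerd):

```java
import java.util.HashMap;
import java.util.Map;

// Tracks the increase of cumulative counters between successive scrapes,
// since the Linkerd proxy never resets them itself.
public class CounterDeltaTracker {
    private final Map<String, Double> lastSeen = new HashMap<>();

    /** Returns the increase since the previous scrape for one metric series. */
    public double delta(String seriesKey, double currentValue) {
        double previous = lastSeen.getOrDefault(seriesKey, 0.0);
        lastSeen.put(seriesKey, currentValue);
        // A value below the previous one means the proxy restarted,
        // so the counter started again from zero.
        return currentValue >= previous ? currentValue - previous : currentValue;
    }
}
```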
I am configuring Prometheus to access Spring Boot metrics data. For some of the metrics Prometheus's pull mechanism is fine, but for some custom metrics I would prefer a push-based mechanism.
Does Prometheus allow pushing metrics data?
No.
Prometheus is very opinionated, and one of its design decisions is to disallow push as a mechanism into Prometheus itself.
The way around this is to push into an intermediate store and let Prometheus scrape the data from there. This isn't fun: there are considerations around how quickly you want to drain your data and how to pass data into Prometheus with timestamps - I've had to override the Prometheus client library for this.
https://github.com/prometheus/pushgateway
Prometheus provides its own component for this, the Pushgateway linked above, which looks like it would be what you want; but it has weird semantics around when it expires pushed metrics (it never does, it only overwrites a value when a new datapoint arrives with the same labels).
The maintainers make it very clear that they don't want it used as a general-purpose push mechanism.
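If you do go the Pushgateway route anyway, a minimal sketch with the Prometheus Java simpleclient looks roughly like this; the gateway address, job name, and metric are placeholders:

```java
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.PushGateway;

public class PushExample {
    public static void main(String[] args) throws Exception {
        CollectorRegistry registry = new CollectorRegistry();
        Gauge duration = Gauge.build()
                .name("my_batch_job_duration_seconds") // hypothetical metric name
                .help("Duration of my batch job in seconds.")
                .register(registry);
        duration.set(42.0); // placeholder value

        // Push to the gateway; Prometheus then scrapes the gateway on its normal schedule.
        // Remember: the gateway never expires this value, it only overwrites it.
        PushGateway gateway = new PushGateway("localhost:9091");
        gateway.push(registry, "my_batch_job");
    }
}
```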
All in all, you can hack something together to get something close to push events.
But you're much better off embracing the pull model than fighting it.
Prometheus has recently added support for the push model. It is called the remote write receiver.
Link: https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver
From what I understand, it only accepts POST requests with protocol buffers and snappy compression.
Prometheus doesn't support the push model. If you need a Prometheus-like monitoring system that supports both the pull model and the push model, then try VictoriaMetrics - the project I work on:
- It supports scraping of Prometheus metrics - see these docs.
- It supports data ingestion (aka push model) in the Prometheus text exposition format - see these docs.
- It supports other popular data ingestion formats such as InfluxDB line protocol, Graphite, OpenTSDB, DataDog, CSV and JSON - see these docs.
In addition, VictoriaMetrics provides the Prometheus querying API and a PromQL-like query language - MetricsQL - so it can be used as a drop-in replacement for Prometheus in most cases.
I am developing a Spring Boot 2 application with Micrometer for reporting metrics. One piece of functionality is sending large amounts of data to a RESTful web service.
I would like to measure the amount of data sent and the time taken to complete the request. Using the Timer metric gives me the time as well as the number of times the request is made, but how can I also include the bytes transferred in the same metric? My Grafana dashboard is supposed to plot the amount of data transferred and the time taken to accomplish it.
I looked at Counter and Gauge, but they don't look like the right fit for what I am trying to do. Is there a way to add a custom field to the Grafana metric?
You'd use a DistributionSummary for that. See here and here.
Regarding instrumentation, you'd currently have to instrument your controllers manually or wire an aspect around them.
IIRC at least the Tomcat metrics provide some data-in and data-out metrics, but not down to the path level.
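For manual instrumentation, a minimal sketch might look like this; the meter names and the surrounding class are illustrative:

```java
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

public class UploadMetrics {
    private final Timer uploadTimer;
    private final DistributionSummary payloadSize;

    public UploadMetrics(MeterRegistry registry) {
        // Time taken per request (count, total time, and max come for free).
        this.uploadTimer = registry.timer("upload.latency");
        // Bytes sent per request, recorded as a distribution.
        this.payloadSize = DistributionSummary.builder("upload.payload.size")
                .baseUnit("bytes")
                .register(registry);
    }

    public void send(byte[] body) {
        uploadTimer.record(() -> {
            // ... the actual call to the RESTful web service goes here ...
        });
        payloadSize.record(body.length);
    }
}
```

A Timer only records durations, which is why the bytes go into a separate DistributionSummary; in Grafana you can then plot the two meters side by side.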
I plan to set up monitoring for Redmine, with the help of which I can see man-hours spent on tickets, the time taken to complete a ticket, etc., to monitor the productivity of my team. I want to see all of this in Grafana. As of now I am thinking of using Prometheus and exposing the metrics, but I am not sure how (I might have to create an exporter, I think, but I am not sure whether that would work). So basically, how can this be done?
A Prometheus exporter is simply an HTTP server that sits next to your target (Redmine in your case, although I have no experience with it). Whenever it gets a /metrics request, it makes one or more API calls to the target (assuming Redmine provides an API to query the numbers you need) and returns those numbers as Prometheus metrics with names, labels, etc.
Here are the Prometheus client libraries (which help expose metrics in the format accepted by Prometheus) for Go and Java (look for simpleclient_http or simpleclient_servlet). There is support for many other languages.
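To make that concrete, here is a minimal sketch of such an exporter using the Java simpleclient; the port, metric name, and the Redmine call are all hypothetical placeholders:

```java
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.HTTPServer;

public class RedmineExporter {
    // Hypothetical metric: man-hours logged per issue.
    static final Gauge spentHours = Gauge.build()
            .name("redmine_issue_spent_hours")
            .help("Man-hours logged on a Redmine issue.")
            .labelNames("issue_id")
            .register();

    public static void main(String[] args) throws Exception {
        // Serves the default registry on http://localhost:9095/metrics for Prometheus to scrape.
        HTTPServer server = new HTTPServer(9095);
        while (true) {
            // Here you would call Redmine's REST API (e.g. /time_entries.json)
            // and copy the numbers you need into the gauges.
            spentHours.labels("1234").set(7.5); // placeholder value
            Thread.sleep(60_000);
        }
    }
}
```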
Adding on to Alin's answer: to expose Redmine metrics to Prometheus, you would need to install an exporter.
Here is a Redmine plugin available for Prometheus: https://github.com/mbeloshitsky/redmine_prometheus.git
You can get the hours and all the data you need through the Redmine REST APIs. Write a little program to fetch the data and feed it into Graphite or Prometheus. You can do this with Sensu by creating a metric script in Python, Ruby, or Perl. Then all you have to do is plot the graphs. Well, that's another race :P
Redmine guide: http://www.redmine.org/projects/redmine/wiki/Rest_api_with_python