Grafana - get Median of metrics - java-8

I am very new to Grafana and I am trying to get Median of some metrics.
These are the types of queries that my Team is using that I am trying to get a Median for:
avg(backend_service_manager_className_methodName_request_time{quantile="0.75",})*1000

Relevant Documentation:
https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators
avg(quantile(0.75, backend_service_manager_className_methodName_request_time))
If needed, add avg for your BL; the displayed value should be pre-configured, and you should not multiply by 1000.

Per the docs:
quantile(0.5, backend_service_manager_className_methodName_request_time) calculates the median
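As a sketch of the semantics (with made-up sample values), quantile(0.5, metric) aggregates *across series*: at each evaluation timestamp it takes the values of all matching series and returns their median:

```python
from statistics import median

# Hypothetical values of backend_service_manager_className_methodName_request_time
# across several label sets ("series") at a single evaluation timestamp.
series_values = [120.0, 95.0, 210.0, 150.0, 88.0]

# quantile(0.5, metric) in PromQL returns, per timestamp, the 0.5-quantile
# (the median) of the values of all matching series.
print(median(series_values))  # 120.0
```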

Related

How to get average value over last N hours in Prometheus

In Prometheus I want to calculate the total average of this specific power metric (dell_hw_chassis_power_reading) within a period of time. For example, the last 2 hours.
I currently am using this query below:
sum(avg_over_time(dell_hw_chassis_power_reading[2h])) by (node,instance)
This query seems to only give me the latest value within that 2-hour interval specified in the query. It does not give me the total average for all scrapes of that metric within the 2-hours.
The query avg_over_time(dell_hw_chassis_power_reading[2h]) returns the average values for raw samples stored in Prometheus over the last 2 hours. If Prometheus contains multiple time series with the dell_hw_chassis_power_reading name, then the query would return independent averages per each matching time series.
If you want to calculate the average over all the time series with the name dell_hw_chassis_power_reading, use the following query:
sum(sum_over_time(dell_hw_chassis_power_reading[2h]))
/
sum(count_over_time(dell_hw_chassis_power_reading[2h]))
If you need the average per label or per set of labels, use the following query:
sum(sum_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
/
sum(count_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
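The difference between the two approaches can be sketched with made-up samples: sum(sum_over_time(...)) / sum(count_over_time(...)) weights every raw sample equally, whereas averaging per-series averages would weight each series equally regardless of how many samples it has:

```python
# Two hypothetical series of raw samples scraped over the last 2 hours;
# series_a has more samples than series_b (e.g. different scrape intervals).
series_a = [100.0, 110.0, 105.0, 95.0]   # 4 samples
series_b = [200.0, 220.0]                # 2 samples

# sum(sum_over_time(m[2h])) / sum(count_over_time(m[2h])):
# the average over *all* raw samples, each sample weighted equally.
overall_avg = (sum(series_a) + sum(series_b)) / (len(series_a) + len(series_b))
print(overall_avg)  # 830.0 / 6 ~= 138.33

# avg(avg_over_time(m[2h])), by contrast, weights each series equally:
per_series_avg = (sum(series_a) / len(series_a) + sum(series_b) / len(series_b)) / 2
print(per_series_avg)  # (102.5 + 210.0) / 2 = 156.25
```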

Can you apply a limit on promQL query?

For example:
Avg by (server) (HttpStatusCodes{category = 'Api.ResponseStatus'}) limit 10
Is this valid in promQl? I can not find anything about it in the documentation. Thanks
The provided query is a valid MetricsQL query for VictoriaMetrics, but unfortunately it doesn't work in the original PromQL.
Prometheus provides topk
and bottomk operators, which can be used to limit the number of returned time series. Unfortunately, these operators limit the number of returned time series on a per-point basis (considering points on the graph). This means that the total number of returned time series may exceed the requested limit. MetricsQL solves this issue with a family of topk_* and bottomk_* functions. See the MetricsQL docs for details.
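The per-point behavior can be illustrated with a small sketch (hypothetical values): topk(1, ...) is evaluated independently at every graph point, so when the "top" series changes between points, the graph ends up showing more distinct series than k:

```python
# Simulated values of two series at two consecutive graph points.
points = {
    "t1": {"a": 10, "b": 5},   # at t1, series "a" is on top
    "t2": {"a": 3, "b": 8},    # at t2, series "b" is on top
}

k = 1
shown = set()
for values in points.values():
    # topk(k, ...) applied independently at each point
    top = sorted(values, key=values.get, reverse=True)[:k]
    shown.update(top)

print(shown)  # {'a', 'b'} -- more distinct series on the graph than k
```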

Rate metric per time with ElasticSearch datasource

I am using ElasticSearch as a data source in Grafana.
I have an ES index in which every document represents an HTTP request. I would like to create a graph that shows the rate of requests in a given time interval (per second, per minute).
Basically, I am hoping it is possible to reproduce what prometheus offer with the rate() function: https://prometheus.io/docs/prometheus/latest/querying/functions/#rate
From my research so far, I think I should use the "derivative" option in Grafana together with the Count metric, but I am not sure how to configure it to graph correct results.
Furthermore, I am using a templated interval variable with custom intervals like 2m, 3m... Would it be possible to use the $__interval_ms built-in variable to compute the rate? I mean, is this built-in automatically computed based on my custom interval, or does it work only with the auto value? If not, how would I use a time interval like 5m to perform the arithmetic to compute the rate?
Thanks
Solved this by adding a dummy field to each request I log, whose content is simply the value 1. Then in Grafana I can use the sum aggregator and an inline script that lets me calculate a rate for a given time interval like 5m, where the script is simply value / (60 * 5).
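The arithmetic in that workaround, sketched with a hypothetical bucket count: dividing the per-bucket sum of the dummy field by the interval length in seconds yields a per-second rate, which is roughly what Prometheus's rate() would report:

```python
# Hypothetical 5-minute bucket: the summed dummy field, i.e. the
# number of request documents that fell into the bucket.
requests_in_bucket = 1500
interval_seconds = 60 * 5  # the 5m interval from the template variable

# Count per bucket divided by the bucket length in seconds
# gives requests per second.
rate_per_second = requests_in_bucket / interval_seconds
print(rate_per_second)  # 5.0
```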

Kibana: Best option for performing min-max aggregation

Given a series incoming events like say:
#timestamp1: a,b,c,d,e
#timestamp2: a,b,c,d,e
(all numbers)
I need to perform some calculation which would be of the form
(max(a) - min(a)) * (max(b) - min(b)) / (max(c) - min(c)).
I know how to show it as a time series graph (using Visual Builder). But I also want to show it as a simple number for the overall duration that has been selected.
I tried the Lucene expression numeric APIs (doc['field_name'].max(), min()) but that doesn't work. I didn't see any such API within Painless.
I also looked at “Scripted Metric Aggregation”, but couldn’t quite understand, where in Kibana to specify those expressions.
Same is the case with “Metrics Aggregation”, how do I make use of it within Kibana?
How can displaying aggregated number be so difficult as compared to a time-series chart? Any help is appreciated. Thanks.
In Kibana, when you create a visualization, there is an "Advanced" link at the bottom of the bucket section. Click on it and a text area will open, where you can write a JSON snippet that is injected into the Kibana aggregation; try writing your metric aggregation there.
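For reference, the expression in the question can be checked with a small sketch using made-up samples for the three fields:

```python
# Hypothetical numeric samples for fields a, b and c over the
# selected time range.
a = [4.0, 9.0, 1.0]
b = [2.0, 6.0]
c = [5.0, 10.0]

# (max(a) - min(a)) * (max(b) - min(b)) / (max(c) - min(c))
result = (max(a) - min(a)) * (max(b) - min(b)) / (max(c) - min(c))
print(result)  # (9-1) * (6-2) / (10-5) = 6.4
```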

trend of ratio in kibana 4.0

I have documents under two daily indexes. Both have count field which is >=1.
I want to create a graph which shows trend of ratio of these two fields aggregated over time.
Data will be sampled based on the time duration selected in the dashboard, e.g. for one day each sample would be 10 min, which will sum these two fields separately, calculate the ratio, and then show it as one data point. So for 24 hours it would be 24 * 6 = 144 points on the graph.
How can I achieve same in Kibana 4 ?
We tried something similar, but it turns out it is not possible in Kibana.
As of now you cannot plot a calculated field based on two different fields in Kibana.
To work around this, we implemented a plugin that modifies data before it is pumped into Elasticsearch, and carried out the calculations in that plugin. The plugin also pumps data to Elasticsearch periodically, so Kibana gets the latest values.
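The per-bucket calculation the question asks for can be sketched with hypothetical bucket sums: for each 10-minute bucket, sum the count field from each index separately, then emit the ratio as one data point:

```python
# Hypothetical 10-minute buckets: for each bucket, the summed "count"
# field from the two daily indexes.
buckets = [
    {"index1_sum": 50, "index2_sum": 25},
    {"index1_sum": 30, "index2_sum": 60},
]

# One ratio data point per bucket, as described in the question.
ratios = [b["index1_sum"] / b["index2_sum"] for b in buckets]
print(ratios)  # [2.0, 0.5]
```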
