I am ingesting data into Elasticsearch using Flume, and I want to create a time-series graph in Kibana to show the events collected over time. But I also want to show the average per time unit, so the user knows whether the current flow is around the average or not.
To create the timeline I am using a line graph with @timestamp as the X-axis and count as the Y-axis.
The question is: how do I create the average line, and how do I make this average dynamic, e.g. so that as we zoom in, the average changes from average per day to average per hour?
While creating a visualization you can choose the type of Y-axis metric. The default is Count. You can click on it to choose the other types of metrics available, such as Average, Sum, Percentile, etc.
As for the time range the average is calculated over: in the X-axis settings, under buckets, when you choose Date Histogram the default interval is Auto. This means the time range of the average will change automatically depending on the overall time range selected.
You can change it to a fixed interval such as second, minute, hour, day, etc.
It's a bit odd: you would expect count to appear alongside the fields as something you can average. In reality you have to do it another way (the equivalent raw query is sketched below):
For the Y-axis, instead of selecting Count, select "Average Bucket".
Then set up the bucket aggregation you would like, e.g. Date Histogram with a second interval.
Below this you have another box for the metric, i.e. the thing you're averaging; set this to Count.
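For reference, this is roughly the request such a visualization sends to Elasticsearch. A minimal sketch, assuming a hypothetical events index, an @timestamp date field, and an Elasticsearch version that supports the avg_bucket pipeline aggregation:

POST events/_search
{
  "size": 0,
  "aggs": {
    "per_second": {
      "date_histogram": { "field": "@timestamp", "interval": "1s" }
    },
    "avg_events_per_second": {
      "avg_bucket": { "buckets_path": "per_second>_count" }
    }
  }
}

The avg_bucket aggregation averages the document count of the per-second buckets, which is exactly the "count averaged per time unit" the question asks for.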
In Prometheus I want to calculate the total average of this specific power metric (dell_hw_chassis_power_reading) within a period of time. For example, the last 2 hours.
I am currently using the query below:
sum(avg_over_time(dell_hw_chassis_power_reading[2h])) by (node,instance)
This query seems to give me only the latest value within the 2-hour interval specified in the query. It does not give me the total average for all scrapes of that metric within the 2 hours.
The query avg_over_time(dell_hw_chassis_power_reading[2h]) returns the average values for raw samples stored in Prometheus over the last 2 hours. If Prometheus contains multiple time series with the dell_hw_chassis_power_reading name, then the query would return independent averages per each matching time series.
If you want to calculate the average over all the time series with the name dell_hw_chassis_power_reading, note that simply averaging the per-series averages would skew the result whenever the series contain different numbers of samples; instead, the total sum must be divided by the total sample count, using the following query:
sum(sum_over_time(dell_hw_chassis_power_reading[2h]))
/
sum(count_over_time(dell_hw_chassis_power_reading[2h]))
If you need the average per some label or a set of labels, then the following query must be used:
sum(sum_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
/
sum(count_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
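For example, using the node label that already appears in the question's query, the per-node average over the last 2 hours would be:

sum(sum_over_time(dell_hw_chassis_power_reading[2h])) by (node)
/
sum(count_over_time(dell_hw_chassis_power_reading[2h])) by (node)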
I am trying to create a Kibana TSVB visualization that displays an “events per second (EPS)” metric for the last created Elasticsearch index of a particular index pattern. Currently I'm using a Count aggregator that pipes into the Math aggregator with the formula params.Count / (params._interval / 1000).
But this calculation is only accurate if the chart's time range is set to exactly the first and last timestamps in the index. Otherwise the empty data sets (both before and after the index's timeframe) are included in calculating the EPS. Currently I have to manually query the min/max timestamps of the index (along the lines of the sketch below) and then manually set the chart's time range in the upper right corner to match; only then does it calculate the EPS correctly.
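A minimal sketch of that manual lookup, where my-index-* and @timestamp are placeholders for the real index pattern and time field:

GET my-index-*/_search
{
  "size": 0,
  "aggs": {
    "first_event": { "min": { "field": "@timestamp" } },
    "last_event": { "max": { "field": "@timestamp" } }
  }
}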
So my question… is there a way to do this automatically? Such as having the chart's start and end times as variables equal to the min and max timestamps of the particular index I'm looking at? Or having it ignore the out-of-bounds time range?
Thanks
We have a counter metric in one of our microservices which pushes data to DataDog. I want to display the total count for a given time frame, and also the count per day (the X-axis would have the date and the Y-axis the count). How do we achieve this?
I tried using sum by and diff with the Query Value representation. It gives the total count for the given time frame, but I would like a bar graph with the X-axis as the date and the Y-axis as the count. Is this possible in DataDog?
It seems like there are 2 main questions here:
1. display the total count for a given time frame
2. the count per day
I think the rollup method is going to be your friend for both questions.
For #1 you need to pass in the time frame you want a total over: sum:<metric_name>.rollup(sum, <time_frame>) and the single value can be displayed using the Query Value visualization.
For #2, the DataDog docs say metrics per day can be
graphed using a day-long rollup with .rollup(avg,86400)
So this would look something like sum:<metric_name>.rollup(sum, 86400) and can be displayed as a Timeseries with bars.
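Putting both together with a hypothetical metric name, myservice.events.count: as a Query Value,

sum:myservice.events.count.rollup(sum, 604800)

gives the total over a 7-day time frame (604800 seconds), while as a Timeseries with bars,

sum:myservice.events.count.rollup(sum, 86400)

gives one bar per day.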
I want to look at the min, max, and average statistics for each time interval for a particular custom metric. In this case it's the size of each file my system is ingesting. Currently I can do this in a CloudWatch Dashboard with three separate widgets, one for each statistic.
I'd really prefer to have at least two curves on the same axis. I know how to put two different curves on one widget, but that only appears to support having two different metrics, with the same aggregation function applied.
Interestingly, in the Lambda monitoring view, AWS provides just such a plot for Invocation Duration.
Under the Actions column there is a button to duplicate a metric. Just click on it and it'll add a copy of that metric to the same graph. Then you can tweak the Statistic for the copy. E.g. this way you can have Min, Max, and Average of CPUUtilization on one graph, as in the sketch below.
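The same duplication shows up in the dashboard's JSON source. A minimal sketch of the widget, where the region and InstanceId are placeholders:

{
  "type": "metric",
  "properties": {
    "view": "timeSeries",
    "region": "us-east-1",
    "period": 300,
    "metrics": [
      [ "AWS/EC2", "CPUUtilization", "InstanceId", "i-0123456789abcdef0", { "stat": "Minimum" } ],
      [ "...", { "stat": "Maximum" } ],
      [ "...", { "stat": "Average" } ]
    ]
  }
}

The "..." entries repeat the previous metric's namespace, name, and dimensions, so the only thing that changes per line is the stat.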
One workaround would be to create duplicate dummy metrics for the same data, then overlay them. The only problem is you'd still be stuck hard-coding the statistic for each curve, so the Average of the Min metric would work out to the min of the average. Not worth it.
I'm facing the following problem: in Kibana 4 I've created a line chart based on my input from Elasticsearch, but I can only display the average, min, or max of a field per time instead of its actual value, e.g. sent bytes.
Most answers to that question on Stack Overflow are about Kibana 3 (How to create value over time chart with Kibana 3?) and seem to involve a Histogram on the X-axis, yet I can't find a way to apply them to Kibana 4. I was unable to find the histogram panel, and once I click on the Discover tab there is a constant "Searching" loading indicator.
If I have the following fields in my _source:
{"timestamp":"2015-06-02T10:16:44.0855","time":587,"threadName":"Thread Group 1-957","byte":1372,"status":"false","latence":306,"registerCall":"404"}
and I would like to have the number of bytes on the Y-axis and my timestamp on the X-axis.
Any help in the right direction will be appreciated :)
To create a value over time line chart in Kibana, follow these steps:
Go to the Visualize tab and select Line chart.
For the X-axis, under buckets select X-Axis, choose Date Histogram as the aggregation, and select your timestamp field as the date field.
Next, for the Y-axis, select Sum as the aggregation and then byte as the field (the equivalent raw query is sketched below).
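For reference, those steps correspond to roughly this raw Elasticsearch request; a minimal sketch using the timestamp and byte fields from the _source above, with logs as a placeholder index name:

POST logs/_search
{
  "size": 0,
  "aggs": {
    "bytes_over_time": {
      "date_histogram": { "field": "timestamp", "interval": "1m" },
      "aggs": {
        "total_bytes": { "sum": { "field": "byte" } }
      }
    }
  }
}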
For the X-axis, what Alcanzar said is good, but as you noticed, the Y-axis is problematic.
Sum (suggested by "Limit") works, but since it's aggregated, it shows the total used in each aggregated bucket, which may be meaningless depending on what you are trying to show. Your question isn't clear on what you want, so I'm just guessing here. One hour of requests, each of which ran for one minute and sent 1 megabyte, is indeed 60 megabyte-minutes if you are trying to show total capacity used over that hour (maybe you are paying a bill based on usage per time). On the other hand, if you are trying to show peak usage in each period, it would be wrong.
You said you already looked and Max and Min and they don't meet your needs. I don't suppose Standard Deviation would be any better?
I have the same concern. The best I've been able to do so far is to display Min and Max simultaneously on the Y-axis. When they diverge, I know I'm zoomed out too far, so I zoom in until they align.
This is how I know I'm seeing individual events.
In any case, I share your frustration. I too would like to be able to show time series as easily as I can in, say, Excel.