Kibana: Best option for performing min-max aggregation - elasticsearch

Given a series of incoming events like, say:
#timestamp1: a,b,c,d,e
#timestamp2: a,b,c,d,e
(all numbers)
I need to perform a calculation of the form
(max(a) - min(a)) * (max(b) - min(b)) / (max(c) - min(c)).
I know how to show it as a time series graph (using Visual Builder). But I also want to show it as a single number for the whole duration that has been selected.
I tried the Lucene expression numeric APIs (doc['field_name'].max(), .min()) but that doesn't work, and I didn't see any such API in Painless.
I also looked at "Scripted Metric Aggregation", but couldn't quite understand where in Kibana to specify those expressions.
The same goes for "Metrics Aggregations": how do I make use of them within Kibana?
How can displaying an aggregated number be so much more difficult than a time-series chart? Any help is appreciated. Thanks.

In Kibana, when you create a visualization, there is an "Advanced" link at the bottom of the bucket section. Click it and a JSON text area opens; whatever you enter there is merged into the aggregation Kibana sends to Elasticsearch. Try writing your metric aggregation there.
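For reference, here is a minimal sketch of the kind of request that computes that number for the whole selected time range; you could run it in Dev Tools, or adapt the aggs section for the "Advanced" JSON input. The index name your-events-index is a placeholder, a, b and c are the field names from the question, and the single match_all filters bucket is only there because bucket_script needs a multi-bucket parent:

    // Sketch only: index and field names are placeholders for your own data.
    POST your-events-index/_search
    {
      "size": 0,
      "aggs": {
        "whole_range": {
          "filters": { "filters": { "all_docs": { "match_all": {} } } },
          "aggs": {
            "max_a": { "max": { "field": "a" } },
            "min_a": { "min": { "field": "a" } },
            "max_b": { "max": { "field": "b" } },
            "min_b": { "min": { "field": "b" } },
            "max_c": { "max": { "field": "c" } },
            "min_c": { "min": { "field": "c" } },
            "range_product": {
              "bucket_script": {
                "buckets_path": {
                  "maxA": "max_a", "minA": "min_a",
                  "maxB": "max_b", "minB": "min_b",
                  "maxC": "max_c", "minC": "min_c"
                },
                "script": "(params.maxA - params.minA) * (params.maxB - params.minB) / (params.maxC - params.minC)"
              }
            }
          }
        }
      }
    }

The range_product value in the single whole_range bucket is the number you want; when running it by hand in Dev Tools you would add your own range query on the timestamp to mirror the selected duration.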

Related

How to exclude lowest value from average calculation in Kibana

I usually do this in Excel, but it is not easy for me to do it in Kibana.
I have this table in Excel, and every hour I want to average across all instances in the field "detail", but excluding the three lowest values (nine details each hour, so the average should cover only the six highest of them). In Excel I use the LARGE function.
https://docs.google.com/spreadsheets/d/1LcKO8TGl49dz6usWNwxRx0oVgQb9s_h1/edit?usp=sharing&ouid=114168049607741321864&rtpof=true&sd=true
In your opinion, is there any chance to do it directly in Kibana?
I have no idea how to proceed.
You can use a Lens table visualization, set the number of rows to 6, and order the rows by descending order of your CPU load. Look at the sample data table here:
The average here is calculated for the top 6 values of bytes only.
With the same settings, you can try replacing clientip with detail and bytes with CPU load.
No, it is not possible to automatically remove the last N results from the equation in Kibana. You would have to manually filter them out of the visualization every time.
The only alternative I see is to add an extra step that deletes or flags the 3 results per hour you want to exclude, and then in Kibana you just add a regular filter.
The easiest way I can think of is creating a Watcher that groups the results by hour, sorts them by CPU, and then ingests the top 6 results into a different index you can query from Kibana.
Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-alerting.html
If this is acceptable for you, I can edit this answer with more details about the Watcher I would create.
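For reference, a minimal sketch of the hourly search such a Watcher (or a one-off Dev Tools query) could start from; metrics-* for the index and @timestamp, detail and cpu_load for the fields are assumptions (and older Elasticsearch versions use interval rather than fixed_interval):

    // Sketch only: index and field names are assumptions, adjust to your data.
    POST metrics-*/_search
    {
      "size": 0,
      "query": { "range": { "@timestamp": { "gte": "now-1h" } } },
      "aggs": {
        "per_hour": {
          "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" },
          "aggs": {
            "top6_by_cpu": {
              "top_hits": {
                "size": 6,
                "sort": [ { "cpu_load": { "order": "desc" } } ],
                "_source": [ "detail", "cpu_load" ]
              }
            }
          }
        }
      }
    }

top_hits returns documents rather than a metric, so the average of those six values would still be computed in the Watcher's transform (or client-side) before indexing or displaying it.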

Kibana chart time range - how to auto-dynamically set it?

I am trying to create a Kibana TSVB visualization that displays an "events per second" (EPS) metric for the most recently created Elasticsearch index of a particular index pattern. Currently I'm using a Count aggregation piped into the Math aggregation with the formula params.Count / (params._interval / 1000).
But this calculation is only accurate if the chart's time range is set to exactly the first and last timestamps in the index. Otherwise the empty buckets (both before and after the index's time span) are included in the EPS calculation. Currently I have to manually query the min/max timestamps of the index and then manually set the chart's time range in the upper right corner to match them; only then is the EPS calculated correctly.
So my question: is there a way to do this automatically? For example, can the chart's start and end times be set from the min and max timestamps of the particular index I'm looking at, or can the out-of-bounds time range be ignored?
Thanks
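For reference, the manual min/max lookup described above is just a pair of metric aggregations; a minimal sketch, with your-index-name as a placeholder:

    // Sketch only: replace your-index-name and @timestamp with your own index and time field.
    POST your-index-name/_search
    {
      "size": 0,
      "aggs": {
        "first_event": { "min": { "field": "@timestamp" } },
        "last_event":  { "max": { "field": "@timestamp" } }
      }
    }

The two returned values are what currently get pasted into the time picker by hand.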

Alternative for periodOverPeriodPercentDifference in QuickSight

Whenever I use periodOverPeriodPercentDifference in a QuickSight pivot table, it only returns values at the lowest granularity, row by row; if I collapse all the rows, the calculated field shows up blank. Is there an alternative to periodOverPeriodPercentDifference in QuickSight?
I tried periodOverPeriodPercentDifference but to no avail, so I am looking for an alternative. Can anyone help me with that?
Short description from the docs:
Calculates the percent difference of a measure over two different time periods, as specified by period granularity (defaults to the visual aggregation granularity) and offset (defaults to 1).

How to create value over time line chart in Kibana 4?

I'm facing the following problem: in Kibana 4 I've created a line chart based on my input from Elasticsearch, but I can only display the average, min, or max instead of the actual value of the field over time, e.g. sent bytes.
Most answers to this question on Stack Overflow are about Kibana 3 (How to create value over time chart with Kibana 3?) and seem to involve a Histogram on the X axis, yet I can't seem to apply them to Kibana 4: I was unable to find the histogram panel, and when I click the Discover tab it shows a constant "Searching" loading indicator.
If I have the following fields in my _source:
{"timestamp":"2015-06-02T10:16:44.0855","time":587,"threadName":"Thread Group 1-957","byte":1372,"status":"false","latence":306,"registerCall":"404"}
and I would like to have the number of bytes on the Y-axis and my timestamp on the X-axis.
Any help in the right direction will be appreciated :)
To create a value over time line chart in Kibana, follow these steps:
Go to the Visualize tab and select Line chart.
For the X-axis, choose the Date Histogram aggregation and select your timestamp field as the date field.
For the Y-axis, select Sum as the aggregation and bytes as the field.
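For reference, the query behind that visualization boils down to a date_histogram with a sum sub-aggregation, along these lines; your-log-index is a placeholder, timestamp and byte are the field names from the sample document, and recent Elasticsearch versions use fixed_interval where the Kibana 4 era used interval:

    // Sketch only: your-log-index is a placeholder.
    POST your-log-index/_search
    {
      "size": 0,
      "aggs": {
        "per_minute": {
          "date_histogram": { "field": "timestamp", "fixed_interval": "1m" },
          "aggs": {
            "bytes_sent": { "sum": { "field": "byte" } }
          }
        }
      }
    }

Each bucket's bytes_sent is what the line chart plots per interval.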
For the X axis, what Alcanzar said is good, but as you noticed, the Y axis is problematic.
Sum (suggested by "Limit") works, but since it's aggregated it shows the total used in each bucket, which may or may not be meaningful depending on what you are trying to show. Your question isn't clear on that, so I'm just guessing here. One hour of requests, each of which ran for one minute and sent 1 megabyte, is indeed 60 megabyte-minutes if you are trying to show total capacity used over that hour (maybe you are paying a bill based on usage per time). On the other hand, if you are trying to show peak usage at each point in time, it would be wrong.
You said you already looked at Max and Min and they don't meet your needs. I don't suppose Standard Deviation would be any better?
I have the same concern. The best I've been able to do so far is to display Min and Max simultaneously on the Y axis. When they diverge, I know I'm zoomed out too far, so I zoom in until they align.
This is how I know I'm seeing individual events.
In any case, I share your frustration. I too would like to be able to show time series as easily as I can in, say, Excel.

How to find and visualize a spike/burst with Kibana

What I have:
24 hours of logged data in Elasticsearch, with a numeric field containing the byte size of transmitted messages (microsecond timestamp granularity).
Via a date histogram I can easily drill down to millisecond intervals to spot network traffic spikes.
What I need:
a deterministic way to find the maximum traffic spike within the 24 hours, based on a fixed-size 100 ms interval:
find( max( sum(bytesize) per 100 ms interval ) over the 24 h range )
I'm new to the ELK stack, so any help on how and where (Elasticsearch or Kibana) to solve such a problem is appreciated.
If I understand correctly, something very similar to what you want can be done, and you are very close.
Click the 'pencil' icon in the top right corner of your graph to edit the visualization. In your 'Y-Axis' aggregation, choose 'Max' as the aggregation type.
In the 'X-Axis' aggregation section, I see you're already using a 'Date Histogram', so just set the interval to 'Second' (currently the smallest interval available in Kibana).
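Outside the Kibana UI, the find( max( sum(bytesize) per 100 ms ) ) computation from the question can also be expressed directly in the query DSL with a max_bucket pipeline aggregation. A minimal sketch, with placeholder index and field names; note that 100 ms buckets over 24 hours means several hundred thousand buckets, so you may need to narrow the time range or raise search.max_buckets:

    // Sketch only: index and field names are placeholders.
    POST network-logs/_search
    {
      "size": 0,
      "query": { "range": { "@timestamp": { "gte": "now-24h" } } },
      "aggs": {
        "per_100ms": {
          "date_histogram": { "field": "@timestamp", "fixed_interval": "100ms" },
          "aggs": {
            "bytes_in_bucket": { "sum": { "field": "bytesize" } }
          }
        },
        "max_spike": {
          "max_bucket": { "buckets_path": "per_100ms>bytes_in_bucket" }
        }
      }
    }

max_bucket returns both the largest summed value and the key (timestamp) of the bucket it occurred in.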
