Grafana - Show metric by field value - elasticsearch

I'm currently trying to create a graph on Grafana to monitor the status of my servers, however, I can't seem to find a way to use the value of a field as the value to be displayed on the graph. (Datasource is ElasticSearch)
The following document is sent to Graylog (which saves to Elasticsearch) every minute for an array of regions.
{
  "region_key": "some_key",
  "region_name": "Some Name",
  "region_count": 1610
}
By using the following settings, I can get Grafana to display the count of messages it received for each region; however, I want to display the value of the region_count field instead.
Result:
How can I accomplish this? Is this even possible using Elasticsearch as the datasource?

1) Make sure that your document includes a timestamp in Elasticsearch.
2) In the Query box, provide the Lucene query which narrows down the documents to only those related to this metric.
3) In the Metric line, press "Count" and change it to one which takes a specific field: for example, "Average".
4) Next to the "Average" box, "select field" will appear, which is a dropdown of the available fields. If you see unexpected field names here, it's probably because your Lucene query isn't specific enough. (Kibana can be useful for getting this query right.)
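Under the hood, Grafana essentially translates this into a date histogram with an avg sub-aggregation. A minimal sketch of the equivalent Elasticsearch query, assuming a @timestamp time field and the region_count field from the question (index name, filter, and interval are illustrative and depend on your setup):
GET your-index/_search
{
  "size": 0,
  "query": {
    "query_string": { "query": "region_key:some_key" }
  },
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" },
      "aggs": {
        "avg_region_count": { "avg": { "field": "region_count" } }
      }
    }
  }
}
Since the value is written once per minute, Average (or Max) per interval effectively reproduces the raw region_count value.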

Related

Use #timestamp as metric in an elastic dashboard

Problem
I am trying to build a dashboard in Elastic with a table to monitor job runs.
For each run I want the minimum timestamp (i.e. the job start) and the number of processed messages. The minimum timestamp is my problem; I can't seem to get it.
What I have done
All my log lines have the (relevant) fields #timestamp, nb_messages, and run_id. run_id is unique per run, and a run creates multiple log lines.
I create a dashboard, add a TSVB panel, and select Table.
I use run_id as the field to group by.
I can use max(nb_message) in my table without issue.
But if I use min(#timestamp), or any other aggregation than count, I just get a -.
I first tried with a Lens instead of a TSVB panel, and I had the same issue, but with the message: To use this function, select a different field.
I can confirm that in the index, logging.timestamp has type date.
Question
Is there a way to use the timestamp as metric?
I would use a "normal" data table visualization (navigate through the Aggregation based option in the Visualization menu if you're using the latest version of Kibana) instead of the TSVB. There, the default metric is Count, representing the number of events of the index pattern in the selected time range. You can use the Min metric on the #timestamp field and aggregate/group your data as you want.
The prerequisite, of course, is that the selected index pattern contains a #timestamp field.
I hope this helps.
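If you want to verify on the Elasticsearch side that the data supports this, a terms aggregation on run_id with min/max sub-aggregations produces the same table. A minimal sketch, assuming the timestamp field is @timestamp and run_id is mapped as a keyword field (adjust index and field names to your mapping):
GET your-index/_search
{
  "size": 0,
  "aggs": {
    "runs": {
      "terms": { "field": "run_id", "size": 100 },
      "aggs": {
        "job_start": { "min": { "field": "@timestamp" } },
        "messages": { "max": { "field": "nb_messages" } }
      }
    }
  }
}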

How to modify data table column values in Kibana

In Elasticsearch we store events, and I have built a data table which aggregates based on event types. I have a filter which checks for event.keyword : "job-completed". I am getting the count as 1 or 0, but I want to display it as completed / in-progress.
How can I achieve this in Kibana?
The best and most effective way to do this is to add another field and populate it at ingest time.
This is the best solution in terms of performance, but it can involve a fair amount of work; a sketch of such a pipeline is shown below.
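A minimal sketch of such an ingest pipeline, assuming the raw value is stored in a field named event and that you want the label in a new status field (the field and pipeline names are illustrative, not taken from your data):
PUT _ingest/pipeline/add-status
{
  "processors": [
    {
      "script": {
        "source": "ctx.status = (ctx.event == 'job-completed') ? 'completed' : 'in progress';"
      }
    }
  ]
}
Then index your documents with ?pipeline=add-status, or set it as the index's default_pipeline.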
You can also use a scripted field to do this without touching your data.
Go to Stack Management > Kibana > Index Patterns and select your index.
Select the Scripted fields tab and fill in the form.
Name : your_field
language: painless
type: string
format: string
script:
if (doc['event.keyword'].value == 'job-completed') {
    return "completed";
} else {
    return "in progress";
}
I have too little information about your real data to give you working code, so you'll have to modify it to fit your needs.
Then refresh your visualization and you can use your new field.

Kibana return results where a specific field is unique

So I have been used to building queries like this in Graylog and am struggling to get my head around Kibana, so I need a few pointers to get me going.
I have an index that I want to search for various terms in a particular field. For example, I want to search the index for the term "MFA", and this term will be in the adaptorid field. This returns some results; so far so good, but I would like to filter this a little more.
One field in particular that is of interest is trackingid; in fact, it is the only field that I care about. The returned results can contain multiple duplicate trackingids for each matched adaptorid.
What I would like to do is dedupe the trackingids so that I can get a count of the unique trackingids. The adaptorid field really doesn't matter in the final results and is just used to identify a particular subset of trackingid values from the index.
Assuming you are using Kibana 6.5+
Go to Kibana -> Visualize -> click the + icon -> select Data Table -> select your index -> under Metrics, select the Unique Count aggregation and your field trackingid -> click the Play icon.
This will give you the count of unique trackingids in your index. Now you can use Kibana's Add a filter option at the top to filter on MFA in adaptorid.
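The equivalent Elasticsearch query is essentially a cardinality aggregation on trackingid, filtered on adaptorid. A minimal sketch, assuming both are keyword fields (adjust index and field names to your mapping):
GET your-index/_search
{
  "size": 0,
  "query": {
    "match": { "adaptorid": "MFA" }
  },
  "aggs": {
    "unique_tracking_ids": {
      "cardinality": { "field": "trackingid" }
    }
  }
}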

How to name graph in grafana with elastic datasource

I have a Grafana + Elasticsearch stack.
How do I set the labels of the charts?
Right now they are all "Max", based on the metric type.
Current query:
I want to set a unique label for every query chart.
You can use the Alias field, which is also visible in your screenshot. You may want to check the link below for more information about how to name time series in Grafana when Elasticsearch is the data source.
https://grafana.com/docs/features/datasources/elasticsearch/#series-naming-alias-patterns
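As a rough illustration (the exact pattern names are documented at the link above, so treat this as a sketch): if your queries group by a terms field, an alias such as
{{term your_group_by_field}} - {{metric}}
labels each series with the value of the group-by field plus the metric name, instead of the generic "Max".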

How to retrieve unique count of a field using Kibana + Elastic Search

Is it possible to query for a distinct/unique count of a field using Kibana? I am using elastic search as my backend to Kibana.
If so, what is the syntax of the query? Here's a link to the Kibana interface where I would like to make my query: http://demo.kibana.org/#/dashboard
I am parsing nginx access logs with logstash and storing the data into elastic search. Then, I use Kibana to run queries and visualize my data in charts. Specifically, I want to know the count of unique IP addresses for a specific time frame using Kibana.
For Kibana 4 go to this answer
This is easy to do with a terms panel:
If you want to see the count of distinct IPs that are in your logs, you should specify clientip in the Field box, put a big enough number in Length (otherwise, it will join different IPs under the same group), and select table in the Style field. After adding the panel, you will have a table with each IP and the count of that IP:
Now Kibana 4 allows you to use aggregations. Apart from building a panel like the one explained in this answer for Kibana 3, we can now see the number of unique IPs in different periods, which was (IMO) what the OP wanted in the first place.
To build a dashboard like this you should go to Visualize -> Select your Index -> Select a Vertical Bar chart and then in the visualize panel:
In the Y axis we want the unique count of IPs (select the field where you stored the IP), and in the X axis we want a date histogram with our time field.
After pressing the Apply button, we should have a graph that shows the unique count of IPs distributed over time. We can change the time interval on the X axis to see the unique IPs hourly/daily...
Just take into account that the unique counts are approximate. For more information check also this answer.
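What this visualization runs under the hood is essentially a date histogram with a cardinality sub-aggregation. A minimal sketch of the equivalent query, assuming the IP is stored in clientip and the time field is @timestamp (names and interval are illustrative):
GET your-index/_search
{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" },
      "aggs": {
        "unique_ips": { "cardinality": { "field": "clientip" } }
      }
    }
  }
}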
Be aware that with Unique Count you are using the "cardinality" metric, which does not always guarantee an exact unique count. :-)
The cardinality metric is an approximate algorithm. It is based on the HyperLogLog++ (HLL) algorithm. HLL works by hashing your input and using the bits from the hash to make probabilistic estimations on the cardinality.
Depending on the amount of data, I have seen differences of 700+ entries missing in a 300k dataset via Unique Count in Elastic, even though they are actually unique.
Read more here: https://www.elastic.co/guide/en/elasticsearch/guide/current/cardinality.html
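If you query Elasticsearch directly, the accuracy/memory trade-off can be tuned with the precision_threshold parameter of the cardinality aggregation; counts below the threshold are close to exact. A small sketch, with the field name assumed to be clientip:
GET your-index/_search
{
  "size": 0,
  "aggs": {
    "unique_ips": {
      "cardinality": {
        "field": "clientip",
        "precision_threshold": 40000
      }
    }
  }
}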
Create "topN" query on "clientip" and then histogram with count on "clientip" and set "topN" query as source. Then you will see count of different ips per time.
Unique counts of field values are achieved by using facets. See the ES documentation for the full story, but the gist is that you create a query and then ask ES to prepare facets on the results for counting values found in fields. It's up to you to customize the fields used and even describe how you want the values returned. The most basic of the facet types is just to group by terms, which would be like the IP address above. You can get pretty complex with these, even requiring a query within your facet! (Each facet gets a name of your choosing, ip_addresses in the example below; note also that facets have since been replaced by aggregations in newer Elasticsearch versions.)
{
  "query": {
    "match_all": {}
  },
  "facets": {
    "ip_addresses": {
      "terms": {
        "field": "ip_address"
      }
    }
  }
}
Using aggs, you can easily do that. Here is the query:
GET index/_search
{
  "size": 0,
  "aggs": {
    "source": {
      "terms": {
        "field": "field",
        "size": 100000
      }
    }
  }
}
This would return the different values of the field along with their doc counts.
For Kibana 7.x, Unique Count is available in most visualizations.
For example, in Lens:
In aggregation based visualizations:
And even in TSVB (supporting normal fields as well as Runtime Fields; Scripted Fields are not supported):
