Visualizing a single string of text in Kibana - elasticsearch

In Kibana, I have an index that looks as follows:
type (String)
value (String)
timestamp (Date)
I would like to have a visualization that shows the most recent value field where the type is equal to "battery", for example.
I would like the visualization to be similar to the "Metric" one, but displaying a string of text instead of a number, of course.
Is this possible with Kibana? If not, how can I get a similar result?

You can use a Data Table visualization.
In the search query you would specify type: "Battery"
In the metric section you would specify Max timestamp
In the Split Rows section you would specify Aggregation=Terms, Field=value, OrderBy=metric:Max timestamp, Order=descending, Size=1
You will have a result that is a table with 1 row and 2 columns, one of which being a value and the other a timestamp
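For reference, the equivalent raw request would look roughly like the sketch below, run from Kibana Dev Tools (the index name my-index is a placeholder, and value.keyword assumes the value field has a keyword sub-field):

GET my-index/_search
{
  "size": 0,
  "query": {
    "match": { "type": "battery" }      // same filter as the search query above
  },
  "aggs": {
    "latest_value": {
      "terms": {
        "field": "value.keyword",       // the Split Rows terms aggregation
        "size": 1,
        "order": { "max_ts": "desc" }   // OrderBy=metric:Max timestamp, descending
      },
      "aggs": {
        "max_ts": { "max": { "field": "timestamp" } }   // the Max timestamp metric
      }
    }
  }
}

The single returned bucket is the value whose maximum timestamp is most recent, which is exactly what the Data Table renders.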
If this does not satisfy your needs, you may look into available Kibana plugins that add new visualizations (see the list of known plugins) or modify one of them to suit your needs.

Related

Use @timestamp as a metric in an Elastic dashboard

Problem
I am trying to build a dashboard in elastic with a table to monitor job runs.
I want to have, per run, the minimum timestamp (i.e. the job start) and the number of processed messages. The minimum timestamp is my problem; I can't seem to get it.
What I have done
All my log lines have as (relevant) fields: @timestamp, nb_messages, run_id. run_id is unique per run, and a run creates multiple log lines.
I create a dashboard, add a TSVB panel, and select Table.
I use run_id as the field to group by.
I can use max(nb_message) in my table without issue.
But if I use min(@timestamp), or any aggregation other than count, I just get a -.
I first tried with a Lens instead of a TSVB panel and had the same issue, but with the message: To use this function, select a different field.
I can confirm in the index that logging.timestamp has date as its type.
Question
Is there a way to use the timestamp as metric?
I would use a "normal" data table visualization (navigate through the "Aggregation based" options in the Visualization menu if you're using a recent version of Kibana) instead of TSVB. There, the default metric is count, representing the number of events of the index pattern in the selected time range. You can use the min metric on the @timestamp field and aggregate/group your data as you want.
The prerequisite, of course, is that the selected index pattern contains an @timestamp field.
I hope this helps.
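As a sketch, the raw aggregation behind such a table would look like this in Dev Tools (the index name job-logs is a placeholder, and run_id is assumed to be a keyword field):

GET job-logs/_search
{
  "size": 0,
  "aggs": {
    "per_run": {
      "terms": { "field": "run_id" },   // one bucket per run
      "aggs": {
        "job_start": { "min": { "field": "@timestamp" } },   // earliest log line = job start
        "messages": { "max": { "field": "nb_messages" } }    // processed message count
      }
    }
  }
}

A min aggregation on a date field returns an epoch-millis value along with a formatted date string, which the data table displays as a date.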

How can I create a list of values for a field in Kibana?

I am using Kibana to view data from an Elasticsearch index. There is a field that only has a few values. When I search on that field, how can I make the search bar a select rather than a free text input? I know there is a filter list like the one in the image below:
but that only shows the top 5 values within a sample of 500 records, so some values never show up. How can I show all values of the field as a list?
I think you are looking for the "controls" visualization.
Go to Visualization > Controls.
Then choose Options List, your index, and your field.
The result will be a dropdown with values, as if you did a SELECT DISTINCT on your field within the whole Kibana time range.
Add it to a dashboard to get a human-usable filtering interface.
Update:
Maybe a simple filter on the Discover page can answer your question.
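If you just need the raw list of distinct values, a terms aggregation over the whole index returns them directly. A sketch (my-index and my_field are placeholders, and size is raised well above the default of 10 so that rare values are not cut off):

GET my-index/_search
{
  "size": 0,                          // no hits needed, only the aggregation
  "aggs": {
    "distinct_values": {
      "terms": {
        "field": "my_field.keyword",  // use the keyword sub-field for exact values
        "size": 1000
      }
    }
  }
}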

elasticsearch - Tag data with lookup table values

I’m trying to tag my data according to a lookup table.
The lookup table has these fields:
• Key - represents the field name in the data I want to tag.
In the real data, the field is a subfield of the “Headers” field.
An example of the “Key” field:
“*Server*” (* is a wildcard).
• Value - represents the wanted value of the field mentioned above.
The value in the lookup table is only a part of a string in the real data value.
An example of the “Value” field:
“Avtech”.
• Vendor - the value I want to add to the real data if the field/value combination is found in a document.
An example of a matching combination in the real data:
“Headers.Server : Linux/2.x UPnP/1.0 Avtech/1.0”
A match with that document in the lookup table will be:
Key= Server (with wildcard on both sides).
Value= Avtech(with wildcard on both sides)
Vendor= Avtech
So basically, I’ll need to add a field to that document with the value “Avtech”.
The subfields in “Headers” are dynamic fields that change from document to document.
If a match is not found, I’ll need to set the tag field to the value “Unknown”.
I’ve tried to use the enrich processor, using the lookup table as the source data; the match field would be “Value” and the enrich field would be “Vendor”.
In the enrich processor I didn’t know how to reference the field, since it’s dynamic, and I wanted to check whether the value appears anywhere in the “Headers” subfields.
Also, I don’t think there will be a match between the “Value” in the lookup table and the value of the Headers subfield, since the “Value” field in the lookup table is a substring with wildcards on both sides.
I could use some help accomplishing what I’m trying to do, and with how to search with wildcards inside an enrich processor.
Or, if you have other ideas besides the enrich processor, such as the parent-child or terms lookup mechanism, those are welcome too.
Thanks!
Adi.
There are two ways to accomplish this:
Using the combination of Logstash & Elasticsearch
Using only the Elasticsearch ingest node
Constraint: You need to know the position at which the Vendor term occurs in the Header field.
Approach 1
If so, then you can use the grok filter to extract the term and, based on the term found, do a lookup to get the corresponding value.
Reference
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html
https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html
Approach 2
Create an index consisting of KV pairs. In the ingest node, create a pipeline that consists of a grok processor followed by an enrich processor; a sketch of such a pipeline is given after the references below. The grok would work the same way as in Approach 1. And you seem to have already got the enrich part working.
Reference
https://www.elastic.co/guide/en/elasticsearch/reference/current/grok-processor.html
If you are able to isolate the subfield within Headers where the term of interest is present, it will make things easier for you.
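To make Approach 2 concrete, here is a rough, untested sketch of what the enrich policy and pipeline could look like. The index name vendor-lookup, the subfield Headers.Server, and the assumption that the vendor term is always the third space-separated token are all illustrative guesses, not part of the original question:

// Enrich matching is exact, so grok first extracts the bare vendor term;
// this sidesteps the wildcard problem entirely.
PUT _enrich/policy/vendor-policy
{
  "match": {
    "indices": "vendor-lookup",
    "match_field": "Value",
    "enrich_fields": ["Vendor"]
  }
}

POST _enrich/policy/vendor-policy/_execute

PUT _ingest/pipeline/tag-vendor
{
  "processors": [
    {
      "grok": {
        "field": "Headers.Server",
        // e.g. "Linux/2.x UPnP/1.0 Avtech/1.0" -> vendor_candidate = "Avtech"
        "patterns": ["%{NOTSPACE} %{NOTSPACE} %{WORD:vendor_candidate}/%{NOTSPACE}"],
        "ignore_failure": true
      }
    },
    {
      "enrich": {
        "policy_name": "vendor-policy",
        "field": "vendor_candidate",
        "target_field": "vendor_info",
        "ignore_missing": true
      }
    },
    {
      "set": {
        "if": "ctx.vendor_info == null",   // no lookup hit -> tag as Unknown
        "field": "vendor_info.Vendor",
        "value": "Unknown"
      }
    }
  ]
}

The key design point is that the enrich processor only does exact matches, so the "substring with wildcards" part has to be handled by grok (or a script processor) before the lookup.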

Elasticsearch - How to find a specific element in a JSON-formatted field in your index

I have an index named web with a field called csr that has a datatype of string. The reason we made it so is that we have migrated from MySQL and are now trying to use Elasticsearch.
The csr field holds the linked/joined data from another table back in the MySQL phase (each web record HAS MANY linked csr records).
Here's an example screenshot of the index using Kibana:
Now, I'm trying to use a range query filter to display all the documents that fall within a certain range of csr.csr_story_value. For example, I want to display documents whose csr.csr_story_value values range from 4 to 5. However, since there are multiple JSON elements in the csr field, I presume the query matches against the highest value in the field. Here's an example screenshot:
MAIN QUESTION
Is there a way in Elasticsearch to add a match query to this range query so I can match my kgp_id and cli_id and pick out one specific JSON element in this field?
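For what it's worth, constraining several conditions to the same array element generally requires the csr field to be mapped with the nested datatype; with a plain object array, the conditions can match across different elements. A sketch of such a query (the ID values are made up, and it assumes a nested mapping for csr):

GET web/_search
{
  "query": {
    "nested": {
      "path": "csr",                          // requires csr to be a nested field
      "query": {
        "bool": {
          "must": [
            { "term": { "csr.kgp_id": 321 } },   // made-up IDs for illustration
            { "term": { "csr.cli_id": 11 } },
            { "range": { "csr.csr_story_value": { "gte": 4, "lte": 5 } } }
          ]
        }
      }
    }
  }
}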

Kibana unique count seeing names with - as different entries

I have a problem with the unique count feature.
I get data from Elasticsearch, for example a computer name (PC-01) in a field.
When I use unique count in a visualization, Kibana turns "DESKTOP-2D562R2" into two entries, "DESKTOP" and "2D562R2".
See this split field:
The data Kibana gets from Elasticsearch looks like this:
The problem with this is that 2d562r2 and desktop end up as two different entries in a Kibana table or in a unique count.
Your field is being analyzed (split into tokens). Change the mapping (or the template, depending on how you're creating the indexes) to make this field not_analyzed.
Note that, as a hack, Logstash's default template creates a ".raw" version of each string field that is not analyzed. You could refer to the .raw version of your field.
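As a sketch, a pre-5.x mapping for such a field could look like the following (index, type, and field names are placeholders; on Elasticsearch 5.x and later you would use "type": "keyword" instead):

PUT my-index
{
  "mappings": {
    "logs": {
      "properties": {
        "computer_name": {
          "type": "string",
          "index": "not_analyzed"   // keep the whole value as a single token
        }
      }
    }
  }
}

With this mapping, "DESKTOP-2D562R2" stays one term, so unique counts and terms tables treat it as a single entry.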
