How can I mark a field which represents the time that events occurred? - elasticsearch

I am using Kibana to search documents in Elasticsearch, and I found that Kibana marks a certain field which represents the time that an event occurred.
When I search an index containing such documents, I can make use of the datetime picker.
I noticed that for documents (in another index) without such a field, the datetime picker is missing. So how can I select a field and mark it as the event time?

This is handled at the index pattern level:
When creating your index pattern, you should be able to choose the "Time filter field name".
There you can choose the date field, and then the datetime picker will be available.
If you don't seem to have it in your current index pattern, create a new one and use it instead.
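Only fields mapped as date (or date_nanos) are offered in that dropdown. To check how a candidate field is mapped, you can query it from Dev Tools; a minimal sketch, assuming a hypothetical index my-index and field @timestamp:

GET my-index/_mapping/field/@timestamp

If the field comes back with any type other than date or date_nanos, it will not be listed as a possible "Time filter field name".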

When you declare your index mapping, apply the null_value parameter. It could simply have the value 0 (epoch second 0). That way, when you select the maximum date range, the query will pull in all your docs.
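A minimal mapping sketch, assuming Elasticsearch 7.x and a hypothetical index my-index. Note that null_value only applies to documents that explicitly send null for the field, not to documents where the field is absent:

PUT my-index
{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date",
        "null_value": "1970-01-01T00:00:00Z"  // epoch second 0
      }
    }
  }
}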

Related

Use @timestamp as metric in an Elastic dashboard

Problem
I am trying to build a dashboard in Elastic with a table to monitor job runs.
I want to have, per run, the minimum timestamp (i.e. the job start) and the number of processed messages. The minimum timestamp is my problem: I can't seem to get it.
What I have done
All my log lines have as (relevant) fields: @timestamp, nb_messages, run_id. run_id is unique per run, and a run creates multiple log lines.
I create a dashboard, add a TSVB panel, and select Table.
I use run_id as the field to group by.
I can use max(nb_messages) in my table without issue.
But if I use min(@timestamp), or any aggregation other than count, I just get a -.
I first tried with a Lens instead of a TSVB panel and had the same issue, but with the message: To use this function, select a different field.
I can confirm in the index that logging.timestamp has type date.
Question
Is there a way to use the timestamp as metric?
I would use a "normal" data table visualization (navigate to the Aggregation based option in the Visualization menu if you're using the latest version of Kibana) instead of the TSVB. There, the default metric is count, representing the number of events of the index pattern in the selected time range. You can use the min metric on the @timestamp field and aggregate/group your data as you want.
The prerequisite is, of course, that the selected index pattern contains an @timestamp field.
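For reference, what that data table computes corresponds to a terms aggregation with min and max sub-aggregations. A rough Dev Tools equivalent, assuming a hypothetical index my-logs and that run_id is mapped as keyword:

GET my-logs/_search
{
  "size": 0,
  "aggs": {
    "per_run": {
      "terms": { "field": "run_id" },
      "aggs": {
        "job_start": { "min": { "field": "@timestamp" } },
        "messages":  { "max": { "field": "nb_messages" } }
      }
    }
  }
}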
I hope I could help you.

How can I create "duplicated index patterns" in Kibana

I'm using Kibana 7.10.1.
I need to use a different 'time field' for each index pattern. Is it possible to set multiple time fields for the same index?
You can pick any date (or date_nanos) field as the primary time field in an index pattern; it is offered on the second page of the index pattern creation flow.
@timestamp is just a convention, though you will need to create a different index pattern for each combination of index(es) and primary time field.
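For example, a single index can hold several date fields, each of which can back its own index pattern. A minimal mapping sketch with hypothetical index and field names:

PUT tickets
{
  "mappings": {
    "properties": {
      "created_at":  { "type": "date" },
      "resolved_at": { "type": "date" }
    }
  }
}

You would then create two index patterns over tickets, one using created_at and one using resolved_at as the "Time filter field name".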

How do time filter shortcuts work in Kibana?

Kibana provides time filter shortcuts like Today, Yesterday, Last 10 days, etc. in the dashboard. I want to know how they work. When I click Today in Kibana, which field is used in the query? How can I configure these shortcuts to use custom timestamp fields?
When you first create your index pattern, you choose the "Time filter field name":
this field is used when you choose Last 15 minutes or any other time filter shortcut.
By default it is @timestamp, but you can use any date field from your field list.
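Under the hood, the shortcut is turned into a range filter with date math on that field. Today roughly translates to the following query (hypothetical index name my-index):

GET my-index/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now/d",  // rounded down to midnight today
        "lte": "now"
      }
    }
  }
}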

Elastic Search - Find document with a conflicting field type

I'm using Elasticsearch 5.6.2 with Kibana and I'm currently facing a problem.
My documents are indexed on the field timestamp, which is normally an integer; however, somebody recently logged a document with a timestamp that is not an integer, and Kibana complains of a conflicting type.
The Discover panel displays nothing and the following errors pop up:
Saved "field" parameter is now invalid. Please select a new field.
Discover: "field" is a required parameter
How can I look for the document(s) causing these conflicts, so as to find the service creating the bad logs?
The field type (either integer or text/keyword) is not defined on a per-document basis but rather on a per-index basis (in the mappings). I guess you are manipulating time series data, and you probably have an index per day (or per month, or ...).
In Kibana Dev Tools:
List the created indices with GET _cat/indices
For each index (logstash-2017.09.28 in my example), do a GET logstash-2017.09.28/_mapping and check the type of the timestamp field.
The field type is probably different between indices.
You won't be able to change the field type on already-created indices, and deleting the offending documents won't solve your problem. The only solution is to drop the index, or to reindex the whole index with a new field type (in a specific mapping), as sketched below.
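A reindex sketch with hypothetical index names; in Elasticsearch 5.x the mapping is declared under a mapping type (here log), and the new index must be created with the corrected mapping before reindexing:

PUT logstash-2017.09.28-fixed
{
  "mappings": {
    "log": {
      "properties": {
        "timestamp": { "type": "date" }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "logstash-2017.09.28" },
  "dest":   { "index": "logstash-2017.09.28-fixed" }
}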
To avoid this problem on future indices, the solution is to create an index template with a mapping declaring that the timestamp field is of type date (or whatever type you need).
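A template sketch in Elasticsearch 5.x syntax (the template key was renamed to index_patterns in 6.0); the template name timestamp_as_date is made up:

PUT _template/timestamp_as_date
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "timestamp": { "type": "date" }
      }
    }
  }
}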

Visualizing a single string of text in Kibana

In Kibana, I have an index that looks as follows:
type (String)
value (String)
timestamp (Date)
I would like to have a visualization that shows the most recent value field where the type is equal to "battery", for example.
I would like the visualization to be similar to the "Metric" one, but displaying a string of text instead of a number, of course.
Is this possible with Kibana? If not, how can I get a similar result?
You can use a Data Table visualization.
In the search query you would specify type: "Battery"
In the metric section you would specify Max timestamp
In the Split Rows section you would specify Aggregation=Terms, Field=value, OrderBy=metric:Max timestamp, Order=descending, Size=1
The result will be a table with 1 row and 2 columns, one being a value and the other a timestamp.
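That configuration boils down to a terms aggregation ordered by a max sub-aggregation. A rough Dev Tools equivalent, assuming a hypothetical index my-index and that value has a .keyword sub-field (the default dynamic mapping for strings):

GET my-index/_search
{
  "size": 0,
  "query": { "match": { "type": "battery" } },
  "aggs": {
    "latest": {
      "terms": {
        "field": "value.keyword",
        "size": 1,
        "order": { "last_seen": "desc" }
      },
      "aggs": {
        "last_seen": { "max": { "field": "timestamp" } }
      }
    }
  }
}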
If this does not satisfy your needs, you may look into available Kibana plugins that allow new visualizations (see the list of known plugins) or modify one of them to suit your needs.
