Using Scripted field as time variable in Kibana

Kibana allows you to conveniently filter data or visualizations based on time.
Kibana is supposed to automatically detect a "time field" and use it for time-based filtering. In my specific case the field providing the time information is a scripted field: how can I specify that I want to use it for time-based filtering operations?

You can create scripted fields in Kibana as described in this link.
Basically, if you have an index pattern, click on it and you should see an Add scripted field section. I suggest you explore it.
Once you do that, the scripted field you created for that index will show up in the visualizer, and you can make use of it there.
For example, I've created a field named myscript with doc['date'].value as its script.
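If you want to sanity-check such a script outside of Kibana, the same Painless source can be run through the search API's script_fields option (a minimal sketch; my-index and the date field name are placeholders for your own):

GET my-index/_search
{
  "script_fields": {
    "myscript": {
      "script": {
        "lang": "painless",
        "source": "doc['date'].value"
      }
    }
  }
}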
Important Note: You can only make use of this scripted date field as a filter option.
Kibana doesn't have an option to use a scripted field as the default date field, the time filter field, or the date field for TSVB, presumably because that requires the field to be indexed.
Hope it helps!

Update: Kibana now supports using Runtime Fields in TSVB visualizations. They have been available since 7.11 and GA since 7.12.
Runtime fields appear in TSVB just like any other field (but may be slower to aggregate on).
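For example, a runtime-field equivalent of the scripted field above could be added to the index mapping like this (a sketch assuming the source field is called date; runtime fields can also be defined on the index pattern in Kibana):

PUT my-index/_mapping
{
  "runtime": {
    "my_runtime_date": {
      "type": "date",
      "script": {
        "source": "emit(doc['date'].value.toInstant().toEpochMilli())"
      }
    }
  }
}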

Related

elasticsearch unknown setting index.include_type_name

I'm in a really weird situation: I need to create indexes in Elasticsearch that contain typeless fields. I have a Rails application that sends data to my Elasticsearch every second. About my architecture: I use the Elastic Stack on Docker on an Ubuntu server, use a socket to send the data to ELK, and everything is on the latest version.
In my Rails application the user can choose the datatype for each field, but the issue happens when the user wants to change the datatype of a field right after it's created; Logstash returns this error:
error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [field] of type [long] in document with id '5e760cac-cafc-4fd0-9e45-1c650967ccd4'. Preview of field's value: '2022-01-18T08:06:30'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"2022-01-18T08:06:30\
I found the dead letter queue plugin to save the wrong input on my server. After that, I thought the problem would be solved if I could index documents without any type, so I started googling and found Removal of mapping types in the Elasticsearch documentation. When I follow the instructions described there I get the following error:
unknown setting [index.include_type_name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
Even when I put "include_type_name" in the request sent to Elasticsearch, nothing changes; I have the latest version of Elasticsearch.
I thought it might help to edit the default Elasticsearch template, but again nothing changed. Could you please help me with what I should do?
As already mentioned in the comments, Elasticsearch does not support changing the data type of a field without reindexing or creating a new index.
For example, if a field is mapped as a numeric field like integer and the user wants to index a string value into it, Elasticsearch will return a mapping error.
You would need to change the mapping of the index and reindex it, or create an entirely new index with the new mapping.
None of this is done automatically by Elastic; you need to deal with it in your application. You could catch the error and implement some logic to create a new index with the new mapping, but this can lead to other problems, such as having too many indices in the cluster, or query errors when the range of a query includes indices where the same field has different mappings.
One Elasticsearch feature that could help you in some way is runtime fields: with runtime fields you can query a field that has a specific mapping using a different mapping.
For example, if you have a field that holds date values but was wrongly mapped as a keyword or text field, you could use a runtime field to query it as if it were a date field.
But again, this requires you to implement logic to build those runtime fields, and it can lead to other problems: not all data types are available to runtime fields, and runtime fields can impact performance.
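As a sketch of what such a runtime field could look like, assuming a date value such as 2022-01-18T08:06:30 ended up in a keyword field (the index, field, and runtime-field names here are hypothetical):

GET my-index/_search
{
  "runtime_mappings": {
    "field_as_date": {
      "type": "date",
      "script": {
        "source": "emit(LocalDateTime.parse(doc['field'].value).toInstant(ZoneOffset.UTC).toEpochMilli())"
      }
    }
  },
  "query": {
    "range": {
      "field_as_date": { "gte": "2022-01-01T00:00:00" }
    }
  }
}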
Another feature that could help you is multi-fields; this, I think, is the closest you can get to having a field with multiple data types.
Using multi-fields you could have a field named date with the date type and also a field named date.keyword with the keyword type; likewise, a field named code with the keyword type and a field named code.int with the integer type. You would also need to use the ignore_malformed setting in the mapping so that Elasticsearch does not reject the entire document in case of a mapping error, only the field with the wrong mapping.
Just keep in mind that with multi-fields you get a separate field per mapping: date is one field and date.keyword is another, which increases storage usage.
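A mapping sketch of the code / code.int example above (the index and field names are just illustrations):

PUT my-index
{
  "mappings": {
    "properties": {
      "code": {
        "type": "keyword",
        "fields": {
          "int": {
            "type": "integer",
            "ignore_malformed": true
          }
        }
      }
    }
  }
}

With this mapping, a document whose code value is not numeric is still indexed; only the code.int sub-field is skipped thanks to ignore_malformed.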
But again, none of this is done automatically; it needs logic in your application. Elasticsearch does not allow you to change the mapping of a field, so if your application needs that, you will have to implement something that works within this limitation.

Use #timestamp as metric in an elastic dashboard

Problem
I am trying to build a dashboard in Elastic with a table to monitor job runs.
I want, per run, the minimum timestamp (i.e. the job start) and the number of processed messages. The minimum timestamp is my problem: I can't seem to get it.
What I have done
All my log lines have as (relevant) fields: #timestamp, nb_messages, run_id. run_id is unique per run, and a run creates multiple log lines.
I create a dashboard, add a TSVB panel, and select Table.
I use run_id as the field to group by.
I can use max(nb_messages) in my table without issue.
But if I use min(#timestamp), or any aggregation other than count, I just get a -.
I first tried with a Lens instead of a TSVB panel and had the same issue, but with the message: To use this function, select a different field.
I can confirm in the index that logging.timestamp is of type date.
Question
Is there a way to use the timestamp as metric?
I would use a "normal" data table visualization (navigate through the Aggregation based option in the Visualization menu if you're using the latest version of Kibana) instead of TSVB. There, the default metric is count, representing the number of events of the index pattern in the selected time range. You can use the min metric on the #timestamp field and aggregate/group your data as you want.
The prerequisite, of course, is that the selected index pattern contains a #timestamp field.
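Under the hood, such a table boils down to a terms aggregation with min/max sub-aggregations, which you can also try directly in Dev Tools (a sketch using the field names from the question; my-index is a placeholder):

GET my-index/_search
{
  "size": 0,
  "aggs": {
    "per_run": {
      "terms": { "field": "run_id" },
      "aggs": {
        "job_start": { "min": { "field": "#timestamp" } },
        "messages": { "max": { "field": "nb_messages" } }
      }
    }
  }
}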
I hope I could help you.

Does updating Elasticsearch indices require updating the Kibana index pattern?

I am using Elasticsearch, with Kibana as a plugin to view the data in the indices. I am using Kibana's DevTools to send commands for adding/deleting/updating indices, etc.
I want to add a field to a certain text property so it will have a keyword sub-field, to be able to both run full-text searches and aggregate on this property.
1) Does a change like that mean I need to update Kibana's index pattern as well?
2) I have read Elasticsearch's docs on PUT Mapping and know how to use it to update the indices themselves, but I don't know how to update the index patterns. I read that the same API should be used to update them, but I don't know how to see an index pattern's original mapping in order to update it.
Yes, if you change the index mapping in ES, then you need to go into Kibana and refresh the related index patterns.
Right now, you need to go inside Kibana (Management > Index patterns), select the index pattern, and press the "Refresh" button at the top right of the window in order to pick up the mapping changes.
Also note that if you updated some text fields in order to have a keyword sub-field, you'll also need to call the _update_by_query API on your index in order to reindex the changed field in all your documents.
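For example, to add a keyword sub-field to an existing text property and then reindex the documents in place (a sketch; my-index and title are placeholder names):

PUT my-index/_mapping
{
  "properties": {
    "title": {
      "type": "text",
      "fields": {
        "keyword": { "type": "keyword" }
      }
    }
  }
}

POST my-index/_update_by_query?conflicts=proceed

An _update_by_query with no script re-indexes each document as-is, which is enough to populate the new title.keyword sub-field.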

Kibana Multiple Representations of single field

When I'm looking at my index in Kibana, there are many representations of the same field in the index.
What I'm wondering is: can I configure a way to hide the values that end users in Kibana don't need to see? I don't see anything in the "edit" section for each field that enables me to do so.
I'm trying to make Kibana as user friendly as possible for end users, and having two different representations is going to be confusing.
Is it something I need to configure in the mapping? Sorry, I'm just getting used to the new Kibana interface.
You can filter out a field by adding it to the source filters on the Index Pattern. Management > Index Patterns > Source Filters

How to do "where not exists" type filtering in Kibana/ELK?

I am using ELK to create dashboards from my log files. I have a log file with entries that contain an id value and a "success"/"failure" value, displaying whether an operation with a given id succeeded or failed. Each operation/id can fail an unlimited number of times and succeed at most once. In my Kibana dashboard I want to display the count of log entries with a "failure" value for each operation id, but I want to filter out cases where a "success" log entry for the id exists. i.e. I am only interested in operations that never succeeded. Any hints for tricks that would achieve this?
This is easy in the Kibana 5 search bar. Just add a filter:
!(_exists_:"your_variable")
You can toggle the filter, or write the inverse query as:
_exists_:"your_variable"
In Kibana 4 and Kibana 3 you could use this query, which is now deprecated:
_missing_:"your_variable"
NOTE: In Elasticsearch 7.x, Kibana has a pull-down in the search bar to select KQL or Lucene style queries. Be mindful that syntax such as _exists_:FIELD is Lucene syntax, so you need to set the pull-down accordingly.
In newer ELK versions (I think after Elasticsearch 6) you should use field:* to check that the field exists, and not field:* to check that it's missing.
Elasticsearch reference:
https://www.elastic.co/guide/en/elasticsearch/reference/6.5/query-dsl-query-string-query.html#_wildcards
! (_exists_:NAME) was not working for me. I used the suggestion from:
https://discuss.elastic.co/t/kibana-5-0-0--missing--is-not-working-anymore/64336
NOT _exists_:NAME
UPDATE: The problem I faced is that ES syntax forbids spaces after negation operators. Use one of:
NOT _exists_:FIELD
!_exists_:FIELD
-_exists_:FIELD
Check this tutorial: https://www.timroes.de/2016/05/29/elasticsearch-kibana-queries-in-depth-tutorial/
In newer versions of Kibana the default language is now KQL (Kibana Query Language), not Lucene anymore, so most answers here are outdated. The query to check whether a field exists is:
your_variable:*
and to answer your question you can just negate that:
not your_variable:*
You can find more documentation here: https://www.elastic.co/guide/en/kibana/7.15/kuery-query.html
You can also toggle back to Lucene via the button inside the search field, but in my opinion the new language is way easier to use.
One option would be to create your own query for this criterion in Kibana, then have the panel that does the counting use this query:
value:failure
More information here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax
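If you need the same filter outside of the Kibana search bar, e.g. in a raw request from Dev Tools, the Lucene syntax from the answers above maps onto a query_string query (a sketch; my-index and your_variable are placeholders):

GET my-index/_search
{
  "query": {
    "query_string": {
      "query": "NOT _exists_:your_variable"
    }
  }
}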
