How to set custom @timestamp in Kibana? - elasticsearch

I am using an AWS-managed ELK setup (ver 6.4.3). As I understand it, Kibana uses the @timestamp field to sort the data.
I have a field named 'ts' (in epoch millis). I want to convert 'ts' to the same format as @timestamp and use 'ts' as the default time field in place of @timestamp.
I searched around and found that it involves index templates, but couldn't really understand much there!
It would be great if anyone could help me with this.
Thanks!
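
Not a full answer, but a minimal sketch of the index-template approach the question mentions, assuming a hypothetical index name pattern my-logs-* and an AWS endpoint that accepts unsigned requests (Python, using the requests library):

    import requests  # pip install requests; assumes the domain allows unsigned HTTP access

    ES = "https://my-es-domain.es.amazonaws.com"  # hypothetical AWS Elasticsearch endpoint

    # In 6.x, an index template can map 'ts' as a date that accepts epoch millis.
    # The mapping type name ("_doc") is still required on 6.4.
    template = {
        "index_patterns": ["my-logs-*"],  # hypothetical index name pattern
        "mappings": {
            "_doc": {
                "properties": {
                    "ts": {"type": "date", "format": "epoch_millis"}
                }
            }
        }
    }

    resp = requests.put(ES + "/_template/ts-as-date", json=template)
    print(resp.json())

The template only affects indices created after it exists, so existing indices would need to be reindexed or recreated. Once 'ts' is mapped as a date, delete and recreate the Kibana index pattern and pick 'ts' from the Time Filter field dropdown; Discover will then sort on 'ts' instead of @timestamp.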

Related

Time picker missing in Kibana Discover

I'm just learning Elasticsearch and Kibana. The time picker seems to be missing for my index.
However, I do have a date field in my index.
This is ES7. I see references to @timestamp on Google for previous versions, but I'm not sure what I should be doing in ES7.
Updated Nov. 14
Below is a portion of my document. The save_date field is what I want to use as the time field. The document has over 800 fields, so I didn't include the whole thing.
This is also a portion of the mapping that I'm interested in.
Yes, I was missing something basic: you set the timestamp field when you create the index pattern.
I had created the index pattern in Kibana, and as time went on I kept rebuilding the indexes trying different fields. I totally missed the timestamp dropdown.
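
For the record, the dropdown only offers fields that are mapped as date, so here is a minimal ES7-style sketch (endpoint and index name are made up; save_date is the field from the question):

    import requests  # endpoint and index name are illustrative

    ES = "http://localhost:9200"

    # ES7: no mapping types; save_date just needs to be a date for Kibana to offer it
    # as the time field when the index pattern is created.
    body = {
        "mappings": {
            "properties": {
                "save_date": {"type": "date"}  # e.g. "2019-11-14T10:15:30Z"
            }
        }
    }

    resp = requests.put(ES + "/my-index", json=body)  # create the index with this mapping
    print(resp.json())

After changing a mapping like this, recreate (or refresh) the index pattern in Kibana and choose save_date from the time field dropdown.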

ELK: Change the format of a field with Filebeat

The logs I upload to Kibana have a "mytimestamp" field, supposedly of type Date, but when I ingest the logs and parse them as JSON, my timestamp is converted to type string.
Do you know how I can convert my timestamp field from String to Date using Filebeat?
Do I have to use Logstash?
Thank you :)
This doesn't sound like an actual issue. A field can be a date in Kibana, but when you fetch results from Elasticsearch with the REST API you will get timestamps as strings, because JSON itself has no timestamp type; it's up to the application parsing the response to decide which fields are dates and parse them accordingly. None of the log ingestion tools will help you here.
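
To illustrate that last point, a tiny sketch of the consuming side (the hit below is sample data, not from the original logs):

    from datetime import datetime

    # Date fields come back from the Elasticsearch REST API as plain JSON strings;
    # it is the client that turns them back into real date objects.
    hit_source = {"mytimestamp": "2021-03-05T14:22:31.000Z", "message": "example"}  # sample hit

    ts = datetime.strptime(hit_source["mytimestamp"], "%Y-%m-%dT%H:%M:%S.%fZ")
    print(ts, type(ts))  # 2021-03-05 14:22:31 <class 'datetime.datetime'>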

Add field to crawled content with StormCrawler (and Elasticsearch)

I have followed this tutorial for crawling content with StormCrawler and storing it in Elasticsearch: https://www.youtube.com/watch?v=KTerugU12TY . However, I would like to add to every document the date it was crawled. Can anyone tell me how this can be done?
In general, how can I change the fields of the crawled content?
Thanks in advance
One option would be to create an ingest pipeline in Elasticsearch to populate a date field, as described here. Alternatively, you'd have to write a bespoke parse filter to put the date in the metadata and then index it using indexer.md.mapping in the configuration.
It would probably be useful to make this operation simpler, so please feel free to open an issue on GitHub (or, even better, contribute some code) so that the ES indexer could check the configuration for a field name indicating where to store the current date, e.g. es.now.field.
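
A rough sketch of the ingest-pipeline option (pipeline, field, and index names below are made up for illustration; {{_ingest.timestamp}} is the time the document was indexed):

    import requests  # endpoint is illustrative

    ES = "http://localhost:9200"

    # Pipeline that stamps each incoming document with the time it was ingested.
    pipeline = {
        "description": "add crawl date",
        "processors": [
            {"set": {"field": "crawl_date", "value": "{{_ingest.timestamp}}"}}
        ]
    }
    requests.put(ES + "/_ingest/pipeline/add-crawl-date", json=pipeline)

    # Make it the default pipeline for the crawl index so every document gets the field
    # without touching the StormCrawler configuration.
    requests.put(ES + "/my-crawl-index/_settings",
                 json={"index.default_pipeline": "add-crawl-date"})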

What does the @ mean in the Grafana metrics?

I have just started working with Grafana and Elastic. In the docs I see things like @timestamp or @value again and again. Is that a variable that you set somewhere?
Can this be used for any Elasticsearch database? I connected Elastic without Metricbeat… and can only get to the timestamp by going through an object, i.e. object.timestamp.
@ is used by Logstash (part of the Elastic Stack), as in @timestamp. The @timestamp field is set as the default, but you can change that, and other fields can be used instead (you need a field that can serve as a time or date for your graph to work). For example, if you have a 'time' field, you can use it instead of @timestamp by just typing 'time' in the text box of your query.
I don't know much about Logstash, but since they are both part of the Elastic Stack, I assume the @ fields will also appear in the Elasticsearch database. Hope this helps a little bit.

Getting Elasticsearch to utilize Bro timestamps through Logstash

I'm having some issues getting Elasticsearch to interpret an epoch-millis timestamp field. I have some old Bro logs I want to ingest and have them appear in the proper order and spacing. Thanks to "Logstash filter to convert "$epoch.$microsec" to "$epoch_millis"",
I've been able to convert the field holding the Bro timestamp to the proper number of digits. I've also inserted a mapping into Elasticsearch for that field, and it says that the type is "date" with the default format. However, when I go and look at the entries, the field still has a little "t" next to it instead of a little clock, and hence I can't use it as the time filter field in Kibana.
Does anyone have any thoughts, or has anyone dealt with this before? Unfortunately it's a standalone system, so I would have to manually enter any of the configs I'm using.
I did try converting my field "ts" back to an integer after using the method described in the link above, so it should be a Logstash integer before hitting the Elasticsearch mapping.
So I ended up just deleting all my mappings in Kibana and Elasticsearch, then resubmitted, and this time it worked. There must have been some old junk in there that was messing me up, but now it's working great!
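
For anyone who lands here later, a quick sanity check worth running before wiping everything (endpoint and index name are stand-ins): if the first Bro documents were indexed before the date mapping existed, "ts" may have been dynamically mapped as a number or text, which is exactly the kind of "old junk" that keeps the little clock from showing up.

    import requests  # endpoint and index name are illustrative

    ES = "http://localhost:9200"

    # Check what Elasticsearch actually has for "ts" rather than what the template says.
    resp = requests.get(ES + "/bro-logs/_mapping")  # "bro-logs" is a stand-in index name
    print(resp.json())

    # For epoch-millis input the field should look like:
    #   "ts": {"type": "date", "format": "epoch_millis"}

It can also help to refresh the index pattern's field list in Kibana after fixing the mapping, since Kibana caches field types.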