Time picker missing in Kibana Discover - elasticsearch

Just learning Elasticsearch and Kibana. It seems the time picker is missing for my index,
even though I do have a date field in my index.
This is ES 7. I see references to @timestamp on Google for previous versions, but I'm not sure what I should be doing in ES 7.
Updated Nov. 14
Below is a portion of my document. The save_date field is what I want the time filter to use. The document has over 800 fields, so I didn't paste the whole thing.
This is also the portion of the mapping that I'm interested in.

Yes, I was missing something basic: you select the time field when you create the index pattern.
I had created the index pattern in Kibana, and as time went on I kept rebuilding the indexes trying different fields. I totally missed the time-field dropdown.
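For reference, a date field like save_date only shows up in the index pattern's time-field dropdown if it is mapped as a date. A minimal sketch of such a mapping for ES 7 (the field name save_date comes from the question; the index name and format choice are illustrative):

```python
import json

# Minimal ES 7.x mapping sketch: save_date mapped as a date so Kibana
# offers it in the index pattern's time-field dropdown.
mapping = {
    "mappings": {
        "properties": {
            "save_date": {
                "type": "date",
                # accept both ISO dates and epoch millis (illustrative choice)
                "format": "strict_date_optional_time||epoch_millis",
            }
        }
    }
}

# Body for: PUT /my-index  (index name is hypothetical)
print(json.dumps(mapping, indent=2))
```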

Related

Ways to only process new (indexed after last run) data in Elasticsearch?

Is there a way to get the date and time that an Elasticsearch document was written?
I am running ES queries via Spark and would prefer NOT to look through all documents that I have already processed. Instead, I would like to read only the documents that were ingested between the last time the program ran and now.
What is the most efficient way to do this?
I have looked at:
- updating each document to add an array of booleans recording whether it has been processed by each analytic. The downside is waiting for the update to occur.
- an index-per-time-frame approach, which would break the current indexes down into smaller ones, e.g. by hour. The downside I see is the number of open file descriptors.
- ??
Elasticsearch version 5.6
I posted the question on the Elasticsearch discussion board, and it appears using an ingest pipeline is the best option.
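The ingest-pipeline approach can be sketched as follows: a pipeline with a `set` processor that writes `{{_ingest.timestamp}}` stamps every document with the time it was ingested (the pipeline and field names here are hypothetical):

```python
import json

# Ingest pipeline sketch: stamp every document with its ingest time.
# {{_ingest.timestamp}} is resolved by Elasticsearch as the document
# passes through the pipeline.
pipeline = {
    "description": "Add an ingest timestamp to each document",
    "processors": [
        {"set": {"field": "ingested_at", "value": "{{_ingest.timestamp}}"}}
    ],
}

# Body for: PUT /_ingest/pipeline/add-ingest-time  (pipeline name is hypothetical)
print(json.dumps(pipeline, indent=2))
```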
A workaround could be:
When inserting data into Elasticsearch via Logstash, Logstash appends an @timestamp key to each document representing the time (in UTC) at which the document was created; alternatively, an ingest pipeline can do the same.
After that, we can query based on the timestamp.
For more on this, please have a look at: Mapping changes
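Once documents carry such a timestamp, each Spark run can fetch only the new ones with a range query. A sketch (the @timestamp field name and the persisted last-run value are assumptions):

```python
import json

# Range query sketch: fetch only documents ingested since the last run.
last_run = "2017-11-01T00:00:00Z"  # persisted from the previous run (illustrative)
query = {
    "query": {
        "range": {
            "@timestamp": {
                "gt": last_run,  # strictly after the last run
                "lte": "now",    # up to the current moment (ES date math)
            }
        }
    }
}

# Body for: GET /my-index/_search  (index name is hypothetical)
print(json.dumps(query))
```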
There is no way to ask ES to insert a timestamp at index time; Elasticsearch doesn't have such functionality.
You need to save a date with each document manually. Then you will be able to search by date range.

add time field to Kibana index results in No data found

I am new to Elasticsearch and Kibana. I have just downloaded the latest versions and am trying to work through the Logstash example described in this link:
https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html
I added the logstash* index with "Index contains time-based events" checked and chose @timestamp as the time field name. When I go to Discover, I do not see anything and get "No results found". If I create the index pattern in Kibana without checking the time-based checkbox, I can see the data. Any idea why? I have Java 1.8.0_111.
So actually there is data; it's just that the data is old and the timestamps in it are around May 2015. When you create a new index pattern, the default time range selected is, I think, the last 15 minutes, which won't find that data, since the data in that tutorial is old.
So try these steps:
- When you go to Discover, click the Time Picker in the Kibana toolbar.
- Click Absolute and select a date range starting from Jan 2015 or so.
This should load your results.
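If you are unsure what time range the data actually covers, a min/max aggregation on @timestamp tells you where to point the time picker. A sketch (the index pattern is illustrative):

```python
import json

# Aggregation sketch: find the earliest and latest @timestamp in an index
# so you know which absolute range to select in Kibana's time picker.
body = {
    "size": 0,  # we only want the aggregations, not the hits
    "aggs": {
        "oldest": {"min": {"field": "@timestamp"}},
        "newest": {"max": {"field": "@timestamp"}},
    },
}

# Body for: GET /logstash-*/_search
print(json.dumps(body))
```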

Reindexing Elasticsearch or updating indexes?

I am new to Elasticsearch and can't figure out how to update an Elasticsearch index, type, or document without deleting and reindexing. Or is deleting and reindexing the best way to achieve it?
So if I have products in my SQL product table, should I delete the product type and reindex it, or even the entire DB as an index on Elasticsearch? What is the best approach and how can I achieve it?
I would like to do it with NEST preferably, but if it is easier, the plain Elasticsearch API works for me as well.
Thanks
This can be a real challenge! Historic records in Elasticsearch will need to be reindexed when the template changes. New records will automatically be formatted according to the template you specify.
Using this link has helped us a lot:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
You'll want to be sure to have the Logstash filter set up to match the fields in your template.
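After changing a template or mapping, existing data can be copied into a new index with the Reindex API. A minimal sketch (the index names are hypothetical):

```python
import json

# Reindex API sketch: copy documents from the old index into a new one
# that was created under the updated template/mapping.
body = {
    "source": {"index": "products_v1"},
    "dest": {"index": "products_v2"},
}

# Body for: POST /_reindex
print(json.dumps(body))
```

An alias pointing at the current index lets queries keep working while you swap versions underneath.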

Getting elasticsearch to utilize Bro timestamps through Logstash

I'm having some issues getting Elasticsearch to interpret an epoch-millis timestamp field. I have some old Bro logs I want to ingest and have them appear in the proper order and spacing. Thanks to "Logstash filter to convert "$epoch.$microsec" to "$epoch_millis"",
I've been able to convert the field holding the Bro timestamp to the proper number of digits. I've also inserted a mapping into Elasticsearch for that field, and it says the type is "date" with the default format. However, when I look at the entries, it still has a little "t" next to it instead of a little clock, and hence I can't use it as my time filter reference in Kibana.
Anyone have any thoughts, or has anyone dealt with this before? Unfortunately it's a standalone system, so I would have to enter any of the configs manually.
I did try converting my field "ts" back to an integer after using the method described in the link above, so it should be a Logstash integer before hitting the Elasticsearch mapping.
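The "$epoch.$microsec" to epoch-millis conversion itself is simple. A Python sketch of what the Logstash filter computes (Bro's ts field holds epoch seconds with a fractional part):

```python
def to_epoch_millis(ts: str) -> int:
    """Convert a Bro-style '$epoch.$microsec' string to integer epoch millis."""
    seconds, _, frac = ts.partition(".")
    # pad/truncate the fractional part to exactly 3 digits (milliseconds)
    millis = (frac + "000")[:3]
    return int(seconds) * 1000 + int(millis)

print(to_epoch_millis("1465326057.235762"))  # → 1465326057235
```

The resulting 13-digit integer matches what an ES date field with the default format accepts as epoch_millis.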
So I ended up just deleting all my mappings in Kibana and Elasticsearch, then resubmitted the data, and this time it worked. Must have been some old junk in there that was messing me up. But now it's working great!

sometimes when adding new fields to an index, they don't get indexed in Elasticsearch

Let's say I have an index test which already exists. I want to add a new field newfield1 with some data for all documents in the database. Currently I am simply deleting everything and then reinserting the data with the newfield1 data added in. I understand this isn't the most efficient way, but that's not my question right now.
Sometimes the data in newfield1 does not get indexed and I can't visualize it in Kibana. It's pretty annoying. Is there something wrong with what I'm doing?
NOTE: I CAN query this field in Elasticsearch, which makes me think there's a problem with Kibana.
Kibana caches the field mapping. Go to Settings -> Indices, select your index, and click the orange "Refresh" button.
Not much to go on here, but first make sure your cluster is green.
$ curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
If you are still struggling to understand the state of your cluster, then perhaps consider installing one of the plugins like HQ: https://github.com/royrusso/elasticsearch-HQ
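As a sketch, that health check can also be scripted. The helper below just builds the same endpoint URL as the curl command above; actually fetching it of course requires a running cluster (host and port are illustrative):

```python
import json
import urllib.request

def health_endpoint(host: str = "localhost:9200") -> str:
    """Build the cluster-health URL used in the curl command above."""
    return f"http://{host}/_cluster/health?pretty=true"

def cluster_health(host: str = "localhost:9200") -> dict:
    """Fetch cluster health; 'status' should be 'green' (needs a live cluster)."""
    with urllib.request.urlopen(health_endpoint(host)) as resp:
        return json.loads(resp.read())

print(health_endpoint())
# cluster_health()["status"]  # 'green', 'yellow', or 'red' on a running cluster
```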