How do you transform a date that's stored as a long (epoch time) into a dateOptionalTime in Elasticsearch?

I have a field in my database that's stored as Epoch time, which is a long. I'm trying to get Elasticsearch to recognize this field as what it actually is: a date. Once indexed by Elasticsearch, I want it to be of type dateOptionalTime.
My thinking is that I need to apply a transform to convert the Epoch long into a string date.
On my index, I have a mapping that specifies the type for my field as date with a format of dateOptionalTime. Finally, this timestamp is in all of my docs, so I've added my (attempted) mapping to _default_.
The code:
"_default_": {
  "_all": {"enabled": true},
  "dynamic_templates": [
    {
      "date_fixer": {
        "match": "my_timestamp",
        "mapping": {
          "transform": {
            "script": "ctx._source['my_timestamp'] = new Date(ctx._source['my_timestamp']).toString()"
          },
          "type": "date"
        }
      }
    }
  ]
}
I'm brand new to Elastic, so I'll walk through what I think is happening.
I'm setting this on the _default_ mapping, which will apply it to all new types Elastic encounters.
I set _all to enabled. I want Elastic to use the default mapping for all types, with the exception of my timestamp field.
Finally, I add my dynamic template that (a) converts the long into a date, and (b) applies a mapping to the timestamp field explicitly saying that it is a date.
The Problem:
When I try to add my data to the index, I get the following exception.
TransportError(400, u'MapperParsingException[failed to parse [my_timestamp]]; nested: ElasticsearchIllegalArgumentException[unknown property [$date]]; ')
My data looks like:
{
  "timestamp": 8374747594,
  "owner": "text",
  "some_more": {
    "key": "val",
    "key": "val"
  }
}
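Worth noting: on current Elasticsearch versions no transform is needed for this particular case, because the date type can parse epoch milliseconds directly when the format allows it. A minimal sketch in 7.x-style typeless syntax (the index name is a placeholder; the field name follows the question's template):
PUT my_index
{
  "mappings": {
    "properties": {
      "my_timestamp": {
        "type": "date",
        "format": "epoch_millis||date_optional_time"
      }
    }
  }
}
With that mapping, a long like 8374747594 indexes as a date without any scripting.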

Related

Keeping [Europe/Berlin] (or other timezones in this format) while indexing in Elasticsearch

I'm trying to familiarize myself with Elasticsearch, specifically defining the mapping within a JSON file and creating a new index with it (with the help of the new Java API Client and Spring Boot).
This is what my json file looks like:
{
  "mappings": {
    "properties": {
      "Id": {
        "type": "text"
      },
      "timestamp": {
        "type": "date",
        "format": "date_optional_time"
      },
      "metadata": {
        "type": "nested"
      },
      "attributes": {
        "type": "nested"
      }
    }
  }
}
This can index my documents just fine, but I realized that if I use ZonedDateTime.now() for the data in my timestamp field, it fails to index due to the [Europe/Berlin] at the end. It works if I change it to
ZonedDateTime now = ZonedDateTime.now();
String date = now.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
which gives me the time but without [Europe/Berlin]! As far as I understand from my various googling and "stackoverflow-ing", ES does not take [Timezone] in its date types, only the +02:00 offset format. But is it possible to keep it? (Maybe through an ingest pipeline?)
There are various documents that I would like to reindex that have [Timezone] hanging at the end, but these older documents saved it as text... I would like to be able to do date math with the timestamp field in the future, which is why I decided to try to create a new/better mapping with proper fields. Any pointers appreciated!
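A date field ultimately stores its value as epoch milliseconds, so the bracketed zone ID itself cannot live inside a date field; but an ingest pipeline can strip it before parsing (and the zone ID could first be copied to a separate keyword field if it must be kept). A minimal sketch using a gsub processor (pipeline and field names are placeholders):
PUT _ingest/pipeline/strip_zone_id
{
  "description": "Drop a trailing [ZoneId] such as [Europe/Berlin]",
  "processors": [
    {
      "gsub": {
        "field": "timestamp",
        "pattern": "\\[[^\\]]*\\]$",
        "replacement": ""
      }
    }
  ]
}
Then index with ?pipeline=strip_zone_id, or set it as the index's index.default_pipeline so it runs on every document.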

How to add a runtime field to an index pattern that converts a string to a date?

I have an index that contains a "createdAt" string field I would like to convert to date.
I'm trying to do that via the UI, and since scripted fields are deprecated I understand I should use runtime fields.
I've figured out how to convert a string to a date object, and while it works for actual runtime queries, if I set the field using Index Pattern settings, the values don't seem to be shown in Kibana.
Here's how I set up the field:
And while the same code works, if I try to visualize the data in Kibana I see "no results found".
I don't understand where the issue is as the following query presents the field just fine:
GET mails/_search
{
  "runtime_mappings": {
    "exampleColumn": {
      "type": "date",
      "script": {
        "source": """emit(new SimpleDateFormat('yyyy-mm-dd HH:mm:ss').parse(doc['createdAt.keyword'].value).getTime())"""
      }
    }
  },
  "fields": ["exampleColumn"]
}
Does someone know what I'm doing wrong?
Any help will be appreciated.
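Two things worth checking here. First, in SimpleDateFormat lowercase mm means minutes; the month is uppercase MM, so the pattern 'yyyy-mm-dd HH:mm:ss' silently parses the month position as minutes. Second, a runtime field defined only in a query body exists only for that query; for Kibana to see it everywhere, it can be added to the index mapping itself. A sketch assuming ES 7.11+ mapping-level runtime fields, with the corrected pattern:
PUT mails/_mapping
{
  "runtime": {
    "exampleColumn": {
      "type": "date",
      "script": {
        "source": "emit(new SimpleDateFormat('yyyy-MM-dd HH:mm:ss').parse(doc['createdAt.keyword'].value).getTime())"
      }
    }
  }
}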

Elasticsearch mapping: is it possible to auto-truncate a date to fit its format?

On our project we're using NEST to insert data into Elasticsearch (1.7). We'd like to be able to force ES to truncate all dates to the mapped format.
Mapping example:
"dateFrom" : {
"type": "date",
"format": "dateHourMinute" // Or yyyy-MM-dd'T'HH:mm
}
Data example:
{
  "dateFrom": "2015-12-21T15:55:00.000Z"
}
Inserting this data throws an IllegalArgumentException:
Invalid format: "2015-12-21T15:55:00.000Z" is malformed at ":00.000Z"
Obviously we don't need the last part of the date. Can't we configure ES to just truncate it instead of erroring out?
Keep in mind we're using 1.7 right now, since date formatting seems to have changed in recent versions...
In order to get the data to index correctly, I could change the date format to date_optional_time (supported in 1.7):
PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "date": {
          "type": "date",
          "format": "date_optional_time"
        }
      }
    }
  }
}
This will allow you to submit dates with the time portion being optional, such as:
PUT /my_index/my_type/1
{
  "date": "2015-12-21"
}
or as you have it
PUT /my_index/my_type/2
{
  "date": "2015-12-21T15:55:00.000Z"
}
Both are now valid submissions. I don't know of any transformation approach within ES that supports truncating or transforming field data at index time. If you want to parse the data and remove the time pre-submission, I think you will need to do that outside of ES when you create the JSON object.
It appears ES is currently not capable of editing dates through a custom mapping. We ended up using JsonConverters (like this) to drop seconds and millis before inserting them into ES.
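One more mapping-level option worth noting: Elasticsearch date formats can list several patterns separated by ||, tried left to right, and the field is stored as epoch millis regardless of which pattern matched, so nothing actually needs truncating. A sketch in the same 1.7-era syntax as the mapping above:
"dateFrom": {
  "type": "date",
  "format": "dateHourMinute||dateOptionalTime"
}
Both "2015-12-21T15:55" and "2015-12-21T15:55:00.000Z" then index into the same field.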

Why is Elasticsearch trying to parse a timestamp found in a field consisting of (and mapped as) strings as a date, and how do I stop it?

How do you prevent Elasticsearch from attempting to parse dates it finds in string fields?
I have a simple JSON document like this:
{
  "key": "val",
  "key2": "val",
  "text_blob": ["hello", "world", "something else", "2015-01-01T00:00:00+1", "sentence"]
}
The timestamp's existence in the text_blob field is totally arbitrary. It was just present in the data and doesn't really mean anything. However, because it's there, Elastic seemingly thinks it's special and tries to map it to dateOptionalTime. I want it to just keep on being a plain ol' string!
I tried explicitly declaring a mapping on that field before loading in my data.
POST myindex
{
  "mappings": {
    "mytype": {
      "_source": {"enabled": true},
      "properties": {
        "text_blob": {"type": "string"}
      }
    }
  }
}
But it seems to have no effect. As soon as Elastic finds that datestring among the other strings it tries to apply a new mapping and explodes with:
MapperParsingException[failed to parse [text_blob]]; nested: MapperParsingException[failed to parse date field [None], tried both date format [dateOptionalTime], and timestamp number with locale []];
But this error is really somewhat of a red herring in my opinion. It's exploding because it can't parse the timestamp string that contains an offset. However, the core issue is why it's trying to parse it as a date at all.
Change your mapping to this:
POST myindex
{
  "mappings": {
    "mytype": {
      "_source": {"enabled": true},
      "properties": {
        "text_blob": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
This will stop Elasticsearch from analyzing the field in any way whatsoever. String fields are analyzed by default.
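Alternatively, dynamic date detection can be switched off for the type, which stops Elasticsearch from ever guessing that a string is a date in the first place. A sketch in the same 1.x-style syntax as above:
POST myindex
{
  "mappings": {
    "mytype": {
      "date_detection": false,
      "_source": {"enabled": true},
      "properties": {
        "text_blob": {"type": "string"}
      }
    }
  }
}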

Elasticsearch change field date format

When I created my index and type a while ago, I specified the date format of a field in the mapping as:
{"type": "date","format" : "dd/MM/yyyy HH:mm:ss"}
Is there a way to change the format of the field now that I have more than 6000 docs indexed? I want the format to be:
{"type": "date","format" : "dd-MM-yyyy HH:mm:ss"}
You can update the format mapping on an existing date field with the PUT mapping API:
PUT twitter/_mapping/user
{
  "properties": {
    "myDate": {
      "format": "dd-MM-yyyy HH:mm:ss"
    }
  }
}
format is one of the few mapping parameters that can be updated on existing fields without losing data:
https://www.elastic.co/guide/en/elasticsearch/reference/2.0/mapping-date-format.html
You cannot change field mappings after you have indexed documents into Elasticsearch. You can add new fields, but you cannot change existing fields.
You could create a new index with the new mappings and then re-index all the data into it. You could then delete the old index and create an index alias with the old name pointing to the new index.
There are a few strategies documented for minimizing downtime when changing mappings in the Elasticsearch blog: https://www.elastic.co/blog/changing-mapping-with-zero-downtime
Overall I'd highly suggest using index aliases - they provide a high level of abstraction and flexibility over using index names directly within your application. Perfect for situations like this where you want to make a change to the underlying index: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-aliases.html
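A sketch of that reindex-and-alias flow on a version with the _reindex API (twitter_v2 is a placeholder; the format lists both the new and the old pattern so existing documents still parse during the reindex):
PUT twitter_v2
{
  "mappings": {
    "properties": {
      "myDate": {
        "type": "date",
        "format": "dd-MM-yyyy HH:mm:ss||dd/MM/yyyy HH:mm:ss"
      }
    }
  }
}
POST _reindex
{
  "source": {"index": "twitter"},
  "dest": {"index": "twitter_v2"}
}
DELETE twitter
POST _aliases
{
  "actions": [
    {"add": {"index": "twitter_v2", "alias": "twitter"}}
  ]
}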
For Elasticsearch versions < 7.0, where mapping types are not yet deprecated, you can use something like this:
PUT inf/_mapping/_doc
{
  "properties": {
    "ChargeDate": {
      "type": "date",
      "format": "dd-MM-yyyy HH:mm:ss"
    }
  }
}
where inf is your index and _doc is the mapping type (deprecated in v7.0 and later)
or
PUT inf
{
  "mappings": {
    "_doc": {
      "properties": {
        "ChargeDate": {
          "type": "date",
          "format": "dd-MM-yyyy HH:mm:ss"
        }
      }
    }
  }
}
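For 7.x and later, where mapping types are gone entirely, the typeless equivalent of the first request is:
PUT inf/_mapping
{
  "properties": {
    "ChargeDate": {
      "type": "date",
      "format": "dd-MM-yyyy HH:mm:ss"
    }
  }
}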
