_timestamp has been deprecated; the reference documentation says we should have our own field that holds the time. How do we set the default value of such a field to the current timestamp in milliseconds in Elasticsearch 6.2?
{
  "properties": {
    "default_time": {
      "type": "date",
      "default_value": "current_now()"
    }
  }
}
You could use a "date" type with the built-in format epoch_millis.
That means first changing your mapping:
PUT my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "default_time": {
          "type": "date",
          "format": "epoch_millis"
        }
      }
    }
  }
}
and then set the current time in your client.
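With epoch_millis, the client simply sends the current time as milliseconds since the Unix epoch. A minimal Python sketch of that client-side step (the field name follows the mapping above; the document body is otherwise hypothetical):

```python
import time

# Current time as epoch milliseconds -- the value epoch_millis expects.
now_millis = int(time.time() * 1000)

# Hypothetical document body; send it with your client of choice,
# e.g. es.index(index="my_index", body=doc) in the official Python client.
doc = {"default_time": now_millis}
print(doc)
```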
I want to change the format "2020-06-16T20:29:56.256+10:00" to "2020-06-16T20:29:56.256Z". How can I do that in an Elasticsearch query? Please help.
You can use a customized date format, as shown in the index mapping below:
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss.SSSZZZZZ"
      }
    }
  }
}
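If you also want to normalize existing values on the client side rather than at index time, the offset can be shifted to UTC and rendered in the Z form. A sketch in Python (not part of any Elasticsearch query, just the conversion itself):

```python
from datetime import datetime, timezone

def to_utc_z(ts: str) -> str:
    """Convert an ISO-8601 timestamp with a UTC offset to the 'Z' form."""
    dt = datetime.fromisoformat(ts)       # parses the +10:00 offset
    dt = dt.astimezone(timezone.utc)      # shift the instant to UTC
    return dt.isoformat(timespec="milliseconds").replace("+00:00", "Z")

print(to_utc_z("2020-06-16T20:29:56.256+10:00"))  # 2020-06-16T10:29:56.256Z
```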
I have an index created in Elasticsearch 5.0 which contains data from my MySQL db. There's a field that is a string in my table, which I need as a double in ES.
So when I created the index, I added a mapping for the appropriate field using a PUT:
{
  "mappings": {
    "my_type": {
      "properties": {
        "chargeamount": {
          "type": "double"
        }
      }
    }
  }
}
After I did this, a value with non-zero digits after the decimal point (e.g. 23.23) is returned properly as a double, but a value with only zeros after the decimal point (e.g. 23.00) comes back as if it were the string with the decimal point dropped (e.g. 2300).
EDIT:
These are the steps I did:
First, I created the index through a PUT request (http://hostmachine:9402/indexname) with the above mapping as the body.
Then I push the data (from my MySQL table) to the index using Logstash. I can provide the Logstash conf if needed.
Once the data is uploaded to the index, I tried the following query to check whether the result comes back as a double value: a POST request (http://hostmachine:9402/indexname/_search? with the body as follows:
{
  "size": 0,
  "query": {
    "query_string": {
      "query": "myquery"
    }
  },
  "aggs": {
    "total": {
      "terms": {
        "field": "userid"
      },
      "aggs": {
        "total": {
          "sum": {
            "script": {
              "lang": "painless",
              "inline": "doc['chargeamount'].value"
            }
          }
        }
      }
    }
  }
}
The result looks as in the snapshot below, where it should have been 267472.00.
Where am I going wrong? Any help would be appreciated.
You need to make sure that the mapping type in your index creation query is exactly the same as the document_type you have in your Logstash config, namely message_logs:
PUT response_summary6
{
  "mappings": {
    "message_logs": {        <--- change this
      "properties": {
        "userid": {
          "type": "text",
          "fielddata": true
        },
        "responsecode": {
          "type": "integer"
        },
        "chargeamount": {
          "type": "double"
        }
      }
    }
  }
}
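Independently of the mapping-type fix, it can help to verify on the client side that the string-to-double conversion itself is sound before the values reach Elasticsearch. A small Python sketch (the field name chargeamount is taken from the mapping above) shows that trailing zeros are preserved numerically, so "23.00" can never legitimately become 2300:

```python
def to_double(raw: str) -> float:
    """Parse a string amount (as stored in MySQL) into a float,
    the same widening a correctly applied double mapping performs."""
    return float(raw)

# Trailing zeros survive the conversion: "23.00" is 23.0, not 2300.
print(to_double("23.23"))  # 23.23
print(to_double("23.00"))  # 23.0
```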
I got the following exception and I'm very confused about it:
org.elasticsearch.hadoop.rest.EsHadoopParsingException:
Cannot parse value [2016-03-13T02:32:56] for field [create_time]
My mapping is as follows:
"mappings": {
  "users": {
    "properties": {
      "create_time": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      }
    }
  }
}
It should be:
"format": "strict_date_time_no_millis"
date_time_no_millis or strict_date_time_no_millis: a formatter that combines a full date and time without millis, separated by a T: yyyy-MM-dd'T'HH:mm:ssZZ.
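Note that the yyyy-MM-dd'T'HH:mm:ssZZ pattern requires an offset, which the failing value "2016-03-13T02:32:56" lacks. One option is to attach an explicit offset on the client side before indexing; a Python sketch, assuming the source times are in fact UTC:

```python
from datetime import datetime, timezone

# The failing value has no offset, which date_time_no_millis requires.
# Attach an explicit UTC offset before sending it to Elasticsearch
# (assumption: the source timestamps really are UTC).
raw = "2016-03-13T02:32:56"
dt = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
print(dt.isoformat())  # 2016-03-13T02:32:56+00:00
```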
I'm using ES version 2.2.0
I have a field which may hold long or double values.
How can I make Elasticsearch coerce long values to double so that I don't get conflicts when inserting new documents?
For example, if the value is 5 I'd like ES to coerce it into 5.0 so that I can insert 12.3 afterwards.
Is there some kind of dynamic index template I can apply to make that conversion automatic upon insertion?
Thanks for the help.
You can simply set the type of that field to double in the mapping, and that does the job. Anything you feed into that field will get coerced into a double.
curl -XPUT localhost:9200/index -d '{
  "mappings": {
    "type": {
      "properties": {
        "myfield": {
          "type": "double"
        }
      }
    }
  }
}'
You need to do that at index/mapping creation time; you cannot change the field's type after the mapping has been created.
UPDATE
You can also leverage dynamic mapping templates like this:
PUT my_index
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "doubles": {
            "match_mapping_type": "long",
            "mapping": {
              "type": "double"
            }
          }
        }
      ]
    }
  }
}
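With that template, any newly seen field that would have been mapped as long is mapped as double instead, so 5 and 12.3 land in the same mapping without conflict. The coercion itself is the ordinary numeric widening an application would do; a minimal Python sketch of the idea (Elasticsearch also parses numeric strings by default when coerce is enabled):

```python
def coerce_to_double(value):
    """Widen ints (and numeric strings) to float, mirroring what a double
    mapping does with coercion enabled -- the Elasticsearch default."""
    if isinstance(value, bool):
        raise TypeError("booleans are not coerced to double")
    return float(value)

docs = [{"myfield": 5}, {"myfield": 12.3}, {"myfield": "7.5"}]
coerced = [coerce_to_double(d["myfield"]) for d in docs]
print(coerced)  # [5.0, 12.3, 7.5]
```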
I have memory problems with aggregation queries.
My Elasticsearch version is 1.3.2.
I tried to define _timestamp as a doc value,
but when I checked the mapping I could see it didn't work.
This didn't happen with other fields.
Is there any known issue with the timestamp field and doc values?
Have you tried this mapping?
{
  "tweet": {
    "_timestamp": {
      "enabled": true,
      "format": "YYYY-MM-dd"
    }
  }
}
I'm using the specified version (1.3.2). I set up a custom date field in my project like this, and it worked for me:
PUT 'http://127.0.0.1:9200/a252e39969665bb4d065/' -d
'{
  "a252e39969665bb4d065": {
    "mappings": {
      "_default_": {
        "properties": {
          "createdDate": {
            "type": "date",
            "format": "dateOptionalTime"
          }
        }
      }
    }
  }
}'
Please note that I'm using the default mapping here (the _default_ mapping applies to all types in the index). You can target a specific type in the index by replacing "_default_" in the mapping.