Changing the timestamp format of an Elasticsearch index

I am trying to load log records into Elasticsearch (7.3.1) and view the results in Kibana. Although the records are loaded into Elasticsearch and a curl GET shows them, they are not visible in Kibana.
Most of the time this is caused by the timestamp format. In my case the proper timestamp format should be basic_date_time, but the index only has:
# curl -XGET 'localhost:9200/og/_mapping'
{"og":{"mappings":{"properties":{"@timestamp":{"type":"date"},"componentName":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}}}}}}
I would like to add format 'basic_date_time' to the @timestamp property, but every attempt I make is either rejected by Elasticsearch or does not change the index field.
I simply fail to find the right command to do the job.
For example, the simplest I could think of,
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/og/_mapping' -d'
{"mappings":{"properties":{"@timestamp":{"type":"date","format":"basic_date_time"}}}}
'
gives this error:
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [mappings : {properties={#timestamp={format=basic_date_time, type=date}}}]"}],"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [mappings : {properties={#timestamp={format=basic_date_time, type=date}}}]"},"status":400}%
and trying to do it via kibana with
PUT /og
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date", "format": "basic_date_time" }
    }
  }
}
gives
{
  "error": {
    "root_cause": [
      {
        "type": "resource_already_exists_exception",
        "reason": "index [og/NIT2FoNfQpuPT3Povp97bg] already exists",
        "index_uuid": "NIT2FoNfQpuPT3Povp97bg",
        "index": "og"
      }
    ],
    "type": "resource_already_exists_exception",
    "reason": "index [og/NIT2FoNfQpuPT3Povp97bg] already exists",
    "index_uuid": "NIT2FoNfQpuPT3Povp97bg",
    "index": "og"
  },
  "status": 400
}
I am not sure whether I should even try this in Kibana, but I would be very glad to find the right curl command to change the index.
Thanks for helping, Ruud

You can do it either via curl like this:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/og/_mapping' -d '{
  "properties": {
    "@timestamp": {
      "type": "date",
      "format": "basic_date_time"
    }
  }
}'
Or in Kibana like this:
PUT /og/_mapping
{
  "properties": {
    "@timestamp": {
      "type": "date",
      "format": "basic_date_time"
    }
  }
}
Also worth noting: once an index mapping is created, you usually cannot modify existing fields (with very few exceptions). Instead, create a new index with the correct mapping and reindex your data into it.
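As a side note, documents indexed against the new mapping must then match the declared format: basic_date_time is Elasticsearch's compact pattern yyyyMMdd'T'HHmmss.SSSZ. A minimal sketch of producing such a timestamp on the client side (Python used purely for illustration; the helper name is made up):

```python
from datetime import datetime, timezone

def to_basic_date_time(dt):
    # Render a timezone-aware datetime in Elasticsearch's basic_date_time
    # pattern yyyyMMdd'T'HHmmss.SSSZ, e.g. 20190901T123456.789+0000.
    millis = dt.microsecond // 1000
    return dt.strftime("%Y%m%dT%H%M%S") + ".%03d" % millis + dt.strftime("%z")

ts = to_basic_date_time(datetime(2019, 9, 1, 12, 34, 56, 789000, tzinfo=timezone.utc))
print(ts)  # 20190901T123456.789+0000
```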

Related

ElasticSearch _reindex yields "no such index"

I have two instances of the same version of elasticsearch with the same data.
I am now trying to _reindex an index with:
curl -X POST -H 'Content-Type: application/json' 'localhost:9200/_reindex' -d '{"source": {"index": "my_index_v1"}, "dest": { "index": "my_index_v2" }}' | jq
On one of the machines it works correctly and the new index is created. However, on the second machine it ends with:
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index [my_index_v2] and [action.auto_create_index] ([.watches,.triggered_watches,.watcher-history-*,.monitoring-*]) doesn't match",
        "index_uuid": "_na_",
        "index": "my_index_v2"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index [my_index_v2] and [action.auto_create_index] ([.watches,.triggered_watches,.watcher-history-*,.monitoring-*]) doesn't match",
    "index_uuid": "_na_",
    "index": "my_index_v2"
  },
  "status": 404
}
I checked elasticsearch.yml; the file is identical on both machines and contains the following:
action.auto_create_index: .watches,.triggered_watches,.watcher-history-*,.monitoring-*
I have no idea why this happens.
To be clear, the index really exists.
EDIT:
working machine
elastic version: 7.17.4
settings.json: https://gist.github.com/knyttl/90f5d0f5de194be534160ab729c5a83b
non-working machine
elastic version: 7.17.4
settings.json: https://gist.github.com/knyttl/fcb94cd5739d3626f4545a9b9c5cceef
From the diff it seems that the working machine also contains:
{
  "persistent": {
    "action": {
      "auto_create_index": "true"
    },
    …
  }
}
So I guess that can be the culprit.
Updating auto_create_index on the second machine to
{
  "persistent": {
    "action": {
      "auto_create_index": "true"
    }
  }
}
instead of
action.auto_create_index: .watches,.triggered_watches,.watcher-history-*,.monitoring-*
will likely fix the issue.
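The fix above is a persistent cluster setting, which takes precedence over the static action.auto_create_index line in elasticsearch.yml. A sketch of building the request body programmatically (the helper name is mine; the body would be sent with curl -XPUT to the /_cluster/settings endpoint):

```python
import json

def auto_create_index_body(value="true"):
    # Persistent cluster settings survive restarts and override the
    # static action.auto_create_index entry in elasticsearch.yml.
    return json.dumps({"persistent": {"action": {"auto_create_index": value}}})

print(auto_create_index_body())
```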

elasticsearch mapping is empty after creating

I'm trying to create an autocomplete index for my elasticsearch using the search_as_you_type datatype.
My first command I run is
curl --request PUT 'https://elasticsearch.company.me/autocomplete' \
'{
  "mappings": {
    "properties": {
      "company_name": {
        "type": "search_as_you_type"
      },
      "serviceTitle": {
        "type": "search_as_you_type"
      }
    }
  }
}'
which returns
{"acknowledged":true,"shards_acknowledged":true,"index":"autocomplete"}curl: (3) nested brace in URL position 18:
{
"mappings": {
"properties": etc.the rest of the json object I created}}
Then I reindex using
curl --silent --request POST 'http://elasticsearch.company.me/_reindex?pretty' --data-raw '{
  "source": {
    "index": "existing_index"
  },
  "dest": {
    "index": "autocomplete"
  }
}' | grep "total\|created\|failures"
I expect to see something like "total":1000,"created":5, etc., or at least some kind of response in the terminal, but I get nothing. Also, when I check the mapping of my autocomplete index by running curl -u thething 'https://elasticsearch.company.me/autocomplete/_mappings?pretty',
I get an empty mapping result:
{
  "autocomplete" : {
    "mappings" : { }
  }
}
Is my error in the creation of my index or in the reindexing? I expect the autocomplete mapping to show the two fields I'm searching for, i.e. "company_name" and "serviceTitle". Any ideas how to fix this?
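For what it's worth, a hedged reading of the curl output above: `curl: (3) nested brace in URL position 18` means curl treated the JSON body as a second URL, because the PUT command has no --header/--data flags, so the index was created with an empty body and therefore an empty mapping. A sketch of building and sanity-checking the intended body before passing it via --data (field names taken from the question):

```python
import json

def autocomplete_mapping():
    # Serializing with json.dumps guarantees balanced braces before the
    # body is handed to curl via --data (not as a bare URL argument).
    return json.dumps({
        "mappings": {
            "properties": {
                "company_name": {"type": "search_as_you_type"},
                "serviceTitle": {"type": "search_as_you_type"},
            }
        }
    })

print(autocomplete_mapping())
```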

Elasticsearch: Issues reindexing - ending up with more than one type

ES 6.8.6
I am trying to reindex some indexes to reduce the number of shards.
The original index had a type of 'auth' but recently I added a template that used _doc. When I tried:
curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "auth_2019.03.02"
  },
  "dest": {
    "index": "auth_ri_2019.03.02",
    "type": "_doc"
  }
}
'
I get this error:
"Rejecting mapping update to [auth_ri_2019.03.02] as the final mapping would have more than 1 type: [_doc, auth]"
I understand that I can't have more than one type and that types are deprecated in 7.x. My question is: can I change the type during the reindex operation?
I am trying to tidy everything up in preparation to moving to 7.x.
It looks like you have to write a script to change the document during the reindex process.
From the docs,
Like _update_by_query, _reindex supports a script that modifies the document.
You are indeed able to change type.
Think of the possibilities! Just be careful; you are able to change:
_id,
_type,
_index,
_version,
_routing
For your case add
"script": {
"source": "ctx._type = '_doc'",
"lang": "painless"
}
Full example:
{
  "source": {
    "index": "auth_2019.03.02"
  },
  "dest": {
    "index": "auth_ri_2019.03.02"
  },
  "script": {
    "source": "ctx._type = '_doc'",
    "lang": "painless"
  }
}
Firstly thanks to leandrojmp for prompting me to reread the docs and noticing the example where they had type specified for both source and dest.
I don't understand why but adding a type to the source specification solved the problem.
This worked:
curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "auth_2019.03.02",
    "type": "auth"
  },
  "dest": {
    "index": "auth_ri_2019.03.02",
    "type": "_doc"
  }
}
'
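Hand-editing these reindex bodies makes it easy to introduce stray trailing commas, so generating them programmatically can help; a sketch (index and type names from the question, helper name mine):

```python
import json

def reindex_body(src, dest, src_type=None, dest_type=None, script=None):
    # Build a _reindex request body; the type fields only make sense on
    # 6.x, where mapping types still exist.
    body = {"source": {"index": src}, "dest": {"index": dest}}
    if src_type:
        body["source"]["type"] = src_type
    if dest_type:
        body["dest"]["type"] = dest_type
    if script:
        body["script"] = {"source": script, "lang": "painless"}
    return json.dumps(body)

print(reindex_body("auth_2019.03.02", "auth_ri_2019.03.02",
                   src_type="auth", dest_type="_doc"))
```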

How can I create _meta data on `Elasticsearch`?

I am using Elasticsearch 6.8. And I'd like to save some meta data on my index. The index already existed. I followed this doc https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html#add-field-mapping
curl "http://localhost:9200/idx_1/_mapping"
{
  "idx_1": {
    "mappings": {
      "1": {
        "properties": {
          "name": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      }
    }
  }
}
In order to create _meta data, I need a mapping type first.
I ran the code below to create a _meta entry for the version.
curl -X PUT -H 'Content-Type: application/json' "http://localhost:9200/idx_1/_mapping" -d '
{"_meta": { "version": {"type": "text"}}}'
I got the error below:
{
  "error": {
    "root_cause": [
      {
        "type": "action_request_validation_exception",
        "reason": "Validation Failed: 1: mapping type is missing;"
      }
    ],
    "type": "action_request_validation_exception",
    "reason": "Validation Failed: 1: mapping type is missing;"
  },
  "status": 400
}
It says mapping type is missing. I have specified the type for version as text. Why does it say missing type?
It turns out that I was looking at the wrong version of the documentation. Based on the doc for Elasticsearch 6, https://www.elastic.co/guide/en/elasticsearch/reference/6.3/mapping-meta-field.html, the correct request is:
curl -X PUT "http://localhost:9200/idx1/_mapping/_doc" -H 'Content-Type: application/json' -d '{"_meta": {"version": "1235kljsdlkf"}}'
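The takeaway, spelled out: on 6.x the mapping type goes in the URL path (/_mapping/_doc here), and _meta holds arbitrary application values, not field definitions, so there is no "type" to declare inside it. A small sketch contrasting the rejected and accepted shapes (the version string is the one from the answer):

```python
import json

# Rejected: a field-style definition inside _meta (and no mapping type
# in the URL path) triggers "mapping type is missing" on 6.x.
rejected = {"_meta": {"version": {"type": "text"}}}

# Accepted: plain values under _meta, PUT to /idx1/_mapping/_doc.
accepted = {"_meta": {"version": "1235kljsdlkf"}}

print(json.dumps(accepted))
```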

Why is Elasticsearch rejecting my input as malformed when using a custom date time stamp?

ES Version 2.2.1
Below is the mapping I have created for the testindex:
"event_time_utc": {
"index": "not_analyzed",
"format": "YYYY-MM-dd HH:mm:ss",
"type": "date"
Below is the curl command I am using to insert a document into the testindex:
curl -k -XPOST http://10.1.69.191:8080/testindex/logs -d '{
  "Username": "user@mailbox.com",
  "Application": "App Dev",
  "Platform": "Unknown",
  "Browser": "Unknown",
  "Status": "Success",
  "SourceIP": "1.1.1.1",
  "event_time_utc" : "2016-12-08 23:44:40",
  "LoginType": "Remote Access 2.0",
  "timestamp": "2016-12-12T19:41:36.214Z"
}'
Below is the error generated by Elasticsearch, saying it failed to parse event_time_utc and giving a malformed date as the reason. But I can't find anything wrong with it.
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed
to parse [event_time_utc]"}],"type":"mapper_parsing_exception","reason":"failed to parse
[event_time_utc]","caused_by":{"type":"illegal_argument_exception","reason":
"Invalid format: \"2016-12-08 23:44:40\" is malformed at \" 23:44:40\""}},
So I found out what the issue was. At the beginning of my mapping file I had defined a custom type (non-default). This was throwing off my curl command: I wasn't adding the type.
The relevant mapping config from my index template.
"mappings": {
"loginevent": {
"dynamic": "false",
"properties": {
"event_time_utc": {
"index": "not_analyzed",
"format": "YYYY-MM-dd HH:mm:ss",
"type": "date"
}
Updated curl command
curl -k -XPOST 'http://10.1.69.191:8080/testindex/loginevent' -d '{
Successful Execution
{"_index":"testindex","_type":"loginevent","_id":"AVj-UoW23y1afHQdnQA-","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}
The date pattern in your mapping is wrong; it should be like below, i.e. you need to escape the space character between two single quotes.
"mappings": {
"logs": { <--- make sure you have this
"dynamic": "false",
"properties": {
"event_time_utc": {
"index": "not_analyzed",
"format": "yyyy-MM-dd' 'HH:mm:ss",
"type": "date"
Also, since you are sending your curl to /testindex/logs, make sure your mapping type is called logs and not loginevent.
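A broader pitfall lurking in these patterns, offered as a general note: in Java/Joda-style date formats, lowercase yyyy is the calendar year, while uppercase Y can mean something else (week-based year in Java's DateTimeFormatter), so lowercase is the safe choice; literal text such as a space can be quoted between single quotes. As an analogy only (Python strptime codes, not Joda), the question's value parses cleanly against a calendar-year pattern:

```python
from datetime import datetime

# %Y is the calendar year, the analogue of Joda's lowercase yyyy.
dt = datetime.strptime("2016-12-08 23:44:40", "%Y-%m-%d %H:%M:%S")
print(dt.isoformat(sep=" "))  # 2016-12-08 23:44:40
```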
