Recreation of mapping elastic search - elasticsearch

I have created my index on Elasticsearch (and through Kibana as well) and have uploaded data. Now I want to change the mapping for the index and make some fields not_analyzed. Below is the mapping I want to replace the existing one with, but when I run the command below it gives me this error:
{"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already
exists","index":"rettrmt"}],"type":"index_already_exists_exception","reason":"already
exists","index":"rettrmt"},"status":400}
Kindly help me resolve this.
curl -XPUT 'http://10.56.139.61:9200/rettrmt' -d '{
  "rettrmt": {
    "aliases": {},
    "mappings": {
      "RETTRMT": {
        "properties": {
          "@timestamp": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis"
          },
          "@version": {
            "type": "string"
          },
          "acid": {
            "type": "string"
          },
          "actor_id": {
            "type": "string",
            "index": "not_analyzed"
          },
          "actor_type": {
            "type": "string",
            "index": "not_analyzed"
          },
          "channel_id": {
            "type": "string",
            "index": "not_analyzed"
          },
          "circle": {
            "type": "string",
            "index": "not_analyzed"
          },
          "cr_dr_indicator": {
            "type": "string",
            "index": "not_analyzed"
          },
          "host": {
            "type": "string"
          },
          "message": {
            "type": "string"
          },
          "orig_input_amt": {
            "type": "double"
          },
          "path": {
            "type": "string"
          },
          "r_cre_id": {
            "type": "string"
          },
          "sub_use_case": {
            "type": "string",
            "index": "not_analyzed"
          },
          "tran_amt": {
            "type": "double"
          },
          "tran_id": {
            "type": "string"
          },
          "tran_particulars": {
            "type": "string"
          },
          "tran_particulars_2": {
            "type": "string"
          },
          "tran_remarks": {
            "type": "string"
          },
          "tran_sub_type": {
            "type": "string"
          },
          "tran_timestamp": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis"
          },
          "tran_type": {
            "type": "string"
          },
          "type": {
            "type": "string"
          },
          "use_case": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1457331693603",
        "uuid": "2bR0yOQtSqqVUb8lVE2dUA",
        "number_of_replicas": "1",
        "number_of_shards": "5",
        "version": {
          "created": "2000099"
        }
      }
    },
    "warmers": {}
  }
}'

You first need to delete your index and then recreate it with the proper mapping. You're getting the index_already_exists_exception error because you're trying to create an index while an older index with the same name still exists, hence the conflict.
Run this first:
curl -XDELETE 'http://10.56.139.61:9200/rettrmt'
And then you can run your command again. Note that this will erase your data, so you will have to repopulate your index.
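A minimal sketch of that end-to-end flow (the host and index names come from the question; note that the recreate body should contain only the settings and mappings you control, so the read-only metadata from the GET output, such as creation_date, uuid and version, and the outer "rettrmt" wrapper, must be stripped):

```shell
# 1. Delete the old index (this erases its data)
curl -XDELETE 'http://10.56.139.61:9200/rettrmt'

# 2. Recreate it, sending only settings/mappings you control --
#    not the read-only metadata (creation_date, uuid, version)
curl -XPUT 'http://10.56.139.61:9200/rettrmt' -d '{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  },
  "mappings": {
    "RETTRMT": {
      "properties": {
        "actor_id": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

# 3. Repopulate the index (e.g. re-run your Logstash pipeline)
```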

Did you try something like this?
curl -XPUT 'http://10.56.139.61:9200/rettrmt/_mapping/RETTRMT' -d '
{
  "properties": {
    "actor_id": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}'
(add whichever other properties you want to change alongside actor_id)
Works for me.

Related

Kibana does not search on nested field

I'm working with Elasticsearch/Kibana and trying to search on a field in a nested object, but it does not seem to work. Here's the mapping that I use in a template:
{
  "order": 0,
  "template": "ss7_signaling*",
  "settings": {
    "index": {
      "mapping.total_fields.limit": 3000,
      "number_of_shards": "5",
      "refresh_interval": "30s"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "string_fields": {
            "mapping": {
              "fielddata": {
                "format": "disabled"
              },
              "index": "no",
              "type": "string"
            },
            "match_mapping_type": "string",
            "match": "*"
          }
        }
      ],
      "properties": {
        "message": {
          "index": "not_analyzed",
          "type": "string"
        },
        "Protocol": {
          "index": "not_analyzed",
          "type": "string"
        },
        "IMSI": {
          "index": "not_analyzed",
          "type": "string"
        },
        "nested": {
          "type": "nested",
          "properties": {
            "name": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        },
        "Timestamp": {
          "format": "strict_date_optional_time||epoch_millis",
          "type": "date"
        },
        "@timestamp": {
          "type": "date"
        },
        "@version": {
          "index": "not_analyzed",
          "type": "string"
        }
      },
      "_all": {
        "norms": false,
        "enabled": false
      }
    }
  },
  "aliases": {
    "signaling": {}
  }
}
When I search in Kibana on single fields, everything works fine. Still, I cannot search on nested fields like 'nested.name'.
Example of my query in Kibana: nested.name:hi
Thanks.
Kibana uses the query_string query underneath, and the latter does not support querying on nested fields.
It's still being worked on but in the meantime you need to proceed differently.
UPDATE:
As of ES 7.6, it is now possible to search on nested fields
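Before that, one workaround is to run the nested query through the query DSL directly instead of the Kibana search bar (a sketch only, assuming a local cluster and the index pattern and field names from the question's template):

```shell
curl -XGET 'http://localhost:9200/ss7_signaling*/_search' -d '{
  "query": {
    "nested": {
      "path": "nested",
      "query": {
        "match": { "nested.name": "hi" }
      }
    }
  }
}'
```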

ElasticSearch Logstash template

I would like to index the SMTP receive log of my Exchange Server with Elasticsearch. So I created a Logstash config file, and it works very well, but all of my fields are strings instead of, for example, ip for the source and target server. So I tried to change the default mapping in the Logstash template:
I ran the command curl -XGET http://localhost:9200/_template/logstash?pretty > C:\temp\logstashTemplate.txt
Then I edited the text file and added my 'SourceIP' field:
{
  "template": "logstash-*",
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "mapping": {
              "fielddata": {
                "format": "disabled"
              },
              "index": "analyzed",
              "omit_norms": true,
              "type": "string"
            },
            "match_mapping_type": "string",
            "match": "message"
          }
        },
        {
          "string_fields": {
            "mapping": {
              "fielddata": {
                "format": "disabled"
              },
              "index": "analyzed",
              "omit_norms": true,
              "type": "string",
              "fields": {
                "raw": {
                  "ignore_above": 256,
                  "index": "not_analyzed",
                  "type": "string"
                }
              }
            },
            "match_mapping_type": "string",
            "match": "*"
          }
        }
      ],
      "_all": {
        "omit_norms": true,
        "enabled": true
      },
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "geoip": {
          "dynamic": true,
          "properties": {
            "ip": {
              "type": "ip"
            },
            "latitude": {
              "type": "float"
            },
            "location": {
              "type": "geo_point"
            },
            "longitude": {
              "type": "float"
            }
          }
        },
        "@version": {
          "index": "not_analyzed",
          "type": "string"
        },
        "SourceIP": {
          "type": "ip"
        }
      }
    }
  },
  "aliases": {}
}
I uploaded the edited template with the command curl -XPUT http://localhost:9200/_template/logstash -d @C:\temp\logstash.template
Then I restarted the Elasticsearch server and deleted/re-created the index.
The 'SourceIP' field did not change to type ip. What am I doing wrong? Can you please give me a hint? Thanks!
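One thing worth checking (a hedged suggestion, not a confirmed fix): index templates only take effect for indices created after the template is stored, so it helps to verify the stored template and then inspect the mapping of a freshly created matching index. The index name logstash-test below is illustrative:

```shell
# Confirm the stored template really contains the edited SourceIP field
curl -XGET 'http://localhost:9200/_template/logstash?pretty'

# Create a fresh index matching the template pattern and inspect
# how SourceIP actually got mapped
curl -XPUT 'http://localhost:9200/logstash-test'
curl -XGET 'http://localhost:9200/logstash-test/_mapping?pretty'
```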

Create multi field in elastic search

I want to create an index, and here is my mapping. I want to create a multi-field on the field 'findings': one with the default mapping (analyzed) and another one, 'orig', not_analyzed.
PUT nto
{
  "mappings": {
    "_default_": {
      "properties": {
        "date": {
          "type": "string",
          "index": "not_analyzed"
        },
        "bo": {
          "type": "string",
          "index": "not_analyzed"
        },
        "pg": {
          "type": "string"
        },
        "rate": {
          "type": "float"
        },
        "findings": {
          "type": "multi_field",
          "fields": {
            "findings": {
              "type": "string",
              "index": "analyzed"
            },
            "orig": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }
  }
}
Once I create the mapping, I don't see the orig field being created. Here is the mapping that I see:
{
  "ccdn": {
    "aliases": {},
    "mappings": {
      "test": {
        "properties": {
          "bo": {
            "type": "string",
            "index": "not_analyzed"
          },
          "date": {
            "type": "string",
            "index": "not_analyzed"
          },
          "findings": {
            "type": "string",
            "fields": {
              "orig": {
                "type": "string",
                "index": "not_analyzed"
              }
            }
          },
          "pg": {
            "type": "string"
          },
          "rate": {
            "type": "float"
          }
        }
      },
      "_default_": {
        "properties": {
          "bo": {
            "type": "string",
            "index": "not_analyzed"
          },
          "date": {
            "type": "string",
            "index": "not_analyzed"
          },
          "findings": {
            "type": "string",
            "fields": {
              "orig": {
                "type": "string",
                "index": "not_analyzed"
              }
            }
          },
          "pg": {
            "type": "string"
          },
          "rate": {
            "type": "float"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1454893575663",
        "uuid": "wJndGz1aSVSFjtidywsRPg",
        "number_of_replicas": "1",
        "number_of_shards": "5",
        "version": {
          "created": "2020099"
        }
      }
    },
    "warmers": {}
  }
}
I don't see the default 'findings' field being created as analyzed.
The Elasticsearch multi_field type is deprecated. See here.
You might need something like this for your findings field:
"findings": {
"type": "string",
"index": "analyzed",
"fields": {
"orig": { "type": "string", "index": "not_analyzed" }
}
}
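With a multi-field like that in place, the analyzed and not_analyzed variants can be queried independently (a sketch only; 'nto' is the index name from the question, a local cluster is assumed, and the query text is illustrative):

```shell
# Full-text search against the analyzed root field
curl -XGET 'http://localhost:9200/nto/_search' -d '{
  "query": { "match": { "findings": "minor wear" } }
}'

# Exact, untokenized match against the not_analyzed sub-field
curl -XGET 'http://localhost:9200/nto/_search' -d '{
  "query": { "term": { "findings.orig": "minor wear" } }
}'
```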

Update ElasticSearch Mapping type without delete it

I have this mapping type on my Index.
{
  "iotsens-summarizedmeasures": {
    "mappings": {
      "summarizedmeasure": {
        "properties": {
          "id": {
            "type": "long"
          },
          "location": {
            "type": "boolean"
          },
          "rawValue": {
            "type": "string"
          },
          "sensorId": {
            "type": "string"
          },
          "summaryTimeUnit": {
            "type": "string"
          },
          "timestamp": {
            "type": "date",
            "format": "dateOptionalTime"
          },
          "value": {
            "type": "string"
          },
          "variableName": {
            "type": "string"
          }
        }
      }
    }
  }
}
I want to update the sensorId field to:
"sensorId": {
"type": "string",
"index": "not_analyzed"
}
Is there any way to update the index without deleting and re-mapping it? I don't have to change the type of the field; I only want to set "index": "not_analyzed".
Thank you.
What you can do is turn your existing sensorId field into a multi-field with a not_analyzed sub-field called raw, like this:
curl -XPUT localhost:9200/iotsens-summarizedmeasures/_mapping/summarizedmeasure -d '{
  "summarizedmeasure": {
    "properties": {
      "sensorId": {
        "type": "string",
        "fields": {
          "raw": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      }
    }
  }
}'
However, you still have to re-index your data to make sure all sensorId.raw sub-fields get created.
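After re-indexing, exact matches can then target the new sub-field (a sketch only; the sensor id value "sensor-42" is illustrative and a local cluster is assumed):

```shell
# Exact, untokenized match on the not_analyzed sub-field
curl -XGET 'localhost:9200/iotsens-summarizedmeasures/summarizedmeasure/_search' -d '{
  "query": {
    "term": { "sensorId.raw": "sensor-42" }
  }
}'
```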

MapperParsingException when creating mapping for elasticsearch index

Using Sense, I'm trying to create a mapping for an index with three properties. When I try to create it, I get the following response:
{
"error": "MapperParsingException[Root type mapping not empty after parsing! Remaining fields: [mappings : {gram={properties={gram={type=string, fields={gram_bm25={type=string, similarity=BM25}, gram_lmd={type=string}}}, sentiment={type=string, index=not_analyzed}, word={type=string, index=not_analyzed}}}}]]",
"status": 400
}
This is what I have in the Sense console:
PUT /pos/_mapping/gram
{
  "mappings": {
    "gram": {
      "properties": {
        "gram": {
          "type": "string",
          "fields": {
            "gram_bm25": {
              "type": "string",
              "similarity": "BM25"
            },
            "gram_lmd": {
              "type": "string"
            }
          }
        },
        "sentiment": {
          "type": "string",
          "index": "not_analyzed"
        },
        "word": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
The name of the index is 'pos' and I call the type 'gram'.
I have created the index with the same name.
I have validated the json using http://jsonlint.com/
I tried using XPUT in the console and I got the 'acknowledged' response, but the mapping is still {} when I request it in Sense.
That question does not solve my problem; I always use the same name everywhere.
Any suggestions?
Thanks!
You just have the API syntax wrong. You've combined two different methods, basically.
Either create your index, then apply a mapping:
DELETE /pos
PUT /pos
PUT /pos/gram/_mapping
{
  "gram": {
    "properties": {
      "gram": {
        "type": "string",
        "fields": {
          "gram_bm25": {
            "type": "string",
            "similarity": "BM25"
          },
          "gram_lmd": {
            "type": "string"
          }
        }
      },
      "sentiment": {
        "type": "string",
        "index": "not_analyzed"
      },
      "word": {
        "type": "string",
        "index": "not_analyzed"
      }
    }
  }
}
Or do it all at once when you create the index:
DELETE /pos
PUT /pos
{
  "mappings": {
    "gram": {
      "properties": {
        "gram": {
          "type": "string",
          "fields": {
            "gram_bm25": {
              "type": "string",
              "similarity": "BM25"
            },
            "gram_lmd": {
              "type": "string"
            }
          }
        },
        "sentiment": {
          "type": "string",
          "index": "not_analyzed"
        },
        "word": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
Here's the code I used:
http://sense.qbox.io/gist/6d645cc069f5f0fcf14f497809f7f79aff7de161