ElasticSearch :: Exception while creating index and document - elasticsearch

I am new to ElasticSearch and was trying to execute the example from their home page when I came across this error:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "unknown setting [index.mappings.employee.properties.age.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
}
],
"type": "illegal_argument_exception",
"reason": "unknown setting [index.mappings.employee.properties.age.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings",
"suppressed": [
{
"type": "illegal_argument_exception",
"reason": "unknown setting [index.mappings.employee.properties.experience.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
},
{
"type": "illegal_argument_exception",
"reason": "unknown setting [index.mappings.employee.properties.name.analyzer] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
},
{
"type": "illegal_argument_exception",
"reason": "unknown setting [index.mappings.employee.properties.name.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
}
]
},
"status": 400
}
The URL and body of the request are as follows:
URL: http://localhost:9200/company
Body:
{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"analysis": {
"analyzer": {
"analyzer-name": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
},
"mappings": {
"employee": {
"properties": {
"age": {
"type": "long"
},
"experience": {
"type": "long"
},
"name": {
"type": "string",
"analyzer": "analyzer-name"
}
}
}
}
}
}
How do I fix this error?

There are two errors in the syntax of your JSON body:
The settings node must have only two children: index and analysis. The mappings node must be at the root level.
The name field has the invalid type string; it must be text or keyword. Since you need this field to be analyzed, it should be text in your case.
So a working query for ES version 6.x (which was current at the time of the question) looks like this:
{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"analysis": {
"analyzer": {
"analyzer-name": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
}
},
"mappings": {
"employee": {
"properties": {
"age": {
"type": "long"
},
"experience": {
"type": "long"
},
"name": {
"type": "text",
"analyzer": "analyzer-name"
}
}
}
}
}
Starting from ES version 7.0, mapping types were removed from the index definition, so the query above won't work in ES 7.x.
A working query for ES version 7.x can take one of two forms:
If the index should contain data only about employees, you can simply drop the employee mapping type, and the query becomes:
{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"analysis": {
"analyzer": {
"analyzer-name": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
}
},
"mappings": {
"properties": {
"age": {
"type": "long"
},
"experience": {
"type": "long"
},
"name": {
"type": "text",
"analyzer": "analyzer-name"
}
}
}
}
If the index should contain data about employees as well as other data, you can use employee as a field of type object, and the query becomes:
{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"analysis": {
"analyzer": {
"analyzer-name": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
}
},
"mappings": {
"properties": {
"employee": {
"properties": {
"age": {
"type": "long"
},
"experience": {
"type": "long"
},
"name": {
"type": "text",
"analyzer": "analyzer-name"
}
}
}
}
}
}
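To sanity-check the custom analyzer after creating the index with either 7.x variant above, the _analyze API can be used (the sample text here is just an illustration). Because analyzer-name uses the keyword tokenizer with a lowercase filter, the whole input comes back as a single lowercased token:

```
POST /company/_analyze
{
  "analyzer": "analyzer-name",
  "text": "John Smith"
}
```

This should return the single token "john smith" rather than two separate tokens.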

As you mentioned that you are new to Elasticsearch, it is better to start with the basics and use Elasticsearch's default settings. Use the following mapping (note that this syntax, with the string type and not_analyzed, only works on ES versions before 6.0):
curl -XPUT localhost:9200/company -d '{
"mappings": {
"employee": {
"properties": {
"age": {"type": "long"},
"experience": {"type": "long"},
"name": {"type": "string","index": "not_analyzed"}
}
}
}
}'
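Note that on a 6.x cluster this exact request fails twice over: the string type and not_analyzed were removed in 6.0, and 6.0+ also rejects request bodies sent without a Content-Type header. A rough 6.x equivalent, still keeping the default index settings, might be (keyword is the 6.x replacement for a non-analyzed string):

```
curl -XPUT 'localhost:9200/company' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "employee": {
      "properties": {
        "age": {"type": "long"},
        "experience": {"type": "long"},
        "name": {"type": "keyword"}
      }
    }
  }
}'
```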

Related

Elastic Search doesn't allow me to index field in new template version previously set enabled false

I have a template with a lot of fields; below is an abbreviated sample:
{
"index_patterns": "test*",
"order": 2,
"version": 2,
"aliases": {
"tests": {
}
},
"settings": {
"number_of_shards": 5,
"analysis": {
"normalizer": {
"lowercase_normalizer": {
"type": "custom",
"char_filter": [
],
"filter": [
"lowercase"
]
}
}
}
},
"mappings": {
"dynamic": "false",
"properties": {
"id": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
},
"emailAddress": {
"enabled": false
},
"createdTimestampEpochInMilliseconds": {
"type": "date",
"format": "epoch_millis"
},
"updatedTimestampEpochInMilliseconds": {
"type": "date",
"format": "epoch_millis"
},
"createdDate": {
"type": "date"
},
"updatedDate": {
"type": "date"
}
}
}
}
The field emailAddress is set to enabled=false, and we have a requirement to make it searchable, so we need to change the template and give this field the same type and normalizer as the id field, then PUT the template and reindex the data from index test-2 to index test-4.
{
"index_patterns": "test*",
"order": 4,
"version": 4,
"aliases": {
"tests": {
}
},
"settings": {
"number_of_shards": 5,
"analysis": {
"normalizer": {
"lowercase_normalizer": {
"type": "custom",
"char_filter": [
],
"filter": [
"lowercase"
]
}
}
}
},
"mappings": {
"dynamic": "false",
"properties": {
"id": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
},
"emailAddress": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
},
"createdTimestampEpochInMilliseconds": {
"type": "date",
"format": "epoch_millis"
},
"updatedTimestampEpochInMilliseconds": {
"type": "date",
"format": "epoch_millis"
},
"createdDate": {
"type": "date"
},
"updatedDate": {
"type": "date"
}
}
}
}
When trying to reindex, either using Elasticsearch's ReindexOnServer or by manually querying data and moving it from one index to the other, we receive a 400 Bad Request error:
{
"index": "test-4",
"type": "_doc",
"id": "54e1ea11-d7b4-4310-90f1-11ddbecc4d21",
"cause": {
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [_doc]: Mapping definition for [emailAddress] has unsupported parameters: [enabled : false]",
"caused_by": {
"type": "mapper_parsing_exception",
"reason": "Mapping definition for [emailAddress] has unsupported parameters: [enabled : false]"
}
},
"status": 400
}
The error message is a bit confusing; changing the template version should create the index automatically using the latest template and respect the new field's indexing type and normalizer.
Not sure what I'm missing here.
I would remove the old template, as it is no longer useful to you. The version field is purely informational and not used by ES.
What happens here is that both templates kick in, and the latest one (v4) overrides the older one (v2) because of the order setting. However, the merged mapping ends up with both the new keyword type and the old enabled: false on emailAddress, and enabled is not a supported parameter for that field type, which causes this error. If you remove the older template, you'll be fine.
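Concretely, the old template can be inspected and removed with the index template API. The template name below is hypothetical; substitute whatever name GET /_template returns for your v2 template:

```
GET /_template/test*

DELETE /_template/test-template-v2
```

After deleting the v2 template, only the v4 template matches test*, so newly created indices get the keyword mapping for emailAddress and the reindex should succeed.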

ElasticSearch creating index returns error

I'm using Elasticsearch v6.2.2 and am trying to create an index in Kibana 6.2.2.
I got this code from a guide for beginners: https://www.codementor.io/ashish1dev/getting-started-with-elasticsearch-du107nett
PUT /company
{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"analysis": {
"analyzer": {
"analyzer-name": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
},
"mappings": {
"employee": {
"properties": {
"age": {
"type": "long"
},
"experience": {
"type": "long"
},
"name": {
"type": "string",
"analyzer": "analyzer-name"
}
}
}
}
}
}
I get an error after executing this request:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "unknown setting [index.mappings.properties.age.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
}
],
"type": "illegal_argument_exception",
"reason": "unknown setting [index.mappings.properties.age.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
},
"status": 400
}
Afterwards, I wanted to index a document like this:
POST /company/employee/?_create
{
"name": "Andrew",
"age" : 45,
"experience" : 10
}
Could you please explain what's wrong with this code?
mappings cannot be nested inside settings and must be a top-level section. Also, since you're on ES 6.2, the string type is no longer supported; use text instead:
PUT /company
{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"analysis": {
"analyzer": {
"analyzer-name": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
}
},
"mappings": {
"employee": {
"properties": {
"age": {
"type": "long"
},
"experience": {
"type": "long"
},
"name": {
"type": "string",
"analyzer": "analyzer-name"
}
}
}
}
}
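The follow-up request in the question (POST /company/employee/?_create) is also not valid syntax: _create requires an explicit document ID. With the index created as above, a document can be indexed on 6.x either with an auto-generated ID or with _create and an explicit ID:

```
POST /company/employee
{
  "name": "Andrew",
  "age": 45,
  "experience": 10
}

PUT /company/employee/1/_create
{
  "name": "Andrew",
  "age": 45,
  "experience": 10
}
```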

Add mapping on elastic search index field

I would like to add the uax_url_email analyzer to search the email field in an Elasticsearch document. However, I get an error.
Here is the code I am using to create this index:
{
"user": {
"aliases": {},
"mappings": {
"user": {
"properties": {
"created": {
"type": "date"
},
"email": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
},
"settings": {
"index": {
"number_of_shards": "5",
"provided_name": "user",
"creation_date": "1521016015646",
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"type": "uax_url_email",
"max_token_length": "255"
}
}
},
"number_of_replicas": "1",
"uuid": "RS96V9gFQbG5UmoQ2R_gLA",
"version": {
"created": "6010099"
}
}
}
}
}
PUT /user/_mapping/email
{
"mappings": {
"_doc": {
"properties": {
"email": {
"type": "text",
"fields": {
"my_analyzer": {
"email": "text",
"analyzer": "my_analyzer"
}
}
}
}
}
}
}
I got the following error:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [mappings : {_doc={properties={email={type=text, fields={my_analyzer={email=text, analyzer=my_analyzer}}}}}}]"
}
],
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [mappings : {_doc={properties={email={type=text, fields={my_analyzer={email=text, analyzer=my_analyzer}}}}}}]"
},
"status": 400
}
Nothing is found. I want my analyzer and tokenizer to work on the email field; any help will be highly appreciated.
This should work:
PUT /user/_mapping/user
{
"properties": {
"email": {
"type": "text",
"fields": {
"my_analyzer": {
"type": "text",
"analyzer": "my_analyzer"
}
}
}
}
}
Your mistake was assuming that the index type was _doc; looking at your mapping, the type is actually user. Basically, you have a user index with a user type.
The format of the command is PUT /[INDEX_NAME]/_mapping/[TYPE_NAME] {"properties": { "[FIELD_TO_BE_UPDATED]": {...} } }.
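Once the mapping update succeeds, the _analyze API can confirm that my_analyzer keeps an email address as a single token (the sample text below is just an illustration):

```
POST /user/_analyze
{
  "analyzer": "my_analyzer",
  "text": "contact me at john.doe@example.com"
}
```

With the uax_url_email tokenizer, john.doe@example.com comes back as one token instead of being split on the dots and the @ sign.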

Elasticsearch mapping issue when one index already existed from before

Whenever there is already one index present in Elasticsearch and I try to create another index with the following steps:
1) create an empty index called dsi2 with settings and an analyzer
curl -XPUT 'https://instance:9243/dsi2' -d '{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"analysis": {
"analyzer": {
"analyzer_keyword": {
"tokenizer": "keyword",
"filter": "lowercase"
}
}
}
}
}'
2) create the mapping:
curl -XPUT 'https://instance:9243/_mapping/dsi2' -d '{
"_all": {
"enabled": true
},
"properties": {
"formTypeId": {
"type": "integer"
},
"status": {
"type": "integer"
},
"tenantId": {
"type": "integer"
},
"formDefinitionId": {
"type": "integer"
},
"instance": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"fields": {
"raw": {
"type": "integer",
"index": "not_analyzed"
}
.....
it throws:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "analyzer [analyzer_keyword] not found for field [raw]"
}
],
"type": "mapper_parsing_exception",
"reason": "analyzer [analyzer_keyword] not found for field [raw]"
},
"status": 400
}
There is no issue creating the mapping for dsi2 when dsi2 is the only index created with steps 1) and 2) above. If there is already an index created prior to dsi2, even though both are separate indices, is there some conflict with the mapping?
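A plausible explanation, given that the request in step 2) omits the index name: in older ES versions, PUT /_mapping/{type} without an index applies the mapping to every existing index, so the mapping (which references analyzer_keyword) is also applied to the earlier index, where that analyzer was never defined. Scoping the request to the new index should avoid touching the other one:

```
curl -XPUT 'https://instance:9243/dsi2/_mapping/dsi2' -d '{
  "_all": {
    "enabled": true
  },
  "properties": {
    "formTypeId": {
      "type": "integer"
    }
  }
}'
```

(Body abbreviated; the rest is as in step 2 above. Note the index name dsi2 now appears in the URL before /_mapping.)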

elasticsearch edge_ngrams analyzer is not found

I am following an official video from Elasticsearch, and they said to do this:
PUT /blablabla/doc/_mapping
{
"properties": {
"title" : {
"type": "string",
"fields": {
"stemmed" : {
"type": "string",
"analyzer": "english"
},
"autocomplete" : {
"type": "string",
"analyzer": "edge_ngrams"
}
}
}
}
}
I got an error saying that the analyzer edge_ngrams does not exist:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "analyzer [edge_ngrams] not found for field [autocomplete]"
}
],
"type": "mapper_parsing_exception",
"reason": "analyzer [edge_ngrams] not found for field [autocomplete]"
},
"status": 400
}
Why?
I am on Elasticsearch 2.2.
Update
Calling GET /blablabla, I get the following:
{
"blablabla": {
"aliases": {},
"mappings": {
"doc": {
"properties": {
"job": {
"type": "string"
},
"name": {
"type": "string"
}
}
}
},
"settings": {
"index": {
"creation_date": "1456267981541",
"number_of_shards": "5",
"number_of_replicas": "1",
"uuid": "5042-5UwR42QY45jMRw8jQ",
"version": {
"created": "2010199"
}
}
},
"warmers": {}
}
}
The correct name of the analyzer is edgeNGram, not edge_ngrams. Check out this link.
The default values for min_gram and max_gram are 1 and 2 respectively. In most cases, you will need to define your own custom analyzer.
You may like to check how to do this at this reference.
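In 2.x, edgeNGram/edge_ngram is available out of the box as a tokenizer and token filter; for autocomplete it is usually wrapped in a custom analyzer defined at index-creation time. A sketch for ES 2.2 (the index name, filter name, and gram limits below are placeholders to adapt):

```
PUT /blablabla_v2
{
  "settings": {
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "autocomplete_filter"]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type": "string",
          "analyzer": "autocomplete",
          "search_analyzer": "standard"
        }
      }
    }
  }
}
```

Using standard as the search_analyzer prevents the query itself from being edge-ngrammed, which is the usual setup for autocomplete. Since analysis settings cannot be added to an existing open index, this has to go into a new index (or the existing one must be closed, updated, and reopened).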
