After updating to Elasticsearch 2, I am no longer able to map the ContextSuggester for different types:
PUT /test/foo/_mapping
{
  "properties": {
    "suggest": {
      "type": "completion",
      "context": {
        "type": {
          "type": "category",
          "path": "_type",
          "default": [
            "foo"
          ]
        }
      }
    }
  }
}
PUT /test/bar/_mapping
{
  "properties": {
    "suggest": {
      "type": "completion",
      "context": {
        "type": {
          "type": "category",
          "path": "_type",
          "default": [
            "bar"
          ]
        }
      }
    }
  }
}
Putting the mapping for the second type results in the following exception:
Mapper for [suggest] conflicts with existing mapping in other types: [mapper [suggest] has different [context_mapping] values]
The problem is that the default value differs between the two types. From my point of view, this should be a supported use case. How can I solve this problem?
Tested version of ES: 2.2.1
You have a field conflict.
Mapping - field conflicts
Mapping types are used to group fields, but the fields in each mapping type are not independent of each other. Fields with:
- the same name
- in the same index
- in different mapping types
map to the same field internally, and must have the same mapping. If a title field exists in both the user and blogpost mapping types, the title fields must have exactly the same mapping in each type. The only exceptions to this rule are the copy_to, dynamic, enabled, ignore_above, include_in_all, and properties parameters, which may have different settings per field.
Either create a separate index or rename the field for the other type.
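For instance, a sketch of the rename approach in the question's 2.x syntax: give each type its own completion field, so the differing context defaults never meet (the field names suggest_foo and suggest_bar are just illustrative):
PUT /test/foo/_mapping
{
  "properties": {
    "suggest_foo": {
      "type": "completion",
      "context": {
        "type": {
          "type": "category",
          "path": "_type",
          "default": ["foo"]
        }
      }
    }
  }
}
PUT /test/bar/_mapping
{
  "properties": {
    "suggest_bar": {
      "type": "completion",
      "context": {
        "type": {
          "type": "category",
          "path": "_type",
          "default": ["bar"]
        }
      }
    }
  }
}
Since the two completion fields now have distinct names, they no longer map to the same field internally, and each type can keep its own default.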
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true."
      }
    ]
  },
  "status" : 400
}
I've read that the above is because types are deprecated as of Elasticsearch 7; is that also valid for data types? I mean, how am I supposed to say that I want the data to be considered as a date?
My current mapping has this element:
"Time": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
I just wanted to create it again with type: "date", but I noticed that even copy-pasting the current mapping (which works) yields an error... The index and mappings I have are generated automatically by https://github.com/jayzeng/scrapy-elasticsearch
My goal is simply to have a date field. I have all my dates in my index, but when I want to filter in Kibana I can see the field is not treated as a date. And modifying the mapping doesn't seem like an option.
Obvious ELK noob here, please bear with me (:
The error is quite ironic because I pasted the mapping from an existing index/mapping...
You're experiencing this because of the breaking change in 7.0 regarding named doc types.
Long story short, instead of
PUT example_index
{
  "mappings": {
    "_doc_type": {
      "properties": {
        "Time": {
          "type": "date",
          "fields": {
            "keyword": {
              "type": "keyword"
            }
          }
        }
      }
    }
  }
}
you should do
PUT example_index
{
  "mappings": {    <-- Note: no top-level doc type definition
    "properties": {
      "Time": {
        "type": "date",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
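Once the index is created this way, a quick optional sanity check is to read the mapping back:
GET example_index/_mapping
In 7.x, the response shows the properties directly under the index's mappings key, with no doc-type level in between, and Time should report "type": "date".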
I'm not familiar with your scrapy package, but at a glance it looks like just a wrapper around the native Python ES client, which should forward your mapping definitions to ES.
Caveat: if you have your fields defined as text and you intend to change them to date (or add a field of type date), you'll get the following error:
mapper [Time] of different type, current_type [text], merged_type [date]
You basically have two options:
1. Drop the index, set the new mapping & reindex everything (easiest, but introduces downtime).
2. Extend the mapping with a new field, let's call it Time_as_datetime, and update your existing docs using a script (introduces a brand new field/property -- that's somewhat verbose; see the sketch below).
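A minimal sketch of the second option, assuming your existing Time strings are in a format the date type can parse (Time_as_datetime is just the placeholder name from above):
PUT example_index/_mapping
{
  "properties": {
    "Time_as_datetime": {
      "type": "date"
    }
  }
}
POST example_index/_update_by_query
{
  "script": {
    "source": "ctx._source.Time_as_datetime = ctx._source.Time",
    "lang": "painless"
  }
}
The first call only adds a new field (adding fields is allowed; changing an existing field's type is not), and the second copies the old value into it for every document.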
Currently, I'm trying to implement a variation of the car example here:
https://www.elastic.co/blog/managing-relations-inside-elasticsearch
If I run:
PUT /vehicle/_doc/1
{
  "metadata": [
    {
      "key": "make",
      "value": "Saturn"
    },
    {
      "key": "year",
      "value": "2015"
    }
  ]
}
the code works correctly.
But if I delete the index and change 2015 from a string to a number:
DELETE /vehicle

PUT /vehicle/_doc/1
{
  "metadata": [
    {
      "key": "make",
      "value": "Saturn"
    },
    {
      "key": "year",
      "value": 2015
    }
  ]
}
I get the following error message:
{ "error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "mapper [metadata.value] of different type, current_type [long], merged_type [text]"
}
],
"type": "illegal_argument_exception",
"reason": "mapper [metadata.value] of different type, current_type [long], merged_type [text]" }, "status": 400 }
How do I fix this error?
PUT /vehicle/_doc/1
{
  "metadata": [
    {
      "key": "make",
      "value": "Saturn"
    },
    {
      "key": "year",
      "value": 2015
    }
  ]
}
After deleting the index and then trying to index a new document as above, the following steps take place:
When Elasticsearch cannot find an index named vehicle and automatic index creation is enabled (which it is by default), it creates a new index named vehicle.
Based on the input document, Elasticsearch now tries to make a best guess at the data type for each field of the document. This is known as dynamic field mapping.
For the above document, since metadata is an array of objects, the field metadata is assumed to be of the object data type.
Now comes the step of deciding the data types for the fields of each individual object. When it encounters the first object, it finds two fields, key and value. Both fields have string values (make and Saturn respectively), hence Elasticsearch identifies the data type of both fields as text.
Elasticsearch then defines the mapping as below:
{
  "properties": {
    "metadata": {
      "properties": {
        "key": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "value": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}
When it encounters the second object, the value of the value field is numeric (2015), so it guesses the data type long. This conflicts with the previously identified data type, text. A field cannot be of mixed data types; data types are strict, hence the error.
To resolve the error, you need to make sure the input values for the fields are of the same type in each object, as below:
PUT /vehicle/_doc/1
{
  "metadata": [
    {
      "key": "make",
      "value": 2016
    },
    {
      "key": "year",
      "value": 2015
    }
  ]
}
For the above data, a better structure would be:
PUT /vehicle/_doc/1
{
  "metadata": [
    {
      "make": "Saturn",
      "year": 2015
    }
  ]
}
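If you are ever unsure what dynamic mapping guessed for you, you can read the generated mapping back:
GET /vehicle/_mapping
With the last structure above, make and year become their own fields under metadata instead of being squeezed into a shared value field, so the string/number conflict disappears.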
I have an index that contains documents of different types (not talking about _type here), and each document has a field document_type that states its type. Is it possible to define mappings for each type of document within this index?
Is it possible to define mappings for each type of document within this index?
No, not if you intend to use the same field name with different types. For instance, a field named id of type string and of type integer won't work.
Having different document_type values basically indicates different domains. What you could do is group the information under each respective domain or type. For instance, an employee and a project both have an id and a name, but with different field types in this example. Some call that nesting.
An example index mapping:
PUT example
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "doc": {
      "properties": {
        "employee": {
          "properties": {
            "id": {
              "type": "integer"
            },
            "name": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 64
                }
              }
            }
          }
        },
        "project": {
          "properties": {
            "id": {
              "type": "keyword"
            },
            "name": {
              "type": "keyword",
              "ignore_above": 32
            }
          }
        }
      }
    }
  }
}
You can then write the information, with different types:
PUT example/doc/1
{
  "employee": {
    "id": 4711,
    "name": "John Doe"
  },
  "project": {
    "id": "Project X",
    "name": "Firebrand"
  }
}
Others would argue for storing employee and project in separate indices. That approach depends on your scenario and can also be desirable: it allows both domains to evolve separately from each other.
Having separate employee and project indices also gives you an advantage regarding maintenance. For querying, some would argue that you can then group them with an alias. In the above example that doesn't make much sense, since the field types are different: a search for name over an analyzed text field behaves differently than over a keyword. Querying across indices makes sense if the field types match.
No, if you want to use a single index, you would need to define a single mapping that combines the fields of each document type.
A better way might be to define separate indices on the same cluster for each document type. You can then create a single index alias that aliases to both of those indices if you want to be able to query across document types. Be sure that all fields that exist in both documents have the same data type in both mappings.
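A sketch of that alias setup, with hypothetical indices employees and projects:
POST /_aliases
{
  "actions": [
    { "add": { "index": "employees", "alias": "all_documents" } },
    { "add": { "index": "projects", "alias": "all_documents" } }
  ]
}
A search against all_documents then hits both indices, which is why shared field names need matching data types in both mappings.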
Having a single field name with more than one mapping type in the same index is not possible. Two options I can think of:
1. Separate the different doc types into separate indices.
2. Use different field names for the different doc types, so that each name can have a different mapping. You can also use nesting, like type_a.my_field and type_b.my_field, both in the same index (see the sketch below).
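A minimal sketch of the second option, using the same single-doc-type style as the example above (index and field names are illustrative). Both paths contain my_field, but type_a.my_field and type_b.my_field are distinct fields and can be mapped differently:
PUT my_index
{
  "mappings": {
    "doc": {
      "properties": {
        "type_a": {
          "properties": {
            "my_field": { "type": "text" }
          }
        },
        "type_b": {
          "properties": {
            "my_field": { "type": "integer" }
          }
        }
      }
    }
  }
}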
I have to upload data to ELK in the following format:
{
  "location": {
    "timestamp": 1522751098000,
    "resources": [
      {
        "resource": {
          "name": "Node1"
        },
        "probability": 0.1
      },
      {
        "resource": {
          "name": "Node2"
        },
        "probability": 0.01
      }
    ]
  }
}
I'm trying to define a mapping for this kind of data, and I produced the following mapping:
{
  "mappings": {
    "doc": {
      "properties": {
        "location": {
          "properties": {
            "timestamp": { "type": "date" },
            "resources": []
          }
        }
      }
    }
  }
}
I have 2 questions:
1. How can I define the resources array in my mapping?
2. Is it possible to define a custom type (e.g. resource) and use this type in my mapping (e.g. "resources": [{"type": "resource"}])?
There is a lot to know about the Elasticsearch mapping. I highly suggest reading through at least some of their documentation.
Short answers first, in case you don't care:
1. Elasticsearch automatically allows storing one or multiple values of defined objects; there is no need to specify an array. See Marker 1 below, or refer to their documentation on array types.
2. I don't think there is. Since Elasticsearch 6, only one type per index is allowed. Nested objects are probably the closest, but you define them in the same mapping. Internally, nested objects are stored as separate hidden documents.
Long answer and some thoughts
Take a look at the following mapping:
"mappings": {
"doc": {
"properties": {
"location": {
"properties": {
"timestamp": {
"type": "date"
},
"resources": { [1]
"type": "nested", [2]
"properties": {
"resource": {
"properties": {
"name": { [3]
"type": "text"
}
}
},
"probability": {
"type": "float"
}
}
}
}
}
}
}
}
This is what your mapping could look like. It could be done differently, but I think this way makes sense - maybe except for marker 3. I'll come to the markers right now:
Marker 1: If you define a field, you usually give it a type. I defined resources as a nested type, while your timestamp is of type date. Elasticsearch automatically allows storing one or multiple values of these objects. timestamp could actually also contain an array of dates; there is no need to specify an array.
Marker 2: I defined resources as a nested type, but it could also be an object like resource a little below (where no type is given). Read about nested objects here. In the end I don't know what your queries would look like, so I'm not sure if you really need the nested type; a sketch of such a query follows.
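For illustration, a nested query against the mapping above might look like this (the index name is a placeholder). The nested type is what makes the name and probability conditions apply to the same object in the array, rather than to the array as a whole:
GET /yourindex/_search
{
  "query": {
    "nested": {
      "path": "location.resources",
      "query": {
        "bool": {
          "must": [
            { "match": { "location.resources.resource.name": "Node1" } },
            { "range": { "location.resources.probability": { "lte": 0.5 } } }
          ]
        }
      }
    }
  }
}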
Marker 3: I want to address two things here. First, I want to mention again that resource is defined as a normal object with property name. You could do that for resources as well.
Second thing is more a thought-provoking impulse: Don't take it too seriously if something absolutely doesn't fit your case. Just take it as an opinion.
This mapping structure looks heavily inspired by a relational database approach. With Elasticsearch, you usually want to define document structures around the expected searches. Redundancy is not a problem, but nested objects can make your queries complicated. I think I would omit the whole resources part and do something like this:
"mappings": {
"doc": {
"properties": {
"location": {
"properties": {
"timestamp": {
"type": "date"
},
"resource": {
"properties": {
"resourceName": {
"type": "text"
}
"resourceProbability": {
"type": "float"
}
}
}
}
}
}
}
}
Because as I said, in this case resource can contain an array of objects, each having a resourceName and a resourceProbability.
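Under that flattened mapping, the document from the question would be indexed roughly like this (index and type names are placeholders); note that no array is declared anywhere in the mapping:
PUT /yourindex/doc/1
{
  "location": {
    "timestamp": 1522751098000,
    "resource": [
      { "resourceName": "Node1", "resourceProbability": 0.1 },
      { "resourceName": "Node2", "resourceProbability": 0.01 }
    ]
  }
}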
In my search engine, users can select to search case-sensitively or not. If they choose to, the query will search on fields which use a custom case-sensitive analyser. This is my setup:
GET /candidates/_settings
{
  "candidates": {
    "settings": {
      "index": {
        "number_of_shards": "5",
        "provided_name": "candidates",
        "creation_date": "1528210812046",
        "analysis": {
          "analyzer": {
            "case_sensitive": {
              "filter": [
                "stop",
                "porter_stem"
              ],
              "type": "custom",
              "tokenizer": "standard"
            }
          }
        },
        ...
      }
    }
  }
}
So I have created a custom analyser called case_sensitive taken from this answer. I am trying to define my mapping as follows:
PUT /candidates/_mapping/candidate
{
  "properties": {
    "first_name": {
      "type": "text",
      "fields": {
        "case": {
          "type": "text",
          "analyzer": "case_sensitive"
        }
      }
    }
  }
}
So, when querying, for a case-sensitive match, I can do:
simple_query_string: {
  query: **text to search**,
  fields: [
    "first_name.case"
  ]
}
I am not even getting to the last step, as I am getting the error described in the title when trying to define the mapping.
I initially thought that my error was similar to this one, but I think that issue is only related to using the keyword tokenizer, not the standard one.
In this mapping definition, I was actually trying to adjust the mapping for several different fields, not just first_name. One of those fields has the type long, and that was the part of the mapping definition throwing the error. When I remove it from the mapping definition, everything works as expected. However, I am unsure why this fails for that data type.