I use multi-fields in a lot of my mappings. The Elasticsearch documentation indicates that multi-fields should be replaced with the "fields" parameter. See http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/_multi_fields.html#_multi_fields
This works fine. However, to access a multi-field as a single field, the documentation recommends specifying the copy_to parameter instead of the path parameter (see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#_accessing_fields)
Can somebody provide an example of such a mapping definition (i.e., one using the "fields" parameter combined with "copy_to")?
I have the impression that if you use the fields parameter, you still need to specify the path parameter. And if you use copy_to, you no longer need a multi-fields approach at all; the fields just become separate fields, and the data of one field is copied to another at index time.
Hope somebody can help.
thx
Marc
I think the copy_to option can be viewed as a cleaner variant of the multi-fields feature (that is, the fields option). Both are easy to use when you want to "copy" values of one field into one or more other fields (to apply different mapping rules). However, if you need to "copy" values from multiple fields into the same field (that is, when you want a custom _all field), you must add the path option to the mapping if you're using multi-fields. With the copy_to option, on the other hand, you can simply point multiple source fields at the same destination field.
See this: https://www.elastic.co/guide/en/elasticsearch/reference/1.6/_multi_fields.html
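To illustrate the last point, here is a minimal sketch of a custom _all-style field built with copy_to. The field names are made up for the example, and the 1.x-era "string" type and mapping-type level are assumed to match the docs linked above:

```json
PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "first_name": { "type": "string", "copy_to": "full_name" },
        "last_name":  { "type": "string", "copy_to": "full_name" },
        "full_name":  { "type": "string" }
      }
    }
  }
}
```

Both source fields point at the same destination field, which is exactly what the path option used to enable for multi-fields.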
copy_to would allow you to merge different fields like first_name and last_name into full_name, while a multi-field is used when you want to define several ways to index the same field. For example:
// Document mapping
{
  "properties": {
    "name": {
      "type": "multi_field",
      "fields": {
        "name_metaphone": {
          "type": "string",
          "analyzer": "mf_analyzer"
        },
        "name_exact": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
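Since the multi_field type is deprecated in favor of the fields parameter, the same mapping can be sketched in the newer syntax (the parent field keeps its own mapping, and mf_analyzer is assumed to be defined in the index settings):

```json
{
  "properties": {
    "name": {
      "type": "string",
      "fields": {
        "name_metaphone": { "type": "string", "analyzer": "mf_analyzer" },
        "name_exact":     { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```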
I have an index in Elasticsearch v7.3 with a parent-child relationship, and I indexed the child documents before indexing the parent documents, so firstly I wanted to know if that's okay. Secondly, I have a field in my index whose ignore_above value has to be increased from the existing 256; can I do that without deleting my index?
It's OK with me if the data for that specific field gets lost for existing documents, but for documents indexed from now on I want the ignore_above limit for that specific field to be increased. Can I do that?
PUT /test/_mapping
{
  "properties": {
    "handles": {
      "type": "text",
      "ignore_above": 1000
    }
  }
}
Currently I am getting this error:
"type": "mapper_parsing_exception",
"reason": "Mapping definition for [handles] has unsupported parameters: [ignore_above : 1000]"
Please help me resolve this.
The ignore_above parameter is only available for keyword fields, not on text fields. But yes, if your field was of type keyword, you'd be allowed to update the ignore_above setting for that field.
Since you're using dynamic templates, you could set the right value for ignore_above at index creation time.
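For illustration, a sketch of what the update could look like if the field were mapped as keyword (the index and field names are taken from the question; the keyword mapping itself is the assumption here):

```json
PUT /test/_mapping
{
  "properties": {
    "handles": {
      "type": "keyword",
      "ignore_above": 1000
    }
  }
}
```

Because ignore_above is updatable on existing keyword fields, this request would succeed without recreating the index; only documents indexed after the change are affected.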
I want to set doc_values for the _id field in Elasticsearch, as I want to perform sorting based on _id.
Hitting the API below to update the mapping gives me an error:
PUT my_index/my_type/_mapping
{
  "properties": {
    "_id": {
      "type": "keyword",
      "doc_values": true
    }
  }
}
reason : Mapping definition for [_id] has unsupported parameters: [doc_value : true]
It is "doc_values"; you are using an incorrect parameter. https://www.elastic.co/guide/en/elasticsearch/reference/current/doc-values.html
Elastic discourages sorting on _id field. See this
The value of the _id field is also accessible in aggregations or for sorting, but doing so is discouraged as it requires to load a lot of data in memory. In case sorting or aggregating on the _id field is required, it is advised to duplicate the content of the _id field in another field that has doc_values enabled.
EDIT
Create a scripted field for your index pattern, with a name of e.g. id, of type string, and the script doc['_id'].value. See this link for more information on scripted fields. This will create a new field id that exposes the _id field's value for every document in the indices matching your index pattern. You can then perform sorting on the id field.
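Alternatively, following the documentation's advice to duplicate the _id content into another field, here is a sketch (the index and field names are made up; adjust the request shapes to your Elasticsearch version). Keyword fields have doc_values enabled by default, so the duplicated field is sortable:

```json
PUT my_index
{
  "mappings": {
    "properties": {
      "my_id": { "type": "keyword" }
    }
  }
}

PUT my_index/_doc/1
{
  "my_id": "1"
}

GET my_index/_search
{
  "sort": [ { "my_id": "asc" } ]
}
```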
I have 2 loggers from 2 different clusters logging into my Elasticsearch. logger1 uses indices mydata-cluster1-YYYY.MM.DD and logger2 uses indices mydata-cluster2-YYYY.MM.DD.
I have no way of touching the loggers. So I would like to add a field on the ES side when the data is indexed, to show which cluster the data belongs to. Can I use mappings to do this?
Thanks
What if you use the PUT mapping API, in order to add a field to your index:
PUT mydata-cluster1-YYYY.MM.DD/_mapping/mappingtype <-- change the mapping type according to yours
{
  "properties": {
    "your_field": {
      "type": "text" <--- type of the field
    }
  }
}
This SO answer could come in handy. Hope it helps!
I have some extra inner fields on a geo-shape type field. For example, "shape" is a geo-shape type field which has the regular required fields like "coordinates", "radius" etc., but it may also have other fields like "metadata" which I want elasticsearch to not parse and not store in the index. For example:
"shape": {
  "coordinates": [6.77, 8.99],
  "radius": 500,
  "metadata": "some value"
}
Mapping schema looks like this:
"shape": {
  "type": "geo_shape"
}
How can I achieve this? Using "dynamic": false on the mapping schema does not seem to work.
Setting dynamic to false in your root mapping, like you did, is the way to go: are you sure it doesn't work? Or are you saying that because the field appears in your result hit _source?
Actually, by default, the _source attribute contains the exact same document that you submitted.
However, that doesn't mean the extra metadata field has been indexed and/or stored.
If you want to check this, request that field specifically in your search, like this:
POST _search
{
  "fields": ["shape.metadata"]
}
You should get your search hits back, but without any fields values.
If it still bothers you, disable the _source attribute in your mapping.
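For reference, a sketch of disabling _source in the mapping (the index and mapping-type names are placeholders; note that disabling _source also disables features that rely on it, such as reindexing from the source and update requests):

```json
PUT my_index
{
  "mappings": {
    "my_type": {
      "_source": { "enabled": false },
      "properties": {
        "shape": { "type": "geo_shape" }
      }
    }
  }
}
```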
Is it possible to do a suggestion completion on a type? I'm able to do it on an index.
POST /data/_suggest
{
  "data": {
    "text": "tr",
    "completion": {
      "field": "sattributes",
      "size": 50
    }
  }
}
when I do it on a type:
POST /data/suggestion/_suggest
{
  "data": {
    "text": "tr",
    "completion": {
      "field": "sattributes",
      "size": 50
    }
  }
}
suggestion is the type.
I don't get any results. I need to do suggestions on two different types, articles and books. Do I need to create separate indexes to make them work, or is there a way in Elasticsearch to accomplish this? In case I have to search across my index data, is there a way to get 50 results for type article and 50 results for type book?
Any help is highly appreciated.
Lucene has no concept of types, so in Elasticsearch they are simply implemented as a hidden field called _type. When you search on a particular type, Elasticsearch adds a filter on that field.
The completion suggester doesn't use traditional search at all, which means that it can't apply a filter on the _type field. So you have a couple of options:
Use a different completion suggester field per type, e.g. suggestion_sattributes, othertype_sattributes
Index your data with the _type as a prefix, e.g. type1 actual words to suggest, then when you ask for suggestions, prepend type1 to the query
Use separate indices
In fact, option (2) above is being implemented at the moment as the new ContextSuggester, which will allow you to do this (and more) automatically.
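Option (2) can be sketched as follows. The index and field names come from the question, but the prefixing scheme itself is an assumption: the suggestion input is indexed with the type name prepended, and the same prefix is added when querying, so only suggestions for that type can match:

```json
PUT /data/book/1
{
  "sattributes": {
    "input": "book treasure island",
    "output": "Treasure Island"
  }
}

POST /data/_suggest
{
  "data": {
    "text": "book tr",
    "completion": {
      "field": "sattributes",
      "size": 50
    }
  }
}
```

The output value is what gets returned to the user, so it can stay free of the type prefix.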