ES version 6.8.12
I want a given field in an index to accept all types of data instead of being bound to one specific type.
I'm facing an issue when a string is stored in a field mapped as long:
[WARN ] 2020-09-14 06:34:36.470 [[main]>worker0] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"5f4632bab98bdd75a267546b", :_index=>"cdrindex", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x38a5506>], :response=>{"index"=>{"_index"=>"cdrindex", "_type"=>"doc", "_id"=>"5f4632bab98bdd75a267546b", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [caller_id_number] of type [long] in document with id '5f4632bab98bdd75a267546b'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"Anonymous\""}}}}}
Then you need to pick the text or keyword data type.
In your mapping, you need to set the caller_id_number data type explicitly to one of the above instead of letting Elasticsearch decide for you.
For instance:
PUT your-index
{
  "mappings": {
    "doc": {
      "properties": {
        "caller_id_number": {
          "type": "text"
        },
        ...
      }
    }
  }
}
Note that you can leverage dynamic mappings if you want to automatically set the mapping for some fields:
PUT your-index
{
  "mappings": {
    "doc": {
      "dynamic_templates": [
        {
          "sources": {
            "match": "caller_*",
            "mapping": {
              "type": "text"
            }
          }
        }
      ],
      "properties": {
        "specific_field": {
          "type": "long"
        }
      }
    }
  }
}
With the dynamic template above, all fields starting with caller_ are automatically mapped as text, while specific_field is mapped as long.
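For example, with that template in place, an event like the one that previously failed should now index cleanly (a sketch; the field values are illustrative):
POST your-index/doc
{
  "caller_id_number": "Anonymous",
  "specific_field": 42
}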
I have an Elasticsearch index, and I am saving documents into it.
Is there any way to make Elasticsearch throw an exception when I try to index/save the same document (same _id) again with a new/updated value,
but only if the update touches one particular field? For other fields it can keep the default behavior.
For example, I have an index template as below:
PUT /_index_template/example_template
{
"index_patterns": [
"example*"
],
"priority": 1,
"template": {
"aliases": {
"example":{}
},
"mappings": {
"dynamic":"strict",
"_source":
{"enabled": false},
"properties": {
"SomeID": {
"type": "keyword"
},
"AnotherInfo": {
"type": "keyword"
}
}
}
}
}
Then I create an index based on this template:
PUT example01
After that I save a document against this index:
POST example01/_doc/1
{
"SomeId": "abcdedf",
"AnotherInfo":"xyze"
}
Now, the next time I try to save the document again with a different "SomeId" value:
POST example01/_doc/1
{
"SomeId": "uiiuiiu",
"AnotherInfo":"xyze"
}
I want it to say "Sorry, the SomeId field cannot be updated".
Basically, I want to prevent a document field from getting updated in Elasticsearch.
Thanks in advance!
Elasticsearch supports revisions on documents by default, meaning it tracks changes to indexed documents through their _id: each time you manipulate the document with, say, id 17, it increments the _version field. So you cannot have two duplicate documents with the same id unless you use custom routing; with custom routing, always be careful about duplication of the _id field, because that field is not just an identifier, it also determines which shard the document is located on.
Moreover, Elasticsearch has no way to enforce restrictions at the field level within a document; you have to control updates to specific fields at the application level, or use field-level security based on roles.
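For example, one application-level option is to route all changes through the Update API with a script that rejects modifications to the protected field. A minimal sketch (this assumes ES 7+, matching the _index_template usage above, and that _source is enabled, which the template above disables; the parameter names are illustrative):
POST example01/_update/1
{
  "script": {
    "lang": "painless",
    "source": "if (ctx._source.SomeId != params.SomeId) { throw new IllegalArgumentException('Sorry, the SomeId field cannot be updated') } ctx._source.AnotherInfo = params.AnotherInfo",
    "params": {
      "SomeId": "uiiuiiu",
      "AnotherInfo": "xyze"
    }
  }
}
Note that this only guards updates that go through the script; a plain index request with the same _id would still overwrite the document.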
As an example of field-level security, consider the role definition below, which grants read access only to the category, @timestamp, and message fields in all the events-* data streams and indices.
POST /_security/role/test_role1
{
"indices": [
{
"names": [ "events-*" ],
"privileges": [ "read" ],
"field_security" : {
"grant" : [ "category", "#timestamp", "message" ]
}
}
]
}
I have JSON data with a "product_ref" field that can take these values, for example:
"product_ref": "N/A"
"product_ref": "90323"
"product_ref": "SN3005"
"product_ref": "2015-05-23"
When pushing the data to the index, I get a mapping error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"mapper [root.product_ref] of different type, current_type [date], merged_type [text]"}],"type":"illegal_argument_exception","reason":"mapper [root.product_ref] of different type, current_type [date], merged_type [text]"},"status":400}
Any idea?
There is something called date detection, and by default, it is enabled.
If date_detection is enabled (default), then new string fields are checked to see whether their contents match any of the date patterns specified in dynamic_date_formats. If a match is found, a new date field is added with the corresponding format.
You just need to disable it by modifying your mappings:
PUT /products
{
  "mappings": {
    "doc": {
      "date_detection": false,
      "properties": {
        "product_ref": { "type": "keyword" }
      }
    }
  }
}
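With that mapping, all of the example values should index without a parsing error, since product_ref is now always treated as a keyword (a sketch; the document bodies are illustrative):
POST /products/doc
{ "product_ref": "2015-05-23" }
POST /products/doc
{ "product_ref": "N/A" }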
This is happening because Elasticsearch assumed you were indexing dates of a particular format, and then a value that doesn't match that format was indexed, i.e. after indexing a date, you indexed a value in the wrong format.
Make sure all the values are dates and none are empty; perhaps remove the offending values in your ingestion layer.
EDIT: If you don't mind losing the date typing, you can instead use a dynamic template that remaps detected dates to text:
PUT /products
{
  "mappings": {
    "doc": {
      "dynamic_templates": [
        {
          "dates_as_text": {
            "match_mapping_type": "date",
            "mapping": {
              "type": "text"
            }
          }
        }
      ]
    }
  }
}
As mentioned in the title, I want to disable indexing of a specific field in Elasticsearch. For example, I have a field named #fields which contains three sub-fields: name, age, and salary. Now I do not want to index the field #fields.age in Elasticsearch. How can I achieve that? I have tried to use the include_in_all parameter, but it doesn't work. The mapping configuration looks like:
"mappings": {
"fluentd": {
"properties": {
"#fields": {
"properties": {
"age": {
"type": "text",
"include_in_all": false,
"index": "no"
}
}
}
}
}
}
When I use the mapping configuration above, I only see #fields.age in the index's mapping; I expected #fields.name and #fields.salary to appear in the mapping, not #fields.age. How can this happen? Any answers will be appreciated.
After updating to Elasticsearch 2, I am no longer able to map the ContextSuggester for different types:
PUT /test/foo/_mapping
{
"properties": {
"suggest": {
"type": "completion",
"context": {
"type": {
"type": "category",
"path": "_type",
"default": [
"foo"
]
}
}
}
}
}
PUT /test/bar/_mapping
{
"properties": {
"suggest": {
"type": "completion",
"context": {
"type": {
"type": "category",
"path": "_type",
"default": [
"bar"
]
}
}
}
}
}
Putting the mapping for the second type ends in the following exception:
Mapper for [suggest] conflicts with existing mapping in other types: [mapper [suggest] has different [context_mapping] values]
The problem is that the default value differs between the types. From my point of view, this should be a supported approach. How can I solve this problem?
Tested version of ES: 2.2.1
You have a field conflict.
Mapping - field conflicts
Mapping types are used to group fields, but the fields in each
mapping type are not independent of each other. Fields with:
the same name
in the same index
in different mapping types
map to the same field internally, and must have the same mapping. If a
title field exists in both the user and blogpost mapping types, the
title fields must have exactly the same mapping in each type. The only
exceptions to this rule are the copy_to, dynamic, enabled,
ignore_above, include_in_all, and properties parameters, which may
have different settings per field.
Either create a separate index or rename the field for the other type.
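For instance, renaming the suggest field per type sidesteps the conflict (a sketch using the same ES 2.x syntax as above; suggest_foo and suggest_bar are illustrative names):
PUT /test/foo/_mapping
{
  "properties": {
    "suggest_foo": {
      "type": "completion",
      "context": {
        "type": {
          "type": "category",
          "path": "_type",
          "default": [ "foo" ]
        }
      }
    }
  }
}
Likewise, map a suggest_bar field for the bar type with its own default. Since the two fields have different names, they no longer need identical mappings.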
I'm mapping a Couchbase gateway document and I'd like to tell Elasticsearch to avoid indexing the internal attributes added by the gateway, like "_sync". This object contains another object named "channels", which has the following form:
"channels": {
"i7de5558-32ad-48ca-bf91-858c3a1e4588": 12
}
So I guess the mapping of this object would look like this:
"channels": {
"type": "object",
"properties": {
"i7de5558-32ad-48ca-bf91-858c3a1e4588": {
"type": "integer",
"index": "not_analyze"
}
}
}
The problem is that the keys are always changing, so I don't know if I should use a wildcard like "*": {"type": "integer", "index": "not_analyzed"} for this property, or do something else.
Any advice please?
If the fields are of integer type, you don't have to provide them explicitly in the mapping. You can create an empty mapping and index documents with these fields; Elasticsearch will infer the type of each field and update the mapping dynamically. You can also use dynamic templates for this:
PUT /my_index
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "channels_as_integer": {
            "path_match": "channels.*",
            "mapping": {
              "type": "integer"
            }
          }
        }
      ]
    }
  }
}
There's a dynamic way to do what you need; it's called a dynamic template.
Using templates you are able to create rules like this:
PUT /my_index
{
"mappings": {
"my_type": {
"date_detection": false
}
}
}
In your case, you could create a template that sets all new fields inside the channels object as not_analyzed, as sketched below.
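A possible sketch (assuming the channels object sits at the document root as in the question; the template name is illustrative, and this uses pre-5.x syntax to match the not_analyzed option):
PUT /my_index
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "channels_not_analyzed": {
            "path_match": "channels.*",
            "mapping": {
              "type": "integer",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}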
Hope it helps.