Problem with dynamic fields in Elasticsearch

I am running Packetbeat on my server.
I disabled dynamic mapping in the index mapping, which should mean that when new data comes in, no new fields are created.
There is no extra field in my mapping, but when I send a request from Postman to show records, there is a new field in the result, and I am sure it is not in my mapping.
How is this possible?

I found the answer.
In Elasticsearch, setting dynamic: false means the following:
The dynamic setting controls whether new fields can be added dynamically or not. It accepts three settings:
true: Newly detected fields are added to the mapping. (default)
false: Newly detected fields are ignored. These fields will not be indexed, so they will not be searchable, but they will still appear in the _source field of returned hits. These fields will not be added to the mapping; new fields must be added explicitly.
strict: If new fields are detected, an exception is thrown and the document is rejected. New fields must be explicitly added to the mapping.
There is a more detailed description at this link.
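For illustration, here is a minimal sketch of this behaviour using a hypothetical test index (the index and field names are made up, not taken from the question):

PUT packetbeat-test
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "source_ip": { "type": "ip" }
    }
  }
}

PUT packetbeat-test/_doc/1
{
  "source_ip": "10.0.0.1",
  "unexpected_field": "not in the mapping"
}

The document is accepted, and unexpected_field is returned inside _source of every hit, but it is not added to the mapping and a search on unexpected_field matches nothing.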

Related

elasticsearch unknown setting index.include_type_name

I am in a really weird situation: I need to create indices in Elasticsearch that contain typeless fields. I have a Rails application that sends data to Elasticsearch every second. About my architecture, I run the Elastic Stack on Docker on an Ubuntu server, use a socket to send the data to it, and everything is on the latest version.
In my Rails application the user can choose a datatype for each field, but the issue happens when the user wants to change the datatype of a field right after it has been created; Logstash returns this error:
error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [field] of type [long] in document with id '5e760cac-cafc-4fd0-9e45-1c650967ccd4'. Preview of field's value: '2022-01-18T08:06:30'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"2022-01-18T08:06:30\
I found the dead letter queue plugin to save the bad input on my server. After that I thought that if I could index documents without any type the problem would be solved, so I started googling and found the removal of mapping types in the Elasticsearch documentation. I followed the instructions described in the tutorials and got the following error:
unknown setting [index.include_type_name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
Even when I put include_type_name in the request sent to Elasticsearch, nothing changes, and I do have the latest version of Elasticsearch.
I thought it might help to edit the default Elasticsearch template, but nothing changed there either. Could you please help me with what I should do?
As already mentioned in the comments, Elasticsearch does not support changing the data type of a field without a reindex or the creation of a new index.
For example, if a field is mapped as a numeric type like integer and the user wants to index a string value into this field, Elasticsearch will return a mapping error.
You would need to change the mapping of the index and reindex it, or create an entirely new index using the new mapping.
None of this is done automatically by Elasticsearch; you would need to deal with it in your application. You could catch the error and implement some logic to create a new index with the new mapping, but this could lead to other problems, such as having too many indices in the cluster and query errors when the range of a query includes indices that have the same field with different mappings.
One Elasticsearch feature that could help you in some way is runtime fields: with runtime fields you can query a field that has a specific mapping as if it had a different mapping.
For example, if you have a field that holds date values but was wrongly mapped as a keyword or text field, you could use a runtime field to query it as if it were a date field.
But again, this requires you to implement logic to build those runtime fields, and it can also lead to other problems: not all data types are available to runtime fields, and runtime fields can impact performance.
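As a rough sketch of that idea (the index and field names here are hypothetical), a runtime field defined in the search request can shadow a mapped field with the same name and read its value from _source, so a keyword field that holds ISO-8601 strings can be queried as a date:

GET my-index/_search
{
  "runtime_mappings": {
    "created": { "type": "date" }
  },
  "query": {
    "range": { "created": { "gte": "2022-01-01" } }
  }
}

Without a script, the runtime field takes its value from _source at query time, which is what makes the shadowing work; this requires a reasonably recent Elasticsearch version, since runtime fields were introduced in 7.x.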
Another feature that could help you is multi-fields; this, I think, is the closest you can get to having a field with multiple data types.
Using multi-fields you could have a field named date with the date type and also a field named date.keyword with the keyword type; you could likewise have a field named code with the keyword type and a field named code.int with the integer type. You would also need to use the ignore_malformed setting in the mapping so that Elasticsearch does not reject the entire document in case of mapping errors, only the field with the wrong mapping.
Just keep in mind that when you use multi-fields you will have a different field for each mapping: date is one field and date.keyword is another, which will increase the storage usage.
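A minimal sketch of such a mapping (the index and field names are hypothetical):

PUT my-index
{
  "mappings": {
    "properties": {
      "date": {
        "type": "date",
        "ignore_malformed": true,
        "fields": {
          "keyword": { "type": "keyword" }
        }
      },
      "code": {
        "type": "keyword",
        "fields": {
          "int": { "type": "integer", "ignore_malformed": true }
        }
      }
    }
  }
}

With this mapping, a value that is not a valid date is still kept in date.keyword, and a code value that is not a number is still kept in code, because ignore_malformed only drops the malformed variant instead of rejecting the whole document.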
But again, none of this is done automatically; it needs logic in your application. Elasticsearch does not allow you to change the mapping of a field, so if your application needs that, you will have to implement something in the application that can work within these limitations of Elasticsearch.

Is it possible to update the `store` value in the mapping of an existing field in an Elasticsearch 6.x index?

I have an index created by Elasticsearch 6.8.7. I query against some fields which don't correspond to the documents' own fields, because they are merged copies of them. At index creation their store value was set to false. Now I need to get highlights, but the content of the query fields is not stored. Can I update the mapping and set store to true? The index's _source is enabled.
The docs don't mention this ability, and I can't experiment with updating store on my production cluster.
No, it's not.
In general, the mapping for existing fields cannot be updated. There are some exceptions to this rule. For instance:
new properties can be added to Object datatype fields.
new multi-fields can be added to existing fields.
the ignore_above parameter can be updated.
Source.
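As a sketch of one of those allowed exceptions (the index and field names are hypothetical, and the _doc type in the path matches the 6.x API), adding a multi-field to an existing text field is accepted:

PUT my-index/_mapping/_doc
{
  "properties": {
    "title": {
      "type": "text",
      "fields": {
        "raw": { "type": "keyword", "ignore_above": 256 }
      }
    }
  }
}

The existing type of title has to be repeated unchanged; trying to flip its store value in the same request is rejected by Elasticsearch as a mapping conflict.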
Also, I tried to update the mapping on a sample index, and ES didn't allow me to update the store value of an existing field.
That is understandable, but still sad.

Elasticsearch: Check dynamic mapping type

I am trying to retrieve the mapping for an index as follows:
GET /twitter/_mapping/_doc
However, there is no dynamic field in the response for me to check which dynamic mapping type is applied (strict / false / true).
How can I verify my dynamic mapping type?
As explained in the documentation on dynamic field mappings, if the dynamic setting is not specified (and hence not returned by the _mapping call), the default value is true, which means that new fields will be added to the mapping if they don't exist yet.
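If you want the setting to show up, you have to set it explicitly; a quick sketch, assuming a 7.x-style typeless mapping and a hypothetical test index:

PUT twitter-test
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "message": { "type": "text" }
    }
  }
}

GET twitter-test/_mapping

Here the GET response contains "dynamic": "strict" at the top level of the mapping; when the setting was never specified, that key is simply absent and the default of true applies.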

Filtering Elasticsearch fields from index/store

I was wondering what the recommended approach is to filter out some of the fields that are sent to Elasticsearch from being stored and indexed.
I want to filter out some fields so that they do not get indexed in Elasticsearch. You may ask why I am sending them to Elasticsearch in the first place. Unfortunately, the data is sent via another application that doesn't offer any filtering mechanism, so filtering has to be addressed at indexing time. Here is what we have done, but I am not sure what the consequences of these steps would be:
1- Disable dynamic mapping ("dynamic": "false" ) in ES templates.
2- Including only the required fields in _source and excluding the rest.
According to the ES website, some ES functionality will be disabled by disabling the _source field. Given that I don't need the filtered fields at all, I was wondering whether the mentioned solution would break anything for the remaining fields or not.
There are a few mapping parameters that allow you to do what you want:
index: true/false: if true, the field value is indexed so that it can be searched later on (default: true)
store: true/false: if true, the field value is stored in addition to being indexed. Usually the field values are available in the _source already, but you can choose not to store the _source and instead store specific field values themselves (default: false)
enabled: true/false: only for the mapping as a whole or for object fields; you can decide to keep the value in _source only, without parsing or indexing it
So you can use any combination of the above parameters if you don't want to modify the source documents and simply let ES do it for you.
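A minimal sketch of how these parameters can be combined, together with the _source filtering mentioned in the question (the index and field names are hypothetical):

PUT my-index
{
  "mappings": {
    "_source": {
      "excludes": [ "debug_payload" ]
    },
    "properties": {
      "internal_id":  { "type": "keyword", "index": false },
      "raw_metadata": { "type": "object",  "enabled": false }
    }
  }
}

Here internal_id stays in _source but cannot be searched, raw_metadata stays in _source but is neither parsed nor indexed, and debug_payload is pruned from the stored _source (note that _source excludes does not prevent a field from being indexed if it has a mapping).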

How to change field datatypes in Elasticsearch using the Mapping API or reindexing

How can I change a field's datatype using the Mapping API or by reindexing in Elasticsearch?
I am new to Elasticsearch but faced a similar problem. According to my understanding, once a mapping has been decided for an index, it cannot be changed.
Please check the following link:
https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping-intro.html
"We can update a mapping to add a new field, but we can’t change an existing field from analyzed to not_analyzed."
But what you can do is add a new field to the mapping. So you can convert the data to the new datatype and save it again under a new field.
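A common alternative, sketched here with hypothetical index and field names (using the 7.x+ typeless mapping syntax), is to create a new index with the desired mapping and reindex the existing data into it:

PUT my-index-v2
{
  "mappings": {
    "properties": {
      "price": { "type": "double" }
    }
  }
}

POST _reindex
{
  "source": { "index": "my-index" },
  "dest":   { "index": "my-index-v2" }
}

After the reindex completes you can point your application (or an index alias) at my-index-v2 instead of the old index.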
