ElasticSearch - what is the difference between an index template and an index pattern?

I have read an explanation to my question here:
https://discuss.elastic.co/t/whats-the-differece-between-index-pattern-and-index-template/54948
However, I still don't understand the difference. When defining an index PATTERN, does it not affect index creation at all? Also, what happens if I create an index but it doesn't have a corresponding index pattern? How can I see the mapping used for an index pattern so I can know how to use the Mapping API to update it?
And on a side note, the docs say you manage index patterns by clicking the "Settings" and then "Indices" tab. I'm looking at Kibana and I don't see any Settings tab; I can only view the index patterns through the Management tab.

An index template is an ES feature that applies predefined settings and mappings whenever a new index whose name matches a given pattern is created. For instance, let's say we create the following index template:
PUT _template/template_1
{
  "index_patterns": ["foo*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    ...
  }
}
As you can see, as soon as we index a document into an index named (e.g.) foo-44 and that index doesn't exist yet, ES will use that template (settings + mappings) to create the foo-44 index automatically.
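A minimal sketch of that auto-creation (the _doc type name and the message field are just illustrative here, not part of the original template):
# Index a document into a not-yet-existing index whose name matches foo*
PUT foo-44/_doc/1
{
  "message": "hello"
}

# The index was created automatically with the template's settings and mappings
GET foo-44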
You can update an index template at any time by simply PUTting a new settings/mappings definition like above.
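If you want to double-check what the template currently contains before PUTting a new version, you can retrieve it first:
# Retrieve the current definition of template_1
GET _template/template_1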
An index pattern (not to be confused with the index_patterns property you saw above; those are two totally different things) is a Kibana feature that tells Kibana what makes up an index (all the fields, their types, etc.). Nothing can happen in Kibana without creating index patterns, which you can do in Management > Index Patterns.
Creating an index in ES will not create any index pattern in Kibana. Similarly, creating an index pattern in Kibana will not create any index in ES.
The reason Kibana needs an index pattern is that it needs to store a different kind of information than what is available in an index mapping. For instance, let's say you create an index with the following mapping:
PUT my_index
{
  "mappings": {
    "doc": {
      "properties": {
        "timestamp": {
          "type": "date"
        },
        "name": {
          "type": "text"
        }
      }
    }
  }
}
Then the corresponding index pattern that you will create in Kibana will have the following content:
GET .kibana/doc/index-pattern:16a98050-a53f-11e8-82ab-af0d48c6ddd8
{
  "type": "index-pattern",
  "updated_at": "2018-08-21T12:38:22.509Z",
  "index-pattern": {
    "title": "my_index*",
    "timeFieldName": "timestamp",
    "fields": """[{"name":"_id","type":"string","count":0,"scripted":false,"searchable":true,"aggregatable":true,"readFromDocValues":false},{"name":"_index","type":"string","count":0,"scripted":false,"searchable":true,"aggregatable":true,"readFromDocValues":false},{"name":"_score","type":"number","count":0,"scripted":false,"searchable":false,"aggregatable":false,"readFromDocValues":false},{"name":"_source","type":"_source","count":0,"scripted":false,"searchable":false,"aggregatable":false,"readFromDocValues":false},{"name":"_type","type":"string","count":0,"scripted":false,"searchable":true,"aggregatable":true,"readFromDocValues":false},{"name":"name","type":"string","count":0,"scripted":false,"searchable":true,"aggregatable":false,"readFromDocValues":false},{"name":"timestamp","type":"date","count":0,"scripted":false,"searchable":true,"aggregatable":true,"readFromDocValues":true}]"""
  }
}
As you can see, Kibana stores the timestamp field and the title of the index pattern (which can span several indexes). It also stores various properties for each field you have defined; for instance, for the name field, the index pattern contains the following information that Kibana needs to know:
{
  "name": "name",
  "type": "string",
  "count": 0,
  "scripted": false,
  "searchable": true,
  "aggregatable": false,
  "readFromDocValues": false
},
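Regarding the question about seeing the mapping behind an index: the index pattern above is a Kibana document, but the actual mapping lives in ES and can be inspected directly; that is the mapping you would then modify with the Mapping API. A minimal sketch with the my_index example above:
# Inspect the mapping ES actually uses for my_index
GET my_index/_mapping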

Related

Create a new index with field property marked as 'not-analyzed' in Kibana Dev Tools

How can I create a new Elasticsearch index in the Kibana Dev Tools and define one of its text field properties as not-analyzed?
I know the command to create a new index is:
PUT /myNewIndexName
but I want one of the properties in the document to be not-analyzed
The analyzed value is from versions earlier than 7.x; the current approach is to use a keyword field - https://www.elastic.co/guide/en/elasticsearch/reference/7.15/keyword.html
PUT myNewIndexName
{
  "mappings": {
    "properties": {
      "field_name": {
        "type": "keyword"
      }
    }
  }
}
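To illustrate the exact-match behaviour you get from a keyword field, here is a small usage sketch (the document ID and sample value are illustrative):
# Index a sample document
PUT myNewIndexName/_doc/1
{
  "field_name": "Foo-Bar baz"
}

# A term query only matches the whole, unanalyzed value
GET myNewIndexName/_search
{
  "query": {
    "term": {
      "field_name": "Foo-Bar baz"
    }
  }
}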

Specifying Field Types Indexing from Logstash to Elasticsearch

I have successfully ingested data from Logstash into Elasticsearch using the XML filter plugin; however, all the fields end up with the type "text".
Is there a way to manually or automatically specify the correct type?
I found the following technique good for my use:
Logstash can filter the data and convert a field from the default (text) to whatever type you want. The documentation can be found here. The example given in the documentation is:
filter {
  mutate {
    convert => { "fieldname" => "integer" }
  }
}
You add this in the filter section of your /etc/logstash/conf.d/02-... file. The downside of this practice, from my understanding, is that it is generally less recommended to alter data on its way into ES.
After you do this you will probably run into a mapping-conflict problem. If you hit it and your DB is a test DB whose old data you can erase, just DELETE the index so that there is no conflict (for example, if a field that was text until now is suddenly received as a date, old and new data will conflict). If you can't simply erase the old data, read the answer in the linked question.
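For the erase-and-restart case, a minimal sketch (the index name is illustrative; Logstash's default daily indices look like this, as mentioned below):
# Check the mapping of the conflicting index first
GET logstash-2018.04.23/_mapping

# If it is throw-away test data, drop the index so the new field types can take effect
DELETE logstash-2018.04.23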
What you want to do is specify a mapping template.
PUT _template/template_1
{
  "index_patterns": ["te*", "bar*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "type1": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z YYYY"
        }
      }
    }
  }
}
Change the settings to match your needs, and list under properties the fields you want mapped and the types you want them mapped to.
Setting index_patterns is especially important because it tells Elasticsearch which indices to apply this template to. You can set an array of index patterns and use * as a wildcard where appropriate. For example, Logstash's default is to rotate indices by date; they look like logstash-2018.04.23, so your pattern could be logstash-* and any index matching it will receive the template.
If you want to map fields based on a pattern in their names or their detected types, you can use dynamic templates (a minimal sketch follows).
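A minimal dynamic-template sketch, assuming a 7.x-style typeless mapping, that maps every newly detected string field to keyword (the template name and index pattern are illustrative):
PUT _template/strings_as_keywords
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}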
Edit: Adding a little update here, if you want logstash to apply the template for you, here is a link to the settings you'll want to be aware of.

Is it possible to update an existing field in an index through mapping in Elasticsearch?

I've already created an index, and it contains data from my MySQL database. I've got a few fields which are strings in my table but which I need as different types (integer & double) in Elasticsearch.
So I'm aware that I could do it through mapping as follows:
{
  "mappings": {
    "my_type": {
      "properties": {
        "userid": {
          "type": "text",
          "fielddata": true
        },
        "responsecode": {
          "type": "integer"
        },
        "chargeamount": {
          "type": "double"
        }
      }
    }
  }
}
But I've only tried this when creating the index as a new one. What I want to know is: how can I update an existing field (i.e. chargeamount in this scenario) through a PUT mapping?
Is this possible? Any help could be appreciated.
Once a mapping type has been created, you're very constrained in what you can update. According to the official documentation, the only changes you can make to an existing mapping after it's been created are the following, and changing a field's type is not one of them:
In general, the mapping for existing fields cannot be updated. There
are some exceptions to this rule. For instance:
new properties can be added to Object datatype fields.
new multi-fields can be added to existing fields.
doc_values can be disabled, but not enabled.
the ignore_above parameter can be updated.
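For example, the multi-field exception looks roughly like this, assuming the existing chargeamount field was dynamically mapped as text (a sketch only: the index name my_index and the sub-field name as_double are illustrative, the parent type must be repeated unchanged, and only documents indexed or reindexed afterwards will populate the new sub-field):
PUT my_index/_mapping/my_type
{
  "properties": {
    "chargeamount": {
      "type": "text",
      "fields": {
        "as_double": {
          "type": "double"
        }
      }
    }
  }
}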

Unanalyzed fields on Kibana

I need help correcting a Kibana field. When I try to visualize the fields, it shows me the following warning:
Careful! The field selected contains analyzed strings. Analyzed strings are highly unique and can use a lot of memory to visualize. Values such as foo-bar will be broken into foo and bar. See Core Mapping Types for more information on setting this field as not analyzed.
Elasticsearch's default dynamic mapping is to analyze any string field (break the field into tokens; for instance, aaa_bbb_ccc will be broken down into aaa, bbb and ccc).
If you do not want this behavior, you must change the mapping settings before any documents are pushed into the index.
You have two options to do that:
Change the mapping for a particular index using the mapping API, in a static way or a dynamic way (dynamic means the mapping will also be applied to fields that do not yet exist in the index); a minimal sketch of the static variant is shown right after this list
Change the behavior of any index whose name matches a pattern, using the template API
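Here is a minimal sketch of the first (static) option, using the pre-5.x string / not_analyzed syntax this answer is based on (index, type and field names are illustrative; on 5.x+ you would use a keyword field instead):
PUT myindex
{
  "mappings": {
    "mytype": {
      "properties": {
        "hostname": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}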
The second option is shown in this example: a template that changes the mapping for any index whose name starts with the given prefix (here myindciesprefix*), applying not_analyzed to any string field in any type and making sure timestamp is a date (good for cases in which the timestamp is represented as a number of seconds since 1970):
{
  "template": "myindciesprefix*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        },
        {
          "timestamp_field": {
            "match": "timestamp",
            "mapping": {
              "type": "date"
            }
          }
        }
      ]
    }
  }
}
Really, you don't have a problem; it is only an informational message. But if you don't want analyzed fields, you must indicate that a field is not analyzed when you build your index in Elasticsearch.

How to set existing elastic search mapping from index: no to index: analyzed

I am new to Elasticsearch. I want to update the existing mapping under my index. My existing mapping looks like:
"load":{
"mappings": {
"load": {
"properties":{
"customerReferenceNumbers": {
"type": "string",
"index": "no"
}
}
}
}
}
I would like to update this field in my mapping to be analyzed, so that my customerReferenceNumbers field will be available for search.
I am trying to run the following request in the Sense plugin to do so:
PUT /load/load/_mapping
{
  "load": {
    "properties": {
      "customerReferenceNumbers": {
        "type": "string",
        "index": "analyzed"
      }
    }
  }
}
but I am getting the following error with this command:
MergeMappingException[Merge failed with failures {[mapper customerReferenceNumbers] has different index values]
There is existing data associated with these mappings, and I am unable to understand why Elasticsearch is not allowing me to update the mapping from index: no to indexed.
Thanks in advance!!
ElasticSearch doesn't allow this kind of change.
And even if it were possible, you would have to reindex your data anyway for the new mapping to be used, so it is faster to create a new index with the new mapping and reindex your data into it.
If you can't afford any downtime, take a look at the alias feature, which is designed for these use cases.
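A rough sketch of that workflow on versions that have the _reindex API (2.3+; on older ones you would copy documents from the client side). The new index name load_v2 and the alias load_current are illustrative, and the remove action assumes clients were already querying through that alias:
# 1. Create a new index with the desired mapping
PUT load_v2
{
  "mappings": {
    "load": {
      "properties": {
        "customerReferenceNumbers": {
          "type": "string",
          "index": "analyzed"
        }
      }
    }
  }
}

# 2. Copy the existing documents into it
POST _reindex
{
  "source": { "index": "load" },
  "dest": { "index": "load_v2" }
}

# 3. Switch the alias over so clients keep using the same name
POST _aliases
{
  "actions": [
    { "remove": { "index": "load", "alias": "load_current" } },
    { "add": { "index": "load_v2", "alias": "load_current" } }
  ]
}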
This is by design. You cannot change the mapping of an existing field in this way. Read more about this at https://www.elastic.co/blog/changing-mapping-with-zero-downtime and https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html.
