Is there a way to get a newly added field in one of the indices included in the index pattern? - elasticsearch

I have an alias set up for rolling indices in Elasticsearch. Let's call the alias "alias" for now. It points to a number of indices and rolls over after every 100 GB. Now, let's say the number of fields in the previous indices associated with the alias is 100, and I've added one more field while writing to the latest index, so the number of fields becomes 101.
I've set up an index pattern by the name of "alias" and I can see all the indices listed via that index pattern, but I am unable to visualize the 101st field I just added in the recent indices. Is there a way to do it?
Please let me know if more details are needed regarding the same.

Hopefully you added the new field to the write index that your alias is pointing to. An alias can have only one write index but can have many read indices, and if you added the new field to a read index of your alias, you will not be able to visualize it using your alias.
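For illustration, here is a minimal sketch of checking which backing index is the write index and adding the field mapping there explicitly (the backing index name alias-000007 is hypothetical, and the syntax assumes Elasticsearch 7.x, where the rollover alias marks its write index with is_write_index):
GET _alias/alias
PUT alias-000007/_mapping
{
  "properties": {
    "new_field": { "type": "keyword" }
  }
}
After the mapping change (or after documents with the new field have been written to the write index), refresh the index pattern in Kibana (Management > Index patterns > Refresh) so the new field shows up.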

Related

Can you have an index pattern with a field with multiple field types?

Currently I have an Elasticsearch index that rolls over periodically. We have an index mapping applied to a certain index pattern. We want to update the field type for subsequent indices that get rolled over.
If we change the mapping of a field from a string type to a number type for the new rolled-over indices, what happens in the index pattern when it is refreshed?
Would the index pattern have the field as one type over the other?
There is only one version of an index pattern at any given time. When you update it (i.e. change some mapping type), all the existing indices matching that index pattern remain unchanged. All future indices created out of that index pattern will get the modification (i.e. new field mapping type).
What you need to be aware of is that you'll end up with (old) indices containing documents with a field having the old mapping type and (new) indices containing documents with a field having the new mapping type. Depending on the change you make, some of your queries running on old and new indices might not run correctly afterwards. Make sure that your queries still work with that mapping change.
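As a sketch, one common way to get the new type into the subsequent rolled-over indices is an index template whose pattern matches them (the template name, index pattern, field name, and target type below are hypothetical; legacy _template API, Elasticsearch 7.x syntax):
PUT _template/my-rollover-template
{
  "index_patterns": ["my-rollover-*"],
  "mappings": {
    "properties": {
      "my_field": { "type": "long" }
    }
  }
}
While old indices (string-mapped) and new indices (number-mapped) coexist under the index pattern, a refreshed Kibana index pattern will typically show that field with the type 'conflict'.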

Does updating Elasticsearch indices require updating the Kibana index pattern?

I am using Elasticsearch, and Kibana as a plugin to view the data in the indices. I am using Kibana's Dev Tools to send commands for adding/deleting/updating indices, etc.
I want to add a field to a certain text property so it will have a keyword sub-field, to be able to both run full-text searches and aggregate using this property.
1) Does a change like that mean I need to update Kibana's index pattern as well?
2) I have read the Elasticsearch docs on the Put Mapping API and know how to use it to update the indices themselves, but I don't know how to update the index patterns. I read that the same API should be used to update it, but I don't know how to see the index pattern's original mapping in order to update it.
Yes, if you change the index mapping in ES, then you need to go in Kibana and refresh the related index patterns.
Right now, you need to go inside Kibana (Management > Index patterns), select the index pattern, and press the "Refresh" button at the top right of the window in order to pick up the mapping changes.
Also note that if you updated some text fields in order to have a keyword sub-field, you'll also need to call the _update_by_query API on your index in order to reindex the changed field in all your documents; a sketch of both steps follows.
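A minimal sketch of both steps (my-index and my_text_field are hypothetical names; the syntax assumes Elasticsearch 7.x):
PUT my-index/_mapping
{
  "properties": {
    "my_text_field": {
      "type": "text",
      "fields": {
        "keyword": { "type": "keyword", "ignore_above": 256 }
      }
    }
  }
}
POST my-index/_update_by_query?conflicts=proceed
The empty _update_by_query simply re-indexes every document in place so the new my_text_field.keyword sub-field gets populated; afterwards, refresh the index pattern in Kibana as described above.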

What is the use of maintaining two aliases for a single Elasticsearch index?

I have been exploring Elasticsearch lately.
I have been going through aliases. I see ES provides an API to add aliases to indices, like the one below:
{ "actions" : [{ "add" : { "indices" : ["test1", "test2"], "alias" : "alias1" } }] }
Refer: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html#indices-aliases
I'm wondering what is the use case of this.
Won't the queries on aliases get split if an alias points to multiple indices?
I have tried getting more info, but failed to do so, as everywhere it's explained how to achieve this but not the use case.
Directing me to a resource where I could get more info would also help.
A possible use case is when your application has to switch from an old index to a new index with zero downtime.
Let's say you want to reindex an index for some reason. If you're not using an alias with your index, then you need to update your application to use the new index name.
How is this helpful?
Assume that your application is using the alias instead of an index name.
Let's create an index:
PUT /my_index
Create its alias:
PUT /my_index/_alias/my_index_alias
Now you've decided to reindex your index (maybe you want to change the existing mapping).
Once documents have been reindexed correctly, you can switch your alias to point to the new index.
Note: You need to remove the alias from the old index at the same time as you add it to the new index. You can do this atomically using the _aliases endpoint, as shown below.
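A sketch of that atomic swap, assuming the new index is named my_index_v2 (hypothetical name):
POST /_aliases
{
  "actions": [
    { "remove": { "index": "my_index",    "alias": "my_index_alias" } },
    { "add":    { "index": "my_index_v2", "alias": "my_index_alias" } }
  ]
}
Both actions are applied in a single cluster state update, so there is never a moment when the alias points to nothing.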
A good read: elastic
As for your question about the use of maintaining two aliases for a single index:
Create "views" on a subset of the documents in an index.
Using multiple indices having the same alias:
Group multiple indices under the same name, which is helpful if you want to perform a single query on multiple indices at the same time.
But you can't insert/index data through the alias using this strategy.
Let's say that you have two types of events, eventA and eventB. You want to "partition" them by time, so you use an alias to map multiple indices (e.g. eventA-20220920) to one alias ('eventA' in this case). And you want to make one alias for all the event types, so you give all the eventA-* and eventB-* indices another alias, 'event'.
That way, when you add a third type of event (eventC), you can just add its indices to the 'event' alias and don't have to change your queries; a sketch of this setup follows.
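A sketch of that setup for one daily index, using the names from the example above:
POST /_aliases
{
  "actions": [
    { "add": { "index": "eventA-20220920", "alias": "eventA" } },
    { "add": { "index": "eventA-20220920", "alias": "event" } }
  ]
}
Queries against 'eventA' only see the eventA-* indices, while queries against 'event' span every event type.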

How to detect when a new unique term has been inserted into a specific field in a specific index in Elasticsearch?

I currently have a cron job that is looking at a field called "ex.set" and performs these tasks:
For every index, run a terms aggregation on the field "ex.set"
For every index, get every existing alias
For every unique term appearing in an index in "ex.set", if it does not have an existing alias, create a filtered alias
The job runs every ten minutes but most of the time does not find anything. Is there a way, or a plugin (compatible with 2.3.x), to automatically detect when a new unique term has been inserted into a specific field in a specific index? And then, if there is a new unique term, trigger the creation of a filtered alias on that index? Thank you in advance for any ideas or solutions.
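For reference, the manual equivalent of those three steps looks roughly like this (ex-index, set-valueA, and valueA are hypothetical names):
GET ex-index/_search
{
  "size": 0,
  "aggs": {
    "set_values": { "terms": { "field": "ex.set", "size": 1000 } }
  }
}
GET ex-index/_alias
POST /_aliases
{
  "actions": [
    { "add": { "index": "ex-index", "alias": "set-valueA", "filter": { "term": { "ex.set": "valueA" } } } }
  ]
}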
Yes, I believe you can use the Watcher plugin to do this. It comes with a default license valid for 30 days, after which some features are disabled, and you'd need a valid license to have it fully working again.
The basic idea is that your first two steps can be put in a chain input as search inputs which will collect the data.
Then, the additional step that compares the existing aliases with the terms from that aggregation can be implemented as a script condition, where you do your magic of comparing the two sets. If your condition establishes that a new alias needs to be created, then in the action part of the watch you can use a webhook action to call the create-alias REST endpoint on the index; a rough skeleton of such a watch is sketched below.
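Below is a rough skeleton of such a watch, assuming Watcher 2.3 with inline (Groovy) scripting enabled; the index name ex-index is hypothetical, and the condition script and webhook body are simplified placeholders rather than production-ready logic (a real watch would add a transform to work out which filtered aliases are missing and template the webhook body from it):
PUT _watcher/watch/new_set_terms
{
  "trigger": { "schedule": { "interval": "10m" } },
  "input": {
    "chain": {
      "inputs": [
        { "set_terms": {
            "search": {
              "request": {
                "indices": [ "ex-index" ],
                "body": {
                  "size": 0,
                  "aggs": { "sets": { "terms": { "field": "ex.set", "size": 1000 } } }
                }
              }
            }
        } },
        { "existing_aliases": {
            "http": {
              "request": { "host": "localhost", "port": 9200, "path": "/ex-index/_alias" }
            }
        } }
      ]
    }
  },
  "condition": {
    "script": {
      "inline": "def terms = ctx.payload.set_terms.aggregations.sets.buckets.collect { it.key }; def aliases = ctx.payload.existing_aliases['ex-index'].aliases.keySet(); return terms.any { !aliases.contains(it) }"
    }
  },
  "actions": {
    "create_alias": {
      "webhook": {
        "method": "POST",
        "host": "localhost",
        "port": 9200,
        "path": "/_aliases",
        "body": "{ \"actions\": [ { \"add\": { \"index\": \"ex-index\", \"alias\": \"set-NEW_TERM\", \"filter\": { \"term\": { \"ex.set\": \"NEW_TERM\" } } } } ] }"
      }
    }
  }
}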

How to find fields with mapping conflicts

My index settings in Kibana tell me that I have fields with mapping conflicts in my logstash-* index patterns.
What is the easiest way to find out which fields have a conflicting mapping and/or in which indices the conflict occurs?
As of at least Kibana 5.2, you can type "conflict" into the Filter field, which will filter all fields down to only those which have a conflict. At the far right there is a column named "controls", and for each field it has a button with a pencil icon. Clicking that will tell you which indices have which mapping.
Screenshots (not reproduced here): the fields filtered to only those with conflicts, and the indices in which the field mapping conflicts.
You can easily find how fields are mapped using the mapping API in Kibana.
If you know you have a mapping conflict, I will assume you know the name of the field that has the conflict. Conflicted fields are listed under Management/Index Patterns/index_pattern.
If you have indices that are created daily, such as production-2020.06.16, you can search across all the indices with production*.
Go to Dev Tools and enter this query, changing the index pattern (production*) and conflictedFieldname to suit your needs.
GET production*/_mapping/field/conflictedFieldname
This will pull all indices that match the production* pattern and will list the mapping for conflictedFieldname for each index. Scroll through and see which one is not like the other one.
You can also check out the Elasticsearch documentation: Get Field Mapping API.
The reason you're getting a conflict is that the first value that goes into the index is used by Elasticsearch to make its best guess as to what data type the field should be. You can ensure it is always mapped to the same type by placing a template for the index pattern you are concerned with.
Elasticsearch documentation: Put Index Template
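A hedged sketch of such a template for the logstash-* pattern (the template name and the choice of keyword for the field are hypothetical; legacy _template API, Elasticsearch 7.x syntax):
PUT _template/logstash-field-types
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "conflictedFieldname": { "type": "keyword" }
    }
  }
}
New logstash-* indices will then always map conflictedFieldname the same way; already-created indices keep their old mapping until they are reindexed or rotated out.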
In Kibana 5.5.2, you can click on the dropdown to the right of the Filter search box and select "conflict". This is on the Index Patterns page.
It should be easy to spot those in the list of fields when defining the pattern.
Since I couldn't locate the mapping conflict in the GUI, I went down the hard path: analysed my config for missing/conflicting field types, found the offender, and reindexed my data.
If you click the type column on the index patterns page where the warning is displayed, it should sort the fields by type. Conflicted fields will have the type 'conflict'.
