Elasticsearch: reindexing an index

I have an index with a few fields of keyword type. I now need these fields to be text instead.
Going through the documentation, it seems this is not possible.
The documentation instead says to create a new index and reindex the documents from the old index into it.
Can I keep the new index name the same as the old one? Won't that cause issues during the reindexing process?

No, you need to reindex into an index with a different name. One thing you could do is: (1) reindex to e.g. original_index_name_v2, (2) delete the original index, and (3) create an index alias named original_index_name that catches original_index_name_* indices (an alias cannot share its name with an existing index, hence deleting first). This way, the next time you need to change the mapping, you don't have to change the index name; you just keep querying the same alias.
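A minimal sketch of those steps, assuming Elasticsearch 7+ on localhost:9200, a hypothetical field named my_field, and Python with the requests library (URL and names are placeholders to adapt):

    import requests

    ES = "http://localhost:9200"  # assumed local cluster

    # (1) Create the new index with the corrected mapping, then copy the data over.
    requests.put(f"{ES}/original_index_name_v2", json={
        "mappings": {"properties": {"my_field": {"type": "text"}}}  # hypothetical field
    })
    requests.post(f"{ES}/_reindex?wait_for_completion=true", json={
        "source": {"index": "original_index_name"},
        "dest": {"index": "original_index_name_v2"},
    })

    # (2) Delete the old index first: an alias cannot share a name with an existing index.
    requests.delete(f"{ES}/original_index_name")

    # (3) Point an alias with the old name at the versioned indices, so existing
    # queries keep working unchanged. The wildcard expands against the indices
    # that exist right now, so re-run this (or switch the alias) after future reindexes.
    requests.post(f"{ES}/_aliases", json={
        "actions": [
            {"add": {"index": "original_index_name_*", "alias": "original_index_name"}}
        ]
    })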

Related

Use Elasticsearch Reindex API effectively

I am working on a task of reindexing my Elasticsearch indices whenever a change happens. There are two ways I can find to implement this, but they look the same to me unless I am missing something.
I am getting data into my Elasticsearch service from the Postgres database of service B, which exposes a paginated endpoint.
Approach 1:
Create an alias which points to our existing index.
When a reindex is triggered, create a new index; once the reindexing is complete, point the alias, which was pointing to the old index, to the newly created index.
Delete the old index.
Approach 2:
Create a new index.
Use the reindex API to copy the data from the old index to the new index, which will apply the new changes to the old documents.
To me, both of these look the same. The disadvantage of approach 2 seems to be that it creates a new index name, so we will have to change the index name when querying.
Also, considering that my reindexing operation would not be a frequent task, and that I am reading the data from a paginated endpoint and then creating the indices again, Approach 1 seems to make more sense to me.
In approach 1 you are using an alias; in approach 2 you are not.
Both would be the same if you added an alias to approach 2 as step 3, and a step 4 to delete the old index.
Since you only need to do this infrequently, either approach works.
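A sketch of that combined flow, assuming Elasticsearch 7+ on localhost:9200 and hypothetical names my_index_v1, my_index_v2 and my_alias; the remove/add pair runs in a single _aliases call, so the switch is atomic and queries never see a moment without the alias:

    import requests

    ES = "http://localhost:9200"                                # assumed local cluster
    OLD, NEW, ALIAS = "my_index_v1", "my_index_v2", "my_alias"  # hypothetical names

    # Copy everything from the old index into the freshly created new one.
    requests.post(f"{ES}/_reindex?wait_for_completion=true", json={
        "source": {"index": OLD},
        "dest": {"index": NEW},
    })

    # Step 3: atomically re-point the alias from the old index to the new one.
    requests.post(f"{ES}/_aliases", json={
        "actions": [
            {"remove": {"index": OLD, "alias": ALIAS}},
            {"add": {"index": NEW, "alias": ALIAS}},
        ]
    })

    # Step 4: drop the old index.
    requests.delete(f"{ES}/{OLD}")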

Reindex all of ElasticSearch with Curator?

Is there a recipe out there to reindex all Elasticsearch indices with Curator?
I see that it can reindex a set of indices into one (the daily-to-monthly use case); however, I don't see anything that suggests it could easily apply a new mapping to every index.
My guess is that I'll need to write a wrapper script around Curator to grab the index names and feed them into Curator.
I don't know if I understood you correctly, as you mentioned both reindexing and mapping changes...
If you want to set or update a mapping across a collection of indices, and you know the indices to update by name (or pattern), you can apply the same mapping or mapping change to all of them at once with https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html#_multi_index_2
For reindexing, there is no way to specify multiple source/target pairs at once, although you can split one index into many. But as you suggested, you can use subsequent calls to the reindex API.
By the way: the reindex API copies neither the settings nor the mappings from the source into the destination index. You need to handle that yourself, perhaps using https://www.elastic.co/guide/en/elasticsearch/reference/6.4/indices-templates.html
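A sketch of both points, assuming Elasticsearch 7+ on localhost:9200 and a hypothetical logs-* index pattern: the mapping change goes to all matching indices in one call, while reindexing is one source/destination pair per call, so a small wrapper loops over the index names (the same loop could instead feed names into Curator):

    import requests

    ES = "http://localhost:9200"   # assumed local cluster

    # Apply the same mapping change to every index matching the pattern in one
    # call (the multi-index form of the put-mapping API linked above).
    requests.put(f"{ES}/logs-*/_mapping", json={
        "properties": {"message": {"type": "text"}}   # hypothetical field change
    })

    # Reindexing takes one source/destination pair per call, so loop over the
    # index names and issue one _reindex request each.
    rows = requests.get(f"{ES}/_cat/indices/logs-*?h=index&format=json").json()
    for row in rows:
        src = row["index"]
        # The destination must already exist with the new mapping/settings,
        # e.g. created from an index template, since _reindex copies documents only.
        requests.post(f"{ES}/_reindex?wait_for_completion=true", json={
            "source": {"index": src},
            "dest": {"index": f"{src}-reindexed"},
        })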

Updating existing documents in ElasticSearch (ES) while using rollover API

I have a data source which will create a high number of entries that I'm planning to store in Elasticsearch.
The source creates two entries for the same document in Elasticsearch:
the 'init' part, which records the init time and other details under a random key in ES
the 'finish' part, which contains the main data and updates (merges into) the initially created document in ES under the init's random key.
I will need to use time-based indices in Elasticsearch, with an alias pointing to the actual index, using the rollover API.
For updates I'll use the update API to merge init and finish.
Question: if the init document with the random key is not in the current index (but in an older one that has already rolled over), would updating it using its key succeed? If not, what is the best practice to perform the update?
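For reference, the planned merge is just a partial update by id through the write alias. A minimal sketch, assuming Elasticsearch 7+ on localhost:9200, a hypothetical rollover alias events-write (with is_write_index set on the current index), and the random key used as the document _id:

    import requests
    import uuid

    ES = "http://localhost:9200"   # assumed local cluster
    ALIAS = "events-write"         # hypothetical rollover write alias

    # 'init' part: index a stub document under a random key.
    key = uuid.uuid4().hex
    requests.put(f"{ES}/{ALIAS}/_doc/{key}", json={
        "init_time": "2024-01-01T00:00:00Z",   # placeholder details
    })

    # 'finish' part: merge the main data into the same document via the update API.
    requests.post(f"{ES}/{ALIAS}/_update/{key}", json={
        "doc": {"status": "done", "payload": {"value": 42}}   # placeholder data
    })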
After some quietness I've set out to test it.
Short answer: after the index is rolled over under an alias, an update operation using the alias refers to the new index only, so it will create the document in the new index, resulting in two separate documents.
One way of solving it is to perform a search over the last two (or more, if needed) indices and figure out which concrete (non-alias) index name to use for the update.
Another solution, which I prefer, is to avoid rollover and instead calculate the index name from the required date field of the document, creating the new index from the application and using an index template to define the mapping. This way, event sourcing and replaying the documents in order will yield the same indices.
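A sketch of the first workaround, assuming Elasticsearch 7+ on localhost:9200, a hypothetical read alias events spanning the rolled-over indices, and the init document's random key used as its _id: search through the alias to discover the concrete index, then update the document there instead of via the alias:

    import requests

    ES = "http://localhost:9200"   # assumed local cluster
    ALIAS = "events"               # hypothetical read alias over all rolled-over indices
    key = "abc123"                 # the init document's random key / _id

    # Find which concrete index currently holds the init document.
    hits = requests.get(f"{ES}/{ALIAS}/_search", json={
        "query": {"ids": {"values": [key]}},
        "_source": False,
    }).json()["hits"]["hits"]

    if hits:
        concrete_index = hits[0]["_index"]
        # Merge the 'finish' data into the document in its real index, not via the alias.
        requests.post(f"{ES}/{concrete_index}/_update/{key}", json={
            "doc": {"status": "done"}   # placeholder 'finish' data
        })
    else:
        pass   # no init document yet: retry, or index with an upsert, as appropriate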

Elasticsearch: Reindex aliases (copy them)

I need to create a new index and reindex the documents from the first index into the new one.
Using the Reindex API, I've been able to "copy" all documents to the destination index.
However, I'd also like to know whether I can "copy" the aliases from the first index to the new one.
Any ideas?
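The reindex API itself copies documents only, not aliases. One possible way (a sketch, assuming Elasticsearch 7+ on localhost:9200 and hypothetical index names index_v1 and index_v2) is to read the source index's aliases and re-attach them to the destination in a single atomic _aliases call:

    import requests

    ES = "http://localhost:9200"          # assumed local cluster
    SRC, DEST = "index_v1", "index_v2"    # hypothetical source/destination indices

    # Fetch the aliases currently attached to the source index.
    aliases = requests.get(f"{ES}/{SRC}/_alias").json()[SRC]["aliases"]

    # Attach each alias (including any filter/routing config) to the destination,
    # and remove it from the source in the same atomic call.
    actions = []
    for name, config in aliases.items():
        actions.append({"add": {"index": DEST, "alias": name, **config}})
        actions.append({"remove": {"index": SRC, "alias": name}})

    if actions:
        requests.post(f"{ES}/_aliases", json={"actions": actions})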

Elasticsearch Reindexing while updating documents?

What if I've changed the mapping for my index and want to reindex?
I'm currently using the Java API, which does not yet have the reindex functionality, so using bulk would solve my problem. The solution would look something like this
(ref: How to reindex in ElasticSearch via Java API)
Long time ago
create index MY_INDEX_1
create mapping for MY_INDEX_1
create alias MY_INDEX_1 -> MY_INDEX
create documents in MY_INDEX
Time to reindex!
create index MY_INDEX_2
create mapping for MY_INDEX_2
scroll search + bulk all documents from MY_INDEX_1 to MY_INDEX_2 (a sketch of this step follows the list below)
Renaming and deletion of old index
create alias MY_INDEX_2 -> MY_INDEX
delete alias MY_INDEX_1 -> MY_INDEX
delete index MY_INDEX_1
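A minimal sketch of the scroll-and-bulk step referenced above, in Python with requests against an assumed localhost:9200 cluster (the Java API version follows the same shape); index names are lowercased because Elasticsearch does not allow uppercase index names, and batch size and scroll timeout are arbitrary:

    import json
    import requests

    ES = "http://localhost:9200"            # assumed local cluster
    SRC, DEST = "my_index_1", "my_index_2"  # lowercased placeholders for MY_INDEX_1/2

    # Open a scroll over the source index.
    resp = requests.post(f"{ES}/{SRC}/_search?scroll=2m", json={
        "size": 1000,
        "query": {"match_all": {}},
    }).json()
    scroll_id = resp["_scroll_id"]
    hits = resp["hits"]["hits"]

    while hits:
        # Bulk-index this batch into the destination, preserving document ids.
        lines = []
        for hit in hits:
            lines.append(json.dumps({"index": {"_index": DEST, "_id": hit["_id"]}}))
            lines.append(json.dumps(hit["_source"]))
        requests.post(f"{ES}/_bulk", data="\n".join(lines) + "\n",
                      headers={"Content-Type": "application/x-ndjson"})

        # Fetch the next batch.
        resp = requests.post(f"{ES}/_search/scroll", json={
            "scroll": "2m", "scroll_id": scroll_id,
        }).json()
        scroll_id = resp["_scroll_id"]
        hits = resp["hits"]["hits"]

    # Clean up the scroll context.
    requests.delete(f"{ES}/_search/scroll", json={"scroll_id": scroll_id})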
But what happens if, while reindexing all documents, a document that was reindexed at the beginning is updated by a user?
Or if the same thing happens between reindexing and renaming the aliases?
Possible solutions?
One way would be to use external versioning, so that a document with a higher version is not overwritten.
Or could it be solved in another way?
Or, between renaming the aliases and deleting MY_INDEX_1, reindex all documents that have been indexed since the first reindexing? But then it could still be the case that a document was updated between renaming the aliases and the second reindexing.
Or should we lock while reindexing? Seems like a bad solution...
I think this is your real question:
But what happens if, while reindexing all documents, a document that was reindexed at the beginning is updated by a user? Or if the same thing happens between reindexing and renaming the aliases?
I just asked a question that is very close, but it still has parts that need to be resolved separately. However, my research allows me to answer this question; see that question for details and references.
To answer your question: you create a second alias just before reindexing. I call this a duplicate_write_alias, and if your application sees this second alias, it writes first to the old and then to the new index via the two aliases (the order is important to avoid a potential race). When the indexing is done, your indexing process deletes this duplicate_write_alias and moves your MY_INDEX alias to the new MY_INDEX_2, as noted above. Do the alias switch in one atomic command.
As I noted in my question, you still have to deal with potential 'index does not exist' errors because of a remaining race between your application checking for the existence of the alias and the alias being deleted. I'm hoping there's a better answer than 'always write twice and ignore errors' or 'check and hope for the best'...
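A sketch of that alias dance, assuming Elasticsearch 7+ on localhost:9200 and lowercased placeholder names (Elasticsearch rejects uppercase index names); the duplicate_write_alias is added before the copy starts, and the final switch runs as a single atomic _aliases call:

    import requests

    ES = "http://localhost:9200"                              # assumed local cluster
    OLD, NEW, ALIAS = "my_index_1", "my_index_2", "my_index"  # lowercased placeholders

    # Before reindexing: add the duplicate_write_alias so the application starts
    # writing to both indices (old first, then new) while the copy runs.
    requests.post(f"{ES}/_aliases", json={
        "actions": [{"add": {"index": NEW, "alias": "duplicate_write_alias"}}]
    })

    # ... scroll + bulk (or _reindex) from OLD to NEW runs here ...

    # After reindexing: one atomic _aliases call drops the duplicate_write_alias
    # and moves the main alias from the old index to the new one, so there is
    # never a moment in which the alias is missing.
    requests.post(f"{ES}/_aliases", json={
        "actions": [
            {"remove": {"index": NEW, "alias": "duplicate_write_alias"}},
            {"remove": {"index": OLD, "alias": ALIAS}},
            {"add": {"index": NEW, "alias": ALIAS}},
        ]
    })

    # The application's per-write check for the duplicate alias looks like this:
    dual_write = requests.head(f"{ES}/_alias/duplicate_write_alias").status_code == 200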
I think there is also another (uglier) way:
You can disable write operations on the source index while reindexing. This makes the write APIs temporarily unavailable, but in exchange you don't have to:
Maintain a second storage to hold the truth
Deal with inconsistency
Flag documents for delete which should be deleted after migration
You can use the Elasticsearch storage engine to create snapshots between indices
You can signal users of your API to send their changes again later (when the indexing is done)
Downsides:
You have downtime, at least for write operations
You need more logic to handle errors if the index is not set back to allow writes again (automatic recovery, etc.)
Holding more than one index uses more storage space.
For more information look here:
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/index-modules.html
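A sketch of the write block, assuming Elasticsearch 7+ on localhost:9200 and hypothetical index names; the index.blocks.write setting keeps the source readable but rejects document writes while the copy runs, and is lifted again afterwards:

    import requests

    ES = "http://localhost:9200"            # assumed local cluster
    SRC, DEST = "my_index_1", "my_index_2"  # hypothetical source/destination

    # Block writes on the source; reads (and metadata changes) still work.
    requests.put(f"{ES}/{SRC}/_settings", json={"index.blocks.write": True})

    try:
        # Copy the now write-frozen data into the new index.
        requests.post(f"{ES}/_reindex?wait_for_completion=true", json={
            "source": {"index": SRC},
            "dest": {"index": DEST},
        })
    finally:
        # Re-enable writes even if the reindex failed, so the old index stays usable
        # (this is the manual step the 'automatic recovery' downside above refers to).
        requests.put(f"{ES}/{SRC}/_settings", json={"index.blocks.write": False})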
