Elasticsearch index - elasticsearch

I am uploading data to Elasticsearch using a batch process. I get data once a day from a third party, and it needs to be uploaded to Elasticsearch.
My question is: can I maintain past, current & future versions of an index in Elasticsearch?
Below is my thinking:
If the batch process succeeds:
1. Upload the data to the future version of the index.
2. Copy the data of the current version of the index to the past version.
3. Copy the future version's data to the current version.
If the batch process fails:
1. Do nothing and continue with the current version of the index.
Can anyone please help me with this?

This is usually done with aliases. E.g.
Alias pointing to yesterday's working index:
working_index -> index_2016_12_01
Create a new index_2016_12_02 and upload the data; if everything is OK, switch the alias (the Aliases API allows transactional changes):
working_index -> index_2016_12_02
Then you can archive, delete, or just leave the old index untouched.
Always run all queries against the alias instead of the real index name.
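A minimal sketch of that atomic switch with curl (index names as above; host and port are assumptions):

curl -XPOST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d '
{
  "actions": [
    { "remove": { "index": "index_2016_12_01", "alias": "working_index" } },
    { "add": { "index": "index_2016_12_02", "alias": "working_index" } }
  ]
}'

Because both actions are applied in a single atomic operation, queries against working_index never see an intermediate state.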

Related

Copy documents to another index on creation in Elasticsearch

We want to keep track of all the changes to a document, so we want to store all the document versions in a separate index.
Is there a way, when a document is added or changed, to send the entire document to another index? Maybe there is a processor for this use case?
As far as I know, Elasticsearch as such supports only version numbers, but there is no way to trace back to a previous version.
You could maintain the version history in a separate Elasticsearch index.
Whenever you update main_index, ensure that you update the version index as well:
POST main_index/_doc/doc_id
POST version_index/_doc/doc_id_version
Maybe you can configure Logstash to do this... not sure.
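A minimal sketch of that double write with curl (version_index, the ids, and the payload are illustrative assumptions):

# write the latest copy to the main index
curl -XPOST 'http://localhost:9200/main_index/_doc/42' -H 'Content-Type: application/json' -d '{"title": "my doc", "status": "updated"}'

# write the same payload to the version index under a versioned id
curl -XPOST 'http://localhost:9200/version_index/_doc/42_3' -H 'Content-Type: application/json' -d '{"doc_id": "42", "version": 3, "title": "my doc", "status": "updated"}'

Embedding the version in the version-index document id keeps every revision addressable, while main_index always holds only the latest copy.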

Elasticsearch Reindexing while updating documents?

What if I've changed the mapping for my index and want to reindex?
I'm currently using the Java API, which does not yet have the reindex functionality, so using bulk would solve my problem. So the solution would look something like this
(ref: How to reindex in ElasticSearch via Java API)
Long time ago
create index MY_INDEX_1
create mapping for MY_INDEX_1
create alias MY_INDEX_1 -> MY_INDEX
create documents in MY_INDEX
Time to reindex!
create index MY_INDEX_2
create mapping for MY_INDEX_2
scroll search + bulk all documents from MY_INDEX_1 to MY_INDEX_2
Renaming and deletion of old index
create alias MY_INDEX_2 -> MY_INDEX
delete alias MY_INDEX_1 -> MY_INDEX
delete index MY_INDEX_1
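For reference, newer Elasticsearch versions expose the copy step directly through the Reindex API; a minimal sketch with curl (host and port are assumptions):

curl -XPOST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -d '
{
  "source": { "index": "MY_INDEX_1" },
  "dest": { "index": "MY_INDEX_2" }
}'

On versions without _reindex, the scroll search + bulk loop above does the same job client-side.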
But what happens if, while reindexing all documents, a document that was reindexed at the beginning is updated by a user?
Or if that happens between reindexing and renaming the aliases?
Possible solutions?
One way would be using external versioning, so that a document with a higher version is not overwritten.
Or could it be solved in another way?
Or, between renaming the aliases and deleting MY_INDEX_1, reindex all documents that have been indexed since the reindexing started? But then it could still be the case that a document was updated between renaming the aliases and the second reindexing.
Or should we lock while reindexing? Seems like a bad solution...
I think this is your real question:
But what happens if, while reindexing all documents, a document that was reindexed at the beginning is updated by a user? Or if that happens between reindexing and renaming the aliases?
I just asked a question that is very close, but still has questions that need to be resolved separately. However, my research allows me to answer this question. See the question for details and references.
To answer your question: you create a second alias just before reindexing. I call this a duplicate_write_alias, and you have your application, if it sees this second alias, write first to the old index and then to the new one via the two aliases (the order is important to avoid a potential race). When the indexing is done, your indexing process deletes this duplicate_write_alias and moves your MY_INDEX alias to the new MY_INDEX_2 as noted above. Do the alias switch in one atomic command.
As I noted in my question, you still have to deal with potential 'index does not exist' errors because of a remaining race between your application's check for the existence of the alias and the alias being deleted. I'm hoping there's a better answer than 'always write twice and ignore errors' or 'check and hope for the best'...
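A minimal sketch of that alias choreography with curl (duplicate_write_alias is the hypothetical name from above; host and port are assumptions):

# before reindexing: point the duplicate write alias at the new index
curl -XPOST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d '
{ "actions": [ { "add": { "index": "MY_INDEX_2", "alias": "duplicate_write_alias" } } ] }'

# when reindexing is done: drop the duplicate alias and move MY_INDEX in one atomic call
curl -XPOST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d '
{
  "actions": [
    { "remove": { "index": "MY_INDEX_2", "alias": "duplicate_write_alias" } },
    { "remove": { "index": "MY_INDEX_1", "alias": "MY_INDEX" } },
    { "add": { "index": "MY_INDEX_2", "alias": "MY_INDEX" } }
  ]
}'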
I think there is also another (more ugly) way:
You can disable write operations on the source index while reindexing. This makes the write APIs temporarily unusable, but you don't have to:
Maintain a second storage to hold the truth
Deal with inconsistency
Flag documents for deletion that should be removed after the migration
You can use the Elasticsearch storage engine to create snapshots between indices
You can signal users of your API to send their changes again later (when the indexing is done)
Downsides:
You have downtime, at least for write operations
You need more logic to handle errors if the index does not get set back to allow-writes mode (automatic recovery etc.)
Holding more than one index uses more storage space.
For more information look here:
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/index-modules.html
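A minimal sketch of that write block with curl, using the index.blocks.write setting documented on the page above (index name, host, and port are assumptions):

# block writes on the source index while reindexing (reads still work)
curl -XPUT 'http://localhost:9200/MY_INDEX_1/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.write": true}'

# ...reindex...

# allow writes again afterwards
curl -XPUT 'http://localhost:9200/MY_INDEX_1/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.write": false}'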

How to update an index/indice in Elasticsearch?

I've already got my index (response_summary) created using Logstash, which puts data into the index from a MySQL database.
My concern here is: how will I be able to update the index manually whenever a new set of records is added to the database, without deleting and recreating the index yet again?
Or is there a way it can be done automatically whenever a DB change happens?
Any help would be appreciated.
No way with ES alone. There were rivers in ES, but they were removed in ES 2.0. The alternative is the Logstash JDBC input plugin, which automatically picks up changes based on a defined schedule.
For doing the same with files, you have the LS file input plugin, which tails files to pick up new changes and also keeps track of where it left off in case LS is restarted.
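A minimal Logstash pipeline sketch for that JDBC polling (the driver path, credentials, table, and updated_at/id columns are assumptions):

input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "secret"
    # poll every minute for rows changed since the last run
    schedule => "* * * * *"
    statement => "SELECT * FROM response_summary WHERE updated_at > :sql_last_value"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "response_summary"
    # reuse the primary key so updated rows overwrite their documents
    document_id => "%{id}"
  }
}

Reusing the primary key as document_id makes repeated runs update existing documents instead of duplicating them.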

Elasticsearch : How to get all indices that ever existed

Is there a way to find out the names of all the indices ever created, even after an index might have been deleted? Does Elasticsearch store such historical info?
Thanks
Using a plugin that keeps an audit trail for all changes that happened in your ES cluster might do the trick.
If you use the changes plugin (or a more recent one), then you can query it for all the changes in all indices using
curl -XGET http://localhost:9200/_changes
and your response will contain all the index names that were at least created. Not sure this plugin works with the latest versions of ES, though.

Update ElasticSearch document while keeping its external version the same?

I would like to update an Elasticsearch document while keeping the document's version the same. I'm using version_type=external, as indicated in the versioning section of the index API documentation. Updating a document with another of the same version is normally prevented, as indicated in that section: "If the value provided is less than or equal to the stored document’s version number, a version conflict will occur and the index operation will fail."
The reason I want to keep the version unaltered is that I do not create a new version of my object (stored in my database) when someone adds new tags to it, but I would like the new tags to show up in my Elasticsearch index. Is this possible with Elasticsearch?
I tried deleting the document and then adding a new document with the same id and version, but that still gives me the following exception:
VersionConflictEngineException[[myindex][2] [mytype][6]: version
conflict, current 1, provided 1]
Just for reference, I'm using PHP Elastica (with the methods $type->deleteDocument($doc); and $type->addDocument($doc);), but this question should apply to Elasticsearch in general.
The time for which Elasticsearch keeps information about deleted documents is controlled by the index.gc_deletes setting. By default this time is 1m. So, theoretically, you can decrease this time to 0s, wait for a second, delete the document, index a new document with the same version, and set index.gc_deletes back to 1m. But at the moment that would work only on master due to a bug. If you are using an older version of Elasticsearch, you will not be able to change index.gc_deletes without closing the index first.
There is a good blog post on the elasticsearch.org website that describes in detail how versions are handled by Elasticsearch.
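A minimal sketch of that sequence with curl (the index, type, id, and version come from the question; the payload is an assumption):

# let deleted documents be garbage-collected immediately
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '{"index.gc_deletes": "0s"}'

# delete the old document, wait a moment, then index the replacement with the same external version
curl -XDELETE 'http://localhost:9200/myindex/mytype/6'
curl -XPUT 'http://localhost:9200/myindex/mytype/6?version=1&version_type=external' -d '{"tags": ["new-tag"]}'

# restore the default grace period
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '{"index.gc_deletes": "1m"}'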
