How to sync data between Elasticsearch clusters?

I want to back up Elasticsearch data in a different physical location.
I first tried putting all Elasticsearch nodes into a single cluster, but whenever a program queries or updates Elasticsearch, large amounts of data travel over the internet. That costs a lot of money in network traffic, and there is network delay.
Is there an easy way to sync data between two Elasticsearch clusters, so that only the changed data goes over the internet?
PS:
I don't care much about data sync delay; less than 1 minute is acceptable.

If you are running a recent version of Elasticsearch (5.0 or later), you need to have (or add) a date field such as updatedAt, and then on the destination cluster run a cron job every minute that issues a Reindex API request like this:
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://sourcehost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "range": {
        "updatedAt": {
          "gte": "2015-01-01 00:00:00"
        }
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
More information on the Reindex API is available here - https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html
If you are using an older Elasticsearch (<5.0), you can use the elasticdump tool (https://github.com/taskrabbit/elasticsearch-dump) to transfer data using a similar approach with the updatedAt field.
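For the per-minute cron job described above, the range filter can use date math instead of a fixed timestamp, so each run picks up roughly the last minute of changes. A sketch along the same lines (the slightly widened two-minute window is my own addition, to guard against clock skew and in-flight writes):
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://sourcehost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "range": {
        "updatedAt": {
          "gte": "now-2m"
        }
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
Documents re-copied because of the overlapping window simply overwrite their previous versions in dest (reindex writes by _id), so repeated runs stay idempotent.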

Related

How to reindex and change _type

We need to migrate a number of indexes from ElasticSearch 6.8 to ElasticSearch 7.x. To do this, we first need to go back and fix a large number of documents, as the _type field of these documents isn't _doc as required. We fixed this for newer indexes, but some of the older data which we still need has other values in this field.
How do we reindex these indexes and also change the _type field?
POST /_reindex
{
  "source": {
    "index": "my-index-2021-11"
  },
  "dest": {
    "index": "my-index-2021-11-n"
  },
  "script": {
    "source": "ctx._type = '_doc';"
  }
}
I saw a post indicating the above might work, but on execution, the value of _type in the new index was still the existing type from my-index.
The one option I can think of is to iterate through each document in the index and add it to the new index again, which should set the correct _type, but that would take days to complete, so I'm not keen on doing that.
I think the below should work. Please test it out before running it on actual data:
POST /_reindex
{
  "source": {
    "index": "my-index-2021-11"
  },
  "dest": {
    "index": "my-index-2021-11-n",
    "type": "_doc"
  }
}
Docs to help with the upgrade:
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/reindex-upgrade-inplace.html
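To spot-check the result (index name taken from the question), a single hit from the new index should now report the coerced type in its metadata:
GET my-index-2021-11-n/_search?size=1
Each hit's _type in the response should read _doc.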

keeping [Europe/Berlin] (or other timezones in this format) while indexing Elasticsearch

I'm trying to familiarize myself with Elasticsearch, specifically defining the mapping within a JSON file and creating a new index with it (with the help of the new Java API Client and Spring Boot).
This is what my JSON file looks like:
{
  "mappings": {
    "properties": {
      "Id": {
        "type": "text"
      },
      "timestamp": {
        "type": "date",
        "format": "date_optional_time"
      },
      "metadata": {
        "type": "nested"
      },
      "attributes": {
        "type": "nested"
      }
    }
  }
}
This can index my documents just fine, but I realized that if I use ZonedDateTime.now() for the data in my timestamp field, indexing fails due to the [Europe/Berlin] at the end. It works if I change it to
ZonedDateTime now = ZonedDateTime.now();
String date = now.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
which gives me the time but without [Europe/Berlin]! As far as I understand from my various googling and "stackoverflow-ing", ES does not accept a [Timezone] suffix in its date types, only the +02:00 offset format. But is it possible to keep it? (Maybe through an ingest pipeline?)
There are various documents that I would like to reindex that have [Timezone] hanging at the end, but these older documents saved it as text... I would like to be able to do date math with the timestamp field in the future, which is why I decided to try to create a new/better mapping with proper fields. Any pointers appreciated!
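One possible direction for the ingest-pipeline idea mentioned in the question (a sketch, not a confirmed solution; the pipeline name is illustrative): a gsub processor can strip the trailing [ZoneId] before the date field is parsed, so the mapping can stay a proper date type:
PUT _ingest/pipeline/strip-zone-id
{
  "description": "Remove a trailing [ZoneId] such as [Europe/Berlin] from the timestamp field",
  "processors": [
    {
      "gsub": {
        "field": "timestamp",
        "pattern": "\\[[^\\]]*\\]$",
        "replacement": ""
      }
    }
  ]
}
Index with ?pipeline=strip-zone-id (or set index.default_pipeline), and the remaining 2022-01-01T10:00:00+02:00 portion parses as date_optional_time. If the zone ID itself matters later, it could be copied into a separate keyword field before stripping.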

elasticsearch: How can I tell the _reindex API to continue indexing docs while the source index is still receiving new docs?

I have daily created indices; these indices are filled by an agent which collects logs every second of the day, and I'm reindexing them (by field) to new indices using the _reindex API.
How can I tell the _reindex API to keep reindexing while the source index is still receiving new documents?
Any help would be really appreciated!
Thank you
You cannot force the reindex API to stay online and pick up newly received documents.
But there is a workaround: add a date field (index_time) to your source index, then write an hourly cron job that runs the Reindex API with a query fetching the docs indexed in the last hour via index_time.
POST _reindex
{
  "source": {
    "index": "my-index-000001",
    "query": {
      "bool": {
        "filter": {
          "range": {
            "index_time": { "gte": "now-1h" }
          }
        }
      }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
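Since an hourly run can take longer than an hour on a busy index, one option (an assumption on my part, not part of the original answer) is to launch each run asynchronously and track it through the Tasks API instead of blocking the cron job:
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "my-index-000001",
    "query": {
      "range": { "index_time": { "gte": "now-1h" } }
    }
  },
  "dest": {
    "index": "my-new-index-000001"
  }
}
The response contains a task id; GET _tasks/<task id> reports progress until the copy finishes.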

Elasticsearch reindex only missing documents

I am trying to reindex an index of 200M documents from cluster A to cluster B. I used the Reindex API with a remote source and everything worked fine. In the meanwhile, some documents were added to cluster A, so I want to add them to cluster B as well.
I launched the reindex request again, but the reindex process seems to be taking a long time, as if it were reindexing everything again.
My question is: is the cluster reindexing all the documents from scratch, even the ones that didn't change?
My Elasticsearch version is 5.6.
Elasticsearch does not know whether a document has changed or not, so it tries to copy every document into the destination index again. If you have a field like insert_time in your data, you can use reindex with a query to limit the part of index A that gets reindexed into B. This lets you build on your earlier reindex and finish faster. Reindex by query would be something like this:
POST _reindex
{
  "source": {
    "index": "A",
    "query": {
      "range": {
        "insert_time": {
          "gt": "time you want"
        }
      }
    }
  },
  "dest": {
    "index": "B"
  }
}
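If there is no insert_time-like field, another documented option is to copy only the documents missing from B: set op_type to create on the destination and let version conflicts proceed, so documents already present in B are skipped instead of rewritten (every document in A is still read, but only new ones are written):
POST _reindex
{
  "conflicts": "proceed",
  "source": {
    "index": "A"
  },
  "dest": {
    "index": "B",
    "op_type": "create"
  }
}
With a remote source, the same source.remote block from the original request applies here as well.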

Reindex fails due to SearchContextMissingException

My company is using Elasticsearch 2.3.4.
We have a cluster that contains 38 ES nodes, and we've been having a problem with reindexing some of our data lately...
We've reindexed very large indices before with no problems, but recently, when trying to reindex much smaller indices (less than 10 GB), we get: "SearchContextMissingException [No search context found for id [XXX]]".
We have no idea what's causing this problem or how to fix it. We'd like some guidance.
Has anyone seen this exception before?
From GitHub comments on issues related to this, I think it can be avoided by changing the batch size.
From the documentation:
By default _reindex uses scroll batches of 1000. You can change the batch size with the size field in the source element:
POST _reindex
{
  "source": {
    "index": "source",
    "size": 100
  },
  "dest": {
    "index": "dest",
    "routing": "=cat"
  }
}
I had the same problem with an index that holds many huge documents. I had to reduce the batch size down to 10 (100 and 50 both didn't work).
This was the request that worked in the end:
POST _reindex?slices=5&refresh
{
  "source": {
    "index": "source_index",
    "size": 10
  },
  "dest": {
    "index": "dest_index"
  }
}
You should also set slices to the number of shards in your index.
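For context, the exception means a scroll search context expired before the next batch was fetched, which is why smaller batches help. Extending the scroll keep-alive is another lever to consider: recent Elasticsearch versions document a scroll query parameter on _reindex (default 5m); whether it is available on 2.3.4 should be verified, and the value below is illustrative:
POST _reindex?scroll=30m
{
  "source": {
    "index": "source_index",
    "size": 10
  },
  "dest": {
    "index": "dest_index"
  }
}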
