Best way to synchronize Elasticsearch with MySQL - elasticsearch

I am using Elasticsearch in my Spring Web MVC project (spring-data-elasticsearch) and need to keep it synchronized with my database (MySQL).
I index documents from my application: whenever a new entity is added to a DB table, the service layer also sends an index request for that document to Elasticsearch.
The DB tables and the Elasticsearch index therefore hold the same data, and I use the same approach for delete and update operations, performing the change on both Elasticsearch and the DB table. It is working fine.
Now I want to know what the disadvantages of this approach are.
Or is there a better way to keep the Elasticsearch index up to date with the DB? I used Logstash, but what about deleted entities?
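For reference, here is a minimal sketch of the dual-write pattern described in the question, assuming a hypothetical Product class mapped both as a JPA @Entity and as an Elasticsearch @Document (definition omitted); the repository and service names are illustrative only:

```java
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical repositories: MySQL via Spring Data JPA, Elasticsearch via Spring Data Elasticsearch.
interface ProductJpaRepository extends JpaRepository<Product, Long> {}
interface ProductSearchRepository extends ElasticsearchRepository<Product, Long> {}

@Service
class ProductService {

    private final ProductJpaRepository db;
    private final ProductSearchRepository search;

    ProductService(ProductJpaRepository db, ProductSearchRepository search) {
        this.db = db;
        this.search = search;
    }

    @Transactional
    public Product create(Product product) {
        Product saved = db.save(product);   // write to MySQL (the source of truth)
        search.save(saved);                 // index the same document in Elasticsearch
        // If the Elasticsearch call throws, the exception rolls the DB write back,
        // but if indexing succeeds and the later DB commit fails (or ES is temporarily
        // unavailable), the two stores drift apart and nothing retries automatically.
        return saved;
    }

    @Transactional
    public void delete(Long id) {
        db.deleteById(id);                  // same dual-write idea for deletes and updates
        search.deleteById(id);
    }
}
```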

The disadvantage of synchronous indexing is that there is no retry if an error occurs while writing the index data.
In your place I would create a cron job / batch process (how often it triggers depends on how much data is updated and how important index freshness is),
and this job would record an execution status with logs,
so you have a clear picture of your index and no missing data.
For the indexing itself you can have a FULL index mode and an UPDATE index mode (you should add an update timestamp column to your tables).
For the indexing strategy you have two options:
TWO_PHASES: you need a master and a slave index ==> while indexing runs on the master, the slave keeps answering requests, and when the indexing is over you synchronize them.
DIRECT_MODE: drop the index and create a new one.
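A minimal sketch of such a batch UPDATE-mode job, reusing the hypothetical Product and ProductSearchRepository names from the earlier sketch and assuming the table has an updated-at timestamp column; Spring's @Scheduled stands in for the cron trigger:

```java
import java.time.Instant;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Derived query assumes an "updated at" column on the table.
interface ProductSyncRepository extends JpaRepository<Product, Long> {
    List<Product> findByUpdatedAtAfter(Instant since);
}

@Component
class IndexSyncJob {

    private final ProductSyncRepository db;
    private final ProductSearchRepository search;   // Elasticsearch repository from the sketch above
    private Instant lastRun = Instant.EPOCH;        // first run behaves like a FULL index, later runs are UPDATEs

    IndexSyncJob(ProductSyncRepository db, ProductSearchRepository search) {
        this.db = db;
        this.search = search;
    }

    @Scheduled(cron = "0 */15 * * * *")             // every 15 minutes; tune to how fresh the index must be
    public void syncUpdatedRows() {
        Instant startedAt = Instant.now();
        List<Product> changed = db.findByUpdatedAtAfter(lastRun);
        search.saveAll(changed);                    // re-index only the rows touched since the last run
        lastRun = startedAt;
        // Log the execution status here; on failure keep the old lastRun so the next run retries the window.
        // Hard deletes are not covered: use a soft-delete flag or a periodic FULL rebuild for those.
    }
}
```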

Related

Elasticsearch: Update of existing record (which has custom routing param set) results in duplicate record if custom routing is not set during update

Env Details:
Elasticsearch version 7.8.1
The routing param is optional in index settings.
As per the Elasticsearch docs - https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-routing-field.html:
When indexing documents specifying a custom _routing, the uniqueness of the _id is not guaranteed across all of the shards in the index. In fact, documents with the same _id might end up on different shards if indexed with different _routing values.
We have landed in the same scenario: earlier we were using a custom routing param (let's say customerId), and for some reason we now need to remove custom routing.
This means the docId will now be used as the default routing param, which is creating duplicate records with the same id across different shards during index operations. Earlier (before removing custom routing) the same operation resulted in an update of the record (expected).
I am thinking of the following approaches to get out of this; please advise if you have a better approach to suggest. The key here is to AVOID DOWNTIME.
Approach 1:
As we receive an update request, let the duplicate record get created. Once the record without custom routing has been created, issue a delete request for the record with custom routing.
CONS: If some records never receive an update, they will linger around with custom routing; we want to avoid this as it might result in unforeseen scenarios in the future.
Approach 2:
We use the Reindex API to migrate data to a new index (turning off custom routing during the migration). The application will switch to the new index after a successful migration.
CONS: Some of our indexes are huge and take 12+ hours to re-index, and since the Elasticsearch Reindex API uses a snapshot mechanism, it will not migrate newer records created during that 12-hour window. This makes it a downtime approach.
Please suggest an alternative if you have faced this before.
Thanks #Val, I also found a few other approaches, like writing to both indexes and reading from the old one, then shifting reads to the new one after re-indexing is finished. Something along the following lines -
Create aliases pointing to the old indices (*_v1)
Point the application to these aliases instead of the actual indices
Create new indices (*_v2) with the same mapping
Move data from the old indices to the new ones using re-indexing, and make sure we don't retain custom routing during this.
Post re-indexing, change the aliases to point to the new indices instead of the old ones (need to verify this, but there are easy alternatives if it doesn't work)
Once verification is done, delete the old indices
What do we do in the transition period (the window between re-indexing start and re-indexing finish)?
Write to both indices (old and new) and read from the old indices via the aliases.
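For illustration, a sketch of those two steps with the 7.x Java high-level REST client: re-indexing into the new index while discarding the old custom routing, then atomically flipping a read alias. The index and alias names (orders_v1, orders_v2, orders_read) are made up, and for a 12-hour copy you would likely submit the reindex as a background task rather than a blocking call.

```java
import java.io.IOException;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.reindex.ReindexRequest;

class RoutingMigration {

    /** Copies documents into the new index, dropping the old custom _routing values. */
    static void reindexWithoutRouting(RestHighLevelClient client) throws IOException {
        ReindexRequest reindex = new ReindexRequest()
                .setSourceIndices("orders_v1")
                .setDestIndex("orders_v2")
                .setDestRouting("discard");   // equivalent of "dest": { "routing": "discard" }
        client.reindex(reindex, RequestOptions.DEFAULT);
    }

    /** Atomically re-points the read alias from the old index to the new one. */
    static void switchReadAlias(RestHighLevelClient client) throws IOException {
        IndicesAliasesRequest request = new IndicesAliasesRequest()
                .addAliasAction(AliasActions.remove().index("orders_v1").alias("orders_read"))
                .addAliasAction(AliasActions.add().index("orders_v2").alias("orders_read"));
        client.indices().updateAliases(request, RequestOptions.DEFAULT);  // both actions apply in one atomic step
    }
}
```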

MongoDB Laravel search query taking too much time

I have 400,000+ records stored in MongoDB with regular indexes, but when I fire an update or search query through Laravel Eloquent it takes too much time to return the particular records.
In the where condition we use indexed columns only.
We are using an Atlas M10 cluster instance with multiple replicas.
So if anyone has an idea about this, please share it with us.
My replication lag graph
This is my profiler data
My indexes in schema

Fetching index data from Elasticsearch using Spring Data Elasticsearch

I have Java code which connects to Elasticsearch using Spring Data Elasticsearch and fetches all the index data by connecting to the repository and executing the findAll() method. The data received from ES is processed by a separate application. When new data is inserted into Elasticsearch, I have the below queries:
1. How can I fetch only the newly inserted data programmatically?
2. Apart from using DSL queries, is there a way to asynchronously get the new records as and when new data is inserted into Elasticsearch?
I don't want to execute the findAll() method again, because it returns the entire data set (including the previously processed records).
Any help on this is much appreciated.
You will need to add a field (I call it createdAt here) to your entities that contains the timestamp at which your application inserted the document into Elasticsearch. One possibility would be to use the auditing support of Spring Data Elasticsearch to have the value set automatically, or you can set the value in your application. If the data is inserted by some other application, you need to make sure that it contains a timestamp in a format that matches the field type definition of this field in your application.
Then you'd need to define a method in your repository like
SearchHits<T> findByCreatedAtAfter(Timestamp referenceValue);
As for getting a notification in some form when new data is inserted: I'm not aware that Elasticsearch offers something like that. You will probably need to regularly call the method that retrieves the data.
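For illustration, a sketch of what that could look like; the entity, index name, and field names are hypothetical, the timestamp is set by the application, and java.time.Instant is used in place of Timestamp:

```java
import java.time.Instant;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.DateFormat;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

@Document(indexName = "events")
class Event {

    @Id
    private String id;

    @Field(type = FieldType.Date, format = DateFormat.epoch_millis)  // mapped as a date so range queries work
    private Instant createdAt;                                        // set by the application when indexing

    // getters and setters omitted
}

interface EventRepository extends ElasticsearchRepository<Event, String> {
    // Derived query: only the documents created after the given reference point.
    SearchHits<Event> findByCreatedAtAfter(Instant referenceValue);
}
```

A scheduled poller can then remember the newest createdAt it has processed and call findByCreatedAtAfter(lastSeen) instead of findAll().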

Is it possible for an Elasticsearch index to have a primary key comprised of multiple fields?

I have a multi-tenant system, whereby each tenant gets their own Mongo database within a MongoDB deployment.
However, for Elasticsearch indexing this all goes into one Elasticsearch instance via Mongoosastic, tagged with a TenantDB field to keep the data separated when searching.
Currently some of the same _ids are reused across the multiple databases in test data for various config collections (different document content, same _id). This is causing a problem when syncing to Elasticsearch: although the documents live in separate databases, when they arrive in Elasticsearch with the same type and ID, one of them gets dropped.
Is it possible to specify both the ID and the TenantDB as the primary key?
Solution 1: You can search across multiple indices in Elasticsearch. But if you cannot separate your indices per database, you can use a method like the following. While syncing your data to Elasticsearch, use a pattern to build the Elasticsearch document _id. For example, from mongoDb1 use mdb1_{mongo_id}, from mongoDb2 use mdb2_{mongo_id}, etc. This keeps your _ids unique as long as the same id does not appear twice within the same Mongo database (see the sketch after Solution 2).
Solution 2: Separate your indices.
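A tiny sketch of the id composition from Solution 1 (class and method names are illustrative only):

```java
/**
 * Sketch of Solution 1: build the Elasticsearch _id from the tenant database name plus
 * the Mongo _id, so identical Mongo ids coming from different tenant DBs never collide.
 */
class TenantDocumentIds {

    static String elasticId(String tenantDb, String mongoId) {
        return tenantDb + "_" + mongoId;
    }

    public static void main(String[] args) {
        // Same Mongo _id in two tenant databases -> two distinct Elasticsearch _ids.
        System.out.println(elasticId("mdb1", "507f1f77bcf86cd799439011")); // mdb1_507f1f77bcf86cd799439011
        System.out.println(elasticId("mdb2", "507f1f77bcf86cd799439011")); // mdb2_507f1f77bcf86cd799439011
    }
}
```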

How do you update or sync with a JDBC river

A question about rivers and data syncing with a production database using Elasticsearch:
Are rivers suited only for bulk loading data initially, or do they somehow listen or monitor for changes?
If I have a nightly import of data, is it just better to delete the rivers and indexes, and re-index and recreate the rivers?
If I update or change a river, do I have to delete and re-create the index?
How do I set up a schedule with a river to fetch new data periodically? Can it store the last max id so that it can do diff queries in the SQL to select into the river?
Any suggestions on a better way to keep the database and Elasticsearch in sync - without calling individual index update functions with a PUT command?
All of the Elasticsearch rivers are different - some are provided directly by Elasticsearch, many more are developed by third parties:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html
Each operates differently, so to answer your questions you have to choose a specific river. For your case, since you're looking to index data from a production database, I'll assume that the JDBC river is what you would use:
https://github.com/jprante/elasticsearch-river-jdbc
This river will index data from your JDBC source, including picking up changes. It can do so on a schedule (there is detailed documentation on the schedule parameter on this page: https://github.com/jprante/elasticsearch-river-jdbc). However, this river will not pick up deletes:
https://github.com/jprante/elasticsearch-river-jdbc/issues/213
You may find this discussion useful concerning getting around the lack of delete support by building a new river/index daily and using index aliases: ElasticSearch river JDBC MySQL not deleting records
You can also just alias your DB id column to _id in the river's SQL; this way Elasticsearch can identify whether a document has changed or not.
