What is the recommended way to check for username/email availability? Should I use a mutation or a query?
According to the official definition of a Mutation:
If you have an API endpoint that alters data, like inserting data into a database or altering data already in a database, you should make this endpoint a Mutation rather than a Query.
Checking for availability does not change or create any data, so it should be a Query.
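As a sketch, the schema below keeps the read-only check under Query and reserves Mutation for the write; the type and field names (isEmailAvailable, registerUser) are hypothetical:

```graphql
type User {
  id: ID!
  email: String!
}

type Query {
  # Read-only availability check: nothing is inserted or altered.
  isEmailAvailable(email: String!): Boolean!
}

type Mutation {
  # Registration actually writes data, so it belongs here.
  registerUser(email: String!, password: String!): User!
}
```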
I am new to Elasticsearch and starting to sync my database tables into Elasticsearch indexes. I have started by using the table ID (a UUID) as the Elasticsearch _id, but I am starting to wonder whether this is a mistake in terms of performance or flexibility in the long term. Any advice would be appreciated.
I think this approach should actually be a best practice. When you update data in your ES index from the (changed) DB, you can address the document directly.
Using the _bulk update API, which requires an explicit _id per item, has worked great for us.
On every change on the DB side, we enqueue a change notification; the changed object gets JSON-serialized and sent to ES asynchronously, in larger batches. That makes a huge performance difference. Search performance, on the other hand, does not depend on the length of the _id AFAIK, not even when you look up by _id, so your DB UUID should be just fine, especially since _ids can be alphanumeric and are not limited to just numbers.
Having a 1:1 relationship via _id between the ES result and your system of record (I assume that's what your DB is) is also advantageous for transparency. In any case, you want to store the database ID in some field, ideally indexed, at the very least to help you understand where a given document came from.
So, rather than creating your own ID field, you may as well use the built-in _id field right away, populated with your DB-supplied value.
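For instance, a minimal sketch with the elasticsearch-py client; the index name, connection details, and row shape are assumptions:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def sync_rows(rows):
    """Bulk-index DB rows, reusing each row's UUID directly as the ES _id."""
    actions = (
        {
            "_op_type": "index",   # replaces the document if this _id exists
            "_index": "users",
            "_id": row["id"],      # the DB UUID, used as-is
            "_source": {"name": row["name"], "email": row["email"]},
        }
        for row in rows
    )
    helpers.bulk(es, actions)
```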
In Elasticsearch versions after 2.x there is no _timestamp field, so we have to explicitly populate time fields like created_on and updated_on.
One way I know to populate these fields is to check whether the item already exists in the database using a uid (assume the uid is generated on the client side from some item properties). If the item exists, update all fields except created_on. If it does not exist, create the entry with created_on set to the current time.
My questions are:
* Isn't checking every time I create/update redundant?
* Is there a better way to implement the created_on and updated_on logic on the client side without this redundancy (i.e., without querying Elasticsearch)?
Using a "middleware" for this is a good way to avoid having this kind of logic in the client, once you change the design, you would need to perform changes on every client implementation, so I think is a good use case for ingesting pipelines and there is an example in the doc.
Accessing Ingest Metadata Fields:
Beyond metadata fields and source fields, ingest also adds ingest metadata to the documents that it processes. These metadata properties are accessible under the _ingest key. Currently ingest adds the ingest timestamp under the _ingest.timestamp key of the ingest metadata. The ingest timestamp is the time when Elasticsearch received the index or bulk request to pre-process the document.
If you need more intelligent middleware, look at the Script Processor, which allows inline and stored scripts to be executed within ingest pipelines.
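A sketch of the simple case, assuming the elasticsearch-py client; the pipeline and field names are hypothetical, and the exact keyword arguments vary by client version (newer 8.x clients prefer explicit `processors=` over a `body` dict):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A set processor that copies the ingest timestamp into updated_on
# on every index or bulk request routed through the pipeline.
es.ingest.put_pipeline(
    id="timestamps",
    body={
        "description": "Stamp updated_on from the ingest timestamp",
        "processors": [
            {"set": {"field": "updated_on", "value": "{{_ingest.timestamp}}"}}
        ],
    },
)

# Route writes through the pipeline so the server, not the client,
# owns the timestamp logic.
es.index(index="items", id="some-uid", body={"name": "x"}, pipeline="timestamps")
```

Note that an ingest pipeline only sees the incoming document, not what is already stored, so preserving created_on across full re-indexes still needs handling elsewhere (for example via partial updates, which bypass ingest pipelines entirely).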
We have a base database where the Candidate records are created/updated/deleted. In Elasticsearch we are using the ES-generated ID; however, the database does not know the ES ID generated for each record. In a batch process we fetch 500 records from the DB and send them to ES, but we do not know which records need to be inserted and which need to be updated. We also do not know the ES IDs of the records that were already indexed.
Using a curl request against the _bulk API, is there any way we can first check whether a record is present using its unique email address? If it is present, send an update request; if it is not, send an insert request.
Is there any way we can write a script inside the _bulk call?
I'm currently using Elasticsearch and running a cron job every 10 minutes that finds newly created/updated data in my DB and syncs it with Elasticsearch. However, I want to use bulk to sync instead of making an arbitrary number of requests to update/create documents in an index. I'm using the elasticsearch.js library created by Elasticsearch.
I face two challenges that I'm uncertain how to handle:
How to use bulk to update a document if it exists and create it if it doesn't, without knowing whether it exists in the index.
How to format a large amount of JSON to run through bulk to update/create the documents, because the bulk API expects the body to be formatted a certain way.
The best option when trying to stream data in from an SQL database is to use Logstash's JDBC input (see the documentation); it can hopefully just do it all for you.
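If it helps, a minimal sketch of such a pipeline might look like the following; the driver, connection string, query, and index name are placeholders you would adapt to your own database:

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/jdbc-driver.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "pass"
    schedule => "*/10 * * * *"
    statement => "SELECT * FROM items WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "items"
    document_id => "%{id}"
  }
}
```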
Not all SQL schemas make this easy, so for your specific questions:
How to use bulk to update a document if it exists and create it if it doesn't, without knowing whether it exists in the index.
Bulk currently accepts four different types of sub-requests, which behave differently than you probably expect coming from an SQL world:
* index
* create
* update
* delete
The first, index, is the most commonly used option. It means that you want to index (the verb) something into the Elasticsearch index (the noun). However, if a document with the same _id already exists in the index, it will be replaced. The rest are probably a bit more obvious.
Each one of the sub-requests behaves like the individual request that it's associated with (so update is an UpdateRequest under the hood, delete is a DeleteRequest, and index is an IndexRequest). In the case of create, it is a specialization of index, which effectively says "add this if it doesn't exist, but fail if it does exist".
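For reference, the body of a bulk request is newline-delimited JSON: each action line is followed, for every op except delete, by a source line. The sketch below uses made-up index and field names (very old versions also expect a _type in the action metadata):

```json
{ "index": { "_index": "myindex", "_id": "1" } }
{ "name": "created or replaced", "email": "a@example.com" }
{ "update": { "_index": "myindex", "_id": "2" } }
{ "doc": { "name": "partial update" }, "doc_as_upsert": true }
{ "delete": { "_index": "myindex", "_id": "3" } }
```

Note the doc_as_upsert flag on the update action: it merges doc into an existing document, or creates the document from doc when it is missing, which addresses the first challenge.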
How to format a large amount of JSON to run through bulk to update/create the documents, because the bulk API expects the body to be formatted a certain way.
You should look into using either the Logstash approach or one of the existing client libraries, such as the Python client, which should work well from cron. The clients will take care of the formatting for you; one for your preferred language most likely already exists.
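For example, a sketch with the Python client, where the index name and document shape are assumptions (the elasticsearch.js client ships an analogous bulk helper):

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def sync(docs):
    """Upsert changed docs: update when the _id exists, create otherwise."""
    actions = (
        {
            "_op_type": "update",
            "_index": "myindex",
            "_id": doc["id"],
            "doc": doc,               # fields to merge into an existing doc
            "doc_as_upsert": True,    # create the doc from `doc` if missing
        }
        for doc in docs
    )
    # helpers.bulk serializes the action/source NDJSON pairs for you.
    helpers.bulk(es, actions)
```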
I am working on a large-scale Oracle database. There is a requirement to validate whether an email address is available in the database when the user enters it. If a direct database call is made, it would be an exact match,
e.g. SELECT email FROM Users WHERE emailaddress = 'sampleemail@domain.com'; it is not a LIKE.
It has been suggested that rather than doing a direct exact-match search against the database, it would be better to do a Solr search for this. Even so, it will be an exact match.
I would like to understand: can there be a significant advantage to using Solr in this scenario, given that it is an exact match? If so, how?
No, don't do that. Build an index in the database (a unique b-tree) and query that. Whoever suggested Solr for this is highly misinformed about the trade-offs. This is literally and exactly why there are indexes inside the database at all.
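For illustration, in Oracle-flavored SQL, using the table and column names from the question (the index name and the case-insensitive variant are assumptions):

```sql
-- A unique b-tree index that also enforces one account per address.
CREATE UNIQUE INDEX users_email_ux ON Users (emailaddress);

-- The availability check then becomes a single index lookup:
SELECT COUNT(*) FROM Users WHERE emailaddress = 'sampleemail@domain.com';

-- If matching should be case-insensitive, use a function-based index
-- instead:
--   CREATE UNIQUE INDEX users_email_lx ON Users (LOWER(emailaddress));
-- and query with WHERE LOWER(emailaddress) = LOWER(:input).
```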