ElasticSearch routing field

Are there any guidelines for selecting a field to be used for routing in ElasticSearch? Previously we've had success using the user_id for an index of roughly 750K documents (we have about 10K users).
For a second index, I didn't use the user_id field, and instead used a different ID with far more distinct values (about 750K). Performance seems subpar, but it's hard to pin down whether the routing field is the cause (it is required), as there are 30 million documents stored.
(I'll admit I'm assuming the query planner works in a similar fashion to a traditional SQL-based one, and that may not be the case.)
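For illustration, here is a minimal sketch of how a routing value is applied at index and search time, assuming the official Python client (elasticsearch-py, 8.x-style keyword arguments); the index and field names are made up. The usual guideline is to route on a value you always know at query time and that spreads documents reasonably evenly, since a heavily skewed routing value can produce hot shards.

```python
# Sketch: routing documents by user_id so each user's documents land on one shard.
# Assumes the elasticsearch-py 8.x client; index/field names are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document with an explicit routing value (the user_id).
es.index(
    index="user_events",
    id="event-1",
    routing="user-42",  # all of user-42's documents hash to the same shard
    document={"user_id": "user-42", "action": "login"},
)

# Search with the same routing value: only the shard owning "user-42" is queried,
# instead of fanning out to every shard in the index.
resp = es.search(
    index="user_events",
    routing="user-42",
    # assumes dynamic mapping created a .keyword subfield for exact matching
    query={"term": {"user_id.keyword": "user-42"}},
)
print(resp["hits"]["total"])
```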

Related

Is querying multiple indices slower than querying a single index in Elasticsearch?

I have a retention index which is used to save transaction data. The index pattern is yearly, i.e. transaction-2000, transaction-2001, etc. There is a timestamp field inside each document which indicates when the transaction occurred.
I also have an alias transaction which points to all yearly transaction indices. When I query the transaction data in my application, I just use the alias name rather than the yearly index name.
My question: if I query documents from just one year based on the timestamp field, e.g. 2000, will the query be faster if I only query the single index transaction-2000 rather than the alias transaction? Or are they the same speed?
Joey, this is a classic Elasticsearch problem. When you have multiple indices behind an alias, all of them are queried. One way to overcome this is to use routing, which comes in extremely handy if you already know which index to go to. At query time, if you already know the timestamp range (2000, or 2000 and 2001, for example), you can specifically instruct Elasticsearch to search only those two indices behind the alias.
https://www.elastic.co/blog/customizing-your-document-routing
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-shard-routing.html
We had a similar issue when we scaled search for a large dataset (a similar design where multiple indices were behind an alias); routing came in handy and the queries scaled to our requirements. Hope this helps.
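As a rough sketch of that idea, assuming the Python client and the yearly index names from the question: if the timestamp already tells you the year(s), you can point the search at just those concrete indices instead of the whole alias.

```python
# Sketch (elasticsearch-py, illustrative names): restrict the search to the yearly
# indices you already know you need, instead of the whole "transaction" alias.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Query only the 2000 and 2001 indices behind the alias.
resp = es.search(
    index="transaction-2000,transaction-2001",
    query={
        "range": {
            "timestamp": {"gte": "2000-01-01", "lt": "2002-01-01"}
        }
    },
)
print(resp["hits"]["total"])
```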

Should I be using database IDs as Elastic IDs?

I am new to Elastic and starting to sync my database tables into Elastic indexes. I have started by using the table ID (a UUID) as the Elastic _id, but I am starting to wonder if this is a mistake in terms of performance or flexibility in the long term. Any advice would be appreciated.
I think this approach should actually be a best practice. When you update data in your ES index from the (changed) DB, you can address the document directly.
It has worked great for us to use the _bulk update API, which requires an explicit id per item.
On every change on the DB side, we enqueue change notifications; the changed object gets JSON-serialized and sent to ES asynchronously, in larger batches. That makes a huge performance difference. Search performance, on the other hand, does not depend on the length of the _id AFAIK, not even when you look up by _id. So your DB UUID should be just fine, especially since _ids can be alphanumeric; they are not limited to just numbers.
Having a 1:1 relationship via _id between the ES result and your system of record (I assume that's what your DB is for) is also advantageous for transparency. In any case, you want to store the database ID in some field, ideally indexed, to help you trace where each document came from.
So, rather than creating your own ID field, you may as well use the built-in _id field right away, with your DB-supplied data.
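A minimal sketch of that flow with the Python client's bulk helper, using the DB UUID as the ES _id (the index name and row structure are illustrative):

```python
# Sketch: bulk-upserting changed rows into ES, reusing the database UUID as the
# ES _id (elasticsearch-py helpers; names and row shape are illustrative).
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

rows = [
    {"id": "0b8a2a6e-6f1c-4a2e-9a3c-1d2e3f4a5b6c", "name": "Alice", "email": "alice@example.com"},
    # ... more changed rows pulled from the change-notification queue
]

actions = (
    {
        "_op_type": "index",   # overwrites the document if it already exists
        "_index": "customers",
        "_id": row["id"],      # the DB UUID doubles as the ES _id
        "_source": row,
    }
    for row in rows
)

helpers.bulk(es, actions)
```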

ElasticSearch vs Relational Database

I'm creating a microservice to handle the contacts that are created in the software. I'll need to create contacts and also search if a contact exists based on some information (name, last name, email, phone number). The idea is the following:
A customer calls; if they don't exist, we create the contact, asking for all their personal information. The second time they call, we will search for coincidences by name, last name, and email, to detect that the contact already exists in our DB.
What I thought is to use a MongoDB as primary storage and use ElasticSearch to perform the query, but I don't know if there is really a big difference between this and querying in a common relational database.
EDIT: Imagine a call center that is getting calls all the time from mostly different people, and we want to search quickly (by name, email, last name) whether that person is in our DB. Wouldn't ElasticSearch be good for this?
A relational database can store data and also index it.
A search engine can index data but also store it.
Relational databases are better at read-what-was-just-written performance. Search engines are better at really quick search with additional tricks like all kinds of normalization: lowercasing, ä -> a or ae, prefix matches, ngram matches (if indexed accordingly). Whether it's 1 million or 10 million entries in the store is not a big deal nowadays; the question is your query load. Well, there are only so many service-center workers, so your query load is likely far less than 1 qps. No problem for a relational DB at all. The search engine would start to make sense if you want some normalization, as described above, or you start indexing free-text comments and descriptions of customers.
If you don't have a problem with performance, then keep it simple and use 1 single datastore (maybe with some caching in your application).
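To make the "normalization tricks" concrete, here is a hedged sketch of a custom analyzer that lowercases, ascii-folds (ä -> a) and indexes edge n-grams for cheap prefix matching, using the Python client; the index, analyzer, and field names are all illustrative.

```python
# Sketch: a "name" field that is lowercased, ascii-folded and broken into edge
# n-grams at index time; the same normalization (minus the n-grams) is applied
# at search time. All names are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="contacts",
    settings={
        "analysis": {
            "filter": {
                "edge_2_15": {"type": "edge_ngram", "min_gram": 2, "max_gram": 15}
            },
            "analyzer": {
                "name_index": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "asciifolding", "edge_2_15"],
                },
                "name_search": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "asciifolding"],
                },
            },
        }
    },
    mappings={
        "properties": {
            "name": {
                "type": "text",
                "analyzer": "name_index",
                "search_analyzer": "name_search",
            }
        }
    },
)
```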
Elasticsearch is not meant to be a primary datastore, so my advice is to use a simple relational database like Postgres and use simple SQL queries / an ORM mapper. If the dataset is not really large, it should be fast enough.
When you have performance issues on searches, you can use a combination of a relational DB and Elasticsearch. You can use Elasticsearch feeders to keep ES updated with the data in your relational DB.
Indexed RDBMS works well for search
If your data is structured i.e. columns are clearly defined, searching 1 million records will also not be a problem in RDBMS.
When to use Elastic
Text Search: Searching words across multiple properties (e.g. description, name etc.)
JSON Store and search: If data being stored is in json format and later needs to be searched
Auto Suggestions: Elastic is better at providing autocomplete suggestions
Elastic as an application data provider
Elastic should not be seen as a data store, even if you are storing data in it. It is about how you perceive Elastic: it should be used to store and set up data for the application, and it is the application which decides how and when to use Elastic (search and suggestions). Elastic is not a NoSQL storage alternative to an RDBMS; if that is what you need, use a NoSQL database instead.
This perception puts Elastic in line with Redis and Kafka. These tools are key components of an application design; they serve as event stores, search engines, caches, etc. for the application.
Database with Elastic
Your design should use both. For storing the contacts, use the database and index the contacts for querying. Also make the data available in Elastic for searching, autocomplete and related matches.
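As a sketch of the "does this contact already exist?" lookup, here is roughly what such a query could look like with the Python client; the index and field names (name, last_name, email.keyword) are assumptions, not a prescribed schema.

```python
# Sketch: "does this caller already exist?" as a bool query over a few fields,
# with fuzziness to tolerate typos (index/field names are illustrative).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="contacts",
    query={
        "bool": {
            "should": [
                {"match": {"name": {"query": "Jon", "fuzziness": "AUTO"}}},
                {"match": {"last_name": {"query": "Smith", "fuzziness": "AUTO"}}},
                {"term": {"email.keyword": "jon.smith@example.com"}},
            ],
            "minimum_should_match": 1,
        }
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_score"], hit["_source"].get("name"))
```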
As always, it depends on your specific use case. You briefly described it, but how are you actually going to use the data?
If it's just something simple like checking if a customer exists and then creating a new customer, then use the RDBMS option. Moreover, if you don't expect a large dataset, so scaling isn't an issue (hence the common designation of Elasticsearch as a big-data tool), but you have transactions and data integrity is important, then an RDBMS will be the right fit. Some examples could be tax, leasing, or financial reporting systems.
However, if you have a large dataset and you need a wide range of query capabilities, such as fuzzy search, searches where the user can select multiple filters on the data, or predictive analysis on the data, then Elasticsearch is the clear choice.
For example, I worked on a web-based app with a large customer base: 11 million customers, with 200+ hits per second at peak time, for a find-a-doctor application. The customer could check some checkboxes to filter by specialty, spoken languages, ratings, hospitals, etc., all sorted by distance from the user's location with a response time of 2 seconds or less. It would be very difficult for an RDBMS to match that.
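That kind of query maps naturally onto a bool filter plus a geo_distance sort. A rough sketch with the Python client, with index and field names invented for illustration (location is assumed to be mapped as geo_point):

```python
# Sketch: checkbox-style filters plus distance sorting, roughly the shape of a
# "find a doctor" query (index/field names are illustrative; "location" is
# assumed to be a geo_point field).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="providers",
    query={
        "bool": {
            "filter": [
                {"term": {"specialty": "cardiology"}},
                {"terms": {"languages": ["en", "es"]}},
                {"range": {"rating": {"gte": 4}}},
            ]
        }
    },
    sort=[
        {
            "_geo_distance": {
                "location": {"lat": 40.71, "lon": -74.00},
                "order": "asc",
                "unit": "km",
            }
        }
    ],
)
```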

Elasticsearch: Searching over a large number of fields in a large index

On Elasticsearch 5.6.
We've got a requirement to implement a context-free search (a simple Google-like "search anything") feature that could operate over an index with 1000 fields. The index itself can be big (1 million docs per day).
I was looking at the query_string query with fields set to '*'. I came across this section
https://www.elastic.co/guide/en/elasticsearch/reference/master/tune-for-search-speed.html#_search_as_few_fields_as_possible
where it says searching over many fields will slow down the search, and that a general pattern is to have an "all"-like field with all the values munged together and to run the search on that.
While this is perfectly possible, my requirement is a bit more complex: these 1000 fields are protected by document-level security using X-Pack security. Therefore, if I search only the "all"-like field, I might bring back a top result for which the user actually doesn't have access to any of the fields relevant to their permission settings. I foresee a gap here somewhere. Any thoughts and possible solutions?
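For reference, the "all-like field" pattern from that tuning guide is usually built with copy_to. A hedged sketch with the Python client follows; the names are illustrative, and note that this alone does not solve the permissions concern above.

```python
# Sketch of the "all-like field" pattern: copy_to funnels selected fields into one
# catch-all field that query_string can search, instead of expanding to 1000
# fields (names are illustrative).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="events",
    mappings={
        "properties": {
            "title":   {"type": "text", "copy_to": "all_text"},
            "comment": {"type": "text", "copy_to": "all_text"},
            # repeat copy_to for the other fields that should be searchable
            "all_text": {"type": "text"},
        }
    },
)

# Search one field instead of fields="*"
resp = es.search(
    index="events",
    query={"query_string": {"query": "error AND timeout", "fields": ["all_text"]}},
)
```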

ElasticSearch multiple types with same mapping in single index

I am designing an e-commerce site with multiple warehouses. All the warehouses have the same set of products.
I am using ElasticSearch for my search engine.
There are 40 fields in each ES document. 20 of them will differ in value per warehouse; the remaining 20 fields will contain the same values for all warehouses.
I want to use multiple types (1 type for each warehouse) in 1 index. All of the types will have the same mappings. Please advise if my approach is correct for such a scenario.
A few things are not clear to me:
Will the inverted index be created only once for all types in the same index?
If a new type (new warehouse) is added in the future, how will it be merged with the previously stored data?
How will it impact query time compared to using only one type in one index?
Since all types are assigned to the same index, the inverted index will only be created once.
If a new type is added, its information is added to the existing inverted index as well: new terms are added, pointers are added to existing terms, and doc values data is added for each newly inserted document.
I honestly can't answer that one, though it is simple to test this in a proof of concept.
In a previous project I had the same setup, implementing a search engine with Elasticsearch on a multi-shop platform. In that case we kept all shops in one type and applied the relevant filters when searching per shop. That said, separating shop data by "_type" seems pretty clean to me; we went the other way only because my implementation could already cover it with filters at the time of the feature request.
Cheers, Dominik
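For what it's worth, the filter-based alternative Dominik describes could look roughly like this: one index, a warehouse discriminator field, and a filter per warehouse at query time (index and field names invented for illustration).

```python
# Sketch: one index with a warehouse_id field, filtered per warehouse at query
# time (names are illustrative; warehouse_id is assumed to be a keyword field).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="products",
    query={
        "bool": {
            "must": [{"match": {"title": "usb-c cable"}}],
            "filter": [{"term": {"warehouse_id": "wh-berlin"}}],
        }
    },
)
```

Note that mapping types were deprecated in Elasticsearch 6.x and removed in 7.x, so on newer versions the practical choice is between a discriminator field like this or one index per warehouse.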
