Elasticsearch - equivalent of LEFT JOIN

I have 20,000,000 line items in Elasticsearch that I am happily searching (it's working amazingly well).
There is an added dimension though that I don't know how to solve:
A user can "buy" those items (in batches of 1,000 to 100,000) and I need my search to only return the items that they have not previously "bought". I'd solve this with a LEFT JOIN in SQL.
I could add a boughtBy[] field to each item, but then I would need to update lots of documents every time a user buys. Feels kind of wrong?

Elasticsearch uses Lucene, which supports block join. In Elasticsearch this is exposed as parent-child relationships. It gives you a join, but it also comes with limitations (it is no longer possible to arbitrarily distribute documents across nodes, and memory requirements can explode in certain scenarios).
The Elasticsearch documentation gives a nice overview of the relationship modelling options.
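For illustration, here is a minimal sketch of the ES 7 join-field flavour of parent-child using the official Python client; the index and field names (items, purchase, userId) are made up for the example:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Hypothetical mapping: "item" documents are parents, "purchase" documents
# are small children recording who bought which item.
es.indices.create(index="items", body={
    "mappings": {
        "properties": {
            "relation": {"type": "join", "relations": {"item": "purchase"}}
        }
    }
})

# The LEFT JOIN ... WHERE child IS NULL equivalent: exclude every item
# that has a "purchase" child for this user.
query = {
    "query": {
        "bool": {
            "must_not": {
                "has_child": {
                    "type": "purchase",
                    "query": {"term": {"userId": "user-42"}}
                }
            }
        }
    }
}
results = es.search(index="items", body=query)

Note that child documents must be routed to the same shard as their parent, which is exactly the distribution limitation mentioned above.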
If you need deep joins, more complex relationships, etc., you might consider looking into the SIREn plugin.
(disclaimer: I currently work for the company that develops SIREn)

Related

Elasticsearch: one index with a custom type to differentiate document schemas vs. multiple indices, one per document type?

I am not experienced with ES (my background is more in relational databases) and I am trying to add a search bar to my web application that searches its entire content (or at least the content I am willing to index in ES).
The architecture is Jamstack: a Gatsby application fetching content (sometimes at build time, sometimes at runtime) from a Strapi application (headless CMS). In the middle, I developed a microservice that writes the documents created in Strapi to the ES database. At the moment there is only one index for all the documents, regardless of type.
My problem is that, as the application grows and very different types of documents are created (for example, an article (news) and a hospital), I am having a hard time querying the database correctly, because I have to define a lot of type-specific conditions in the query to cover all document types.
My options are either to keep a single index and break the search down into several queries, running them all when the user hits the search button and joining the results before presenting them, OR to split the single index into several, one per document type. The latter leads me to another doubt: is it possible to query multiple indices at once and reference index-specific fields in the query?
Which is the best approach? I hope I have made myself clear.
Thanks in advance.
Based on the example you provided, where one document type is news and another is hospital, it makes sense to create multiple indices (though it also matters how many such different types you have). There are pros and cons to both approaches, and once you know them you can choose based on your use case.
Before I list the pros and cons, the answer to your other question: yes, you can query multiple indices at once, either by naming them all in a single search request (e.g. GET news,hospitals/_search) or by sending separate queries through the multi-search (_msearch) API.
Pros of having a single index
Less management overhead than multiple indices (this is why I asked how many such indices your application might have).
More performant search queries, as all the data lives in a single place.
Cons
You are indexing different types of documents, so you will have to include complex filters to get the data you need.
Relevance suffers: mixing document types skews the IDF statistics of the similarity algorithm (BM25).
Pros of having separate indices
Separating the data based on its properties gives better, more relevant results.
Your search queries will not be complex.
If you have really huge data, it makes sense to split it, keeping shard sizes optimal and performance better.
Cons
More management overhead.
If you need to search across all indices, you have to query them all and wait for every result, which might be costly. A sketch of such a multi-index query follows.
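To make the multi-index option concrete, a hypothetical sketch (the index and field names news, hospitals, headline, name are invented): you can search several indices in one request and scope clauses per index via the _index metadata field:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One request over two indices; each should-branch applies only to one index.
query = {
    "query": {
        "bool": {
            "should": [
                {"bool": {"filter": {"term": {"_index": "news"}},
                          "must": {"match": {"headline": "flood"}}}},
                {"bool": {"filter": {"term": {"_index": "hospitals"}},
                          "must": {"match": {"name": "flood"}}}}
            ],
            "minimum_should_match": 1
        }
    }
}
results = es.search(index="news,hospitals", body=query)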

ES7 - how to model 1-n parent-child relations - different ES types

I am migrating an old ES instance to ES7.
We need 1-n parent-child relations.
We used to have multiple types in the same index and it was easy.
Some types were related to their parent via _parent.
But ES7 will only allow single-type indices.
Which makes me think I will convert the old types to separate indices.
I read the docs and they suggest using the join field for parent-child relations; however, that seems to apply only to documents belonging to a single index.
https://www.elastic.co/blog/removal-of-mapping-types-elasticsearch
So if I convert my previous types to separate indices, in my understanding join will not help.
So what is the right solution to model parent-child relation between different types (or should I say indices) in ES7?
Or maybe I should not model my data as separate types/indices in ES7. But in that case, how to solve this?
Thanks in advance
Yes, that's correct: use indices instead of types, since ES deprecated types in version 7, so we have to create multiple indices to manage this use case.
So now we have only two options:
Option 1: Denormalize the data and ingest documents accordingly.
Here again you can manage it in two ways:
Denormalize just enough that you can keep using the join field: split your 1-to-n child types into n indices, each modelling a 1-to-1 parent-child pair. You would have as many indices as you had parent-child relations in the earlier version, with the same parent duplicated in each. Number of indices = number of parent-child relationships.
The second way is to denormalize completely, so that a single index holds one document per parent containing all the information from all the child types you had. In this case, number of indices = 1; a sketch of such a document is below.
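As a hypothetical illustration of the second way (all names invented), the single fully denormalized document might look like:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One index, one document per parent, every former child type embedded.
doc = {
    "parent_id": "p-1",
    "name": "Parent One",
    "orders":   [{"id": "o-1", "total": 25.0}],   # former child type 1
    "comments": [{"id": "c-1", "text": "nice"}],  # former child type 2
}
es.index(index="everything", id="p-1", body=doc)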
If your child types have unique fields, the second way with a single index may perform well, but since you haven't mentioned how many documents you have, you will need to find a balance. Another option is to use both techniques together.
Disadvantages in this case would be
Management of ingestion layer or jobs
Complexity in maintaining the structure of index
Performance issues in using the join type, as per this link
Keep an eye on future ES versions in case they modify the parent-child feature, although this is not a concern for now.
Advantages:
Simplicity at the service layer, which doesn't have to deal with the join logic of Option 2, discussed below.
Easier to correlate with the use cases coming from your front-end application.
Option 2: Manage the join at the application layer
Have a single parent index and multiple child indices, but manage the join at the application layer, as sketched after the lists below. If you have a 1-to-n mapping, the number of indices would be n (1 parent index plus n-1 child indices).
Disadvantages:
May or may not correlate easily with your use cases
Writing separate join logic at the application layer. Not to mention that if you want to aggregate across parent and child, you'd have to write several for-loops with multiple individual aggregation queries.
Advantages:
Ease of maintaining jobs or ingestion layer
Management of indexes would be less painful
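A minimal sketch of the application-layer join of Option 2, with hypothetical index and field names (parents, children, parent_id):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Step 1: find the matching parents and collect their ids.
parents = es.search(index="parents", body={
    "query": {"match": {"name": "acme"}},
    "size": 100,
})
parent_ids = [hit["_id"] for hit in parents["hits"]["hits"]]

# Step 2: pull the children of those parents with a terms query; the join
# happens here in application code, not inside Elasticsearch.
children = es.search(index="children", body={
    "query": {"terms": {"parent_id": parent_ids}},
    "size": 1000,
})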
Alternatively, you can mix and match both of the above options, depending on your use cases.
So you see, both have their pluses and minuses. If the ingestion layer is easy in one, it becomes cumbersome in the other; if the service layer is easier to maintain in one, it becomes difficult in the other.
The best way forward is to take some mock data, do some performance testing, and see which factors matter to you: ease of querying, index maintenance, query and aggregation performance, ease of developing and managing both the ingestion jobs and the service layer, and so on.
May not be exactly what you are looking for, but I just hope this helps!

Best way to store votes in elasticsearch for a reddit like system

I am building a site similar to reddit using Elasticsearch and trying to decide on the best place to store the up/down votes. I can think of a couple of options.
Store as part of the document.
In this case, any vote will trigger an update of the document. According to the Elasticsearch documentation, an update is essentially a reindex of the whole document. That seems to be a very expensive operation.
Store in another database.
Store votes in another database like SQL/MongoDB and update Elasticsearch periodically. In this case, we have to tolerate some delay before new votes affect search results, which is not ideal and also increases complexity and maintenance cost.
Store in another index in elasticsearch
This can separate the concern by index - one mostly RO, one RW. Is there an efficient way to merge the two indices so that I can order by votes at query time?
Any suggestions on those options or other better way to handle this?
There is a fourth option - store votes in a separate document with a different type but in the same index as the original document. The votes type can be made a child of the article type. This setup will enable you to perform queries against articles and votes at the same time using has_child filters and queries. It will also require reindexing only a small vote document every time a vote occurs, instead of the large article document. On the negative side, the has_child and has_parent queries require loading the parent/child map into memory, so this approach has a non-trivial memory footprint compared to all the other options you have described.
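The above is written in terms of the old _parent mapping; in current Elasticsearch the same idea is expressed with a join field. A hypothetical sketch (index and field names invented, and it assumes a join mapping relating "article" parents to "vote" children):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Casting a vote indexes only a tiny child document, routed to the same
# shard as its parent article; the article itself is never reindexed.
es.index(index="articles", routing="article-1", body={
    "relation": {"name": "vote", "parent": "article-1"},
    "direction": "up",
    "userId": "user-7",
})

# Query articles and votes together, e.g. articles with at least one upvote.
query = {
    "query": {
        "has_child": {
            "type": "vote",
            "query": {"term": {"direction": "up"}}
        }
    }
}
results = es.search(index="articles", body=query)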

Querying a view using multiple keys

Given the following view for the gamesim-sample example:
function (doc, meta) {
    if (doc.jsonType == "player" && doc.experience) {
        emit([doc.experience, meta.id], doc.id);
    }
}
I would like to query the leaderboard for users who belong to a specific group (the grouping data is maintained in an external system).
For example, if the view has users "orange", "purple", "green", "blue" and "red", I would like the leaderboard to give me the rankings of only "orange" and "purple" without having to first query their current experience points.
...view/leaderboard?keys=[[null,"orange"],[null,"purple"]]
The following works fine, but it requires additional queries to find the experience points of "orange" and "purple" beforehand. However, this does not scale for obvious reasons.
...view/leaderboard?keys=[[1,"orange"],[5,"purple"]]
Thanks in advance!
Some NoSql vs. SQL Background
First, you have to remember that, specifically with Couchbase, the advantage is the super-fast storage and retrieval of records. Indices were added later, as a way to make storage a little more useful and less error-prone (think of them more as an automated inventory), and their design really constrains you to move away from SQL-style thinking. Your query above is a perfect example:
select *
from leaderboard
where id in ('orange','purple')
order by experience
This is a retrieval, computation, and filter all in one shot. This is exactly what NoSql databases are optimized not to do (and conversely, SQL databases are, which often makes them hopelessly complex, but that is another topic).
So, this leads to the primary difference between a SQL vs a NoSQL database: NoSql is optimized for storage while SQL is optimized for querying. In conjunction, it causes one to adjust how one thinks about the role of the database, which in my opinion should be more the former than the latter.
The creators of Couchbase originally focused purely on the storage aspect of the database. However, storage makes a lot more sense when you know what it is you have stored, and indices were added later as a feature (originally you had to keep track of your own stuff - it was not much fun!). They also added map-reduce in a way that takes advantage of CB's ability to store and retrieve massive quantities of records simultaneously. Neither of these features was really intended to solve complex query problems (and even though this query is simple, it is a perfect example of that). This is the function of your application logic.
Addressing Your Specific Issue
So, now on to your question. The query itself appears to be a simple one, and indeed it is. However,
select * from leaderboard
is not actually simple. It is instead a 2-layer deep query, as your definition of leaderboard implies a sorted list from largest to smallest player experience. Therefore, this query, expanded out, becomes:
select * from players order by experience desc
Couchbase supports the above natively in the index mechanism (remember, it inventories your objects), and you have accurately described in your question how to leverage views to achieve this output. What Couchbase does not support is the third-level query, which represents your where clause. Typically, a where in Couchbase is executed in either the view "map" definition or the index selection parameters. You can't do it in "map" because you don't always want the same selection, and you can't do it in the index selection parameter because the index is sorted on experience level first.
Method 1
Let's assume that you are displaying this to a user on a web page. You can easily implement this filter client-side (or in your web service) by pulling the data as-is and throwing out values that you don't want. Use the limit and skip parameters to ask for more as the user scrolls down (or clicks more pages, or whatever).
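Hypothetically, Method 1 could look like this against the view's REST endpoint (port 8092); the design-document path, page size, and group set below are assumptions:

import requests

# Views are served at /<bucket>/_design/<ddoc>/_view/<view>
BASE = "http://localhost:8092/gamesim-sample/_design/players/_view/leaderboard"
WANTED = {"orange", "purple"}   # group membership from the external system

ranked, skip, page = [], 0, 100
while len(ranked) < 10:
    rows = requests.get(BASE, params={
        "descending": "true",   # highest experience first
        "limit": page,
        "skip": skip,
    }).json().get("rows", [])
    if not rows:
        break
    # Each emitted key is [experience, meta.id]; drop users outside the group.
    ranked.extend(r for r in rows if r["key"][1] in WANTED)
    skip += page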
Method 2
Reverse the order of your index, and sort by "group" (aka color) first, then experience level. Run separate queries to select the top 'N' users of each color, then merge and sort on the client side. This will take longer to load up-front but will give you a larger in-memory data set to work with if you need it for that reason. This method may not work well if you have a very uneven distribution of categories, in which case 'N' would need to be tailored to match the statistical distribution(s) within the categories.
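A sketch of Method 2, assuming a second view (name invented) that emits [group, experience]; with descending=true the startkey is the high end of the range:

import json
import requests

BASE2 = "http://localhost:8092/gamesim-sample/_design/players/_view/by_group"

def top_n(group, n=50):
    # {} collates after every number, so [group, {}] is the top of the range.
    return requests.get(BASE2, params={
        "startkey": json.dumps([group, {}]),
        "endkey": json.dumps([group]),
        "descending": "true",
        "limit": n,
    }).json()["rows"]

# Merge the per-group results and re-sort client-side by experience (key[1]).
merged = sorted(top_n("orange") + top_n("purple"),
                key=lambda r: r["key"][1], reverse=True)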
Bottom Line
One parting thought is that NoSql databases were designed to deal with highly dynamic data sets. This requires some statistical thinking, because there no longer is a single "right" answer. Some degree of inconsistency and error is to be expected (as there always is in the real world). You can't expect a NoSql database to return a perfect query result - because there is no perfection. You have to settle for "good enough" - which is often much better than what is needed anyway.

MongoDB text index search slow for common words in large table

I am hosting a MongoDB database for a service that supports full-text search on a collection with 6.8 million records.
Its text index includes ten fields with varying weights.
Most searches take less than a second. Some searches take two to three seconds. However, some searches take 15 - 60 seconds! The 15-60 second search cases are unacceptable for my application. I need to find a way to speed those up.
Searching takes 15-60 seconds when words that are very common in the index are used in the search query.
It seems that the text search feature does not support lazy parameters. My first thought was to cache a list of the 50 most common words in my text index and then ask MongoDB to evaluate those last (lazily), on top of the filtered results returned by the less common parameters. Hopefully people are still with me. For example, say I have the query "products chocolate", where products is common and chocolate is uncommon. I would like to be able to ask MongoDB to evaluate "chocolate" first, and then filter those results with the "products" term. Does anyone know of a way to achieve this?
I can achieve the above scenario by omitting the most common words (i.e. "products") from the db query and then reapplying the common-term filter on the application side after receiving the records found by the db. It is preferable for all query logic to happen on the database, but I am open to application-side processing for a speed payoff.
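A hypothetical sketch of that workaround with pymongo (the common-word list, collection name, and title field are made up, and a text index on the collection is assumed):

from pymongo import MongoClient

coll = MongoClient().mydb.products        # hypothetical db/collection
COMMON = {"products", "item", "buy"}      # precomputed frequent index words

def search(raw_query):
    terms = raw_query.split()
    rare = [t for t in terms if t.lower() not in COMMON]
    common = [t for t in terms if t.lower() in COMMON]
    # Hit the text index with the selective terms only...
    docs = coll.find({"$text": {"$search": " ".join(rare)}})
    # ...then re-apply the common terms in application code.
    return [d for d in docs
            if all(c.lower() in d.get("title", "").lower() for c in common)]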
There are still some holes in this design. If a user searches only common terms, I have no choice but to hit the database with all of them. From preliminary reading, I gather that it is not recommended (or not supported) to have multiple text indexes (with different names) on the same collection. My plan is to create two identical collections, each with my 6.8M records, with different indexes - one for common words and one for uncommon words. This feels kludgy and clunky, but I am willing to do it for a speed increase.
Does anyone have any insight and/or advice on how to speed up this system? I'd like as much processing as possible to happen on the database to keep it fast. I'm sure my little 6.8M record collection is not the largest MongoDB has seen. Thanks!
Well, I worked around these performance issues by letting MongoDB full-text search run in its OR-based form. I prioritize my results by fine-tuning the weights on my indexed fields and simply ordering by rank. I do get more results than desired, but that's not a huge problem, because the heavily weighted results at the top will most likely be consumed before the user reaches the less relevant results at the bottom.
If anyone is struggling with MongoDB text search performance using AND-only searching, just switch back to OR and control your results using weights. It performs leaps better.
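For anyone wanting to try this, a minimal pymongo sketch (the collection, field names, and weights are illustrative):

from pymongo import MongoClient

coll = MongoClient().mydb.products   # hypothetical db/collection

# Weighted text index: title matches count five times as much as description.
coll.create_index(
    [("title", "text"), ("description", "text")],
    weights={"title": 10, "description": 2},
)

# $text ORs unquoted terms by default; sort by the weighted textScore so the
# best matches surface first.
cursor = coll.find(
    {"$text": {"$search": "products chocolate"}},
    {"score": {"$meta": "textScore"}},
).sort([("score", {"$meta": "textScore"})])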
hth
This is the exact same issue as $all versus $in: $all only uses the index for the first keyword in the array. I believe you're seeing the same issue here, which is why the OR (a.k.a. IN) approach works for you.
