I'm wondering if having thousands of different indexes is a bad idea?
I'm adding a search page to my web app based on ElasticSearch. The search page lets users search for other users on the site by filtering on a number of different indexed criteria (name, location, gender etc). This is fairly straightforward and will require just one index that contains a document for every user of the site.
However, I want to also create a page where users can see a list of all of the other users they follow. I want this page to have the same filtering options that are available on the search page. I'm wondering if a good way to go about this would be to create a separate index for each user containing documents for each user they follow?
While you can certainly create thousands of indices in elasticsearch, I don't really see the need for it in your use case. I think you can use one index. Simply create an additional child type followers for the main user record. Every time user A follows user B, create a child record of B with the following content: {"followed_by" : "A"}. To get the list of users the current user is following, you can simply add a Has Child Filter to your query.
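Roughly, this is how it can look on a recent Elasticsearch version, where parent/child relations are modelled with a join field; a minimal sketch, assuming illustrative index/field names rather than anything from the question:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch()

# A join field ties "follower" child docs to their parent "user" doc
# within a single users index.
es.indices.create(index="users", body={
    "mappings": {
        "properties": {
            "relation": {"type": "join", "relations": {"user": "follower"}},
            "followed_by": {"type": "keyword"},
        }
    }
})

# User B's main record, plus a child record created when A follows B.
es.index(index="users", id="B", body={"name": "B", "relation": "user"})
es.index(index="users", routing="B",  # children must live on the parent's shard
         body={"followed_by": "A", "relation": {"name": "follower", "parent": "B"}})

# All users that A follows: parents having a follower child with followed_by = A.
res = es.search(index="users", body={
    "query": {"has_child": {"type": "follower",
                            "query": {"term": {"followed_by": "A"}}}}
})
```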
I would like to add to Igor's answer that creating thousands of indices on a tiny cluster (one or two nodes) can cause some drawbacks.
Each shard of an index is a full Lucene instance. As a result, you will have many open files (probably too many) if you have a single node (or a small cluster, in terms of nodes).
That's one of the major reasons why I would not define too many indices...
See also the File descriptors section of the installation guide.
Popular search engines are quite performant when it comes to full-text search and many other aspects; however, I am not sure how to map the main document storage system's security policies to ES and/or Solr.
Consider Google Drive and its folders. Users can share any folder - the files and folders below it are then also shared. Content management systems use something similar.
But how to map that to an external search engine (that is, one not built into the application's content management system), especially if there are millions of documents in many tens of thousands of folders, and tens of thousands of users? Will it help if, for example, the depth (nestedness) of the folders is limited to some small number?
I know ES has user roles, but I can't see how they can help here, because access is granted more or less arbitrarily. Another approach is to somehow materialize user access in the documents (folders and documents) themselves, but then changes to users' roles, local to some folder, will result in changing many thousands of documents.
Also, searches can be quite arbitrary and lengthy, so pagination is desirable; fetching "everything" and then sorting out user access on the application side is not an option.
I believe the scenario described is quite common, but I can't find any hints on how to implement it.
I have used Solr as a search engine, with Solr's Data Import Handler (DIH) feature for importing the data from the database into Solr.
I would suggest you go with the approach of indexing the ACLs along with the documents.
I have used the same approach and it is working fine so far.
It is true that you have to re-index the data on the Solr side whenever folder access or document-level access changes. We already need to re-index a document when its metadata or content changes; similarly, we can update the documents on the Solr side for any changes in the ACL (Access Control List).
Why index the ACL along with the document information?
The reason is that whenever a user searches for a document, you can pass the user's ACL as part of the query in the form of a filter query and get back only the documents that are accessible to that user.
I feel this removes the complexity of applying the ACL logic on the back-end side.
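For illustration, a minimal sketch of such a filter query against Solr's HTTP search API; the core name and the acl field are assumptions for the example:

```python
import requests

SOLR = "http://localhost:8983/solr/documents/select"  # illustrative core name

def search_as_user(text, user_id, user_groups):
    # The acl field, indexed with each document, lists the users/groups
    # allowed to see it; the filter query keeps only accessible docs.
    acl_terms = " OR ".join([user_id] + user_groups)
    params = {
        "q": text,
        "fq": "acl:({})".format(acl_terms),  # e.g. acl:(u42 OR sales OR managers)
        "wt": "json",
    }
    return requests.get(SOLR, params=params).json()

results = search_as_user("quarterly report", "u42", ["sales", "managers"])
```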
If you don't index the ACL in Solr, then you have to filter the documents after you retrieve them from Solr, checking each document id against whatever ACL logic applies.
The last option could be to index the documents without ACLs and let the user search all the documents. When the user tries to perform an action on a document, you check the permission and either allow the action or deny it, telling the user they do not have enough permission to access the document.
Actions could be View, Download, Update, etc.
You need to decide which approach suits and works out best in your case.
I have a web app that is used to search and view documents in ElasticSearch.
The goal now is to maintain two values.
1. How many times the document was fetched in total (lifetime views)
2. How many times the document was fetched in last 30 days.
Achieving the first is somewhat possible, but the second one seems to be a very hard problem.
The two values need to be part of the document as they will be used for sorting the results.
What is the best way to achieve this?
To maintain expiring data like that you will need to store each view with its timestamp. I suppose you could store them in an array in the ES document, but you're asking for trouble doing it like that, as the update operation that you'd need to call every time the document is viewed will have to delete and recreate the document (that's how ES does updates), and if two views happen at the same time it will be difficult to make sure they both get stored.
There are two ways to store the views, and make use of them in the query:
Put them in a separate store (which could be a different index in ES if you like), and run a cron job or similar every day to update every item in the main index with the number of views from the last thirty days in the view store (a sketch of this follows below). Even with a lot of data it should be possible to make this quite efficient, depending on your choice of store for views.
Use the ElasticSearch parent/child datatype to store views in the same index as the main documents, as children. I'm not sure that I'd particularly recommend this approach, but I think it should be possible with aggregations to write a query that sorts primary documents by the number of children (filtered by date). It might be quite slow though.
I doubt there is any other way to do this with current versions of ES, because it doesn't support joining across indices. Either the data must be aggregated in advance onto the document, or it has to be available in the same index.
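To make the first option concrete, here is a rough sketch of the daily job, assuming a views index where every view is its own small document with a doc_id (keyword) and a ts timestamp; all index and field names are illustrative:

```python
from datetime import datetime, timedelta
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch()

def refresh_recent_view_counts():
    since = (datetime.utcnow() - timedelta(days=30)).isoformat()
    # Count views per document over the last 30 days.
    agg = es.search(index="views", body={
        "size": 0,
        "query": {"range": {"ts": {"gte": since}}},
        # For very large result sets a composite aggregation, which can be
        # paged, would be safer than a plain terms aggregation.
        "aggs": {"per_doc": {"terms": {"field": "doc_id", "size": 10000}}},
    })
    # Write the rolling counter back onto the main documents.
    actions = [
        {"_op_type": "update", "_index": "documents", "_id": b["key"],
         "doc": {"views_30d": b["doc_count"]}}
        for b in agg["aggregations"]["per_doc"]["buckets"]
    ]
    bulk(es, actions)
```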
The database structure for my Python application is very similar to Instagram's. I have users, posts, and users can follow each other. There are public and private accounts.
I am indexing this data in ElasticSearch and searching works fine so far. However, there is a problem: search returns all posts without filtering by whether the user has access to them (e.g. a post created by another user who has a private account and whom the current user isn't following).
My data in ElasticSearch is indexed simply across several indexes in a flat format, one index for users, one for posts.
I can post-process the results that ElasticSearch returns and remove posts that the current user doesn't have access to, but this introduces an additional query to the database to retrieve that user's followers list, and possibly a blocklist (I don't want to show posts between users who block each other, either).
I can also add a list of follower IDs for each user to ElasticSearch upon indexing and then match against them, but in cases where a user has thousands of followers these lists will be huge, and I am not sure how practical it will be to keep them in ElasticSearch.
How can I efficiently do this? My stack is a Python + Flask backend, a PostgreSQL database, and ElasticSearch as the search index.
Maybe you already found a solution...
Using elastic "terms lookup" can solve this problem if you have an index with the list of followers you can filter on, as you said here:
I can also add a list of follower IDs for each user to ElasticSearch upon indexing and then match against them, but in cases where a user has thousands of followers these lists will be huge, and I am not sure how practical it will be to keep them in ElasticSearch.
More details in the doc:
https://www.elastic.co/guide/en/elasticsearch/reference/7.5/query-dsl-terms-query.html#query-dsl-terms-lookup
Note that there's a limit of 65,536 terms (but it can be raised via the index.max_terms_count setting), so if your service doesn't have millions of users the default limit will be fine.
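A query using a terms lookup could look roughly like this, assuming each user document keeps a following array with the ids of the users they follow (index and field names are illustrative):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

# At query time, Elasticsearch fetches the "following" array from the
# searching user's document and uses it as the list of allowed authors.
res = es.search(index="posts", body={
    "query": {
        "bool": {
            "must": {"match": {"text": "some search terms"}},
            "filter": {
                "terms": {
                    "author_id": {
                        "index": "users",
                        "id": "u1",          # the searching user's doc id
                        "path": "following"  # field holding followed user ids
                    }
                }
            }
        }
    }
})
```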
I'm using ElasticSearch 7.1.1 as a full-text search engine. At the beginning all the documents are accessible to every user. I want to give users the possibility to edit documents. The modified version of the document will be accessible only to the editor and everyone else will only be able to see the default document.
To do this I will add two arrays to every document:
An array of users excluded from seeing the doc
An array with the only user that can see this doc
Every time someone edits a document I will:
Add the user who made the edit to the excluded users list
Create a document containing the edit, available only to that user.
This way in the index I'll have three types of documents:
Documents accessible to everyone
Documents accessible to everyone except some users
Documents accessible only to a specific user
I use ElasticSearch not only to fetch documents but also to calculate live aggregations (e.g. sums of some field), so at query time I need to be able to fetch the user-specific set of documents.
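For context, this is the kind of filter the two arrays imply; a sketch with made-up field names, which can be attached to both searches and aggregation requests:

```python
def visibility_filter(user_id):
    # A document is visible if the user is explicitly allowed to see it,
    # or if it has no allowed-users restriction and the user is not excluded.
    return {
        "bool": {
            "should": [
                {"term": {"allowed_users": user_id}},
                {"bool": {"must_not": [
                    {"exists": {"field": "allowed_users"}},
                    {"term": {"excluded_users": user_id}},
                ]}},
            ],
            "minimum_should_match": 1,
        }
    }

# Combine with any query or aggregation, e.g.:
query = {"query": {"bool": {"must": {"match_all": {}},
                            "filter": visibility_filter("u42")}}}
```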
I don't expect a lot of edits, less than 1% of the total documents.
Is there a smarter, and less query intensive, way to obtain the same results?
You could implement document-level security.
With that you can define roles that restrict read access to certain documents that match a query (e.g. you could use the id of the document).
So instead of updating the documents each time, as in your proposed array solution, you would update the role, granting the roles to the particular users. This would of course require that every user has an Elasticsearch user.
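As a sketch, creating such a role through the security API could look roughly like this (role, index, and field names are made up; document level security requires Elasticsearch's security features to be enabled and appropriately licensed):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

# The role's query restricts read access to the default documents plus
# the edited versions owned by one particular user.
es.security.put_role("docs_for_alice", body={
    "indices": [{
        "names": ["documents"],
        "privileges": ["read"],
        "query": {
            "bool": {
                "should": [
                    {"term": {"visibility": "public"}},  # default documents
                    {"term": {"owner": "alice"}},        # her edited versions
                ]
            }
        },
    }]
})
```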
As far as I know, this feature is the only thing Elasticsearch brings to the table "out of the box" to fulfill your requirements.
I hope I could help you.
We have a SAAS product where companies create accounts and populate their own private data. We are thinking about using ElasticSearch to allow the customer to search all their own data in our system.
As an example we would have a free text search where the user can type anything and the API would return multiple different types of objects. E.g. they type John and the API returns the user object for users matching a first name containing John, or an email containing John. Or it might also return a team object where the team name matches John (e.g. John's Team) etc.
So my questions are:
1. Is ElasticSearch a sensible choice for what we want to do, from a concept perspective?
2. If we did use ElasticSearch, what would be the best way to index the data so we can search all data for a particular customer? Does each customer have its own index?
3. Are there any hints on how we keep ElasticSearch in sync with the data in the database (DynamoDB)? If we index the data for a customer and then update it as it changes, is it sensible to also reindex the data on a scheduled basis?
Thanks!
I will try to provide general answers from my own experience with split customer data in ElasticSearch:
If you want to search through a lot of data really fast, ES is a really good solution for this - it comes at the cost of a secondary data store that you will have to keep in sync with your database.
You can't have different data types in one index, so you would either create one index per data type and customer (careful: indices come with an overhead - avoid creating too many with little data in them), or create one index per data type and add a property to your data that you can then filter on, e.g. a customer number.
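To illustrate the second variant, a sketch of a per-customer search with a filter on an assumed customer_id property (all index and field names are examples):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

def search_customer(index, customer_id, text):
    return es.search(index=index, body={
        "query": {
            "bool": {
                "must": {"multi_match": {"query": text,
                                         "fields": ["name", "email"]}},
                # The filter guarantees a customer only ever sees its own data.
                "filter": {"term": {"customer_id": customer_id}},
            }
        }
    })

hits = search_customer("users", "cust-123", "John")
```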
You will have to denormalize your data as much as possible to benefit from ElasticSearch.
As mentioned in 1, you will need to keep both in sync - there are plenty of ways to do that. As an example, we use an event-driven approach to push critical updates into ElasticSearch as soon as possible (careful: it's not SQL, so you will always have some concurrency issues when you need read and write safety). For data that is not highly critical we use jobs that update it regularly. When you index a document with the same id, it gets completely replaced.
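A minimal sketch of that sync idea; function and index names are illustrative, and with DynamoDB the event source could be e.g. a DynamoDB Streams consumer:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Called from the event handler that reacts to a database change: indexing
# with the same id completely replaces the previous version of the document.
def on_record_changed(record):
    es.index(index="users", id=record["id"], body=record)

# A scheduled job can re-push everything periodically to repair missed events.
def nightly_reindex(fetch_all_records):
    for record in fetch_all_records():
        es.index(index="users", id=record["id"], body=record)
```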
Hope this helps, feel free to ask questions.