How to model this multi-tenancy use case? - elasticsearch

We are a multi-tenant platform.
The platform has a construct called Entity.
Users can create entities to model any real-life object, e.g. Customers, Orders, Payments, Inventory, Carts, pretty much anything.
Each entity will have its set of attributes, for example, a customer entity can have: name, email, phone, address (can be another nested entity), etc.
The requirement is to provide query/OLAP capabilities on these entities. For example, find all customers where name = 'john'.
The requirement includes all types of queries such as DATE RANGE, CONTAINS, LIKE, NUMERIC RANGE, FULL-TEXT Queries, etc. We also need Sorting, Aggregation, Pagination features.
Current design
We use elasticsearch to store entity data.
Each tenant is assigned a separate index.
When an entity is created in a tenant, the corresponding mappings are created inside the associated index. The mappings have roughly the following form:
{
  "properties": {
    "Customer": {
      "properties": {
        "name": {
          "values": {
            "type": "text"
          }
        },
        "age": {
          "values": {
            "type": "integer"
          }
        }
      }
    },
    "Order": {
      "properties": {
        "id": {
          "values": {
            "type": "text"
          }
        },
        "eta": {
          "values": {
            "type": "integer"
          }
        }
      }
    }
    // ... other entities of this tenant
  }
}
Major Problems with this design
Ever-growing mappings.
Frequent mapping updates keep the nodes busy circulating cluster state updates, leading to search/indexing latencies and occasional timeouts.
Existing mappings can't be altered when required; we have to go through the entire re-index procedure.
This design served us for a few years, until these issues recently started surfacing.
What would be a good design to model the above multi-tenancy requirement? Which database solution and schema modeling will be appropriate?

If you decide to stick with ES, you'll need to give your tenants more than one index, i.e. one index per tenant/entity instead of one index per tenant. The reasons for this are exactly the ones you mentioned: the ever-growing mappings and the difficulty of updating existing mappings.
You'll certainly end up with more indexes (N tenants x M entities), and the challenge will be to size them properly, i.e. decide how many primary shards each entity's index needs. But sizing is arguably even harder now, with all entities stored in a single index per tenant, so fanning each tenant index out into several will turn out to be easier.
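To make this concrete, here's a minimal sketch of the fan-out, assuming a hypothetical tenant acme and reusing the field types from your current mapping (index names are illustrative only):

PUT acme-customer
{
  "mappings": {
    "properties": {
      "name": { "type": "text" },
      "age": { "type": "integer" }
    }
  }
}

PUT acme-order
{
  "mappings": {
    "properties": {
      "id": { "type": "text" },
      "eta": { "type": "integer" }
    }
  }
}

Each index now carries a small, flat mapping that can evolve (or be re-indexed) independently of the tenant's other entities.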
Another option is to come up with a very generic mapping that only contains typed fields like int_field_1, int_field_2, text_field_1, text_field_2, etc., and to keep a per-tenant mapping between the generic field names and the tenant-specific field names:
name -> text_field_1
age -> int_field_1
...
That way you have fewer mappings to manage and it's more flexible in terms of what kind of data you can accommodate, but it comes at the price of keeping the above field mapping up to date.
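A hedged sketch of what such a generic index could look like (the index name and the number of typed fields are made up; you'd size the pools of typed fields to your needs):

PUT tenant-acme
{
  "mappings": {
    "properties": {
      "text_field_1": { "type": "text" },
      "text_field_2": { "type": "text" },
      "int_field_1": { "type": "integer" },
      "int_field_2": { "type": "integer" },
      "date_field_1": { "type": "date" }
    }
  }
}

At query time, your application translates name = 'john' into a query on text_field_1 using the per-tenant translation table.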
In any case, you'll end up needing more indexes per tenant in order to make their mappings easier to manage and to keep their size at bay. It will also allow you to scale better, because it's easier to spread many smaller indices over your data nodes than a few very big ones, especially given the new sizing recommendations made by Elastic.

Related

How to join indexes in Elasticsearch

I have two indexes: Student
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text"
      }
    }
  }
}
and University, which has a one-to-many relationship with Student.
How do I declare the mappings for University?
While you can use the Join field type here, you should really beware of trying to use Elasticsearch as a relational database. It doesn't really support joins without a big performance tax and without some limitations (the parent and child must be indexed into the same shard).
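For completeness, a minimal sketch of what a join-field mapping could look like (modern syntax; the index and relation names are assumptions, and note that both universities and students must live in the same index):

PUT university
{
  "mappings": {
    "properties": {
      "name": { "type": "text" },
      "relation": {
        "type": "join",
        "relations": { "university": "student" }
      }
    }
  }
}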
Usually the answer would be to de-normalize the relation. For example, in this case put some of the University fields directly in the Student document, allowing you to search on them directly.
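For example, a denormalized Student document might look like this (field names are hypothetical):

PUT students/_doc/1
{
  "name": "Jane Doe",
  "university_id": "u42",
  "university_name": "Example University"
}

A search on university_name then needs no join at all.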

Elasticsearch: document relationship

I'm building an Elasticsearch autocomplete-as-you-type feature.
I'm using cool features like ngrams and other stuff to create the needed analyzer.
Currently I'm breaking my head over indexing the following data.
Let's say I have a Payments type;
each document in this type looks like this
{
  // ...elastic metadata...
  "paymentId": 123453425342,
  "providerAccount": {
    "id": 123456,
    "firstName": "Alex",
    "lastName": "Web"
  },
  "consumerAccount": {
    "id": 7575757,
    "firstName": "John",
    "lastName": "Doe"
  },
  "amount": 556,
  "date": 342523454235345 // some unix timestamp
}
so basically this document represents not only the payment itself but also the payment's relationships: the 2 entities related to it.
A payment always has its provider and consumer.
I need this data in the payment document because I want to show it in the UI.
By indexing it like this, handling updates of a Consumer or Provider might become a big pain, because each time one of them changes its properties I have to update all the payments that reference that entity.
Another possible solution is to store only the ids of these consumers/providers, query the payments, and then make 2 more queries to retrieve the needed entity fields. But I'm not sure about this, because I'm issuing AJAX requests each time a character is entered, so the performance question comes up.
I have also looked into the parent/child relationship solution, which basically fits my case, but I wasn't able to figure out whether I can also retrieve the parent (consumer/provider) fields while querying the child (payment).
What would you suggest?
Thanks!
Yes, you can retrieve the parent while querying child using has_child.
Considering payment as the child and consumer as the parent, you can search all the consumers with:
GET /index_name/consumer/_search
{
  "query": {
    "has_child": {
      "type": "payment",
      "query": {
        // any query on the payment type
      },
      "inner_hits": {}
    }
  }
}
This will fetch all the consumers based on the query on the child, i.e. payment in your case.
inner_hits is what you are looking for: it will retrieve the matching children as well. Note that it was introduced in Elasticsearch 1.5.0, so your version needs to be at least that.
You can refer to https://www.elastic.co/blog/elasticsearch-1-5-0-released.
Your problem is not an issue. I suppose you want to freeze the data after the payment, right? So you don't need to update the account data in existing payment documents.
Furthermore: parent/child is easy for updating, but less efficient for querying. For autocomplete, stick with your current mapping!

ElasticSearch content ACL Filtering performance

Following is my content model.
Document(s) are associated with user & group acls defining the principals who have access to the document.
The document itself is a bunch of metadata & a large content body (extracted from pdfs/docs, etc.).
The user performing the search has to be limited to only the set of documents he/she is entitled to (as defined by the acls on the document). He/she could have access to the document owing to user acls or owing to a group the user belongs to.
Both group membership and the acls on the document are highly transient: a user's group membership changes quite often, and so do the ACLs on the document itself.
Approach 1
Store the acls on the document along with its metadata, as a non-stored field. Expand the groups in the ACL into the individual users (since an acl entry can be a group).
At query time, append a filter to the user query: a bool filter that includes only documents with the user id in the acl field
"filter" : {
        "query" : {
            "term": {
                "acls": "1234"
            }
        }
      }
The problem I see with this approach is that documents need to get re-indexed even though the document metadata/content hasn't changed:
Every time a user's group membership changes
Every time the ACL on the document changes (permissions changed for the document)
I am assuming this will lead to a large number of segment creations and merges, especially since the document body (one of the fields of the document) is a pretty large text section.
Approach 2:
This is a modification of approach 1. It attempts to limit updates on the document when the updates are strictly acl-related.
Instead of having the acls defined on the metadata, this approach entails creating multiple types:
In the Document Index:
  Document (with metadata & text body) as the parent:
    id
    text
  userschild document (parent id & user acls only; one will exist for each parent):
    id
    parentid
    useracls
  groupschild document (parent id & group acls only; one will exist for each parent with group acls):
    id
    parentid
    groupacls
In the Users Index:
  User (an entry for each user in the system, with the groups he/she is associated with):
    id
    groups
The idea here is that updates are now localized to the different Elasticsearch entities.
In case of user acl changes, only the userschild document gets updated (avoiding a potentially costly update of the parent document).
In case of group acl changes, only the groupschild document gets updated (again avoiding a potentially costly update of the parent document).
In case of user group membership changes, only the Users index gets updated (again avoiding the update of the parent document).
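A hedged sketch of the mappings this layout implies, using the pre-5.x _parent syntax the rest of the question assumes (index, type and field names are taken from the description above):

PUT documents
{
  "mappings": {
    "document": {
      "properties": {
        "text": { "type": "string" }
      }
    },
    "userschild": {
      "_parent": { "type": "document" },
      "properties": {
        "useracls": { "type": "string", "index": "not_analyzed" }
      }
    },
    "groupschild": {
      "_parent": { "type": "document" },
      "properties": {
        "groupacls": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}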
The query itself will look as follows.
"filter" : {
"query" : {
"bool": {
"should": [
{
"has_child": {
"type": "userschild",
"query": {
"term": {
"users": "1234"
}
}
}
},{
"has_child": {
"type": "groupschild",
"query": {
"terms" : {
"groups" : {
"index" : "users",
"type" : "user",
"id" : "1234",
"path" : "groups"
}
}
}
}
}
]
}
}
}
I have doubts about its scalability owing to the nature of the query involved: it uses two terms queries, one of which has to be built from a separate index. I am considering improving the terms lookup by enabling doc_values on the relevant fields.
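For illustration, enabling doc_values on the lookup field might look like this (pre-5.x syntax; index/type names as above):

PUT users
{
  "mappings": {
    "user": {
      "properties": {
        "groups": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}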
Will approach 2 scale? My concerns are around the has_child query and its scalability.
Could someone clarify my understanding in this regard?
I think this is perhaps overcomplicated by expanding groups before querying. How about leaving group identifiers intact in the Document index instead?
Typically, I'd represent this in two indices (no parent-child, or any sort of nested relationships at all).
Users Index
(sample doc)
{
  "user_id": 12345,
  "user_name": "Swami PR",
  "user_group_ids": [900, 901, 902]
}
Document Index
(sample doc)
{
  "doc_id": 98765,
  "doc_name": "Lunch Order for Tuesday - Top Secret and Confidential",
  "doc_acl_read_users": [12345, 12346, 12347],
  "doc_acl_write_users": [12345],
  "doc_acl_read_groups": [435, 620],
  "doc_acl_write_groups": []
}
That Users index could just as easily be in a database... your app just needs Swami's user_id and group_ids available when querying for documents.
Then, when you query [Top Secret] documents as Swami PR (to read), make sure to add:
"should": [
{
"term": {
"doc_acl_read_users": 12345
}
},
{
"terms": {
"doc_acl_read_groups": [900, 901, 902]
}
},
"minimum_should_match": 1
]
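Putting it together, a full query might look like this (a sketch; the index name and the doc_name match are assumptions):

GET documents/_search
{
  "query": {
    "bool": {
      "must": {
        "match": { "doc_name": "Top Secret" }
      },
      "filter": {
        "bool": {
          "should": [
            { "term": { "doc_acl_read_users": 12345 } },
            { "terms": { "doc_acl_read_groups": [900, 901, 902] } }
          ],
          "minimum_should_match": 1
        }
      }
    }
  }
}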
I can see 2 main types of updates that can happen here:
Users or groups updated on a document := reindex one record in the Document index
User added to/removed from a group := reindex one record in the Users index
With an edge case:
User or Group deleted: here you might want to batch through and reindex all documents periodically to clean out stale user/group identifiers... but theoretically, stale user/group identifiers won't exist in the application anymore, so they don't cause issues in the index.
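A sketch of that periodic cleanup for a deleted user, assuming a recent Elasticsearch with _update_by_query (the user id 99999 is made up; you'd repeat this for the other ACL fields):

POST documents/_update_by_query
{
  "query": {
    "term": { "doc_acl_read_users": 99999 }
  },
  "script": {
    "source": "ctx._source.doc_acl_read_users.removeIf(id -> id == params.uid)",
    "params": { "uid": 99999 }
  }
}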
I have implemented approach #2 at my company, and now I'm researching a graph DB to handle the ACLs. Once we crossed 4 million documents for one of our clients, the updates and search queries became quite frequent and didn't scale as expected.
My suggestion is to look into graph frameworks to solve this.

Laravel 5 Eloquent model dynamically generated at runtime

I am looking to make some sort of "GenericModel" class extending Eloquent's Model class, that can load database configuration (like connection, table name, primary key column) as well as relationships at runtime based on a configuration JSON file.
My reasons for wanting this are as follows: I'm going to have a lot of database tables and thus a lot of models, but most don't really have any complicated logic behind them. I've developed a generic CRUD API and front-end interface to interact with them. Each model has a "blueprint" JSON file associated with it that describes things like its attributes and relationships. This lets me automatically generate, say, a view to create a new model and it knows what attributes I need to fill in, what input elements to use, what to label them, which are mandatory, how to validate, whether to check for uniqueness, etc. without ever needing code specific to that model. Here's an example, project.json:
{
  "db_table": "projects",
  "primary_key": "projectid",
  "display_attr": "title",    // Attribute to display when picking row from list, etc.
  "attributes": {
    "projectid": {            // Attribute name matches column name
      "display": "ID",        // Display this to user instead of db column name
      "data_type": "integer"  // Can be integer, string, numeric, bool...
    },
    "title": {
      "data_type": "string",
      "unique": true          // Check for uniqueness when validating field
    },
    "customer": {
      "data_type": "integer", // Data type of local key, matches customer PK
      "relationship": {       // Relationship to a different model
        "type": "manytoone",
        "foreign_model": "customer"
      },
      "user": "autocomplete"  // User input element/widget to use; queries customer model for matches as user types
    },
    "description": {
      "data_type": "string",
      "user": "textarea",     // Big string, use <textarea> for user input
      "required": false       // Can be NULL/empty, default true
    }
  },
  "views": {
    "table": [                // Show only these attributes when viewing table
      "customer",
      "title"
    ],
    "edit_form": [            // Show these when editing
      "customer",
      "title",
      "description"
    ],
    ...
  }
}
This works extremely well on the front end; I don't need any more information than this to describe how my models behave. The problem is I feel like I end up writing all of this over again in most of my Model classes, and it seems much more natural to have them pull their information from the blueprint file as well. That would keep the information in one place rather than two, and would avoid extra effort and possible mistakes when I change a database table and only need to update one file to reflect it.
I'd really just like to be able to do something like GenericModel::blueprint('project.json')->find($id) and get a functioning "project" instance. Is this possible, or even advisable? Or is there a better way to do this?
Have you looked at Migrations (formerly Schema Builder)? It allows you to programmatically build your database tables (from JSON if necessary).
Then you could leverage Eloquent for your queries...

Elasticsearch with multiple parent/child relationship

I'm building an application with a complicated model, say Book, User and Review.
A Review contains both a Book id and a User id.
To be able to search for Books that contain at least one review, I've set Book as Review's parent and have routing set up accordingly. However, I also need to find Users who wrote reviews that contain certain phrases.
Is it possible to have both Book and User as Review's parents? Is there a better way to handle such a situation?
Note that I'm not able to change the way the data is modeled / not willing to do so, because the data is transferred to Elasticsearch from a persistence database.
As far as I know you can't have a document with two parents.
My suggestion, based on the Application-side join chapter of Elasticsearch: The Definitive Guide:
Create a parent/child relationship Book/Review.
Be sure you have a user_id property in the Review mapping containing the id of the user who wrote that review.
I think that covers both use cases you described, as follows:
Books that contain at least one review: can be solved with a has_child filter/query.
Users who wrote reviews that contain certain phrases: can be solved by querying reviews with the phrase you want to search and aggregating on the user_id field (sketched below). If you need user information, you have to query your database (or another Elasticsearch index) with the ids retrieved.
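A sketch of that second case; note I'm using a terms aggregation rather than cardinality so the distinct user_id values are actually returned rather than just counted (index/field names are assumptions):

GET index/reviews/_search
{
  "size": 0,
  "query": {
    "match_phrase": { "text_review": "certain phrase" }
  },
  "aggs": {
    "reviewers": {
      "terms": { "field": "user_id", "size": 100 }
    }
  }
}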
Edit: "give me the books that have reviews this month written by user whose name started with John"
I recommend you to collect all those advanced uses cases and denormalize the data you need to achieve them. In this particular case it's enough with denormalizing the user name into Review. In any case elasticsearch people has written about managing relations in their blog or elasticsearch the definitive guide
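A hedged sketch of that example query once the user name is denormalized into Review (field names like review_date and user_name are assumptions):

GET index/books/_search
{
  "query": {
    "has_child": {
      "type": "reviews",
      "query": {
        "bool": {
          "must": [
            { "range": { "review_date": { "gte": "now-1M" } } },
            { "prefix": { "user_name": "john" } }
          ]
        }
      }
    }
  }
}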
Something like this (just make the Books type the parent of the Users and Reviews types):
.../index/users/_search?pretty" -d '
{
  "query": {
    "filtered": {
      "filter": {
        "and": [
          {
            "has_parent": {
              "parent_type": "books",
              "filter": {
                "has_child": {
                  "type": "Reviews",
                  "query": {
                    "term": {
                      "text_review": "some word"
                    }
                  }
                }
              }
            }
          }
        ]
      }
    }
  }
}
'
You have two options:
Elasticsearch Nested Objects
Elasticsearch parent & child
Both are compared and evaluated nicely here.
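For reference, a minimal sketch of the nested-objects option (modern syntax; field names are assumptions):

PUT books
{
  "mappings": {
    "properties": {
      "title": { "type": "text" },
      "reviews": {
        "type": "nested",
        "properties": {
          "user_id": { "type": "keyword" },
          "text_review": { "type": "text" }
        }
      }
    }
  }
}

Nested reviews are re-indexed with the parent book on every change, whereas parent/child keeps them independent at the cost of more expensive queries.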
