I am converting a normal working application into a multi-tenant application. I currently have a single mail server configured, but I cannot figure out a way to configure a separate mail server per tenant.
I should be able to add a mail server on the fly when a new tenant is added.
Please help.
We can think of two options.
Option 1
You can create a table called TenantSettings in which you store key-value pairs of data, with the value column holding plain data, JSON, or any other format.
You can store the data in the format below, or as individual key-value pairs such as SMTPServer, SMTPPort, etc.
Id | TenantId | Setting    | Value          | CreatedBy | CreatedOn | ...
---+----------+------------+----------------+-----------+-----------+----
1  | 123      | SMTPConfig | {SMTPServer..} |           |           |
2  | 124      | SMTPConfig | {SMTPServer..} |           |           |
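As a rough sketch of how the sender might use this (assuming a relational database and a JSON value column; the table, column, and config key names are illustrative, not a prescribed schema):

    import json
    import smtplib
    import sqlite3  # stands in for whatever database you actually use
    from email.message import EmailMessage

    def load_smtp_config(conn, tenant_id):
        """Fetch the tenant's SMTPConfig row and parse the JSON value."""
        row = conn.execute(
            "SELECT Value FROM TenantSettings"
            " WHERE TenantId = ? AND Setting = 'SMTPConfig'",
            (tenant_id,),
        ).fetchone()
        if row is None:
            raise LookupError(f"No SMTP config for tenant {tenant_id}")
        return json.loads(row[0])  # e.g. {"SMTPServer": ..., "SMTPPort": ...}

    def send_mail_for_tenant(conn, tenant_id, to_addr, subject, body):
        """Send through the tenant's own server, resolved at send time."""
        cfg = load_smtp_config(conn, tenant_id)
        msg = EmailMessage()
        msg["From"] = cfg["FromAddress"]
        msg["To"] = to_addr
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP(cfg["SMTPServer"], cfg["SMTPPort"]) as smtp:
            smtp.starttls()
            smtp.login(cfg["SMTPUser"], cfg["SMTPPassword"])
            smtp.send_message(msg)

Because the config is just a row in a table, adding a mail server for a new tenant is a plain INSERT - no redeploy needed.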
Option 2
If you have a database per tenant, you can store the tenant-specific settings inside the tenant's own database, so that once the tenant is identified you can load all of its settings and use them wherever required.
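A minimal sketch of that flow (the tenant-to-database mapping and the Settings table are assumptions about your setup):

    import sqlite3  # stands in for your per-tenant databases

    # assumed mapping from tenant id to its own database/connection string
    TENANT_DB = {"123": "tenant_123.db", "124": "tenant_124.db"}

    def load_tenant_settings(tenant_id):
        """Once the tenant is identified, read all settings from its own database."""
        with sqlite3.connect(TENANT_DB[tenant_id]) as conn:
            rows = conn.execute("SELECT Setting, Value FROM Settings").fetchall()
        return dict(rows)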
If you can share more details on your current models, we can work out the right approach.
I am building an inventory service. All tables keep track of the owner of each record in a createdBy column, which stores the user ID.
The problem is that this service does not hold user info, so it cannot map the ID to a username, which the FE needs in order to display the data.
Calling the user service to map user IDs to usernames on every request does not make sense in terms of decoupling and performance, because one request can ask for up to 100 records. If I store the username instead of the ID, there will be a problem when a user changes their username.
Is there any better way or pattern to solve this problem?
I'd extend the inventory records with the data needed from the user service.
A username is a slowly changing dimension, so most of the time the data is correct (i.e. it is "safe to cache").
Now we get to what to do when user info changes - this is, of course, a business decision. In some places it makes sense to keep the original info (for example, when a user is deleted - do we still want to keep the original username, and whatever other info, of whoever created the item?). If that is not the case, you can use several strategies, depending on the required freshness: a daily (or whatever period) job that refreshes the user info from the user service for all users referenced in the inventory; a daily summary of changes published by the user service, which the inventory subscribes to; or changes published as they happen, with the inventory subscribing to that stream (see the sketch below). The technology to use depends on the strategy.
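As a minimal sketch of the last strategy (the event shape and the local user_names table are assumptions, not a prescribed design), the inventory service keeps a small read model of usernames and updates it whenever the user service publishes a change:

    import json
    import sqlite3  # local read model; use your service's own database in practice

    def ensure_schema(conn):
        conn.execute(
            "CREATE TABLE IF NOT EXISTS user_names"
            " (user_id TEXT PRIMARY KEY, username TEXT)"
        )

    def handle_user_changed(conn, event_body):
        """Upsert the local username copy from a user-changed event."""
        event = json.loads(event_body)  # assumed shape: {"userId": ..., "username": ...}
        conn.execute(
            "INSERT INTO user_names (user_id, username) VALUES (?, ?)"
            " ON CONFLICT(user_id) DO UPDATE SET username = excluded.username",
            (event["userId"], event["username"]),
        )
        conn.commit()

    def username_for(conn, user_id):
        """Resolve a createdBy id to a display name without calling the user service."""
        row = conn.execute(
            "SELECT username FROM user_names WHERE user_id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None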
In my opinion, what you have done so far is correct. Inventory-related data should be the Inventory Service's responsibility, just like user-related data should be the User Service's.
It is the FE's responsibility to fetch from the User Service the user details required to populate the UI. (Remember, calling the backend once per user is not acceptable at all; a bulk search is more suitable - see the sketch below.)
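A rough sketch of that bulk lookup (the /users/bulk endpoint and its payload shape are assumptions about your User Service, not an existing API):

    import requests

    USER_SERVICE_URL = "http://user-service"  # assumed base URL

    def fetch_usernames(user_ids):
        """One round trip for the whole page instead of one call per user."""
        response = requests.post(
            f"{USER_SERVICE_URL}/users/bulk",
            json={"ids": sorted(set(user_ids))},  # dedupe the IDs first
            timeout=5,
        )
        response.raise_for_status()
        # assumed response shape: [{"id": ..., "username": ...}, ...]
        return {u["id"]: u["username"] for u in response.json()}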
What you can do is, when you fetch inventory data from the Inventory Service, publish a message to the User Service to notify it that "inventory data was fetched for these users, so user data for them is likely to be fetched next - better cache them."
PS - I'm not an expert in microservices architecture. Please add any counter-arguments if you have any.
I have created a custom entity in Common Data Service (CDS) that streams in data from a survey.
However, I need to give various people access to the data. My dataset has a column called community, which should determine who has access to which records, based on the community entered in that column.
How exactly can I filter the data after it has streamed in, to ensure I only give access to people of a particular community? And yes, the communities are exclusive: no one person can be in two different communities.
I want to filter by community, such that those in community A see only A's data and not B's or C's.
There's no straightforward one-step OOB configuration to achieve this, because the row-level security depends on a column value, i.e. the community field value of each record.
One way is to create owner teams and add the users to the right teams; each custom entity record then has to be owned by the respective team. The owner team of each record can be filled/assigned automatically on create, based on the community field value, using a plugin/workflow/Flow (see the sketch below).
Most important: in the security role for that custom entity, the read privilege has to be granted at user level only. Assign the security role to the teams.
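As a hedged sketch of the assignment step (the entity set name new_surveyitems, the field names, and the team GUIDs are all illustrative; this shows the Dataverse Web API call from an external job, though the same logic can live inside a plugin or Flow):

    import requests

    API = "https://yourorg.api.crm.dynamics.com/api/data/v9.2"  # assumed org URL
    HEADERS = {
        "Authorization": "Bearer <access-token>",  # acquired via OAuth as usual
        "Content-Type": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    }

    # assumed mapping from community value to owner-team GUID
    TEAM_BY_COMMUNITY = {
        "A": "11111111-1111-1111-1111-111111111111",
        "B": "22222222-2222-2222-2222-222222222222",
    }

    def assign_to_community_team(record_id, community):
        """Re-own the record to the team that matches its community value."""
        team_id = TEAM_BY_COMMUNITY[community]
        requests.patch(
            f"{API}/new_surveyitems({record_id})",
            headers=HEADERS,
            json={"ownerid@odata.bind": f"/teams({team_id})"},
        ).raise_for_status()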
I have a multi-user website, and each user has their own data, which I can store on S3.
I want to integrate (embed) QuickSight into my website in such a way that each user is able to see their own data.
I want to have a single analysis, so that I can modify it once for all users.
Are there some recommendations on how to achieve this?
Firstly, you will need to add the user's identifier (email, name, generated ID, whatever) to each row that belongs to them in the S3 data. I'm kind of assuming that you are storing the data in a tabular format (e.g. CSV) but let me know if I'm wrong. So let's assume you added this user identifier as a new column called userId.
Secondly, you will need to generate a manifest file that points to all of your users' S3 files.
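A minimal manifest might look like this (bucket name and prefix are placeholders):

    {
        "fileLocations": [
            { "URIPrefixes": [ "s3://your-bucket/user-data/" ] }
        ],
        "globalUploadSettings": {
            "format": "CSV",
            "containsHeader": "true"
        }
    }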
Then, create a new data set, pointing to that manifest.
Then, you will need to create another new data set that ties a QuickSight UserName to the new userId column you have added. You will need to maintain this data set somehow, but fortunately the QuickSight UserName has a pattern to it (something like embed_role\user_name).
An example of this new data set might look like
UserName,userId
your_embed_role\user3479125,user3479125
Once you have this data set, you can attach it to the S3 data set created earlier as row-level security (RLS). You can think of QuickSight as performing an inner join on userId between the RLS data set and the actual visual data set, thus limiting the data to the given UserName.
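If the mapping comes from your own user store, a small job can rebuild and upload the RLS file whenever users change (bucket, key, and role name are assumptions):

    import csv
    import io
    import boto3  # standard AWS SDK; put_object is a plain S3 upload

    EMBED_ROLE = "your_embed_role"  # assumed role used for embedded sessions

    def build_rls_csv(user_ids):
        """Produce the UserName,userId mapping in the shape shown above."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["UserName", "userId"])
        for uid in user_ids:
            writer.writerow([f"{EMBED_ROLE}\\{uid}", uid])
        return buf.getvalue()

    def upload_rls(user_ids, bucket="your-quicksight-bucket", key="rls/permissions.csv"):
        boto3.client("s3").put_object(
            Bucket=bucket, Key=key, Body=build_rls_csv(user_ids).encode("utf-8")
        )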
I'm basically trying to create a primary ID between CRM and QuickBooks. I figured I'd just use the existing PK in CRM for the lookup. I'd like the PK to be visible to the user, but not editable in CRM.
This has presented several problems, in that you can't do that out of the box. I thought I read somewhere that you could, either via a business rule or a calculated field, but I haven't had luck with that.
It sounds like it would require web resources if I were to go this route.
The other option would be to just generate unique values for every record in Accounts and Contacts.
Does this automatically populate existing records or just new records? How do I get it to populate existing records?
You can use Auto Number Manager to configure an auto-number attribute on any entity. It seeds a number based on the configured format for new records; uniqueness is assured by the SQL sequence feature, and no extra plugin/workflow is needed.
For existing records, you can design a workflow along with a temp entity to assign the auto-number. Read more.
Otherwise, you can use an SSIS + KingswaySoft package to generate the auto-number and assign it to existing records.
I suggest creating a new text field on the entity and a pre-operation plugin that gets the record's primary GUID from the context and sets that GUID into the newly added attribute. You can set this field as read-only on the form as well.
Or you can generate a new GUID in the plugin instead.
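For the "existing records" part of the question, a one-off script against the Web API can backfill the new field with each record's own GUID (the org URL and the new_externalid field name are placeholders for whatever you create):

    import requests

    API = "https://yourorg.api.crm.dynamics.com/api/data/v9.2"  # assumed org URL
    HEADERS = {
        "Authorization": "Bearer <access-token>",
        "Content-Type": "application/json",
    }

    def backfill_accounts():
        """Copy each account's GUID into the new read-only text field."""
        url = f"{API}/accounts?$select=accountid&$filter=new_externalid eq null"
        while url:
            page = requests.get(url, headers=HEADERS).json()
            for record in page["value"]:
                rid = record["accountid"]
                requests.patch(
                    f"{API}/accounts({rid})",
                    headers=HEADERS,
                    json={"new_externalid": rid},
                ).raise_for_status()
            url = page.get("@odata.nextLink")  # follow server-driven paging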
I want to implement a function to migrate a user from one tenant to another.
I know this could be achieved simply by changing the user's tenant ID.
The problem is that I have orders that record this user's ID and tenant ID; if I simply update the user's tenant ID, I won't be able to locate the user from the data recorded in those orders.
If I have to update all the orders during the migration process, it could take a long time on a large database.
Since the default PK type in ABP is long, is it possible to replace it with a GUID, or are there other options I could choose?
Thanks.