Is there a way to create a database and seed data in ASP.NET Core 2 when changing the connection string? (multi-tenant)

Is there a way to create the database and seed data in ASP.NET Core 2 when changing the connection string through the OnConfiguring method of the DbContext?
I have designed my app for multi-tenancy (multi-database model) and should be able to register tenants dynamically, each with its own connection string. Now my problem is: how can I create the database and seed data dynamically without restarting the app?
[OnConfiguring screenshot]

Basically, provisioning a database for a new tenant takes some time, so you could follow the steps below:
Create the new tenant in your application code.
Post a message to a service bus with the necessary information such as the tenant id / name.
Have a job (typically a WebJob) listen for those messages, restore a new database from a master dacpac or script, and finally rename the database for the tenant.
Push a message back via the service bus to let the application know about the database.
On receipt of that message, the application updates the connection details for the tenant in its database.
As another option, you can take a look at the Azure shard map for tenant database sharding.
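For the create-and-seed part of that flow, here is a minimal sketch of what the provisioning job could run, assuming EF Core; TenantDbContext, its options constructor, the Settings DbSet and the seed values are illustrative names, not taken from the answer above:

using System.Linq;
using Microsoft.EntityFrameworkCore;

// Hypothetical helper: builds the tenant database from the EF Core migrations
// and seeds initial data at runtime, without restarting the application.
public static void ProvisionTenant(string tenantConnectionString)
{
    var options = new DbContextOptionsBuilder<TenantDbContext>()
        .UseSqlServer(tenantConnectionString) // provider choice is an assumption
        .Options;

    using (var context = new TenantDbContext(options))
    {
        // Creates the database if it does not exist and applies pending migrations.
        context.Database.Migrate();

        // Seed only when the tenant database is still empty.
        if (!context.Settings.Any())
        {
            context.Settings.Add(new Setting { Key = "Theme", Value = "Default" });
            context.SaveChanges();
        }
    }
}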
HTH

Related

How to run laravel queue jobs in multiple databases?

I have multiple databases in my project: based on the company, we give each company its own database. I am developing automation workflows in my current project, and I planned to implement the queue-jobs concept to achieve this.
We also maintain one more database which contains the list of all databases (and the companies using them). I am a little confused about how to approach this kind of scenario: should I maintain the jobs table in my commonDatabase, or create a jobs table inside each database separately?
Note: every time a user tries to log in he has to give the company name (we send it in the headers of all requests), which indicates the database name.
My doubts are:
I created a jobs table in each database, but it's not inserting records into the particular database; instead it's inserting into the commonDatabase jobs table?
What is the best way to achieve this kind of scenario?
If a user logs out, will the queue still run in the background or not?
What I understand from your question is that you want to convert your project to multi-tenant, multi-database, and every company will generate a request to build a tenant for them. The answers to your questions follow:
I created a jobs table in each database, but it's not inserting records into the particular database; instead it's inserting into the commonDatabase jobs table?
I would recommend you watch this YouTube playlist.
If the job is related to a company process, e.g. you want to process a company's invoice email, then you dispatch the job on that company's database; and if the job is related to the commonDatabase, e.g. you want to configure a company database and run migrations & seeders into it, then it should be dispatched on the commonDatabase.
If a user logs out, will the queue still run in the background or not?
Yes, the queue will still run in the background, because the queue worker runs on the server and has no connection to the login session or any other authentication medium. You should read the following articles/threads:
Official Laravel Doc on queue
How to setup laravel queue worker

EF Core Migration - multiple databases

Is there a way to run EF Core migrations on multiple databases having the same set of tables? This is for a multi-tenancy architecture where there's a master database (holding metadata about all tenant databases, including each tenant's connection string) and one database per tenant having the same set of database objects. We need to be able to run these migrations when a new tenant database is created automatically in the SaaS model, and also run them whenever there are changes to the database structure (new columns, data type changes, new indexes, etc.).
I've posted this exact same question on EF Core's GitHub.
The answer is, it can't be done at design time. You basically need to run your migration scripts manually on each tenant's database.
Executing migrations at runtime, however, is easy. You can instantiate a DbContext for each of your connection strings when your app launches (before WebHost.Run() if it's a web app) and execute your migrations like this: dbContext.Database.Migrate();
This is not ideal, of course, because it makes it harder for you to roll back your migrations to a certain point from the Visual Studio Package Manager Console or the CLI using dotnet ef commands.
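A minimal sketch of that runtime approach; the tenant connection strings would come from the master database, and AppDbContext and the helper name are illustrative, not from the answer above:

using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

// Applies any pending migrations to every tenant database at startup.
// Call this before WebHost.Run(), passing the connection strings read
// from the master database.
public static void MigrateAllTenants(IEnumerable<string> tenantConnectionStrings)
{
    foreach (var connectionString in tenantConnectionStrings)
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlServer(connectionString) // provider choice is an assumption
            .Options;

        using (var context = new AppDbContext(options))
        {
            // Creates the database if it is missing, then applies pending migrations.
            context.Database.Migrate();
        }
    }
}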
The CLI command can be provided a connection string. So you could run it once per db, providing the connection string for each.
The command would look like this:
dotnet ef database update --connection "Server=client1.db;Database=client1"
Our team has about 10 developers. Our application is one front end connected to 20 databases (same schema), and a new database is added whenever there is a new client. From time to time someone needs to update the DB schema, and we ended up doing this:
If you need a schema change, create a SQL script and send the change request by email.
Only one person in the team runs those scripts and updates the database access layer.
git push
Tell the team dinner is ready.
The person doing this created an EXE project for DB migration. He keeps adding scripts to a folder, so the folder contains all the scripts:
0001.InitTables.sql
0002.MoreTabels.sql
0003.UpdateDropdowns.sql
.
.
.
Then he uses a library like DbUp (https://dbup.readthedocs.io/en/latest/) to help track those scripts and run them on the DB server.
He runs it against the DEV server first; on the release date, he runs it against production.
using System.Collections.Generic;
using System.Reflection;
using DbUp;

// Apply the embedded SQL scripts to every client database in turn.
var connectionStrings = new List<string>
{
    "ConStr1", "ConStr2", "ConStr3"
};

foreach (var conStr in connectionStrings)
{
    var upgrader = DeployChanges.To
        .SqlDatabase(conStr)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
        .LogToConsole()
        .Build();

    // DbUp records which scripts have already run, so only new ones execute.
    var result = upgrader.PerformUpgrade();
}

sync client database data to master database - laravel

I'm building a multi-tenant SaaS application using Laravel 5.7 and Vue.js. Whenever a new client registers, the system creates a new database for him, and all table migrations and seeding are done via events.
But when the super admin manages the application, how do I load each client's data into the super admin panel? Or, say the super admin wants to make an announcement to all of his clients: how do I handle this in Laravel so the announcement data gets synced to all databases?
Maybe create a separate DB for the SUPER-ADMIN, and that DB will contain a clients_table and the other data needed to read/write data in each client's DB (data like the client DB name, user and password for establishing a connection to his DB, etc.).
Alternatively, you can create a special table `clients_announcements` in the super-admin DB (or maybe a new DB: common_clients_db) and use it for that (and read it from the clients) - it depends on how many clients you have and what efficiency you need.
If you build such a "big" SaaS system with many DBs, I also encourage a hard separation between backend and frontend - meaning the Laravel backend only provides a RESTful API (no HTML/CSS/JS code, only pure PHP), and the frontend is a separate Vue/Angular/React project which consumes that API. Key words: "micro-service architecture", "RESTful API".

How to use Multitenant in Nhibernate with Spring in MVC

I have an application in MVC with NHibernate deployed on a server. Currently only one client is using this application. Now there are many clients, and all will use this app with different databases, but the schema will be the same for all.
For this implementation I am thinking of the following approach:
I have made a new database in which a table holds the connection strings of the individual client databases.
When the application runs, NHibernate makes multiple session factories for all databases, which includes all client databases and the main database.
For example, there are two clients 'A' and 'B' with database names 'A_db' and 'B_db', and the other main database which holds the connection strings is 'All_db'. In this case NHibernate makes 3 session factories, one for each of the three DBs.
So when a user enters their login credentials, I'll check the related connection string for that client in the main database and then destroy all session factories which are not related to that client's database connection string. By doing this, only the one session factory that belongs to his database will remain.
Is my approach correct?
And if I am going in the right direction, please provide some code for this approach: making multiple session factories and then removing all session factories except the related one.
You can supply a connection string to the GetSession method. Check out this link for more info.
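If you do keep a factory per tenant as described in the question, a rough alternative sketch (plain NHibernate, without Spring; all names here are illustrative) would build and cache one session factory per connection string on demand, rather than building them all up front and destroying the unused ones:

using System.Collections.Concurrent;
using NHibernate;
using NHibernate.Cfg;

// Caches one session factory per tenant connection string.
// Session factories are expensive to build, so each is created once and reused.
public static class TenantSessionFactories
{
    private static readonly ConcurrentDictionary<string, ISessionFactory> Factories =
        new ConcurrentDictionary<string, ISessionFactory>();

    public static ISession OpenSessionFor(string connectionString)
    {
        var factory = Factories.GetOrAdd(connectionString, cs =>
        {
            var cfg = new Configuration();
            cfg.Configure(); // loads the shared mappings from hibernate.cfg.xml
            cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionString, cs);
            return cfg.BuildSessionFactory();
        });

        return factory.OpenSession();
    }
}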

Passing client userid to Oracle when using a pooled connection

Our company has an audit requirement to track individual user interaction with the application by user id. We use Tomcat 7 with the Tomcat connection pool and an Oracle 11.2 database (soon to be 12c). We connect using a Type 4 datasource managed by JNDI on the server which uses a system user id. Users log on using SSO to the web application. We want to make sure that whenever a user modifies a database record, their SSO signon id instead of the system id is used in an audit column to identify who made the modification.
My research shows that using getClientInfo and setClientInfo may be the way to go and I want to be clear on how I would implement this. Would I use the setClientInfo on the pooled datasource Connection to set the client user id, or run the Oracle stored procedure DBMS_APPLICATION_INFO.SET_CLIENT_INFO to pass the client user id? I know that in the Oracle virtual table v$session there is a column for OSUSER that holds this information. How do I get it there? Could I make a direct update to that column?
Since this is a multi-threaded application server, I don't want to end up with a concurrency issue.