I have multiple databases in my project: each company gets its own database. I am developing automation workflows in this project, and to achieve that I planned to implement a queue-jobs concept.
We also maintain one more database, which contains the list of all databases (and which companies use them). I am a little confused about how to approach this kind of scenario: should I maintain the jobs table in my commonDatabase, or create a jobs table inside each database separately?
Note: every time a user logs in, he has to give the company name (we send it in the headers of all requests), which indicates the database name.
My doubts are:
I created a jobs table in each database, but records are not inserted into the particular database; instead they are inserted into the commonDatabase jobs table. Why?
What is the best way to achieve this kind of scenario?
If a user logs out, will the queue still run in the background?
The thing I understand from your question is that you want to convert your project to multi-tenant, multi-database, and every company will generate a request to build a tenant for them. The answers to your questions follow:
I created a jobs table in each database, but it's not inserting records into the particular database; instead it's inserting into the commonDatabase jobs table?
I would suggest you watch this YouTube playlist.
If the job is related to a company process, e.g. you want to process a company's invoice email, then you should dispatch the job to that company's database. If the job is related to the commonDatabase, e.g. you want to configure a company database and then run migrations and seeders into it, then it should be dispatched to the commonDatabase.
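In Laravel this routing is done through the queue connection's database connection, but the underlying idea is simple: each dispatch decides which jobs table the row lands in. A minimal sketch of that routing, with in-memory sqlite3 databases standing in for the tenant and common databases (the tenant names "acme" and "globex" are made up):

```python
import sqlite3

def make_db():
    # Stand-in for one database with a Laravel-style jobs table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")
    return db

common_db = make_db()
tenant_dbs = {"acme": make_db(), "globex": make_db()}

def dispatch(payload, tenant=None):
    # Company-specific work goes to that tenant's own jobs table;
    # provisioning/housekeeping work goes to the common database.
    db = tenant_dbs[tenant] if tenant else common_db
    db.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
    db.commit()

dispatch("process invoice email", tenant="acme")
dispatch("migrate new tenant database")   # no tenant -> commonDatabase

print(tenant_dbs["acme"].execute("SELECT COUNT(*) FROM jobs").fetchone()[0])  # 1
print(common_db.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])           # 1
```

If the dispatch never names a tenant connection, every job falls through to the default (common) database, which is exactly the symptom described in the question.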
If a user logs out, will the queue still run in the background?
Yes, the queue will still run in the background, because the queue worker runs on the server and has nothing to do with the login session or any other authentication mechanism. You should read the following articles/threads:
Official Laravel Doc on queue
How to setup laravel queue worker
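To illustrate why a logout has no effect: a queue worker is just a long-running server process that polls the jobs table and never touches a user session. A minimal sketch of that loop (sqlite3 standing in for the queue backend; a real worker would loop forever and sleep when idle):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")
db.execute("INSERT INTO jobs (payload) VALUES ('send welcome email')")
db.commit()

processed = []

def work_once(db):
    # Pop the oldest job, if any, and process it. Nothing here knows
    # about sessions or logins: the worker only sees the jobs table.
    row = db.execute("SELECT id, payload FROM jobs ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return False
    job_id, payload = row
    processed.append(payload)                 # "process" the job
    db.execute("DELETE FROM jobs WHERE id = ?", (job_id,))
    db.commit()
    return True

while work_once(db):
    pass

print(processed)   # ['send welcome email']
```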
Related
I have started using HBase recently, and I wanted to check here whether anyone has come across the scenario I am facing right now.
I have a web service deployed on a couple of servers that accesses HBase to update a field. The update is conditional: I have to read the field from HBase, and if its value is "A", update it to "B"; if a concurrent update has already set it to "C", do not update. But since there are different servers and concurrent requests, it is possible that both read the existing value as "A", and one updates it to "B" while the other updates it to "C".
Since requests come in concurrently from different servers, thread-level locking is of no use; the same problem arises with multiple concurrent requests from the same server.
Is there a way to lock at the HBase level, so that I can acquire the lock at the service layer, lock the row, and then update it?
There is a RowLock in the HBase API, but we are using a later version of HBase (1.1.2.3), from which that class has been removed.
I would appreciate it if someone could point me in the right direction! Thanks in advance.
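What this question describes is an atomic compare-and-set, which modern HBase exposes as Table.checkAndPut: the compare of the current cell value and the write happen in one server-side operation, so no client-side row lock is needed. The semantics can be sketched as follows (this Cell class is a stand-in for the server-side behaviour, not the HBase API):

```python
import threading

class Cell:
    """Stand-in for the server-side compare-and-set that HBase's
    Table.checkAndPut provides: the compare and the write happen
    atomically, so two concurrent writers cannot both win."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def check_and_put(self, expected, new):
        with self._lock:
            if self._value != expected:
                return False        # someone else changed it first
            self._value = new
            return True

cell = Cell("A")
print(cell.check_and_put("A", "B"))   # True : A -> B
print(cell.check_and_put("A", "C"))   # False: value is already B, so C is rejected
```

With checkAndPut, each server attempts its write conditioned on the value still being "A"; exactly one attempt succeeds, and the loser learns it lost from the boolean return value.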
I'm building an application in Laravel with several connections to different databases, each of which reads a service audit table. The application is for visualizing logs from different applications.
To improve read speed, would it be possible to download all the data from the different databases into a local Redis store every X minutes and run the queries directly against it?
You can do this via scheduled tasks:
https://laravel.com/docs/5.7/scheduling#scheduling-artisan-commands
This will allow you to run an Artisan command:
https://laravel.com/docs/5.7/artisan
In this command you can fetch the data from your databases and save it to Redis.
To access multiple databases, follow the details here:
https://laravel.com/docs/5.7/database#read-and-write-connections
And to set up Redis, here are the docs:
https://laravel.com/docs/5.7/redis
All you need to do is track what you have already transferred, fetch what you have not, and save that data to Redis.
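Putting those pieces together: the scheduled command only needs to remember the last row id it copied from each source database, fetch anything newer, and write it to Redis. A minimal sketch of that logic (sqlite3 standing in for the source databases, a dict standing in for Redis; the table and key names are made up):

```python
import sqlite3

def make_source(rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE audit (id INTEGER PRIMARY KEY, message TEXT)")
    db.executemany("INSERT INTO audit (message) VALUES (?)", [(r,) for r in rows])
    db.commit()
    return db

sources = {"app_a": make_source(["login ok", "login failed"]),
           "app_b": make_source(["timeout"])}
redis_store = {}                              # stand-in for Redis
last_seen = {name: 0 for name in sources}     # what we have already transferred

def sync():
    # Runs every X minutes from the scheduler: copy only new audit rows.
    for name, db in sources.items():
        rows = db.execute("SELECT id, message FROM audit WHERE id > ?",
                          (last_seen[name],)).fetchall()
        for row_id, message in rows:
            redis_store[f"{name}:audit:{row_id}"] = message
            last_seen[name] = row_id

sync()
print(len(redis_store))   # 3
sync()                    # nothing new has arrived, so this is a no-op
print(len(redis_store))   # 3
```

Tracking the high-water mark per source is what makes the job safe to re-run: a second sync with no new rows transfers nothing.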
Is there a way to create the database and seed data in ASP.NET Core 2 when changing the connection string through the OnConfiguring method of the DbContext?
I have designed my app for multi-tenancy (the multi-database model) and need to be able to register tenants dynamically, each with its own connection string. My problem now is: how can I create the database and seed data dynamically without restarting the app?
OnConfiguring screenshot
Provisioning a database for a new tenant will take some time, so you could follow the steps below.
Create a new tenant in the code
Post a message to the service bus with the necessary information, such as the tenant id/name.
Have a job (typically a WebJob) listen for the messages and restore a new database from a master dacpac or script, then rename the database for the tenant.
Push a message back via the service bus to let the application know about the new database.
On receipt of the above message, the application updates the connection details for the tenant in the database.
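The message-driven hand-off above can be sketched with in-memory queues standing in for the service bus (the tenant name "contoso" and the pretend connection string are made up for illustration):

```python
from collections import deque

provision_queue = deque()   # app -> worker (stand-in for the service bus)
done_queue = deque()        # worker -> app

tenants = {}                # tenant name -> connection string (None until ready)

def register_tenant(name):
    # Steps 1-2: create the tenant record, then ask the worker to build its DB.
    tenants[name] = None
    provision_queue.append({"tenant": name})

def provisioning_worker():
    # Steps 3-4: "restore" a database for each request, then report back.
    while provision_queue:
        msg = provision_queue.popleft()
        name = msg["tenant"]
        connection_string = f"Server=db;Database={name}_db"  # pretend restore + rename
        done_queue.append({"tenant": name, "conn": connection_string})

def apply_results():
    # Step 5: the app records the new connection details for the tenant.
    while done_queue:
        msg = done_queue.popleft()
        tenants[msg["tenant"]] = msg["conn"]

register_tenant("contoso")
provisioning_worker()
apply_results()
print(tenants["contoso"])   # Server=db;Database=contoso_db
```

The point of the round trip is that the web app never blocks on the slow restore: the tenant exists immediately, and its connection string is filled in once the worker reports back.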
As another option, you can take a look at Azure shard maps for tenant database sharding.
HTH
Current Setup:
SQL Server OLTP database
AWS Redshift OLAP database, updated from the OLTP database via SSIS every 20 minutes
Our customers only have access to the OLAP Db
Requirement:
One customer requires some additional tables to be created and populated on a schedule, which can be done by aggregating the data already in AWS Redshift.
Challenge:
This is only for one customer, so I cannot leverage the core process for populating AWS; the process must be independent and will be handed over to the customer, who does not use SSIS and does not wish to start. I was considering using Data Pipeline, but it is not yet available in the market in which the customer resides.
Question:
What is my alternative? I am aware of numerous partners who offer ETL-like solutions, but that seems over the top; ultimately, all I want to do is execute a series of SQL statements on a schedule with some form of error handling/alerting. Both the customer and management would prefer not to use a bespoke app for this, hence the intended use of Data Pipeline.
For exporting data from AWS Redshift to another data source using Data Pipeline, you can follow a template similar to https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RedshiftToRDS, which transfers data from Redshift to RDS. Instead of using RDSDatabase as the sink, you could use a JdbcDatabase (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-jdbcdatabase.html). The template https://github.com/awslabs/data-pipeline-samples/blob/master/samples/oracle-backup/definition.json provides more details on how to use the JdbcDatabase.
There are many more such templates available in https://github.com/awslabs/data-pipeline-samples/tree/master/samples to use as a reference.
I do exactly the same thing as you, but I use the Lambda service to perform my ETL. One drawback of Lambda is that it can run for a maximum of 5 minutes (initially it was 1 minute).
So for ETL jobs that take longer than 5 minutes, I am planning to set up a PHP server in AWS on which I can run my SQL queries, scheduled at any time with the help of cron.
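Whichever scheduler ends up firing it (cron, Lambda, or Data Pipeline), the job itself reduces to running a list of SQL statements in order and alerting on failure. A sketch of that core, using sqlite3 in place of Redshift and a callback in place of a real alerting channel (the table names are made up):

```python
import sqlite3

def run_batch(db, statements, alert):
    # Execute each statement in order; on failure, alert and stop, so a
    # half-built aggregate table is not left looking complete.
    for sql in statements:
        try:
            db.execute(sql)
        except sqlite3.Error as exc:
            alert(f"failed on {sql!r}: {exc}")
            db.rollback()
            return False
    db.commit()
    return True

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (amount REAL)")
db.executemany("INSERT INTO sales VALUES (?)", [(10.0,), (32.5,)])

alerts = []
ok = run_batch(db, [
    "CREATE TABLE sales_summary AS SELECT SUM(amount) AS total FROM sales",
], alerts.append)

print(ok, db.execute("SELECT total FROM sales_summary").fetchone()[0])  # True 42.5
```

Swapping the alert callback for an email or SNS notification gives the "error handling/alerting" the question asks for without a bespoke app around it.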
I have a database in Oracle. My task is to create/update a user in Microsoft Active Directory, which runs on a different server, whenever a user's details are entered or updated in an Oracle table.
If that is not possible, then what is the best way to achieve this?
Currently we do it by running code written in C# that reads from Oracle and does the job in AD. It is triggered from the user-creation package, but sometimes the trigger fails and the whole process fails: the user details are updated in the DB but not in AD.
Oracle runs on a different server. User details are entered into the DB through a different package.
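One common way to make that hand-off resilient to a failed trigger is an outbox: the package records every change in a pending table in the same transaction as the user row, and a scheduled sync job retries pending rows until AD confirms them. A sketch of the idea (sqlite3 in place of Oracle, and a fake AD client; none of these names come from the existing C# code):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ad_outbox "
           "(id INTEGER PRIMARY KEY, username TEXT, synced INTEGER DEFAULT 0)")

def record_user(username):
    # The user-creation package writes the outbox row together with the
    # user row, so a change can never be silently lost.
    db.execute("INSERT INTO ad_outbox (username) VALUES (?)", (username,))
    db.commit()

def sync_to_ad(push):
    # Runs on a schedule; retries anything AD has not confirmed yet.
    for row_id, username in db.execute(
            "SELECT id, username FROM ad_outbox WHERE synced = 0").fetchall():
        try:
            push(username)   # the call into AD would go here
            db.execute("UPDATE ad_outbox SET synced = 1 WHERE id = ?", (row_id,))
            db.commit()
        except Exception:
            pass             # leave synced = 0; the next run retries it

record_user("jdoe")

flaky = {"calls": 0}
def fake_ad(username):       # fails once, then succeeds
    flaky["calls"] += 1
    if flaky["calls"] == 1:
        raise RuntimeError("AD unreachable")

sync_to_ad(fake_ad)          # first run fails, the row stays pending
sync_to_ad(fake_ad)          # second run succeeds
print(db.execute("SELECT synced FROM ad_outbox").fetchone()[0])  # 1
```

With this shape, a transient AD failure no longer kills the whole process; the row simply stays pending and is picked up on the next scheduled run.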