I'm building an application in Laravel that has connections to several different databases, each of which reads a service audit table. The application is used to visualize logs from different applications.
To improve read speed, would it be possible to download all the data from the different databases into a local Redis store every X minutes and run the queries directly against it?
You can do this via scheduled tasks:
https://laravel.com/docs/5.7/scheduling#scheduling-artisan-commands
This will allow you to run an Artisan command:
https://laravel.com/docs/5.7/artisan
In this command you can fetch the data from your databases and save it to Redis.
To access multiple databases, follow the details here:
https://laravel.com/docs/5.7/database#read-and-write-connections
And here are the docs for setting up Redis:
https://laravel.com/docs/5.7/redis
All you will need to do is track what you have already transferred, fetch what you haven't transferred yet, and save that data to Redis.
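A minimal sketch of such a command, assuming hypothetical connection names ('service_a', 'service_b'), an audit_logs table and a Redis key layout that you would replace with your own:

<?php
// app/Console/Commands/SyncAuditLogs.php
namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Redis;

class SyncAuditLogs extends Command
{
    protected $signature = 'audit:sync';
    protected $description = 'Copy new audit rows from every service database into Redis';

    public function handle()
    {
        foreach (['service_a', 'service_b'] as $connection) {
            // Remember the last transferred id per connection so each run
            // only fetches rows that are not in Redis yet.
            $lastId = (int) Redis::get("audit:{$connection}:last_id");

            $rows = DB::connection($connection)
                ->table('audit_logs')
                ->where('id', '>', $lastId)
                ->orderBy('id')
                ->get();

            foreach ($rows as $row) {
                // One JSON blob per row, plus a sorted set for time-ordered reads.
                Redis::set("audit:{$connection}:{$row->id}", json_encode($row));
                Redis::zadd("audit:{$connection}:index", strtotime($row->created_at), $row->id);
                $lastId = $row->id;
            }

            Redis::set("audit:{$connection}:last_id", $lastId);
        }
    }
}

Registering it in app/Console/Kernel.php with $schedule->command('audit:sync')->everyTenMinutes(); gives you the "every X minutes" part, and your dashboard queries then read only from Redis.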
I have multiple databases in my project: each company gets its own database. I am developing automation workflows in my current project, and I planned to implement the queue-jobs concept to achieve this.
We also maintain one more database which contains the list of all databases (and the companies using them). I am a little confused about how to approach this kind of scenario: should I maintain the jobs table in my commonDatabase, or should I create a jobs table inside each database separately?
Note: every time a user tries to log in, they have to provide the company name (we send it in the headers of all requests), which indicates the database name.
My questions are:
I created a jobs table in each database, but records are not inserted into the particular company's database; instead they are inserted into the commonDatabase jobs table. Why?
What is the best way to achieve this kind of scenario?
If a user logs out, will the queue still run in the background?
What I understand from your question is that you want to convert your project to multi-tenant, multi-database, and every company will generate a request to build a tenant for them. The answers to your questions follow:
I created a jobs table in each database, but records are not inserted into the particular database; instead they are inserted into the commonDatabase jobs table?
First of all, I recommend you watch this YouTube playlist.
If the job is related to a company process, e.g. you want to process a company's invoice emails, then you should dispatch the job to that company's database. If the job is related to the commonDatabase, e.g. you want to configure a company database and then run migrations & seeders into it, then it should be dispatched to the commonDatabase.
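As a rough sketch of how that can look with the database queue driver (the queue connection names and the job classes below are assumptions, not something taken from your project): the 'connection' key of each queue connection decides which database's jobs table receives the record, and onConnection() picks one at dispatch time.

// config/queue.php
'connections' => [
    'common' => [
        'driver'      => 'database',
        'connection'  => 'commonDatabase', // DB connection that owns this jobs table
        'table'       => 'jobs',
        'queue'       => 'default',
        'retry_after' => 90,
    ],
    'tenant' => [
        'driver'      => 'database',
        'connection'  => 'tenant',         // switched per request to the company's DB
        'table'       => 'jobs',
        'queue'       => 'default',
        'retry_after' => 90,
    ],
],

// Dispatching (hypothetical job classes):
ProcessInvoiceEmail::dispatch($invoice)->onConnection('tenant');       // company jobs table
ConfigureCompanyDatabase::dispatch($company)->onConnection('common');  // commonDatabase jobs table

Each queue connection then needs its own worker, e.g. php artisan queue:work tenant and php artisan queue:work common.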
If a user logs out, will the queue still run in the background?
Yes, the queue will still run in the background, because the queue worker runs on the server and has no dependency on the login session or any other authentication mechanism. You should read the following articles/threads:
Official Laravel docs on queues
How to set up a Laravel queue worker
I'm trying to create a multi-tenant application which will be hosted as a single instance. To complete this application I need to implement a Redis caching system. I have two possible solutions for multi-tenant caching:
I can prefix the keys with the tenant name, like tenant1:myKey (see the sketch after this question).
I can use the separate logical databases provided by Redis, storing each tenant's details in a different database and connecting to the respective database to fetch them.
If I go with the second option, are there any disadvantages or performance issues? And if you can suggest any other solutions, please help me!
Note: I don't want to use Redis clustering
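For what it's worth, the first option can be as small as a key-building helper. A minimal sketch, assuming a Laravel-style Redis facade; how the current tenant is resolved (subdomain, header, authenticated user, ...) is left out:

<?php
use Illuminate\Support\Facades\Redis;

// Option 1: every key is namespaced by the current tenant.
function tenantKey(string $tenant, string $key): string
{
    return "{$tenant}:{$key}";
}

Redis::set(tenantKey('tenant1', 'myKey'), 'some value');
$value = Redis::get(tenantKey('tenant1', 'myKey'));

// Clearing one tenant then means deleting only the "tenant1:*" keys (e.g. via SCAN),
// whereas with one logical DB per tenant you could simply run FLUSHDB on it.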
I connected the main database in dev.exs and it works fine, but in my project I plan to use several databases. I know that multiple databases can be configured in dev.exs, but this option doesn't suit me: the database connection details will be stored in the main project database. I want to know: how can I connect to different databases from Elixir code without using dev.exs?
You can start multiple instances of your Repo with different connection options.
Then, use the Repo.put_dynamic_repo/1 function to tell the Repo which of the databases should be used for queries in the current process. (The documentation for this function also tells you how to start more instances of the same repo.)
There's also a discussion document that goes more in-depth about this topic: https://hexdocs.pm/ecto/replicas-and-dynamic-repositories.html
Current Setup:
SQL Server OLTP database
AWS Redshift OLAP database updated from the OLTP database via SSIS every 20 minutes
Our customers only have access to the OLAP DB
Requirement:
One customer requires some additional tables to be created and populated on a schedule, which can be done by aggregating the data already in AWS Redshift.
Challenge:
This is only for one customer, so I cannot leverage the core process for populating AWS Redshift; the process must be independent and will be handed over to the customer, who does not use SSIS and doesn't wish to start. I was considering using AWS Data Pipeline, but it is not yet available in the region in which the customer resides.
Question:
What is my alternative? I am aware of numerous partners who offer ETL-like solutions, but this seems over the top; ultimately all I want to do is execute a series of SQL statements on a schedule with some form of error handling/alerting. The preference of both the customer and management is not to use a bespoke app for this, hence the intended use of Data Pipeline.
For exporting data from AWS Redshift to another data source using Data Pipeline, you can follow a template similar to https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RedshiftToRDS, which transfers data from Redshift to RDS. Instead of using RDSDatabase as the sink, you could add a JdbcDatabase (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-jdbcdatabase.html). The template https://github.com/awslabs/data-pipeline-samples/blob/master/samples/oracle-backup/definition.json provides more details on how to use the JdbcDatabase.
There are many such templates available in https://github.com/awslabs/data-pipeline-samples/tree/master/samples to use as a reference.
I do exactly the same thing as you, but I use the Lambda service to perform my ETL. One drawback of Lambda is that it can only run for a maximum of 5 minutes (initially it was 1 minute).
So for ETL jobs longer than 5 minutes, I am planning to set up a PHP server in AWS from which I can run my SQL queries, scheduled at any time with the help of cron.
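For reference, the "plain server plus cron" approach can stay very small. A minimal sketch, assuming PDO with the pgsql driver (Redshift speaks the PostgreSQL protocol, usually on port 5439); the host, credentials, SQL statements and alert address are placeholders:

<?php
// run_aggregations.php — executed by cron, e.g. every 20 minutes:
// */20 * * * * php /opt/etl/run_aggregations.php

$dsn = 'pgsql:host=my-cluster.example.redshift.amazonaws.com;port=5439;dbname=analytics';

$statements = [
    'TRUNCATE customer_schema.daily_summary',
    'INSERT INTO customer_schema.daily_summary
         SELECT order_date, SUM(amount)
         FROM   public.orders
         GROUP  BY order_date',
];

try {
    $pdo = new PDO($dsn, getenv('REDSHIFT_USER'), getenv('REDSHIFT_PASSWORD'), [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    foreach ($statements as $sql) {
        $pdo->exec($sql);
    }
} catch (PDOException $e) {
    // Very basic alerting: mail the error and exit non-zero so cron/monitoring notices.
    mail('alerts@example.com', 'Redshift aggregation failed', $e->getMessage());
    exit(1);
}

The non-zero exit code plus the mail give the error handling/alerting the question asks for, without anything more bespoke than this one script.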
I have the following issue:
Two instances of an application on two different systems should share a small database.
The main problem is that both systems can only exchange data through a network folder.
I don't have the possibility to set up a database server anywhere.
Is it possible to place an H2 database on the network folder and let both instances connect to the database (also concurrently)?
I could connect both instances to the DB using embedded mode if I disable file locking, right?
The instances perform either READ or INSERT operations on the DB. Do I risk data corruption using multiple concurrent embedded connections?
As the documentation says (http://h2database.com/html/features.html#auto_mixed_mode):
Multiple processes can access the same database without having to start the server manually. To do that, append ;AUTO_SERVER=TRUE to the database URL. You can use the same database URL independent of whether the database is already open or not. This feature doesn't work with in-memory databases.
// Application 1:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
// Application 2:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
From H2 documentation:
It is also possible to open the database without file locking; in this
case it is up to the application to protect the database files.
Failing to do so will result in a corrupted database.
I think that if your application always uses the same configuration (a shared file database on a network folder), you need to create an application layer that manages the concurrency yourself.