Is it possible to make a runtime DB connection and use it in Schema, DB and models without affecting configs? - laravel

I want to use dynamic databases at runtime without affecting config/database.php, because of concurrent users.
I have a main DB with a table that contains references to several other DBs. At runtime I need to not only connect to those DBs but possibly also run migrations on them.
I am aware that this is possible by having a second connection entry in config/database.php under connections, but I have a feeling that if two users hit the server at the same time, changes to the physical config file may create a conflict.
I also read (and also experimented to confirm) that you can edit the second connection at runtime using the code below:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');
But I fear that if it persists the changes across different users, it may conflict for concurrent users. And if it does not persist the changes, then it won't work for migrations.
I want to understand/know two things specifically:
What is the scope of the code above (i.e. the Config::set() call)? Does it persist across different users' requests to the server?
If I call migrations using Artisan::call('migrate') with a --database=connectionname option right after I change the DB name on connectionname, will that use the dynamically set database or the value in the physical config file?
UPDATE
Also worth noting that a call to Artisan::call('migrate') with --database=connectionname will make the new connection persist for the rest of your app call.
See here for details:
https://github.com/laravel/framework/issues/28253

Config::set will only apply to the request for which it was set; it won't apply to any other requests and will not persist beyond the request. If you're not processing a request (e.g. a CLI command), then it won't affect anything beyond the current PHP process.
As for item #2, if you're invoking from the command line, you can just do DB_CONNECTION=connectionname php artisan migrate. If you need to invoke the Artisan command from code, using Config::set is still the right way to go, for example as sketched below.
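A minimal sketch of item #2 from code, reusing the mysql2 connection and the somedynamicdb placeholder from the question:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
\DB::purge('mysql2');      // drop the cached PDO instance so the new name is picked up
\DB::reconnect('mysql2');

\Artisan::call('migrate', [
    '--database' => 'mysql2',
    '--force'    => true,  // skips the production confirmation prompt
]);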

We use connections created on the fly here all the time and it works very well. We set this up in a middleware that runs after authentication, so it only applies to the current user's request, based on their login information. A rough sketch of such a middleware is below.
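This is only an illustration: the tenant connection name, the tenant_db column on the user, and the middleware class name are assumptions, not details from the original setup.
// app/Http/Middleware/SetTenantConnection.php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SetTenantConnection
{
    public function handle($request, Closure $next)
    {
        // Point the pre-defined "tenant" connection at the database stored
        // on the authenticated user, then drop any cached PDO instance.
        Config::set('database.connections.tenant.database', $request->user()->tenant_db);
        DB::purge('tenant');

        return $next($request);
    }
}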

Why is there no data inside of my database when step debugging my Laravel Sail application?

Background
I am using PhpStorm to debug a test that creates a database entry. My goal is to set a breakpoint, then inspect the database manually.
I have confirmed this so far:
Step debugging is properly configured
Can connect via a forwarding port set up in docker-compose.yml (Fig. 1)
Laravel reports the entry exists in the database (Fig. 2)
Relevant Code
The star below indicates my breakpoint.
...
use Illuminate\Foundation\Testing\LazilyRefreshDatabase;
...

class ObfuscatedTestClass extends TestCase
{
    use LazilyRefreshDatabase;

    ...

    /** @test */
    public function obfuscated_test_name()
    {
        Queue::fake();
        ObfuscatedModelName::factory()->create();
  *     Queue::assertPushed(SyncLeaseWithAccountingApp::class);
    }
}
Hypotheses
Maybe I'm misunderstanding how databases are handled during these tests. I know Laravel has the ability to use database transactions to speed up tests, but I expect it to be modifying the database here when I'm using LazilyRefreshDatabase. Why else would I need to set up a database for testing?
Figures
Figure 1: the forwarding port set up in docker-compose.yml (image not included)
Figure 2: Laravel reporting that the entry exists in the database (image not included)
I've been in the same boat all day; I also suspected transactions as a likely cause.
Then I came to realize that all that was needed was to refresh the database connection when stopped at a breakpoint.
I'm not sure if this will be helpful to anyone, but I thought I should answer because my situation is so similar to the OP's - also running Docker and PhpStorm.
We're typically not in the habit of refreshing the DB connection on a regular basis, so it's easy to overlook.
EDIT:
To tag a little bit more onto my answer from earlier: the data will persist if your test does not wipe it, which can be useful. You can then tinker with that data by creating a file named ".env.testing", setting APP_ENV=testing in it, and running Tinker like so: php artisan tinker --env=testing

AWS RDS database can't read record that was just written to database

I'm seeing an error with some Laravel code that uses an AWS RDS database. The code writes a record to the database and then immediately searches for that record by its primary key, and gets no results.
If I try it manually afterwards, I find the record. If I insert a 1-second sleep in the code, it works correctly.
I've tried this using Laravel's separate settings for read and write hosts. I've also tried setting them to the same host and only using one host. The result is always the same. However, other environments with the same configuration do not have the error.
Is there an option in RDS that needs to be changed to have the record available immediately after it's written?
The error is due to MySQL master-slave replication lag.
A common mistake is to use a MySQL cluster and then perform a read immediately after a write.
Since the read occurs on one of the slave/read hosts while the write occurs on the master, the data may not have been replicated yet at the time of the read.
There are a couple of ways to rectify the error:
The read immediately after the write must be performed on the master (not a slave). Even though you've mentioned that you changed it to a single host, people often make a mistake while switching the connection. Refer to this SO post to properly switch connections in Laravel; a sketch of pinning a read to the write connection also follows this list.
An easier way may be to use the sticky database option in Laravel (also sketched below). Beware: this may cause performance issues if not used carefully, and only for the use case you need. From the docs:
The sticky option is an optional value that can be used to allow the immediate reading of records that have been written to the database during the current request cycle.
If the sticky option is enabled and a "write" operation has been performed against the database during the current request cycle, any further "read" operations will use the "write" connection.
The most "non-obvious" way is to NOT perform a read immediately after a write. Think about whether this can be avoided depending on your use case.
Other methods: refer to this SO post
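To illustrate the first two options, here is a minimal sketch; the mysql connection name, the RDS endpoint hostnames and the User model are placeholders for your own values:
// config/database.php -- split read/write hosts with the sticky option enabled,
// so any read that follows a write in the same request reuses the write connection.
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => ['replica-endpoint.rds.amazonaws.com'],   // placeholder
    ],
    'write' => [
        'host' => ['primary-endpoint.rds.amazonaws.com'],   // placeholder
    ],
    'sticky'   => true,
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),
],

// Forcing a single read onto the write (master) connection instead:
$user = User::onWriteConnection()->find($id);          // Eloquent
$row  = DB::table('users')->useWritePdo()->find($id);  // query builder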

Using Session with Schedule

I'm using Laravel 5.2.29 and I've set up some scheduled commands.
One of them uses some methods that are also used in my app through normal use (i.e., via a browser), where the session is accessible.
However, when I try to run the command manually using artisan schedule:run, I get the following exception:
Session store not set on request.
The session isn't being set (I suppose in the same vein as when a route is accessed without the web middleware), so is there a way to manually boot up the session? I'd rather not rewrite my methods to not use it.
You don't have access to the session in the CLI. The session is strictly tied to the web browser (client side), which doesn't make sense on the command line.
If you need local storage (on the server side), you can try the cache driver instead. For example:
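A minimal sketch of swapping a session call for the cache inside a method shared by the web app and the scheduled command (the key name and the 60-minute lifetime are arbitrary choices):
use Carbon\Carbon;
use Illuminate\Support\Facades\Cache;

// Instead of session(['last_sync' => Carbon::now()]):
Cache::put('last_sync', Carbon::now(), 60);   // 60 = minutes on Laravel 5.2

// ... and instead of session('last_sync'):
$lastSync = Cache::get('last_sync');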

Laravel master slave exceptions

I have a Laravel app running with a master-slave implementation: the master is used for writing and the slave is used for reading. However, how do we work with sessions if we write to the sessions table and read from it right after the user is logged in? Is there any way we can send certain read requests to the master instead?
If I understood the question right, you could use the Model::on method described in: Specifying The Query Connection
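A small sketch of what that could look like for a read that must hit the master; the mysql_master connection entry and the User model are assumptions, not part of the original setup:
// Assumes config/database.php defines a "mysql_master" connection that
// points only at the write host.
$user = \App\User::on('mysql_master')->find($id);

// The same idea with the query builder:
$session = DB::connection('mysql_master')->table('sessions')->find($sessionId);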

Good way to demo a classic ASP web site

What is the best way to save data in session variables in a classic ASP web site?
I am maintaining a classic ASP web site and want to allow my users to demo all functionality of the site; this means allowing them to delete records.
The closest example I have seen so far is the Telerik control demos, where they save the dataset in session on first load and let the user manipulate that data.
How can I achieve the same in ASP with an MS Access backend?
If you want to persist the state over multiple pages (e.g. to demo your complete application), then it's a bit tricky.
I would suggest copying the MDB file for each session and using the copied version. This ensures that every session works with its own data.
Create a version of your Access DB that serves as a fresh template for each user.
On session start, copy the template and name the copy after the user's session ID.
Use that individual MDB for the rest of the session.
Note: the only drawback I can see here is that you need to remove the unused MDB files, as they can pile up after a while. You could do that with a scheduled task, or even on session start before you create a new copy.
I am not sure what you can use to check whether a file is still in use, but you could check the file's creation date; the LDB lock file may help as well (if it does not exist = unused).
You can store a connection or even an object in a session variable, as long as you remember what kind of variable you are storing when it comes time to retrieve it. I have never stored a dataset in a session variable, but I have stored plenty of arrays in session variables, so you can use the ADO GetRows method to put a complete recordset into a session variable as an array.
How big is the Access database? If your database is small enough (relative to the server capacity, expected number of users, and so forth) then I like the idea of using a fresh copy of the database for each user that runs the demo.
With this approach, you simplify your possible code paths. Otherwise this "are we in demo mode or not?" logic will permeate a heck of a lot of your code.
I'd do it like this...
When the user begins the demo, make a copy of the Access DB for that user to use. If your db is foo.mdb, copy it to /tempdb/foo_1234567890.mdb where 1234567890 is the user's session ID.
Alter the user's connection string to point to the fresh database copy. From this point on, your app can operate like "normal" with no further modifications.
Have a scheduled task that deletes all files in /tempdb with last-modified times more than __ hours in the past. If you don't have the ability to schedule tasks on the server (perhaps you're in a shared hosting environment, etc) then you could do this at the same time you do step #1.
