Laravel Session Database Driver freezes database randomly

Running Laravel 5.4 with the database session driver. Randomly I see thousands of
update `sessions` set `payload` ... where `id` ...
when viewing the process list in MySQL.
It seems like Laravel suddenly decides to update tons of sessions at once, making the whole database unresponsive (the DB runs on a dedicated server), and the max connections limit fills up quickly. I use a long session lifetime (as I don't want to log users out), and as a result the sessions table is quite large.
I tried setting the lottery to [0, 100] in case session garbage collection was causing the issue, but it has not helped.
Any ideas what could cause this / what I could try?
The sessions table is InnoDB, with indexes on id and user_id.
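For context, both of the settings mentioned above live in config/session.php; a minimal sketch (the values are illustrative, not a recommendation):
// config/session.php (sketch; other keys omitted)
'driver' => env('SESSION_DRIVER', 'database'),

// Session lifetime in minutes. A long lifetime keeps users logged in,
// but with the database driver it also lets the sessions table grow.
'lifetime' => 60 * 24 * 30, // 30 days

// Garbage-collection odds: [2, 100] means roughly 2% of requests sweep
// expired sessions; [0, 100] disables the sweep entirely.
'lottery' => [0, 100],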

I had the same issue last week.
It blows your mind because you really don't know what is going on, or even worse, what is causing the issue.
I had been using the database session driver for nearly 2 years, and this situation had never happened before.
The solution was to change the session driver to Redis.
In config/session.php, set the driver to redis and point the connection key at a dedicated entry:
'driver' => 'redis',
'connection' => 'session',
In config/database.php, add the following lines to the redis entry:
'redis' => [
    'cluster' => false,

    'default' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],

    'session' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 1,
    ],
],
The reason for adding the extra 'session' entry: if you run
php artisan cache:clear
ALL OF YOUR SESSIONS WILL BE CLEARED TOO, not only the cache itself, because the cache and the sessions would otherwise share Redis database 0. Keeping sessions in database 1 avoids that.
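For completeness, the matching store in config/cache.php keeps pointing at the default connection, so cache:clear only flushes database 0 (a sketch of the stock entry):
// config/cache.php (stock entry; the cache stays on database 0)
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
],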
You can find more info here:
Redis As Session Driver With Laravel

Related

Redis memory filled up to 2 GB with Laravel Horizon

Output of redis-cli info memory:
used_memory_human:1.95G
used_memory_rss_human:1.93G
used_memory_peak_human:2.04G
used_memory_peak_perc:95.63%
I use Laravel Horizon, and it is the only thing that uses Redis.
It has now reached a 2 GB limit and stays there.
First question:
Why is there a 2 GB limit, and how do I increase it? maxmemory is set to 0.
Second question:
I don't think there are enough jobs pending in Laravel Horizon to fill up 2 GB; it looks like the trimmer (or something similar) is not working. The jobs are small and don't store much information. There are about 1-2k jobs per hour and maybe around 3-4k pending.
My trim settings from horizon.php:
'trim' => [
    // Values are in minutes: how long Horizon keeps each type of job data.
    'recent' => 60,
    'pending' => 43200,       // 30 days
    'completed' => 60,
    'recent_failed' => 10080, // 7 days
    'failed' => 10080,
    'monitored' => 10080,
],
Where should I look next? Everything is working so far, but I don't like the situation. Once, with a big batch of jobs, we hit a memory allocation error (Allowed memory size of 536870912 bytes exhausted).
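If you want to see what is actually occupying Redis before touching the trim settings, here is a rough diagnostic sketch. It assumes Horizon's default horizon: key prefix and the standard Laravel Redis facade; note that KEYS blocks Redis, so don't run this in a hot production loop (SCAN is the production-safe variant):
use Illuminate\Support\Facades\Redis;

// Group Horizon's keys by their first two segments (e.g.
// "horizon:recent_jobs") to see which structure is growing.
$keys = Redis::connection()->keys('horizon:*');

$counts = [];
foreach ($keys as $key) {
    $group = implode(':', array_slice(explode(':', $key), 0, 2));
    $counts[$group] = ($counts[$group] ?? 0) + 1;
}

arsort($counts);
print_r($counts);
If one group (for example completed or failed job payloads) dominates, the corresponding trim window above is the knob to turn down.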

dbms_scheduler jobs inside the same package

Environment:
Oracle 12.2 64-bit under Linux.
Job_queue_processes = 4000
Aq_tm_processes = 1
I wrote a package, say MYPKG, with three procedures inside.
The first procedure, say ProcWeb, serves client requests from a web application.
It creates two jobs and waits in a loop for them to finish.
Both jobs are disposable and are stopped and dropped from MYPKG.ProcWeb after use. The procedures for these jobs, ProcTop and ProcBottom, are also inside the package.
This is how it’s declared inside MYPKG.ProcWeb:
l_jobtop    := dbms_scheduler.generate_job_name('TOP_JOB_');
l_jobbottom := dbms_scheduler.generate_job_name('BOTTOM_JOB_');

dbms_scheduler.create_job(job_name   => l_jobtop,
                          job_type   => 'STORED_PROCEDURE',
                          job_action => 'MYPKG.PROCTOP');
dbms_scheduler.create_job(job_name   => l_jobbottom,
                          job_type   => 'STORED_PROCEDURE',
                          job_action => 'MYPKG.PROCBOTTOM');
…
dbms_scheduler.run_job(l_jobtop, use_current_session => false);
dbms_scheduler.run_job(l_jobbottom, use_current_session => false);
For the first ten days after the package was installed on the database, everything was fine.
Then the weird things began: one job would start, but the other would never start, or only after a huge delay.
So I made ProcTop and ProcBottom standalone procedures and re-declared the job creation:
dbms_scheduler.create_job(job_name   => l_jobtop,
                          job_type   => 'STORED_PROCEDURE',
                          job_action => 'PROCTOP');
dbms_scheduler.create_job(job_name   => l_jobbottom,
                          job_type   => 'STORED_PROCEDURE',
                          job_action => 'PROCBOTTOM');
It's hard to explain, but observation shows that calling standalone procedures instead of packaged procedures is much more stable: both jobs start with no problem.
What is the hidden problem with having the executable block of a job inside the same package that creates the job?

Laravel job stays in "Processing" state although finished

I have a problem with the Laravel queue system. For performance reasons we use Laravel's queues with Amazon SQS for heavier calculations. This works fine for most of our jobs, but some of them, where the raw calculation time is about 25 seconds, keep blocking the queue in the "Processing" state for 6 minutes.
We logged the complete handle function of the job, and the output was correct at every point. In fact, the last log statement (at the end of the function) was printed 20 seconds after entering the function. The data was calculated as expected and the database was up to date, but the job was still "Processing".
After we intentionally crashed the job at the end of the handle function, the calculated data was stored perfectly, but obviously the queue crashed as well. So I guess it has to be something happening after the handle function. Maybe something with allocated memory?
The config of the queue is the default sqs driver configuration:
'sqs' => [
    'driver' => 'sqs',
    'key'    => env('AWS_KEY', 'secret'),
    'secret' => env('AWS_SECRET', 'secret'),
    'queue'  => env('AWS_SQS_QUEUE', 'secret'),
    'region' => env('AWS_SQS_REGION', 'secret'),
],
Edit:
I found out that it is not only the queue; the same behavior appears when I execute the job as a command:
I print "Done." as the last statement in the command, and after it is printed, the console halts for a few seconds before returning to the prompt.
When I comment out the part containing most of the queries, the issue is gone; the more queries I run, the longer the console hangs.
I hope some of you know what causes this behavior and how we can fix it.
Thanks in advance.
OK, I found the issue.
The problem was that Telescope was enabled, so after the code was executed, Telescope was busy logging all the requests and cache hits.
After disabling Telescope, there was no delay any more.
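For completeness, you don't have to uninstall Telescope to remove the overhead; the published config exposes an on/off switch you can drive from the environment (a sketch, assuming a stock config/telescope.php):
// config/telescope.php (stock installs expose this switch)
'enabled' => env('TELESCOPE_ENABLED', true),
Then set TELESCOPE_ENABLED=false in the .env used by your queue workers (or any environment where you don't need the profiling), and the watchers stop recording.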

How many processes should I use in Laravel Horizon?

I have installed Laravel Horizon to manage my queues, and the published config contains these settings:
'local' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['default'],
        'balance' => 'simple',
        'processes' => 3,
        'tries' => 3,
    ],
],
Regarding the processes setting:
How can I determine the number of processes I should use in a real-world app?
Is there a limit?
This is really a guess until your application is in the real world. You need to balance acceptable wait times for jobs to start against projections of how often jobs will be scheduled and how long they will take to run. A reasonable upper limit is heavily dependent on the hardware you're running on and the overall server workload.
It's probably not worth spending too much time on this ahead of time; monitor it and dial it in as you get up and running.
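If you would rather not commit to a fixed number at all, newer Horizon versions can scale the pool for you; a sketch of auto balancing with explicit bounds (the exact key names can vary between Horizon versions):
'production' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['default'],
        // 'auto' shifts processes between queues based on their load.
        'balance' => 'auto',
        'minProcesses' => 1,  // never scale below this
        'maxProcesses' => 10, // hard cap, sized to your hardware
        'tries' => 3,
    ],
],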

How to speed up unit tests in Laravel which use database migrations?

I have many tests in my Laravel app.
They make POST/GET requests and check the responses.
Every test uses the DatabaseMigrations trait.
On my laptop, each test takes about 20 seconds to finish.
I do not want to write separate repositories for different types of queries just so I can mock them later (extra work).
Maybe there is a better solution?
You should use in-memory testing with SQLite. In config/database.php, define a testing connection:
'testing' => [
    'driver' => 'sqlite',
    'database' => ':memory:',
    'prefix' => '',
],
With this, migrations and seeders create and populate the tables very quickly.
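Point the test suite at that connection, for example with a DB_CONNECTION=testing env entry in phpunit.xml, and the existing tests keep working unchanged. A minimal sketch of such a test (the route is hypothetical):
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Tests\TestCase;

class ExampleTest extends TestCase
{
    // Migrations now run against the in-memory SQLite connection,
    // so each test gets a fresh schema in a fraction of the time.
    use DatabaseMigrations;

    public function testHomePageResponds()
    {
        $this->get('/')->assertStatus(200); // hypothetical route
    }
}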
