Laravel jobs leave an idle PostgreSQL process on DEALLOCATE

Every time a delayed job has run on my server, I can see a new idle process in PostgreSQL. Running select * from pg_stat_activity; I can see:
DEALLOCATE pdo_stmt_00000018
Trying to understand it, I noticed that one more line (and one more process in htop) appears each time a delayed queued job has just run.
The last line of my Job is:
$this->log->info("Invitation {$this->invitation->uuid} sent");
And I can see this in my logs, so everything is alright, BUT it does not clean up afterwards: I get an idle process every time with "DEALLOCATE pdo_stmt_00000xxx".
What can I do to avoid this problem? What is causing this?
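For reference, a narrower query against the same pg_stat_activity view lists the lingering connections (the filter on the query text is only illustrative):
SELECT pid, usename, state, query, state_change
FROM pg_stat_activity
WHERE state = 'idle'
  AND query LIKE 'DEALLOCATE pdo_stmt_%';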
Here is my supervisor config:
[program:laravel-queue-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/my/site/artisan queue:work --queue=invitation,default --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/path/to/my/logs/worker.log
Side note: the idle processes disappear when I run php artisan queue:restart

I found a (quick and dirty) workaround: adding this at the end of my job's handle() function:
\DB::disconnect();
sleep(1);
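For context, here is a minimal sketch of where that workaround sits; the class name and the actual sending/logging code are illustrative, not the real job:
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class SendInvitation implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        // ... do the real work here (send the invitation, log it, etc.) ...

        // Workaround: explicitly close the PDO connection so the finished job
        // does not leave an idle "DEALLOCATE pdo_stmt_xxx" process in PostgreSQL.
        DB::disconnect();
        sleep(1);
    }
}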
An issue has been opened on Laravel: https://github.com/laravel/framework/issues/18384

Related

Multiple queue workers: some restart, some don't with ERROR (spawn error)

Our application provides a separate database for each of our users. I have set up an emailing job which users may dispatch to run in the background via a Laravel 5.3 queue. Some users succeed in invoking this facility (so it does work, same codebase) but some users fail.
The users who fail to generate the email are all characterised by the following error when trying to restart all user queues using sudo supervisorctl start all, e.g.:
shenstone-worker:shenstone-worker_02: ERROR (spawn error)
shenstone-worker:shenstone-worker_00: ERROR (spawn error)
shenstone-worker:shenstone-worker_01: ERROR (spawn error)
An example of a user whose email facility works:
meadowwood-worker:meadowwood-worker_02: started
meadowwood-worker:meadowwood-worker_00: started
meadowwood-worker:meadowwood-worker_01: started
The log of all attempted restarts has a load of these spawn errors at the beginning, then all the successful queue restarts at the end.
The worker config files for these two users are:
[program:shenstone-worker]
process_name=%(program_name)s_%(process_num)02d
directory=/var/www/solar3/current
environment=APP_ENV="shenstone"
command=php artisan queue:work --tries=1 --timeout=300
autostart=true
autorestart=true
user=root
numprocs=3
redirect_stderr=true
stdout_logfile=/var/www/solar3/storage/logs/shenstone-worker.log
and
[program:meadowwood-worker]
process_name=%(program_name)s_%(process_num)02d
directory=/var/www/solar3/current
environment=APP_ENV="meadowwood"
command=php artisan queue:work --tries=1 --timeout=300
autostart=true
autorestart=true
user=root
numprocs=3
redirect_stderr=true
stdout_logfile=/var/www/solar3/storage/logs/meadowwood-worker.log
As you see, generically identical. Yet shenstone does not restart its queues to capture requests from its jobs table, but meadowwood does. No logfiles appear in storage.
So why do some of these queues restart successfully, and a load don't?
Looking at the Stack Overflow question "Running multiple Laravel queue workers using Supervisor" inspired me to run sudo supervisorctl status, and yes, I can see a more elaborate explanation of my problem:
shenstone-worker:shenstone-worker_00 FATAL too many open files to spawn 'shenstone-worker_00'
shenstone-worker:shenstone-worker_01 FATAL too many open files to spawn 'shenstone-worker_01'
shenstone-worker:shenstone-worker_02 FATAL too many open files to spawn 'shenstone-worker_02'
As opposed to:
meadowwood-worker:meadowwood-worker_00 RUNNING pid 32459, uptime 0:51:52
meadowwood-worker:meadowwood-worker_01 RUNNING pid 32460, uptime 0:51:52
meadowwood-worker:meadowwood-worker_02 RUNNING pid 32457, uptime 0:51:52
But I still cannot see what I can do to resolve the issue.
If you haven't come up with any other ideas, increasing the open files limit on your server might help. Check this, for instance:
https://unix.stackexchange.com/questions/8945/how-can-i-increase-open-files-limit-for-all-processes
However, having many files open might affect the performance of your system, so you should check why it is happening and prevent it if you can.
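For illustration only (the paths and numbers depend on your distribution and on how Supervisord is started), raising the limit usually means something along these lines:
# /etc/security/limits.conf - raise the per-user open-file limit (example values)
root    soft    nofile    65535
root    hard    nofile    65535

# /etc/supervisor/supervisord.conf - have supervisord require a higher descriptor limit
[supervisord]
minfds=65535
Then restart supervisord (the limits.conf change only applies to new login sessions) and check again with sudo supervisorctl status.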
Thanks Armando.
Yes, you’re right: open files is the issue. I did a blitz on the server to increase these, but ran into problems with MySQL, so I backed out my config changes.
We host over 200 customers, each with their own .env. Each of these has its own workers.
I’ll revisit the problem sometime.

Queue closes after difficult job

After processing the Excel file for about 60 seconds, the queue worker dies, and the job may never finish. Why does the queue die?
Solved by adding --memory=2048: queue:work stops the worker once PHP memory usage goes over the --memory limit (128 MB by default), so a heavy Excel job can push it past that threshold.
php artisan queue:work redis --queue=process --tries=0 --delay=0 --timeout=120 --sleep=5 --memory=2048

"how fix laravel job has been attempted too many times or run too long'"

I use the queue to send SMS messages. It worked well. Then a problem occurred on the Ubuntu server and I had to reinstall Supervisor. This error has appeared since that reinstall.
production.ERROR: Illuminate\Queue\MaxAttemptsExceededException: A queued job has been attempted too many times. The job may have previously timed out. in /home/.../.../vendor/laravel/framework/src/Illuminate/Queue/Worker.php:385
You need to edit config/queue.php:
'retry_after' => 1800, // This parameter is responsible for timeout
Make sure the retry_after value is greater than the time it takes a job to run; this is already mentioned in the queue documentation.
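As a sketch, the setting lives on the connection entry in config/queue.php; the connection name and the 1800 value here are only examples, so put it on whichever connection your workers actually consume:
'connections' => [

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        // Must be longer than your slowest job; keep the worker's
        // --timeout shorter than this value.
        'retry_after' => 1800,
    ],

],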
I run my queue under Supervisor on a Fedora Linux server. In my supervisord.conf:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/app/current/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=webapp
numprocs=8
redirect_stderr=true
stdout_logfile=/var/app/current/worker.log
stopwaitsecs=36000
With this config file I got the error you show, but when I removed the --sleep and --tries flags, like this:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/app/current/artisan queue:work
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=webapp
numprocs=8
redirect_stderr=true
stdout_logfile=/var/app/current/worker.log
stopwaitsecs=36000
the error was gone.

Laravel queue using supervisord ignoring tries limit

I am running a Laravel queue monitored by Supervisord:
php /home/path/to/artisan queue:listen --env=production --timeout=0 --sleep=5 --tries=3
However, if a job is failing, it keeps retrying indefinitely: the 'attempts' count in the jobs table shows 255, which is the maximum for the MySQL field, but it has actually made thousands of attempts.
If the jobs table has 'attempts' marked at 255 and --tries set to 3, why does it continue to run this job in the queue?
You should use artisan queue:work --daemon with Supervisord; queue:listen is meant for development (it always picks up new code without a restart). To understand the difference you can read the commands' code.
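For example, the Supervisord program entry could look like this instead (the path and options are taken from the question; the --daemon flag applies to older Laravel versions, where queue:work does not stay resident by default):
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/path/to/artisan queue:work --daemon --env=production --timeout=0 --sleep=5 --tries=3
autostart=true
autorestart=true
redirect_stderr=true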

Where is this Laravel 5.1 queue listener coming from?

Whenever I check for running processes, I notice this process running:
/usr/local/Cellar/php55/5.5.24/bin/php artisan queue:work --queue=default --delay=0 --memory=128 --sleep=3 --tries=0 --env=local
I notice it gets a new process ID every few seconds. In my /app/Console/Kernel.php file there is nothing in the schedule() method, and there is nothing in my crontab either. What is causing this listener process to run, restart, and run again as if controlled by a cron job?
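One generic way to track it down (not specific to this setup) is to look at the parent of that process; a queue:listen process, for instance, re-spawns queue:work every few seconds:
# Find the worker's PID and its parent PID
ps -o pid,ppid,command -p "$(pgrep -d, -f 'artisan queue:work')"
# Then inspect the parent process reported in the PPID column
ps -o pid,command -p <PPID>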
