I have Supervisor managing some daemon queue workers with this configuration:
[program:jobdownloader]
process_name=%(program_name)s_%(process_num)03d
command=php /var/www/microservices/ppsatoms/artisan queue:work ppsjobdownloader --daemon --sleep=0
autostart=true
autorestart=true
user=root
numprocs=50
redirect_stderr=true
stdout_logfile=/mnt/##sync/jobdownloader.log
Sometimes some of the workers hang (they keep running but stop picking up queue messages), and Supervisor does not automatically restart them, so I have to monitor them and restart them manually.
Is there something wrong with the configuration? What can I do to prevent this from happening in the future?
Update:
Run the workers as normal (non-daemon) processes so that Supervisor can restart them whenever they exit:
[program:jobdownloader]
process_name=%(program_name)s_%(process_num)03d
command=php /var/www/microservices/ppsatoms/artisan queue:work ppsjobdownloader --sleep=0
autostart=true
autorestart=true
user=root
numprocs=50
redirect_stderr=true
stdout_logfile=/mnt/##sync/jobdownloader.log
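Not part of the original fix, but a variation that may help with the hanging workers: on Laravel versions that support it, --max-time (or --max-jobs) makes every worker exit on its own after a while, so autorestart=true always brings up a fresh process. A minimal sketch, reusing the same paths as above:

[program:jobdownloader]
process_name=%(program_name)s_%(process_num)03d
; --max-time=3600 makes each worker exit after an hour; Supervisor then restarts it
command=php /var/www/microservices/ppsatoms/artisan queue:work ppsjobdownloader --sleep=0 --max-time=3600
autostart=true
autorestart=true
user=root
numprocs=50
redirect_stderr=true
stdout_logfile=/mnt/##sync/jobdownloader.log

After editing the file, reload Supervisor so the change is picked up:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart jobdownloader:*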
Related
I'm using Supervisor with Laravel on Ubuntu. There are many jobs in the jobs table, but they are not being processed.
/etc/supervisor/conf.d/queue-worker.conf:
[program:queue-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/site/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=root
numprocs=8
redirect_stderr=true
stdout_logfile=/var/www/site/storage/logs/worker.log
stopwaitsecs=3600
sudo supervisorctl status queue-worker:*
queue-worker:queue-worker_00 RUNNING pid 1527816, uptime 0:36:16
queue-worker:queue-worker_01 RUNNING pid 1527820, uptime 0:36:16
queue-worker:queue-worker_02 RUNNING pid 1527804, uptime 0:36:17
queue-worker:queue-worker_03 RUNNING pid 1527802, uptime 0:36:17
queue-worker:queue-worker_04 RUNNING pid 1527815, uptime 0:36:16
queue-worker:queue-worker_05 RUNNING pid 1527793, uptime 0:36:17
queue-worker:queue-worker_06 RUNNING pid 1527835, uptime 0:36:16
queue-worker:queue-worker_07 RUNNING pid 1527807, uptime 0:36:17
I also ran this command:
sudo php artisan queue:restart
but it didn't help.
You said in a comment that you are using the database queue. How do you expect to process jobs if you are running php artisan queue:work sqs? You are telling the worker to consume the SQS queue, not the database queue.
So use command=php /var/www/site/artisan queue:work database --sleep=3 --tries=3 (or run queue:work without naming the connection, but then the QUEUE_CONNECTION env variable must be database).
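A minimal sketch of the full change, assuming the .env file sits in the project root next to artisan:

# .env
QUEUE_CONNECTION=database

# /etc/supervisor/conf.d/queue-worker.conf
command=php /var/www/site/artisan queue:work database --sleep=3 --tries=3

# clear the cached config and restart the workers so the change takes effect
php /var/www/site/artisan config:clear
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart queue-worker:*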
I'm facing an issue with Supervisor and Kubernetes.
Below is my Supervisor config for the Laravel queue worker.
[program:queue-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work --sleep=10 --tries=3 --max-time=3600 --timeout=90 --daemon
user=nobody
autostart=true
autorestart=false
stopasgroup=true
killasgroup=true
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/html/queue-worker.log
stopwaitsecs=50
stopsignal=TERM
When my container is terminating, Supervisor keeps running in the background; it is not listening for the SIGTERM from Kubernetes.
If I stop the workers from a Kubernetes preStop hook with /usr/bin/supervisorctl stop queue-worker:*, they stop immediately without waiting for the 50 seconds set in the config.
To resolve the issue, I started Nginx and PHP-FPM together with the worker under Supervisor. That way supervisord has PID 1 and receives the SIGTERM from Kubernetes before the pod stops.
Note: Kubernetes sends the SIGTERM signal only to the process with PID 1; child processes do not receive it. If you want to send SIGTERM or any other POSIX signal to another process, you can use a preStop hook.
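A minimal sketch of that setup, assuming a container entrypoint script (the name docker-entrypoint.sh is just an example):

#!/bin/sh
# docker-entrypoint.sh: exec replaces the shell, so supervisord becomes PID 1
# and receives the SIGTERM that Kubernetes sends on pod termination
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf

Also note that the default Kubernetes termination grace period is 30 seconds, so with stopwaitsecs=50 you would want to raise terminationGracePeriodSeconds in the pod spec, otherwise the pod is killed before Supervisor finishes waiting.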
I am trying to process a dispatched job with Laravel queue:work using Supervisor on a live server (CentOS 7). Supervisor is running, but the job is not processing, and I am getting the error shown below.
My worker file is:
[program:queue-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/maomin/public_html/bvend.xyz/artisan queue:work sqs --sleep=3 --tries=3 --max-time=3600 --daemon
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=apache
numprocs=8
redirect_stderr=true
stdout_logfile=/home/maomin/public_html/bvend.xyz/w.log
stopwaitsecs=3600
The log file (/home/maomin/public_html/bvend.xyz/w.log) shows the error below:
The "--max-time" option does not exist.
I have tried almost every solution I could find on Google, but no luck.
I solved the issue by doing the following:
I removed --max-time=3600 (my Laravel version does not support it) and replaced 'sqs' with 'database', since I am using the database for queued jobs. The command changed from
command=php /home/maomin/public_html/bvend.xyz/artisan queue:work sqs --sleep=3 --tries=3 --max-time=3600 --daemon
to
command=php /home/maomin/public_html/bvend.xyz/artisan queue:work database --sleep=3 --tries=3 --daemon
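As a general check (not from the original answer), you can list the options your installed Laravel version actually supports before putting them in the Supervisor command, then reload Supervisor so the corrected command is used:

php /home/maomin/public_html/bvend.xyz/artisan queue:work --help
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart queue-worker:*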
I have two sites on a single DigitalOcean droplet, production and staging. I've installed Supervisor according to the docs (sudo apt-get install supervisor) and configured two configuration files inside /etc/supervisor/conf.d as follows:
laravel-worker.conf:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app.com/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=root
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/app.com/worker.log
stopwaitsecs=3600
laravel-worker-staging.conf:
[program:laravel-worker-staging]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/test.app.com/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=root
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/test.app.com/worker.log
stopwaitsecs=3600
Once I'm done testing, I will disable the server block for test.app.com to prevent random users from visiting the site. Do I need to do anything with Supervisor, since the workers will still be running in the background for the test site, or with laravel-worker-staging.conf, to prevent them from unnecessarily using up server resources?
Looks like I could just issue the following command:
supervisorctl stop laravel-worker-staging
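Stopping the whole group explicitly with the wildcard also works, and a plain stop does not survive a supervisord restart or reboot. A minimal sketch, not from the original answer:

# stop every process in the staging group
sudo supervisorctl stop laravel-worker-staging:*

# to keep it from coming back on reboot, set autostart=false in
# laravel-worker-staging.conf and reload the configuration
sudo supervisorctl reread
sudo supervisorctl update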
I am using Supervisor with Laravel. Sometimes my workers run fine, and sometimes I get this error:
FATAL Exited too quickly (process log may have details).
This is my Supervisor config file:
[program:laravel-worker-mail]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/new-project/artisan queue:work mongodb --sleep=10 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/new-project/storage/logs/worker.log
Please suggest a solution if anybody has a good idea about this.
In my case, the process was exiting very quickly because it finished before startsecs had elapsed, and since startsecs wasn't defined, Supervisor used the default, which is 1 second.
Setting startsecs=0 fixed my issue.
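For illustration, the line just goes into the existing program section from the question:

; in [program:laravel-worker-mail]: 0 means Supervisor never treats a quick exit as a startup failure
startsecs=0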
I solved the problem myself after searching and trying a number of approaches; I found my solution by adding --daemon to the command.
Updated config below:
[program:laravel-worker-mail]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/new-project/artisan queue:work mongodb --sleep=10 --tries=3 --daemon
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/new-project/storage/logs/worker.log
For anyone still having the same issue despite following the accepted answer: it turned out that I was referencing the wrong queue connection, "sqs", instead of the one I was actually using, which was "database".
command=php /var/www/html/new-project/artisan queue:work database --sleep=10 --tries=3 --daemon
[program:laravel-worker-mail]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/new-project/artisan queue:work database --sleep=10 --tries=3 --daemon
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/new-project/storage/logs/worker.log
I also got this message when trying to run php artisan horizon inside a directory that didn't actually have artisan available to run.
I feel a bit silly posting this, but I ran into this issue when cloning a server for a new website: the Supervisor conf file was pointing the command at the wrong directory, because the new website was hosted in a different location.
So, using the example, my /etc/supervisor/conf.d/laravel-worker.conf needed to be changed from something like this:
[program:laravel-worker-mail]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/new-project/artisan queue:work database --sleep=10 --tries=3 --daemon
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/new-project/storage/logs/worker.log
To this:
[program:laravel-worker-mail]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/new-project-2/artisan queue:work database --sleep=10 --tries=3 --daemon
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/new-project-2/storage/logs/worker.log
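Whichever of these fixes applies, Supervisor only sees the edited conf file after a reload, so a sequence like the following (the group name matches the examples above) is usually needed:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart laravel-worker-mail:*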