Supervisor running queue:work but not executing queued Laravel jobs - laravel-5

I've set up Supervisor to run multiple instances of the following command:
php artisan queue:work --queue=default--tries=3
My default queue connection is currently database, as a proof of concept before migrating to SQS.
My laravel-work.ini file looks like this:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php ~/www/artisan queue:work --queue=default--tries=3 --daemon
autostart=true
autorestart=true
numprocs=5
stdout_logfile=~/www/storage/logs/workers.log
My supervisorctl output is as follows.
laravel-worker:laravel-worker_00 RUNNING pid 34697, uptime 0:26:59
laravel-worker:laravel-worker_01 RUNNING pid 34698, uptime 0:26:59
laravel-worker:laravel-worker_02 RUNNING pid 34699, uptime 0:26:59
laravel-worker:laravel-worker_03 RUNNING pid 34700, uptime 0:26:59
laravel-worker:laravel-worker_04 RUNNING pid 34701, uptime 0:26:59
Not sure what I'm missing, but the jobs in the database aren't getting processed.

--queue=default--tries=3
was actually a typo, so presumably artisan was trying to process a non-existent queue with the name default--tries=3.
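For reference, a sketch of the corrected Supervisor command line, with the missing space restored so the worker reads from the default queue and retries each job up to three times (the /path/to/your/app path is a placeholder, not the path from the question):
command=php /path/to/your/app/artisan queue:work --queue=default --tries=3 --daemon
After editing the config, Supervisor only picks up the change once it re-reads it (see the supervisorctl reread/update commands in the answer further down).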

Related

Send Laravel logs from Supervisor to CloudWatch

I have a Laravel-based queue system running on the AWS worker tier. I am running the queue worker process using Supervisor with the following configuration.
[program:program-name]
process_name=%(program_name)s_%(process_num)02d
command=php /var/app/current/artisan queue:work --tries=1 --queue=cron_default_dev
autostart=true
autorestart=true
user=root
numprocs=1
redirect_stderr=true
stdout_logfile=/var/app/current/storage/logs/laravel.log
Everything is working fine except for the logging. I use CloudWatch for logging, via this package. I have configured Laravel commands to execute on different schedules. The logging works fine for a while and then only works from the command file. From the command file I am dispatching jobs as follows:
$job = (new MyJobClass($this->argument()))
    ->onQueue(config('queue.connections.sqs.queue'));
$this->dispatch($job);
There is a stdout_logfile=/var/app/current/storage/worker.log entry in the Supervisor config. How can I send these logs to CloudWatch?

Supervisor 3.3.1 running but not processing jobs

I have set up Supervisor:
[program:laravel_queue]
process_name=%(program_name)s_%(process_num)02d
command=php /usr/local/bin/run_queue.sh
startsecs = 0
autostart=true
autorestart=true
user=www-data
numprocs=3
redirect_stderr=true
stderr_logfile=/var/log/laraqueue.err.log
stdout_logfile=/var/log/laraqueue.out.log
run_queue.sh
#!/bin/bash
php /var/www/html/application/artisan --timeout=240 queue:work --tries=1
The log file looks like this, but the jobs table is filling up and no jobs are being processed. Any help in this regard is appreciated.
I made some changes to get it working. I am not really sure what actually made it work, but here are the steps:
I removed the dependency on run_queue.sh and moved the command inside laravel_queue.conf:
[program:laravel_queue]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/application/artisan queue:work --tries=1
startsecs = 0
autostart=true
autorestart=true
user=www-data
numprocs=3
redirect_stderr=true
stderr_logfile=/var/log/laraqueue.err.log
stdout_logfile=/var/log/laraqueue.out.log
Also, if you notice, I changed the command slightly from
--timeout=240 queue:work --tries=1
to
queue:work --tries=1 (this is what made it work, in my opinion).
After making these changes I ran the following commands:
sudo supervisorctl reread && sudo supervisorctl update
sudo supervisorctl start laravel_queue:*
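If the 240-second timeout is still wanted, a hedged sketch of the same command with the option placed after the command name, matching how the other worker configs in this thread pass it:
command=php /var/www/html/application/artisan queue:work --tries=1 --timeout=240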

PHP error pushes job into the delayed queue while --tries=0 is used

I am using Supervisor to run jobs on my Lumen 5.2 setup. My Supervisor conf looks like this:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work --queue=server_level,app_level --tries=0 -vvv --daemon
autostart=true
autorestart=true
user=web_user
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/app/storage/logs/worker.log
When a job fails due to a PHP error, Lumen inserts it into the delayed queue and tries to run it indefinitely. I have used --tries=0 and expect the job to be marked as failed on any error, but it keeps re-running forever.
Even if you don't specify the --tries option, it defaults to 0, which means jobs will be attempted indefinitely until they succeed. If you want to prevent jobs from running again after a failure, set the value to 1:
--tries=1
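Applied to the config above, the command line would become something like the sketch below; with --tries=1, a job that throws once should be marked as failed (assuming failed-job logging is set up) rather than retried forever:
command=php /var/www/app/artisan queue:work --queue=server_level,app_level --tries=1 -vvv --daemon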

Laravel 5 run queue:work for more than one job

I have queue:work running every minute.
Laravel 5 queue:work runs only one job at a time.
How can I run 5 jobs every minute?
Assuming your app is running on Linux, there are a few ways to do this:
Simplest solution: set up 5 queue:work processes every minute in cron (crontab -e):
* * * * * php /path/to/your/app/artisan queue:work <queue_driver> --queue=<queue_name> --once
More scalable solution (recommended): use a process manager such as Supervisor to run multiple workers, as per the documentation.
Configuration example:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your/app/artisan queue:work <queue_driver> --queue=<queue_name> --sleep=60 --timeout=90 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=5
redirect_stderr=true
stdout_logfile=/var/log/supervisor/laravel-worker.stdout.log
stderr_logfile=/var/log/supervisor/laravel-worker.stderr.log
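After saving the config, Supervisor still has to load and start the new program; a sketch of the usual commands (mirroring the ones used in the answer above), with the program name from this example:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*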

How to stop the artisan queue:listen command from eating all CPU?

I'm running the queue listener with the command php artisan queue:listen --sleep=10 --tries=3 on a Windows 7 laptop. My computer has a 4-core CPU and the process constantly eats up 25% of my CPU. I tried increasing the sleep parameter, but it doesn't help at all. There are no jobs in the queue, and I'm using the database queue. How can I solve this? My computer is getting very hot.
Run it as a daemon to stop it from booting a fresh instance of your app for every poll:
php artisan queue:work connection --daemon
From the docs:
The queue:work Artisan command includes a --daemon option for forcing the queue worker to continue processing jobs without ever re-booting the framework. This results in a significant reduction of CPU usage when compared to the queue:listen command
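For the database queue from the question, that might look like the sketch below, keeping the original --sleep and --tries values (database is assumed to be the connection name in config/queue.php):
php artisan queue:work database --daemon --sleep=10 --tries=3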
