Stopping a Laravel Queue after it has finished running

I am running a web application on Laravel 5.5. I have a requirement to run the jobs in a queue and then stop the queue. The queue cannot be allowed to stay running.
I am running the command below, but it just carries on endlessly.
php artisan queue:work --tries=3
If I use supervisord, can I stop the queue from inside the Laravel application?
Any help is really appreciated.

From the documentation:
Processing All Queued Jobs & Then Exiting
The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully. This option can be useful when processing Laravel queues within a Docker container if you wish to shut down the container after the queue is empty:
Try php artisan queue:work --tries=3 --stop-when-empty
https://laravel.com/docs/8.x/queues#running-the-queue-worker
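For example, if the goal is to drain the queue on a schedule rather than keep a daemon alive, you can start a finite worker from cron and let it exit once the queue is empty. A minimal sketch, assuming /var/www/app is the project root (run php artisan queue:work --help first to confirm your Laravel version supports the flag; it did not exist in early 5.x releases):

*/5 * * * * cd /var/www/app && php artisan queue:work --tries=3 --stop-when-empty >> /dev/null 2>&1

Each run processes whatever jobs are queued and then terminates, so no worker stays running between runs.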

Related

Multiple queue workers: some restart, some don't with ERROR (spawn error)

Our application provides a separate database for each of our users. I have set up an emailing job which users may dispatch to run in the background via a Laravel 5.3 queue. Some users succeed in invoking this facility (so it does work - same codebase) but some users fail.
The users who fail to generate the email are all characterised by the following error when trying to restart all user queues using sudo supervisorctl start all, e.g.:
shenstone-worker:shenstone-worker_02: ERROR (spawn error)
shenstone-worker:shenstone-worker_00: ERROR (spawn error)
shenstone-worker:shenstone-worker_01: ERROR (spawn error)
An example of a user whose email facility works:
meadowwood-worker:meadowwood-worker_02: started
meadowwood-worker:meadowwood-worker_00: started
meadowwood-worker:meadowwood-worker_01: started
The log of all attempted restarts has a load of these spawn errors at the beginning, then all the successful queue restarts at the end.
The worker config files for these two users are:
[program:shenstone-worker]
process_name=%(program_name)s_%(process_num)02d
directory=/var/www/solar3/current
environment=APP_ENV="shenstone"
command=php artisan queue:work --tries=1 --timeout=300
autostart=true
autorestart=true
user=root
numprocs=3
redirect_stderr=true
stdout_logfile=/var/www/solar3/storage/logs/shenstone-worker.log
and
[program:meadowwood-worker]
process_name=%(program_name)s_%(process_num)02d
directory=/var/www/solar3/current
environment=APP_ENV="meadowwood"
command=php artisan queue:work --tries=1 --timeout=300
autostart=true
autorestart=true
user=root
numprocs=3
redirect_stderr=true
stdout_logfile=/var/www/solar3/storage/logs/meadowwood-worker.log
As you can see, apart from the names they are identical. Yet shenstone does not restart its queues to pick up jobs from its jobs table, but meadowwood does. No log files appear in storage.
So why do some of these queues restart successfully, and a load don't?
The Stack Overflow question Running multiple Laravel queue workers using Supervisor inspired me to run sudo supervisorctl status, and I can now see a fuller explanation of my problem:
shenstone-worker:shenstone-worker_00 FATAL too many open files to spawn 'shenstone-worker_00'
shenstone-worker:shenstone-worker_01 FATAL too many open files to spawn 'shenstone-worker_01'
shenstone-worker:shenstone-worker_02 FATAL too many open files to spawn 'shenstone-worker_02'
As opposed to:
meadowwood-worker:meadowwood-worker_00 RUNNING pid 32459, uptime 0:51:52
meadowwood-worker:meadowwood-worker_01 RUNNING pid 32460, uptime 0:51:52
meadowwood-worker:meadowwood-worker_02 RUNNING pid 32457, uptime 0:51:52
But I still cannot see what I can do to resolve the issue.
If you haven't found any other solution, increasing the open files limit on your server might help. See this, for instance:
https://unix.stackexchange.com/questions/8945/how-can-i-increase-open-files-limit-for-all-processes
However, having many files open can affect the performance of your system, so you should also check why it is happening and prevent it if you can.
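If you want to try that, here is a rough sketch of the usual knobs (the values are illustrative, not recommendations):

# Check the open-files limit of the running supervisord process
grep 'open files' /proc/$(pgrep -o supervisord)/limits

# /etc/security/limits.conf - per-user limits (illustrative values)
root  soft  nofile  65536
root  hard  nofile  65536

# /etc/supervisor/supervisord.conf - supervisord itself can demand a
# minimum number of available file descriptors at startup
[supervisord]
minfds=65536

Each supervised process holds log and socket descriptors open, so many worker programs with numprocs=3 apiece can exhaust the default limits quickly.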
Thanks Armando.
Yes, you’re right: open files is the issue. I did a blitz on the server to increase these, but ran into problems with MySQL, so I backed out my config changes.
We host over 200 customers, each with their own .env. Each of these has its own workers.
I’ll revisit the problem sometime.

Laravel 5.6 queue restart CPU usage

I installed my Laravel 5.6 application on a shared hosting service, but my hosting company is not happy with the CPU usage of my application. The high CPU usage shows up when the queue worker is killed, whether I kill it manually or via a cron job.
Can someone explain to me why 'php artisan queue:restart' takes so much CPU time? And if possible, how can I reduce it?
Restart:
cd /home/xxxxxx/rdw_laravel/; /usr/local/bin/php72 artisan queue:restart >/dev/null 2>&1
Activate queue worker:
cd /home/xxxxxx/rdw_laravel/; /usr/local/bin/php72 artisan queue:work --daemon
You seem to have memory leaks, so read up on memory management.
Straight from the documentation on how to run queue worker:
Daemon queue workers do not "reboot" the framework before processing each job. Therefore, you should free any heavy resources after each job completes. For example, if you are doing image manipulation with the GD library, you should free the memory with imagedestroy when you are done.
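As a hedged illustration of that advice (the job class and file paths are hypothetical), a GD-based job might release its image resources explicitly before the long-running worker picks up the next job:

<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

// Hypothetical job: a daemon worker does not reboot the framework
// between jobs, so heavy resources must be freed by hand.
class ResizeImage implements ShouldQueue
{
    public function handle()
    {
        $image = imagecreatefromjpeg(storage_path('app/photo.jpg'));
        $thumb = imagescale($image, 200); // 200px-wide thumbnail

        imagejpeg($thumb, storage_path('app/thumb.jpg'));

        // Free the GD memory explicitly when done.
        imagedestroy($image);
        imagedestroy($thumb);
    }
}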
An alternative is to use queue:listen instead; the difference is that :work boots the framework once and runs forever, while :listen boots it before each job.
Note: queue:work and queue:work --daemon are equivalent, so you do not need to run cron with the --daemon flag.
Note: why do you run :restart so often? I doubt you update your code every day, so use :restart only when you deploy new code.
Related
What is the difference between queue:work --daemon and queue:listen

Laravel queue using supervisord ignoring tries limit

I am running a laravel queue being monitored by Supervisord:
php /home/path/to/artisan queue:listen --env=production --timeout=0 --sleep=5 --tries=3
However, if a job is failing, it retries indefinitely: the 'attempts' value in the jobs table shows 255, which is the maximum the MySQL field can hold, but the job has actually been attempted thousands of times.
If the jobs table has 'attempts' marked at 255, and 'tries' set as 3 - why is it continuing to run this job in the queue?
You should use artisan queue:work --daemon with supervisord; queue:listen is meant for development (it always picks up new code without a restart). To understand the difference, you can read the code of the two commands.
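For reference, a minimal Supervisor program along those lines (paths and names are illustrative, following the config style shown earlier in this thread):

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/path/to/artisan queue:work --env=production --sleep=5 --tries=3 --daemon
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/home/path/to/storage/logs/worker.log

With queue:work the --tries limit is enforced per job, and once it is exceeded the job should land in the failed_jobs table (assuming you have created it with php artisan queue:failed-table).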

Where is this Laravel 5.1 queue listener coming from?

Whenever I check for running processes, I notice this process running:
/usr/local/Cellar/php55/5.5.24/bin/php artisan queue:work --queue=default --delay=0 --memory=128 --sleep=3 --tries=0 --env=local
I notice it gets a new process ID every few seconds. In my /app/Console/Kernel.php file I have nothing in the schedule() method. Furthermore, there is nothing in my crontab. What is causing this listener process to run, exit, and run again as if controlled by a cron job?
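One way to track this down is to inspect the worker's parent process: queue:listen works by repeatedly spawning short-lived queue:work child processes, which would explain both the flags you see and the constantly changing PID. A sketch (output columns may vary by OS):

# Show the worker's PID and its parent PID
ps ax -o pid,ppid,command | grep 'artisan queue:work'

# Then inspect the parent to see what spawned it
ps -o pid,command -p <ppid-from-above>

If the parent turns out to be php artisan queue:listen, look for whatever starts that listener (a forgotten terminal session, Supervisor, or another process monitor) rather than a cron entry.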

Supervisor on Heroku

I use a remote server to process beanstalkd queues that I would like to use on my application running on Heroku. Is there any way to run Supervisor to monitor that the queue command (Laravel: php artisan queue:listen) is running?
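For what it's worth, the usual approach on Heroku is to skip Supervisor entirely and declare the queue command as a worker dyno in the app's Procfile; Heroku's dyno manager then keeps the process running and restarts it if it crashes. A minimal sketch:

web: vendor/bin/heroku-php-apache2 public/
worker: php artisan queue:listen --tries=3

The worker dyno is then scaled with heroku ps:scale worker=1 instead of being supervised on the dyno itself.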
