I am having a hard time figuring out why this does not run under Supervisor but works fine when I run it directly in my project.
When I try to run
php artisan queue:work redis
in my project, it returns:
but if I run it via Supervisor, I get this log:
This is my laravel-worker program configuration inside /etc/supervisor/conf.d:
Thank you!
I see that Supervisor will launch 8 worker processes, which is different from running a single process the first way.
You can try specifying numprocs=1 and see if that works.
If it does, please check your code logic for anything that is not safe to run across multiple worker processes.
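For reference, this is roughly what a single-process config could look like (paths and user are placeholders to adapt to your project):

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your-project/artisan queue:work redis --sleep=3 --tries=3
numprocs=1
autostart=true
autorestart=true
user=your-user
redirect_stderr=true
stdout_logfile=/path/to/your-project/storage/logs/worker.log

After editing the file, run supervisorctl reread, supervisorctl update and supervisorctl restart laravel-worker:* so the change takes effect.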
Related
I'm running Laravel 7 on XAMPP, Windows 10.
I open about 10 terminals, each running this command:
php artisan queue:work --timeout=0 --tries=1
Jobs are usually executed once, but sometimes twice, and I caught this job being executed by 3 workers:
I can't figure out why, and I can't reproduce the error on purpose.
I created a table to store events, and later I caught this triple execution:
So, I'm sure that the job is dispatched only once.
I am running a web application on Laravel 5.5. I have a requirement to run the jobs in a queue and then stop the worker; the queue worker cannot be allowed to stay running.
I am running the command below, but it just carries on endlessly.
php artisan queue:work --tries=3
If I use supervisord, can I stop the queue from inside the Laravel application?
Any help is really appreciated.
From the documentation:
Processing All Queued Jobs & Then Exiting

The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully. This option can be useful when working with Laravel queues within a Docker container if you wish to shut down the container after the queue is empty.
Try php artisan queue:work --tries=3 --stop-when-empty
https://laravel.com/docs/8.x/queues#running-the-queue-worker
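If the queue really must never stay running, one option (a sketch assuming a cron-driven setup; the path and schedule are placeholders) is to let cron start the worker periodically so that each run drains the queue and then exits:

* * * * * cd /path/to/your-project && php artisan queue:work --tries=3 --stop-when-empty >> /dev/null 2>&1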
Horizon runs fine, but recently, after a deploy, Supervisor and the queue workers do not start back up again, and the Horizon GUI shows "Inactive".
To get them running again I can:
- restart the daemon worker from within Forge, or
- restart Supervisor: /etc/init.d/supervisor restart
My deploy script has php artisan horizon:terminate within it. I have also tried reset/purge and a combination thereof.
When I run terminate from the command line with Horizon inactive, it seems to do nothing. When I run the same command with Horizon active, it shuts Horizon down, but the daemon is not brought back up by Supervisor.
The daemon runs without any errors throughout all of this.
Should terminate take down and bring up the service, or is that the job of the daemon itself?
Running horizon:terminate will kill the daemon; when the daemon is killed, Supervisor will notice and boot up a new one. You can clearly see this if you monitor your server with htop while running the terminate command.
If a long-running job is in progress, it will finish the current job before exiting. Terminate is generally there to reboot the process so you can be certain the new code is loaded into Horizon; it should be the last step in Envoyer or a similar deployment tool.
It sounds like something is wrong in your setup. Does the Horizon process run before you call terminate (again, check htop)? And what happens when the command is called manually?
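For comparison, a typical Horizon entry under /etc/supervisor/conf.d looks something like this (paths and user are placeholders); the important part is autorestart=true, which is what should bring the daemon back up after terminate kills it:

[program:horizon]
process_name=%(program_name)s
command=php /home/forge/example.com/artisan horizon
autostart=true
autorestart=true
user=forge
redirect_stderr=true
stdout_logfile=/home/forge/example.com/horizon.log
stopwaitsecs=3600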
I installed my Laravel 5.6 application on a shared hosting service, but my hosting company is not happy with the CPU usage of my application. The high CPU usage shows up when the queue worker is killed, no matter whether I kill it manually or via a cron job.
Can someone explain to me why php artisan queue:restart takes so much CPU time? And, if possible, how I can reduce it?
Restart:
cd /home/xxxxxx/rdw_laravel/; /usr/local/bin/php72 artisan queue:restart >/dev/null 2>&1
Activate queue worker:
cd /home/xxxxxx/rdw_laravel/; /usr/local/bin/php72 artisan queue:work --daemon
You seem to have memory leaks, so read up on memory management.
Straight from the documentation on how to run the queue worker:
Daemon queue workers do not "reboot" the framework before processing each job. Therefore, you should free any heavy resources after each job completes. For example, if you are doing image manipulation with the GD library, you should free the memory with imagedestroy when you are done.
An alternative is to use queue:listen instead; the difference is that :work boots the framework once and runs forever, while :listen boots it up before each job.
Note: queue:work and queue:work --daemon are equal, so you do not have to run the cron command with the --daemon flag.
Note: Why do you run :restart so often? I doubt that you update your code every day, so use :restart only when you update the code.
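If memory growth really is the culprit, one thing worth trying (a sketch based on your existing cron line; the limit value is illustrative) is to give the worker a memory ceiling with the built-in --memory option, so it exits cleanly and gets started again instead of leaking:

cd /home/xxxxxx/rdw_laravel/; /usr/local/bin/php72 artisan queue:work --memory=128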
Related
What is the difference between queue:work --daemon and queue:listen
I use a remote server to process the beanstalkd queues that I would like to use with my application running on Heroku. Is there any way to run Supervisor to monitor that the queue command (Laravel: php artisan queue:listen) keeps running?
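What I have in mind is something along these lines on the remote server (the program name, paths and user are placeholders):

[program:beanstalkd-listener]
process_name=%(program_name)s
command=php /path/to/your-project/artisan queue:listen beanstalkd --tries=3
autostart=true
autorestart=true
user=your-user
redirect_stderr=true
stdout_logfile=/path/to/your-project/storage/logs/queue.log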