How to start/stop a specific Horizon supervisor? - laravel

I have a Horizon question. Does Horizon have a command with which I can stop/pause a specific supervisor?
For example: I have 5 supervisors running, named supervisor-1, supervisor-2 ... supervisor-5, in horizon.php.
What I want to achieve is to pause/stop supervisor-1 temporarily and enable it again later.
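For illustration, a supervisor in that setup would be defined in config/horizon.php roughly like this (the names and values below are assumptions, not the asker's actual config):

// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'processes' => 3,
        ],
        // ... supervisor-2 through supervisor-5 defined the same way
    ],
],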

Short answer: you can't. The master supervisor will restart any stopped supervisor defined in the Horizon config file. Laravel does support managing every single supervisor manually (though this is not recommended). Mohamed Said wrote a deep dive about how Horizon works.
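For reference, Horizon does ship commands to pause and resume processing, but they apply to every supervisor at once rather than to an individual one (at least in the Horizon versions this answer targets):

php artisan horizon:pause      # pause job processing across all supervisors
php artisan horizon:continue   # resume processing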

Related

How to stop a laravel SyncQueue

I've tried queue:clear, and even tried removing all jobs, but when I add them again the former queue starts working again, as evidenced by the timely log entries. I'd just like to start fresh but couldn't find any way to actually stop the former queue.
You can use
php artisan queue:restart
This signals all running queue workers to gracefully exit after finishing their current job, so that you can start fresh with a new worker process.
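If your workers run under a process manager such as Supervisor, they will be brought back up automatically after exiting; otherwise (assuming a typical setup), start a fresh worker yourself:

php artisan queue:work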

Laravel 7 - Stop Processing Jobs and Clear Queue

I have a production system on AWS and use Laravel Forge. There is a single default queue that is processing Jobs.
I've created a number of jobs and now wish to delete them (as they take many hours to complete and I realize my input data was bad). I created a new job with good data, but it won't be processed until all the others have finished.
How can I delete all jobs?
It was previously set up using the redis queue driver. I could not figure out how to delete jobs, so I switched the driver to database and restarted the server, thinking that this would at least get the jobs to stop processing. However, much to my dismay, they continue to be processed :-(
I even deleted the worker from the Forge UI and restarted the server: the jobs still process.
Why do jobs continue to be processed?
How can I stop them?
You can use:
php artisan queue:clear redis
It will clear all jobs from the default queue on the redis connection. If you put jobs in another queue, then you should specify the queue name, for example:
php artisan queue:clear redis --queue=custom_queue_name
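As for why the jobs kept processing after the driver switch: queue workers are long-lived processes that keep the booted configuration in memory, so they continue consuming from the connection they started with until they are restarted (see the documentation quote further down this page). After clearing the queue, also restart the workers:

php artisan queue:restart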

Queued jobs are somehow being cached with Laravel Horizon using Supervisor

I have a really strange thing happening with my application that I am really struggling to debug and was wondering if anyone had any ideas or similar experiences.
I have an application running on Laravel v5.8 which uses Horizon to run the queued jobs on an Ubuntu 16.04 server. I have a feature that archives an account, which is passed off to the queue.
I noticed that it didn't seem to be working, despite working locally and having had the tests passing for the feature.
My last attempt to debug was commenting out the entire handle method and adding Log::info('wtf?!'); to see if even that would work. It didn't; in fact, it was still trying to run the commented-out code. I decided to restart Supervisor and tried again. At last, I managed to get 'wtf?!' written to my logs.
I have since been unable to deploy my code without having to restart Supervisor in order for it to recognise the 'new' code.
Does Horizon cache the jobs in any way? I can't see anything in the documentation.
Has anyone experienced anything like this?
Any ideas on how I can stop having to restart supervisor every time?
Thanks
As stated in the Laravel queue documentation:
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
Alternatively, you may run the queue:listen command. When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work.
And as stated in the Horizon documentation:
If you are deploying Horizon to a live server, you should configure a process monitor to monitor the php artisan horizon command and restart it if it quits unexpectedly. When deploying fresh code to your server, you will need to instruct the master Horizon process to terminate so it can be restarted by your process monitor and receive your code changes
When you restart Supervisor, you are essentially restarting the horizon command and loading the new code, so the behaviour you are seeing is exactly as expected.
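In practice, the "instruct the master Horizon process to terminate" step is usually a single line in your deployment script (assuming Supervisor is configured to restart the process, as in a standard Horizon setup):

php artisan horizon:terminate

Horizon finishes the jobs currently in flight, exits, and Supervisor starts it again with the freshly deployed code.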

How to deploy laravel into a docker container while there are jobs running

We are trying to migrate our Laravel setup to use Docker. Dockerizing the Laravel app was straightforward; however, we ran into an issue where, if we do a deployment while scheduled jobs are running, they get killed since the container is destroyed. What's the best practice here? Having a separate container to run the Laravel scheduler doesn't seem like it would solve the problem.
Run the scheduled job in a different container so you can scale it independently of the Laravel app.
Run multiple containers of the scheduled job so you can stop some to upgrade them while the old ones continue processing jobs.
Docker will send a SIGTERM signal to the container and wait for the container to exit cleanly before issuing SIGKILL (the time between the two signals is configurable, 10 seconds by default). This allows your current job to finish cleanly (or to save a checkpoint to continue later).
The plan is to stop old containers and start new containers gradually so there aren't lost jobs or downtime. If you use an orchestrator like Docker Swarm or Kubernetes, they will handle most of these logistics for you.
Note: the Laravel scheduler is based on cron and will fire processes that will be killed by Docker. To prevent this, have the scheduler add a job to a Laravel queue, as sketched below. The queue worker is a foreground process, and it will be given the chance to stop/save cleanly by the SIGTERM it receives before being killed.
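A minimal sketch of that pattern, assuming a hypothetical ProcessReports job class (the job name and schedule are illustrative, not from the original answer):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // The cron-fired scheduler process only pushes the job onto the queue
    // and exits quickly; the long-running work happens in a queue worker,
    // which receives SIGTERM and can shut down gracefully.
    $schedule->job(new ProcessReports)->hourly();
}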

Can I trigger Laravel jobs from a controller instead of using the `php artisan queue` process

We are running our production system on Elastic Beanstalk. We want to be able to take advantage of EBS' worker tiers with autoscaling. Unfortunately, due to how Laravel queue processing works, Laravel expects all queues to be consumed by starting a PHP command-line process on your servers. EBS worker tiers don't function that way: AWS installs a listener daemon of its own that pulls jobs off the queue and feeds them to your worker over local HTTP calls. Sounds great. Unfortunately, I can't figure out how one would call a queued job from a route and controller in Laravel instead of using the built-in artisan queue listener task. Any clues as to how to achieve this would be greatly appreciated.
You can use the Artisan::call method to call commands from code.
$exitCode = Artisan::call('queue:work');
You can see more info in the docs
Alternatively, in a controller action method you can dispatch the job directly:
JobClassName::dispatch();
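For the EBS scenario in the question, a sketch of the HTTP entry point could look like the following, assuming a hypothetical ProcessReport job and an illustrative route. dispatchSync (dispatchNow in older Laravel versions) runs the job immediately inside the request, which fits the EBS model where the daemon has already pulled the message off the queue; a plain dispatch() would push it back onto a queue instead.

// routes/web.php — all names here are illustrative
use App\Jobs\ProcessReport;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/worker/jobs', function (Request $request) {
    // The EBS daemon delivers the queued message over local HTTP,
    // so run the job synchronously inside this request.
    ProcessReport::dispatchSync($request->all());

    return response()->noContent(); // a 2xx response tells the daemon the job succeeded
});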
