Laravel Forge: How to stop queue workers?

I have configured a queue worker as a daemon on Forge, then used the recommended deployment script command (php artisan queue:restart).
How do I manually stop and restart the queue worker? If I stop it, Supervisor will just restart it. Do I need to kill the active worker in Forge first?
This may be required on an ad-hoc basis. For example, if I want to clear a log file that the queue has open.

I've been pretty vocal in deployment discussions, and I always tell people to stop their worker processes with the supervisorctl command.
supervisorctl stop <name of task>
Using the queue:restart command doesn't actually restart anything. It sets an entry in the cache which the worker processes check, and then they shut down. As you noticed, Supervisor will then restart the process.
This means that queue:restart has one huge problem, ignoring the naming and the fact that it doesn't restart anything itself: it will cause the worker processes on every server that uses the same cache to restart. I think this is wrong; a deployment should only affect the server currently being deployed to.
If you're using a per-server cache, like the file cache driver, then this has another problem: what happens if your deployment entirely removes the website folder? The cache would change, the queues would start again, and the worker process may end up with a mix of old and new code. Fun things to debug...
Supervisor will signal the process when it is shutting down and wait for it to shut down cleanly; if it doesn't, Supervisor will forcefully kill it. These timeouts can be configured in the Supervisor configuration file. This means that using supervisorctl to stop the queue process will not terminate any jobs half-way through: they will all complete (assuming they run for a short enough time, or you increase the timeouts).
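For example, this is roughly what the stop-related options look like in a Supervisor program block; the program name, command, and paths below are placeholders rather than the Forge defaults:

[program:queue-worker]
command=php /home/forge/example.com/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
; SIGTERM is the default stop signal; raise stopwaitsecs so long-running jobs
; can finish before Supervisor falls back to SIGKILL
stopsignal=TERM
stopwaitsecs=600

The ad-hoc maintenance from the question then becomes:

supervisorctl stop queue-worker
# ... truncate or rotate the log file the worker holds open ...
supervisorctl start queue-worker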

Related

Laravel Horizon worker constantly crashes silently

We are running Laravel 7 and Horizon 4.3.5. Horizon runs with Supervisor.
We have 10 different queues configured, but the workers responsible for one particular queue constantly die without any output. After restarting Horizon, I can see these workers up and running for several seconds via the top and ps commands. Then they are gone.
I checked Supervisor's stdout_logfile: nothing suspicious there. I can see jobs related to this queue being processed successfully. Each worker processes exactly 2 jobs before crashing.
I checked Supervisor's stderr_logfile, but it's empty.
The Laravel logs and the failed_jobs table are both empty.
I even checked syslog, but there is nothing related there.
There are no problems with the other queues at all. Only this particular queue keeps piling up: jobs are pushed to the queue by the application, but never processed until I restart Horizon.
There is plenty of free disk space and free RAM, and CPU usage is low.
Worker command: /usr/bin/php7.4 artisan horizon:work redis --delay=0 --memory=128 --queue=main --sleep=3 --timeout=1800 --tries=1 --supervisor=php01-Mexm:business
It turned out to be an Out Of Memory problem. We had one job in this queue which caused the crash.
Still not sure why the logs were empty. Probably there wasn't enough memory left to log anything.
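For anyone hitting the same silent crash: when a worker is killed by the kernel's OOM killer, the evidence usually lands in the kernel log rather than in the application or Supervisor logs, which would explain the empty logfiles above. A quick way to check (log paths vary by distribution):

dmesg -T | grep -i 'out of memory'
grep -i 'killed process' /var/log/kern.log

Also note that the --memory=128 option on the worker command is only checked between jobs, so a single job that allocates past PHP's memory_limit (or past what the machine can supply) can still die mid-job without that safeguard kicking in.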

Performing goroutines in the background

I am new to Go and I am using goroutines in my app on Heroku. They are long-running (up to 7 minutes) and cannot be interrupted.
I saw that the auto scaler sometimes kills the Heroku dyno which is running the routine. I need a way of running this routine independently from the dynos so I know that it will not get shut down. I have read articles and still don't understand how to run a goroutine in a background worker. It is hard for me to believe I am the only one experiencing this.
My goroutines use my Redis database.
Could someone please point me to an example of how to set up a background worker on Heroku for Go and how to send my goroutine to that worker?
Thank you very much
I need a way of running this routine independently from the dynos so I
know that it will not get shut down.
If you don't want to run your worker code on a dyno then you'll need to use a different provider from Heroku, like Amazon AWS, DigitalOcean, Linode, etc.
Having said that, you should design your workers, especially those that are mission critical, to be able to recover from a shutdown: either to continue where they left off or to start over. Heroku's dyno manager restarts the dynos at least once a day, and I wouldn't be surprised if other cloud providers also restart their virtual instances once in a while; probably not once a day, but still... And even if you decide to deploy your workers on a physical machine that you control and never turn off, you cannot prevent things like hardware failure or a power outage from happening.
If your workers need to perform some task until it's done, you need to make them aware of possible shutdowns and have them handle such scenarios gracefully. Do not ever rely on a machine, physical or virtual, to keep running while your worker is doing its job.
For example, if you're on Heroku, use a worker dyno and make your worker listen for the SIGTERM signal; after your worker receives such a signal...
The application processes have 30 seconds to shut down cleanly
(ideally, they will do so more quickly than that). During this time
they should stop accepting new requests or jobs and attempt to finish
their current requests, or put jobs back on the queue for other worker
processes to handle. If any processes remain after that time period,
the dyno manager will terminate them forcefully with SIGKILL.
... continue reading here.
But keep in mind, as I mentioned earlier, if there is an outage and Heroku goes down, which is something that happens from time to time, your worker won't even have those 30 seconds to clean up.
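To make the SIGTERM handling above concrete, here is a minimal sketch of a worker dyno's main loop; processNextJob and requeueUnfinishedJobs are placeholders for whatever your Redis-backed job logic actually does:

package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Heroku sends SIGTERM before stopping a dyno and allows roughly 30 seconds
	// before following up with SIGKILL; cancel the context when that signal arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	for {
		select {
		case <-ctx.Done():
			log.Println("shutdown requested: requeueing unfinished work and exiting")
			// requeueUnfinishedJobs() // placeholder: push in-progress work back onto the Redis queue
			return
		default:
			// processNextJob() // placeholder: pop one job from Redis and handle it
			time.Sleep(time.Second)
		}
	}
}

Run it as a worker process type in your Procfile (e.g. worker: bin/worker, where the binary name is an assumption) so it lands on a worker dyno rather than a web dyno.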

Run a script the whole time on a VPS server

Is it possible to create a script that is always running on my VPS server? And what do I need to do to run it the whole time? (I don't have a VPS server yet, but if this is possible I want to buy one!)
Yes you can; there are many ways to get the result you want.
Supervisord
Supervisord is a process control system that keeps a process running. It automatically starts or restarts your process whenever necessary (a minimal config sketch follows the examples below).
When to use it: Use it when you need a process that runs continuously, e.g.:
A queue worker that reads a database continuously waiting for a job to run.
A Node application that acts as a daemon
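For example, a minimal Supervisord program block for a queue worker could look like this; the program name, command, paths, and user are placeholders:

[program:my-worker]
command=php /var/www/example.com/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
stdout_logfile=/var/log/my-worker.log

After dropping that file into Supervisor's config directory (commonly /etc/supervisor/conf.d/), load and start it with:

supervisorctl reread
supervisorctl update
supervisorctl start my-worker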
Cron
Cron allows you to run processes regularly, at set time intervals. You can, for example, run a process every minute, every 30 minutes, or at any interval you need.
When to use it: Use it when your process is not long-running; it does a task and ends, and you do not need it restarted automatically as with Supervisord. For example (a sample crontab line follows this list):
A task that collects logs every day and sends them as a gzip by email
A backup routine.
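For the backup routine above, a sample crontab entry (added via crontab -e) could look like this; the script path and schedule are just examples, here running every night at 02:30 and appending output to a log:

30 2 * * * /home/user/scripts/backup.sh >> /home/user/logs/backup.log 2>&1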
Whichever you choose, there are many tutorials on the internet on how to configure both, so I won't go into the details here.

What is the difference between queue:work and queue:listen

I can't understand what the difference is between Laravel queue:work and Laravel queue:listen.
I can see that:
Queue: Listen to a given queue
Work: Process the next job on a queue
But I still don't get it, because I've tried both, and both will process new jobs as they arrive (the "work" option doesn't just run once).
I'm not talking about the daemon option. Just these both.
Until Laravel 5.2 you had :listen and :work.
Work would process the first job in the queue.
Listen would process all jobs as they came through.
In Laravel 5.3+ this is no longer the case. Listen still exists, but it is deprecated and slated for removal in 5.5. You should prefer :work now.
Work now processes jobs one after the other, but has a plethora of options you can configure.
Edit
The above was true at the time of posting, but since then things have changed a bit.
queue:work should be preferred when you want your queues to run as a daemon. This is a long-lived process that is beneficial where performance is an issue: it uses a cached version of the application and does not re-bootstrap the application every time a job is processed.
queue:listen should be used when you don't care about performance or you don't want to have to restart the queue after making changes to the code.
They'll both pop jobs off the queue 1-by-1 as received.
They both share almost the exact same options that can be passed to them.
In Laravel 5.3+ queue:work runs a daemon listener. It could in 5.2 as well if you specified the --daemon flag. A daemon worker boots the framework once and then processes jobs repeatedly. The queue:listen command runs a queue:work --once sub-process in a loop, which boots the framework on each iteration.
queue:work should pretty much always be used in production as it's much more efficient and uses less RAM. However, you need to restart it after each code change. queue:listen is useful for development and local environments because you don't have to restart it after code changes (the framework boots fresh for each job).
from here
The queue:work Artisan command includes a --daemon option for forcing
the queue worker to continue processing jobs without ever re-booting
the framework. This results in a significant reduction of CPU usage
when compared to the queue:listen command:
As you can see, the queue:work job supports most of the same options
available to queue:listen. You may use the php artisan help queue:work
job to view all of the available options.
https://laravel.com/docs/5.1/queues#running-the-queue-listener
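To make the distinction concrete, this is roughly how the two commands are invoked; the connection and queue names are just examples, and the flags shown are a few common ones rather than a complete list:

php artisan queue:work redis --queue=default --sleep=3 --tries=3
php artisan queue:listen redis --queue=default --timeout=60

The first boots the framework once and keeps processing jobs until you stop or restart it; the second re-boots the framework for every job, which is slower but picks up code changes without a restart.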
There are two different issues listed.
There is artisan queue:work and artisan queue:listen
queue:work will simply pop the next job off the queue and process only that one job. This is a 'one off' command that will return to the command prompt once that one job is processed.
queue:listen will listen to the queue and continue to process any jobs it receives. It will keep running indefinitely until you stop it.
In Laravel >= 4.2 a --daemon flag was added. It simply keeps running the jobs directly, rather than rebooting the entire framework after every job is processed. This optional flag significantly reduces the memory and CPU requirements of your queue worker.
The important point with the --daemon flag is that when you upgrade your application, you need to specifically restart your queue with queue:restart, otherwise you could get all sorts of strange errors as your queue would still have the old code in memory.
So to answer your question "Which command should I use for running my daemons?" - the answer is almost always queue:work --daemon
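In practice that means running the worker under a process monitor such as Supervisor and adding queue:restart to your deployment script so the daemon picks up the new code. A sketch of the relevant deployment step, where the surrounding commands are just an example of a typical deploy:

git pull origin main
composer install --no-dev
php artisan migrate --force
php artisan queue:restart   # daemon workers finish their current job, exit, and are restarted by Supervisor with the new code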

Ensure Background Worker is Alive and Working

Is there any way in ruby to determine if a background worker is running?
For instance, I have a server that works a queue in Delayed Job, and I would like to ensure 4 workers are on it and spin up a new worker process if one has either stalled or quit.
From the command line, crontab -l gives a list of the current user's scheduled cron jobs.
From the Rails console, Delayed::Job.all will give you a list of all jobs currently in the queue.
Delayed Job also has a list of lifecycle methods which you can access:
http://www.rubydoc.info/github/collectiveidea/delayed_job/Delayed/Lifecycle
The usual way to do that is to use an external watchdog process. You can use Monit or God.
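As a rough sketch of the Monit route for a single Delayed Job worker, where the application path, environment, and pid file location are placeholders:

check process delayed_job with pidfile /var/www/app/tmp/pids/delayed_job.pid
  start program = "/bin/bash -lc 'cd /var/www/app && RAILS_ENV=production bin/delayed_job start'"
  stop program  = "/bin/bash -lc 'cd /var/www/app && RAILS_ENV=production bin/delayed_job stop'"

Monit restarts the process whenever the pid file's process goes away. For 4 workers you would start them with bin/delayed_job -n 4 start and add a similar check block for each numbered pid file (delayed_job.0.pid, delayed_job.1.pid, and so on).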
