Queued jobs are somehow being cached with Laravel Horizon using Supervisor

I have a really strange thing happening with my application that I am really struggling to debug and was wondering if anyone had any ideas or similar experiences.
I have an application running on Laravel v5.8 which uses Horizon to run the queued jobs on an Ubuntu 16.04 server. I have a feature that archives an account, which is passed off to the queue.
I noticed that it didn't seem to be working, despite working locally and having had the tests passing for the feature.
My last attempt at debugging was to comment out the entire handle method and add Log::info('wtf?!'); to see if even that would work. It didn't; in fact, it was still trying to run the commented-out code. I decided to restart Supervisor and tried again. At last, I managed to get 'wtf?!' written to my logs.
I have since been unable to deploy my code without having to restart Supervisor in order for it to recognise the 'new' code.
Does Horizon cache the jobs in any way? I can't see anything in the documentation.
Has anyone experienced anything like this?
Any ideas on how I can stop having to restart supervisor every time?
Thanks

As stated in the documentation here
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
Alternatively, you may run the queue:listen command. When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work.
And as stated here in the Horizon documentation.
If you are deploying Horizon to a live server, you should configure a process monitor to monitor the php artisan horizon command and restart it if it quits unexpectedly. When deploying fresh code to your server, you will need to instruct the master Horizon process to terminate so it can be restarted by your process monitor and receive your code changes.
When you restart Supervisor, you are essentially restarting the horizon command and loading the new code, so the behaviour you are seeing is exactly what is expected.
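In practice, a deployment step along these lines is usually enough (a minimal sketch, assuming Supervisor is your process monitor and is configured to restart the horizon command automatically):
php artisan horizon:terminate
horizon:terminate asks the master Horizon process to finish its current jobs and exit; Supervisor then notices the exit and starts horizon again with the freshly deployed code, so you no longer have to restart Supervisor by hand.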

Related

How to stop a laravel SyncQueue

I've tried queue:clear and even tried removing all the jobs, but when I add them again the former queue starts working again, as evidenced by the timely log entries. I'd just like to start fresh, but I couldn't find any way to actually stop the former queue.
You can use
php artisan queue:restart
This will stop all running queue workers so that you can start fresh with a new worker process.
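For example, to start completely fresh you could flush the pending jobs and then signal the workers (a rough sketch, assuming the default connection and a Laravel version that ships queue:clear):
php artisan queue:clear
php artisan queue:restart
Note that queue:restart only stores a restart signal in the cache; running workers finish their current job before exiting, so you still need a process monitor such as Supervisor (or a manual queue:work) to bring fresh worker processes back up.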

Laravel 7 - Stop Processing Jobs and Clear Queue

I have a production system on AWS and use Laravel Forge. There is a single default queue that is processing Jobs.
I've created a number of jobs and now wish to delete them (as they take many hours to complete and I realize my input data was bad). I created a new job with good data, but it won't be processed until all the others have finished.
How can I delete all jobs?
It was previously set up using the redis queue driver. I could not figure out how to delete the jobs, so I switched the driver to database and restarted the server, thinking that this would at least get the jobs to stop processing. However, much to my dismay, they continue to be processed :-(
I even deleted the worker from the forge ui and restarted the server: the jobs still process.
Why do jobs continue to be processed?
How can I stop them?
You can use:
php artisan queue:clear redis
It will clear all jobs from the default queue on the redis connection. If you put jobs on another queue, then you should specify the queue name, for example:
php artisan queue:clear redis --queue=custom_queue_name

How to clean up dead job logs in Storm?

I am trying to clean up the logs of dead Storm jobs, which are stored in storm_log_path/workers-artifacts/.
My current approach is to use a cron job or logrotate to clean up the directory, but that has a problem: it deletes logs even while the job is still running.
What I am trying to do instead is use the Storm configuration for this, as described in the Log Cleanup section of the Storm documentation; those options should clean up the logs and never delete the logs of running jobs, but it didn't work.
I am using Storm 1.2.3 and my storm.yaml contains:
logviewer.childopts: "-Xmx128m"
logviewer.cleanup.age.mins: 30
logviewer.max.sum.worker.logs.size.mb: 4096
logviewer.max.per.worker.logs.size.mb: 2048
I set the cleanup period to 30 minutes to test, but it never worked.
The log directory contains one folder per job run; the names follow the pattern jobID-countingNumber-timestamp:
5faaac990788a706cb972861-1-1607352884
5faaac990788a706cb972861-1-1607358710
5faaac990788a706cb972861-1-1607528615
5faaac990788a706cb972861-1-1607587744
5faaac990788a706cb972861-2-1607353512
5faaac990788a706cb972861-2-1607507502
5faaac990788a706cb972861-3-1607354786
How can I get the logviewer option to work, or is there another approach?
TL;DR
In your storm.yaml, you need to add logviewer.cleanup.interval.secs: <value> for the logviewer cleaner service to work. Restart the logviewer service afterwards.
Your question made me curious, so I did some digging: first through the Storm docs, then through our cluster's logs, then through the Storm source code.
It turns out the logviewer cleanup service does not have a default interval configured and is initialized with null. This is not mentioned in the docs; however, examining our own logviewer logs, this line caught my eye:
2020-12-10 13:34:42.129 o.a.s.d.l.u.LogCleaner main [WARN] The interval for log cleanup is not set. Skip starting log cleanup thread.
Looking through the default config file and the Storm sources made it clear that there is no default value configured and the process is initialized with null (this file, line 97), which means the cleanup service is never started at all. It seems to me that they forgot to mention that in their docs; otherwise admins looking to configure the service would set this value as a matter of course.
After setting the value and restarting the logviewer, it immediately started cleaning the files, as I could see in the logs. So thanks for raising this question, it would have slipped my attention otherwise!
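For illustration, the logviewer part of storm.yaml could then look roughly like this (the 600-second interval is only an example value; pick whatever suits your retention needs):
logviewer.childopts: "-Xmx128m"
logviewer.cleanup.age.mins: 30
logviewer.max.sum.worker.logs.size.mb: 4096
logviewer.max.per.worker.logs.size.mb: 2048
logviewer.cleanup.interval.secs: 600
After changing the file, restart the logviewer daemon on each node so the cleanup thread is actually started.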

How to deploy laravel into a docker container while there are jobs running

We are trying to migrate our Laravel setup to use Docker. Dockerizing the Laravel app was straightforward; however, we ran into an issue: if we do a deployment while scheduled jobs are running, they get killed because the container is destroyed. What's the best practice here? Having a separate container to run the Laravel scheduler doesn't seem like it would solve the problem.
Run the scheduled job in a different container so you can scale it independently of the laravel app.
Run multiple containers of the scheduled job so you can stop some to upgrade them while the old ones continue processing jobs.
Docker will send a SIGTERM signal to the container and wait for the container to exit cleanly before issuing SIGKILL (the time between the two signals is configurable, 10 seconds by default). This allows your current job to finish cleanly (or to save a checkpoint to continue later).
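If your jobs need more than the default 10 seconds to wrap up, you can raise that grace period; for instance (the container name below is just a placeholder):
docker stop --time=60 laravel-worker
or, for a service defined in docker-compose.yml:
stop_grace_period: 60s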
The plan is to stop old containers and start new containers gradually so there aren't lost jobs or downtime. If you use an orchestrator like Docker Swarm or Kubernetes, they will handle most of these logistics for you.
Note: the Laravel scheduler is based on cron and will fire processes that will be killed by Docker. To prevent this, have the scheduler push a job onto a Laravel queue instead. The queue worker is a foreground process, so it will be given the chance to stop or save cleanly when it receives the SIGTERM before being killed.
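A minimal sketch of that idea in app/Console/Kernel.php might look like the following (ArchiveAccounts is a hypothetical job class implementing ShouldQueue, not something from the question):
protected function schedule(Schedule $schedule)
{
    // Dispatch the work to the queue instead of running it inline,
    // so a long-lived worker (which handles SIGTERM gracefully) picks it up.
    $schedule->job(new ArchiveAccounts)->hourly();
}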

Laravel Horizon inactive and still processing

I run my Application on Kubernetes.
I have one Service for requests and one service for the worker processes.
If I access the Horizon UI, it often shows the Inactive status, but there are still jobs being processed by the worker. I know this because the JOBS PAST HOUR counter keeps increasing.
If I scale up my worker service there will be constantly "failing" Jobs with this exception Illuminate\Queue\MaxAttemptsExceededException.
If I connect directly to the pods and run ps aux, I can see that there are horizon instances running.
If I connect to a pod on which the worker is running and execute the horizon:list command, it tells me that one (or multiple) masters are running.
How can I further debug this?
Laravel version: 5.7.15
Horizon version: 2.0.0
Redis version: 3.2.4
The issue was that the server time was out of sync, so the 'old' ones got restarted all the time.
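If you hit the same symptoms, a quick sanity check (assuming systemd-based hosts) is to verify that every node's clock is NTP-synchronised, for example:
timedatectl status
sudo timedatectl set-ntp true
The Redis queue driver relies on timestamps to decide when a reserved job has expired and may be retried, so clocks drifting apart between worker nodes can cause jobs to be released and retried prematurely, which matches the MaxAttemptsExceededException errors described above.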
