How to stop a Laravel SyncQueue

I've tried queue:clear, and even tried removing all jobs, but when I add jobs again the former queue starts working again, as evidenced by the timely log entries. I'd just like to start fresh, but I couldn't find any way to actually stop the former queue.

You can use:
php artisan queue:restart
This gracefully stops all running queue workers (each worker exits after finishing its current job), so you can start fresh with new worker processes.
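In a deployment, the restart is typically combined with clearing any stale jobs. A minimal sketch using standard artisan commands (the Supervisor program name is a hypothetical example):

```shell
# Signal all workers to gracefully exit after their current job.
# The restart signal is broadcast via the cache, so a working
# cache driver is required.
php artisan queue:restart

# Drop any jobs still waiting on the default connection
# (queue:clear is available in recent Laravel versions).
php artisan queue:clear

# If workers run under Supervisor, restart the program so fresh
# worker processes boot with the new code.
# ("laravel-worker" is a hypothetical program name.)
sudo supervisorctl restart laravel-worker:*
```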

Related

Laravel 7 - Stop Processing Jobs and Clear Queue

I have a production system on AWS and use Laravel Forge. There is a single default queue that is processing jobs.
I've created a number of jobs and now wish to delete them (as they take many hours to complete and I realize my input data was bad). I created a new job with good data, but it won't be processed until all the others have finished.
How can I delete all jobs?
It was previously set up using the redis queue driver. I could not figure out how to delete the jobs, so I switched the driver to database and restarted the server, thinking that this would at least stop the jobs from processing. However, much to my dismay, they continue to be processed :-(
I even deleted the worker from the forge ui and restarted the server: the jobs still process.
Why do jobs continue to be processed?
How can I stop them?
You can use:
php artisan queue:clear redis
This clears all jobs from the default queue on the redis connection. If you put jobs on another queue, you should specify the queue name, for example:
php artisan queue:clear redis --queue=custom_queue_name
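Note that clearing the queue only removes jobs that are still waiting; a job a worker has already picked up keeps running in that worker's memory, which is why the jobs continued processing even after switching drivers. To stop processing entirely, restart the workers as well. A sketch (connection and queue names follow the answer above):

```shell
# Drop pending jobs from the redis connection's default queue...
php artisan queue:clear redis

# ...and from any named queue you use.
php artisan queue:clear redis --queue=custom_queue_name

# Then signal running workers to exit so nothing already loaded
# in memory keeps processing the old jobs.
php artisan queue:restart
```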

Laravel - Is there a way to run a queued job from another queued job on different queue?

Imagine:
1- Eating job
2- Drinking job
3- Eating Queue
4- Drinking Queue
I have an Eating job running on the Eating queue, and from this job I want to run a Drinking job, but on the Drinking queue.
Is it possible?
I just found what solves my problem.
Instead of using php artisan queue:work, I used php artisan queue:listen for the job that runs another job on a different queue.
It looks like queue:work only runs jobs on its own queue and doesn't affect the other queue, but when I used queue:listen it worked fine.
A note: this helped on a Windows machine; on a Linux server it works normally with php artisan queue:work.
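For completeness, the usual way to target another queue is to specify it at dispatch time with onQueue(), and to start a worker that listens on both queues. A minimal sketch (the job class and queue names are the hypothetical ones from the question):

```php
<?php

// Inside EatingJob::handle(): dispatch DrinkingJob onto its own queue.
// onQueue() is the standard Laravel way to pick the target queue at
// dispatch time.
DrinkingJob::dispatch()->onQueue('drinking');

// The worker then needs to be told to process both queues, e.g.:
//   php artisan queue:work --queue=eating,drinking
// Queues listed first take priority when both have pending jobs.
```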

Deleting a periodic job in sidekiq

I am trying to delete a Sidekiq Enterprise periodic job for an app, and I'm not sure how one goes about deleting the periodic job itself after deleting the schedule from the initialize and deleting the worker job.
I see this answer from earlier, but the app in question has other jobs (both periodic and regular Sidekiq jobs), so I cannot just globally blow away all scheduled periodic jobs, and I would prefer not to totally shut down and restart Sidekiq either. Is there a way I can remove just the specific job I am deleting from Redis so that it will no longer try to run at the next scheduled time?
You have to deploy your code change and restart Sidekiq for it to pick up changes to periodic jobs.

Queued jobs are somehow being cached with Laravel Horizon using Supervisor

I have a really strange thing happening with my application that I am really struggling to debug and was wondering if anyone had any ideas or similar experiences.
I have an application running on Laravel v5.8 which is using Horizon to run the queued jobs on a Ubuntu 16.04 server. I have a feature that archives an account which is passed off to the queue.
I noticed that it didn't seem to be working, despite working locally and having had the tests passing for the feature.
My last debugging attempt was to comment out the entire handle method and add Log::info('wtf?!'); to see if even that would work. It didn't; in fact, the worker was still trying to run the commented-out code. I decided to restart supervisor and tried again. At last, I managed to get 'wtf?!' written to my logs.
I have since been unable to deploy my code without having to restart supervisor in order for it to recognise the 'new' code.
Does Horizon cache the jobs in any way? I can't see anything in the documentation.
Has anyone experienced anything like this?
Any ideas on how I can stop having to restart supervisor every time?
Thanks
As stated in the documentation here:
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
Alternatively, you may run the queue:listen command. When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work.
And as stated here in the Horizon documentation.
If you are deploying Horizon to a live server, you should configure a process monitor to monitor the php artisan horizon command and restart it if it quits unexpectedly. When deploying fresh code to your server, you will need to instruct the master Horizon process to terminate so it can be restarted by your process monitor and receive your code changes.
When you restart supervisor, you are restarting the horizon command and loading the new code, so the behaviour you are seeing is exactly as expected.
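In practice this means a deployment script should terminate Horizon (or plain workers) after the new code is in place, so the process monitor brings it back up fresh. A sketch, assuming Supervisor is the process monitor:

```shell
# Gracefully stop the master Horizon process once current jobs
# finish; Supervisor then restarts it with the new code.
php artisan horizon:terminate

# For plain queue:work workers, the equivalent signal is:
php artisan queue:restart
```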

Elastic Beanstalk and Laravel queues

I'm implementing Laravel queues via the database driver, and when I start the process for listening for jobs, another process is also started.
So basically I'm running php artisan queue:listen, and the process that is automatically started is php artisan queue:work.
The second process here is generated automatically; the thing is that it also doesn't point to the folder where it should be.
The listener:
php artisan queue:listen
starts a long-running process that will "run" (process) new jobs as they are pushed onto the queue. See docs.
The processer:
php artisan queue:work
will process new jobs as they are pushed onto the queue. See docs.
So basically queue:listen runs queue:work as and when new jobs are pushed.
P.S.: You need not worry about this, but it's good to know how it works. You can dig into the code if you need more information.
