I have a production system on AWS and use Laravel Forge. There is a single default queue processing jobs.
I've created a number of jobs and now wish to delete them (as they take many hours to complete and I've realized my input data was bad). I created a new job with good data, but it won't be processed until all the others have finished.
How can I delete all jobs?
It was previously set up using the redis queue driver. I could not figure out how to delete the jobs, so I switched the driver to database and restarted the server, thinking that this would at least get the jobs to stop processing. However, much to my dismay, they continue to be processed :-(
I even deleted the worker from the Forge UI and restarted the server: the jobs still process.
Why do jobs continue to be processed?
How can I stop them?
You can use:
php artisan queue:clear redis
It will clear all jobs from the default queue on the redis connection. If you put jobs on another queue, then you should specify the queue name, for example:
php artisan queue:clear redis --queue=custom_queue_name
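Since the question also mentions switching to the database driver: queue:clear takes the connection name as its first argument, so jobs pushed onto the database connection can presumably be cleared the same way:

php artisan queue:clear database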
Related
I've tried queue:clear, and even tried removing all jobs, but when I add them again the former queue starts working again, as evidenced by the timely log entries. I'd just like to start fresh, but I couldn't find any way to actually stop the former queue.
You can use:
php artisan queue:restart
This will stop all running queue workers so that you can start fresh with a new worker process.
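In practice the two commands can be combined for a clean slate: clear the queued jobs first, then signal the workers (this assumes the redis connection from the question above):

php artisan queue:clear redis
php artisan queue:restart

Note that queue:restart works by setting a flag in the cache that workers check between jobs, so each worker finishes its current job before exiting.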
I have a really strange thing happening with my application that I am really struggling to debug, and I was wondering if anyone had any ideas or similar experiences.
I have an application running on Laravel v5.8 which is using Horizon to run the queued jobs on an Ubuntu 16.04 server. I have a feature that archives an account, which is passed off to the queue.
I noticed that it didn't seem to be working, despite it working locally and the tests for the feature passing.
My last attempt to debug was commenting out the entire handle method and adding Log::info('wtf?!'); to see if even that would work, which it didn't; in fact, it was still trying to run the commented-out code. I decided to restart supervisor and tried again. At last, I managed to get 'wtf?!' written to my logs.
I have since been unable to deploy my code without having to restart supervisor in order for it to recognise the 'new' code.
Does Horizon cache the jobs in any way? I can't see anything in the documentation.
Has anyone experienced anything like this?
Any ideas on how I can stop having to restart supervisor every time?
Thanks
As stated in the documentation here
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
Alternatively, you may run the queue:listen command. When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work.
And as stated here in the Horizon documentation.
If you are deploying Horizon to a live server, you should configure a process monitor to monitor the php artisan horizon command and restart it if it quits unexpectedly. When deploying fresh code to your server, you will need to instruct the master Horizon process to terminate so it can be restarted by your process monitor and receive your code changes.
When you restart supervisor, you are basically restarting the horizon command and loading the new code, so the behaviour you are seeing is exactly as expected.
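The practical fix is to make that restart part of every deployment instead of a manual step. Since Horizon is already running under supervisor, a single command in the deploy script should be enough; supervisor will bring the process back up with the fresh code:

php artisan horizon:terminate

horizon:terminate lets the master Horizon process finish its in-flight jobs and exit gracefully, after which the process monitor restarts it.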
I have set up a job in Laravel that's time consuming, so the user can upload a file and exit, and it works just fine when I run php artisan queue:listen or queue:work.
But that stops working when I exit the terminal. What do I need to do to have it run automatically?
I've tried Amazon AWS SQS, but that's useless to me because I can queue the job and that's about it; it doesn't have an option to set an endpoint to hit when a job is received.
I know there is iron.io, but that's outside of my budget.
Below is my code to push the job onto the database queue:
public function queue()
{
    // Fetch the record and dispatch the queued job for it.
    $user = Property::find(1);

    $this->dispatch(new SendReportEmail($user));
}
I cannot say Amazon SQS is useless.
You can add a job to your scheduled jobs in Laravel and use that to receive jobs from Amazon SQS. Each message carries a reference to the file/row to be processed, so you can fetch the payload and process it accordingly in the scheduled job; a sketch of this approach follows below.
For help, here is a tutorial on setting up a queue listener for SQS via Laravel.
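As a rough illustration of that idea, here is a minimal sketch of a function the Laravel scheduler could invoke every minute, polling SQS with the AWS SDK for PHP. The queue URL, the message shape, and the processRow() helper are placeholders for illustration, not part of the original question:

use Aws\Sqs\SqsClient;

// Poll the SQS queue and process any pending messages.
// Intended to be called from a scheduled console command.
function pollSqsQueue(): void
{
    $queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/reports'; // placeholder

    $sqs = new SqsClient([
        'region'  => 'us-east-1',
        'version' => '2012-11-05',
    ]);

    $result = $sqs->receiveMessage([
        'QueueUrl'            => $queueUrl,
        'MaxNumberOfMessages' => 10,
        'WaitTimeSeconds'     => 5, // long polling
    ]);

    foreach ($result->get('Messages') ?? [] as $message) {
        // The message body is assumed to reference the file/row to process.
        $payload = json_decode($message['Body'], true);
        processRow($payload['row_id']); // hypothetical helper

        // Delete the message so it is not delivered again.
        $sqs->deleteMessage([
            'QueueUrl'      => $queueUrl,
            'ReceiptHandle' => $message['ReceiptHandle'],
        ]);
    }
}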
I had a quick question regarding a beanstalkd queue.
So say I have 500,000 ready jobs in the Beanstalkd queue which are just waiting to be processed, and at the same time more jobs are being added to this queue. All of these jobs have the same priority.
Is it possible to move a job in the ready queue so that it can be processed before all of the other jobs in that queue?
I've just started using Beanstalkd, and I was wondering whether this is possible.
I'm on a linux environment.
I guess I could delete that specific job and reinsert it with a priority that will allow it to be processed first, but I would like to avoid doing that unless there is no command that lets me do it directly.
Please let me know if more information is required and I thank you in advance for your help. :)
There is no such command currently in Beanstalkd.
This is what different priorities are used for.
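If you do take the delete-and-reinsert route, the priority is set on the put command, and lower values are more urgent (0 is the most urgent). Reinserting the job with priority 0 would therefore let it run ahead of the other ready jobs; the put command has the form:

put <pri> <delay> <ttr> <bytes>\r\n
<data>\r\n

So, for example, put 0 0 120 followed by the payload reinserts the job with top priority, no delay, and a 120-second time to run.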
Beanstalk has a new command as of version 1.12: reserve-job.
A job can be reserved by its id. Once a job is reserved for the client, the client has a limited time to run (TTR) the job before it times out. When the job times out, the server will put it back into the ready queue. The command looks like this:
reserve-job <id>\r\n
<id> is the job id to reserve
This should immediately return one of these responses:
NOT_FOUND\r\n if the job does not exist, is already reserved by another client, or is not in the ready, buried, or delayed state.
RESERVED <id> <bytes>\r\n<data>\r\n. See the description for the reserve command.
Assuming you know the ID of the job that needs the priority adjustment, you can connect to beanstalk, reserve that specific job, then release it with a new priority:
release <id> <pri> <delay>
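Putting the two commands together, here is a minimal sketch in PHP that talks to the beanstalkd protocol over a raw socket (the host, port, and job id 42 are assumptions; a client library such as Pheanstalk would normally wrap this). Lower priority values are more urgent, so releasing with priority 0 puts the job ahead of the rest:

// Connect to the beanstalkd server (assumed local, default port 11300).
$conn = fsockopen('127.0.0.1', 11300, $errno, $errstr, 5);

// Reserve the specific job by id (requires beanstalkd >= 1.12).
fwrite($conn, "reserve-job 42\r\n");
$response = trim(fgets($conn)); // e.g. "RESERVED 42 <bytes>" or "NOT_FOUND"

if (strpos($response, 'RESERVED') === 0) {
    fgets($conn); // consume the job body (sketch assumes a single-line payload)

    // Release the job back as ready with top urgency (priority 0)
    // and no delay, so it is picked up before the other ready jobs.
    fwrite($conn, "release 42 0 0\r\n");
    fgets($conn); // expect "RELEASED"
}

fclose($conn);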
Is there an event scheduler available in PostgreSQL, similar to the one in MySQL?
While a lot of people just use cron, the closest thing to a built-in scheduler is PgAgent. It's a component of the pgAdmin GUI management tool. A good intro to it can be found at Setting up PgAgent and doing scheduled backups.
pg_cron is a simple, cron-based job scheduler for PostgreSQL that runs inside the database as an extension. A background worker initiates commands according to their schedule by connecting to the local database as the user that scheduled the job.
pg_cron can run multiple jobs in parallel, but it runs at most one instance of a job at a time. If a second run is supposed to start before the first one finishes, then the second run is queued and started as soon as the first run completes. This ensures that jobs run exactly as many times as scheduled and don't run concurrently with themselves.
If you set up pg_cron on a hot standby, then it will start running the cron jobs, which are stored in a table and thus replicated to the hot standby, as soon as the server is promoted. This means your periodic jobs automatically fail over with your PostgreSQL server.
Source: citusdata.com
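For a concrete picture of how jobs are registered, scheduling is done by calling the cron.schedule function inside the database. Here is a minimal sketch from PHP via PDO; the DSN, credentials, and the cleanup job itself are placeholders:

// Connect to the database where the pg_cron extension is installed.
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'postgres', 'secret');

// cron.schedule(schedule, command) registers the job and returns its id;
// here: delete week-old events every night at 03:30.
$stmt = $pdo->query(
    "SELECT cron.schedule(
        '30 3 * * *',
        'DELETE FROM events WHERE event_time < now() - interval ''1 week'''
    )"
);

echo 'Scheduled pg_cron job #' . $stmt->fetchColumn() . PHP_EOL;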