Is there a way to kill/remove a job that is already reserved?
Say I have 5 jobs that I pushed onto the queue and the queue is currently processing the 2nd job, but I want to cancel the processing of that 2nd job. Not all the jobs should be killed/removed, just the 2nd one, if that is the one my request asks to remove.
I'm using beanstalkd for this queuing by the way.
You could use a Beanstalkd console.
This one is pretty good: https://github.com/ptrofimov/beanstalk_console
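If you need to do it from code instead, here is a minimal sketch using Pheanstalk, the client behind Laravel's beanstalkd driver. It assumes Pheanstalk v4.x (method names shift between major versions), and $jobIdToCancel is a hypothetical id you tracked when pushing the job:

use Pheanstalk\Pheanstalk;

// Sketch only: assumes Pheanstalk v4.x; verify against your installed version.
$pheanstalk = Pheanstalk::create('127.0.0.1');
$pheanstalk->useTube('default'); // 'default' is a placeholder tube name

// Peek at the next ready job without reserving it
// (throws a server exception if the tube is empty).
$job = $pheanstalk->peekReady();

// Delete it if it is the one you want to cancel.
if ($job->getId() === $jobIdToCancel) {
    $pheanstalk->delete($job);
}

Bear in mind that beanstalkd will not let you delete a job that is currently reserved by another connection, so this only works for jobs that are still ready, delayed, or buried; the job a worker is processing right now has to finish, fail, or be released first.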
I need my Laravel workers to work with a queue priority. In the Laravel documentation this is achieved by using:
php artisan queue:work --queue=jobA,jobB
Running this, all jobs on the jobA queue will be processed, and then the ones on jobB. But what I need is to prioritize the jobs on the jobA queue, and when there are no more of those left, process any of the remaining jobs.
I need this in order to make sure that all my workers are being used, because if I have 50 jobs on the jobD queue and none on jobA, I want that worker to keep working on those remaining jobs while there are none for jobA, something like:
php artisan queue:work --queue=jobA,jobB,[anyOtherJobRemaining]
Where the worker will process all the jobs on the jobA queue, then jobB, and then any other available job in the jobs table.
What I'm trying to do here is optimize the way I use the workers; if there is a better way to achieve this, it would be really nice if anyone could point me to the solution. Thank you!
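As far as I know, core Laravel's --queue option has no wildcard for "any remaining queue", so the closest sketch, assuming the queue names are known up front (jobC and default are placeholders here), is to list every queue explicitly in priority order:

php artisan queue:work --queue=jobA,jobB,jobC,jobD,default

On each iteration the worker drains the leftmost non-empty queue first, so jobA jobs are picked up as soon as they appear, and the later queues only run while jobA and jobB are empty.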
We have an external service which processes a specific task on given data. Because this takes a while per task, and we have tens of thousands of tasks, we decided to put the processing into jobs and a queue; otherwise we would get a timeout.
Processing all the tasks can take about 15 hours.
So we decided to split the tasks into chunks and handle each chunk in its own job, so that a job only takes about 1 minute.
Considering that the receiving service has limited resources, it is important to process the jobs one after another, not concurrently.
We put these jobs onto a dedicated named queue to separate them from other jobs such as email sending.
In the local test environment, everything works properly with the sync, database, and SQS drivers.
Now I will explain the issue with the live environment:
When I run the jobs in my local test environment with SQS, invoked by php artisan queue:listen --queue=<name of the queue>, all jobs show up in the "messages available" column, and one by one each is removed from "messages available" and added to the "messages in flight" column.
The "messages in flight" column never holds more than one message.
After deploying everything to production, the following happens:
The command that adds the jobs to the queue is invoked by the scheduler, instead of from the console as in my local environment.
Then all jobs are added to "messages available", and immediately dozens of them are moved to "messages in flight", until every job from "messages available" has been moved over. So it seems the jobs are not processed step by step but all at once, as a kind of brute force.
The other thing is that only 5 jobs get executed. After that nothing happens: the receiving service gets no requests, the failed_jobs table is empty, and the jobs still remain in "messages in flight".
I have no idea what I'm doing wrong!
Is there another way to process thousands of jobs?
I've set queue-concurrency to 1, and all queues are listed under the queues section in vapor.yml.
Furthermore, I've set the timeouts for cli, general, and queue to 900 seconds.
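So the relevant part of my vapor.yml looks roughly like this (a sketch; the environment and queue names are placeholders):

environments:
    production:
        timeout: 900
        cli-timeout: 900
        queue-timeout: 900
        queue-concurrency: 1
        queues:
            - my-chunk-queue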
After checking the SQS log file in CloudWatch, I see that the job has been executed about 4 times.
The time between the first and last job in the log file is about 6 minutes at most.
Does anybody have any ideas?
Thank you in advance.
Best
Michael
In my Laravel 5.1 project I want to start my second job when the first one has finished.
Here is my logic:
\Queue::push(new MyJob());
and when this job finishes I want to start this one:
\Queue::push(new ClearJob());
How can I achieve this?
If you want this, you should just define one queue.
A queue is just a list/line of things waiting to be handled in order,
starting from the beginning. When I say things, I mean jobs. - https://toniperic.com/2015/12/01/laravel-queues-demystified
To get the opposite of what you want, i.e. asynchronously executed jobs, you would define a new queue for every job.
Multiple Queues and Workers
You can have different queues/lists for
storing the jobs. You can name them however you want, such as “images”
for pushing image processing tasks, or “emails” for queue that holds
jobs specific to sending emails. You can also have multiple workers,
each working on a different queue if you want. You can even have
multiple workers per queue, thus having more than one job being worked
on simultaneously. Bear in mind having multiple workers comes with a
CPU and memory cost. Look it up in the official docs, it’s pretty
straightforward.
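Since Laravel 5.1 has no built-in job chaining, a minimal sketch that makes the dependency explicit (assuming the stock App\Jobs\Job base class a 5.1 app generates) is to push the follow-up job from the end of the first job's handle method:

<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

class MyJob extends Job implements ShouldQueue
{
    public function handle()
    {
        // ... do the actual work here ...

        // Only enqueue the follow-up once this job's work has completed.
        \Queue::push(new ClearJob());
    }
}

That way ClearJob can never run before MyJob has finished, even if you later add more workers to the queue.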
In Sidekiq you can specify the name of the queue you want to add a job to, so what's the benefit of separating jobs into multiple queues? Why not just put all jobs in the default queue?
In my current project we send a lot of jobs to Sidekiq. For one type of job we submit about 200,000 jobs a day. For another type (sending email to users) we send maybe 100. If they were all in the same queue, there is a very good chance that a "please confirm your account" email would sit in the 200,001st spot and not get run for a long time (hours?). By having multiple queues I can ensure that those 100 get sent out promptly.
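For example (a sketch; the queue names are made up), you can dedicate a worker process to the small, urgent queue, or weight the queues so one process checks the urgent one more often:

sidekiq -q mailers
sidekiq -q mailers,5 -q default,1

With the weighted form, Sidekiq checks the mailers queue five times as often as default, so the rare email jobs never wait behind the bulk jobs.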
I want to send out only one email at a time, so I have a script starting every minute. Is it possible to send out only one email and then stop delayed_job from sending the next queued jobs?
Before you place the next job on the queue, you could look at the last job that's already on the queue and check its run_at time. Then set the run_at time for your job to one minute later. If there are no jobs on the queue, set it to now, or to one minute from now, depending on how strict you need to be about the one-minute gap.
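A rough sketch of that idea, assuming delayed_job's ActiveRecord backend and a hypothetical EmailJob payload (any object responding to #perform works):

# Schedule each new job one minute after the latest scheduled one.
last = Delayed::Job.order(:run_at).last
run_at = last ? last.run_at + 1.minute : Time.current
Delayed::Job.enqueue(EmailJob.new(user_id), run_at: run_at)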
You could fetch one specific job from DJ's table itself, invoke it and then destroy it, something like:
job = Delayed::Job.last
job.invoke_job # runs the job's payload but does NOT destroy the record
job.destroy    # remove it so the regular workers don't pick it up again
Found it here: https://groups.google.com/forum/#!topic/delayed_job/5j5BmAlXN3g