Laravel queue:work with priority if it exists

I need my Laravel workers to process queues by priority. The Laravel documentation achieves this with:
php artisan queue:work --queue=jobA,jobB
Running this, all jobs on the jobA queue are processed first, and then the ones on jobB. What I need is to prioritize the jobs on jobA, but once there are no more of those remaining, process any of the remaining jobs.
I need this to make sure that all my workers stay busy, because if I have 50 jobs on the jobD queue and none on jobA, I want that worker to keep working through those while there is nothing on jobA, something like:
php artisan queue:work --queue=jobA,jobB,[anyOtherJobRemaining]
Here the worker would process all the jobs on the jobA queue, then jobB, and then any other available job in the jobs table.
What I'm trying to do here is optimize the way I use the workers; if there is a better way to achieve this, it would be really nice if anyone could point me to the solution. Thank you!
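One common arrangement (a sketch, assuming Supervisor is available; program names, paths, and queue names are illustrative) is to run two worker pools: one that strictly prefers the high-priority queues, and a catch-all pool that lists every queue so no worker sits idle when the priority queues are empty:

```ini
; High-priority pool: drains jobA first, then jobB
[program:worker-priority]
command=php /var/www/artisan queue:work --queue=jobA,jobB --sleep=3 --tries=3
numprocs=2
autostart=true
autorestart=true

; Catch-all pool: still prefers jobA/jobB, but falls back to the rest
[program:worker-all]
command=php /var/www/artisan queue:work --queue=jobA,jobB,default --sleep=3 --tries=3
numprocs=2
autostart=true
autorestart=true
```

Since `queue:work` always checks the listed queues in order, the catch-all pool only touches the later queues when `jobA` and `jobB` are empty.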

Related

How do I configure queue workers, connection and limiter to avoid job failing

My project consumes several third-party APIs that enforce request limits. My project calls these APIs through Laravel jobs, and I am using spatie/laravel-rate-limited-job-middleware for rate limiting.
Once a project is submitted, around 60 jobs are dispatched on average. These jobs need to be executed at 1 job/minute.
There is one supervisord program running 2 processes of the default queue with --tries=3.
Also, in config/queue.php for Redis I am using 'retry_after' => (60 * 15) to avoid retrying while a job is still executing.
My current rate limiter middleware is coded this way:
return (new RateLimited())
    ->allow(1)
    ->everySeconds(60)
    ->releaseAfterBackoff($this->attempts());
What happens is that 3 jobs get processed in 3 minutes, but after that all jobs fail.
What I can understand is that all jobs are re-queued every minute, and once they cross the tries threshold (3), they are moved to failed_jobs.
I tried removing the --tries flag, but that didn't work. I also tried increasing it to --tries=20, but then jobs fail after 20 minutes.
I don't want to hardcode the --tries flag, as in some situations more than 100 jobs can be dispatched.
I also want to increase the number of queue worker processes in supervisord so that a few jobs can execute in parallel.
I understand it is an issue with configuring the retry and timeout flags, but I don't understand how. Need help...
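A common way around a fixed attempt budget is to make the job time-based rather than count-based: Laravel lets a job define a `retryUntil()` method, so a release by the rate limiter does not burn through `--tries`. A minimal sketch (the class name and the 3-hour window are illustrative assumptions, sized for ~100 jobs at 1 job/minute):

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class CallThirdPartyApi implements ShouldQueue
{
    use Queueable, InteractsWithQueue;

    // With a time-based limit, each release by the rate limiter simply
    // re-queues the job; it keeps retrying until this moment instead of
    // failing after a fixed number of attempts.
    public function retryUntil(): \DateTime
    {
        // 100 jobs at 1 job/minute need well over an hour of headroom.
        return now()->addHours(3);
    }

    public function handle(): void
    {
        // ... call the rate-limited API here ...
    }
}
```

Check the docs for your Laravel version: when a job defines `retryUntil()`, the time limit governs retries, which is exactly what you want when the number of dispatched jobs varies.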

Will a low priority job in the artisan queue stop a high priority task from being executed if the low priority job takes a long time to complete?

I'm running the artisan queue worker with pm2 and was thinking of running two artisan workers: one that could process the high-priority queue, and the other that would process low-priority, long jobs.
The issue is that pm2 does not allow running the same script as a separate instance.
I know that I can set priorities here with --queue=live-high,live-low,default, but my problem is that if a low-priority job takes 5 minutes to complete, I need to be able to process high-priority jobs in the meantime.
From the Laravel Documentation:
Background Tasks
By default, multiple commands scheduled at the same time will execute
sequentially. If you have long-running commands, this may cause
subsequent commands to start much later than anticipated. If you would
like to run commands in the background so that they may all run
simultaneously, you may use the runInBackground method:
$schedule->command('analytics:report')
->daily()
->runInBackground();
https://laravel.com/docs/5.7/scheduling#background-tasks
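Note that the quoted passage is about the scheduler, not queue workers. For the worker problem itself, a sketch of one workaround: pm2 can usually run the same script twice if each process gets a distinct name (process names, paths, and queue names below are illustrative assumptions). A dedicated high-priority worker is then never blocked by a long low-priority job:

```shell
# Two independent worker processes distinguished by --name.
# The high-priority worker only ever watches live-high, so a 5-minute
# job on live-low cannot delay it.
pm2 start artisan --name queue-high --interpreter php -- queue:work --queue=live-high
pm2 start artisan --name queue-low  --interpreter php -- queue:work --queue=live-low,default
```

Each pm2 process runs one job at a time, so separating the queues across processes is what gives you the parallelism.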

Laravel Queue start a second job after first job

In my Laravel 5.1 project I want to start my second job when the first one has finished.
Here is my logic:
\Queue::push(new MyJob())
and when this job finishes I want to start this one:
\Queue::push(new ClearJob())
How can I achieve this?
If you want this, you should just define a single queue.
A queue is just a list/line of things waiting to be handled in order,
starting from the beginning. When I say things, I mean jobs. - https://toniperic.com/2015/12/01/laravel-queues-demystified
To get the opposite of what you want (asynchronously executed jobs), you would define a new queue for every job.
Multiple Queues and Workers
You can have different queues/lists for
storing the jobs. You can name them however you want, such as “images”
for pushing image processing tasks, or “emails” for queue that holds
jobs specific to sending emails. You can also have multiple workers,
each working on a different queue if you want. You can even have
multiple workers per queue, thus having more than one job being worked
on simultaneously. Bear in mind having multiple workers comes with a
CPU and memory cost. Look it up in the official docs, it’s pretty
straightforward.
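If the second job must only run after the first one succeeds (not merely after it in queue order), a minimal sketch for Laravel 5.1, where `withChain()` does not exist yet, is to have the first job push the follow-up itself (the base class and interfaces shown are the usual 5.1 job scaffolding; adjust to your own job classes):

```php
<?php

// A sketch: MyJob queues ClearJob only once its own work has completed.
class MyJob extends Job implements SelfHandling, ShouldQueue
{
    public function handle()
    {
        // ... do the actual work ...

        // This line is only reached if the work above did not throw,
        // so ClearJob is queued strictly after MyJob succeeds.
        \Queue::push(new ClearJob());
    }
}
```

This also works across multiple queues, since the ordering is enforced by the dispatch point rather than by queue position.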

Laravel queue remove jobs

Is there a way to kill/remove a job that is already reserved?
Say I have 5 jobs that I pushed onto the queue and the queue is currently processing the 2nd job, but I want to cancel processing of the 2nd job. Not all the jobs should be killed/removed, just the 2nd one, if my request is to remove only that.
I'm using beanstalkd for this queuing by the way.
You could use a Beanstalkd console.
This one is pretty good: https://github.com/ptrofimov/beanstalk_console
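Besides a console UI, you can also talk to Beanstalkd directly with the pda/pheanstalk library that Laravel itself uses. A sketch (the API shown is roughly the v3 style; method names vary between pheanstalk versions, and the job id here is an illustrative assumption):

```php
<?php

require 'vendor/autoload.php';

// Connect to the beanstalkd server (host is an assumption).
$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');

// Look up the specific job by its beanstalkd id and delete it.
// Only this one job is removed; the rest of the queue is untouched.
$jobId = 2; // illustrative: the id of the job you want to cancel
$job = $pheanstalk->peek($jobId);
$pheanstalk->delete($job);
```

Note that a job currently reserved by a worker cannot be deleted out from under that worker; deletion works on jobs the connection can see (ready, buried, or its own reserved jobs).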

Laravel Artisan Queues - high cpu usage

I have set up queues in Laravel for my processing scripts.
I am using beanstalkd and supervisord.
There are 6 different tubes for different types of processing.
The issue is that for each tube, artisan is constantly spawning workers every second.
The worker code seems to sleep for 1 second, and then the worker thread uses 7-15% CPU. Multiply this by 6 tubes (and I would like to have multiple workers per tube) and my CPU is being eaten up.
I tried changing the 1 second sleep to 10 seconds.
This helps, but there is still a huge CPU spike every 10 seconds when the workers wake back up.
I am not even processing anything at this time because the queues are completely empty; it is simply the workers looking for something to do.
I also tested the CPU usage of Laravel when I refreshed a page in a browser, and that was hovering around 10%. I am on a low-end Rackspace instance right now, so that could explain it, but still, it seems like the workers spin up a Laravel instance every time they wake up.
Is there no way to solve this? Do I just have to put a lot of money into a more expensive server just to be able to listen for whether a job is ready?
EDIT:
Found a solution... it was NOT to use artisan queue:listen or queue:work.
I looked into the queue code, and there doesn't seem to be a way around this issue: it requires Laravel to load every time a worker checks for more work to do.
Instead, I wrote my own listener using pheanstalk.
I am still using Laravel to push things into the queue; my custom listener then parses the queue data and triggers an artisan command to run.
Now the CPU usage of my listeners is essentially 0%. The only time my CPU shoots up now is when a listener actually finds work to do and triggers the command, and I am fine with that.
The high CPU problem is caused by the worker loading the complete framework every time it checks for a job in the queue. In Laravel 4.2, you can use php artisan queue:work --daemon. This loads the framework once, and the checking/processing of jobs happens inside a while loop, which lets the CPU breathe easy. You can find more about the daemon worker in the official documentation: http://laravel.com/docs/queues#daemon-queue-worker.
However, this benefit comes with a drawback: you need special care when deploying code, and you have to take care of database connections, since long-running database connections are usually disconnected.
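A sketch of what that looks like in practice (flag values are illustrative):

```shell
# Laravel 4.2 daemon worker: the framework boots once, then polls in a loop.
php artisan queue:work --daemon --sleep=3 --tries=3

# After each deploy, signal running daemons to restart so they pick up
# the new code instead of serving stale classes from memory:
php artisan queue:restart
```

The restart signal is the "special care when deploying" mentioned above: without it, daemon workers keep running the old code until they are killed.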
I had the same issue.
But I found another solution. I used the artisan worker as-is, but I modified the 'watch' time. By default (in Laravel) this time is hardcoded to zero; I changed this value to 600 (seconds). See the file:
vendor/laravel/framework/src/Illuminate/Queue/BeanstalkdQueue.php
in the function
public function pop($queue = null)
So now the worker also listens on the queue for 10 minutes. When it does not get a job, it exits, and supervisor restarts it. When it receives a job, it executes it; after that it exits, and supervisor restarts it.
==> No polling anymore!
Notes:
It does not work for iron.io queues or others.
It might not work when you want one worker to accept jobs from more than one queue.
According to this commit, you can now pass a new option to queue:listen, --sleep={int}, which lets you fine-tune how long to wait before polling for new jobs.
Also, the default has been set to 3 seconds instead of 1.
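With that option, the polling interval can be tuned directly instead of patching vendor code, for example:

```shell
# Poll for new jobs at most once every 10 seconds while the queue is empty.
php artisan queue:listen --sleep=10
```

A longer sleep trades job pickup latency for lower idle CPU usage.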
