Property precedence of Laravel's queued jobs

I am having a hard time figuring out the precedence of properties in Laravel's queued jobs.
Currently, I have a job class with tries and timeout properties defined as shown below:
class ProcessSmsJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 3;
    public $timeout = 300;

    public function __construct()
    {
        //
    }

    public function handle()
    {
        //
    }
}
Say the queue worker on the server is started with different settings, e.g. tries = 1 and timeout = 700. What I want to know is: if I push this code to the server, will the settings (tries, timeout) defined on the server's queue worker take precedence over the properties defined in the class, or will the worker use this job's own properties for this job?
Also, an unrelated question: if a job has tries = 3 and completes successfully on the 2nd attempt, no record is stored in the failed_jobs table, right? So is there a way to determine how many attempts the job took to complete?

The documentation states:
One approach to specifying the maximum number of times a job may be attempted is via the --tries switch on the Artisan command line. This will apply to all jobs processed by the worker unless the job being processed specifies a more specific number of times it may be attempted.
The $tries inside the job class takes precedence.
Same for the timeout. But make sure your $timeout is never larger than the retry_after setting in your queue.php config file.
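For illustration, a $timeout of 300 would pair with a connection entry in config/queue.php along these lines (the values here are an assumed example, not taken from the question):

'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 310, // must stay larger than the job's $timeout of 300
],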
Regarding your second question: Laravel itself keeps track of how many times a job has been attempted. Inside the job you can read it via $this->attempts(), which is provided by the InteractsWithQueue trait. Otherwise, you could implement a counter yourself.
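For example, a minimal sketch of logging the attempt count from inside the job (the log message is illustrative):

public function handle()
{
    // attempts() is provided by the InteractsWithQueue trait and returns
    // how many times this job has been attempted (starting at 1).
    logger('ProcessSmsJob attempt: ' . $this->attempts());

    // ...actual job logic
}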

Related

How to lock the job during execution in Laravel?

I see withoutOverlapping() mutex for commands, but I don't see it for jobs. How can I protect jobs of the same type from overlapping each other?
Thanks!
I think it's possible using the following:
https://laravel.com/docs/8.x/queues#unique-jobs
You can specify a key that you pass to the job to mark its uniqueness. In my case, I need to limit requests to a third-party API that happen in the job; if I have more than one worker handling the queue, it's possible to get a 429 from the API. Since I have many API keys (one per user of the app), I can use the key to have the same type of job executed independently across the app's users, but lock the job execution while the current job with a specific key is not yet completed.
Like this:
// When defining the class you must implement the ShouldBeUnique interface
class UpdateSpreadsheet implements ShouldQueue, ShouldBeUnique
{
    // ...some other code

    public function __construct($keyValue)
    {
        // ...some other constructor code if needed
        $this->keyValue = $keyValue;
    }

    // This method sets the unique key
    public function uniqueId()
    {
        return $this->keyValue;
    }

    // If you don't need to wait until the job is processed, you may also
    // specify how long the unique lock is held (so you'll be able to queue
    // another job with this key after 10 seconds, even if the current job
    // is still in progress)
    public $uniqueFor = 10;
}
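For completeness, dispatching would then look something like the line below, assuming the class also uses the Dispatchable trait ($apiKey stands in for the per-user key from the scenario above and is an assumed name):

UpdateSpreadsheet::dispatch($apiKey);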

Laravel Single Job Class dispatched multiple times with different parameters getting overwritten

I'm using Laravel Jobs to pull data from the Stripe API in a paginated way. Each job gets a "brand id" (a user can have multiple brands per account) and a "start after" parameter. It uses those to know which Stripe token to use and where to start in the paginated calls (the job dispatches itself again if more Stripe responses are available in the pagination). This runs fine when the job is started once.
But there is a use case where a user could add Stripe keys to multiple brands in a short time, and the job class could get called multiple times concurrently with different parameters. When this happens, whichever process is started last overwrites the others, because the parameters are overwritten with the last ones passed. So if I start the Stripe job with brand_id = 1, then with brand_id = 2, then with brand_id = 3, 3 overwrites the other two after one cycle and only 3 gets passed for all future calls.
How do I keep this from happening?
I've tried static vars; I've tried protected, private and public vars. I thought I might be able to solve it with dynamically created queues for each brand, but this seems like a huge headache.
public function __construct($brand_id, $start_after = null)
{
    $this->brand_id = $brand_id;
    $this->start_after = $start_after;
}

public function handle()
{
    // Do Stripe calls with $brand_id & $start_after
    if ($response->has_more) {
        // Call next job with the new "start_after".
        dispatch(new ThisJob($this->brand_id, $new_start_after));
    }
}
According to the Laravel documentation:
if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration.
// This job is sent to the default queue...
dispatch(new Job);
// This job is sent to the "emails" queue...
dispatch((new Job)->onQueue('emails'));
However, pushing jobs to multiple queues with unique names can be especially useful for your use case.
The queue name may be any string that uniquely identifies the queue itself. For example, you may wish to construct the queue name from uniqid() and $brand_id.
E.g:
dispatch((new ThisJob($this->brand_id, $new_start_after))->onQueue(uniqid() . '_' . $this->brand_id));
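Keep in mind that a worker only processes the queues it is told to listen to, so each dynamically generated queue name needs a matching worker, e.g.:

php artisan queue:work redis --queue=your_queue_name

Here your_queue_name stands in for the generated name.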

Laravel - Running Jobs in Sequence

I am learning Laravel, working on a project which runs Horizon to learn about jobs. I am stuck at one place where I need to run the same job a few times, one after another.
Here is what I am currently doing
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Jobs\SendMailJob;
use App\Models\Subscriptions;

class MailController extends Controller
{
    public function sendEmail()
    {
        Subscriptions::all()
            ->each(function ($subscription) {
                SendMailJob::dispatch($subscription);
            });
    }
}
This works fine, except it runs the jobs across several workers and not in a guaranteed order. Is there any way to run the jobs one after another?
What you are looking for, as you mention in your question, is job chaining.
From the Laravel docs
Job chaining allows you to specify a list of queued jobs that should be run in sequence. If one job in the sequence fails, the rest of the jobs will not be run. To execute a queued job chain, you may use the withChain method on any of your dispatchable jobs:
ProcessPodcast::withChain([
    new OptimizePodcast,
    new ReleasePodcast
])->dispatch();
So, in your example above:
$mailJobs = Subscriptions::all()
    ->map(function ($subscription) {
        return new SendMailJob($subscription);
    });

Job::withChain($mailJobs)->dispatch();
Should give the expected result!
Update
If you do not want to use an initial job to chain from (like in the documentation example above), you should be able to make an empty Job class that has use Dispatchable;. Then you can use my example above.
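A minimal sketch of such a placeholder class, assuming the conventional App\Jobs namespace (the class name ChainStarterJob is illustrative, not from the original answer):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

class ChainStarterJob implements ShouldQueue
{
    use Dispatchable, Queueable;

    // Intentionally empty: this job exists only so the chained
    // jobs have something to hang off.
    public function handle()
    {
    }
}

Usage would then be: ChainStarterJob::withChain($mailJobs)->dispatch();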
Everything depends on how many queue workers you run.
If you run a single queue worker, the jobs will be processed in the order they were queued. However, if you run multiple queue workers, they will obviously run at the same time. This is how queues are meant to work: you add some tasks, and they might run at the same time, in a different order.
Of course, if you want to make sure there is a pause between the jobs, you could add a sleep() inside each one, but assuming you are running this in a controller (which might not be a good idea; what happens when you have a million subscriptions?) it is probably not the best solution.
What you need is Job Chaining.
You can read all about it on the Laravel website: Chain
Good luck
To add some extra info to @Josh's answer: if one of your jobs implements the Illuminate\Contracts\Queue\ShouldQueue interface, the other jobs in the chain must implement ShouldQueue as well, otherwise they will not be chained.
For example, if the ProcessPodcast class here implements ShouldQueue, OptimizePodcast must also implement this interface to be included in the chain.
class ProcessPodcast implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // ...
}

class OptimizePodcast implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // ...
}

Job always fails when rate limiting Laravel jobs with Redis

This is a follow-up on
Laravel - Running Jobs in Sequence
I decided to go with Redis rate limiting. The code is below:
class SendMailJob implements ShouldQueue
{
    protected $subscription;

    public function __construct(Subscription $subscription)
    {
        $this->subscription = $subscription;
    }

    public function handle()
    {
        Redis::funnel('mailingJob')->limit(1)->then(function () {
            // Job logic...
            (new Mailer($this->subscription))->send();
        }, function () {
            // Could not obtain lock...
            return $this->release(10);
        });
    }
}
And the controller code looks like this:
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Jobs\SendMailJob;
use App\Models\Subscriptions;

class MailController extends Controller
{
    public function sendEmail()
    {
        Subscriptions::all()
            ->each(function ($subscription) {
                SendMailJob::dispatch($subscription);
            });
    }
}
Now, when I run the queue, some of the jobs work; the rest (around 90%) fail with the error below in Horizon:
SendMailJob has been attempted too many times or run too long. The job may have previously timed out.
What am I missing? Please point me in the right direction.
My goal is to run only one job of a given type at a time.
[...] has been attempted too many times or run too long is an error that doesn't tell you why the job failed. It means some other exception caused your job to fail every time the worker attempted it, and the worker has tried it the maximum number of times allowed by your configuration. To understand why it's failing, check your laravel.log file for the exception that caused the job to fail.
In your case, since Mailer contacts an external system, it could be that the system you're connecting to is rate limiting you, or it could be having temporary connection problems or other service downtime. Again, there should be more detail in your log files.
The Laravel documentation has a hint about this:
When using rate limiting, the number of attempts your job will need to run successfully can be hard to determine. Therefore, it is useful to combine rate limiting with time based attempts.
The core of the issue is that the job keeps failing until it can acquire the lock and run.
So I imagine that where you are running your queue worker, you are not setting the --tries flag high enough.
Although you could just set a very high --tries, that is not really scalable.
The best solution, as suggested in the documentation, would be to increase the number of tries as well as use time-based attempts.
You can also increase the release time in return $this->release(10);. That makes the job wait longer before trying to reacquire the lock, so it uses up fewer tries!
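For reference, time-based attempts are defined with a retryUntil method on the job class; a minimal sketch (the 10-minute window is an arbitrary example):

public function retryUntil()
{
    // Keep retrying (subject to the rate limiter) for up to 10 minutes,
    // after which the job is considered failed.
    return now()->addMinutes(10);
}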

Laravel Scheduling in clustered environment

I am working with scheduling in Laravel 5.3. Previously, I was using one server to host the Laravel application. Now that I am using two servers to run the app, how do I ensure that both servers are not running the same jobs at the same time?
Recently, I saw an Event method called "withoutOverlapping()". See https://laravel.com/docs/5.3/scheduling#preventing-task-overlaps
In my case, withoutOverlapping() cannot help me as I am working in a clustered environment.
Are there any workarounds or suggestions regarding this?
First of all, decide whether it is critical to avoid running a task multiple times or not.
For example, if your app uses a task to do some sort of cleanup, there is almost no drawback to running it on every server (who cares if you try to delete messages older than 10 minutes twice?).
If it is absolutely critical to run every task only once, you'll need to define a "main server" that executes the tasks, and secondary servers that only answer requests but don't perform any tasks. This is quite trivial, as you just have to give every env a different name in your .env and test against that when you define the scheduler tasks.
This is the easiest way; seriously, don't bother building a database locking mechanism or whatever to synchronise tasks across servers. Even OSes struggle to manage synchronisation between threads properly on the same machine; why would you want to implement the same across different machines?
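A minimal sketch of that env check in app/Console/Kernel.php, assuming an env name of main-server and an illustrative command (neither name comes from the original answer):

protected function schedule(Schedule $schedule)
{
    // Only the designated "main" server runs scheduled tasks;
    // the secondary servers skip scheduling entirely.
    if (config('app.env') === 'main-server') {
        $schedule->command('emails:send')->daily();
    }
}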
Here's what I've done when I ran into the same problems with load balancing:
abstract class MutexCommand extends Command
{
    private $hash = null;

    public function cleanup()
    {
        if (is_string($this->hash)) {
            Redis::del($this->hash);
            $this->hash = null;
        }
    }

    abstract protected function generateHash();

    abstract protected function handleInternal();

    final public function handle()
    {
        register_shutdown_function([$this, "cleanup"]);

        try {
            $this->hash = $this->generateHash();

            // Set a value only if it does not exist yet, atomically; the call
            // fails if the key already exists. Essentially SETNX is the
            // mechanism used to acquire the lock.
            if (!Redis::setnx($this->hash, true)) {
                $this->hash = null; // Prevent the foreign lock from being cleaned up
                throw new Exception("Already running");
            }

            $this->handleInternal();
        } finally {
            $this->cleanup();
        }
    }
}
Then you can write your commands:
class ThisShouldNotOverlap extends MutexCommand
{
    public function generateHash()
    {
        return "Unique key for the mutex; you can just use the class name if you want by doing return static::class";
    }

    public function handleInternal()
    {
        /* do stuff */
    }
}
Then whenever you try to run the same command on multiple instances, one will successfully acquire the "lock" and the others will fail.
Of course this assumes that you are using a non-clustered Redis cache.
If you are not using Redis, there are probably similar locking mechanisms you can implement in other caches; if you are using a clustered Redis, you may need to use the RedLock locking mechanism.
Essentially no, there is no natural way in Laravel to know whether another Laravel app has the same job on the job dispatcher.
We have some options to find a solution:
Create an intermediate app that manages the jobs from the other apps.
Allow only one app to dispatch jobs.
Use worker queues; there are some packages for this. I would recommend using Laravel 5 with WebSockets and Queue Asynchronously.
First of all, the Laravel scheduler isn't designed to work in a clustered environment; it was never intended to be used that way.
I would suggest having a dedicated cron instance which manages your Laravel scheduler jobs.
