I am learning Laravel, working on a project that runs Horizon so I can learn about jobs. I am stuck at one point where I need to run the same job several times, one after another.
Here is what I am currently doing
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Jobs\SendMailJob;
use App\Models\Subscriptions;

class MailController extends Controller
{
    public function sendEmail() {
        Subscriptions::all()
            ->each(function ($subscription) {
                SendMailJob::dispatch($subscription);
            });
    }
}
This works fine, except it runs the jobs across several workers and not in a guaranteed order. Is there any way to run the jobs one after another?
What you are looking for, as you mention in your question, is job chaining.
From the Laravel docs:
Job chaining allows you to specify a list of queued jobs that should be run in sequence. If one job in the sequence fails, the rest of the jobs will not be run. To execute a queued job chain, you may use the withChain method on any of your dispatchable jobs:
ProcessPodcast::withChain([
    new OptimizePodcast,
    new ReleasePodcast
])->dispatch();
So in your example above:
$mailJobs = Subscriptions::all()
    ->map(function ($subscription) {
        return new SendMailJob($subscription);
    });

Job::withChain($mailJobs->all())->dispatch();
Should give the expected result!
Update
If you do not want to use an initial job to chain from (like the documentation example above shows), you should be able to make an empty Job class that has use Dispatchable;. Then you can use my example above; a sketch follows.
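For illustration, a minimal sketch of such an empty chain-starter job (this Job class is a hypothetical placeholder, not something Laravel ships):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

// Hypothetical no-op job that exists only to start a chain.
class Job implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function handle()
    {
        // Intentionally empty; the SendMailJob instances attached
        // via withChain() run in sequence after this completes.
    }
}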
Everything depends on how many queue workers you run.
If you run a single queue worker, the jobs will be processed in the order they were queued. However, if you run multiple queue workers, they will obviously run at the same time. This is how queues are meant to work: you add some tasks, and they might run concurrently and in a different order.
Of course, if you want to make sure there is a pause between those jobs, you could add a sleep() inside each one, but assuming you are running this in a controller (which might not be a good idea: what happens when you have a million subscriptions?) it is probably not the best solution.
What you need is Job Chaining.
You can read all about it on the Laravel website: Chain
Good luck
To add some extra info to @Josh's answer: if one of your jobs implements the Illuminate\Contracts\Queue\ShouldQueue interface, the other jobs in the chain must implement ShouldQueue as well, otherwise they will not be chained.
For example, if the ProcessPodcast class here implements ShouldQueue, OptimizePodcast must also implement this interface to be included in the chain.
class ProcessPodcast implements ShouldQueue {
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    // ...
}

class OptimizePodcast implements ShouldQueue {
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    // ...
}
I have two Laravel projects running on the same server.
The first calls the second's API over HTTP, and the second pushes a job to notify some users.
As both projects live on the same server, can't I make the first project push the job directly onto the second one's Redis job queue?
I have never tried this approach, but it should be possible.
You didn't specify which queue connections your projects use, but let's assume they use two different connections, for example two different Redis servers.
In your first Laravel project's config/queue.php, add a new entry to connections that points to the queue connection of the second Laravel project. Let's name it project-2-connection.
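A minimal sketch of what that entry could look like (the driver details and the project-2-redis connection name are assumptions; match them to the second project's real setup):

// config/queue.php in project 1
'connections' => [

    // ... existing connections ...

    // Points at the Redis instance that project 2's workers consume.
    'project-2-connection' => [
        'driver' => 'redis',
        'connection' => 'project-2-redis', // defined in config/database.php
        'queue' => 'default',
        'retry_after' => 90,
    ],
],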
Then you can dispatch to that particular connection:
ExampleJob::dispatch($data)->onConnection('project-2-connection');
It's important to make sure that the same job class ExampleJob exists in both projects.
To make your life easier, you should pass $data as a simple array and avoid SerializesModels. If you pass a model from project 1 that doesn't exist in project 2, your job will fail with a ModelNotFoundException. You could use models, but then you would need to have the same model in both projects. A sketch of such a job follows.
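For illustration, a minimal sketch of a job defined identically in both projects (the class name and payload shape are assumptions):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

// Deliberately no SerializesModels: the payload stays a plain array,
// so neither project needs the other's Eloquent models.
class ExampleJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $data;

    public function __construct(array $data)
    {
        $this->data = $data;
    }

    public function handle()
    {
        // Only project 2's worker executes this; project 1 merely queues it.
    }
}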
Set up a queue management server; it can receive and route your jobs into queues, even from multiple servers. A simple piece of code which might help is below:
namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Jobs\ProcessPodcast;
use App\Models\Podcast;
use Illuminate\Http\Request;

class PodcastController extends Controller
{
    /**
     * Store a new podcast.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        // Create podcast...
        $podcast = Podcast::create(/* ... */);

        ProcessPodcast::dispatch($podcast)->onQueue('processing');
    }
}
I am having a hard time figuring out the precedence of properties in Laravel's jobs.
Currently, I have a job class with the tries and timeout properties defined as shown below:
class ProcessSmsJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 3;
    public $timeout = 300;

    public function __construct()
    {
        //
    }

    public function handle()
    {
        //
    }
}
Say the queue worker on the server is configured differently, e.g. tries = 1 and timeout = 700. What I wanted to know is: if I push this code to the server, will the properties (tries, timeout) defined for the server's queue worker take precedence over the ones defined in the class, or will the server run the queue based on this job's own properties?
Also, an unrelated question: if a job has tries = 3 and it executes successfully on the 2nd attempt, the record is not stored in the failed_jobs table, right? So is there a way to determine how many attempts the job took to complete?
The documentation states:
One approach to specifying the maximum number of times a job may be attempted is via the --tries switch on the Artisan command line. This will apply to all jobs processed by the worker unless the job being processed specifies a more specific number of times it may be attempted
The $tries inside the job class takes precedence.
The same goes for the timeout. But make sure your $timeout is never larger than the retry_after setting in your queue.php config file, otherwise a second worker may pick the job up while the first is still running it.
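For illustration, a sketch of the relationship (the numbers are arbitrary examples):

// config/queue.php: retry_after should exceed the job's $timeout,
// or the job may be handed to a second worker while still running.
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 700,
],

// In the job class:
public $timeout = 300; // safely below retry_after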
Regarding your second question: Laravel itself keeps track of the number of times it tries a job, and inside the job you can read that count with $this->attempts() (provided by the InteractsWithQueue trait). If you need the count after the fact, you could log it or implement a counter yourself.
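For example, a minimal sketch that logs the attempt count once the job succeeds (the log message is illustrative):

use Illuminate\Support\Facades\Log;

public function handle()
{
    // ... send the SMS ...

    // attempts() returns how many times this job has been attempted,
    // including the current attempt, so a job that succeeds on its
    // second try logs 2 here.
    Log::info('ProcessSmsJob finished', ['attempts' => $this->attempts()]);
}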
I see the withoutOverlapping() mutex for commands, but I don't see an equivalent for jobs. How can I protect jobs of the same type from overlapping each other?
Thanks!
I think it's possible using the following:
https://laravel.com/docs/8.x/queues#unique-jobs
You can specify a key that marks the job's uniqueness. In my case, I need to limit requests to a third-party API that happen inside the job; with more than one worker handling the queue, it's possible to get a 429 from the API. Since I have many API keys (one per user of the app), I can use the key to let the same type of job execute independently across the app's users, while locking execution whenever the current job for a specific key has not yet completed.
Like this:
//In the class defining you must use ShouldBeUnique interface
class UpdateSpreadsheet implements ShouldQueue, ShouldBeUnique
//some other code
public function __construct($keyValue)
{
//some other constructor code if needed
$this->keyValue= $keyValue;
}
//This function allows to set the unique key
public function uniqueId()
{
return $this->keyValue;
}
//If you don't need to wait until the job is processed, you may also specify
//the time for the force lock removing (so you'll be able to queue another
//job with this key after 10 seconds even if the current job is
//still in process)
public $uniqueFor = 10;
This is a follow-up on
Laravel - Running Jobs in Sequence
I decided to go with the Redis rate limiter. The code is below:
class SendMailJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $subscription;

    public function __construct(Subscriptions $subscription)
    {
        $this->subscription = $subscription;
    }

    public function handle()
    {
        Redis::funnel('mailingJob')->limit(1)->then(function () {
            // Job logic...
            (new Mailer($this->subscription))->send();
        }, function () {
            // Could not obtain lock...
            return $this->release(10);
        });
    }
}
And the controller code looks like:
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Jobs\SendMailJob;
use App\Models\Subscriptions;

class MailController extends Controller
{
    public function sendEmail() {
        Subscriptions::all()
            ->each(function ($subscription) {
                SendMailJob::dispatch($subscription);
            });
    }
}
Now, when I run the queue, some of them work; the rest (around 90%) fail with the error below in Horizon:
SendMailJob has been attempted too many times or run too long. The job may have
previously timed out.
What am I missing? Please point me in the right direction.
My goal is to run only one job of a type at a time.
[...] has been attempted too many times or run too long is an error that doesn't tell you why the job failed. It means some other exception has caused your job to fail every time it was attempted by the worker, and the worker has tried it the maximum number of times it was allowed to by your configuration. To understand why it's failing, check your laravel.log file for the exception that caused the job to fail.
In your case, since Mailer is contacting an external system it could be that the system you're connecting to is rate limiting you, or they could be having temporary connection problems or other service downtime. Again, there should be more detail in your log files.
The Laravel documentation has a hint about this:
When using rate limiting, the number of attempts your job will need to run successfully can be hard to determine. Therefore, it is useful to combine rate limiting with time based attempts.
The core of the issue is that the job keeps failing until it can acquire the lock and run.
So I imagine that where you are running your queue worker, you are not setting the --tries flag high enough.
Although you could just set a very high --tries, it is not really scalable.
The best solution, as suggested in the documentation, would be to increase the number of tries and combine that with time-based attempts (see the sketch below).
You can also increase the release time in return $this->release(10);. That will make the job wait longer before trying to reacquire the lock, so it will use up fewer tries!
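A minimal sketch of combining the two, based on the SendMailJob from the question (the time values are placeholders to tune for your workload):

class SendMailJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Time-based attempts: keep retrying until 10 minutes after the job
    // was first dispatched, rather than counting a fixed number of tries.
    // When retryUntil() is defined it takes precedence over --tries.
    public function retryUntil()
    {
        return now()->addMinutes(10);
    }

    public function handle()
    {
        Redis::funnel('mailingJob')->limit(1)->then(function () {
            (new Mailer($this->subscription))->send();
        }, function () {
            // Release for 30 seconds instead of 10, so failed lock
            // acquisitions retry less often.
            return $this->release(30);
        });
    }
}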
I am working with scheduling in Laravel 5.3. Previously, I was using one server to host the Laravel application. Now that I am using two servers to run the Laravel app, how do I ensure that both servers are not running the same jobs at the same time?
Recently, I saw an Event method called "withoutOverlapping()". See https://laravel.com/docs/5.3/scheduling#preventing-task-overlaps
In my case, withoutOverlapping() cannot help me as I am working in a clustered environment.
Are there any workarounds or suggestions regarding this?
First of all, define whether or not it is critical to avoid running a task multiple times.
For example, if your app uses a task to do some sort of cleanup, there is almost no drawback to running it on every server (who cares if you try to delete 10-minute-old messages twice?).
If it is absolutely critical to run every task only once, you'll need to define a "main server" that executes the tasks, and slave servers that just answer requests but don't perform any tasks. This is quite trivial, as you just have to give every env a different name in your .env and test against that when you define the scheduler tasks (see the sketch below).
This is the easiest way; seriously, don't bother building a database locking mechanism or whatever to synchronise tasks across servers. Even OSes struggle to manage synchronisation properly between threads on the same machine, so why would you want to implement the same across different machines?
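For illustration, a minimal sketch of the env-gated approach (SERVER_ROLE and the command names are hypothetical; use whatever variable and commands fit your app):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Harmless if it runs on every server.
    $schedule->command('messages:cleanup')->everyTenMinutes();

    // Critical tasks run only on the designated main server.
    if (env('SERVER_ROLE') === 'main') {
        $schedule->command('reports:send')->dailyAt('02:00');
    }
}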
Here's what I've done when I ran into the same problems with load balancing:

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Redis;

abstract class MutexCommand extends Command
{
    private $hash = null;

    public function cleanup()
    {
        if (is_string($this->hash)) {
            Redis::del($this->hash);
            $this->hash = null;
        }
    }

    abstract protected function generateHash();

    abstract protected function handleInternal();

    final public function handle()
    {
        register_shutdown_function([$this, "cleanup"]);
        try {
            $this->hash = $this->generateHash();
            // Set a value if it does not exist, atomically; fails if it exists.
            // Essentially setnx is the mechanism to acquire the lock.
            if (!Redis::setnx($this->hash, true)) {
                $this->hash = null; // Prevent the lock from being cleaned up
                throw new \Exception("Already running");
            }
            $this->handleInternal();
        } finally {
            $this->cleanup();
        }
    }
}
Then you can write your commands:
class ThisShouldNotOverlap extends MutexCommand {
    public function generateHash() {
        return "Unique key for mutex; you can just use the class name if you want by doing return static::class";
    }

    public function handleInternal() { /* do stuff */ }
}
Then whenever you try to run the same command on multiple instances one would successfully acquire the "lock" and the others should fail.
Of course this assumes that you are using a non-clustered redis cache.
If you are not using Redis, there are probably similar locking mechanisms you can implement in other caches; if you are using a clustered Redis, you may need to use the Redlock locking mechanism.
Essentially no, there's no natural way in Laravel to know whether another Laravel app has the same job on the job dispatcher.
We have some options to find a solution:
Create an intermediate app that manages the jobs from the other apps.
Allow only one app to dispatch jobs.
Use worker queues; there are some packages for this. I would recommend using Laravel 5 with WebSockets and queueing asynchronously.
First of all, the Laravel scheduler isn't designed to work in a clustered environment; it was never intended to be that way.
I would suggest having a dedicated cron instance which manages your Laravel scheduler jobs; the standard entry is shown below.
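For reference, the standard scheduler cron entry from the Laravel docs, which would live only on that dedicated instance (the project path is a placeholder):

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1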