I've got a Laravel 5.7 app consuming an SQS queue, populated by an external app to trigger some basic integration tasks.
I'm seeing this error in my Laravel logs:
Undefined index: job at (proj-dir)/vendor/laravel/framework/src/Illuminate/Queue/Jobs/Job.php:234
I can see the Job class assumes a "job" index is present in the payload - but that index can't be assumed to be set unless reading from and writing to the SQS queue are handled exclusively by Laravel.
/**
 * Get the name of the queued job class.
 *
 * @return string
 */
public function getName()
{
    return $this->payload()['job'];
}
I'm surprised this hasn't been reported all over the place. Perhaps I'm using it differently than most.
I'm not sure if I need to ask the other provider to specify a job name in the payload, or modify the Laravel core to not assume this is required.
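For what it's worth, here is a sketch of a payload shape that would satisfy getName(), if the external producer can be changed; the handler class name is a hypothetical example, not something Laravel ships:

// Hypothetical message body; Laravel resolves the class before the "@" and
// calls that method with the job instance and the 'data' array.
$payload = json_encode([
    'job'  => 'App\\Handlers\\ExternalTaskHandler@handle',
    'data' => ['task_id' => 123],
]);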
Related
I have two Laravel projects running on the same server.
The first calls the second's API over HTTP, and the second then pushes a job to notify some users.
As both projects live on the same server, can't I make the first project push the job directly to the second one's Redis job queue?
I never tried this approach, but it should be possible.
You didn't specify which queue connections your projects are using, but let's assume they use two different connections, for example two different Redis servers.
In your first Laravel project's config/queue.php, add a new connection to connections that points to the queue connection of the second Laravel project. Let's name it project-2-connection.
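A sketch of what that could look like, assuming the second project's queue lives on its own Redis connection (names and values here are assumptions):

// config/queue.php in project 1
'connections' => [
    // ... existing connections ...
    'project-2-connection' => [
        'driver' => 'redis',
        'connection' => 'project-2', // a Redis connection you define in config/database.php
        'queue' => 'default',
        'retry_after' => 90,
    ],
],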
Then you can use dispatching to a particular connection:
ExampleJob::dispatch($data)->onConnection('project-2-connection');
It's important to make sure that the same job class ExampleJob exists in both projects.
To make your life easier, you should pass $data as a simple array and avoid SerializesModels. If you pass a model from project 1 that doesn't exist in project 2, your job will fail with a ModelNotFoundException. You could use models, but then you would need to have the same model in both projects.
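A minimal sketch of such a shared job (the class body is an assumption; it must be identical in both projects):

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class ExampleJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $data;

    public function __construct(array $data)
    {
        // Plain array only - no SerializesModels, so nothing to re-fetch
        // on the consuming side.
        $this->data = $data;
    }

    public function handle()
    {
        // Notify users using $this->data...
    }
}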
Set up a queue management server, and it can receive or route your jobs into queues from multiple servers. A simple example which might help is below:
namespace App\Http\Controllers;
use App\Http\Controllers\Controller;
use App\Jobs\ProcessPodcast;
use App\Models\Podcast;
use Illuminate\Http\Request;
class PodcastController extends Controller
{
    /**
     * Store a new podcast.
     *
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function store(Request $request)
    {
        // Create the podcast...
        $podcast = Podcast::create(/* ... */);

        // Send the job to the "processing" queue...
        ProcessPodcast::dispatch($podcast)->onQueue('processing');
    }
}
An application that I'm making will allow users to set up automatic email campaigns to email their list of users (up to x per day).
I need a way of making sure this is throttled so that too many aren't sent within a given time window. Right now I'm trying to work within the confines of a free Mailtrap plan, but even on production using Sendgrid, I want a sensible throttle.
So say a user has set their automatic time to 9am and there are 30 users eligible to receive requests at that date and time. Every review_request gets a record in the DB. Upon model creation, an event listener is triggered which then dispatches a job.
This is the handle method of the job that is dispatched:
/**
 * Execute the job.
 *
 * @return void
 */
public function handle()
{
    Redis::throttle('request-' . $this->reviewRequest->id)
        ->block(0)->allow(1)->every(5)
        ->then(function () {
            // Lock obtained...
            $message = new ReviewRequestMailer($this->location, $this->reviewRequest, $this->type);

            Mail::to($this->customer->email)->send($message);
        }, function () {
            // Could not obtain lock...
            return $this->release(5);
        });
}
The above is taken from https://laravel.com/docs/8.x/queues#job-middleware:
"For example, consider the following handle method which leverages Laravel's Redis rate limiting features to allow only one job to process every five seconds:"
I am using Horizon to view the jobs. When I run my command to send emails (about 25 requests to be sent), all jobs seem to process instantly, not one every 5 seconds as I would expect.
The exception for the failed jobs is:
Swift_TransportException: Expected response code 354 but got code "550", with message "550 5.7.0 Requested action not taken: too many emails per second
Why does the above Redis throttle not process a single job every 5 seconds? And how can I achieve this?
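One likely cause, judging from the snippet alone: the throttle key includes $this->reviewRequest->id, so every job gets its own limiter and no job ever has to wait for another. Sharing a single key makes all of the mail jobs compete for the same one-per-five-seconds slot. A sketch, where the shared key name is an assumption:

// One shared limiter across all review-request mail jobs.
Redis::throttle('review-request-mail')
    ->block(0)->allow(1)->every(5)
    ->then(function () {
        // Lock obtained - at most one job sends per 5 seconds.
        Mail::to($this->customer->email)->send(
            new ReviewRequestMailer($this->location, $this->reviewRequest, $this->type)
        );
    }, function () {
        // Could not obtain lock - release back onto the queue, retry in 5 seconds.
        return $this->release(5);
    });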
I'm using Laravel Jobs to pull data from the Stripe API in a paginated way. Basically, each job gets a "brand id" (a user can have multiple brands per account) and a "start after" parameter. It uses that to know which stripe token to use and where to start in the paginated calls (this job call itself if more stripe responses are available in the pagination). This runs fine when the job is started once.
But there is a use case where a user could add Stripe keys to multiple brands in a short time, and that job class could get called multiple times concurrently with different parameters. When this happens, whichever process started last overwrites the others, because the parameters are overwritten with those of the last call. So if I start the Stripe job with brand_id = 1, then a job with brand_id = 2, then brand_id = 3, then 3 overwrites the other two after one cycle and only 3 gets passed for all future calls.
How do I keep this from happening?
I've tried static vars; I've tried protected, private, and public vars. I thought I might be able to solve it with dynamically created queues for each brand, but this seems like a huge headache.
class ThisJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    protected $brand_id;
    protected $start_after;

    public function __construct($brand_id, $start_after = null)
    {
        $this->brand_id = $brand_id;
        $this->start_after = $start_after;
    }

    public function handle()
    {
        // Do Stripe calls with $this->brand_id & $this->start_after...
        if ($response->has_more) {
            // Call the next job with the new "start_after".
            dispatch(new ThisJob($this->brand_id, $new_start_after));
        }
    }
}
According to the Laravel documentation:
if you dispatch a job without explicitly defining which queue it
should be dispatched to, the job will be placed on the queue that is
defined in the queue attribute of the connection configuration.
// This job is sent to the default queue...
dispatch(new Job);
// This job is sent to the "emails" queue...
dispatch((new Job)->onQueue('emails'));
However, pushing jobs to multiple queues with unique names can be especially useful for your use case.
The queue name may be any string that uniquely identifies the queue itself. For example, you may wish to construct the queue name based on uniqid() and $brand_id, e.g.:
dispatch((new ThisJob($this->brand_id, $new_start_after))->onQueue(uniqid() . '_' . $this->brand_id));
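One caveat worth noting: a worker only processes the queues it is told to watch, so dynamically generated queue names need workers started with a matching option, e.g. php artisan queue:work --queue=<generated-name>. That is worth weighing before generating queue names with uniqid().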
I'm trying to figure something out. I built a logger that keeps track of certain events happening. These happen fairly often (i.e. 1-500 times a minute).
To optimize this properly, I'm storing to Redis, and then I have a task that grabs the queue object from Redis, clears the cache key, and inserts each individual entry into the DB.
I have the enqueue happening in the destructor when my logger is done observing the data.
For obvious reasons, I don't want a DB write happening every time, so to speed this up I write to Redis and then flush to the DB on a task.
The issue is that my queue implementation is as follows:
fetch object with key xyz from redis
append new entry to object
store object with key xyz in redis
This is inefficient; I would like to be able to just enqueue straight into Redis. Redis has a list type built in which I could use, but the Laravel Redis driver doesn't support it. I tried to figure out a way to send raw commands to Redis from Laravel, but I can't seem to make it work.
I was thinking of just storing keys in a tag in Redis, but I quickly discovered Laravel's implementation of tags is not 'proper' and will not allow fetching tagged items without a key, so I can't use tags as a queue with each key an object in it.
If anyone has any idea how I could either talk to Redis directly and make use of lists, or if there's something I missed, it would really help.
EDIT
While the way below does work, there is a more proper way of doing it using the Redis facade, as a reply mentioned; more on it here: Documentation
Okay, if anyone runs into this: I did not see it documented properly anywhere. You need to do the following:
-> Get an instance of Redis through the Cache facade Laravel provides.
$redis = Cache::getRedis();
-> Call Redis functions through it.
$redis->rpush('test.key', 'data');
I believe you'll want predis as your driver.
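If it isn't already selected, the client can be set in config/database.php; a sketch (only the client key shown):

// config/database.php
'redis' => [
    'client' => 'predis',
    // ... connection definitions ...
],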
If you're building a log driver, you'll want these 2 functions implemented:
/**
 * Push a log entry onto the stack.
 *
 * @param mixed $data Entry to enqueue.
 *
 * @return int Resulting list length.
 */
public function cacheEnqueue($data) : int
{
    $size = $this->_redis->lpush($this->getCacheKey("c_stack_queue"), $data);
    if ($size > 2000)
    {
        // Prevent runaway logs by trimming the list.
        $this->_redis->ltrim($this->getCacheKey("c_stack_queue"), 0, 2000);
    }
    return $size;
}
/**
 * Fetch items from the stack. Multi-thread safe.
 *
 * @param int $number Fetch last x items.
 *
 * @return array
 */
public function cachePopMulti(int $number) : array
{
    $data = $this->_redis->lrange($this->getCacheKey("c_stack_queue"), -$number, -1);
    $this->_redis->ltrim($this->getCacheKey("c_stack_queue"), 0, -1 * ($number + 1));

    return $data;
}
Of course, write your own key generator getCacheKey().
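A minimal sketch of one, where the prefix is an assumption:

// Hypothetical key generator; the "logger:" prefix is made up for the example.
protected function getCacheKey(string $suffix) : string
{
    return 'logger:' . $suffix;
}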
What happens if a Laravel queued job is passed an Eloquent model as input, but the model is deleted before the job gets run in the queue?
For example, I am building an eCommerce site with Laravel 5.2 where a customer can enter addresses and payment methods. A payment method belongs to an address. But if a customer tries to delete an address, rather than cascading down and deleting any payment methods that are associated with it, I soft delete the address by marking it as disabled. That way, the payment method can still be used until the customer updates the billing address associated with it.
However, if the payment method is deleted and it references an address that has been soft-deleted, I want to do some garbage collection and delete the address from the database. This doesn't need to happen synchronously, so I wrote a simple queueable job to accomplish this. The handle method looks like this:
public function handle(PaymentMethodRepository $paymentMethodRepository, AddressRepository $addressRepository)
{
    $billingAddress = $paymentMethodRepository->address($this->paymentMethod);

    if ( ! $billingAddress->enabled) {
        $addressRepository->delete($billingAddress);
    }
}
I dispatch this job in the destroy method of the PaymentMethodsController. However, if the payment method passed to the job is deleted from the database before the job gets executed in the queue, will the job fail?
I'm still developing the site so I don't have a server to deploy and test out what happens. I know that the model gets serialized to be put in the queue, but I wonder if an issue would occur when the model is restored to execute the job.
Yes, the job will fail if the "serialized model" is deleted before the job is executed. The model is not really serialized - the job stores the model class and model identifier and fetches the model before execution.
To get around this, you could store the primary key of the model in the job and then when executing the job, check to see if the record exists:
class DeleteAddressJob extends Job implements ShouldQueue
{
    private $addressId;

    public function __construct(int $addressId)
    {
        $this->addressId = $addressId;
    }

    public function handle(AddressRepository $addressRepository)
    {
        $address = $addressRepository->find($this->addressId);

        if (is_null($address)) {
            // Address doesn't exist. Complete the job...
            return;
        }

        // Delete the address...
    }
}
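The dispatch site then passes only the primary key; a sketch using the repository from the question (variable names assumed):

// In PaymentMethodsController@destroy, look up the address before deleting
// the payment method, then queue the garbage collection job with just the id.
$billingAddress = $paymentMethodRepository->address($paymentMethod);
dispatch(new DeleteAddressJob($billingAddress->id));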