Laravel queue worker with supervisor

I have a script that runs for about 5-8 minutes and produces an xls file at the end. On localhost it works fine, but on the server it executes 3 times, and I cannot understand why.
Supervisor is running 8 queue worker processes.
The queue connection is set to redis.
Laravel 5.7.
Has anyone had the same problem and solved it?
.env
BROADCAST_DRIVER=redis
QUEUE_CONNECTION=redis
config/queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 90,
    'block_for' => null,
],
Update:
Changing retry_after => 900 doesn't help.
The worker starts with this command:
php artisan queue:work redis --timeout=900 --sleep=3 --tries=3
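For what it's worth, the Laravel docs say the --timeout value should always be at least several seconds shorter than retry_after; with --timeout=900 and retry_after=900 they are equal, so a still-running job can be released back to the queue and picked up by another of the 8 workers, and --tries=3 would then explain exactly 3 executions. A minimal sketch of a consistent pair of values (920 is illustrative, not a recommendation):

'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    // must exceed both the worker's --timeout and the job's real runtime
    'retry_after' => 920,
    'block_for' => null,
],

php artisan queue:work redis --timeout=900 --sleep=3 --tries=3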

Related

Laravel Horizon queue keeps pausing for one minute

TL;DR: Laravel Horizon queue workers go to sleep for 60 seconds after each job they process.
I have a big backlog in my Laravel Horizon queue. There are a lot of workers (maxProcesses set to 30), but when I monitor the log file, the output suggests that it processes exactly 30 jobs over the course of 2-3 seconds and then pauses for a full minute (more or less exactly 60 seconds).
Any ideas why this could be happening? Am I hitting some resource limit that is causing Horizon or Supervisor to hit the brakes?
Here's the relevant section from my horizon.php config file:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'false',
            'minProcesses' => 3,
            'maxProcesses' => 30,
            'timeout' => 1800,
            'tries' => 3,
        ],
I have the exact same configuration in my local environment, and my throughput locally is ~600 jobs/minute. In production it hovers right around ~30 jobs/minute.
Update per @Qumber's request
For the most part these aren't actually jobs. They're events being handled by one or more listeners, most of which are super simple. For example:
public function handle(TransactionDeleted $event)
{
    TransactionFile::where("transaction_id", $event->subject->id)->delete();
}
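Worth noting: an event that is configured to broadcast also queues an implicit broadcast job on top of its queued listeners, which becomes relevant in the update further down. A minimal sketch of such an event (the channel name is hypothetical):

use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Queue\SerializesModels;

class TransactionDeleted implements ShouldBroadcast
{
    use SerializesModels;

    public $subject;

    public function __construct($subject)
    {
        $this->subject = $subject;
    }

    // Implementing ShouldBroadcast makes Laravel push a separate
    // broadcast job to the queue whenever this event fires.
    public function broadcastOn()
    {
        return new PrivateChannel('transactions'); // hypothetical channel
    }
}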
Here's some queue config:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 1900,
    'block_for' => null,
],
Update per @sykez's request
Here's the supervisor config in local:
[program:laravelqueue]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/artisan queue:work redis --once --sleep=1 --tries=1
autostart=true
autorestart=true
user=adam
numprocs=3
redirect_stderr=true
stdout_logfile=/path/to/worker.log
stopwaitsecs=3600
Here's the supervisor config in production:
[program:daemon-XXXXXX]
directory=/home/forge/SITE_URL/current/
command=php artisan horizon
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
user=forge
redirect_stderr=true
stdout_logfile=/home/forge/.forge/daemon-XXXXXX.log
stopwaitsecs=3600
The local supervisor is running the worker directly with the --once flag, which should boot the entire code base for each job rather than running as a daemon. This, of course, should make it slower, not 20 times faster...
Another update
Thanks to some help from one of the core Laravel devs, we were able to determine that all of the "hanging" jobs were broadcast jobs, from events that were configured to broadcast after firing. We use Pusher as our broadcast engine. When Pusher is disabled (as it is in our local environment), then the jobs finish immediately with no pause.
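As a practical follow-up: locally, BROADCAST_DRIVER=log disables real broadcasting, which matches the observation above. In production, one way to keep a slow broadcaster from stalling workers for its full default timeout is to cap the Pusher HTTP timeout in config/broadcasting.php. A sketch, assuming a pusher-php-server version that honors the timeout option (the value is illustrative):

'pusher' => [
    'driver' => 'pusher',
    'key' => env('PUSHER_APP_KEY'),
    'secret' => env('PUSHER_APP_SECRET'),
    'app_id' => env('PUSHER_APP_ID'),
    'options' => [
        'cluster' => env('PUSHER_APP_CLUSTER'),
        'useTLS' => true,
        'timeout' => 5, // assumption: fail fast instead of blocking the worker
    ],
],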

Laravel 5.8 background job is not in the background

I have a library that imports lots of images, and I am trying to run the import with Laravel background jobs. For queued jobs, I am following the Laravel documentation.
First (create table):
php artisan queue:table
php artisan migrate
Then the Redis configuration in the .env file:
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=43216
Create a job:
php artisan make:job CarsJob
CarsJob:
public function handle()
{
    $cars = new CarsLibrary();
    $cars->importAll();
}
Dispatching the job in a controller action:
First, I tried:
$carsJob = (new CarsJob())->onQueue('import_cars');
$this->dispatch($carsJob);
Second, I tried:
$carsJob = new CarsJob();
$this->dispatch($carsJob);
I have enabled Redis on my hosting (it is shared hosting).
When I access the URL, I can see that this job does not run in the background, because the request takes more than a minute to finish.
EDIT:
The .env file is above; this is config/queue.php:
'connections' => [
    'sync' => [
        'driver' => 'redis',
    ],
    // ... other drivers like beanstalkd
    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90,
    ],
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90,
        'block_for' => null,
    ],
],
I have no REDIS_QUEUE in the .env file.
It seems that you have not updated your queue connection in config/queue.php from sync to redis (or the environment variable QUEUE_CONNECTION). The sync driver will execute jobs immediately without pushing them on a queue.
By the way, you don't need the queue database table if you use the redis queue driver.
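A minimal sketch of the fix described above, for Laravel 5.8, which reads the default connection from QUEUE_CONNECTION:

# .env -- point the default connection at redis instead of sync
QUEUE_CONNECTION=redis

A worker then has to run for the jobs to actually be processed; since the first dispatch attempt put the job on the import_cars queue, the worker must listen on that queue:

php artisan queue:work redis --queue=import_cars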

Laravel queue priority order not working

I have studied Laravel queues from the documentation: https://laravel.com/docs/5.6/queues.
My project had no particular queues; only the 'default' queue was running. Now I have two queues, jobA and jobB, and I want to give jobB higher priority than jobA.
To assign the jobs to their queues I used:
->onQueue('jobA');
->onQueue('jobB');
In the .env file I added:
QUEUE_DRIVER=sync
And this is the code in queue.php:
return [
    'default' => env('QUEUE_DRIVER', 'sync'),
    'connections' => [
        'sync' => [
            'driver' => 'sync',
        ],
        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'default',
            'retry_after' => 90,
        ],
    ],
    'failed' => [
        'database' => env('DB_CONNECTION', 'mysql'),
        'table' => 'failed_jobs',
    ],
];
After making these changes, I ran these commands on the server:
php artisan queue:work --queue=jobB,jobA
php artisan queue:restart
The first command runs all pending jobs from the specified queues (in the jobs table).
But how do I verify that the newly created jobs are processed in the priority order I specified? Also, when I check the jobs table, the queue name still appears as default.
Please let me know what I am doing wrong.
Thanks
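For reference, a minimal sketch of how queue priority is normally exercised with the database driver. Note that the sync driver configured in the .env above runs jobs inline at dispatch time, bypassing the jobs table entirely (JobA and JobB stand for the question's job classes):

# .env -- priority needs a real backend; sync executes jobs immediately
QUEUE_DRIVER=database

// dispatching onto named queues
JobA::dispatch()->onQueue('jobA');
JobB::dispatch()->onQueue('jobB');

# the worker drains everything on jobB before taking anything from jobA
php artisan queue:work --queue=jobB,jobA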

Use multiple queue connections in Laravel

Using Laravel 5.5, we need to use both Redis and SQS queues: Redis for our internal messaging and SQS for messages coming from a 3rd party.
config/queue.php holds the various connection information. The first key is the default connection, and that default is the one used by the queue:work artisan command.
'default' => 'redis',
'connections' => [
    'sqs' => [
        'driver' => 'sqs',
        'key' => env('ACCESS_KEY_ID', ''),
        'secret' => env('SECRET_ACCESS_KEY', ''),
        'prefix' => 'https://sqs.us-west-1.amazonaws.com/account-id/',
        'queue' => 'my-sqs-que',
        'region' => 'us-west-1',
    ],
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUE', 'default'),
        'retry_after' => 90,
    ],
],
The question is: how can we use a different queue connection for queue:work?
If --queue=my-sqs-que is supplied with the default connection set to redis, Laravel looks under redis and obviously does not find my-sqs-que.
Setting the default to sqs would stop our internal messages from being processed.
You can specify the connection when running queue:work, see Specifying the Connection and Queue:
You may also specify which queue connection the worker should utilize. The connection name passed to the work command should correspond to one of the connections defined in your config/queue.php configuration file:
php artisan queue:work redis
You will need to set up the corresponding connections per queue as well.
However, any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
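A sketch of what that looks like here, using the queue names from the config above (ProcessInternalMessage is a hypothetical job class):

# one worker per connection, e.g. two supervisor programs
php artisan queue:work sqs --queue=my-sqs-que
php artisan queue:work redis --queue=default

// a job can also pin its connection explicitly at dispatch time
dispatch((new ProcessInternalMessage($payload))->onConnection('redis'));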

Laravel queue with Supervisor running twice - first time without logging

So, I configured my QUEUE_DRIVER as redis.
The queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'expire' => 90,
    'retry_after' => 550,
],
Supervisor is configured like this:
command=php /home/xxxxx/domains/xxxxx/public_html/artisan queue:work redis --sleep=3 --tries=5 --timeout=500
The job is being dispatched like this:
$job = (new CreateOrder($orderHeaderToPush, $order->order_id))
    ->delay(Carbon::now()->addMinutes(1));
dispatch($job);
I need the --tries argument to be bigger because there are multiple users doing this operation at the same time.
PROBLEM
Inside the job I have a Log::debug() call. After 1 minute the job is dispatched and the order comes in, but no debug logging is present. After a long time (the 500s), the job is dispatched again, and this time the Log::debug() output is written.
What exactly is happening? The job has not failed. How can it run the other methods without ever reaching the Log::debug() call?
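One way to see which delivery is which: a diagnostic sketch using the attempts() helper from the InteractsWithQueue trait (the orderId property is hypothetical):

public function handle()
{
    // attempts() reports 1 on the first delivery and increments when
    // the job is re-delivered, e.g. after retry_after expires.
    Log::debug('CreateOrder starting', [
        'order_id' => $this->orderId, // hypothetical property
        'attempt'  => $this->attempts(),
    ]);

    // ... push the order ...
}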
