Multiple Laravel Horizon workers per supervisor using supervisord

I'm just moving our Laravel v8 queue driver from db to redis, using Horizon for management.
No matter what I configured in config/horizon.php, I only got either a single worker process across all my queues or one worker per queue, with no auto-balancing.
I modified the supervisor scheduler.conf to run 2 (or more) processes:
[program:horizon]
process_name=%(program_name)s_%(process_num)02d
command=php /www/E3/artisan horizon
autostart=true
autorestart=true
user=web
numprocs=2
redirect_stderr=true
stdout_logfile=/var/log/supervisor/horizon.log
stopwaitsecs=3600
but this seems to spawn multiple supervisors (in Horizon parlance) with one worker each, rather than multiple workers per supervisor.
I think Horizon is configured correctly:
'defaults' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['high', 'updatestock', 'priceapi', 'pubsub', 'klaviyo', 'default', 'low'],
        'balance' => 'auto',
        'processes' => 2,
        'minProcesses' => 2,
        'maxProcesses' => 10,
        'maxTime' => 3600, // how long the process can run before restarting (to avoid memory leaks)
        'maxJobs' => 0,
        'balanceMaxShift' => 1,
        'balanceCooldown' => 3,
        'memory' => 128,
        'tries' => 3,
        'timeout' => 60,
        'nice' => 0,
    ],
],
'environments' => [
    'staging' => [
        'supervisor-1' => [
            'maxProcesses' => 3,
        ],
    ],
],
Also, at some point while attempting various changes, I stopped getting any data in the pending/completed views - the JSON responses show counts but not the job data. For instance, /horizon/api/jobs/completed?starting_at=-1&limit=50 returns:
{
    "jobs": [],
    "total": 13157
}
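A possible explanation for the empty jobs list, assuming a stock Laravel 8 / Horizon 5 horizon.php: Horizon periodically trims stored job payloads according to the 'trim' settings (in minutes), while the aggregate counters persist, so a non-zero total can outlive the listed jobs. The default-style block looks like this:

'trim' => [
    'recent' => 60,
    'pending' => 60,
    'completed' => 60,
    'recent_failed' => 10080,
    'failed' => 10080,
    'monitored' => 10080,
],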

I think in this case you don't need to worry about supervisor.conf, because each php artisan horizon process spawns and auto-scales its own workers based on horizon.php. Running numprocs=2 therefore starts two independent Horizon masters with one pool each; keep numprocs=1 and let Horizon handle the balancing.
Try changing maxProcesses and test it out.
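A sketch of what that looks like in horizon.php (queue names and bounds illustrative; with 'balance' => 'auto', Horizon moves workers between queues and scales the pool between minProcesses and maxProcesses on its own):

'defaults' => [
    'supervisor-1' => [
        'connection' => 'redis',
        'queue' => ['high', 'default', 'low'],
        'balance' => 'auto',
        // Horizon scales this supervisor's total worker count
        // between these two bounds on its own:
        'minProcesses' => 2,
        'maxProcesses' => 10,
    ],
],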

Related

Laravel Horizon queue keeps pausing for one minute

TL;DR Laravel Horizon queue workers go to sleep for 60 seconds after each job they process
I have a big backlog in my Laravel Horizon queue. There are a lot of workers (maxProcesses set to 30), but when I monitor the log file, the output suggests that it is processing exactly 30 jobs over the course of 2-3 seconds, and then it pauses for a full minute (more or less exactly 60 seconds).
Any ideas why this could be happening? Am I hitting some resource limit that is causing Horizon or Supervisor to hit the brakes?
Here's the relevant section from my horizon.php config file:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'false',
            'minProcesses' => 3,
            'maxProcesses' => 30,
            'timeout' => 1800,
            'tries' => 3
        ],
I have the exact same configuration in my local environment, and my throughput locally is ~600 jobs/minute. In production it hovers right around ~30 jobs/minute.
Update per @Qumber's request
For the most part these aren't actually jobs. They're events being handled by one or more listeners, most of which are super simple. For example:
public function handle(TransactionDeleted $event)
{
    TransactionFile::where("transaction_id", $event->subject->id)->delete();
}
Here's some queue config:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 1900,
    'block_for' => null,
],
Update per @sykez's request
Here's the supervisor config in local:
[program:laravelqueue]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/artisan queue:work redis --once --sleep=1 --tries=1
autostart=true
autorestart=true
user=adam
numprocs=3
redirect_stderr=true
stdout_logfile=/path/to/worker.log
stopwaitsecs=3600
Here's the supervisor config in production:
[program:daemon-XXXXXX]
directory=/home/forge/SITE_URL/current/
command=php artisan horizon
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
user=forge
redirect_stderr=true
stdout_logfile=/home/forge/.forge/daemon-XXXXXX.log
stopwaitsecs=3600
The local supervisor is running the queue directly, with the "once" flag, which should load the entire code base for each job rather than running as a daemon. This, of course, should make it slower, not 20 times faster...
Another update
Thanks to some help from one of the core Laravel devs, we were able to determine that all of the "hanging" jobs were broadcast jobs, from events that were configured to broadcast after firing. We use Pusher as our broadcast engine. When Pusher is disabled (as it is in our local environment), then the jobs finish immediately with no pause.
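If broadcast jobs hang when the broadcaster is unreachable, one hedged workaround is Laravel's broadcastWhen hook, which skips queuing the broadcast entirely when it returns false. A minimal sketch, reusing the TransactionDeleted name from the snippet above (the config check is illustrative):

use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class TransactionDeleted implements ShouldBroadcast
{
    use InteractsWithSockets;

    // Channels omitted for brevity - illustrative only.
    public function broadcastOn()
    {
        return [];
    }

    // Laravel only dispatches the broadcast job when this returns true,
    // so a disabled Pusher setup can be bypassed without touching listeners.
    public function broadcastWhen()
    {
        return config('broadcasting.default') === 'pusher';
    }
}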

Laravel queue worker with supervisor

I have a script which runs for about 5-8 minutes and produces an xls file at the end. On localhost it works fine, but on the server it executes 3 times, and I cannot understand why.
Supervisor runs 8 queue worker processes.
The queue connection is set to redis.
Laravel 5.7.
Maybe someone has had the same problem and solved it?
.env
BROADCAST_DRIVER=redis
QUEUE_CONNECTION=redis
queue.php
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 90,
    'block_for' => null,
],
Update: changing retry_after => 900 doesn't help.
The worker starts with this command:
artisan queue:work redis --timeout=900 --sleep=3 --tries=3
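A hedged observation on those numbers: Laravel expects retry_after to exceed --timeout (and the job's real runtime) by a safe margin, otherwise a still-running job is released and handed to another worker. With a 5-8 minute job, the original retry_after => 90 would release it while still running, and --tries=3 allows exactly the three executions observed; retry_after => 900 equal to --timeout=900 still leaves no margin, and a stale config cache could keep the old value in effect. A sketch of values consistent with that rule (numbers illustrative):

'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    // Must exceed both --timeout and the longest real job runtime,
    // so a slow job is never handed to a second worker mid-run.
    'retry_after' => 960,
    'block_for' => null,
],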

Laravel horizon: items no longer queued for no obvious reason

I've been running an app on a Laravel Forge provisioned server.
We have some email jobs that are queued, and we use Horizon to manage our queues. This has always worked without any issues, but at some point we broke something, and I can't fix it.
This is our setup.
.env
APP_ENV=dev
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
QUEUE_DRIVER=redis
config/queue.php
return [
    'default' => env('QUEUE_DRIVER', 'sync'),
    'connections' => [
        'sync' => [
            'driver' => 'sync',
        ],
        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => 'medium',
            'retry_after' => 90,
        ],
    ],
];
config/horizon.php
return [
'use' => 'default',
'waits' => [
'redis:default' => 60,
],
'environments' => [
'dev' => [
'high-prio' => [
'connection' => 'redis',
'queue' => ['high'],
'balance' => 'simple',
'processes' => 10,
'tries' => 5,
],
'default-prio' => [
'connection' => 'redis',
'queue' => ['medium', 'low'],
'balance' => 'auto',
'processes' => 10,
'tries' => 3,
],
],
],
];
I checked the redis-cli info result to make sure the port was right:
forge#denja-dev:~$ redis-cli info
# Server
redis_version:3.2.8
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:11aa79fd2425bed9
redis_mode:standalone
os:Linux 4.4.0-142-generic x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:5.4.0
process_id:1191
run_id:fcc57fa2c17440ab964538c2d986dc330d9e1223
tcp_port:6379
uptime_in_seconds:3045
uptime_in_days:0
hz:10
lru_clock:13667343
executable:/usr/bin/redis-server
config_file:/etc/redis/redis.conf
When I visit /horizon/dashboard, all is running fine.
I was playing a bit with adding some metadata to the payload for queued jobs, at which time the issues began. I then just removed that code again, and basically went back to the previous code base. There is no difference anymore, so I'm now suspecting that I have another issue.
However - I'm not getting ANY exception thrown when I add something to the queue. Bugsnag has no new entries, and my process just continues without any error.
Any idea what else I can check to find the actual issue? Is there a problem with the config? I'm a bit lost to be honest, especially since I have no information to work with :(
I also checked using tinker whether I could make a connection to redis, and that too works fine without an exception:
$ php artisan tinker
Psy Shell v0.9.9 (PHP 7.2.0RC3 — cli) by Justin Hileman
>>> Illuminate\Support\Facades\Redis::connection('default')
=> Illuminate\Redis\Connections\PredisConnection {#3369}
The cause of this issue was that the notification I was testing with used the Queueable trait but did not implement the ShouldQueue interface. The latter is required for Laravel to queue these notifications automatically.
We noticed it when we started testing with other notifications, which went through fine.
The only question we still had is why the email did not go out anyway: without ShouldQueue we would have expected it to be sent synchronously, which for some reason it was not.
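A minimal sketch of the fix described above (class name hypothetical):

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Notifications\Notification;

// The Queueable trait alone only adds helpers (onQueue, delay, ...);
// implementing ShouldQueue is what makes Laravel queue the notification.
class InvoiceReady extends Notification implements ShouldQueue
{
    use Queueable;

    // toMail() etc. omitted for brevity.
    public function via($notifiable)
    {
        return ['mail'];
    }
}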

Laravel priority order of queue not working

I have studied the Laravel queue docs here: https://laravel.com/docs/5.6/queues.
My project previously had no particular queues; only the 'default' queue was running. Now I have two queues, jobA and jobB, and I want jobB to have higher priority than jobA.
To attach the jobs with queues I have used:
->onQueue('jobA');
->onQueue('jobB');
In the .env file I have added:
QUEUE_DRIVER=sync
And in queue.php this is the code:
return [
    'default' => env('QUEUE_DRIVER', 'sync'),
    'connections' => [
        'sync' => [
            'driver' => 'sync',
        ],
        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'default',
            'retry_after' => 90,
        ],
    ],
    'failed' => [
        'database' => env('DB_CONNECTION', 'mysql'),
        'table' => 'failed_jobs',
    ],
];
And after making these changes, I run these commands on the server:
php artisan queue:work --queue=jobB,jobA
php artisan queue:restart
The first command runs all pending jobs from the specified queues (in the jobs table).
But how will I know that the newly created jobs are processed in the priority order I specified? Also, in the database (jobs table), the queue name still appears as default.
Please let me know what I am doing wrong.
Thanks
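A hedged note on the setup above: with QUEUE_DRIVER=sync, jobs execute immediately in the dispatching process and never reach the jobs table, so onQueue() has no observable effect and queue:work has nothing to prioritise. Priority only applies once a real driver, such as the database connection already defined in queue.php, is in use. A minimal sketch (job class names hypothetical):

// Assumes QUEUE_DRIVER=database in .env, matching the 'database'
// connection defined in queue.php above.
dispatch((new ProcessJobB)->onQueue('jobB'));
dispatch((new ProcessJobA)->onQueue('jobA'));

// A worker started with:
//   php artisan queue:work --queue=jobB,jobA
// drains everything on jobB before touching jobA.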

Laravel queue with Supervisor running twice - first time without logging

So, I configured my QUEUE_DRIVER with redis.
The queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'expire' => 90,
    'retry_after' => 550
],
Supervisor is configured like this:
command=php /home/xxxxx/domains/xxxxx/public_html/artisan queue:work redis --sleep=3 --tries=5 --timeout=500
The job is being dispatched like this:
$job = (new CreateOrder($orderHeaderToPush, $order->order_id))
    ->delay(Carbon::now()->addMinutes(1));

dispatch($job);
I need the --tries argument to be bigger because there are multiple users doing this operation at the same time.
PROBLEM
Inside the job I have a Log::debug call. After 1 minute the job is dispatched - the order comes in - but no debug logging is present. After a long time (the 500s) the job is dispatched again, this time logging via Log::debug.
What exactly is happening? The job has not failed. How can it run without reaching Log::debug while still executing its other methods?
