I am using SQS to upload my videos to an S3 bucket in the background. The queue works perfectly fine for small videos (~40 MB), but when I try to upload larger videos (say 70 MB and above) the queue worker gets killed.
Here's my queue worker's output:
vagrant@homestead:~/Laravel/video (master)*$ php artisan queue:work --tries=3
[2017-08-25 17:48:42] Processing: Laravel\Scout\Jobs\MakeSearchable
[2017-08-25 17:48:45] Processed: Laravel\Scout\Jobs\MakeSearchable
[2017-08-25 17:48:51] Processing: App\Jobs\VideoUploadJob
Killed
vagrant@homestead:~/Laravel/youtube (master)*$ php artisan queue:work --tries=3
[2017-08-25 17:50:33] Processing: App\Jobs\VideoUploadJob
Killed
vagrant@homestead:~/Laravel/video (master)*$
Where do I need to change the setting? Is it something on the Laravel side or on SQS?
Can anyone help me?
There are two likely causes: the worker is either running out of memory or exceeding the maximum execution time.
Try $ dmesg | grep php. This will show you more details about why the process was killed.
Increase max_execution_time and/or memory_limit in your php.ini file.
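For example, on the Laravel side you could also raise the limits on the worker process itself; the values below are purely illustrative and should be tuned to your largest video:
php artisan queue:work --tries=3 --timeout=300 --memory=512
and, in the php.ini used by the CLI (again, illustrative values):
memory_limit = 512M
max_execution_time = 300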
Our application provides a separate database for each of our users. I have set up an emailing job which users may dispatch to run in the background via a Laravel 5.3 queue. Some users succeed in invoking this facility (so it does work, same codebase) but some users fail.
The users who fail to generate the email are all characterised by the following error when trying to restart all user queues using sudo supervisorctl start all, e.g.:
shenstone-worker:shenstone-worker_02: ERROR (spawn error)
shenstone-worker:shenstone-worker_00: ERROR (spawn error)
shenstone-worker:shenstone-worker_01: ERROR (spawn error)
An example of a user whose email facility works:
meadowwood-worker:meadowwood-worker_02: started
meadowwood-worker:meadowwood-worker_00: started
meadowwood-worker:meadowwood-worker_01: started
The log of all attempted restarts has a load of these spawn errors at the beginning, followed by all the successful queue restarts at the end.
The worker config files for these two users are:
[program:shenstone-worker]
process_name=%(program_name)s_%(process_num)02d
directory=/var/www/solar3/current
environment=APP_ENV="shenstone"
command=php artisan queue:work --tries=1 --timeout=300
autostart=true
autorestart=true
user=root
numprocs=3
redirect_stderr=true
stdout_logfile=/var/www/solar3/storage/logs/shenstone-worker.log
and
[program:meadowwood-worker]
process_name=%(program_name)s_%(process_num)02d
directory=/var/www/solar3/current
environment=APP_ENV="meadowwood"
command=php artisan queue:work --tries=1 --timeout=300
autostart=true
autorestart=true
user=root
numprocs=3
redirect_stderr=true
stdout_logfile=/var/www/solar3/storage/logs/meadowwood-worker.log
As you can see, they are essentially identical. Yet shenstone does not restart its queues to pick up requests from its jobs table, while meadowwood does. No log files appear in storage.
So why do some of these queues restart successfully, and a load don't?
Looking at the Stack Overflow question Running multiple Laravel queue workers using Supervisor inspired me to run sudo supervisorctl status, and I can now see a more detailed explanation of my problem:
shenstone-worker:shenstone-worker_00 FATAL too many open files to spawn 'shenstone-worker_00'
shenstone-worker:shenstone-worker_01 FATAL too many open files to spawn 'shenstone-worker_01'
shenstone-worker:shenstone-worker_02 FATAL too many open files to spawn 'shenstone-worker_02'
As opposed to:
meadowwood-worker:meadowwood-worker_00 RUNNING pid 32459, uptime 0:51:52
meadowwood-worker:meadowwood-worker_01 RUNNING pid 32460, uptime 0:51:52
meadowwood-worker:meadowwood-worker_02 RUNNING pid 32457, uptime 0:51:52
But I still cannot see what I can do to resolve the issue.
If you haven't come up with any other ideas, increasing the open files limit on your server might help. Check this, for instance:
https://unix.stackexchange.com/questions/8945/how-can-i-increase-open-files-limit-for-all-processes
However, having that many files open might affect the performance of your system, so you should check why it is happening and prevent it if you can.
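If you do decide to raise the limit, one possible approach is to increase it both for Supervisor and system-wide; the file locations and values below are illustrative and depend on your distribution:
; /etc/supervisor/supervisord.conf, in the [supervisord] section
minfds=10240
# /etc/security/limits.conf
*    soft    nofile    10240
*    hard    nofile    10240
followed by restarting Supervisor (for example sudo service supervisor restart) so the new limits take effect.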
Thanks Armando.
Yes, you’re right: open files is the issue. I did a blitz on the server to increase these, but ran into problems with MySQL, so I backed out my config changes.
We host over 200 customers, each with their own .env. Each of these has its own workers.
I’ll revisit the problem sometime.
I'm running a Laravel project on Laradock. I have a Job class that handles a Notification class that sends mail. The Job uses Redis as the queue driver and everything is set up properly.
I have Supervisor all set up and working. Below is my .conf file that is executed by Supervisor when the Job runs:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --sleep=3 --tries=5 --daemon
autostart=true
autorestart=true
numprocs=8
user=root
redirect_stderr=true
stdout_logfile=/var/www/worker.log
This is set up in Laradock's php-worker/supervisord.d directory. I also have Laravel Horizon set up to monitor my Redis queues. When the Job is executed, Supervisor runs it, which is confirmed in the worker.log file defined in the above .conf file, as shown below:
[2019-10-14 12:18:27][21] Processing: App\Jobs\NewStaffAdded
[2019-10-14 12:18:30][21] Processing: App\Jobs\NewStaffAdded
[2019-10-14 12:18:31][21] Processing: App\Jobs\NewStaffAdded
[2019-10-14 12:18:33][21] Processing: App\Jobs\NewStaffAdded
[2019-10-14 12:18:35][21] Processing: App\Jobs\NewStaffAdded
When I visit my Horizon dashboard, I find that the Job failed. Clicking on the failed Job for more details, I get this exception:
Symfony\Component\Debug\Exception\FatalThrowableError: Call to undefined function Moontoast\Math\bcadd() in /var/www/vendor/moontoast/math/src/Moontoast/Math/BigNumber.php:506
Looking around for possible solutions to the above exception, I see suggestions that the cause is the absence of the Bcmath module for PHP, but I've run the following command and confirmed I have this module:
dpkg --list | grep -i bcmath
This shows I have the Bcmath module for my PHP version (7.3), so I don't know why I'm getting the above exception, which is preventing my queue from being executed.
The issue was down to not having the Bcmath module enabled in Laradock's .env file under the PHP_WORKER section. The module first has to be installed in your workspace, and then turned on under BOTH the PHP_FPM AND the PHP_WORKER sections. My mistake was that I had only set it to true under the PHP_FPM section, so I had to do the same under the PHP_WORKER section.
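For anyone hitting the same issue, the relevant pieces looked roughly like this; variable names can differ between Laradock versions, so treat it as a sketch. In Laradock's .env, enable Bcmath for both containers and rebuild them:
PHP_FPM_INSTALL_BCMATH=true
PHP_WORKER_INSTALL_BCMATH=true
docker-compose build php-fpm php-worker
docker-compose up -d php-fpm php-worker
You can then confirm the module is present inside the worker container (rather than only on the host) with:
docker-compose exec php-worker php -m | grep -i bcmath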
After about 60 seconds of processing the Excel file, the queue worker fails, and the job may never finish. Why does the queue keep failing?
Solved it by adding --memory=2048:
php artisan queue:work redis --queue=process --tries=0 --delay=0 --timeout=120 --sleep=5 --memory=2048
I installed my Laravel 5.6 application on a shared hosting service, but my hosting company is not happy with its CPU usage. The high CPU usage shows up when the queue worker is killed, no matter whether I kill the worker manually or via a cron job.
Can someone explain to me why php artisan queue:restart takes so much CPU time, and if possible, how I can reduce it?
Restart:
cd /home/xxxxxx/rdw_laravel/; /usr/local/bin/php72 artisan queue:restart >/dev/null 2>&1
Activate queue worker:
cd /home/xxxxxx/rdw_laravel/; /usr/local/bin/php72 artisan queue:work --daemon
You seem to have memory leaks, so read up on memory management for daemon queue workers.
Straight from the documentation on running the queue worker:
Daemon queue workers do not "reboot" the framework before processing each job. Therefore, you should free any heavy resources after each job completes. For example, if you are doing image manipulation with the GD library, you should free the memory with imagedestroy when you are done.
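As a minimal sketch of what that looks like inside a job's handle() method (the job class and the inputPath/outputPath properties here are hypothetical):
public function handle()
{
    // Hypothetical image-manipulation job: load the source image with GD...
    $image = imagecreatefromjpeg($this->inputPath);
    $thumb = imagescale($image, 800);
    imagejpeg($thumb, $this->outputPath);

    // ...and explicitly free both resources when done, so memory does not
    // accumulate across jobs in a long-running queue:work process.
    imagedestroy($thumb);
    imagedestroy($image);
}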
An alternative is to use queue:listen instead; the difference is that queue:work boots the framework once and runs forever, while queue:listen boots it before each job.
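For comparison, the two commands side by side (the connection name and options here are just illustrative):
php artisan queue:work redis --sleep=3    # boots the framework once and keeps it in memory
php artisan queue:listen redis --sleep=3  # boots the framework fresh before every job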
Note: queue:work and queue:work --daemon are equivalent, so you do not have to run cron with the --daemon flag.
Note: Why do you run :restart so often? I doubt that you update your code every day, so use :restart only when you update the code.
Related: What is the difference between queue:work --daemon and queue:listen
I am running a Laravel queue monitored by Supervisord:
php /home/path/to/artisan queue:listen --env=production --timeout=0 --sleep=5 --tries=3
However, if a job is failing it retries indefinitely: the attempts count in the jobs table shows 255, which is the maximum value for that MySQL field, but it has actually made thousands of attempts.
If the jobs table has attempts at 255 and --tries is set to 3, why is it continuing to run this job in the queue?
You should use artisan queue:work --daemon with Supervisord; queue:listen is meant for development (it always picks up new code without a restart). To understand the difference, you can read the source code of the two commands.