After processing the Excel file for about 60 seconds, the queue worker dies, and the job may never finish. Why does the queue keep crashing?
Solved it by adding --memory=2048:
php artisan queue:work redis --queue=process --tries=0 --delay=0 --timeout=120 --sleep=5 --memory=2048
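For background, queue:work exits on its own once PHP's memory usage crosses the --memory threshold (in megabytes), so a limit that is too low for the Excel job makes the worker die mid-job. A minimal sketch of the check the worker runs after each job (assuming $memoryLimit holds the flag's value):

// compare current usage in MB against the --memory flag,
// roughly what Laravel's Worker::memoryExceeded() does
if ((memory_get_usage(true) / 1024 / 1024) >= $memoryLimit) {
    // stop the worker; Supervisor (or similar) restarts it with a
    // fresh process. In recent Laravel versions exit code 12 signals
    // that the memory limit was hit.
    $this->stop(12);
}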
I am running a web application on Laravel 5.5. I have a requirement to run the jobs in a queue and then stop the queue. The queue cannot be allowed to stay running.
I am running the command below, but it just carries on endlessly.
php artisan queue:work --tries=3
If I use Supervisord, can I stop the queue from inside the Laravel application?
Any help is really appreciated.
From the documentation:
Processing All Queued Jobs & Then Exiting

The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully. This option can be useful when processing Laravel queues within a Docker container if you wish to shut down the container after the queue is empty:
Try php artisan queue:work --tries=3 --stop-when-empty
https://laravel.com/docs/8.x/queues#running-the-queue-worker
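Since the requirement is that the queue must not stay running, one way to wire this up (a sketch, assuming the standard Laravel scheduler is driven by cron; note that --stop-when-empty is not available in 5.5 itself, it arrived in a later release, hence the 8.x docs link) is to drain the queue periodically from App\Console\Kernel:

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // drain the queue once a minute, exit when it is empty, and never
    // start a second worker while the previous one is still running
    $schedule->command('queue:work --tries=3 --stop-when-empty')
             ->everyMinute()
             ->withoutOverlapping();
}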
I am using SQS to upload my videos to an S3 bucket in the background. The queue works perfectly fine for small videos (~40 MB), but when I try to upload bigger videos (say 70 MB and more), the queue worker gets killed.
Here's the worker's output:
vagrant@homestead:~/Laravel/video (master)*$ php artisan queue:work --tries=3
[2017-08-25 17:48:42] Processing: Laravel\Scout\Jobs\MakeSearchable
[2017-08-25 17:48:45] Processed: Laravel\Scout\Jobs\MakeSearchable
[2017-08-25 17:48:51] Processing: App\Jobs\VideoUploadJob
Killed
vagrant@homestead:~/Laravel/youtube (master)*$ php artisan queue:work --tries=3
[2017-08-25 17:50:33] Processing: App\Jobs\VideoUploadJob
Killed
vagrant@homestead:~/Laravel/video (master)*$
Where do I need to change the setting? Something on the Laravel side, or on SQS?
Can anyone help me?
There are two possibilities: the worker is either running out of memory or exceeding the maximum execution time.
Try running dmesg | grep php; this will show you more details (for example, whether the kernel's OOM killer terminated the process).
Increase max_execution_time and/or memory_limit in your php.ini file.
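For a quick test you can also raise the limit for a single run without editing php.ini, using PHP's standard -d flag:

php -d memory_limit=512M artisan queue:work --tries=3

Note that on the CLI, max_execution_time already defaults to 0 (unlimited), so in this case the memory limit is the more likely culprit.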
Every time a delayed job has run on my server, I can see a new idle process in PostgreSQL. Running select * from pg_stat_activity; shows:
DEALLOCATE pdo_stmt_00000018
I tried to dig into it: one more line (and one more process in htop) appears each time a delayed, queued job has just run.
The last line of my Job is:
$this->log->info("Invitation {$this->invitation->uuid} sent");
And I can see this in my logs, so the job itself runs fine, BUT it does not clean up afterwards: every time, an idle process is left behind with "DEALLOCATE pdo_stmt_00000xxx".
What can I do to avoid this problem? What is causing this?
Here is my supervisor config:
[program:laravel-queue-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/my/site/artisan queue:work --queue=invitation,default --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/path/to/my/logs/worker.log
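For reference, after editing this config Supervisor has to re-read it before the change takes effect (standard supervisorctl commands):

supervisorctl reread
supervisorctl update
supervisorctl start laravel-queue-worker:*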
Side note: the idle processes disappear when I run php artisan queue:restart
I found a (quick and dirty) workaround: adding this at the end of my job's handle() method:
\DB::disconnect();
sleep(1);
An issue has been opened on Laravel: https://github.com/laravel/framework/issues/18384
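In context, the end of the job's handle() method would look something like this (a sketch; the logger and invitation properties are taken from the question):

public function handle()
{
    // ... send the invitation ...

    $this->log->info("Invitation {$this->invitation->uuid} sent");

    // workaround: explicitly close the PDO connection so PostgreSQL
    // is not left holding an idle "DEALLOCATE pdo_stmt_..." backend
    \DB::disconnect();
    sleep(1);
}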
I am running a Laravel queue worker monitored by Supervisord:
php /home/path/to/artisan queue:listen --env=production --timeout=0 --sleep=5 --tries=3
However, if a job is failing, it retries indefinitely: the 'attempts' column in the jobs table shows 255, which is the maximum value for that MySQL field, but the job has actually been attempted thousands of times.
If the jobs table has 'attempts' at 255 and 'tries' is set to 3, why does it keep running this job in the queue?
You should use php artisan queue:work --daemon with Supervisord. queue:listen is meant for development (it always picks up new code without a restart); to understand the difference, you can read the source of both commands.
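With Supervisor, the command line from the config above would become something like (a sketch adapted from the question's setup):

command=php /home/path/to/artisan queue:work --daemon --env=production --sleep=5 --tries=3

Remember that a daemon worker keeps the booted application in memory, so you must run php artisan queue:restart after every deploy for new code to take effect.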
Whenever I check for running processes, I notice this process running:
/usr/local/Cellar/php55/5.5.24/bin/php artisan queue:work --queue=default --delay=0 --memory=128 --sleep=3 --tries=0 --env=local
I notice it gets a new process ID every few seconds. In my /app/Console/Kernel.php file I have nothing in the schedule() method, and there is nothing in my crontab. What is causing this worker process to run, restart, and run again as if controlled by a cron job?
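One likely explanation (an assumption, since the question does not show it): a php artisan queue:listen process is running somewhere. queue:listen works by spawning a fresh queue:work child process for each iteration of its loop, with exactly these default flags, which is why a new PID appears every few seconds. You can check for the parent with:

ps aux | grep queue:listen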