Supervisor returns error "too many arguments, expected arguments 'command'" - Laravel

I want to run the command php artisan schedule:run >> /dev/null 2>&1 using Supervisor, but it returns the error too many arguments, expected arguments "command".
My /etc/supervisord.d/conf.d/job-runner.conf file content:
[program:job-runner]
command=php /home/mysite/public_html/artisan schedule:run >> /dev/null 2>&1
autostart=true
autorestart=true
user=apache
redirect_stderr=true
stdout_logfile=/home/mysite/public_html/storage/logs/job-runner.log
[supervisord]
How can I fix this?

The error occurs because Supervisor does not run command= through a shell, so >> /dev/null 2>&1 is passed to artisan as literal extra arguments. More fundamentally, you should not use Supervisor for this: Supervisor is meant to manage long-running processes, not to execute scripts.
The command will run, the script will execute and exit, and Supervisor will then likely auto-restart it at an uncontrolled rate (as fast as the hardware allows), which can cause runaway CPU and memory consumption.
You should use a cron job, as described in the docs, to execute scheduled tasks at a controlled rate.
https://laravel.com/docs/5.7/scheduling#introduction
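For example, the standard scheduler entry from those docs, using the project path from your config (a sketch; adjust the PHP binary and the crontab user for your server):

* * * * * cd /home/mysite/public_html && php artisan schedule:run >> /dev/null 2>&1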

Related

Running Artisan Horizon on Shared Hosting

I tried to create a cron job on shared hosting with artisan horizon, like this:
/usr/local/bin/ea-php74 /home/example/example.com/artisan schedule:run 1>> /dev/null 2>&1
/usr/local/bin/ea-php74 /home/example/example.com/artisan horizon>> /dev/null 2>&1
but after a few hours our server goes down. Any solutions for us?
Horizon is not supposed to run from a cron job: every time cron triggers that line, a new Horizon process is started, and those accumulating processes are probably why your server is going down.
The right solution is to set up Horizon using Supervisor: https://laravel.com/docs/8.x/horizon#supervisors
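A minimal config sketch based on those docs, using the PHP binary and path from your cron lines (assumptions; adjust the paths and log location for your host):

[program:horizon]
process_name=%(program_name)s
command=/usr/local/bin/ea-php74 /home/example/example.com/artisan horizon
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/example/example.com/horizon.log
stopwaitsecs=3600

Keep in mind that many shared hosts do not let you run Supervisor at all; in that case a VPS, or a host that supports long-running processes, is the realistic fix.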

Bash to start and kill process on Ubuntu in a given period

I have this situation: I have a PHP script running in an Ubuntu terminal (xfce4-terminal) as a console process (the PHP script contains a loop that does some processing).
The problem is: every two days this process is killed due to memory overuse.
What I need is: a bash script that can start the process and, every 48 hours, kill it and start it again.
The optimal solution is to fix the memory leak: trace the leaking function and post a new question with the relevant code if you need help.
Now for this specific case you can use something like this:
#!/usr/bin/env bash
# Start the script, let `timeout` kill it after the given duration, then loop to restart it.
while true
do
    timeout 12h php myfile.php
done
This is an infinite loop that starts your command and kills it after 12 hours (or any other duration you want: 30m, 48h, 1d, etc.).
A more stable solution is creating a systemd service (see the unit sketch after the Supervisor sample below) or deploying your script with a process manager like Supervisor or Monit.
Supervisor has a config parameter, autorestart: if you set it to true, Supervisor restarts your script every time it exits, which is a stable, production-ready solution.
A sample Supervisor config from this post:
[program:are_we_there_yet]
command=php /var/www/areWeThereYet.php
numprocs=1
directory=/tmp
autostart=true
autorestart=true
startsecs=5
startretries=10
redirect_stderr=false
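As for the systemd alternative mentioned above, here is a minimal unit sketch (hypothetical unit name and PHP path; RuntimeMaxSec plus Restart=always reproduces the 48-hour kill-and-restart cycle from the question):

# /etc/systemd/system/php-worker.service (hypothetical name)
[Unit]
Description=Long-running PHP worker, recycled every 48 hours

[Service]
ExecStart=/usr/bin/php /path/to/myfile.php
Restart=always
RuntimeMaxSec=48h

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now php-worker.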

Laravel run the queue job every second

I have created a queue job which needs to run every second. How can I do that? I created the job using an artisan command, but the job does not run every second. I think I need to reconfigure some of Supervisor's config files.
The Laravel docs have examples of exactly that; check https://laravel.com/docs/5.6/queues#supervisor-configuration
The default example is:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/yourproject/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/var/www/html/yourproject/storage/logs/worker.log
Note that you need to have a worker connection set in config/queue.php; then, in the Supervisor command, you can specify the connection on artisan queue:work. The example above uses sqs, but you can configure other connections such as redis.
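For example, to use a redis connection instead (assuming it is configured in config/queue.php), only the command line changes:

command=php /var/www/html/yourproject/artisan queue:work redis --sleep=3 --tries=3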
I'm successfully using Spatie's Short Schedule package for this.
You may use crontab for your queueable job, but cron only allows a minimum interval of one minute. Use crontab -e to set your schedule (https://crontab.guru/ helps here): adding */2 * * * * php /var/www/html/your-project-folder/artisan queue:work >> /dev/null 2>&1 to your crontab runs the worker every 2 minutes.
Alternatively, you could write a shell script with an infinite loop that runs your task and then sleeps for one second, as sketched below.
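A sketch of that loop (using the artisan path placeholder from the crontab answer above; --once tells the worker to process a single job and exit):

while true
do
    php /var/www/html/your-project-folder/artisan queue:work --once
    sleep 1
done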

How to kill the laravel queue:listen --queue=notification?

For a cron job I am using the following code in Laravel 5.1 and run the command every minute. But even after removing the cron job from crontab, the Laravel code still executes.
$this->call('queue:listen', [
    '--queue' => 'notification-emails', '--timeout' => '30'
]);
What could be the problem? How can I stop this queue:listen?
You are probably looking for queue:work, which will stop when no jobs are left, whereas queue:listen will keep running.
If you want to kill the existing process, you have to do it manually, because there is no Laravel command to kill all queue:listen processes.
Keep in mind that you will not find a process named artisan queue:listen; you have to look for artisan schedule:run, because queue:listen, when called internally, does not create a separate process.
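For example, to find and kill the process by hand (a sketch; match the grep pattern to whatever ps shows on your machine):

ps aux | grep 'artisan schedule:run'   # locate the PID
kill <PID>                             # or in one step: pkill -f 'artisan schedule:run'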

Laravel queues run forever

I have a page which supports some kind of mail notification. When a user inserts some data, I want to send a mail to another user. I know Mail::send() works perfectly, but it is slow, so I want to push this mail to a queue. I use iron.io as the provider. Everything works perfectly until I close the console.
So, is it possible to keep php artisan queue:listen running after I close the console, on Windows and Linux?
On Linux you can run any process in the background by using nohup:
nohup php artisan queue:listen
This will keep the process running even if you close your terminal; nohup forces the process to ignore hangup signals.
nohup writes output to a logfile (nohup.out). If you want to suppress this, you can add
>/dev/null 2>&1 &
after your command.
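Putting the two pieces together, the full invocation looks like this (run it from your project directory):

nohup php artisan queue:listen >/dev/null 2>&1 &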
