I'm having a bit of a nightmare getting a crontab/cronjob to run an Artisan command.
I have another Artisan command running via a cron job with no problems, but this second command won't run.
Firstly, when I do 'crontab -e' and edit the file to contain:
0 0 * * * /usr/local/bin/php /home/purple/public_html/artisan feeds:send
The cronjob doesn't run at all.
If I go to cPanel and add the cronjob there, it runs but I receive the following error:
open(public/downloads/feeds/events.csv): failed to open stream: No such file or directory
The thing is, the file exists and the directories have the correct permissions. If I run the command while logged in via SSH as root or as the user purple (php artisan feeds:send), it runs flawlessly and completes its tasks without a problem.
If in cPanel, I edit the cronjob to use:
0 0 * * * php /home/purple/public_html/artisan feeds:send
I receive the following error:
There are no commands defined in the "feeds" namespace.
The funny thing is that my other command is registered in the crontab file and works, yet it has no reference in cPanel at all.
Any help would be much appreciated. For completeness, I have included the command and the model that the command uses.
Feed.php Model:
http://laravel.io/bin/1e2n
DataFeedController.php Controller:
http://laravel.io/bin/6x0E
SendFeeds.php Command:
http://laravel.io/bin/BW3d
start/artisan.php:
http://laravel.io/bin/2xV3
FeedInterface.php Interface:
http://laravel.io/bin/LxnO
As you can see there is a GetRates command, which works.
Well, it turns out I had to cd into the project directory first before running the command, which makes sense now that I've worked it out. Easy when you know how, eh!
* * * * * cd /home/purple/public_html/ && /usr/local/bin/php artisan feeds:send
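The likely reason, as an aside (this is an assumption, since cron reports little): cron starts jobs in the user's home directory, so a relative path like public/downloads/feeds/events.csv in the command resolves against the wrong directory unless you cd first. If you also want to capture output while debugging, a variant of the entry could look like this; the log file path is just an example:
* * * * * cd /home/purple/public_html/ && /usr/local/bin/php artisan feeds:send >> /home/purple/feeds-cron.log 2>&1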
Related
I want to run the php artisan schedule:work command, but the issue is that when I close PuTTY it terminates the process, while I need it to keep running on the server.
My server is running Ubuntu 20.04.1 LTS.
Actually, the schedule:work command is meant for the local development environment.
Since you want to run your scheduler on a server, you should add a cron job like the following:
First, go to your terminal, SSH into your server, cd into your project and run this command:
crontab -e
This will open the server's crontab file. Paste the line below into it, save, and then exit.
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
Here we added one cron job, which executes every minute and starts the Laravel scheduler.
Don't forget to replace /path-to-your-project with your project path.
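While debugging, it can help to keep the scheduler's output instead of discarding it. A sketch, using storage/logs/scheduler.log as an example log location:
* * * * * cd /path-to-your-project && php artisan schedule:run >> /path-to-your-project/storage/logs/scheduler.log 2>&1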
Use "nohup php artisan schedule:work"
Go to your project path e.g. cd /var/www/mywebsite
Then run this command:
crontab -e
Choose nano (option 1) if it shows a list of editors.
At the bottom of the file, add this:
* * * * * php /var/www/mywebsite/artisan schedule:run >> /dev/null 2>&1
Save it with Ctrl+S and exit.
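If you prefer the nohup approach mentioned above instead, the rough idea is to start the worker detached from the terminal so it keeps running after you close PuTTY. A minimal sketch; the log path is an assumption:
cd /var/www/mywebsite
nohup php artisan schedule:work > storage/logs/schedule-work.log 2>&1 &
Keep in mind that a nohup'd process will not survive a server reboot, which is one reason the cron-based schedule:run entry is usually preferred.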
I can't get a Symfony command to execute from a bash script when I run it via cron.
When I execute the .sh script by hand, everything works fine.
In my bash file the command is executed like this:
/usr/bin/php -q /var/www/pww24/bin/console pww24:import asari $office > /dev/null
I run the scripts as root, and the cron job is set up for root as well. For the test I set the file permissions to 777 and added +x for execution.
The bash script itself executes fine. It acts as if it's skipping the command, but from the logs I can see that the surrounding code is executed.
It turned out that the Symfony environment variables I had stored on the server were not enough. When you execute the command from the command line it's fine, but when using cron you need them in a .env file. It turned out that in the process of continuous integration I had only received a .env.dist file, and I had to create the .env file anyway.
Additionally, I added two lines to the crontab:
PATH=~/bin:/usr/bin/:/bin
SHELL=/bin/bash
and run my command like this from the bash file:
sudo /usr/bin/php -q /var/www/pww24/bin/console pww24:import asari $office > /dev/null
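Putting it together, the crontab might end up looking roughly like this. The schedule, log path and script name (import.sh) are placeholders, not the poster's actual values:
PATH=~/bin:/usr/bin/:/bin
SHELL=/bin/bash
# example schedule: run the import wrapper nightly at 02:00
0 2 * * * /var/www/pww24/bin/import.sh >> /var/log/pww24-import.log 2>&1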
I have a ruby script that connects to an Amazon S3 bucket and downloads the latest production backup. I have tested the script (which is very simple) and it works fine.
However, when I schedule this script to be run as a cron job it seems to fail when it loads the Amazon (aws-s3) gem.
The first few lines of my script looks like this:
#!/usr/bin/env ruby
require 'aws/s3'
As I said, when I run this script manually, it works fine. When I run it via a scheduled cron job, it fails when it tries to load the gem:
`require': no such file to load -- aws/s3 (LoadError)
The crontab for this script looks like this:
0 3 * * * ~/Downloader/download.rb > ~/Downloader/output.log 2>&1
I originally thought it might be because cron is running as a different user, but when I do a 'whoami' at the start of my ruby script it tells me it's running as the same user I always use.
I have also done a bundle init and added the gem to my Gemfile, but this doesn't seem to have any effect.
Why does cron fail to load the gem? I am running Ubuntu.
As mentioned here https://coderwall.com/p/vhv8aw you can simply try
rvm cron setup # let RVM do your cron settings
Make sure you make a copy of your crontab before running this command.
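Backing up and restoring the crontab is straightforward; a quick sketch:
crontab -l > ~/crontab.backup   # save the current crontab to a file
rvm cron setup                  # let RVM write its PATH/GEM_* settings into the crontab
crontab -l                      # inspect the result; restore with: crontab ~/crontab.backup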
If you're running it manually and it works you're probably in a different shell environment than cron is executing in. Since you mention you're on Ubuntu, the cron jobs probably execute under /bin/sh, and you're manually running them under /bin/bash if you haven't changed anything.
You can debug your environment problems or you can change the shell that your job runs under.
To debug: there are several ways to figure out what shell your cron jobs are using. It can be defined in
/etc/crontab
or you can make a cron job to dump your shell and environment information, as has been mentioned in this SO answer: How to simulate the environment cron executes a script with?
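For example, a throwaway entry like this one (the output file name is arbitrary) captures the environment cron actually runs with, which you can then compare against env from your login shell:
* * * * * env > /tmp/cron-env.txt 2>&1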
To switch to that shell and see the actual errors causing your job to fail, do
sudo su
env -i <path to shell> (e.g. /bin/sh)
Then running your script you should see what the errors are and be able to fix them (rubygems?).
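In other words, something along these lines should reproduce the failure interactively (assuming cron uses /bin/sh on your system; env -i clears HOME, so give the script's full path, shown here with a placeholder):
sudo su
env -i /bin/sh
# inside the stripped-down shell, run the script by its full path, e.g.:
/home/youruser/Downloader/download.rb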
Option 2 is to switch shells. You can always try something like:
0 3 * * * /bin/bash -c '~/Downloader/download.rb > ~/Downloader/output.log 2>&1'
To force your job into bash. That might also clear things up.
You may also explicitly set your Gem path:
GEM_HOME="/usr/local/rvm/gems/ruby-1.9.2-p290#my-special-gemset"
In a non-cron environment, execute echo $PATH, copy the path, and paste it into your crontab before your command:
echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin
and inside crontab:
PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin
0 3 * * * ~/Downloader/download.rb > ~/Downloader/output.log 2>&1
Add this at the beginning of your cron
PATH="/home/user/.rvm/gems/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4#global/bin:/home/user/.rvm/rubies/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4#global/bin:/home/user/.rvm/rubies/ruby-2.1.4/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/home/user/.rvm/bin:/usr/local/sbin:/usr/sbin:/home/user/.rvm/bin:/home/user/.local/bin:/home/user/bin"
GEM_HOME='/home/user/.rvm/gems/ruby-2.1.4'
GEM_PATH='/home/user/.rvm/gems/ruby-2.1.4:/home/user/.rvm/gems/ruby-2.1.4#global'
MY_RUBY_HOME='/home/user/.rvm/rubies/ruby-2.1.4'
IRBRC='/home/user/.rvm/rubies/ruby-2.1.4/.irbrc'
RUBY_VERSION='ruby-2.1.4'
I tried all the solutions above; none of them worked until I tried:
0 12 * * * /bin/bash -l -c 'ruby /Users/simon/Desktop/script.rb'
I wondered if anyone had any idea why the following problem is occurring, or any tips on where to look. I can run the shell script manually over SSH, but if I set it up to run from crontab I get the problems below.
The server is FreeBSD 8, and I have full root access.
I have a Bourne shell script that runs as root via crontab, with the following entry:
* * * * * /data/backups/scripts/server_log_check.sh > /data/backups/logs/cron_logs/server_log_check.sh_cron.log
The server_log_check.sh script checks whether the report server is running with this command:
if ps -xauww | grep -v grep | grep java | grep www > /dev/null
then
    : # reports are running, no need to try to restart it
else
    /usr/local/etc/rc.d/tomcat55 start # start report server because it is not running
fi
The problem occurs on the line /usr/local/etc/rc.d/tomcat55 start, but only when the script is run from crontab: all the rest of the code in the script executes fine, just not this line. If I run the script manually via SSH, that line runs without a problem, and if I paste /usr/local/etc/rc.d/tomcat55 start into the SSH command prompt on its own, it runs just fine too.
I changed the ownership of server_log_check.sh to root, but that didn't make a difference; the tomcat55 script is owned by www. The crontab entry is being made under the root profile, so I assumed there would be no problem running a file owned by a less privileged user such as www.
Do you have any ideas why cron is doing this?
Thanks in advance
Try adding the following which will add the error to the log file as well:
* * * * * /data/backups/scripts/server_log_check.sh > /data/backups/logs/cron_logs/server_log_check.sh_cron.log 2>&1
Also change this:
/usr/local/etc/rc.d/tomcat55 start
to:
cd /home/root
nohup /usr/local/etc/rc.d/tomcat55 start &
This should create a nohup.out in /home/root.
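Applied to the script, the else branch would then look roughly like this (a sketch using the paths from the original):
if ps -xauww | grep -v grep | grep java | grep www > /dev/null
then
    : # reports are running, nothing to do
else
    cd /home/root
    nohup /usr/local/etc/rc.d/tomcat55 start & # start the report server, detached
fi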
I have a cron job that I want to execute every 5 minutes:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /scr_temp/scheduleSpider.sh
In /var/spool/cron/crontabs/root
The cron should execute a shell script:
#!/bin/sh
if [ ! -f "sync.txt" ]; then
touch "sync.txt"
chmod 777 /scr_temp
curl someLink
fi
That works fine from the command line but not from cron. The cron job itself is started, but the script does not run.
I read about the path problem, but I don't really understand it. I set up a cron job that writes some env data to a file. This is the output:
HOME=/root
LOGNAME=root
PATH=/usr/bin:/bin
SHELL=/bin/sh
If I execute the env command on the command line, I get the following output for PATH:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
What path do I have to set in my shell script?
Your $PATH is fine; leave it alone. On Ubuntu, all the commands you're invoking (touch, chmod, curl) are in /bin and/or /usr/bin.
How did you set up the cron job? Did you run crontab some-file as root?
It seems that /etc/crontab is the usual mechanism for running cron commands as root. On my Ubuntu system, sudo crontab -l says no crontab for root. Running crontab as root, as you would for any non-root account, should be ok, but you might consider using /etc/crontab instead. Note that it uses a different syntax than an ordinary crontab, as explained in the comments at the top of /etc/crontab:
$ head -5 /etc/crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
Run sudo crontab -l. Does it show your command?
Temporarily modify your script so it always produces some visible output. For example, add the following right after the #!/bin/sh:
echo "Running scheduleSpider.sh at \`date\`" >> /tmp/scheduleSpider.sh.log
and see what's in /tmp/scheduleSpider.sh.log after a few minutes. (You can set the command to run every minute so you don't have to wait as long for results.) If that works (it should), you can add more echo commands to your script to see in detail what it's doing.
It looks like your script is designed to run only once; it creates the sync.txt file to prevent it from running again. That could be the root (ahem) of your problem. Was that your intent? Did you mean to delete sync.txt after running the command, and just forgot to do it?
root's home directory on Ubuntu is /root. The first time your script runs, it should create /root/sync.txt. Does that file exist? If so, how old is it?
Note that curl someLink (assuming someLink is a valid URL) will just dump the content from the specified link to standard output. Was that your intent (it will show up as e-mail to root)? Or did you just not show us the entire command?
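If you would rather not have that content mailed to root, you could write it to a file instead; a small sketch, with the output path as an example:
curl -sS someLink -o /tmp/scheduleSpider-response.txt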
First: you can substitute the first field with */5 (see man 5 crontab)
Second: have cron mail the output to your email address by entering MAILTO=your@email.address in your crontab. If the script has any output, it'll be mailed. If you don't set MAILTO, you may still have a local mailbox in which you can find the cron output (usually $MAIL).
A better syntax for your cron entry is
*/5 * * * * /scr_temp/scheduleSpider.sh
Also, check the permissions on your scheduleSpider.sh file. Cron runs under a different user than the one you likely use to run the program interactively, so it may be that cron does not have permission. Try chmod 777 for now, just to check.
I suggest the following (see the sketch after this list):
check that /scr_temp/scheduleSpider.sh has the executable bit set
set PATH properly inside your script, or use absolute paths to commands (/bin/touch instead of touch)
specify an absolute path to the sync.txt file (or compute it relative to the script)
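A sketch of the script with those suggestions applied, assuming sync.txt is meant to live in /scr_temp and that curl is at /usr/bin/curl:
#!/bin/sh
if [ ! -f /scr_temp/sync.txt ]; then
    /bin/touch /scr_temp/sync.txt
    /bin/chmod 777 /scr_temp
    /usr/bin/curl someLink
fi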
Have you added the command via crontab -e, or just by editing the crontab file directly? You should use crontab -e to get it updated correctly.
Set the working directory in the cron script; it probably doesn't execute where you think it does.
You should add /bin/sh before the absolute path of your script.
*/5 * * * * /bin/sh /scr_temp/scheduleSpider.sh