I am trying to write a cron job that will run Slurm's sacct command for given dates and save the output to a file. As I don't have much experience with shell scripting, I am not sure how to do it.
I did the following:
I created a shell script with the following code (sacct_data.sh):
#!/bin/bash
startdate=`date +"%Y-%m-%dT00:00:00"`
enddate=`date -d "yesterday" '+%Y-%m-%dT00:00:00'`
TZ=UTC sacct info_that_needs_to_be_pulled --starttime $startdate --endtime $enddate > data.log
In the crontab I have the following code:
* * * * * bash sacct_data.sh # I know this will run every minute, but that's not important
However, I am getting the error "sacct: command not found".
Any help is appreciated :)
The sacct: command not found error means that the command was not found in the PATH, which is expected: the PATH set in cron environments is really minimal. You can either set a correct PATH variable (see this for instance) or use the absolute path to sacct: type which sacct in a shell and use its output in sacct_data.sh.
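For example, assuming which sacct prints /usr/bin/sacct (the real path will differ between clusters), sacct_data.sh could look like this sketch:
#!/bin/bash
# Absolute path to sacct; replace with the output of `which sacct` on your system.
SACCT=/usr/bin/sacct
# Assuming the intent is to cover yesterday's jobs: start at yesterday 00:00, end at today 00:00.
startdate=$(date -d "yesterday" +"%Y-%m-%dT00:00:00")
enddate=$(date +"%Y-%m-%dT00:00:00")
# Write to an absolute path too, since cron's working directory is normally $HOME.
TZ=UTC "$SACCT" info_that_needs_to_be_pulled --starttime "$startdate" --endtime "$enddate" > /path/to/data.log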
Related
I have the following entry in my crontab:
0,30 7-18 * * 1-5 cd /path/to/scrapers && scrapy crawl funny_quotes &>> $(date "+/home/foobar/logs/\%Y\%m\%d.funny.log")
This entry is supposed to run every half hour, on weekdays and append the output to the log file each time it's run. I have tested the syntax online, using this handy tool, and the syntax is correct.
However, the job doesn't get run. What's worse, the log file is created (but has no contents - file size 0), so I have no diagnostic information to go by.
The command cd /path/to/scrapers && scrapy crawl funny_quotes runs perfectly when I type it at the command line, and there is a copious amount of information output to the console from scrapy.
Why does the cron job fail to run successfully, and why is nothing being written to the log file?
Check your cron logs:
grep CRON /var/log/syslog
I am sure you are getting an error like scrapy: command not found or something similar.
To fix it, do this:
Run echo $PATH in your shell and copy the output.
Then open crontab -e.
At the very top of the file, write PATH=YOUR_COPIED_CONTENTS (see the example below).
And that should work.
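For example, if echo $PATH in your shell prints /usr/local/bin:/usr/bin:/bin (yours will almost certainly be longer), the top of the crontab would look like this:
# Give cron jobs the same PATH as your interactive shell
PATH=/usr/local/bin:/usr/bin:/bin
# &>> in the job below is bash syntax, and cron's default /bin/sh may not
# understand it, so naming bash explicitly can matter here as well
SHELL=/bin/bash
0,30 7-18 * * 1-5 cd /path/to/scrapers && scrapy crawl funny_quotes &>> $(date "+/home/foobar/logs/\%Y\%m\%d.funny.log")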
I get this error when running it through crontab:
/aws-cron-job/Ap_Hourly_xxxDelete.sh: 1: ./aws-cron-job/Ap_Hourly_xxxDelete.sh: ec2-describe-snapshots: not found
./aws-cron-job/Ap_Hourly_xxxDelete.sh: 1: ./aws-cron-job/Ap_Hourly_xxxDelete.sh: ec2-delete-snapshot: not found
This is my script (filename: xxx.sh):
ec2-delete-snapshot --region ap-southeast-1 $(ec2-describe-snapshots --region ap-southeast-1 | sort -r -k 5 | grep "Ap_Hourly" | sed 1,4d | awk '{print $2};' | tr '\n' ' ')
This is my cronjob:
30 05-15 * * 1-6 ./aws-cron-job/Ap_Hourly_xxxDelete.sh > ./aws-cron-job/Ap_Hourly_xxxDelete.txt 2>&1
I can run this script manually but not through the cron job. Where is the problem? Thanks in advance.
I believe that you should place only absolute paths in your cron jobs. As seen in your question, you wrote:
./aws-cron-job/Ap_Hourly_xxxDelete.sh
and I think you should write:
/<rootpath>/aws-cron-job/Ap_Hourly_xxxDelete.sh
The environment that commands run with as cron jobs is very limited; things like $PATH and $HOME are not what you'd expect.
To analyze this, use crontab -e to add the job * * * * * /bin/bash -c 'env > /tmp/cron.env', then look inside that file to see what bash knows about when started as a cron job on your machine. The job will run every minute, so when you're done debugging, remove it, again with crontab -e.
The error ec2-describe-snapshots: not found suggests that ec2-describe-snapshots might not be found in $PATH when the script runs as a cron job. To fix this, first find its normal location from a shell with which ec2-describe-snapshots. Then either use the full path in the script (/some/path/ec2-describe-snapshots ...) or adjust $PATH in the script (PATH=/some/path:$PATH) before calling ec2-describe-snapshots.
Also, it's a good habit to use full paths in crontab entries, both for executables and for log files; however, the error in the OP would not come from this.
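As a sketch, assuming which ec2-describe-snapshots reports something under /opt/aws/bin (the real location depends on how the EC2 API tools were installed), the script could set its own PATH before using the tools:
#!/bin/sh
# /opt/aws/bin is only an assumption; use the directory reported by `which`.
PATH=/opt/aws/bin:$PATH
export PATH
# The EC2 API tools may also need EC2_HOME and JAVA_HOME exported here
# if your login shell normally sets them.
ec2-delete-snapshot --region ap-southeast-1 $(ec2-describe-snapshots --region ap-southeast-1 | sort -r -k 5 | grep "Ap_Hourly" | sed 1,4d | awk '{print $2};' | tr '\n' ' ')
The crontab entry itself could then use absolute paths as well (the /home/user prefix is just a placeholder):
30 05-15 * * 1-6 /home/user/aws-cron-job/Ap_Hourly_xxxDelete.sh > /home/user/aws-cron-job/Ap_Hourly_xxxDelete.txt 2>&1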
When I type ./DHT 11 4 in a terminal, it works and saves all data to MySQL correctly.
id (1), temp (29), hum (37), date (2015...)
When I add it to a crontab it does not work correctly.
id (1), temp (0 or empty), hum (0 or empty), date (2015...)
The shell script:
#!/bin/bash
#DHT11
SCRIPT="/var/www/ErnestynoFailai/scripts/DHT 11 4"
#DHT22
#SCRIPT="/root/to/folder/DHT 22 4"
#AM2302
#SCRIPT="/root/to/folder/DHT 2302 4"
TEMP_AND_HUM=""
while [[ $TEMP_AND_HUM == "" ]]
do
TEMP_AND_HUM=`$SCRIPT | grep "Temp"`
done
TEMP=`echo "$TEMP_AND_HUM" | cut -c8-9`
HUM=`echo "$TEMP_AND_HUM" | cut -c21-22`
myqsl_user="root"
myqsl_pw="pw"
myqsl_database="DHT"
today=`date +"%Y-%m-%d %T"`
query="INSERT INTO DHT11 (temp, hum, date) VALUES ('$TEMP', '$HUM', '$today');"
mysql --user=$myqsl_user --password=$myqsl_pw $myqsl_database << EOF
$query
EOF
And crontab:
*/1 * * * * /var/www/ErnestynoFailai/scripts/write_DHT11_to_db.sh
What can be wrong?
A long time ago, it happened on some systems that cron didn't start shell scripts, only binaries, so you had to indicate explicitly which interpreter to use in the crontab line:
*/1 * * * * /bin/bash /var/www/ErnestynoFailai/scripts/write_DHT11_to_db.sh
I haven't checked since, and I don't know what system you are using. On Debian/Jessie, the crontab(5) manpage says that the command is executed by /bin/sh, or by the shell specified by the SHELL variable in the crontab file.
See https://superuser.com/questions/81262/how-to-execute-shell-script-via-crontab
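If you prefer the SHELL route mentioned in the manpage, the crontab itself can name the shell for all of its jobs (assuming bash lives at /bin/bash):
SHELL=/bin/bash
*/1 * * * * /var/www/ErnestynoFailai/scripts/write_DHT11_to_db.sh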
You probably have a problem with different environment settings. For debugging, an easy way is to add a line like the following at the beginning of your script:
set >/tmp/envlog.txt
Then compare its contents once created when you run your script directly and once using crontab.
Another way for debugging is:
exec >/tmp/scriptoutput.txt 2>&1
set -x
With these commands, the full output of your script will be redirected to the specified file.
Most often, the PATH variable is wrong. Instead of
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
You often only have a reduced version of it:
/usr/sbin:/usr/bin:/sbin:/bin
This means that some commands cannot be found. If you find a command which doesn't work, try finding out where it is located using:
$ which mysql
/usr/bin/mysql
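Once you know where the missing commands live, you can either call them by their absolute paths or rebuild PATH near the top of the script, for example (using the full PATH shown above; adjust it to whatever which reports on your machine):
#!/bin/bash
# Give the script a complete PATH so it does not depend on cron's minimal one
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
export PATH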
My Asterisk check script is not running only when run by crontab; it runs fine via ./script.sh and sh script.sh. Here is the script:
date
asterisk -rx "show channels"
asterisk -rx "zap show channels"
Then I append (>>) the output to a log file. When I run it manually via ./ or sh with >> log.log, it works, just not as a crontab entry listed as
* * * * * /root/script.sh
I have tried adding #!/bash/sh at the top of the script, and only the date is shown no matter what I try. I am a noob with bash scripts and I'm trying to learn.
Since feature requests to mark a comment as an answer remain declined, I am copying the solution from the comments here:
Have you checked your path? It's almost certainly different when run under cron. (You can set PATH=... in your crontab. From the command line, type "echo $PATH" to see what you're expecting.) It might be more standard to provide full paths to date, asterisk and your log file inside script.sh (e.g., "/bin/date /path/to/asterisk ....") – mjk
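A sketch of script.sh along those lines, assuming asterisk is at /usr/sbin/asterisk (check with which asterisk) and using a placeholder log path; note that the shebang should be #!/bin/sh or #!/bin/bash, not #!/bash/sh:
#!/bin/sh
# Absolute paths everywhere so the script does not depend on cron's environment
/bin/date >> /var/log/asterisk_check.log 2>&1
/usr/sbin/asterisk -rx "show channels" >> /var/log/asterisk_check.log 2>&1
/usr/sbin/asterisk -rx "zap show channels" >> /var/log/asterisk_check.log 2>&1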
I have a bash script mysql_cron.sh that runs mysqldump:
#!/bin/bash
/usr/local/mysql/bin/mysqldump -ujoe -ppassword > /tmp/somefile
This works fine. I then call it from cron:
20 * * * * /home/joe/mysql_cron.sh
and this creates the file /tmp/somefile, but the file is always empty. I have tried adding a
source /home/joe/.bash_profile
to the script to make sure cron has the right environment variables, but that doesn't help. I see many other people having this problem but have found no solution. I've also tried the '>' operator in the crontab to capture any cron errors in a file, but that doesn't seem to produce any errors. Any troubleshooting ideas are welcome. Thanks!
Add error output to the file (as Damp has said), so that you can check whether there is any error:
#!/bin/bash
/usr/local/mysql/bin/mysqldump -ujoe -ppassword > /tmp/somefile 2>&1
You can also take a look at MySQL's log files at /var/log in case there is some hint there.
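You can also capture errors at the crontab level instead of inside the script, for example (the log path is only an illustration):
20 * * * * /home/joe/mysql_cron.sh >> /tmp/mysql_cron.log 2>&1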
Add this line to your script and compare the result between running it from cron versus running it directly:
env > /tmp/env.$$.out
The $$ will be replaced in the resulting filename by the PID of the shell running the script, so the run from cron and the run from your terminal produce separate files. You should be able to diff the two files and see if anything significant differs between the two environments.
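For example, after the script has run once from your terminal and once from cron, you could compare the two dumps from a bash shell (the PIDs in the filenames are placeholders for whatever actually appears in /tmp):
# Sort both dumps so ordering differences don't show up as noise
diff <(sort /tmp/env.1234.out) <(sort /tmp/env.5678.out)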