For a daily database backup, I created the following cron job:
File: crontab -e
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /var/path/deploy/database/scripts/backup.sh
File: /var/path/deploy/database/scripts/backup.sh
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
cp -r /var/path/deploy/database/scripts /var/path/postgresql/data
chmod -R 755 /var/path/postgresql/data
docker exec -it database /var/lib/postgresql/data/scripts/pg_backup_rotated.sh
When I execute the script directly, it works well and the backup is created successfully. But when the script is executed from the cron job, the command docker exec -it database /var/lib/postgresql/data/scripts/pg_backup_rotated.sh does not seem to work.
I have no error output in /var/log/syslog.
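A generic way to surface errors (my addition, not part of the original setup) is to redirect the cron job's output into a log file:

# Capture stdout and stderr from the cron job for debugging
* * * * * /bin/bash /var/path/deploy/database/scripts/backup.sh >> /tmp/backup_cron.log 2>&1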
As Danny Ebbers mentioned in a comment, the problem was the -i argument in the docker command.
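For reference, a sketch of the fix: cron attaches no TTY and no interactive stdin, so drop the -i (and -t) flags from the docker exec call in backup.sh:

# Works under cron: no pseudo-TTY (-t) requested and no stdin (-i) kept open
docker exec database /var/lib/postgresql/data/scripts/pg_backup_rotated.sh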
I created a test.sh shell script, which I scheduled using crontab -e to run every minute, redirecting its output to a file.
test.sh
echo "Printing all Environment Var"
env
echo "Bye Bye"
Below is what my crontab looks like:
#crontab
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59 * * * * sudo su - admiir -c /u01/users/admiir/test.sh > /u01/app/iir/InformaticaIR/iirlog/crontab_launchsh.log
When I run ls -ltr, the timestamp on the output file is updated, but nothing is printed in it.
To run a cron job every minute and save the environment cron uses to a file, you could use:
* * * * * env > ~/cronenv
Next, you can start a shell with the same environment cron uses by running:
env - `cat ~/cronenv` /bin/sh
Here you could try something like:
su - admiir -c "/u01/users/admiir/test.sh > /u01/app/iir/InformaticaIR/iirlog/crontab_launchsh.log"
You can omit sudo and use su on its own.
Once your script is working, you could then update your cron with:
* * * * * su - admiir -c "/path/to/test.sh > /path/to/out.txt"
You could also run the cron as the specific user by doing:
sudo crontab -u username -e
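The entry in that user's crontab could then be as simple as the following (the 2>&1 is my addition, so stderr lands in the log as well):

* * * * * /u01/users/admiir/test.sh > /u01/app/iir/InformaticaIR/iirlog/crontab_launchsh.log 2>&1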
I've been troubleshooting for hours and can't find out why my shell script won't execute properly from a root crontab.
I'm on a vServer equipped with
Ubuntu 14.04.4 LTS
3.13.0-042stab113.11.
My script is a chmod 711 file:
/usr/local/sbin/bckup_script
and looks like this:
#!/bin/bash
# Timestamp used as a prefix for the dump files
DATE=$(date +%Y-%m-%d_%H_%M_%S)
# Prepare the backup directory's group, permissions and owner
su - -c "chgrp postgres /backup/db"
su - -c "chmod 770 /backup/db"
su - -c "chown user /backup/db"
# Dump one database, plus a full cluster dump, as the postgres user
su - postgres -c "pg_dump db_name > /backup/db/${DATE}db_name.sql && pg_dumpall > /backup/db/${DATE}_all_db.out"
# Mirror the user's data directory into the backup location
su - -c "rsync -a /home/user/value /backup/"
The crontab is installed using crontab -e as the root user.
The crontab executes as far as I can tell from syslog.
When executed directly as the root user (not via crontab), the script does what it's told to. Also, my PATH is set properly and working.
I have no idea what I am doing wrong.
Solution:
Thanks to Jay jargot I found out what was wrong. To complete the question, here are the outputs that were asked for:
crontab -l
#m h dom mon dow command
* * * * * bckup_script
The output of the cron job was:
/bin/sh: bckup_script: command not found
which led me to use the absolute path to the file, which solved the problem.
My crontab -l now looks as follows, and everything works like a charm!
# m h dom mon dow command
49 20 * * 1-5 /usr/local/sbin/bckup_script
Thanks very much!
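As an aside, a sketch of an equivalent fix (untested here): cron's PATH can be extended at the top of the crontab so bare command names resolve:

# m h dom mon dow command
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/bin:/bin
49 20 * * 1-5 bckup_script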
I need to run a cron job that changes the owner and group of selected files.
I have a script for this:
#!/bin/bash
filez=`ls -la /tmp | grep -v zend | grep -v textfile | awk '$3 == "www-data" {print $8}'`
for ff in $filez; do
/bin/chown -R tm:tm /tmp/$ff
done
If I run it manually, it works perfectly. If I add this to root's cron:
* * * * * /home/scripts/do_script
it does not change the owner/group. The file has permissions -rwsr-xr-x.
Any idea how this might be solved?
On my system, field $8 of ls -la is the hour/year, not the filename. Maybe that's the case for your root user as well. This is why you should never try to parse ls output. Even if you fix this issue, half a dozen more will remain to break the script in the future.
Use find instead:
find /tmp ! -name '*zend*' ! -name '*textfile*' -user www-data \
-exec chown -R tm:tm {} \;
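To preview which files would be affected before changing ownership, you could first replace -exec with -print as a harmless dry run:

# Dry run: list the matches without touching them
find /tmp ! -name '*zend*' ! -name '*textfile*' -user www-data -print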
If you are adding to root's cron (/etc/crontab), be aware that the syntax is different from a normal user's crontab: it has an extra user field.
# m h dom mon dow user command
* * 1 * * root /usr/bin/selfdestruct --immediately
Also, give the whole path to your command: cron does not provide much of an environment.
Make sure that the commands in your script also use full paths and don't rely on environment variables.
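A minimal sketch of a cron-friendly script header, assuming typical system paths (adjust for your distribution; the target file is a placeholder):

#!/bin/bash
# Cron starts jobs with a minimal environment, so set PATH explicitly...
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# ...and/or call binaries by full path (the file below is only an example)
/bin/chown -R tm:tm /tmp/example_file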
I need a shell script that will log in to a remote FTP server, list the files present in the root folder only, identify the XML files, and download them to the local system.
Login credentials can be kept in the script itself. The script must run only once a day.
Please help me with a UNIX bash shell script.
Thanks
Script:
#!/bin/bash
SERVER=ftp://myserver
USER=user
PASS=password
EXT=xml
DESTDIR=/destinationdir
# List the root folder and keep only names ending in .xml
# (column $9 of the listing is the filename on most FTP servers)
listOfFiles=$(curl "$SERVER" --user "$USER:$PASS" 2> /dev/null | awk '{ print $9 }' | grep -E "\.$EXT$")
for file in $listOfFiles
do
  curl "$SERVER/$file" --user "$USER:$PASS" -o "$DESTDIR/$file"
done
For a scheduled daily run, edit the crontab:
crontab -e
to edit your current jobs, and add, for example:
0 0 * * * bash /path/to/script
which will run the script every day at midnight.
If you can install ncftpget, this is a one-line operation:
ncftpget -u user -p password ftp.remote-host.com /my/local/dir '/*.xml'
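Combined with the crontab entry shown above, a hypothetical daily schedule could look like this (the /usr/bin path to ncftpget is an assumption; check it with which ncftpget):

# Run once a day at midnight; adjust the binary path as needed
0 0 * * * /usr/bin/ncftpget -u user -p password ftp.remote-host.com /my/local/dir '/*.xml'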