I'm a newbie to shell scripting. I have written a shell script to do an incremental backup of a MySQL database. The script is executable and runs successfully when executed manually, but it fails when executed through crontab.
The crontab entry looks like this:
*/1 * * * * /home/db-backup/mysqlbackup.sh
Below is the shell script:
#!/bin/sh
MyUSER="root" # USERNAME
MyPASS="password" # PASSWORD
MyHOST="localhost" # Hostname
Password="" #Linux Password
MYSQL="$(which mysql)"
if [ -z "$MYSQL" ]; then
echo "Error: MYSQL not found"
exit 1
fi
MYSQLADMIN="$(which mysqladmin)"
if [ -z "$MYSQLADMIN" ]; then
echo "Error: MYSQLADMIN not found"
exit 1
fi
CHOWN="$(which chown)"
if [ -z "$CHOWN" ]; then
echo "Error: CHOWN not found"
exit 1
fi
CHMOD="$(which chmod)"
if [ -z "$CHMOD" ]; then
echo "Error: CHMOD not found"
exit 1
fi
GZIP="$(which gzip)"
if [ -z "$GZIP" ]; then
echo "Error: GZIP not found"
exit 1
fi
CP="$(which cp)"
if [ -z "$CP" ]; then
echo "Error: CP not found"
exit 1
fi
MV="$(which mv)"
if [ -z "$MV" ]; then
echo "Error: MV not found"
exit 1
fi
RM="$(which rm)"
if [ -z "$RM" ]; then
echo "Error: RM not found"
exit 1
fi
RSYNC="$(which rsync)"
if [ -z "$RSYNC" ]; then
echo "Error: RSYNC not found"
exit 1
fi
MYSQLBINLOG="$(which mysqlbinlog)"
if [ -z "$MYSQLBINLOG" ]; then
echo "Error: MYSQLBINLOG not found"
exit 1
fi
# Get the current date and time in dd-mm-yyyy-HH:MM:SS format
NOW="$(date +"%d-%m-%Y-%T")"
DEST="/home/db-backup"
mkdir $DEST/Increment_backup.$NOW
LATEST=$DEST/Increment_backup.$NOW
$MYSQLADMIN -u$MyUSER -p$MyPASS flush-logs
newestlog=`ls -d /usr/local/mysql/data/mysql-bin.?????? | sed 's/^.*\.//' | sort -g | tail -n 1`
echo $newestlog
for file in `ls /usr/local/mysql/data/mysql-bin.??????`
do
if [ "/usr/local/mysql/data/mysql-bin.$newestlog" != "$file" ]; then
echo $file
$CP "$file" $LATEST
fi
done
for file1 in `ls $LATEST/mysql-bin.??????`
do
$MYSQLBINLOG $file1>$file1.$NOW.sql
$GZIP -9 "$file1.$NOW.sql"
$RM "$file1"
done
$RSYNC -avz $LATEST /home/rsync-back
First of all, when the script is scheduled through crontab it does not show any errors. How can I find out whether the script is running or not?
Secondly, what is the correct way to execute a shell script from crontab?
Some blogs suggest changing environment variables. What would be the best solution?
When I ran echo $PATH, I got this:
/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/mysql/bin:/opt/android-sdk-linux/tools:/opt/android-sdk-linux/platform-tools:~/usr/lib/jvm/jdk-6/bin
The problem is probably that your $PATH is different in the manual environment from that under which crontab runs. Hence, which can't find your executables. To fix this, first print your path in the manual environment (echo $PATH), and then manually set up PATH at the top of the script you run in crontab. Or just refer to the programs by their full path.
Edit: Add this near the top of your script, before all the which calls:
export PATH="/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/mysql/bin:/opt/android-sdk-linux/tools:/opt/android-sdk-linux/platform-tools:~/usr/lib/jvm/jdk-6/bin"
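Alternatively, skip the which lookups and hard-code full paths. A minimal sketch, assuming the MySQL tools live under /usr/local/mysql/bin (as your PATH suggests); verify each location with which on your machine:
MYSQL=/usr/local/mysql/bin/mysql
MYSQLADMIN=/usr/local/mysql/bin/mysqladmin
MYSQLBINLOG=/usr/local/mysql/bin/mysqlbinlog
GZIP=/bin/gzip          # assumed location; check with: which gzip
RSYNC=/usr/bin/rsync    # assumed location; check with: which rsync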
Another, more general approach is to have cron run the user's bash login environment. In addition to PATH, this also picks up LD_LIBRARY_PATH, LANG, and any other environment variables set at login. To do this, write your crontab entry like:
34 12 * * * bash -l /home/db-backup/mysqlbackup.sh
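As for knowing whether the script runs at all: cron normally mails each job's output to the crontab owner, but it is usually easier to redirect stdout and stderr to a log file you can inspect. A sketch, with an assumed log path:
*/1 * * * * /home/db-backup/mysqlbackup.sh >> /tmp/mysqlbackup.log 2>&1
If the log file never appears, cron is not starting the script; if it appears and contains errors, the script starts but fails.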
My issue was that I set the cron job in /etc/cron.d (CentOS 7). It seems that when doing so you need to specify the user who executes the script, unlike when a cron job is added to a user-level crontab.
All I had to do was:
*/1 * * * * root perl /path/to/my/script.sh
*/5 * * * * root php /path/to/my/script.php
Where "root" states that I am running the script as root.
Also make sure the following are defined at the top of the file. Your paths might be different; if you are not sure, try the commands "which perl" and "which php".
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
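Putting that together, a complete /etc/cron.d file might look like this (a sketch; the file name, script path and log path are assumptions):
# /etc/cron.d/db-backup
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
*/1 * * * * root /home/db-backup/mysqlbackup.sh >> /var/log/db-backup.log 2>&1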
Just import your user profile at the beginning of the script, e.g.:
. /home/user/.profile
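In the backup script from the question, that line would sit right below the shebang (a sketch; the actual home directory is an assumption):
#!/bin/sh
# Pull in the login environment (PATH etc.) so the which lookups succeed under cron
. /home/user/.profile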
In your crontab file, your script entry can be:
* * * * * /bin/bash /path/to/your/script
Make sure to use full paths in your script. I had a look through it and did not spot any relative paths, but I mention it just in case I missed one.
First run the command env > env.tmp, then run cat env.tmp, copy the full PATH=.................. line, and paste it into crontab -e on a line before your cron jobs, as shown below.
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
PATH=/root/.nvm/versions/node/v18.12.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/sna>
# m h dom mon dow command
0 3 * * * bash /home/ubuntu/test.sh
* * * * * bash /home/ubuntu/test2.sh >> /home/ubuntu/cronlog.log
* 3 * * * bash /home/ubuntu/logscron.sh
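Spelled out as commands, the workflow is roughly:
env > env.tmp    # capture the environment of your interactive shell
cat env.tmp      # copy the full PATH=... line from the output
crontab -e       # paste that PATH line above your job entries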
I am trying to automate some patching steps. I have written a script to back up a file and then replace a path everywhere in that file. In testing, the backup works fine, but the find-and-replace reports success without actually changing anything. I am trying to use sed, but I am not married to it, so if there is a cleaner way I am not opposed. Please see my code example below:
#!/bin/bash
#set -x
nodemanager="/u01/app/oracle/admin/domain/mserver/ADF_INT/nodemanager/"
bindirectory="/u01/app/oracle/admin/domain/mserver/ADF_INT/bin/"
ouilocation="/u01/app/oracle/product/fmw/middleware12c/oui/bin/"
date=$(date +"%d-%m-%y")
echo $date
read -p "Please enter the current jdk path: " oldjdk
read -p "Please enter the new jdk path: " newjdk
echo "Backing up an uploading files to remote server...."
cd $nodemanager || exit
cp nodemanager.properties nodemanager_$date.bkp
if [ $? -ne 0 ]; # Again checking if the last operation was successful; if not, exit the script
then
echo -e "nodemanager.properties backup failed"
echo -e "Terminating script"
exit 0
fi
sed -i -e 's/${$oldjdk/$newjdk}/g' nodemanager.properties
if [ $? -ne 0 ]; # Again checking if the last operation was successful; if not, exit the script
then
echo -e "find and replace failed for nodemanager.properties"
echo -e "Terminating script"
exit 0
fi
echo -e "nodemanager.properties operations completed successfully\n"
Thanks
JJ
To save you some trouble on this: I figured it out. The / is not a fixed part of sed's syntax; the delimiter can be any character that does not clash with the path, so I used this:
sed -i "s+$oldjdk+$newjdk+g" file
Thanks
JJ
I've got the following bash script, which checks that a string is non-empty using -n and that it equals "OK" after converting the variable to uppercase with ${1^^}:
#!/bin/bash
myfunction() {
result=0
if [[ -n $1 && ${1^^} = "OK" ]]; then
result=1
fi
echo $result >> /home/[REDACTED]/mylog
}
myfunction "ok"
myfunction "NO"
I made it executable using:
sudo chmod +x ./test
I then call it with either of the following:
bash ./test
./test
And the file always contains the expected result:
1
0
However, when it is run from cron, the file contains:
0
0
I set up cron with this command:
sudo crontab -e
And here is the content:
* * * * * /home/[REDACTED]/test
I'm probably overlooking something obvious like syntax or cron environment... any suggestions?
Thanks!
Not sure why, but deleting the first line of the file and retyping it resolved the issue:
#!/bin/bash
Thanks all for the help!
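A likely explanation is that the original first line contained an invisible character (a UTF-8 BOM or a Windows carriage return), which can cause cron's shell to interpret the script with something other than bash, where ${1^^} does not behave as expected. A quick way to check for such characters, assuming the script file is named test:
head -n 1 test | cat -A
# A clean shebang prints exactly:  #!/bin/bash$
# ^M before the $ indicates a Windows line ending; a leading M-oM-;M-? indicates a BOM.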
I am trying to run the following cron job from bash (RHEL 7.4). It is an entry-level Postgres DB backup script I wrote:
#!/bin/bash
# find latest file
echo $PATH
cd /home/postgres/log/
echo "------------ backup starts-----------"
latest_file=$( ls -t | head -n 1 | grep '\.log$' )
echo "latest file"
echo $latest_file
# find older files than above
echo "old file"
old_file=$( find . -maxdepth 1 -name "postgresql*" ! -newer $latest_file -mmin +1 )
if [ -f "$old_file" ]
then
echo $old_file
file_name=${old_file##*/}
echo "file name"
echo $file_name
# zip older file
tar czvf /home/postgres/log/archived_logs/$old_file.gz /home/postgres/log/$file_name
rm -rf /home/postgres/log/$file_name
else
echo "no old file found"
fi
The above runs correctly from the shell and performs the intended tasks. It also echoes the needed info.
I have installed it as the postgres user (not root) with crontab -e:
*/2 * * * * /home/postgres/log/rollup.sh >> /home/postgres/log/logfile.csv 2>&1
From cron it correctly echoes the text I embedded for testing, but the commands' output does not reach the .csv file. That is not my main concern, though; my concern is that it is not running those few commands at all.
I gave it another try, changing the log file (.csv) path to /dev/null, and then the commands in the shell script do execute. I don't see what I am missing here.
The .csv file has 777 permissions, just for testing.
I have written a shell script to copy a file from a remote server and convert it to another format. After conversion, I edit the file using the sed command. The script runs successfully when executed manually but fails when executed through crontab.
The crontab entry is:
*/1 * * * * /script/testshell.sh
Below is the shell script code:
#!/bin/bash
file="/script/test_data.csv" if [ -f "$file" ] then
echo " file is present in the local machine " else
echo " file is not present in the local machine "
echo " checking the file is present in the remote server "
ssh user#IP 'if [ -f /$path ]; then echo File found ; else echo File not found; fi' fi
if [ -f "$file"] then
rm -f test_data.csv fi
scp -i /server.pem user#IP:/$path
file="/script/test_data.csv" if [ -f "$file" ] then
echo "$file found." else
echo "$file not found." fi
if [ -f "$file" ] then echo " converting csv to json format ....." fi
./csvjson.sh input.csv output.json
sed -e '/^$/d' -e 's/.*/&,/; 1 i\[' ./output.json | sed ' $ a \]' > hello.json
When I run the script manually, it works perfectly, but it does not work from crontab.
What exactly doesn't work from cron? What's the output, and are there any errors from cron?
A few things to try:
cron doesn't run your profile by default, so if your script needs anything set in it, include it in the crontab command, e.g.
". ./.bash_profile;/script/testshell.sh"
I can't see $path set anywhere, although it's used to test whether a file exists. Are you setting it manually somewhere, so that it won't be set from cron?
Some of your scripts and files are specified relative to the current directory (./). From cron, the current directory will be your home folder; is that right, or do you need to change directory in the script or use a full path for them? (See the sketch below.)
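For that last point, one way to make the relative paths safe under cron is to change to the script's own directory at the top of testshell.sh (a sketch, not part of the original script):
# run from the directory the script lives in, regardless of cron's working directory
cd "$(dirname "$0")" || exit 1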
Hope that helps
I am working on a horribly old machine without logrotate.
[ Actually it has busybox 0.6, which is 'void of form' for most purposes. ]
I have openvpn running and I'd like to be able to see what it's been up to. The openvpn I'm using can output progress info to stdout or to a named log file.
I tried and failed to find a way to stop it using one log file and start it on another. Maybe some SIGUSR or something will make it close and re-open the output file, but I can't find it.
So I wrote a script which reads from stdin, and directs output to a rotating log file.
So now all I need to do is pipe the output from openvpn to it.
Job done.
Except that if I kill openvpn, the script which is processing its output just runs forever. There's nothing more it can do, so I'd like it to die automatically.
Is there any way in the script to trap this situation, something like "EOF on STDIN", or to find the process ID that is feeding my stdin, or whatever?
I see that this resembles the question
"Tee does not exit after pipeline it's on has finished"
but it's not quite that, in that I have no control over the behaviour of openvpn (save that I can kill it). I do have control over the script that receives the output of openvpn, but I can't work out how to detect the death of openvpn, or of the pipe from it to me.
My upper-level script is roughly:
vpn_command="openvpn --writepid ${sole_vpn_pid_file} \
--config /etc/openvpn/openvpn.conf \
--remote ${VPN_HOST} ${VPN_PORT} "
# collapse sequences of multiple spaces to one space
vpn_command_tight=$(echo -e ${vpn_command}) # must not quote the parameter
# We pass the pid file over explicitly in case we ever want to use multiple VPNs.
( ./${launchAndWaitScriptFullName} "${vpn_command_tight}" "${sole_vpn_pid_file}" 2>&1 | \
./vpn-log-rotate.sh 10000 /var/log/openvpn/openvpn.log ) &
If I kill the openvpn process, the "vpn-log-rotate.sh" one stays running.
That script is:
#!/bin/sh
# #file vpn-log-rotate.sh
#
# #brief rotates stdin out to 2 levels of log files
#
# #param linesPerFile Number of lines to be placed in each log file.
# #param logFile Name of the primary log file.
#
# Archives the last log files on entry to .last files, then starts clean.
#
# #FIXME DGC 28-Nov-2014
# Note that this script does not die if the previous stage of the pipeline dies.
# It is possible that use of "trap SIGPIPE" or similar might fix that.
#
# #copyright Copyright Dexdyne Ltd 2014. All rights reserved.
#
# #author DGC
linesPerFile="$1"
logFile="$2"
# The location of this script as an absolute path. ( e.g. /home/Scripts )
scriptHomePathAndDirName="$(dirname "$(readlink -f $0)")"
# The name of this script
scriptName="$( basename $0 )"
. ${scriptHomePathAndDirName}/vpn-common.inc.sh
# Includes /sbin/script_start.inc.sh
# Reads config file
# Sets up vpn_temp_directory
# Sets up functions to obtain process id, and check if process is running.
# includes vpn-script-macros
# Remember our PID, to make it easier for a supervisor script to locate and kill us.
echo $$ > ${vpn_log_rotate_pid_file}
onExit()
{
echo "vpn-log-rotate.sh is exiting now"
rm -f ${vpn_log_rotate_pid_file}
}
trap "( onExit )" EXIT
logFileRotate1="${logFile}.1"
# Currently remember the 2 previous logs, in a rather knife-and-fork manner.
logFileMinus1="${logFile}.minus1"
logFileMinus2="${logFile}.minus2"
logFileRotate1Minus1="${logFileRotate1}.minus1"
logFileRotate1Minus2="${logFileRotate1}.minus2"
# If the primary log file exists, rename it to be the rotated version.
rotateLogs()
{
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileRotate1}"
fi
}
# If the log files exist, rename them to be the archived copies.
archiveLogs()
{
if [ -f "${logFileMinus1}" ]
then
mv -f "${logFileMinus1}" "${logFileMinus2}"
fi
if [ -f "${logFile}" ]
then
mv -f "${logFile}" "${logFileMinus1}"
fi
if [ -f "${logFileRotate1Minus1}" ]
then
mv -f "${logFileRotate1Minus1}" "${logFileRotate1Minus2}"
fi
if [ -f "${logFileRotate1}" ]
then
mv -f "${logFileRotate1}" "${logFileRotate1Minus1}"
fi
}
archiveLogs
rm -f "${LogFile}"
rm -f "${logFileRotate1}"
while true
do
lines=0
while [ ${lines} -lt ${linesPerFile} ]
do
read line
lines=$(( ${lines} + 1 ))
#echo $lines
echo ${line} >> ${logFile}
done
mv -f "${logFile}" "${logFileRotate1}"
done
exit_0
Change this:
read line
to this:
read line || exit
so that if read-ing fails (because you've reached EOF), you exit.
Better yet, change it to this:
IFS= read -r line || exit
so that you don't discard leading whitespace, and don't treat backslashes as special.
And while you're at it, be sure to change this:
echo ${line} >> ${logFile}
to this:
printf '%s\n' "$line" >> "$logFile"
so that you don't run into problems if $line has a leading -, or contains * or ?, or whatnot.
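Putting those changes together, the inner read loop of vpn-log-rotate.sh would look roughly like this (a sketch using the question's variable names):
while [ ${lines} -lt ${linesPerFile} ]
do
    # exit the whole script when stdin hits EOF, i.e. when openvpn dies and the pipe closes
    IFS= read -r line || exit
    lines=$(( lines + 1 ))
    printf '%s\n' "$line" >> "$logFile"
done
Because exit runs the EXIT trap, the onExit handler still removes the pid file when the writer goes away.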