Error with fi in if/then/else shell script

There is a shell script with if/then/else. Here is part of the code (not the whole script, just the relevant part):
DAYOFWEEK=$(date +"%u")
echo $DAYOFWEEK
if [ "$DAYOFWEEK" -eq 1 ]; then
echo "OK. It's Monday. We are running a weekly backup on Mondays."
echo "`date` - Deleting weekly remote backup files."
sftp -oPort=199 $SFTPUSER@$SFTPSITE <<EOF;
cd user;
cd weekly;
ls -al;
rm *;
bye;
EOF;
echo "DONE"
rsync -ave "ssh -p 199" /root/backups/files/$THESITE/daily/ root@coolsite.org:/root/user/weekly
else
echo "No weekly backups today"
fi
I get an error:
./backup.sh: 120: ./backup.sh: Syntax error: end of file unexpected (expecting "fi")
root@developementbox:~/backups#
It doesn't like the fi and I don't understand what is wrong.

The EOF terminating your sftp commands must start at the beginning of the line and must not be followed by a ;.
What is happening is that sftp keeps consuming the rest of your shell script, including the fi, before returning control, which leaves your if unterminated because the fi has been treated as an sftp command instead of shell syntax.
Remove the spaces or tabs before EOF, and the ; after each of the sftp commands including the EOF terminator, and you should be good to go.
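For reference, a sketch of that block with those fixes applied (untested; variable names as in the question):
sftp -oPort=199 $SFTPUSER@$SFTPSITE <<EOF
cd user
cd weekly
ls -al
rm *
bye
EOF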

Related

How to delay a command substitution in a shell script?

I have a bash script that, in a nutshell, generates a file via another command (which works fine), and later in the script I run wc on that generated file. My problem is that I'm using command substitution around the wc command, and when I execute the script the substitution runs immediately instead of waiting for the file to be generated earlier in the script. What are my options?
The script is a shell program that runs Oracle SQL*Loader to load data into an Oracle table. The SQL*Loader command generates a file with rejected records, and I am trying to do a word count on it. This is my code:
#!/bin/sh
# Set variables
program_name="My Program Name"
input_dir="/interface/inbound/hr_datafile"
log_dir="/interface/inbound/log"
bad_dir="/interface/inbound/hr_bad"
archive_dir="/interface/inbound/hr_archive"
input_base="Import_20171213"
input_ext="csv"
control_file="data_loader.ctl"
exit_status=0
d=`date "+%Y%m%d"`
t=`date "+%H%M"`
# Check if data file exists, count records
if [ -f ${input_dir}/${input_base}*.${input_ext} ]; then
data_file_name=`ls -1rt ${input_dir}/${input_base}*.${input_ext} | head -1 | cut -c 32-100`
data_file_base=${data_file_name%.*}
echo "Data file name: " ${data_file_name}
echo "Data file base: " ${data_file_base}
no_of_recs=`expr $(wc -l ${input_dir}/${data_file_name} | cut -c 1-8) - 1`
echo "DEBUG no_of_recs: " ${no_of_recs}
no_of_errs=0
else
echo
echo
echo
echo "----------------------------- ${program_name} ------------------------------------------"
echo "----------------------------------- Error report : ------------------------------------------------"
echo
echo "Please place your Data files in the UNIX directory => "${input_dir}
echo "${program_name} Process exiting ..."
echo
echo "---------------------------------------------------------------------------------------------------"
echo
echo
echo
echo
echo
exit 1
fi
# Run SQL*Loader
echo
echo "==> Loading Data...into MY_TABLE table"
echo
# Create a temporary control file to pass the data file name in
cat $XX_TOP/bin/${control_file} | sed "s/:FILENAME/${data_file_name}/g" > $XX_TOP/bin/${data_file_base}_temp.ctl
# NOTE: The following sqlldr format is used to "hide" the oracle user name and password
sqlldr errors=100000 skip=1 control=$XX_TOP/bin/${data_file_base}_temp.ctl data=${input_dir}/${data_file_name} log=${log_dir}/${data_file_base}.log bad=${bad_dir}/${data_file_base}.bad <<-END_SQLLDR > /dev/null
apps/`cat ${DB_SCRIPTS}/.${ORACLE_SID}apps`
END_SQLLDR
exit_status=$?
echo "DEBUG exit_status " ${exit_status}
# Remove temporary control file
rm -rf $XX_TOP/bin/${data_file_base}_temp.ctl
# Check for Errors
if [ -f ${bad_dir}/${data_file_base}.bad ]; then
echo
echo "----------------------------- ${program_name} ------------------------------------------"
echo "----------------------------------- Error report : ------------------------------------------------"
echo
grep 'Record' ${log_dir}/${data_file_base}.log > ${log_dir}/${data_file_base}.rec
grep 'ORA' ${log_dir}/${data_file_base}.log > ${log_dir}/${data_file_base}.err
paste ${log_dir}/${data_file_base}.rec ${log_dir}/${data_file_base}.err ${bad_dir}/${data_file_base}.bad
echo
echo "<---------------------------------End of Error Report---------------------------------------------->"
echo
# Count error records
no_of_errs=$(wc -l ${bad_dir}/${data_file_base}.bad | cut -c 1-8)
no_of_recs=$(expr ${no_of_recs} - ${no_of_errs})
# Remove temp files
rm ${log_dir}/${data_file_base}.rec
rm ${log_dir}/${data_file_base}.err
rm ${bad_dir}/${data_file_base}.bad
else
echo "Bad File not found at ${bad_dir}/${data_file_base}.bad"
fi
if (( ${no_of_errs} > 0 )); then
echo "Error found in data file..."
exit_status=1
else
# Archive the data file if no errors
mv ${input_dir}/${data_file_name} ${archive_dir}/${data_file_base}_$d"_"$t.${input_ext}
echo "Data file archived to ${archive_dir}"
exit_status=0
fi
echo
echo
echo
echo "----------------------------- ${program_name} ------------------------------------------"
echo
echo "Total records errored out :" ${no_of_errs}
echo "Total records successfully read :" ${no_of_recs}
echo "---------------------------------------------------------------------------------------------------"
echo
# Final Exit Status
if [ ${exit_status} -eq 1 ]; then
echo "==> Exiting process...Status : ${exit_status}"
exit 1
fi
The file referenced in the if condition is the generated file, so I'm checking whether the file exists and then running wc on it. I know it's executing prematurely because this wc error appears in my script output before it should:
Data file name: Import_20171213.csv
Data file base: Import_20171213
DEBUG no_of_recs: 27
==> Loading Data...into MY_TABLE table
wc: 0653-755 Cannot open /interface/inbound/hr_bad/Import_20171213.bad.
Username:
SQL*Loader: Release 10.1.0.5.0 - Production on Thu Dec 21 12:42:39 2017
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 27
Program exited with status 2
In the code, the wc command on the .bad file is performed after the sqlldr section, yet the log shows the error occurring before sqlldr is invoked.
Any ideas would be much appreciated! Thanks!
You aren't seeing all of the output you expect - anything after the call to SQL*Loader is missing - and the error from wc is coming in the wrong place, before instead of after the Username: prompt. But it shouldn't be erroring at all given the construct you've used - if the file doesn't exist, the if test should stop that line being reached.
The issue is that the shell isn't seeing the end of the heredoc. All of the commands beyond the start of the heredoc are consumed as part of the heredoc, so their output isn't going anywhere or being displayed on the terminal, and something in the way that text is evaluated causes the wc to run unexpectedly even though the file doesn't exist. You're seeing the stderr output from that.
All of that happens before SQL*Loader starts, so you see its Username: prompt afterwards.
From the comments it seems the heredoc-terminating END_SQLLDR is actually indented in your real code, which isn't reflected in the posted question. Since you used the <<- heredoc form (which strips only leading tabs), that implies it is indented with spaces rather than tabs, so it is not being recognised as the end of the heredoc.
From The Linux Documentation Project:
The closing limit string, on the final line of a here document, must start in the first character position. There can be no leading whitespace. Trailing whitespace after the limit string likewise causes unexpected behavior. The whitespace prevents the limit string from being recognized.
Removing the whitespace, so that the limit string is the first thing on the line, will fix it.
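For illustration, a minimal sketch of the fix (placeholder control file and password, not the OP's real values): keep the terminator flush against the left margin.
sqlldr control=placeholder.ctl <<-END_SQLLDR > /dev/null
apps/placeholder_password
END_SQLLDR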

Ignore specific error conditions with SFTP/SCP File transfer

I am trying to bash script a daily file transfer between 2 machines. The script runs on the destination machine and pulls a file from the source machine. Occasionally the source machine will not have a file ready, this is acceptable.
I would like the script to exit 0 on successful transfer, and when there is no file available to be transferred. I would like the script to exit non 0 on any other failure condition (connectivity, etc).
I tried the following 2 approaches. I found that with scp the return code is always 1 no matter what the actual error was, so it's hard for the script to differentiate between my acceptable error condition and others.
The sftp method seems to always return 0 no matter what happens during the session. Any suggestions?
scpGet(){
  echo "Attempting File Transfer"
  scp -P $REMOTEPORT $REMOTEHOST:$REMOTEPATH $LOCALPATH
  echo $?
}
sftpGet(){
cd $LOCALPATH
sftp -P $REMOTEPORT $REMOTEHOST << EOF
get $REMOTEPATH
quit
EOF
echo $?
}
I haven't validated this, so please check that it actually does what you want - but since you are apparently running scp without a password, you can probably execute arbitrary commands remotely to test for the existence of the file. Just be careful.
scpGet() {
    echo "Attempting File Transfer"
    if scp -P "$REMOTEPORT" "$REMOTEHOST:$REMOTEPATH" "$LOCALPATH"
    then echo "$(ls -l "$LOCALPATH") - successfully retrieved"
    elif ssh -p "$REMOTEPORT" "$REMOTEHOST" ls -l "$REMOTEPATH"   # note: ssh takes a lowercase -p for the port
    then echo "$REMOTEHOST:$REMOTEPATH exists, but I can't retrieve it!" >&2
         exit $oopsieCode                                         # non-zero status of your choice
    elif (( 2 == $? ))   # the remote ls failed to find the file - verify this exit code on your system
    then echo "File not ready. Ignoring."
    else : # handle errors other than "not found" here
    fi
}
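A hypothetical invocation, assuming the connection variables are set earlier in the script (all names here are placeholders):
REMOTEPORT=22
REMOTEHOST=source.example.com
REMOTEPATH=/data/outgoing/daily.csv
LOCALPATH=/data/incoming/
scpGet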

Why doesn't my if statement with backticks work properly?

I am trying to make a Bash script where the user will be able to copy a file and see whether or not it succeeded. But every time the copy is done, properly or not, the second message, "Copy was not done", is shown. Any idea how to solve this?
if [ `cp -i $files $destination` ];then
echo "Copy successful."
else
echo "Copy was not done"
fi
What you want is
if cp -i "$file" "$destination"; then #...
Don't forget the quotes.
Your version:
if [ `cp -i $files $destination` ];then #..
will always execute the else branch.
The if statement in the shell takes a command.
If that command succeeds (returns 0, which gets assigned into $?), then the condition succeeds.
If you do if [ ... ]; then, then it's the same as
if test ... ; then because [ ] is syntactic sugar for the test command/builtin.
In your case, you're passing the stdout* of the cp operation as arguments to test.
The stdout of a cp operation will be empty (cp generally only outputs errors, and those go to stderr). A test invocation with no arguments evaluates to false, which gives a nonzero exit status, and thus you always get the else branch.
*both the $() and backtick forms of command substitution capture the stdout of the command they run
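Putting it together with the variable names from the question:
if cp -i "$files" "$destination"; then
    echo "Copy successful."
else
    echo "Copy was not done"
fi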
With backticks you are testing the output of the cp command, not its exit status. You also don't need the test command (square brackets) here.
Just use:
if cp ... ; then
...
In addition to the output-versus-status issue correctly pointed out in the other answer, you can use a compound command to do exactly what you are attempting without the full if ... then ... else ... fi syntax. For example:
cp -i "$files" "$destination" && echo "Copy successful." || echo "Copy was not done"
Which does essentially the same thing as the if syntax (with the caveat that if the first echo itself failed, the || branch would also run). Basically:
command && 'next cmd if 1st succeeded'
and
command || 'next cmd if 1st failed'
You are simply using command && 'next cmd if 1st succeeded' as the command in command || 'next cmd if 1st failed'. Together it is simply:
command && 'next cmd if 1st succeeded' || 'next cmd if 1st failed'
Note: make sure to always quote your variables to prevent word-splitting, pathname expansion, and the like.
Try:
cp -i "$files" "$destination"
# check the return value in $? to see whether the cp command was successful
if [ "$?" -eq 0 ]; then
    echo "Copy successful."
else
    echo "Copy was not done"
fi

Check for existence of directory always fails in Bash script

I have a problem that has been bugging me for a few hours now. I created a parameter --file-dir using getopt, which assigns a directory for the program to use. Following the parameter, the user can supply whatever directory they please. To keep the program stable, I check whether that directory even exists. The following code is what I currently have, and it always returns "Directory does not exist. Terminating.", even when I pass it my /home directory.
-a|--file-dir) FILE_DIR=$2 ;
if [ ! -d "$FILE_DIR" ]; then
echo "Directory does not exist. Terminating." ;
exit 1;
else
echo "Directory exists." ;
fi ;
shift;;
Any input is much appreciated. The getopt parsing works fine with echo tests and such, but fails when checking for directories.
It would be a good idea to check that you're really getting the right argument there:
-a|--file-dir) FILE_DIR=$2 ;
if [ ! -d "$FILE_DIR" ]; then
echo "Directory \"$FILE_DIR\" does not exist. Terminating." ;
exit 1;
else
echo "Directory exists." ;
fi ;
shift;;
If not, the problem is certainly not in the check itself but somewhere in your argument-parsing loop.
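For context, a complete getopt parsing loop typically looks like this hypothetical sketch (not the OP's actual loop); if $2 is not the directory you expect when the -a|--file-dir branch runs, the fault is in how the loop shifts its arguments:
OPTS=$(getopt -o a: --long file-dir: -n "$0" -- "$@") || exit 1
eval set -- "$OPTS"
while true; do
    case "$1" in
        -a|--file-dir)
            FILE_DIR=$2
            [ -d "$FILE_DIR" ] || { echo "Directory \"$FILE_DIR\" does not exist. Terminating."; exit 1; }
            shift 2 ;;    # consume the option and its argument
        --) shift; break ;;
        *) break ;;
    esac
done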
I had an issue with the same behavior: checking for a directory in the command line worked as expected, but always failed when done in a script.
I was running this script under git bash for Windows:
while read -r i; do
[ ! -d "$i" ] && echo "No $i"
done < "$1"
Windows' line endings (\r\n) can cause issues when splitting lines. Each test actually checks for directory\r instead of directory. Therefore, I needed to run the read command with the correct delimiter:
while IFS=$'\r\n' read -r i; do
It is possible that OP also had a similar issue, where non-printable characters got in the way.
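An alternative sketch (not from the original answer) is to strip a trailing carriage return from each line before the test:
while read -r i; do
    i=${i%$'\r'}    # drop a trailing CR left over from Windows line endings
    [ ! -d "$i" ] && echo "No $i"
done < "$1"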

LOCAL_DIR variable prepends the script's current directory (totally not what I expect)

Consider the following simple rsync script I am trying to slap together:
#!/bin/bash
PROJECT="$1"
USER=stef
LOCAL_DIR="~/drupal-files/"
REMOTE_HOST="hostname.com"
REMOTE_PROJECTS_PATH=""
# Should not have anything to change below
PROJECT_LIST="proj1 proj2 proj3 quit"
echo "/nSelect project you wish to rsync\n\n"
select PROJECT in $PROJECT_LIST
do
if [ "$PROJECT" = "quit" ]; then
echo
echo "Quitting $0"
echo
exit
fi
echo "Rsynching $PROJECT from $REMOTE_HOST into" $LOCAL_DIR$PROJECT
rsync -avzrvP $USER@$REMOTE_HOST:/var/projects/$PROJECT/ $LOCAL_DIR$PROJECT
done
echo "Rsync complete."
exit;
The variable $LOCAL_DIR$PROJECT used in the rsync command always has the script's own path prepended:
OUTPUT:
Rsynching casa from hostname.com.com into ~/drupal-files/casa
opening connection using: ssh -l stef hostname.com rsync --server --sender -vvlogDtprz e.iLsf . /var/groupe_tva/casa/
receiving incremental file list
rsync: mkdir "/home/stef/bin/~/drupal-files/proj1" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(605) [Receiver=3.0.9]
The line with mkdir should not have /home/stef/bin in it - why is bash adding the script's running directory to the variable?
Thanks
LOCAL_DIR="~/drupal-files/"
The string is in quotes, so there is no tilde expansion, and the variable contains the literal string ~/drupal-files/; rsync then treats that as a path relative to the current directory (here /home/stef/bin).
Remove the quotes.
$ x="~/test"; echo $x
~/test
$ x=~/test; echo $x
/home/user/test
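An alternative (not from the original answer), if you'd rather keep the quotes, is to use $HOME instead of the tilde:
LOCAL_DIR="$HOME/drupal-files/"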
