Assigning variables causes SFTP to fail - bash

I'm trying to write a shell script that grabs a set of parameters from a text file and then performs SFTP based on those parameters. Basically, I'm taking a daily webstats log and moving it to a central location.
The issue I'm having is that the SFTP fails depending on how I assign the variables. I have debugged by echoing the variables inside the loop, and the while loop reads them correctly. The error I get is that the connection is closed.
#!/bin/sh
source /home/ntadmin/webstats/bin/webstats.profile
source /home/ntadmin/webstats/bin/webstats.blogs.profile

DATE=`date +%m%d%Y`
SOURCE_FILE="`echo $WS_BC_SOURCE_FILE | sed -e 's/mmddyyyy/'$DATE'/'`"

IFS=","
while read WS_BLOG_NAME WS_BLOG_SOURCE_VAR WS_BLOG_DEST_VAR WS_BC_SERVER1
do
    # Step 1 - SFTP
    cd $PERL_DIR
    if $PERL_DIR/sftp.pl $WS_BC_SERVER1 $WS_BC_ID $WS_BC_PW $WS_BLOG_SOURCE_VAR/$SOURCE_FILE $WS_BLOG_DEST_VAR/$SOURCE_FILE
    then
        echo 'SFTP complete'
    else
        echo 'SFTP failed!'
        exit 1
    fi

    # Step 2 - Check that the transfer was successful (that the file exists)
    if [ -e $WS_BLOG_DEST_VAR/$SOURCE_FILE ]
    then
        echo "FTP of $WS_BLOG_SOURCE_VAR/$SOURCE_FILE from $WS_BC_SERVER1 was successful"
    else
        echo "FTP of $WS_BLOG_SOURCE_VAR/$SOURCE_FILE from $WS_BC_SERVER1 was not successful!"
        exit 1
    fi
done < blogs_array.txt

exit 0

There is not enough information here to determine exactly what is wrong, but here is a debugging method.
Try replacing the actual sftp command (the Perl script) with a debug script like the one below; it prints the arguments it receives, so you should be able to locate the problem quickly.
#!/usr/bin/perl
# Print each argument wrapped in <>, so stray whitespace or empty arguments are visible.
print "arguments passed to $0\n";
$i = 0;
while (defined $ARGV[$i]) {
    print "arg " . ($i + 1) . " is <$ARGV[$i++]>\n";
}
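As a usage sketch (the name debug_args.pl is my own placeholder, not from the original answer): save the Perl script next to sftp.pl and call it from inside the while loop with exactly the same arguments. Because IFS is set to "," for the whole script, unquoted expansions are split on commas rather than whitespace, and fields read from blogs_array.txt may carry stray spaces or carriage returns; the debug output makes both visible.

# Hypothetical debug run: debug_args.pl is the Perl script above, saved in $PERL_DIR.
chmod +x $PERL_DIR/debug_args.pl

# Call it in place of sftp.pl inside the loop:
$PERL_DIR/debug_args.pl $WS_BC_SERVER1 $WS_BC_ID $WS_BC_PW \
    $WS_BLOG_SOURCE_VAR/$SOURCE_FILE $WS_BLOG_DEST_VAR/$SOURCE_FILE

# You expect five "arg N is <...>" lines; a different count, or values with
# stray whitespace inside the <>, points at the variable assignment / IFS issue.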

Related

using two variables in a bash find and replace [duplicate]

I am trying to automate some patching steps, and I have written a script to back up a file and then replace a path everywhere it appears in that file. In testing, backing up the files worked fine, but the find-and-replace, even though it reports success, didn't actually change anything. I am trying to use sed, but I am not married to it, so if there is a cleaner way I am not opposed. Please see my code example below:
#!/bin/bash
#set -x
nodemanager="/u01/app/oracle/admin/domain/mserver/ADF_INT/nodemanager/"
bindirectory="/u01/app/oracle/admin/domain/mserver/ADF_INT/bin/"
ouilocation="/u01/app/oracle/product/fmw/middleware12c/oui/bin/"
date=$(date +"%d-%m-%y")
echo $date

read -p "Please enter the current jdk path: " oldjdk
read -p "Please enter the new jdk path: " newjdk

echo "Backing up and uploading files to remote server...."
cd $nodemanager || exit

cp nodemanager.properties nodemanager_$date.bkp
if [ $? -ne 0 ]; # Checking if the last operation was successful; if not, exit the script
then
    echo -e "nodemanager.properties backup failed"
    echo -e "Terminating script"
    exit 0
fi

sed -i -e 's/${$oldjdk/$newjdk}/g' nodemanager.properties
if [ $? -ne 0 ]; # Again checking if the last operation was successful; if not, exit the script
then
    echo -e "find and replace failed for nodemanager.properties"
    echo -e "Terminating script"
    exit 0
fi

echo -e "nodemanager.properties operations completed successfully\n"
Thanks
JJ
To save you some trouble on this: I figured it out. The / is not required by sed; the delimiter can be any character that doesn't clash with the path, so I used this:
sed -i "s+$oldjdk+$newjdk+g" file
Thanks
JJ
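To illustrate the alternate-delimiter trick with some made-up paths (these exact values are just an example, not from JJ's environment):

#!/bin/bash
# Hypothetical values, for illustration only.
oldjdk="/u01/app/oracle/jdk1.8.0_271"
newjdk="/u01/app/oracle/jdk1.8.0_291"

# Double quotes let the shell expand the variables; using + as the sed
# delimiter means the slashes inside the paths no longer clash with s///.
echo "JavaHome=$oldjdk" > nodemanager.properties
sed -i "s+$oldjdk+$newjdk+g" nodemanager.properties
cat nodemanager.properties    # prints: JavaHome=/u01/app/oracle/jdk1.8.0_291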

BASH How to split my terminal during a long command line, and communicate with process?

First, sorry for the title; it's really difficult to summarize my problem in a catch phrase.
I would like to write a bash script which makes a backup of a target directory; the backup goes to a server over ssh.
My strategy is to use rsync, which does the copy properly and supports ssh connections.
BUT the problem is that when I use rsync to copy heavy data it takes some time, and during this time I want to print a loader.
My question is: how can I print a loader during the rsync process, and close the loader when the copy is finished? I tried to launch rsync in the background with & but I don't know how to communicate with this process.
My script:
#!/bin/bash
# Strategy: launch rsync from the local machine
# $1 contains the source directory, e.g.: cmd my_dir

function loader(){
    local i sp
    sp[0]="."
    sp[1]=".."
    sp[2]="..."
    sp[3]=".."
    sp[4]="."
    for i in "${sp[@]}"
    do
        echo -ne "\b$i"
        sleep 0.1
    done
}

#main
if [ $# -ne 1 ]; then
    help #a function not detailed here
    exit 1
else
    if [ $1 = "-h" ]; then
        help
    else
        echo "==== SAVE PROGRAM ===="
        echo "connection in progress ..."
        sleep 1 #esthetic only
        #I launch the copy, it works fine. rsync is launched in non-verbose mode
        rsync -az $1"/" -e "ssh -p 22" serveradress:"SAVE/" &
        w=$! #PID of the rsync command above
        #some code here to "wait" for rsync; during this time I would like to print a loading animation (function loader).
        while [ #condition ??? ]
        do
            loader
        done
        wait $w
        echo "Copy complete !"
    fi
fi
Thanks for the help.
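One common way to do this (a sketch of my own, not from the original post) is to poll the background PID with kill -0 and keep the animation going until rsync exits:

#!/bin/bash
# Minimal sketch, reusing the loader function and the placeholders from the question.
rsync -az "$1/" -e "ssh -p 22" serveradress:"SAVE/" &
w=$!                               # PID of the background rsync

while kill -0 "$w" 2>/dev/null     # succeeds as long as the process is still alive
do
    loader                         # one pass of the dot animation (~0.5 s)
done

wait "$w"                          # collect rsync's exit status
echo "Copy complete ! (rsync exited with $?)"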

Ignore specific error conditions with SFTP/SCP File transfer

I am trying to bash script a daily file transfer between 2 machines. The script runs on the destination machine and pulls a file from the source machine. Occasionally the source machine will not have a file ready, this is acceptable.
I would like the script to exit 0 on successful transfer, and when there is no file available to be transferred. I would like the script to exit non 0 on any other failure condition (connectivity, etc).
I tried the following two approaches. I found that with scp the return code is always 1 no matter what the actual error was, so it's hard for the script to differentiate between my acceptable error condition and the others.
The sftp method seems to always return 0 no matter what takes place during the command. Any suggestions?
scpGet(){
    echo "Attempting File Transfer"
    scp -P $REMOTEPORT $REMOTEHOST:$REMOTEPATH $LOCALPATH
    echo $?
}

sftpGet(){
    cd $LOCALPATH
    sftp -P $REMOTEPORT $REMOTEHOST << EOF
get $REMOTEPATH
quit
EOF
    echo $?
}
I haven't validated this, so please check that it actually does what you want. But you are apparently running scp without a password, so you can probably execute arbitrary commands remotely to test for the existence of the file. Just be careful.
scpGet() {
    echo "Attempting File Transfer"
    if scp -P $REMOTEPORT $REMOTEHOST:$REMOTEPATH $LOCALPATH
    then echo "$(ls -l $LOCALPATH) - successfully retrieved"
    elif ssh -p $REMOTEPORT $REMOTEHOST ls -l $REMOTEPATH   # run ls on the remote host; note ssh takes lowercase -p for the port
    then echo "$REMOTEHOST:$REMOTEPATH exists, but I can't retrieve it!" >&2
         exit $oopsieCode                                   # placeholder exit code
    elif (( 2 == $? ))   # $? is the exit status of the remote ls; GNU ls exits 2 when the file is missing - verify this code
    then echo "File not ready. Ignoring."
    else : # handle errors other than "not found"
    fi
}
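Alternatively, here is a sketch along the same lines (same assumptions: key-based auth and the REMOTE*/LOCALPATH variables already set) that tests for the file explicitly over ssh, treats "file absent" as success, and passes every other failure through:

scpGet() {
    echo "Attempting File Transfer"
    if ssh -p "$REMOTEPORT" "$REMOTEHOST" test -e "$REMOTEPATH"; then
        # The file is there, so a failed copy is a real error.
        scp -P "$REMOTEPORT" "$REMOTEHOST:$REMOTEPATH" "$LOCALPATH" || return 1
        echo "Transfer complete"
        return 0
    else
        rc=$?                     # exit status of the remote test
        if [ "$rc" -eq 1 ]; then
            echo "No file ready today, nothing to do."    # acceptable condition
            return 0
        else
            # ssh itself exits 255 on connection problems.
            echo "ssh failed with exit code $rc" >&2
            return "$rc"
        fi
    fi
}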

How do I kill a Bash script if it can't find or read a file?

I have written a small bash program which needs to read a file named input. I want the script to print the message "file not found" and exit (or kill itself) if it can't find the file.
Just before reading, check if the file exists:
if [ ! -f input ]; then
    echo "File Not found"
    exit 1
fi
Use the Bash Exit Handler
You can use Bash's set -e option to handle most similar situations automatically, with system-generated (but generally sensible) error messages. For example:
$ set -e; ls /tmp/doesnt_exist
ls: cannot access /tmp/doesnt_exist: No such file or directory
Note that the -e option will also cause the current shell to exit immediately with a non-zero exit status after displaying the error message. This is a quick-and-dirty way to get what you want.
Manually Test for a Readable File
If you really need a custom message, then you want to use a test conditional. For example, to ensure that a file exists and is readable you could use something similar to the following:
if [[ -r "/path/to/input" ]]; then
: # do something with "input"
else
# Send message to standard error.
echo "file not found" > /dev/stderr
# Exit with EX_DATAERR from sysexits.h.
exit 65
fi
See Also
See man 1 test for a more complete list of possible test conditionals.

how to find a file exists in particular dir through SSH

How do I check whether a file exists in a particular directory on another host over SSH?
For example:
the file is /home/tree/TEST on host1
from Host2, I want to ssh to host1 and find out whether the TEST file exists, using bash
ssh will return the exit code of the command you ask it to execute:
if ssh host1 stat /home/tree/TEST > /dev/null 2>&1
then
    echo File exists
else
    echo Not found
fi
You'll need to have key authentication set up, of course, so you avoid the password prompt.
This is what I ended up doing after reading and trying out the stuff here:
FileExists=`ssh host "test -e /home/tree/TEST && echo 1 || echo 0"`
if [ ${FileExists} = 0 ]; then
    : # do something because the file doesn't exist
fi
More info about test: http://linux.die.net/man/1/test
An extension to Erik's accepted answer.
Here is my bash script for waiting on an external process to upload a file. This will block current script execution indefinitely until the file exists.
Requires key-based SSH access although this could be easily modified to a curl version for checks over HTTP.
This is useful for uploads via external systems that use temporary file names:
rsync
transmission (torrent)
Script below:
#!/bin/bash
set -vx
#AUTH="user@server"
AUTH="${1}"
#FILE="/tmp/test.txt"
FILE="${2}"

# Check once a minute until the file appears on the remote host.
while (sleep 60); do
    if ssh ${AUTH} stat "${FILE}" > /dev/null 2>&1; then
        echo "File found";
        exit 0;
    fi;
done;
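A hypothetical invocation (the script name and argument values are placeholders of mine):

# First argument is user@host for the key-based SSH login, second is the remote path to watch.
./wait-for-file.sh backup@example.com /tmp/test.txt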
No need for echo. Can't get much simpler than this :)
ssh host "test -e /path/to/file"
if [ $? -eq 0 ]; then
    : # your file exists
fi
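One caveat worth adding (my note, not part of the answer above): ssh itself exits with 255 when the connection fails, so if you need to distinguish "file missing" from "couldn't reach the host", check the code explicitly:

ssh host "test -e /path/to/file"
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "file exists"
elif [ "$rc" -eq 255 ]; then
    echo "ssh connection failed" >&2
else
    echo "file does not exist"
fi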
