Bash script: Attempting to upload datestamped file via SFTP

I'm attempting to write a bash script that I can set as a cronjob to automatically upload a backup file via SFTP to a remote server.
The backup files on the local server are datestamped (e.g. backup-file-YYYY-mm-dd.tar.gz) and I'd like the script to only upload a file from the directory that has the same datestamp as the current date.
Any ideas on where I'm going wrong? I can't help but think I'm missing something basic but I can't think what it is!
Current broken script below:
#!/bin/bash
FILE=$backups/$(date+%Y-%m-%d).tar.gz   # <<<<< I'm guessing this is where it's slipping up
sshpass -p "remoteserverpassword" sftp -o StrictHostKeyChecking=no <user>@<remoteserverip>
cd /directory1/directory2/
put $FILE
exit 0
EOF

You are right about where it is slipping up: the date command needs a space before its format string, and its output has to be captured into a variable before being passed into the here-document. Reformatted for clarity, but you could plug the same fix into the original script too.
#!/bin/bash
backup=/tmp
today=`date +%Y-%m-%d`
FILE=$backup/$today.tar.gz
sshpass -p "remoteserverpassword" sftp -o StrictHostKeyChecking=no <user>#<remoteserverip) <<EOF
cd /directory1/directory2/
put $FILE
exit 0
EOF
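Since the files in the question are named backup-file-YYYY-mm-dd.tar.gz rather than just the date, a variant that matches that pattern and skips the upload when today's file is missing could look like this (a sketch; the backup directory and the placeholders are assumptions, adjust them to your setup):
#!/bin/bash
backup=/backups                      # assumed location of the datestamped backups
today=$(date +%Y-%m-%d)
FILE=$backup/backup-file-$today.tar.gz
# Only attempt the upload if today's backup actually exists
if [ -f "$FILE" ]; then
    sshpass -p "remoteserverpassword" sftp -o StrictHostKeyChecking=no <user>@<remoteserverip> <<EOF
cd /directory1/directory2/
put $FILE
EOF
else
    echo "No backup found for $today" >&2
    exit 1
fi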

Related

Retrieve files from remote server using scp in crontab

I am trying to retrieve files from a remote server to my local PC using a cron job. However, my script has to wait until the files are available on the remote server. From code pieces I gathered here and there, I came up with the code below.
#!/bin/bash
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)
hour="00"
ssh-keygen # I suspect this line and the one below should be done once and not in the script.
ssh-copy-id _lms_2023
cd ${HOME}/ModelOutput_LMS/WRF_OUTPUT/tmp
cd ${HOME}/ModelOutput_LMS/WRF_OUTPUT/tmp
goto GOTO_1
if ssh lmshpc@41.203.191.69 "test -e /${HOME}/DA/OUTPUT/"$year$month$day$hour"/noda/graphics/*.png"; then
scp lmshpc@41.203.191.69:${HOME}/DA/OUTPUT/"$year$month$day$hour"/noda/graphics/*.png .
if [ $? -eq 0 ]; then
exit
fi
else
sleep 30
GOTO_1
fi
I want the script to keep checking until the files are available and downloaded. The above script gives the errors below.
Any assistance will be appreciated.
/usr/bin/ssh-copy-id: ERROR: ssh: Could not resolve hostname xxxxxxxxxx: Name or service not known
./cplocalfromRemote2.sh: line 14: goto: command not found
host's password:
A simpler way to do this is to use rsync instead of scp.
rsync compares source and destination automatically and copies only the files that are new, changed, or missing from the destination.
Use rsync in place of scp in your script and run the script from cron every minute, so it picks up files as soon as they appear on the remote server.
The script:
#!/bin/bash
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)
hour="00"
cd ${HOME}/ModelOutput_LMS/WRF_OUTPUT/tmp
sshpass -p'your_password' rsync -avh --progress lmshpc@41.203.191.69:${HOME}/DA/OUTPUT/"$year$month$day$hour"/noda/graphics/*.png .
Set up a cronjob that runs the script every minute.
* * * * * /path/to/above/script.sh
EDIT: I have changed the script to add host password for rsync in non-interactive mode.
Hope this helps!
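If you would rather keep the question's single-run "wait until the files appear" behaviour instead of a once-a-minute cron entry, a retry loop avoids the goto entirely. This is only a sketch reusing the same host, paths, and password as above; the 30-second interval is an assumption:
#!/bin/bash
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)
hour="00"
cd ${HOME}/ModelOutput_LMS/WRF_OUTPUT/tmp || exit 1
# Keep retrying until rsync succeeds, i.e. until the remote files exist and copy cleanly
until sshpass -p'your_password' rsync -avh --progress lmshpc@41.203.191.69:${HOME}/DA/OUTPUT/"$year$month$day$hour"/noda/graphics/*.png . ; do
    sleep 30
done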

Copy files from a local to remote folder with scp and script throws "No such file or directory"

Description
I want to copy all files ending in .jpg from my local machine to the remote machine with scp.
For this I have a small script. It looks like this:
#!/bin/bash
xfce4-terminal -e "scp -r -v /path/to/local/folder/*.jpg <user>#<IP>:/var/path/to/remote/folder/" --hold
Problem
When I open a terminal and enter scp -r -v /path/to/local/folder/*.jpg <user>@<IP>:/var/path/to/remote/directory/ it works.
So SSH is working correctly.
When I start the script, it doesn't.
The script does work when I copy the whole local folder. It then looks like this (the *.jpg is simply removed):
#!/bin/bash
xfce4-terminal -e "scp -r -v /path/to/local/folder/ <user>#<IP>:/var/path/to/remote/folder/" --hold
But then I end up with the local folder inside the remote folder, where I only want the files themselves.
I don't know if it matters, but I currently use Linux Mint 19.3, the xfce terminal, and zsh.
Question
So how do I correctly run a script that copies files from a local folder to a remote folder?
It's the shell that expands the wildcard, but when you pass -e to xfce4-terminal, it runs the command directly, without letting a shell parse it. You can explicitly run a shell to execute the command, though:
xfce4-terminal -e "bash -c 'scp -r -v /path/to/local/folder/*.jpg user#ip:/var/path/to/remote'" --hold
Are you sure you need the -r? Directories are usually not named .jpg.
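If the terminal window is only there so you can watch the transfer, another option is to drop xfce4-terminal and let the script's own shell expand the glob. A minimal sketch using the question's placeholder paths:
#!/bin/bash
# The glob is expanded by this script's shell before scp ever sees it
scp -v /path/to/local/folder/*.jpg <user>@<IP>:/var/path/to/remote/folder/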

SFTP bash shell script to copy the file from source to destination

I have created a script to copy local files to a remote folder. The script works fine outside of the if condition, but when I enclose it inside the if condition, the put command does not work. It logs into the remote server over SFTP, and when it exits it shows the error:
put: command not found
See what is happening after executing the script:
Connected to 10.42.255.209.
sftp> bye
sftp.sh: line 23: put: command not found
Please find the below script.
echo -e;
echo -e "This script is used to copy the files";
sleep 2;
localpath=/home/localpath/sftp
remotepath=/home/destination/sftp/
if [ -d $localpath ]
then
echo -e "Source Path found"
echo -e "Reading source path"
echo -e "Uploading the files"
sleep 2;
sftp username@10.42.255.209
put $localpath/* $remotepath
else
In a simple case such as this, you could use scp instead of sftp and specify the files to copy on the command line:
scp $localpath/* username@10.42.255.209:$remotepath/
But if you would rather issue sftp commands, sftp can read commands from its stdin, so you can do:
echo "put $localpath/* $remotepath" | sftp username#10.42.255.209
Or you can use a here document to pass data as stdin to sftp, which might be easier if you want to run several sftp commands:
sftp username@10.42.255.209 << EOF
put $localpath/fileA $remotepath/
put $localpath/fileB $remotepath/
EOF
Finally, you could place the sftp commands in a separate file, say sftp_commands.txt, and have sftp execute those commands using its -b flag:
sftp -b ./sftp_commands.txt username@10.42.255.209
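Note that sftp does not expand shell variables inside a batch file, so you would either write the paths literally or generate the file from your script first. A sketch reusing the variables from the question (key-based authentication is assumed, since -b is meant for non-interactive use):
# Variables are expanded here, while the batch file is being written
cat > sftp_commands.txt <<EOF
put $localpath/* $remotepath
bye
EOF
sftp -b ./sftp_commands.txt username@10.42.255.209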
I got the result using this format
HOST='xyz.abc.com'
USER='xyzasd'
REMOTEPATH='/var/www/data-csv/'
file_name='/tmp/sample.csv'
sftp $USER@$HOST <<EOF
cd /var/www/data-csv/
put $file_name
EOF
It will ask for a password if the user has one; otherwise this code works fine.
This code worked for me.
For reference, read https://help.oclc.org/Librarian_Toolbox/Exchange_files_with_OCLC/Upload_files_with_SFTP/40SFTP_commands?sl=en
uploadFileToMFT(){
sftp -P ${PORT_NO} ${HOST_NAME}@${HOST_ID} <<EOF
cd /mdm_dev05
put ${EXPORT_OUTPUT}'/'${ID}'/'${F_NAME}
quit
EOF
}

How to create a directory if it doesn't exist in sftp

I want to create a directory, if it doesn't already exist, after logging in to the sftp server.
test.sh
sftp name#example.com << EOF
mkdir test
put test.xml
bye
EOF
Now I call test.sh and upload different files each time to the test folder. When running this
mkdir test
The first time it works, but the second time it throws a "Couldn't create directory: Failure" error.
How do I create the directory in sftp only if it doesn't already exist?
man 1 sftp (from openssh-client package):
-b batchfile
Batch mode reads a series of commands from an input
batchfile instead of stdin. Since it lacks user
interaction it should be used in conjunction with
non-interactive authentication. A batchfile of ‘-’
may be used to indicate standard input. sftp will
abort if any of the following commands fail: get,
put, reget, reput, rename, ln, rm, mkdir, chdir, ls,
lchdir, chmod, chown, chgrp, lpwd, df, symlink, and
lmkdir. Termination on error can be suppressed on a
command by command basis by prefixing the command
with a ‘-’ character (for example, -rm /tmp/blah*).
So:
{
echo -mkdir dir1
echo -mkdir dir1/dir2
echo -mkdir dir1/dir2/dir3
} | sftp -b - $user@$host
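The same idea extends to the upload in the question: prefix mkdir with '-' so an "already exists" failure is ignored, then put the file. A sketch using the question's directory and file names:
{
  echo "-mkdir test"
  echo "cd test"
  echo "put test.xml"
  echo "bye"
} | sftp -b - name@example.com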
I understand this thread is old and has been marked as answered, but the accepted answer did not work in my case. This thread turns up on the second page of a Google search for "sftp checking for directory", so here is an update that would have saved me a few hours.
With a here-document you cannot capture the error code that results from the directory not being found. The workaround I found was to create a file containing the instructions for the call and then capture the result of that batched call.
The example below uses sshpass, but my script also uses this same method when authenticating with SSH keys.
Create the file containing the instructions:
echo "cd $RemoteDir" > check4directory
cat check4directory; echo "bye" >> check4directory
Set permissions:
chmod +x check4directory
Then make the connection using the batch feature:
export SSHPASS=$remote_pass
sshpass -e sftp -v -oBatchMode=no -b check4directory $remote_user@$remote_addy
Lastly check for the error code:
if [ $? -ge "1" ] ; then
echo -e "The remote directory was not found or the connection failed."
fi
At this point you can exit 1 or initiate some other action. Note that if the SFTP connection fails for another reason, such as an incorrect password or address, the error will also trip this action.
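Putting those pieces together, the whole check might look like this (a sketch, assuming $RemoteDir, $remote_pass, $remote_user, and $remote_addy are already set):
# Build the batch file: try to cd into the remote directory, then disconnect
echo "cd $RemoteDir" > check4directory
echo "bye" >> check4directory
# sshpass -e reads the password from the SSHPASS environment variable
export SSHPASS=$remote_pass
sshpass -e sftp -v -oBatchMode=no -b check4directory $remote_user@$remote_addy
# A non-zero exit status means the cd failed or the connection itself failed
if [ $? -ge 1 ]; then
    echo "The remote directory was not found or the connection failed." >&2
    exit 1
fi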
Another variant is to split the SFTP session into two.
The first SFTP session simply issues the mkdir command.
The second SFTP session can then assume the directory exists and put the files, as sketched below.
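A sketch of that two-session approach, using the question's host and file names; the mkdir error in the first session is simply ignored:
# Session 1: create the directory; harmless if it already exists
sftp name@example.com <<EOF
mkdir test
bye
EOF
# Session 2: the directory exists either way, so just upload
sftp name@example.com <<EOF
cd test
put test.xml
bye
EOF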
You can use the SSH access of your account to first verify if the directory exists at all (using the "test" command). If it returns exit code 0, the dir exists, otherwise it doesn't. You can act on that accordingly.
# Both the command and the name of your directory are "test"
# To avoid confusion, I just put the directory in a separate variable
YOURDIR="test"
# Check if the folder exists remotely
ssh name@example.com "test -d $YOURDIR"
if [ $? -ne 0 ]; then
# Directory does not exist
sftp name@example.com << EOF
mkdir test
put test.xml
bye
EOF
else
# Directory already exists
sftp name@example.com << EOF
put test.xml
bye
EOF
fi
Try this to ignore errors if the directory already exists.
# Turn OFF error
set +e
# Create remote dirs
sftp -P 22 -o StrictHostKeyChecking=no -oIdentityFile=key.pem -v $user@$host <<EOF
mkdir <remote_path> # create remote directory
bye
EOF
# Turn ON error
set -e
# Do upload to SFTP
sftp -P 22 -o StrictHostKeyChecking=no -oIdentityFile=key.pem -v $user@$host <<EOF
cd <remote_path> # remote_path
put <local_file_path> # local_path
quit
EOF

OSX bash script works but fails in crontab on SFTP

This topic has been discussed at length; however, I have a variant on the theme that I just cannot crack. Two days into this now, I decided to ping the community. Thanks in advance for reading.
Executive summary: I have a script on OS X that runs fine and executes without issue or error when run manually. When I put the script in the crontab to run daily, it still runs, but it doesn't run all of the commands (specifically SFTP).
I have read enough posts to go down the path of environment issues, so as you will see below, I hard-coded the location of sftp in case of a PATH issue.
The only thing I can think of is the IdentityFile. NOTE: I am putting this in the crontab for my user, not root, so I understand that it should pick up the id_dsa key I have created (and whose public key has already been shared with the server).
I am not trying to do any funky expect commands to bypass the password, etc. I don't know why, when run from cron, it skips the SFTP line.
Please see the code below; any help is greatly appreciated.
#!/bin/bash
export DATE=`date +%y%m%d%H%M%S`
export YYMMDD=`date +%y%m%d`
PDATE=$DATE
YDATE=$YYMMDD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED="~/Dropbox/"
USER="user"
HOST="host.domain.tld"
A="/tmp/5nPR45bH"
>${A}.file1${PDATE}
>${A}.file2${PDATE}
BYEbye ()
{
rm ${A}.file1${PDATE}
rm ${A}.file2${PDATE}
echo "Finished cleaning internal logs"
exit 0
}
echo "get -r *" >> ${A}.file1${PDATE}
echo "quit" >> ${A}.file1${PDATE}
eval mkdir ${FEED}${YDATE}
eval cd ${FEED}${YDATE}
eval /usr/bin/sftp -b ${A}.file1${PDATE} ${USER}@${HOST}
BYEbye
exit 0
Not an answer, just comments about your code.
The way to handle filenames with spaces is to quote the variable: "$var" -- eval is not the way to go. Get into the habit of quoting all variables unless you specifically want to use the side effects of not quoting.
you don't need to export your variables unless there's a command you call that expects to see them in the environment.
you don't need to call date twice because the YYMMDD value is a substring of the DATE: YYMMDD="${DATE:0:6}"
just a preference: I use $HOME over ~ in a script.
you never use the "file2" temp file -- why do you create it?
since your sftp batch file is pretty simple, you don't really need a file for it:
printf "%s\n" "get -r *" "quit" | sftp -b - "$USER#$HOST"
Here's a rewrite, shortened considerably:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED_DIR="$HOME/Dropbox/$(date +%Y%m%d)"
USER="user"
HOST="host.domain.tld"
mkdir "$FEED_DIR" || { echo "could not mkdir $FEED_DIR"; exit 1; }
cd "$FEED_DIR"
{
echo "get -r *"
echo quit
} |
sftp -b - "${USER}#${HOST}"
