SFTP remove files from remote with local script - shell

I have a local script and I want to remove a file on the remote host from that script.
I tried the approach below but it does not work. I have permission to delete files on the remote.
#!/bin/sh
USER=test_user
HOST=xx.xx.xxx.xx
REMOTE_DIR=/somedirectoryinremote
while true
do
    sftp $USER@$HOST:$REMOTE_DIR "rm -f $REMOTE_DIR/*.txt"
    # sftp rm $USER@$HOST:$REMOTE_DIR/*.txt <- tried this but it does not work either
    sleep 1800
done

Try:
echo "rm $REMOTE_DIR/*.txt" | sftp $USER@$HOST:$REMOTE_DIR
If you can use sftp, can you not run ssh instead? ssh would be simpler:
ssh $USER@$HOST "rm -f $REMOTE_DIR/*.txt"
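Putting the two suggestions together, here is a minimal sketch of the original polling loop rewritten around the stdin approach; the variable names and the 1800-second sleep come from the question, everything else is an assumption:
#!/bin/sh
# Sketch only: same USER, HOST and REMOTE_DIR values as in the question.
USER=test_user
HOST=xx.xx.xxx.xx
REMOTE_DIR=/somedirectoryinremote
while true
do
    # Feed the rm command to sftp on stdin instead of the command line.
    echo "rm $REMOTE_DIR/*.txt" | sftp "$USER@$HOST"
    sleep 1800
done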

Related

Copy file from remote machine when it changes

I'm using this script to detect when a file changes in a remote machine and copy it when it does:
interval=5
target_file=/home/sergioro/file
last_modified=0
while :; do
    last_modified_new=$(ssh jh 2>/dev/null stat -c %Y $target_file)
    if [ $last_modified_new -gt $last_modified ]; then
        last_modified=$last_modified_new
        rsync -az jh:$target_file log/ 2>/dev/null
    fi
    sleep $interval
done &
This works, but I would prefer a persistent SSH connection instead of making repeated ssh calls. Is there a tool or SSH option to keep the connection open and copy the file when it changes? Thanks.
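One way to keep the connection open, offered here only as a hedged sketch rather than a definitive answer, is OpenSSH connection multiplexing (the ControlMaster, ControlPath and ControlPersist options), so the repeated ssh and rsync calls reuse a single connection; the socket path and timeout below are arbitrary assumptions:
# Open a master connection once, backgrounded, with no remote command;
# it stays alive for 600 seconds after the last client disconnects.
ssh -o ControlMaster=auto -o ControlPath=/tmp/ssh-jh.sock -o ControlPersist=600 -fN jh

# Later calls reuse the master connection through the same socket.
last_modified_new=$(ssh -o ControlPath=/tmp/ssh-jh.sock jh stat -c %Y $target_file)
rsync -az -e "ssh -o ControlPath=/tmp/ssh-jh.sock" jh:$target_file log/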

SFTP bash shell script to copy the file from source to destination

I have created a script to copy local files to a remote folder. The script works fine outside of the if condition, but when I enclose it inside the if condition the put command does not work. It logs into the remote server over SFTP, and when it exits it shows this error:
put command not found
This is what happens when the script is executed:
Connected to 10.42.255.209.
sftp> bye
sftp.sh: line 23: put: command not found
Please find the script below.
echo -e;
echo -e "This script is used to copy the files";
sleep 2;
localpath=/home/localpath/sftp
remotepath=/home/destination/sftp/
if [ -d $localpath ]
then
    echo -e "Source Path found"
    echo -e "Reading source path"
    echo -e "Uploading the files"
    sleep 2;
    sftp username@10.42.255.209
    put $localpath/* $remotepath
else
In a simple case such as this, you could use scp instead of sftp and specify the files to copy on the command line:
scp $localpath/* username@10.42.255.209:$remotepath/
But if you would rather issue sftp commands, then sftp can read commands from its stdin, so you can do:
echo "put $localpath/* $remotepath" | sftp username@10.42.255.209
Or you can use a here document to pass data as stdin to sftp, which might be easier if you want to run several sftp commands:
sftp username@10.42.255.209 << EOF
put $localpath/fileA $remotepath/
put $localpath/fileB $remotepath/
EOF
Finally, you could place the sftp commands in a separate file, say sftp_commands.txt, and have sftp execute those commands using its -b flag:
sftp -b ./sftp_commands.txt username@10.42.255.209
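As an illustration (the file names are hypothetical, reusing the paths from the question), sftp_commands.txt would contain one sftp command per line:
put /home/localpath/sftp/fileA /home/destination/sftp/
put /home/localpath/sftp/fileB /home/destination/sftp/
bye
With -b, sftp aborts on the first failing command, which is usually what you want in an unattended script.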
I got the result using this format:
HOST='xyz.abc.com'
USER='xyzasd'
REMOTEPATH='/var/www/data-csv/'
file_name='/tmp/sample.csv'
sftp $USER@$HOST <<EOF
cd $REMOTEPATH
put $file_name
EOF
It will ask for a password if the user has one; otherwise this code works fine.
This code worked for me. For reference, see https://help.oclc.org/Librarian_Toolbox/Exchange_files_with_OCLC/Upload_files_with_SFTP/40SFTP_commands?sl=en
uploadFileToMFT(){
    sftp -P ${PORT_NO} ${HOST_NAME}@${HOST_ID} <<EOF
cd /mdm_dev05
put ${EXPORT_OUTPUT}'/'${ID}'/'${F_NAME}
quit
EOF
}
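A hedged usage sketch: the function expects these variables to be set, and the values below are placeholders rather than anything from the original post:
# Placeholder values -- adjust to your environment.
PORT_NO=22
HOST_NAME=mft_user          # used as the login name in HOST_NAME@HOST_ID
HOST_ID=mft.example.com     # used as the host name
EXPORT_OUTPUT=/data/exports
ID=12345
F_NAME=report.csv
uploadFileToMFT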

Bash script: Attempting to upload datestamped file via SFTP

I'm attempting to write a bash script that I can set as a cronjob to automatically upload a backup file via SFTP to a remote server.
The backup files on the local server are datestamped (e.g. backup-file-YYYY-mm-dd.tar.gz) and I'd like the script to only upload a file from the directory that has the same datestamp as the current date.
Any ideas on where I'm going wrong? I can't help but think I'm missing something basic but I can't think what it is!
Current broken script below:
#!/bin/bash
FILE=$backups/$(date+%Y-%m-%d).tar.gz   # <<<<< I'm guessing this is where it's slipping up
sshpass -p "remoteserverpassword" sftp -o StrictHostKeyChecking=no <user>@<remoteserverip>
cd /directory1/directory2/
put $FILE
exit 0
EOF
You are right about where it is slipping up: the date command needs to be evaluated before it is passed to the here document. The version below is reformatted for clarity, but you could plug the same change into the original script too.
#!/bin/bash
backup=/tmp
today=$(date +%Y-%m-%d)
FILE=$backup/$today.tar.gz
sshpass -p "remoteserverpassword" sftp -o StrictHostKeyChecking=no <user>@<remoteserverip> <<EOF
cd /directory1/directory2/
put $FILE
exit 0
EOF
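If you want the cron job to fail gracefully on days when the backup has not been produced yet, a small guard before the sftp call helps. This is only a sketch layered on top of the answer above; the error message and exit code are assumptions:
# Skip the upload if today's backup file is missing.
if [ ! -f "$FILE" ]; then
    echo "Backup $FILE not found, skipping upload." >&2
    exit 1
fi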

How to create a directory if it doesn't exist in sftp

I want to create a directory, if it doesn't already exist, after logging in to the sftp server.
test.sh
sftp name@example.com << EOF
mkdir test
put test.xml
bye
EOF
Now I call test.sh and upload different files each time to the test folder. When running
mkdir test
the first time it works, but the second time it throws a "Couldn't create directory: Failure" error.
How do I create the directory in sftp only if it doesn't already exist, and skip it if it does?
man 1 sftp (from openssh-client package):
-b batchfile
Batch mode reads a series of commands from an input
batchfile instead of stdin. Since it lacks user
interaction it should be used in conjunction with
non-interactive authentication. A batchfile of ‘-’
may be used to indicate standard input. sftp will
abort if any of the following commands fail: get,
put, reget, reput, rename, ln, rm, mkdir, chdir, ls,
lchdir, chmod, chown, chgrp, lpwd, df, symlink, and
lmkdir. Termination on error can be suppressed on a
command by command basis by prefixing the command
with a ‘-’ character (for example, -rm /tmp/blah*).
So:
{
    echo -mkdir dir1
    echo -mkdir dir1/dir2
    echo -mkdir dir1/dir2/dir3
} | sftp -b - $user@$host
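Equivalently, if you prefer the here-document style used elsewhere in this thread, the same '-' prefix suppresses the abort-on-error behaviour when the directory already exists (the put of test.xml is taken from the question):
sftp -b - $user@$host <<EOF
-mkdir test
put test.xml test/
bye
EOF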
I understand this thread is old and has been marked as answered, but the answer did not work in my case. This thread is on the second page of a Google search for "sftp checking for directory", so here is an update that would have saved me a few hours.
Using a here document (EOT) you cannot capture the error code resulting from the directory not being found. The workaround I found was to create a file containing the instructions for the call and then capture the result of that automated call.
The example below uses sshpass, but my script also works the same way when authenticating with SSH keys.
Create the file containing the instructions:
echo "cd $RemoteDir" > check4directory
cat check4directory; echo "bye" >> check4directory
Set permissions:
chmod +x check4directory
Then make the connection using the batch feature:
export SSHPASS=$remote_pass
sshpass -e sftp -v -oBatchMode=no -b check4directory $remote_user@$remote_addy
Lastly check for the error code:
if [ $? -ge "1" ] ; then
    echo -e "The remote directory was not found or the connection failed."
fi
At this point you can exit 1 or take some other action. Note that if the SFTP connection fails for another reason, such as an incorrect password or address, the error will also trip the action.
Another variant is to split the SFTP session into two.
The first SFTP session simply issues the mkdir command.
The second SFTP session can then assume the directory exists and put the files.
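A minimal sketch of that two-session variant, reusing the names from the question (name@example.com, test, test.xml); treat it as an illustration rather than the poster's exact script:
# Session 1: create the directory; ignore the failure if it already exists.
sftp name@example.com <<EOF
mkdir test
bye
EOF

# Session 2: the directory now exists, so just upload into it.
sftp name@example.com <<EOF
put test.xml test/
bye
EOF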
You can use the SSH access of your account to first verify if the directory exists at all (using the "test" command). If it returns exit code 0, the dir exists, otherwise it doesn't. You can act on that accordingly.
# Both the command and the name of your directory are "test".
# To avoid confusion, I just put the directory in a separate variable.
YOURDIR="test"

# Check if the folder exists remotely
ssh name@example.com "test -d $YOURDIR"
if [ $? -ne 0 ]; then
    # Directory does not exist
    sftp name@example.com << EOF
mkdir test
put test.xml
bye
EOF
else
    # Directory already exists
    sftp name@example.com << EOF
put test.xml
bye
EOF
fi
Try this to ignore errors if the directory already exists.
# Turn OFF exit-on-error so a failing mkdir does not abort the script
set +e
# Create the remote directory (ignore the error if it already exists)
sftp -P 22 -o StrictHostKeyChecking=no -oIdentityFile=key.pem -v $user@$host <<EOF
mkdir <remote_path>
bye
EOF
# Turn ON exit-on-error again
set -e
# Do the upload to SFTP
sftp -P 22 -o StrictHostKeyChecking=no -oIdentityFile=key.pem -v $user@$host <<EOF
cd <remote_path>
put <local_file_path>
quit
EOF

bash: check if remote file exists using scp

I am writing a bash script to copy a file from a remote server, to my local machine. I need to check to see if the file is available, so I can take an alternative action if it is not there.
I know how to test if a local file exists, however, using scp complicates things a bit. Common sense tells me that one way would be to try to scp the file anyhow, and check the return code from the scp command. Is this the correct way to go about it?
If yes, how do I test the return code from the scp invocation?
Using ssh with some shell code embedded in the command line; use this method when you need to make a decision before the file transfer would fail:
ssh remote-host 'sh -c "if [ -f ~/myfile ] ; then gzip -c ~/myfile ; fi" ' | gzip -dc > /tmp/pkparse.py
If you want to transfer directories, you may want to tar them first.
If you want to use scp, you can check the return code like this:
if scp remote-host:~/myfile ./ >&/dev/null ; then echo "transfer OK" ; else echo "transfer failed" ; fi
It really depends on when it is important for you to know whether the file is there: before the transfer starts (use ssh + sh) or after it has finished.
Well, since you can use scp, you can try using ssh to check whether the file is there before proceeding.
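A minimal sketch of that check, with the hypothetical host and file names carried over from the earlier answer:
# Ask the remote shell whether the file exists, then copy only if it does.
if ssh remote-host "test -f ~/myfile"; then
    scp remote-host:~/myfile ./
else
    echo "Remote file not found, taking the alternative action." >&2
fi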
