Correct Regex in SFTP bash script - bash

I want to automate an SFTP process to transfer the last file created on a local server and send it to a remote server.
On the local server, under "/Source/Path/", I have files named like below:
Logfile_2019-04-24
Logfile_2019-04-24_old.txt
This is my current script:
dyear=`date +'%Y' -d "1 day ago"`
dmonth=`date +'%b' -d "1 day ago"`
ddate=`date +%Y-%m-%d -d "1 day ago"`
HOST='192.168.X.X'
USER='user'
PASSWD='password'
localpath='/Source/Path/'$dyear'/'$dmonth'/'*$ddate*'.txt'
remotepath='/Destination/Path/'$dyear'/'$dmonth'/'
echo $localpath
echo $remotepath
export SSHPASS=$PASSWD
sshpass -e sftp $USER@$HOST << EOF
put '$localpath' '$remotepath'
EOF
When I do echo $localpath it prints the correct file but in the script I get this error:
Connecting to 192.168.X.X...
sftp> put '/Source/Path/2019/Apr/*2019-04-24*' '/Destination/Path/2019/Apr/'
stat /Source/Path/2019/Apr/*2019-04-24*: No such file or directory
What would be the correct regex for this part, *$ddate*'.txt', in the following line:
localpath='/Source/Path/'$dyear'/'$dmonth'/'*$ddate*'.txt'
in order to transfer the file "Logfile_2019-04-24_old.txt"?
Thanks in advance

Replace
put '$localpath' '$remotepath'
with
put "$(echo $localpath)" '$remotepath'
to force wildcard (*) replacement in your here-doc.
This does not work if your wildcard is replaced by multiple files.
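Applied to the script from the question, the fix would look roughly like this (a sketch; note the @ in $USER@$HOST, and that the glob is expanded inside the command substitution before sftp sees it):
export SSHPASS=$PASSWD
sshpass -e sftp $USER@$HOST << EOF
put $(echo $localpath) $remotepath
EOF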

I don't think you need a regex for this problem. You can get the latest file created in the directory by the following shell command and assign it to your localpath variable.
ls -t directoryPath | head -n1
latestfile=`ls -t /Source/Path/$dyear/$dmonth | head -n1`
localpath='/Source/Path/'$dyear'/'$dmonth'/'$latestfile''
remotepath='/Destination/Path/'$dyear'/'$dmonth'/'
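With $localpath now pointing at a single concrete file, the sshpass/sftp here-doc from the question can be reused as-is -- a minimal sketch:
export SSHPASS=$PASSWD
sshpass -e sftp $USER@$HOST << EOF
put $localpath $remotepath
EOF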

If you are able to get the filename, source and destination directories properly, you can directly use scp to copy the file to remote server:
sshpass -p $PASSWD scp $localpath $USER@$HOST:$remotepath

How to run the command generated from awk with printf?

I want to create a shell script that will rename all .txt files from a specific directory on a remote server using SFTP (it will download the files first, then rename them on the remote server). Please check my attempt below:
sftp user@host <<EOF
cd $remoteDir
get *.txt
ls *.txt | awk '{printf "rename %s %s.done\n",$0,$0 ;}'
exit
EOF
The statement ls *.txt | awk '{printf "rename %s %s.done\n",$0,$0 ;}' generates and prints a list of rename commands; my question is, how do I run these commands generated by awk's printf?
You are trying to rename files on the server but you only know what commands to run after you have downloaded the files.
The simple option would be to run two sftp sessions. The first downloads the files. Then you generate the rename commands. Then you run a second sftp session.
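A rough sketch of that two-session variant, assuming the same $remoteDir as in the question (the batch file name rename_batch.txt is made up here for illustration):
# session 1: download the files
sftp user@host <<EOF
cd $remoteDir
get *.txt
EOF
# generate the rename commands into a batch file
{
echo "cd $remoteDir"
ls *.txt | awk '{printf "rename %s %s.done\n",$0,$0}'
} > rename_batch.txt
# session 2: run the renames
sftp -b rename_batch.txt user@host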
However it is possible to do both in one session:
#!/bin/bash
(
# clean up from any previous run
rm -rf syncpoint
# echo commands are fed into the sftp session
# be careful with quoting to avoid local shell expansion
echo 'cd remoteDir'
echo 'get *.txt'
echo '!mkdir syncpoint'
# wait for sftp to create the syncpoint folder
while [ ! -d syncpoint ]; do sleep 5; done
# the files have been downloaded
# now we can generate the rename commands
for f in *.txt; do
# @Q is a bash (v4.4+) way to quote special characters
echo "rename ${f@Q} ${f@Q}.done"
# if not available, single-quoting may be enough
#echo "rename '$f' '$f'.done"
done
# clean up
rmdir syncpoint
) | sftp user@host
Hello Newbie, please use this:
sftp user@host <<EOF
cd $remoteDir
ls *.txt | awk '{printf "mv %s %s.done\n",$0,$0 ;}' | sh
exit
EOF

How to get oldest file in ftp directory using bash script

I have a working script that gets the full file list of an FTP directory and saves it to a local file with this:
curl -s -l ftp://username:password#ftpserver.com/directory/ > source.txt
Now, I need to sort this result by creation date instead of name. I only need to write the oldest file name in the source.txt file. Is it possible?
Thank you.
To get the filename (and further information) of the file with the oldest modification date in a given directory with lftp:
Example:
lftp -u anonymous,anonymous -e "ls -t; quit" ccrma-ftp.stanford.edu/pub | tail -n 1
Finally, this script works for me: lftp -u user,password -e "cls --sort=date; quit" ftpserveraddress/Folder 2> /dev/null | tail -n 1
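If only the oldest file name should end up in source.txt, the output of that same command can simply be redirected -- a sketch based on the command above (credentials and paths are placeholders):
lftp -u user,password -e "cls --sort=date; quit" ftpserveraddress/Folder 2> /dev/null | tail -n 1 > source.txt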

List files on remote server

I'm trying to run the following command:
ssh -A -t -i ~/.ssh/DevKP.pem -o StrictHostKeyChecking=no root@MyServer "for file in \`ls /root/spark/work/ \`; do echo 'file - ' $file; done"
The output is:
file -
file -
Connection to MyServer closed.
When I ran the command on the remote server itself:
for file in `ls /root/spark/work/ `; do echo 'file - ' $file; done
I get the output:
file - test1.txt
file - test2.txt
How do I get it to work from the local server? It seems to find the right files (because there were two lines of output).
Does anyone have any idea?
Thanks
You need to escape the $ in $file to make sure the remote shell interprets it instead of your local shell. You should also simplify the ls /root/.. to for file in /root/../*:
ssh root@MyServer "for file in /root/spark/work/* ; do echo 'file - ' \$file; done"
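If you prefer to keep the ls form from the question, the escaping described above would look roughly like this (only the $ before file is escaped, so the remote shell expands it):
ssh root@MyServer "for file in \`ls /root/spark/work/ \`; do echo 'file - ' \$file; done"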

Bash: Check if remote directory exists using FTP

I'm writing a bash script to send files from a linux server to a remote Windows FTP server.
I would like to check using FTP if the folder where the file will be stored exists before attempting to create it.
Please note that I cannot use SSH nor SCP and I cannot install new scripts on the linux server. Also, for performance issues, I would prefer if checking and creating the folders is done using only one FTP connection.
Here's the function to send the file:
sendFile() {
ftp -n $FTP_HOST <<! >> ${LOCAL_LOG}
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
binary
$(ftp_mkdir_loop "$FTP_PATH")
put ${FILE_PATH} ${FTP_PATH}/${FILENAME}
bye
!
}
And here's what ftp_mkdir_loop looks like:
ftp_mkdir_loop() {
local r
local a
r="$#"
while [[ "$r" != "$a" ]]; do
a=${r%%/*}
echo "mkdir $a"
echo "cd $a"
r=${r#*/}
done
}
The ftp_mkdir_loop function helps in creating all the folders in $FTP_PATH (since I cannot do mkdir -p $FTP_PATH through FTP).
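For example, with FTP_PATH set to dir1/dir2/dir3 (a made-up path) the function emits:
mkdir dir1
cd dir1
mkdir dir2
cd dir2
mkdir dir3
cd dir3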
Overall my script works but is not "clean"; this is what I'm getting in my log file after the execution of the script (yes, $FTP_PATH is composed of 5 existing directories):
(directory-name) Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
To solve this, do as follows:
To ensure that you only use one FTP connection, you generate the input (FTP commands) as the output of a shell script.
E.g.
$ cat a.sh
cd /home/test1
mkdir /home/test1/test2
$ ./a.sh | ftp $Your_login_and_server > /your/log 2>&1
To allow the FTP session to test whether a directory exists, use the fact that the "DIR" command has an option to write its output to a file:
# ...continuing a.sh
# In a loop, $CURRENT_DIR is the next subdirectory to check-or-create
echo "DIR $CURRENT_DIR $local_output_file"
sleep 5 # to leave time for the file to be created
if [ ! -s "$local_output_file" ]
then
echo "mkdir $CURRENT_DIR"
fi
Please note that the "-s" test is not necessarily correct - I don't have access to ftp right now and don't know what the exact output of running DIR on a non-existing directory will be - it could be an empty file, or a specific error. If it is an error, you can grep for the error text in $local_output_file.
Now, wrap step #2 in a loop over your individual subdirectories in a.sh.
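A rough, untested sketch of that loop, reusing the path-splitting idea from ftp_mkdir_loop (the temp file /tmp/dir_check.txt and the check_or_create helper are made up for illustration; the script's stdout is still piped into ftp):
check_or_create() {
local dir="$1" listing=/tmp/dir_check.txt
rm -f "$listing"
echo "DIR $dir $listing"
sleep 5 # leave time for ftp to run DIR and write the file
if [ ! -s "$listing" ]; then # empty or missing listing: assume the directory does not exist
echo "mkdir $dir"
fi
echo "cd $dir"
}
r="$FTP_PATH"
while [ "$r" != "$a" ]; do
a=${r%%/*}
check_or_create "$a"
r=${r#*/}
done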
#!/bin/bash
FTP_HOST=prep.ai.mit.edu
FTP_USER=anonymous
FTP_PASS=foobar#example.com
DIRECTORY=/foo # /foo does not exist, /pub exists
LOCAL_LOG=/tmp/foo.log
ERROR="Failed to change directory"
ftp -n $FTP_HOST << EOF | tee -a ${LOCAL_LOG} | grep -q "${ERROR}"
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
cd ${DIRECTORY}
EOF
if [[ "${PIPESTATUS[2]}" -eq 1 ]]; then
echo ${DIRECTORY} exists
else
echo ${DIRECTORY} does not exist
fi
Output:
/foo does not exist
If you want to suppress only the messages in ${LOCAL_LOG}:
ftp -n $FTP_HOST <<! | grep -v "Cannot create a file" >> ${LOCAL_LOG}

How can I upload (FTP) files to server in a Bash script?

I'm trying to write a Bash script that uploads a file to a server. How can I achieve this? Is a Bash script the right thing to use for this?
Below are two answers. First is a suggestion to use a more secure/flexible solution like ssh/scp/sftp. Second is an explanation of how to run ftp in batch mode.
A secure solution:
You really should use SSH/SCP/SFTP for this rather than FTP. SSH/SCP have the benefits of being more secure and working with public/private keys which allows it to run without a username or password.
You can send a single file:
scp <file to upload> <username>@<hostname>:<destination path>
Or a whole directory:
scp -r <directory to upload> <username>@<hostname>:<destination path>
For more details on setting up keys and moving files to the server with RSYNC, which is useful if you have a lot of files to move, or if you sometimes get just one new file among a set of random files, take a look at:
http://troy.jdmz.net/rsync/index.html
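As a point of comparison, a key-based rsync push looks something like this (host, key and paths are placeholders):
rsync -avz -e "ssh -i ~/.ssh/key_name" /local/dir/ username@hostname.example:/remote/dir/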
You can also execute a single command after sshing into a server:
From man ssh
ssh [...snipped...] hostname [command] If command is specified, it is
executed on the remote host instead of a login shell.
So, an example command is:
ssh username@hostname.example bunzip2 file_just_sent.bz2
If you can use SFTP with keys to gain the benefit of a secured connection, there are two tricks I've used to execute commands.
First, you can pass commands using echo and pipe
echo "put files*.xml" | sftp -p -i ~/.ssh/key_name username#hostname.example
You can also use a batchfile with the -b parameter:
sftp -b batchfile.txt -i ~/.ssh/key_name username@hostname.example
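The batch file is just sftp commands, one per line -- for example (contents are illustrative):
cd /remote/upload/dir
put files*.xml
bye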
An FTP solution, if you really need it:
If you understand that FTP is insecure and more limited and you really really want to script it...
There's a great article on this at http://www.stratigery.com/scripting.ftp.html
#!/bin/sh
HOST='ftp.example.com'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
binary
put $FILE
quit
END_SCRIPT
exit 0
The -n to ftp ensures that the command won't try to get the password from the current terminal. The other fancy part is the use of a heredoc: the <<END_SCRIPT starts the heredoc and then that exact same END_SCRIPT on the beginning of the line by itself ends the heredoc. The binary command will set it to binary mode which helps if you are transferring something other than a text file.
You can use a heredoc to do this, e.g.
ftp -n $Server <<End-Of-Session
# -n option disables auto-logon
user anonymous "$Password"
binary
cd $Directory
put "$Filename.lsm"
put "$Filename.tar.gz"
bye
End-Of-Session
so the ftp process is fed on standard input with everything up to End-Of-Session. It is a useful tip for spawning any process, not just ftp! Note that this saves spawning a separate process (echo, cat, etc.). It is not a major resource saving, but it is worth bearing in mind.
The ftp command isn't designed for scripts, so controlling it is awkward, and getting its exit status is even more awkward.
Curl is made to be scriptable, and also has the merit that you can easily switch to other protocols later by just modifying the URL. If you put your FTP credentials in your .netrc, you can simply do:
# Download file
curl --netrc --remote-name ftp://ftp.example.com/file.bin
# Upload file
curl --netrc --upload-file file.bin ftp://ftp.example.com/
If you must, you can specify username and password directly on the command line using --user username:password instead of --netrc.
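For example (note that credentials passed this way are visible in the process list):
curl --user username:password --upload-file file.bin ftp://ftp.example.com/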
Install ncftpput and ncftpget. They're usually part of the same package.
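With those installed, the upload becomes a one-liner -- roughly (host, credentials and paths are placeholders):
ncftpput -u username -p password ftp.example.com /remote/dir file.bin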
Use this to upload a file to a remote location:
#!/bin/bash
#$1 is the file name
#usage:this_script <filename>
HOST='your host'
USER="your user"
PASSWD="pass"
FILE="abc.php"
REMOTEPATH='/html'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd $REMOTEPATH
put $FILE
quit
END_SCRIPT
exit 0
The command in one line:
ftp -in -u ftp://username:password@servername/path/to/ localfile
#!/bin/bash
# $1 is the file name
# usage: this_script <filename>
IP_address="xx.xxx.xx.xx"
username="username"
domain=my.ftp.domain
password=password
echo "
verbose
open $IP_address
USER $username $password
put $1
bye
" | ftp -n > ftp_$$.log
Working example to put your file on root...see, it's very simple:
#!/bin/sh
HOST='ftp.users.qwest.net'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
put $FILE
quit
END_SCRIPT
exit 0
There isn't any need to complicate stuff. This should work:
#!/bin/bash
echo "
verbose
open ftp.mydomain.net
user myusername mypassword
ascii
put textfile1
put textfile2
bin
put binaryfile1
put binaryfile2
bye
" | ftp -n > ftp_$$.log
Or you can use mput if you have many files...
If you want to use it inside a 'for' to copy the last generated files for an everyday backup...
j=0
var="`find /backup/path/ -name 'something*' -type f -mtime -1`"
# We have some files in $var with last day change date
for i in $var
do
j=$(( $j + 1 ))
dirname="`dirname $i`"
filename="`basename $i`"
/usr/bin/ftp -in >> /tmp/ftp.good 2>> /tmp/ftp.bad << EOF
open 123.456.789.012
user user_name passwd
bin
lcd $dirname
put $filename
quit
EOF
# end of the ftp here-doc (the terminator must be alone on its line)
done # End of 'for' iteration
echo -e "open <ftp.hostname>\nuser <username> <password>\nbinary\nmkdir New_Folder\nquit" | ftp -nv
