I am trying to upload multiple files from one folder to an FTP site, and I wrote this script:
#!/bin/bash
for i in '/dir/*'
do
if [-f /dir/$i]; then
HOST='x.x.x.x'
USER='username'
PASSWD='password'
DIR=archives
File=$i
ftp -n $HOST << END_SCRIPT
quote USER $USER
quote PASS $PASSWD
ascii
put $FILE
quit
END_SCRIPT
fi
It is giving me the following error when I try to execute it:
username@host:~/Documents/Python$ ./script.sh
./script.sh: line 22: syntax error: unexpected end of file
I can't seem to get this to work. Any help is much appreciated.
Thanks,
Mayank
It's complaining because your for loop does not have a done marker to indicate the end of the loop. You also need spaces after [ and before ] in your if:
if [ -f "$i" ]; then
Recall that [ is actually a command, and it won't be recognized if it doesn't appear as such.
Also, if you single-quote your glob (in the for) like that, it won't be expanded. Use no quotes there, but double quotes when using $i. You also don't want to prepend /dir/ when you use $i, since the glob already includes it.
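In other words, a minimal corrected skeleton of the loop (just a sketch; put your FTP commands where the echo is) would be:
for i in /dir/*; do              # no quotes, so the glob expands
    if [ -f "$i" ]; then         # spaces around the brackets, double quotes around $i
        echo "would upload: $i"  # $i already contains /dir/, so don't prepend it again
    fi
done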
If I'm not mistaken, ncftp can take wildcard arguments:
ncftpput -u username -p password x.x.x.x archives /dir/*
If you don't already have it installed, it's likely available in the standard repo for your OS.
First, the literal, fixing-your-script answer:
#!/bin/bash
# no reason to set variables that don't change inside the loop
host='x.x.x.x'
user='username'
password='password'
dir=archives
for i in /dir/*; do # no quotes if you want the wildcard to be expanded!
if [ -f "$i" ]; then # need double quotes and whitespace here!
file=$i
ftp -n "$host" <<END_SCRIPT
quote USER $user
quote PASS $password
ascii
put $file $dir/${file##*/}
quit
END_SCRIPT
fi
done
Next, the easy way:
lftp -e 'mput -a /dir/*' -u "$user,$password" "ftp://$host/"
(yes, lftp expands the wildcard internally, rather than expecting this to be done by the outer shell).
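If the goal is really to push everything in the directory, lftp's mirror -R (reverse mirror) is another option; a rough sketch using the same placeholder credentials:
lftp -u "$user,$password" -e "mirror -R /dir archives; quit" "ftp://$host/"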
First of all, my apologies for not making myself clear in the question. My actual task was to copy a file from a local folder to an SFTP site and then move the file to an archive folder. Since the SFTP site is hosted by a vendor, I cannot use key sharing (vendor limitation). Also, scp will prompt for a password if used in a shell script, so I have to use sshpass. sshpass is in the Ubuntu repo; for CentOS it needs to be installed from here
This thread, together with How to run the sftp command with a password from Bash script?, gave me a better understanding of how to write the script, and I will share my solution here:
#!/bin/bash
for i in /dir/*; do
if [ -f "$i" ]; then
file=$i
export SSHPASS=password
sshpass -e sftp -oBatchMode=no -b - user@ftp.com << !
cd foldername/foldername
put $file
bye
!
mv $file /somedir/test
fi
done
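One refinement I could still make (untested sketch): only move the file to the archive when sftp exits successfully, e.g. by replacing the sftp/mv lines inside the loop with:
if sshpass -e sftp -oBatchMode=no -b - user@ftp.com << !
cd foldername/foldername
put $file
bye
!
then
mv "$file" /somedir/test
else
echo "upload failed for $file" >&2
fi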
Thanks everyone for all the responses!
--Mayank
Related
I am trying to rename all files in a remote directory over SSH or SFTP. The rename should append a date extension to each file, for example turning .txt into .txt.2016-05-25.
I have the following command to loop each .txt file and try to rename, but am getting an error:
ssh $user@$server "for FILENAME in $srcFolder/*.txt; do mv $FILENAME $FILENAME.$DATE; done"
The error I am getting is:
mv: missing destination file operand after `.20160525_1336'
I have also tried this over SFTP with no such luck. Any help would be appreciated!
You need to escape (or single-quote) the $ of variables in the remote shell. It's also recommended to quote variables that represent file paths:
ssh $user@$server "for FILENAME in '$srcFolder'/*.txt; do mv \"\$FILENAME\" \"\$FILENAME.$DATE\"; done"
Try this:
By using rename (perl tool):
ssh user@host /bin/sh <<<$'
rename \047use POSIX;s/$/strftime(".%F",localtime())/e\047 "'"$srcFolder"'"/*.txt'
To prepare/validate your command line, replace ssh ... /bin/sh with cat:
cat <<<$'
rename \047use POSIX;s/$/strftime(".%F",localtime())/e\047 "'"$srcFolder"'"/*.txt'
will render something like:
rename 'use POSIX;s/$/strftime(".%F",localtime())/e' "/tmp/test dir"/*.txt
And you could try it locally (ensuring $srcFolder contains the path to a local test folder):
/bin/sh <<<$'
rename \047use POSIX;s/$/strftime(".%F",localtime())/e\047 "'"$srcFolder"'"/*.txt'
Or, keeping your own syntax:
ssh $user@$server /bin/sh <<<'for FILENAME in "'"$srcFolder"'"/*.txt; do
mv "$FILENAME" "$FILENAME.'$DATE'";
done'
Again, you could locally test your inline script:
sh <<<'for FILENAME in "'"$srcFolder"'"/*.txt; do
mv "$FILENAME" "$FILENAME.'$DATE'";
done'
or preview it by replacing sh with cat.
When sending variables over SSH, you need to be careful about which variables are local and which are remote. Remote variables must be escaped; otherwise they will be interpreted locally rather than remotely as you intended. Other characters, such as backticks, also need to be escaped. The example below should point you in the right direction:
Incorrect
user@host1:/home:> ssh user@host2 "var=`hostname`; echo \$var"
host1
Correct
user@host1:/home:> ssh user@host2 "var=\`hostname\`; echo \$var"
host2
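Another option (sketch only) is to keep the whole remote snippet in a quoted here-document, so nothing gets expanded or executed locally:
ssh user@host2 'bash -s' <<'EOF'
var=$(hostname)   # runs on host2, because the delimiter is quoted
echo "$var"
EOF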
I'm writing a cron job to back up some files to a server.
Basically I'm sending specific files from a local directory using scp.
I'm using a public key to avoid password prompts.
For reusability I'm passing the local directory and the server URL as arguments to my Bash script.
How I set my parameters:
#!/bin/bash
DIR="$1"
URL="$2"
FILES="$DIR*.ext"
My problem is with formatting the URL.
Without formatting
How I send files to the server:
#!/bin/bash
for F in $FILES
do
scp $F $URL;
if ssh $URL stat $(basename "$F")
then
rm $F
else
echo "Fails to copy $F to $URL"
fi
done
If I try to copy at user's home on the server I do:
$ ~/backup /path/to/local/folder/ user@server.com:
If I try to copy at a specific directory on the server I do:
$ ~/backup /path/to/local/folder/ user@server.com:/path/to/remote/folder/
In all cases it gives me the well known error (and my custom echo):
ssh: Could not resolve hostname user@server.com: nodename nor [...]
Can't upload /path/to/local/folder/file.ext to user@server.com
And it works anyway (the file is copied). But that's not a solution, because since scp seems to fail, the file is never deleted.
With formatting
I tried sending files using this method:
#!/bin/bash
for F in $FILES
do
scp $F "$URL:"
done
I no longer get an error, and it works for copying to the user's home directory and then deleting the local file:
$ ~/backup /path/to/local/folder/ user@server.com
But, of course, sending to a specific directory doesn't work at all.
Finally
So I think that my first method is more appropriate, but how can I get rid of that error?
Your mistake is that you can scp to user@server.com: but not ssh to it: you need to remove the trailing : character (and any path after it). You can do that easily with Bash parameter expansion:
ssh "${URL%:*}" stat "$(basename "$F")"
RECOMMENDATIONS
"USE MORE QUOTES!" They are vital. Also, learn the difference between ' and " and `. See http://mywiki.wooledge.org/Quotes and http://wiki.bash-hackers.org/syntax/words
If you have spaces in filenames, your code will break. Better to use while IFS= read -r line; do # stuff with "$line"; done < file.txt (see the sketch after this list)
See bash parameter expansion
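For the second point, a space-safe version of the upload loop could look like this (sketch, assuming the same $DIR and $URL variables):
find "$DIR" -maxdepth 1 -type f -name '*.ext' -print0 |
while IFS= read -r -d '' F; do
    scp "$F" "$URL"
done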
The script uses scp to upload a file. That works.
Now I want to log in with ssh, cd to the directory that holds the uploaded file, and run md5sum on the file. The script keeps telling me that md5sum cannot find $LOCAL_FILE. I tried escaping: \$LOCAL_FILE. I tried quoting the EOI: <<'EOI'. I'm partially understanding this: with no escaping, everything happens locally, and echo pwd unescaped gives the local path. But why can I do "echo $MD5SUM > $LOCAL_FILE.md5sum" and it creates the file on the remote machine, yet "echo `md5sum $LOCAL_FILE` > md5sum2" does not work? And if it's the local md5sum, how do I tell it to work on the remote file?
scp "files/$LOCAL_FILE" "$i#$i.567.net":"$REMOTE_FILE_PATH"
ssh -T "$i#$i.567.net" <<EOI
touch I_just_logged_in
cd $REMOTE_DIRECTORY_PATH
echo `date` > I_just_changed_directories
echo `whoami` >> I_just_changed_directories
echo `pwd` >> I_just_changed_directories
echo "$MD5SUM" >> I_just_changed_directories
echo $MD5SUM > $LOCAL_FILE.md5sum
echo `md5sum $LOCAL_FILE` > md5sum2
EOI
You have to think about when $LOCAL_FILE is being interpreted. In this case, since you've used double quotes, it's being interpreted on the sending machine. You need instead to quote the string in such a way that $LOCAL_FILE is in the command line on the receiving machine. You also need to get your "here document" correct. What you show just sends the output of touch to the ssh.
What you need will look something like
ssh -T address <<'EOF'
cd $REMOTE_DIRECTORY_PATH
...
EOF
The quoting rules in bash are somewhat arcane. You might want to read up on them in Mendel Cooper's Advanced Bash-Scripting Guide.
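For the concrete case in the question, one workable variant (untested sketch, assuming the uploaded file keeps the name $LOCAL_FILE inside $REMOTE_DIRECTORY_PATH) is to leave the here-document unquoted so the local variables are filled in, but drop the local backticks so the commands themselves run remotely:
ssh -T "$i@$i.567.net" <<EOI
cd "$REMOTE_DIRECTORY_PATH"
date > I_just_changed_directories
whoami >> I_just_changed_directories
pwd >> I_just_changed_directories
md5sum "$LOCAL_FILE" > "$LOCAL_FILE.md5sum"
EOI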
This topic has been discussed at length; however, I have a variant on the theme that I just cannot crack. I'm two days into this now and decided to ping the community. Thanks in advance for reading.
The executive summary is that I have a script on OS X that runs fine and executes without issue or error when run manually. When I put the script in the crontab to run daily, it still runs, but it doesn't run all of the commands (specifically sftp).
I have read enough posts to go down the path of environment issues, so as you will see below, I hard-coded the location of sftp in case of a PATH issue.
The only thing that I can think of is the IdentityFile. NOTE: I am putting this in the crontab for my user, not root. So I understand that it should pick up the id_dsa key that I have created (whose public key has already been shared with the server).
I am not trying to do any funky expect commands to bypass the password, etc. I don't know why, when run from cron, it skips the sftp line.
Please see the code below; any help is greatly appreciated. Thanks.
#!/bin/bash
export DATE=`date +%y%m%d%H%M%S`
export YYMMDD=`date +%y%m%d`
PDATE=$DATE
YDATE=$YYMMDD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED="~/Dropbox/"
USER="user"
HOST="host.domain.tld"
A="/tmp/5nPR45bH"
>${A}.file1${PDATE}
>${A}.file2${PDATE}
BYEbye ()
{
rm ${A}.file1${PDATE}
rm ${A}.file2${PDATE}
echo "Finished cleaning internal logs"
exit 0
}
echo "get -r *" >> ${A}.file1${PDATE}
echo "quit" >> ${A}.file1${PDATE}
eval mkdir ${FEED}${YDATE}
eval cd ${FEED}${YDATE}
eval /usr/bin/sftp -b ${A}.file1${PDATE} ${USER}@${HOST}
BYEbye
exit 0
Not an answer, just comments about your code.
The way to handle filenames with spaces is to quote the variable: "$var" -- eval is not the way to go. Get into the habit of quoting all variables unless you specifically want to use the side effects of not quoting.
you don't need to export your variables unless there's a command you call that expects to see them in the environment.
you don't need to call date twice because the YYMMDD value is a substring of the DATE: YYMMDD="${DATE:0:6}"
just a preference: I use $HOME over ~ in a script.
you never use the "file2" temp file -- why do you create it?
since your sftp batch file is pretty simple, you don't really need a file for it:
printf "%s\n" "get -r *" "quit" | sftp -b - "$USER#$HOST"
Here's a rewrite, shortened considerably:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED_DIR="$HOME/Dropbox/$(date +%Y%m%d)"
USER="user"
HOST="host.domain.tld"
mkdir "$FEED_DIR" || { echo "could not mkdir $FEED_DIR"; exit 1; }
cd "$FEED_DIR"
{
echo "get -r *"
echo quit
} |
sftp -b - "${USER}#${HOST}"
I'm trying to write a Bash script that uploads a file to a server. How can I achieve this? Is a Bash script the right thing to use for this?
Below are two answers. The first is a suggestion to use a more secure and flexible solution like ssh/scp/sftp. The second is an explanation of how to run ftp in batch mode.
A secure solution:
You really should use SSH/SCP/SFTP for this rather than FTP. SSH/SCP have the benefits of being more secure and working with public/private keys, which allows them to run without a password.
You can send a single file:
scp <file to upload> <username>#<hostname>:<destination path>
Or a whole directory:
scp -r <directory to upload> <username>#<hostname>:<destination path>
For more details on setting up keys and moving files to the server with RSYNC, which is useful if you have a lot of files to move, or if you sometimes get just one new file among a set of random files, take a look at:
http://troy.jdmz.net/rsync/index.html
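A typical rsync-over-SSH invocation (host and paths are placeholders) looks roughly like:
rsync -avz -e ssh /local/dir/ username@hostname.example:/destination/path/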
You can also execute a single command after sshing into a server:
From man ssh
ssh [...snipped...] hostname [command] If command is specified, it is
executed on the remote host instead of a login shell.
So, an example command is:
ssh username@hostname.example bunzip2 file_just_sent.bz2
If you can use SFTP with keys to gain the benefit of a secured connection, there are two tricks I've used to execute commands.
First, you can pass commands using echo and pipe
echo "put files*.xml" | sftp -p -i ~/.ssh/key_name username#hostname.example
You can also use a batchfile with the -b parameter:
sftp -b batchfile.txt -i ~/.ssh/key_name username@hostname.example
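The batch file itself is just sftp commands, one per line; a minimal example (placeholder paths) might be:
cd /destination/path
put files*.xml
bye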
An FTP solution, if you really need it:
If you understand that FTP is insecure and more limited and you really really want to script it...
There's a great article on this at http://www.stratigery.com/scripting.ftp.html
#!/bin/sh
HOST='ftp.example.com'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
binary
put $FILE
quit
END_SCRIPT
exit 0
The -n to ftp ensures that the command won't try to get the password from the current terminal. The other fancy part is the use of a heredoc: the <<END_SCRIPT starts the heredoc and then that exact same END_SCRIPT on the beginning of the line by itself ends the heredoc. The binary command will set it to binary mode which helps if you are transferring something other than a text file.
You can use a heredoc to do this, e.g.
ftp -n $Server <<End-Of-Session
# -n option disables auto-logon
user anonymous "$Password"
binary
cd $Directory
put "$Filename.lsm"
put "$Filename.tar.gz"
bye
End-Of-Session
so the ftp process is fed on standard input with everything up to End-Of-Session. It is a useful tip for spawning any process, not just ftp! Note that this saves spawning a separate process (echo, cat, etc.). It is not a major resource saving, but it is worth bearing in mind.
The ftp command isn't designed for scripts, so controlling it is awkward, and getting its exit status is even more awkward.
Curl is made to be scriptable, and also has the merit that you can easily switch to other protocols later by just modifying the URL. If you put your FTP credentials in your .netrc, you can simply do:
# Download file
curl --netrc --remote-name ftp://ftp.example.com/file.bin
# Upload file
curl --netrc --upload-file file.bin ftp://ftp.example.com/
If you must, you can specify username and password directly on the command line using --user username:password instead of --netrc.
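For reference, a matching ~/.netrc entry would look like this (placeholders; keep the file readable only by you, e.g. chmod 600):
machine ftp.example.com
login username
password secret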
Install ncftpput and ncftpget. They're usually part of the same package.
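Typical usage looks something like this (host, credentials and paths are placeholders):
# upload local_file.txt into /remote/dir on the server
ncftpput -u username -p password ftp.example.com /remote/dir local_file.txt
# download /remote/dir/remote_file.txt into /local/dir
ncftpget -u username -p password ftp.example.com /local/dir /remote/dir/remote_file.txt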
Use this to upload a file to a remote location:
#!/bin/bash
#$1 is the file name
#usage:this_script <filename>
HOST='your host'
USER="your user"
PASSWD="pass"
FILE="abc.php"
REMOTEPATH='/html'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd $REMOTEPATH
put $FILE
quit
END_SCRIPT
exit 0
The command in one line:
ftp -in -u ftp://username:password@servername/path/to/ localfile
#!/bin/bash
# $1 is the file name
# usage: this_script <filename>
IP_address="xx.xxx.xx.xx"
username="username"
domain=my.ftp.domain
password=password
echo "
verbose
open $IP_address
USER $username $password
put $1
bye
" | ftp -n > ftp_$$.log
Working example to put your file on root...see, it's very simple:
#!/bin/sh
HOST='ftp.users.qwest.net'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
put $FILE
quit
END_SCRIPT
exit 0
There isn't any need to complicate stuff. This should work:
#!/bin/bash
echo "
verbose
open ftp.mydomain.net
user myusername mypassword
ascii
put textfile1
put textfile2
bin
put binaryfile1
put binaryfile2
bye
" | ftp -n > ftp_$$.log
Or you can use mput if you have many files...
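For example, a sketch of the same heredoc with mput and the -i flag, which turns off the per-file prompt during multiple transfers:
ftp -in $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
mput *.txt
quit
END_SCRIPT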
If you want to use it inside a 'for' to copy the last generated files for an everyday backup...
j=0
var="`find /backup/path/ -name 'something*' -type f -mtime -1`"
# We have some files in $var with last day change date
for i in $var
do
j=$(( $j + 1 ))
dirname="`dirname $i`"
filename="`basename $i`"
/usr/bin/ftp -in >> /tmp/ftp.good 2>> /tmp/ftp.bad << EOF
open 123.456.789.012
user user_name passwd
bin
lcd $dirname
put $filename
quit
EOF
# End of ftp
done # End of 'for' iteration
echo -e "open <ftp.hostname>\nuser <username> <password>\nbinary\nmkdir New_Folder\nquit" | ftp -nv