I would like to process a directory listing from an FTP server. Eventually I want to delete old files on the FTP server, but I'm not there yet. I'm having trouble interpreting the lines from the directory listing. I'm a newbie at shell scripts, so please be patient. It's probably a very basic mistake I've made...
You'll find the piece of code below, along with the printout from when I run the script.
#!/bin/bash
# LOCAL/FTP/SCP/MAIL PARAMETERS
SERVER="xxx.yyy.zzz" # IP of NAS, used for ftp
USERNAME="joe" # FTP username of Network disk used for ftp
PASSWORD="joe1234" # FTP password of Network disk used for ftp
DESTDIR="/opt/backup" # used for temporary storage
DESTDIRNAS="/sdb1/domoticz_backup/A/" # Path to your NAS backup folder
DOMO_IP="192.168.0.243" # Domoticz IP
DOMO_PORT="8085" # Domoticz port

# get directory listing from remote source
putdir=$DESTDIRNAS
ftp -n $SERVER > /tmp/dirlist <<END_SCRIPT
quote USER $USERNAME
quote PASS $PASSWORD
cd $putdir
ls -l
quit
END_SCRIPT
cat /tmp/dirlist | while read LINE; do
echo $LINE
awk '{print $9}' $(LINE)
done;
The output is as follows:
pi@raspberrypi:~ $ ./test.sh
-rwxrwxrwx 1 0 0 229632 Oct 09 07:27 domoticz_20171009092723.db.gz
./test.sh: line 23: LINE: command not found
domoticz_20171009105333.db.gz
domoticz_20171009111940.db.gz
domoticz_20171009113716.db.gz
domoticz_scripts_20171009105333.tar.gz
domoticz_scripts_20171009111940.tar.gz
domoticz_scripts_20171009113716.tar.gz
pi@raspberrypi:~ $
The file /tmp/dirlist holds 7 lines similar to the first one printed above. However, the script can't parse the first line; it throws an error instead. The subsequent lines seem to be parsed fine, but they are not printed by the echo command. I can't figure out why this doesn't work...
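For what it's worth, the error comes from `$(LINE)`: that is command substitution, so the shell tries to run `LINE` as a command, producing "LINE: command not found". The failed substitution leaves awk with no file argument, so it reads the loop's stdin and swallows the remaining lines of /tmp/dirlist (printing their ninth field), which is why echo only ever shows the first line. A minimal sketch of a corrected loop:

```shell
#!/bin/bash
# Corrected parsing loop (sketch): pipe each line through awk instead of
# letting awk consume the loop's stdin. Field 9 of "ls -l" output is the
# file name.
while read -r LINE; do
    echo "$LINE"
    echo "$LINE" | awk '{print $9}'
done < /tmp/dirlist
```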
Related
I am trying to send a file to my phone using the SFTP protocol in my home network.
I can easily send the file to my phone using the put command in FTP, but I want to automate the task.
So I wrote this script:
#! /bin/bash
#Capture and share screenshot to my phone
gnome-screenshot
cd /home/prm/Pictures
FILE="$(ls -Art | tail -n 1)" #To get the last created file
sftp sftp://192.168.1.2:1753/primary/DCIM/Screenshots
put /home/prm/Pictures/$FILE
I am able to connect to my phone and the required directory, but I don't know how to upload.
Please help!
After updating the code to:
#! /bin/bash
#Capture and share screenshot to my phone
gnome-screenshot
cd /home/prm/Pictures
FILE="$(ls -Art | tail -n 1)" #To get the last created file
echo $FILE
sftp sftp://192.168.1.3:1761/primary/DCIM/Screenshots -b <<<"put /home/prm/Pictures/$FILE"
I got the following output:
prm@prm-2018-02:~/Documents/Anubhav/Bash$ ./capture.sh
Screenshot from 2020-06-04 22-38-27.png
Connected to 192.168.1.3.
Fetching /primary/DCIM/Screenshots/ to -b/Screenshots
Cannot download non-regular file: /primary/DCIM/Screenshots/
I also tried adding -r flag:
sftp -r sftp://192.168.1.3:1761/primary/DCIM/Screenshots -b <<<"put /home/prm/Pictures/$FILE"
But this copied screenshots from my phone to the local system.
prm@prm-2018-02:~/Documents/Anubhav/Bash$ ./capture.sh
Screenshot from 2020-06-04 22-51-51.png
Connected to 192.168.1.3.
Fetching /primary/DCIM/Screenshots/ to -b/Screenshots
Retrieving /primary/DCIM/Screenshots
/primary/DCIM/Screenshots/Screenshot_20200604-225146.jpg 100% 176KB 549.6KB/s 00:00
The put /home/prm/Pictures/$FILE command is "executed" by the shell, and you want it to be executed by the sftp command.
sftp has support for batch files using -b. Something like this should do the trick:
[sorin@localhost ~]$ sftp -b- sftp://test/ <<< "put $FILE"
Connected to test.
sftp> put test.txt
Uploading test.txt to /home/sorin/test.txt
test.txt 100% 0 0.0KB/s 00:00
Note that -b requires non-interactive authentication; any prompt will get stuck.
Note: the previous variant, sftp sftp://test/ -b <<< "put $FILE", was wrong! The -b was ignored; options should precede the connection string. It seemed to work because sftp checks whether stdin is a terminal and handles the non-interactive case anyway.
However, there are some differences: in batch mode, sftp terminates on the first error and sets a non-zero exit code; in "interactive mode" it ignores errors, so you can't do any error handling.
[sorin@localhost ~]$ sftp sftp://test/ -b<<<"put fkdkd
put test.txt"
Connected to test.
sftp> put fkdkd
stat fkdkd: No such file or directory
sftp> put test.txt
Uploading test.txt to /home/sorin/test.txt
test.txt 100% 0 0.0KB/s 00:00
[sorin@localhost ~]$ echo $?
0
[sorin@localhost ~]$ sftp -b- sftp://test/ <<<"put fkdkd
put test.txt"
sftp> put fkdkd
stat fkdkd: No such file or directory
[sorin@localhost ~]$ echo $?
1
[sorin@localhost ~]$
Thanks Sorin, you brought me closer to the solution, and this finally solved it:
#! /bin/bash
#Capture and share screenshot to my phone
gnome-screenshot
cd /home/prm/Pictures
FILE="$(ls -Art | tail -n 1)" #To get the last created file
echo $FILE
sftp sftp://192.168.1.3:1761/primary/DCIM/Screenshots <<EOF
put "$FILE"
bye
EOF
I need to FTP the files listed in the file files_to_download.
I have put an FTP script in between, but it throws an error saying
"Syntax error: end of file unexpected (expecting "done")". Do I need to log in via FTP for every file I download? I want to download all the files in a single FTP login.
if [ $update -eq 1 ]
then
#echo "File needs to be updated"
while read file_data
do
#echo $file_data
file_name=`echo $file_data | cut -d':' -f1` #truncate the file path
echo $file_name
#ftp -inv <<!
#open ${SERVER}
#user ${USERNAME} ${PASSWORD}
#binary
#cd $REMOTEDIR
#get server_version
#lcd $LOCALDIR
#close
#quit
#!
done < files_to_download
fi
You can use an outline script like this:
{
cat << EOF
open ${SERVER}
user ${USERNAME} ${PASSWORD}
binary
cd ${REMOTEDIR}
get server_version
EOF
sed -e 's/:.*//' -e 's/^/get /' files_to_download
cat <<EOF
lcd ${LOCALDIR}
close
quit
EOF
} | ftp -inv
The first cat sets up the connection. The sed edits the list of file names into get commands. The final cat puts out the remaining commands. The surrounding { and } send the output of all the commands within to the ftp command. The chances are that simply omitting the second cat entirely would work fine; ftp would read EOF on its input after the final file transfer and then exit of its own accord.
The get server_version can be deleted if server_version was meant as a request for the FTP server's version rather than as a file name. The lcd probably isn't necessary either.
I've used the ${VAR} notation consistently; the original code used both that and $VAR. Consistency is good.
You should not indent the !; that is, you should place it at the beginning of the line, without any whitespace before it. Because you indented it, it is not parsed as the end of the heredoc.
To download all files in a single login, you can print the FTP commands from a subshell. Or you can read the filenames beforehand and store them in a variable.
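As a sketch of that idea (reusing the SERVER, USERNAME, PASSWORD, and REMOTEDIR variables from the question), group the command generation so everything is piped into a single ftp invocation:

```shell
#!/bin/bash
# One FTP session for all downloads (sketch). The group's combined output
# becomes the ftp client's command stream.
{
    printf 'open %s\nuser %s %s\nbinary\ncd %s\n' \
        "$SERVER" "$USERNAME" "$PASSWORD" "$REMOTEDIR"
    # one "get" per line; strip everything after the first ":" as in the question
    while read -r file_data; do
        printf 'get %s\n' "${file_data%%:*}"
    done < files_to_download
    printf 'quit\n'
} | ftp -inv
```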
I'm writing a bash script to send files from a linux server to a remote Windows FTP server.
I would like to check using FTP if the folder where the file will be stored exists before attempting to create it.
Please note that I cannot use SSH nor SCP and I cannot install new scripts on the linux server. Also, for performance issues, I would prefer if checking and creating the folders is done using only one FTP connection.
Here's the function to send the file:
sendFile() {
ftp -n $FTP_HOST <<! >> ${LOCAL_LOG}
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
binary
$(ftp_mkdir_loop "$FTP_PATH")
put ${FILE_PATH} ${FTP_PATH}/${FILENAME}
bye
!
}
And here's what ftp_mkdir_loop looks like:
ftp_mkdir_loop() {
local r
local a
r="$1"
while [[ "$r" != "$a" ]]; do
a=${r%%/*}
echo "mkdir $a"
echo "cd $a"
r=${r#*/}
done
}
The ftp_mkdir_loop function helps in creating all the folders in $FTP_PATH (Since I cannot do mkdir -p $FTP_PATH through FTP).
Overall my script works but is not "clean"; this is what I'm getting in my log file after the execution of the script (yes, $FTP_PATH is composed of 5 existing directories):
(directory-name) Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
To solve this, do as follows:
To ensure that you only use one FTP connection, create the input (the FTP commands) as the output of a shell script.
E.g.
$ cat a.sh
cd /home/test1
mkdir /home/test1/test2
$ ./a.sh | ftp $Your_login_and_server > /your/log 2>&1
To let the FTP session test whether a directory exists, use the fact that the dir command can write its output to a file:
# ...continuing a.sh
# In a loop, $CURRENT_DIR is the next subdirectory to check-or-create
echo "DIR $CURRENT_DIR $local_output_file"
sleep 5 # to leave time for the file to be created
if [ ! -s "$local_output_file" ]
then
    echo "mkdir $CURRENT_DIR"
fi
Please note that the "-s" test is not necessarily correct - I don't have access to ftp right now and don't know what the exact output of running dir on a non-existing directory will be - it could be an empty file, or a specific error. If it's an error, you can grep for the error text in $local_output_file.
Now, wrap the step #2 into a loop over your individual subdirectories in a.sh
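A sketch of what that loop might look like in a.sh (untested against a real server; it relies on ftp executing the streamed commands while the generator sleeps, and on the listing file staying absent or empty for a missing directory - the dir1/dir2/... components are placeholders):

```shell
#!/bin/bash
# a.sh continued (sketch): for each path component, ask the server for a
# listing written to a local file, wait, and emit "mkdir" only if the
# listing file stayed empty (i.e. the directory appears not to exist).
for CURRENT_DIR in dir1 dir1/dir2 dir1/dir2/dir3; do
    local_output_file="/tmp/ftp_check_$$"
    rm -f "$local_output_file"
    echo "dir $CURRENT_DIR $local_output_file"
    sleep 5   # leave time for ftp (reading our output) to create the file
    if [ ! -s "$local_output_file" ]; then
        echo "mkdir $CURRENT_DIR"
    fi
done
```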
#!/bin/bash
FTP_HOST=prep.ai.mit.edu
FTP_USER=anonymous
FTP_PASS=foobar@example.com
DIRECTORY=/foo # /foo does not exist, /pub exists
LOCAL_LOG=/tmp/foo.log
ERROR="Failed to change directory"
ftp -n $FTP_HOST << EOF | tee -a ${LOCAL_LOG} | grep -q "${ERROR}"
quote USER ${FTP_USER}
quote pass ${FTP_PASS}
cd ${DIRECTORY}
EOF
if [[ "${PIPESTATUS[2]}" -eq 1 ]]; then
echo ${DIRECTORY} exists
else
echo ${DIRECTORY} does not exist
fi
Output:
/foo does not exist
If you want to suppress only the messages in ${LOCAL_LOG}:
ftp -n $FTP_HOST <<! | grep -v "Cannot create a file" >> ${LOCAL_LOG}
I have a file containing a list of files, one per line:
$ cat file_list
file1
file2
file3
I want to copy this list of files with FTP.
How can I do that? Do I have to write a script?
You can turn your list of files into list of ftp commands easily enough:
(echo open hostname.host;
echo user username;
cat filelist | awk '{ print "put " $1; }';
echo bye) > script.ftp
Then you can just run:
ftp -s script.ftp
Or possibly (with other versions of ftp)
ftp -n < script.ftp
Something along these lines - what somecommand should be depends on what you want to do; I don't get that from your question, sorry.
#!/bin/bash
# Iterate through lines in file
for line in `cat file.txt`;do
#your ftp command here do something
somecommand $line
done
edit: If you really want to pursue this route for multiple files (you shouldn't!), you can use the following command in place of somecommand $line:
ncftpput -m -u username -p password ftp.server.com /remote/folder $line
ncftpput probably also takes an arbitrary number of files to upload in one go, but I haven't checked. Notice that this approach will connect and disconnect for every single file!
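It does (ncftpput's usage is host, remote directory, then any number of local files), so the whole list can go over a single connection; a sketch using GNU xargs, with the server name, credentials, and list file being the placeholders from above:

```shell
#!/bin/bash
# Upload every file listed in file.txt with one ncftpput invocation
# (sketch), i.e. a single FTP connection instead of one per file.
xargs -a file.txt ncftpput -m -u username -p password \
    ftp.server.com /remote/folder
```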
Thanks for the very helpful example of how to feed a list of files to ftp. This worked beautifully for me.
After creating my ftp script in Linux (CentOs 5.5), I ran the script with:
ftp -n < ../script.ftp
My script (with names changed to protect the innocent) starts with:
open <ftpsite>
user <userid> <passwd>
cd <remote directory>
bin
prompt
get <file1>
get <file2>
And ends with:
get <filen-1>
get <filen>
bye
I'm trying to write a Bash script that uploads a file to a server. How can I achieve this? Is a Bash script the right thing to use for this?
Below are two answers. First is a suggestion to use a more secure/flexible solution like ssh/scp/sftp. Second is an explanation of how to run ftp in batch mode.
A secure solution:
You really should use SSH/SCP/SFTP for this rather than FTP. SSH/SCP have the benefits of being more secure and of working with public/private keys, which allows them to run without a username or password.
You can send a single file:
scp <file to upload> <username>@<hostname>:<destination path>
Or a whole directory:
scp -r <directory to upload> <username>@<hostname>:<destination path>
For more details on setting up keys and moving files to the server with RSYNC, which is useful if you have a lot of files to move, or if you sometimes get just one new file among a set of random files, take a look at:
http://troy.jdmz.net/rsync/index.html
You can also execute a single command after sshing into a server:
From man ssh
ssh [...snipped...] hostname [command]

If command is specified, it is executed on the remote host instead of a login shell.
So, an example command is:
ssh username@hostname.example bunzip2 file_just_sent.bz2
If you can use SFTP with keys to gain the benefit of a secured connection, there are two tricks I've used to execute commands.
First, you can pass commands using echo and a pipe:
echo "put files*.xml" | sftp -p -i ~/.ssh/key_name username@hostname.example
You can also use a batch file with the -b parameter:
sftp -b batchfile.txt -i ~/.ssh/key_name username@hostname.example
An FTP solution, if you really need it:
If you understand that FTP is insecure and more limited and you really really want to script it...
There's a great article on this at http://www.stratigery.com/scripting.ftp.html
#!/bin/sh
HOST='ftp.example.com'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
binary
put $FILE
quit
END_SCRIPT
exit 0
The -n option to ftp ensures that the command won't try to get the password from the current terminal. The other fancy part is the use of a heredoc: <<END_SCRIPT starts the heredoc, and the exact same END_SCRIPT at the beginning of a line, by itself, ends it. The binary command sets binary mode, which helps if you are transferring something other than a text file.
You can use a heredoc to do this, e.g.
# the -n option disables auto-login
ftp -n $Server <<End-Of-Session
user anonymous "$Password"
binary
cd $Directory
put "$Filename.lsm"
put "$Filename.tar.gz"
bye
End-Of-Session
so the ftp process is fed everything up to End-Of-Session on standard input. This is a useful trick for spawning any process, not just ftp! Note that it also saves spawning a separate process (echo, cat, etc.). It is not a major resource saving, but it is worth bearing in mind.
The ftp command isn't designed for scripts, so controlling it is awkward, and getting its exit status is even more awkward.
Curl is made to be scriptable, and also has the merit that you can easily switch to other protocols later by just modifying the URL. If you put your FTP credentials in your .netrc, you can simply do:
# Download file
curl --netrc --remote-name ftp://ftp.example.com/file.bin
# Upload file
curl --netrc --upload-file file.bin ftp://ftp.example.com/
If you must, you can specify username and password directly on the command line using --user username:password instead of --netrc.
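For reference, a ~/.netrc entry for the example host would look like this (machine, login, and password are the standard .netrc keywords; the file must be readable only by you, e.g. chmod 600, or curl and ftp will refuse to use it):

```
machine ftp.example.com
login yourid
password yourpw
```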
Install ncftpput and ncftpget. They're usually part of the same package.
Use this to upload a file to a remote location:
#!/bin/bash
#$1 is the file name
#usage:this_script <filename>
HOST='your host'
USER="your user"
PASSWD="pass"
FILE="abc.php"
REMOTEPATH='/html'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd $REMOTEPATH
put $FILE
quit
END_SCRIPT
exit 0
The command in one line:
ftp -in -u ftp://username:password@servername/path/to/ localfile
#!/bin/bash
# $1 is the file name
# usage: this_script <filename>
IP_address="xx.xxx.xx.xx"
username="username"
domain=my.ftp.domain
password=password
echo "
verbose
open $IP_address
user $username $password
put $1
bye
" | ftp -n > ftp_$$.log
A working example that puts your file in the root directory - see, it's very simple:
#!/bin/sh
HOST='ftp.users.qwest.net'
USER='yourid'
PASSWD='yourpw'
FILE='file.txt'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
put $FILE
quit
END_SCRIPT
exit 0
There isn't any need to complicate stuff. This should work:
#!/bin/bash
echo "
verbose
open ftp.mydomain.net
user myusername mypassword
ascii
put textfile1
put textfile2
bin
put binaryfile1
put binaryfile2
bye
" | ftp -n > ftp_$$.log
Or you can use mput if you have many files.
If you want to use it inside a for loop to copy the most recently generated files for an everyday backup:
j=0
var="`find /backup/path/ -name 'something*' -type f -mtime -1`"
# We have some files in $var with last day change date
for i in $var
do
j=$(( $j + 1 ))
dirname="`dirname $i`"
filename="`basename $i`"
/usr/bin/ftp -in >> /tmp/ftp.good 2>> /tmp/ftp.bad << EOF
open 123.456.789.012
user user_name passwd
bin
lcd $dirname
put $filename
quit
EOF
done # End of 'for' iteration
echo -e "open <ftp.hostname>\nuser <username> <password>\nbinary\nmkdir New_Folder\nquit" | ftp -nv