I have a bash script that uploads a file via FTP. The log file currently captures all events. How can I log only certain events, such as: "Connected to $HOST" or "WARNING - failed ftp connection"; "Ok to send data"; "Transfer complete"; "10000 bytes sent in 0.00 secs (73.9282 MB/s)"?
#!/bin/sh
echo "####################" >> $TESTLOG
echo "$(date +%Y%m%d_%H%M%S)" >> $TESTLOG
ftp -i -n -v <<SCRIPT >> ${TESTLOG} 2>&1
open $HOST
user $USER $PASSWD
bin
cd $DPATH
lcd $TFILE
mput *.txt
quit
SCRIPT
exit 0
####################
20210304_111125
Connected to $HOST.
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
200 Switching to Binary mode.
250 Directory successfully changed.
Local directory now /home/pi/Data/data_files/tmp
local: 20210304_111125_ftp_10k.txt remote: 20210304_111125_ftp_10k.txt
200 PORT command successful. Consider using PASV.
150 Ok to send data.
226 Transfer complete.
10000 bytes sent in 0.00 secs (73.9282 MB/s)
221 Goodbye.
First, I'd like to say that while this is a very common idiom, it has no error checking, and I despise it.
Now that I got that off my chest...
ftp -inv <<SCRIPT 2>&1 | grep -f patfile >> ${TESTLOG}
patfile is a file containing the list of patterns to keep; cf. the -f option in the grep manual page.
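For the events listed in the question, a sketch of what patfile could contain (created here with a heredoc; the patterns are basic regular expressions, so short generic fragments are enough to match the wanted lines):
cat > patfile <<'EOF'
Connected to
failed ftp connection
Ok to send data
Transfer complete
bytes sent in
EOF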
...to continue my rant, though...
What if someone changes the permissions on your $DPATH? The cd fails; ftp reports the failure to the console (your grep ignores it, so it doesn't even get logged...), and the script continues and puts the files in the wrong place. A full disk prevents files from being placed? Same cycle. A thousand things could go wrong, but ftp just blithely goes on, and doesn't even return an error on exit for most of them. Don't do this.
Just use scp. Yes, you have to set up something like ssh keys, but then the command is simple and checkable.
scp $TFILE/*.txt $HOST:$DPATH/ || echo "copy failed"
For a more elaborate response -
if scp $TFILE/*.txt $HOST:$DPATH/ # test the scp
then ssh "$HOST" "ls -ltr $DPATH/*.txt" # show result if success
else echo "copy failed" # code here for if it fails...
exit 1 # as much code as you feel you need
fi
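If key-based authentication isn't set up yet, the usual OpenSSH recipe is roughly the following; reusing the script's $HOST here is an assumption, and the SSH account may well differ from the FTP $USER:
ssh-keygen -t ed25519                         # generate a key pair; accept the defaults or set a passphrase
ssh-copy-id "$USER@$HOST"                     # append the public key to the remote authorized_keys
ssh "$HOST" true && echo "key login works"    # quick sanity check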
Otherwise, use a different language with an FTP module, like Perl's Net::FTP, so you can check each step and handle failures.
(I wrote a ksh script that handled an ftp coprocess, fed it a command at a time, checked the results... it's doable, but I don't recommend it.)
Related
I have tried searching but can't find exactly what I'm after and maybe I don't even know exactly what to search for...
I need to FTP a variety of CSV files from multiple sites, each with different credentials.
I am able to do this one by one with the following; however, I need to do this for 30 sites and do not want to copy-paste all of this.
What would be the best way to write this? If you can show me how, or point me to an answer, that would be great.
And for bonus points (I might have to ask a separate question): mget is not working Linux to Linux, only Linux to Windows. I have also tried curl, but no luck there either.
Thanks a lot.
p.s. not sure if it makes a difference, but I will be running this as a cron job every 15 minutes. I'm ok with that part ;)
#!/bin/bash
chmod +x ftp.sh
#Windows site global variables
ROOT='/data'
PASSWD='passwd'
# Site 1
SITE='site1'
HOST='10.10.10.10'
USER='sitename1'
ftp -in $HOST <<EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/Downloads"
mget "${SITE}}.csv1" "${SITE}}.csv2" #needs second "}" as part of file name
quit
EOF
echo "Site 1 FTP complete"
# Site 2
SITE='site2'
HOST='20.20.20.20'
USER='sitename2'
ftp -in $HOST <<EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/instrum/Downloads"
mget "${SITE}}.csv1" "${SITE}}.csv2" #needs second "}" as part of file name
quit
EOF
echo "Site 2 FTP complete"
#Linux site Global variables
ROOT='/home/path'
USER='user'
PASSWD='passwd2'
#Site 3
SITE='site_3'
HOST='30.30.30.30'
ftp -in $HOST << EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/Downloads"
get "${SITE}file1.csv" #mget not working for linux to linux FTP, don't know why.
get "${SITE}file2.csv"
quit
EOF
echo "Site 3 FTP complete"
#Site 4
SITE='site_4'
HOST='40.40.40.40'
ftp -in $HOST << EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/Downloads"
get "${SITE}file1.csv" #mget not working for linux to linux FTP, don't know why.
get "${SITE}file2.csv"
quit
EOF
echo "Site 4 FTP complete"
For credentials, put them into a separate file, with variables for site 1 named site1, host1, and user1, plus comments, so that a different user running this script can understand it quickly, and so there is less chance of someone amending the passwords in the main script and creating an error. When your main script starts, load the file with the passwords before running the rest of the script.
In your main script, if the functionality is similar for all sites and you are always going to run the same code for all 30 sites, you can use a while loop counting from 1 to 30. Inside the loop, build the variable names site, host, and user with the number appended, so the code executes with the right variables for each site.
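A minimal sketch of that idea (the file name ftp_sites.conf and the numbered variable names are hypothetical; the separate Windows/Linux password groups are ignored for brevity). The credentials file holds only variables and comments:
# ftp_sites.conf -- keep this readable only by the owner (chmod 600)
ROOT='/data'
PASSWD='passwd'
host1='10.10.10.10';  user1='sitename1';  site1='site1'
host2='20.20.20.20';  user2='sitename2';  site2='site2'
# ... and so on up to host30 / user30 / site30
The main script sources it and loops, using bash indirect expansion to pick the right values for each site:
#!/bin/bash
. /path/to/ftp_sites.conf            # load credentials before anything else
i=1
while [ "$i" -le 30 ]; do
    h="host$i"; u="user$i"; s="site$i"
    HOST=${!h}; USER=${!u}; SITE=${!s}    # indirect expansion: value of the variable whose name is in $h, etc.

    ftp -in "$HOST" <<EOF
user $USER $PASSWD
binary
cd ${ROOT}/${SITE}/
lcd /home/Downloads
mget ${SITE}}.csv1 ${SITE}}.csv2
quit
EOF

    echo "Site $i ($SITE) FTP complete"
    i=$((i + 1))
done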
There are also dedicated tools for copying files; if these servers are on your network, rsync, for example, is efficient as well and worth a look.
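For completeness, a rough sketch of what an rsync pull for one site could look like; this assumes rsync and SSH access are actually available on the remote host, which is not a given here since these are FTP servers:
rsync -av "sitename1@10.10.10.10:/data/site1/" /home/Downloads/site1/   # -a preserves attributes, -v is verbose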
I have a Bash script on a RHEL 5 system that I use to perform various secure file transfer operations between the server and many external systems. Normally the script works fine: it uses expect to spawn an sftp session to do things like MGET and then rm the files from the target server. However, occasionally the script errors with the output "Couldn't stat remote file: No such file or directory" when attempting the file deletion after a successful MGET transfer. At first glance this error is straightforward and appears to indicate the file doesn't exist, but the script output shows the files exist and have been transferred normally:
SFTP protocol version 3
sftp> get /home/sistran/parchment/*.xml /nfs/ftp-banp/college_xml
Fetching /home/sistran/parchment/TQB10G4D.xml to /nfs/ftpbanp/college_xml/TQB10G4D.xml
Fetching /home/sistran/parchment/TQB1343C.xml to /nfs/ftp-banp/college_xml/TQB1343C.xml
Then my output shows the stat error:
sftp> quit
Result of error check: 617
!SFTP - Successful File Transfer Performed.
Connecting to abc.xyz.edu...
sftp>
sftp> rm /home/sistran/parchment/TQB10G4D.xml
Couldn't stat remote file: No such file or directory
Removing /home/sistran/parchment/TQB10G4D.xml
Couldn't delete file: No such file or directory
sftp> quit
\r
This sometimes occurs a handful of times during a file transfer of many files in a batch operation. Unfortunately I cannot get my hands on the external server for troubleshooting. Below is a snippet of the script that performs the mget and rm:
....if [ "$F1" = "$ftp_success_string" ]; then
if [ "$password_option" = "1" ]; then
# ----- SFTP command here (password) -----
/usr/bin/expect - << EOF
spawn /usr/bin/sftp -oPort=$PORT $userid@$dest
expect "assword: "
send "${send_password}\r"
expect "sftp> "
send "rm $F2\r"
expect "sftp> "
send "bye \r"
EOF
err=$?
else
# ----- SFTP command here (public Key) -----
echo "
rm $F2
quit
"|sftp -oPort=$PORT -oPubkeyAuthentication=yes $userid@$dest 2>&1
fi....
Help! I'm also open to not using expect, if there is a better method.
I have a bash script which does a very plain sftp transfer of data to production and UAT servers. See my code below.
if [ `ls -1 ${inputPath}|wc -l` -gt 0 ]; then
sh -x wipprod.sh >> ${sftpProdLog}
sh -x wipdev.sh >> ${sftpDevLog}
sh -x wipdevone.sh >> ${sftpDevoneLog}
fi
Sometimes the UAT server may go down. In those cases the number of hung scripts keeps increasing, and if it reaches the user's maximum number of processes, the other scripts are affected as well. So I am thinking that, before executing each of the above scripts, I should test whether port 22 is reachable on the destination server, and only then execute the script.
Is this the right way? If yes, what is the optimal way to do that? If not, what is the best approach to avoid unnecessary sftp connections when the destination is not available? Thanks in advance.
Use sftp in batch mode together with the ConnectTimeout option explicitly set. Sftp will take care of up/down problems by itself.
Note that ConnectTimeout should be set slightly higher if your network is slow.
Then put sftp commands into your wip*.sh backup scripts.
If the UAT host is up:
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_up <<<"put testfile.xml /tmp/"; echo $?
sftp> put testfile.xml /tmp/
Uploading testfile.xml to /tmp/testfile.xml
0
File is uploaded, sftp exits with exit code 0.
If the UAT host is down, sftp exits within 1 second with exit code 255.
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_down <<<"put testfile.xml /tmp/"; echo $?
ssh: connect to host this_host_is_down port 22: Connection timed out
Couldn't read packet: Connection reset by peer
255
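Inside each wip*.sh script you can then act on sftp's exit status directly; a minimal sketch (the batch-file path and the UAT host name are placeholders):
if sftp -b /path/to/batchfile -o ConnectTimeout=5 remoteuser@uat-host; then
    echo "transfer OK"
else
    echo "transfer failed or host unreachable (exit $?)" >&2
fi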
It sounds reasonable - if the server is inaccessible, you want to report an error immediately rather than block.
The question is - why does the SFTP command block if the server is unavailable? If the server is down, I'd expect opening the port to fail almost immediately, and then you need only detect that the SFTP copy has failed and abort early.
If you want to detect a closed port in bash, you can simply ask bash to connect to it directly - for example:
(echo "" > /dev/tcp/remote-host/22) 2>/dev/null || echo "failed"
This will open the port and immediately close it, and report a failure if the port is closed.
On the other hand, if the server is inaccessible because the port is blocked (by a firewall or something similar that drops all packets), then it makes sense for your process to hang, and the basic TCP test above will also hang.
Again, this is something that should probably be handled by your SFTP remote copy using a timeout parameter, as suggested in the comments, but a bash script to detect a blocked port is also doable and will probably look something like this:
(
    (echo "" > /dev/tcp/remote-host/22) &   # attempt the connection in the background
    pid=$!
    timeout=3
    while kill -0 "$pid" 2>/dev/null; do    # probe still running?
        sleep 1
        timeout=$(( timeout - 1 ))
        [ "$timeout" -le 0 ] && kill "$pid" && exit 1   # give up after 3 seconds
    done
    wait "$pid"                             # propagate the probe's success or failure
) || echo "failed"
(I'm going to ignore the ls ...|wc business, other than to say something like find and xargs --no-run-if-empty are generally more robust if you have GNU find, or possibly AIX has an equivalent.)
You can perform a runtime connectivity check. OpenSSH comes with ssh-keyscan to quickly probe an SSH server port and dump the public key(s), but sadly it doesn't provide a usable exit code, which leaves parsing its output as a messy solution.
Instead you can do a basic check with a bash one-liner:
read -t 2 banner < /dev/tcp/127.0.0.1/22
where /dev/tcp/127.0.0.1/22 (or /dev/tcp/hostname/ssh) indicates the host and port to connect to.
This relies on the fact that the SSH server will return an identifying banner terminated with CRLF. Feel free to inspect $banner. If it fails after the indicated timeout, read is interrupted by SIGALRM (exit code 142), and a refused connection results in exit code 1.
(Support for /dev/tcp and network redirection is enabled by default since before bash-2.05, though it can be disabled explicitly with --disable-net-redirections or with --enable-minimal-config at build time.)
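Applied to the snippet from the question, a sketch of gating one of the transfers on that check (the UAT host name is a placeholder):
uat_host=uat.example.com                      # placeholder
if read -t 2 banner < "/dev/tcp/$uat_host/22"; then
    sh -x wipdev.sh >> ${sftpDevLog}          # port 22 answered, go ahead
else
    echo "UAT host unreachable, skipping wipdev.sh" >&2
fi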
To prevent such problems, an alternative is to set a timeout: with any of the ssh, scp or sftp commands you can set a connection timeout with the option -o ConnectTimeout=15, or, implicitly via ~/.ssh/config:
Host 1.2.3.4 myserver1
    ConnectTimeout 15
The commands will return non-zero on timeout (though the three commands may not all return the same exit code on timeout). See this related question: how to make SSH command execution to timeout
Finally, if you have GNU parallel you may use its sem command to limit concurrency to prevent this kind of problem, see https://unix.stackexchange.com/questions/168978/limit-maximum-number-of-concurrent-scp-processes-running-on-a-host .
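For example, a rough sketch with sem, reusing the scripts from the question (-j 3 and the semaphore id are arbitrary choices):
sem -j 3 --id uat "sh -x wipprod.sh   >> ${sftpProdLog}"
sem -j 3 --id uat "sh -x wipdev.sh    >> ${sftpDevLog}"
sem -j 3 --id uat "sh -x wipdevone.sh >> ${sftpDevoneLog}"
sem --wait --id uat        # block until every queued job has finished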
I am using a bash script to connect to an FTP server in order to delete a file.
I would like to store the output message and code of the delete command executed on the FTP server into a variable of my script.
How could I do this?
Here is my snippet:
...
function delete_on_ftp {
ftp -i -n $ftp_host $ftp_port <<EOF
quote USER $ftp_login
quote PASS $ftp_pass
delete $1
quit
EOF
}
output_cmd=$(delete_on_ftp $myfile)
...
By the way, with the approach above I only get the message and have no way to get the return code. Is there another way that would allow me to get both the code and the message, in one or two variables?
Thanks, Cheers
I just tested the following curl command, which makes your task easy.
curl --ftp-ssl -vX "DELE oldfile.pdf" ftp://$user:$pass@$server/public_html/downloads/
Please do not forget the slash at the end of your directory; it is necessary.
curl: (19) RETR response: 550
550 oldfile.pdf: No such file or directory
curl: (19) RETR response: 250
250 DELE command successful
curl is available at http://curl.haxx.se/.
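To land both the message and the return code in variables, as the question asks, something like this sketch should do; the variable names come from the question and the directory path from the example above:
output_cmd=$(curl --ftp-ssl -v -X "DELE $myfile" "ftp://$ftp_login:$ftp_pass@$ftp_host/public_html/downloads/" 2>&1)
output_code=$?                    # curl's exit code, e.g. 0 on success, 19 on a failed DELE
echo "code: $output_code"
echo "message: $output_cmd"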
One of the ways to get FTP to act automatically is to use a netrc file. By default, FTP will use $HOME/.netrc, but you can override that via the -N parameter. The format of a netrc file is fairly straightforward. A line either contains login information for a system or is part of a macro definition (macdef). Here's an example below:
Netrc File
machine mysystem login renard password swordfish
machine another login renard password 123456
default login renard password foofighter
macdef init
binary
cd foo
get bar
delete bar
quit
macdef fubar
...
The first three lines are the logins for various systems; the default entry is the login for any system you don't define a particular login for. The lines that start with macdef are macros you define to perform a series of steps for you. The init macro runs automatically right after login. If its last line is quit, it will quit out of FTP for you. There should be a blank line to end the macro (although most implementations will also take end of file as the end of the macdef).
You can create a netrc file on the fly, put your FTP commands in it, and then run ftp with that netrc file:
cat > $netrc_file <<EOF
machine $ftp_host login $ftp_login password $ftp_password
macdef init
delete $my_file
quit
EOF
ftp -N $netrc_file $ftp_host
You can capture the output via stdout, or into a variable, and then parse it for what you need:
ftp -N $netrc_file $ftp_host | tee $ftp_output
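Or capture it straight into a variable, reusing the question's output_cmd name; checking for the 250 reply string is an assumption about what the server sends back:
output_cmd=$(ftp -N "$netrc_file" "$ftp_host" 2>&1)
case "$output_cmd" in
    *"250 "*) echo "delete succeeded" ;;
    *)        echo "delete failed: $output_cmd" >&2 ;;
esac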
Other answers on this question should provide you what you want.
However, if you are keen on specifically using the ftp command, you can use the expect command for the same purpose...
Note that this is not the best way to achieve what you are trying to do.
expect -c "log_user 0;
spawn ftp -i -n $ftp_host $ftp_port;
expect \"<add ftp login username: prompt details here>\"
send \"quote USER $ftp_login\r\n\"
expect \"<add ftp login password: prompt details here>\"
send \"quote PASS $ftp_pass\r\n\"
expect \"<add ftp shell prompt details here>\"
log_user 1; send \"delete $1\r\n\"
log_user 0;
expect \"<add ftp shell prompt details here>\"
send \"quit\r\n\";
interact"
You may need to add some more lines in the above for the login & shell prompts returned by the ftp command.
Scenario:
I have to transfer approximately 3000 files, 30 to 35 MB each, from one server to another (both servers are IBM AIX servers).
These files are in .gz format. They are unzipped at the destination using the gunzip command to be of use.
The way I am doing it now:
I have made .sh files containing FTP scripts for 500 files each. These .sh files, when run, transfer the files to the destination. At the destination I keep checking how many files have arrived; as soon as 100 files have arrived, I run gunzip on those 100 files, then do the same for the next 100 files, and so on. I run gunzip in batches of 100 just to save time.
What I have in mind:
I am looking for a command, or any other way, that will FTP my files to the destination and, as soon as 100 files have been transferred, start unzipping them, BUT this unzipping should not pause the transfer of the remaining files.
Script that I tried:
ftp -n 192.168.0.22 << EOF
quote user username
quote pass password
cd /gzip_files/files
lcd /unzip_files/files
prompt n
bin
mget file_00028910*gz
! gunzip file_00028910*gz
mget file_00028911*gz
! gunzip file_00028911*gz
mget file_00028912*gz
! gunzip file_00028912*gz
mget file_00028913*gz
! gunzip file_00028913*gz
mget file_00028914*gz
! gunzip file_00028914*gz
bye
EOF
The drawback in the above code is that when the
! gunzip file_00028910*gz
line is executing, the FTP for the next batch, i.e. the FTP for file_00028911*gz, is paused, wasting a lot of time and bandwidth.
The ! mark is used to run operating system commands from within the ftp prompt.
I hope I have explained my scenario properly. I will update the post if I find a solution; if anyone already has one, do reply.
Regards
Yash.
Since you seem to be doing this on a UNIX system, you probably have Perl installed. You might try the following Perl code:
use strict;
use warnings;
use Net::FTP;

my @files = @ARGV;          # get files from command line
my $server = '192.168.0.22';
my $user   = 'username';
my $pass   = 'password';
my $gunzip_after = 100;     # collect up to 100 files

my $ftp = Net::FTP->new($server) or die "failed connect to the server: $!";
$ftp->login($user, $pass) or die "login failed";

my $pid_gunzip;
while (1) {
    my @collect4gunzip;
    GET_FILES:
    while (my $file = shift @files) {
        my $local_file = $ftp->get($file);
        if ( ! $local_file ) {
            warn "failed to get $file: " . $ftp->message;
            next;
        }
        push @collect4gunzip, $local_file;
        last if @collect4gunzip == $gunzip_after;
    }
    @collect4gunzip or last;   # no more files ?

    while ( $pid_gunzip && kill(0, $pid_gunzip) ) {
        # gunzip is still running, wait because we don't want to run multiple
        # gunzip instances at the same time
        warn "wait for last gunzip to return...\n";
        wait();
        # instead of waiting for gunzip to return we could go back to retrieve
        # more files and add them to @collect4gunzip
        # goto GET_FILES;
    }

    # last gunzip is done, start to gunzip the collected files
    defined( $pid_gunzip = fork() ) or die "fork failed: $!";
    if ( ! $pid_gunzip ) {
        # child process should run gunzip
        # maybe one needs to split this into multiple gunzip calls to make
        # sure the command line does not get too long!!
        system( 'gunzip', @collect4gunzip );
        # child will exit once done
        exit(0);
    }
    # parent continues with getting more files
}
It's not tested, but at least it passes the syntax check.
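Assuming it is saved as fetch_and_gunzip.pl (a hypothetical name), it can be driven from the shell with the remote file names as arguments, for example from a prepared list; the script cannot glob on the remote side, so the names must be supplied explicitly:
perl fetch_and_gunzip.pl $(cat remote_file_list.txt)   # remote_file_list.txt: one remote file name per line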
One of two solutions. Don't call gunzip directly; call "blah", where "blah" is a script:
#!/bin/sh
gunzip "$#" &
so the gunzip is put into the background, the script returns immediately, and you continue with the FTP. The other thought is to just add the & to the shell-escape command -- I bet that would work just as well, i.e. within the ftp script, do:
! gunzip file_00028914*gz &
But... I believe you are somewhat leading yourself astray. rsync and other solutions are the way to go for many reasons.