BASH: sending commands to ftp and validating status codes

I want to write a bash script that runs ftp in the background. I need some way to send commands to it and receive the responses. For example, I want to run ftp and then send it:
user username pass
cd foo
ls
binary
mput *.html
and then receive the status codes and verify them. I tried to do it this way:
tail -n 1 -f in | ftp -n host >> out &
and then read the out file to verify the responses. But it doesn't work. Can somebody show me the right way? Thanks a lot.

I'd run one set of commands, check the output, and then run the second set in reaction to the output. You could use here-documents for the command sets and command substitution to capture the output in a variable, e.g. like this:
output=$(cat <<EOF | ftp -n host
user username pass
cd foo
ls
binary
mput *.html
EOF
)
if [[ $output =~ "error message" ]]; then
# do stuff
fi
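
If you also want the numeric FTP reply codes the question mentions, one approach is to run ftp with -v (verbose) so the server's replies end up in the captured output, then scan for 4xx/5xx codes, which signal transient and permanent failures. A sketch, assuming a BSD-style ftp client where -i disables mput's per-file prompting and each server reply line starts with its three-digit code:
output=$(ftp -inv host <<EOF
user username pass
cd foo
binary
mput *.html
EOF
)
# 4xx and 5xx replies indicate transient and permanent failures
if grep -qE '^[45][0-9]{2}[ -]' <<< "$output"; then
    echo "FTP reported an error:" >&2
    grep -E '^[45][0-9]{2}[ -]' <<< "$output" >&2
    exit 1
fi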

Related

How to store the value in a variable

I'm trying to get the used storage of an FTP server through lftp.
lftp :~> open username:password@IP
lftp username@IP:~> du
897146 ./volume(sda1)
897146 .
I want to get the value of 897146 from a sh script.
This is what I got so far:
#!/bin/bash
FTP_PASS=password
FTP_HOST=IP
FTP_USER=username
LFTP=lftp
lftp << EOF
open ${FTP_USER}:${FTP_PASS}@${FTP_HOST}
FOO="$(du)"
quit
EOF
echo "$FOO"
But I'm getting
Unknown command `FOO=9544 ./logs'.
Unknown command `9636'.
The du command runs inside the lftp session, so its output appears within the output of the lftp command itself; shell assignments like FOO=... are not valid lftp commands, which is why you see the Unknown command errors. To get the result of du, capture the output of the whole lftp command in your variable:
#!/usr/bin/env bash
FTP_PASS=password
FTP_HOST=IP
FTP_USER=username
FOO=$(lftp << EOF | filter_out_things_unrelated_to_du
open ${FTP_USER}:${FTP_PASS}@${FTP_HOST}
du
quit
EOF
)
echo "$FOO"
You will probably need to filter out the session header, the remote server's MOTD, and anything else not related to the output of du.
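
If your lftp supports the -u (user,password) and -e (run commands) options, you can skip the here-document and reduce the filtering to a single awk call. A sketch, assuming du -s prints a final summary line whose first field is the total, as in the session shown above:
#!/usr/bin/env bash
FTP_PASS=password
FTP_HOST=IP
FTP_USER=username
# keep only the first field of du's last line, e.g. 897146 from "897146 ."
FOO=$(lftp -u "${FTP_USER},${FTP_PASS}" -e 'du -s; quit' "$FTP_HOST" \
      | awk 'END { print $1 }')
echo "$FOO"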

How to send an interactive command in an expect script

I have a small expect script, and I want to send a command based on the output. Here is an example:
#! /usr/bin/expect
spawn ssh root@hostname
expect "Password:"
send "12345\r"
expect "root@host:#"
send "ls -lrt\r" ;# depending on this output I need to delete a file
From here, if I have the file list a, b, c, d, I want to send "rm a", but the file name will change each time I run the script.
I don't know how to make the script wait until I enter input, and I don't want to type the rm command every time; I only want to type the file name. (This is an example; the real command is long, and I don't want to type the same long command every time.)
So what I want is for the script to wait until I type just the file name, and after I type it, send "rm filename" and continue with the rest of the script.
Please help.
This does not need to be interactive at all. I assume your requirement is to delete the oldest file, so do this:
ssh root@hostname 'stat -c "%Y:%n" * | sort -t: -k1,1n | head -1 | cut -d: -f2- | xargs echo rm'
# .. remove the "echo" if you're satisfied it finds the right file .................... ^^^^
Use expect_user, which reads from the user's terminal rather than from the spawned process:
#!/usr/bin/expect
spawn ssh root@hostname
expect "Password:"
send "12345\r"
expect "root@host:#"
send "ls -lrt\r" ;# depending on this output I need to delete a file
expect_user -re "(.*)\n" {
    set filename $expect_out(1,string)
    send "ls -al $filename\r" ;# substitute with the desired command
}
expect eof
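If the expect script exists only to save typing the long command, a plain bash wrapper may be simpler: prompt for the file name, then run the command over ssh. A sketch (you'll be prompted for the ssh password unless you have keys set up; printf %q quotes the name safely for the remote shell):
#!/usr/bin/env bash
# prompt for just the file name, then run the long command remotely
read -r -p "file to remove: " filename
ssh root@hostname "ls -al $(printf '%q' "$filename")" # substitute rm once verified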

bash script to accept log on stdin and email log if inputting process fails

I'm a sysadmin and I frequently have a situation where I have a script or command that generates a lot of output, which I would like emailed to me only if the command fails. It's pretty easy to write a wrapper script that runs the command, collects the output, and emails it if the command fails, but I was thinking I should be able to write a command that
1) accepts log info on stdin
2) waits for the inputting process to exit and sees what its exit status was
3a) if the inputting process exited cleanly, appends the logging input to a normal log file
3b) if the inputting process failed, appends the logging input to the normal log and also sends me an email.
It would look something like this on the command line:
something_important | mailonfail.sh me@example.com /var/log/normal_log
That would make it really easy to use in crontabs.
I'm having trouble figuring out how to make my script wait for the writing process and evaluate how that process exits.
Just to be extra clear, here's how I can do it with a wrapper:
#! /bin/bash
something_important > output
ERR=$?
if [ "$ERR" -ne "0" ] ; then
    mail -s "something_important failed" me@example.com < output
fi
cat output >> /var/log/normal_log
Again, that's not what I want; I want to write a script and pipe commands into it.
Does that make sense? How would I do that? Am I missing something?
Thanks Everyone!
-Dylan
Yes, it does make sense, and you are close.
Here is some advice:
#!/bin/sh
TEMPFILE=$(mktemp)
trap 'rm -f "$TEMPFILE"' EXIT
if ! something_important > "$TEMPFILE"; then
    mail -s 'something goes oops' -a "$TEMPFILE" you@example.net
fi
cat "$TEMPFILE" >> /var/log/normal.log
- I won't use bashisms, so /bin/sh is fine.
- Create a temporary file to avoid conflicts, using mktemp(1).
- Use trap to remove the file when the script exits, normally or not.
- If the command fails, attach the file, which may or may not be preferable to embedding it in the message body.
- If it's a big file you could even gzip it, but the attachment method will change:
# using mailx
gzip -c9 $TEMPFILE | uuencode fail.log.gz | mailx -s subject ...
# using mutt
gzip $TEMPFILE
mutt -a $TEMPFILE.gz -s ...
gzip -d $TEMPFILE.gz
etc.
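One more note on the pipe-style invocation the question asks for: the process reading a pipe cannot see the exit status of the process writing to it, so a pure mailonfail.sh filter has no reliable way to learn whether something_important failed. The usual workaround is to let the script run the command itself, wrapper-style. A sketch along those lines (the script name, recipient, and log path are the question's hypothetical ones):
#!/usr/bin/env bash
# usage: mailonfail.sh recipient logfile command [args...]
recipient=$1
logfile=$2
shift 2
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
# run the command, capturing stdout and stderr
if ! "$@" > "$tmp" 2>&1; then
    mail -s "$1 failed" "$recipient" < "$tmp"
fi
cat "$tmp" >> "$logfile"  # always append to the normal log
Invocation then becomes mailonfail.sh me@example.com /var/log/normal_log something_important rather than a pipeline.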

Bash, stdout redirect of commands like scp

I have a bash script with some scp commands inside.
It works very well, but if I try to redirect stdout with "./myscript.sh > log", only my explicit echos show up in the log file.
The scp output is missing.
if $C_SFTP; then
    scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
Ok, what should I do now?
Thank you
scp uses the interactive terminal to print that fancy progress bar. Printing that output to a file makes no sense, so scp detects when its output is redirected somewhere other than a terminal and disables the progress display.
What does make sense, however, is redirecting its error output into the file in case there are errors. You can also disable standard output if you want.
There are two possible ways of doing this. First is to invoke your script with redirection of both stderr and stdout into the log file:
./myscript.sh >log 2>&1
Second is to tell the shell to do this right in your script:
#!/bin/sh
exec 2>&1
if $C_SFTP; then
    scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
...
If you need to check for errors, just verify that $? is 0 after the scp command is executed:
if $C_SFTP; then
    scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
    RET=$?
    if [ $RET -ne 0 ]; then
        echo SOS >&2
        exit $RET
    fi
fi
Another option is to use set -e in your script, which tells bash to abort the script as soon as one of its commands fails:
#!/bin/bash
set -e
...
Hope it helps. Good luck!
You can simply test your tty with:
[~]# echo "hello" > /dev/tty
hello
If that works, try:
[~]# scp <user>@<host>:<source> /dev/tty 2>/dev/null
This has worked for me...
Unfortunately, scp's output can't simply be redirected to stdout, it seems.
I wanted to get the average transfer speed of my scp transfer, and the only way I could manage that was to send stderr and stdout to a file, and then echo the file to stdout again.
For example:
#!/bin/sh
echo "Starting with upload test at `date`:"
scp -v -i /root/.ssh/upload_test_rsa /root/uploadtest.tar.gz speedtest@myhost:/home/speedtest/uploadtest.tar.gz > /tmp/scp.log 2>&1
grep -i bytes /tmp/scp.log
rm -f /tmp/scp.log
echo "Done with upload test at `date`."
Which would result in the following output:
Starting with upload test at Thu Sep 20 13:04:44 SAST 2012:
Transferred: sent 10191920, received 5016 bytes, in 15.7 seconds
Bytes per second: sent 650371.2, received 320.1
Done with upload test at Thu Sep 20 13:05:04 SAST 2012.
I found a rough solution for scp:
$ scp -qv $USER@$HOST:$SRC $DEST
According to the scp man page, -q (quiet) disables the progress meter, as well as all other output. Add -v (verbose) as well and you get heaps of output... and the progress meter is still disabled! Disabling the progress meter is what allows you to redirect the output to a file.
If you don't need all the authentication debug output, redirect stderr to stdout and grep out the bits you don't want:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug
Final output is something like this:
Executing: program /usr/bin/ssh host myhost, user (unspecified), command scp -v -f ~/file.txt
OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
Warning: Permanently added 'myhost,10.0.0.1' (ECDSA) to the list of known hosts.
Authenticated to myhost ([10.0.0.1]:22).
Sending file modes: C0644 426 file.txt
Sink: C0644 426 file.txt
Transferred: sent 2744, received 2464 bytes, in 0.0 seconds
Bytes per second: sent 108772.7, received 97673.4
Plus, this can be redirected to a file:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug > scplog.txt

A script to ssh into a remote folder and check all files?

I have a public/private key pair set up so I can ssh to a remote server without having to type a password. I'm trying to write a shell script that will list all the folders in a particular directory on the remote server. My question is: how do I specify the remote location? Here's what I've got:
#!/bin/bash
for file in myname@example.com:dir/*
do
    if [ -d "$file" ]
    then
        echo $file;
    fi
done
Try this:
for file in `ssh myname@example.com 'ls -d dir/*/'`
do
    echo $file;
done
Or simply:
ssh myname@example.com 'ls -d dir/*/'
Explanation:
The ssh command accepts an optional command after the hostname; if a command is provided, ssh executes that command instead of the login shell, then passes the command's stdout through as its own stdout. Here we are simply passing an ls command.
ls -d dir/*/ is a trick to make ls skip regular files and list out only the directories.
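If you'd rather not parse ls output (which breaks on unusual file names), the same idea works with find, assuming the remote find supports -mindepth and -maxdepth (GNU find does):
ssh myname@example.com 'find dir -mindepth 1 -maxdepth 1 -type d' |
while IFS= read -r dir
do
    echo "$dir"
done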
