I have a bash script with some scp commands inside.
It works very well, but if I try to redirect stdout with "./myscript.sh >log", only my explicit echoes show up in the "log" file.
The scp output is missing.
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
Ok, what should I do now?
Thank you
scp uses the interactive terminal to draw that fancy progress bar. Printing that output to a file makes no sense, so scp detects when its output is redirected to something other than a terminal and disables the progress output.
What does make sense, however, is to redirect its error output into the log file so that any errors are captured. You can also discard standard output if you want.
There are two possible ways of doing this. The first is to invoke your script with redirection of both stderr and stdout into the log file:
./myscript.sh >log 2>&1
The second is to tell the shell to do this right in your script:
#!/bin/sh
exec 2>&1
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
...
If you need to check for errors, just verify that $? is 0 after the scp command is executed:
if $C_SFTP; then
    scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
    RET=$?
    if [ $RET -ne 0 ]; then
        echo SOS >&2
        exit $RET
    fi
fi
Another option is to use set -e in your script, which tells bash to abort the script as soon as one of its commands fails:
#!/bin/bash
set -e
...
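For the scp snippet from the question, a minimal sketch of that approach might look like this (the ERR trap and its message are illustrative additions, not part of the original script):
#!/bin/bash
set -e                                  # abort as soon as any command fails
trap 'echo "script failed" >&2' ERR     # optional: print a message before exiting

if $C_SFTP; then
    scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi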
Hope it helps. Good luck!
You can simply test your tty with:
[ ~]#echo "hello" >/dev/tty
hello
If that works, try:
[ ~]# scp <user>@<host>:<source> /dev/tty 2>/dev/null
This has worked for me...
Unfortunately, scp's output can't simply be redirected to stdout, it seems.
I wanted to get the average transfer speed of my scp transfer, and the only way I could manage that was to send stderr and stdout to a file, and then echo the file to stdout again.
For example:
#!/bin/sh
echo "Starting with upload test at `date`:"
scp -v -i /root/.ssh/upload_test_rsa /root/uploadtest.tar.gz speedtest@myhost:/home/speedtest/uploadtest.tar.gz > /tmp/scp.log 2>&1
grep -i bytes /tmp/scp.log
rm -f /tmp/scp.log
echo "Done with upload test at `date`."
Which would result in the following output:
Starting with upload test at Thu Sep 20 13:04:44 SAST 2012:
Transferred: sent 10191920, received 5016 bytes, in 15.7 seconds
Bytes per second: sent 650371.2, received 320.1
Done with upload test at Thu Sep 20 13:05:04 SAST 2012.
I found a rough solution for scp:
$ scp -qv $USER@$HOST:$SRC $DEST
According to the scp man page, -q (quiet) disables the progress meter, as well as all other output. Add -v (verbose) as well and you get heaps of output... and the progress meter is still disabled! Disabling the progress meter is what allows you to redirect the output to a file.
If you don't need all the authentication debug output, redirect stderr to stdout and grep out the bits you don't want:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug
Final output is something like this:
Executing: program /usr/bin/ssh host myhost, user (unspecified), command scp -v -f ~/file.txt
OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
Warning: Permanently added 'myhost,10.0.0.1' (ECDSA) to the list of known hosts.
Authenticated to myhost ([10.0.0.1]:22).
Sending file modes: C0644 426 file.txt
Sink: C0644 426 file.txt
Transferred: sent 2744, received 2464 bytes, in 0.0 seconds
Bytes per second: sent 108772.7, received 97673.4
Plus, this can be redirected to a file:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug > scplog.txt
Related
I have multiple scripts to connect to an sftp server and put/get files. Recently the sftp admins added a large header for all the logins, which has blown up my log files, and I'd like to exclude that as it's blowing up the file sizes as well as making the logs somewhat unreadable.
As is, the commands are of the following format:
sftp user@host <<EOF &>> ${LOGFILE}
get ...
put ...
exit
EOF
Now, I've tried grepping out all the new lines, which all start with a pipe (they basically made a box to put them in).
sftp user@host <<EOF | grep -v '^|' &>> ${LOGFILE}
get ...
put ...
exit
EOF
This excludes the lines from ${LOGFILE} but throws them to stdout instead, which means they end up in another log file, where we also don't want them (these scripts are called by a scheduler). Oddly, it also seems to filter out the first line of the connection attempt output,
Connecting to <host>...
and redirect that to stdout as well. Not the end of the world, but I do find it odd.
How can I filter the lines beginning with | so they don't show anywhere?
In
sftp user@host <<EOF &>> ${LOGFILE}
You are redirecting stdout and stderr to the logfile for appending data (&>>). But when you use
sftp user@host <<EOF | grep -v '^|' &>> ${LOGFILE}
you are only redirecting stdout to grep, leaving the stderr output of sftp to pass untouched. Finally, you are again redirecting stdout and stderr of grep to the logfile.
In fact, you are interested in redirecting both (stdout and stderr) from sftp, so you can use something like:
sftp user@host <<EOF |& grep -v '^|' >> ${LOGFILE}
or, in older versions of bash, using the specific redirection instead of the shorthand:
sftp user@host <<EOF 2>&1 | grep -v '^|' >> ${LOGFILE}
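Putting it together with the here-document body from the question (the get/put file names below are just placeholders), the corrected command could look like this:
sftp user@host <<EOF 2>&1 | grep -v '^|' >> ${LOGFILE}
get remote_file
put local_file
exit
EOF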
I'm a sysadmin and I frequently have a situation where I have a script or command that generates a lot of output which I would only like to have emailed to me if the command fails. It's pretty easy to write a script that runs the command, collects the output and emails it if the command fails, but I was thinking I should be able to write a command that
1) accepts log info on stdin
2) waits for the inputting process to exit and see what its exit status was
3a) if the inputting process exited cleanly, append the logging input to a normal log file
3b) if the inputting process failed, append the logging input to the normal log and also send me an email.
It would look something like this on the command line:
something_important | mailonfail.sh me@example.com /var/log/normal_log
That would make it really easy to use in crontabs.
I'm having trouble figuring out how to make my script wait for the writing process and evaluate how that process exits.
Just to be extra clear, here's how I can do it with a wrapper:
#! /bin/bash
something_important > output
ERR=$?
if [ "$ERR" -ne "0" ] ; then
    cat output | mail -s "something_important failed" me@example.com
fi
cat output >> /var/log/normal_log
Again, that's not what I want, I want to write a script and pipe commands into it.
Does that make sense? How would I do that? Am I missing something?
Thanks Everyone!
-Dylan
Yes, it does make sense, and you are close.
Here is some advice:
#!/bin/sh
TEMPFILE=$(mktemp)
trap "rm -f $TEMPFILE" EXIT
if ! something_important > $TEMPFILE; then
    mail -s 'something goes oops' -a $TEMPFILE you@example.net
fi
cat $TEMPFILE >> /var/log/normal.log
I won't use bashisms so /bin/sh is fine
create a temporary file to avoid conflicts using mktemp(1)
use trap to remove the file when the script exits, normally or not
if the command fails,
then attach the file, which may or may not be preferable to embedding it
if it's a big file you could even gzip it, but the attachment method will change:
# using mailx
gzip -c9 $TEMPFILE | uuencode fail.log.gz | mailx -s subject ...
# using mutt
gzip $TEMPFILE
mutt -a $TEMPFILE.gz -s ...
gzip -d $TEMPFILE.gz
etc.
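If you really do want the pipe form from your question, keep in mind that a script reading from a pipe cannot see the exit status of the process writing to it. One rough, untested sketch (mailonfail.sh is a hypothetical helper, not an existing tool) is to have the caller append the status as a final marker line and let the script parse it:
#!/bin/bash
# Hypothetical mailonfail.sh <address> <logfile>
# Expects the caller to append a final "EXIT:<status>" line, e.g.:
#   { something_important; echo "EXIT:$?"; } 2>&1 | mailonfail.sh me@example.com /var/log/normal_log
ADDRESS=$1
LOGFILE=$2
TMP=$(mktemp)
trap 'rm -f "$TMP"' EXIT

cat > "$TMP"                                     # collect everything from stdin
STATUS=$(tail -n 1 "$TMP" | sed -n 's/^EXIT://p')
sed '$d' "$TMP" >> "$LOGFILE"                    # log the output without the marker line
if [ "${STATUS:-1}" -ne 0 ]; then
    sed '$d' "$TMP" | mail -s "command failed (exit ${STATUS:-unknown})" "$ADDRESS"
fi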
When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I was using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
sleep 5
kill -TERM $PID
cat output.txt
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a csv file whose name would be prefixed by [OUTPUT-PREFIX]. This file will be updated after every 30 seconds. If you give a prefix like /var/log/test then the file will go in /var/log/ and would look like test-XX.csv
You should then be able to access the output file(s) by any other tool while airodump is running.
With airodump-ng 1.2 rc4 you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout. So the following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better for parsing the output with other programs/applications.
I want to write a bash script that runs ftp in the background. I want some way to send commands to it and receive responses. For example I want to run ftp, then send it
user username pass
cd foo
ls
binary
mput *.html
and receive status codes and verify them. I tried to do it this way
tail -n 1 -f in | ftp -n host >> out &
and then reading out file and verifying. But it doesn't work. Can somebody show me the right way? Thanks a lot.
I'd run one set of commands, check the output, and then run the second set in reaction to the output. You could use here-documents for the command sets and command substitution to capture the output in a variable, e.g. like this:
output=$(cat <<EOF | ftp -n host
user username pass
cd foo
ls
binary
mput *.html
EOF
)
if [[ $output =~ "error message" ]]; then
# do stuff
fi
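To make the "run the second set in reaction to the output" idea concrete, you could follow that check with another invocation of the same pattern, for example (report.html is just a placeholder):
# second command set, sent only after the first one was checked
output2=$(cat <<EOF | ftp -n host
user username pass
cd foo
get report.html
EOF
)
if [[ $output2 =~ "error message" ]]; then
    # handle the failure of the second set
    exit 1
fi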
I wrote a bash script which is supposed to read usernames and IP addresses from a file and execute a command on them via ssh.
This is hosts.txt :
user1 192.168.56.232
user2 192.168.56.233
This is myScript.sh :
cmd="ls -l"
while read line
do
set $line
echo "HOST:" $1#$2
ssh $1#$2 $cmd
exitStatus=$?
echo "Exit Status: " $exitStatus
done < hosts.txt
The problem is that execution seems to stop after the first host is done. This is the output:
$ ./myScript.sh
HOST: user1@192.168.56.232
total 2748
drwxr-xr-x 2 user1 user1 4096 2011-11-15 20:01 Desktop
drwxr-xr-x 2 user1 user1 4096 2011-11-10 20:37 Documents
...
drwxr-xr-x 2 user1 user1 4096 2011-11-10 20:37 Videos
Exit Status: 0
$
Why does it behave like this, and how can I fix it?
In your script, the ssh job gets the same stdin as read line, and in your case it happens to eat up all the lines on the first invocation. So read line only gets to see the very first line of the input.
Solution: Close stdin for ssh, or better redirect from /dev/null. (Some programs
don't like having stdin closed)
while read line
do
ssh server somecommand </dev/null # Redirect stdin from /dev/null
# for ssh command
# (Does not affect the other commands)
printf '%s\n' "$line"
done < hosts.txt
If you don't want to redirect from /dev/null for every single job inside the loop, you can also try one of these:
while read line
do
{
commands...
} </dev/null # Redirect stdin from /dev/null for all
# commands inside the braces
done < hosts.txt
# In the following, let's not override the original stdin. Open hosts.txt on fd3
# instead
while read line <&3 # execute read command with fd0 (stdin) backed up from fd3
do
commands... # inside, you still have the original stdin
# (maybe the terminal) from outside, which can be practical.
done 3< hosts.txt # make hosts.txt available as fd3 for all commands in the
# loop (so fd0 (stdin) will be unaffected)
# totally safe way: close fd3 for all inner commands at once
while read line <&3
do
{
commands...
} 3<&-
done 3< hosts.txt
The problem that you are having is that the ssh process is consuming all of the stdin, so read doesn't see any of the input after the first ssh command has run. You can use the -n flag for ssh to prevent this from happening, or you can redirect /dev/null to the stdin of the ssh command.
See the following for more information:
http://mywiki.wooledge.org/BashFAQ/089
Make sure the ssh command does not read from hosts.txt, by using ssh -n.
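Applied to the script from the question, a minimal sketch with -n would be:
cmd="ls -l"
while read line
do
    set $line
    echo "HOST:" $1@$2
    ssh -n $1@$2 $cmd        # -n stops ssh from swallowing the rest of hosts.txt
    exitStatus=$?
    echo "Exit Status: " $exitStatus
done < hosts.txt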
I have a feeling your question is unnecessarily verbose.
Essentially you should be able to reproduce the problem with:
while read line
do
echo $line
done < hosts.txt
Which should work just fine. Are you editing the right file? Are there special characters in it? Check it with a proper editor (e.g. vim).