I have created tests using shUnit2 (an xUnit-style unit test framework for Bourne-based shell scripts). Once the tests are executed I can see the output on the console, including the test status. I would like to redirect the output, including errors/exceptions, to a webpage, just like a rake task in RSpec. Your help is much appreciated.
When you type a command in the terminal you get the normal output from stdout. If you also want to see errors from stderr, you have to redirect stderr to stdout by appending 2>&1 to your command (YOUR_COMMAND 2>&1).
To view the output of a command in a web browser, you can pipe it to netcat (e.g. YOUR_COMMAND 2>&1 | netcat -l -p PORT_NUMBER). The command then waits until you point your web browser at localhost:PORT_NUMBER. After serving the request, netcat echoes the browser's request and quits. You can suppress that request dump by redirecting netcat's stdout to /dev/null (YOUR_COMMAND 2>&1 | netcat -l -p PORT_NUMBER 2>&1 >/dev/null).
If you want to keep this "command output server" alive after the browser has fetched the content, you have to loop over the command. while true loops forever, so while true; do YOUR_COMMAND 2>&1 | netcat -l -p PORT_NUMBER 2>&1 >/dev/null; done keeps the server alive. With an & at the end you can run the whole thing in the background.
For example, your final command could look like this:
while true; do date 2>&1 | netcat -l -p 8888 2>&1 >/dev/null; done &
(Browse to 127.0.0.1:8888 to see the current date and time)
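Applied to the original question, and assuming the shUnit2 suite is driven by a script called ./run_tests.sh (a hypothetical name), serving the test results could look like this:
while true; do ./run_tests.sh 2>&1 | netcat -l -p 8888 2>&1 >/dev/null; done &
Each browser request to localhost:8888 triggers a fresh test run and returns its console output, test status and errors included. Note that this serves plain text, not a formatted webpage.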
I set up a cron job that runs a script; the script runs a command which renews Let's Encrypt certificates.
#!/bin/bash
/usr/local/sbin/certbot-auto renew --renew-hook "service nginx reload" -q >> /var/log/certbot-renew.log | mail -s "CERTBOT Renewals" test@test.com < /var/log/certbot-renew.log
exit 0
This produced an email every time the cron ran, but what I want is an email only if there is an error or a renewal. I've read that &> will also write errors; will it work if I replace >> with &>, or should I be using 2>&1 to capture both stdout and stderr?
In this command
command >>file 2>&1 | other command
the output is redirected to the file by >> (overriding the pipe), so nothing reaches the pipe. A tee can duplicate the output to both the file and the pipe:
command 2>&1 | tee -a file | other command
Otherwise, some shells accept &>> to redirect both stdout and stderr to a file in append mode.
The following command does the same; the order is important (fd 1 is redirected to the file, then fd 2 to fd 1):
command >>file 2>&1
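Applied to the cron script from the question, a possible rewrite would be (a sketch that keeps the original paths and flags; -E, which skips mails with an empty body, exists in heirloom/BSD mailx, but check your mail implementation):
/usr/local/sbin/certbot-auto renew --renew-hook "service nginx reload" -q 2>&1 | tee -a /var/log/certbot-renew.log | mail -E -s "CERTBOT Renewals" test@test.com
Since -q keeps certbot quiet unless something renews or fails, mail is only sent when there is something to report.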
When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I was using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT to capture the output in a file. Other threads mention STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The output should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The first number in the write call is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &   # append stderr to output.txt, run in background
PID=$!                                   # remember the background job's PID
sleep 5
kill -TERM $PID
cat output.txt
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a CSV file whose name is prefixed by [OUTPUT-PREFIX]. The file is updated every 30 seconds. If you give a prefix like /var/log/test, the file will go in /var/log/ and be named something like test-XX.csv.
You should then be able to access the output file(s) by any other tool while airodump is running.
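For example, to re-display the CSV as it is refreshed while airodump-ng is still running (assuming the /var/log/test prefix from above and the first capture file):
watch -n 30 cat /var/log/test-01.csv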
As of airodump-ng 1.2 rc4 you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes to stderr instead of stdout. The following is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better if you want to parse the output with other programs/applications.
Abstract: How to run an interactive task in the background?
Details: I am trying to run this simple script under the ash shell (Busybox) as a background task:
myscript.sh&
However the script stops immediately...
[1]+ Stopped (tty input) myscript.sh
The contents of myscript.sh (only the relevant part; other than that, I trap SIGINT, SIGHUP, etc.):
#!/bin/sh
catpid=0
START_COPY()
{
cat /dev/charfile > /path/outfile &
catpid = $!
}
STOP_COPY()
{
kill catpid
}
netcat SOME_IP PORT | while read EVENT
do
case $EVENT in
start) START_COPY;;
stop) STOP_COPY;;
esac
done
From simple command-line tests I found that both cat and netcat try to read from the tty.
Note that this netcat version does not have -e to suppress the tty.
Now what can be done to avoid myscript becoming stopped?
Things I have tried so far without any success:
1) netcat/cat ... < /dev/tty (or the output of tty)
2) Running the block containing cat and netcat in a subshell using (). This may work, but then how do I grab the PID of cat?
Over to you experts...
The problem still exists.
A simple test for you all to try:
1) In one terminal run netcat -l -p 11111 (without &)
2) In another terminal run netcat localhost 11111 & (this should stop after a while with the message Stopped (TTY input))
How to avoid this?
You probably want netcat's -d option, which tells it not to read from STDIN.
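With the script from the question, only the netcat line needs to change:
netcat -d SOME_IP PORT | while read EVENT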
I can confirm that -d will help netcat run in the background.
I was seeing the same issue with:
nc -ulk 60001 | nc -lk 60002 &
Every time I queried the jobs, the pipe input would stop.
Changing the command to the following fixed it:
nc -ulkd 60001 | nc -lk 60002 &
Are you sure you've given your script as is or did you just type in a rough facsimile meant to illustrate the general idea? The script in your question has many errors which should prevent it from ever running correctly, which makes me wonder.
The spaces around the = in catpid = $! make the line not a valid variable assignment. If that was in your original script, I am surprised you were not getting any errors.
The kill catpid line should fail because the literal word catpid is not a valid job id. You probably want kill "$catpid".
As for your actual question:
cat should be reading from /dev/charfile and not from stdin or anywhere else. Are you sure it was attempting to read tty input?
Have you tried redirecting netcat's input, like netcat < /dev/null, if you don't need netcat to read anything?
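With the script from the question, that would be (a sketch, if netcat's stdin really is the problem):
netcat SOME_IP PORT < /dev/null | while read EVENT
Though, as a later comment notes, this does not work with every netcat version.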
I have to use a netcat that doesn't have the -d option.
"echo -n | netcat ... &" seems to be an effective workaround: i.e. close the standard input to netcat immediately if you don't need to use it.
As it was not yet really answered: if you are using Busybox and the -d option is not available, the following command will keep netcat "alive" when sent to the background:
tail -f /dev/null | netcat ...
netcat < /dev/null and echo -n | netcat did not work for me.
Combining screen and disowning the process works for me, as the '-d' option is not valid anymore for my netcat. I tried redirecting like nc </dev/null, but the session ends prematurely (I need -q 1 to make sure the nc process stops once the file transfer is finished).
Set up the receiver side first.
On the receiver side, screen keeps stdin open for netcat so it won't be terminated.
EDIT: I was wrong; you need to enter the command INSIDE screen. If you redirect nc inline on the screen command line, you'll end up with no file saved, or with weird binary output flowing through your terminal while attached to the screen session. (For example, this is THE WRONG WAY: screen nc -l -p <listen port> -q 1 > /path/to/yourfile.bin)
Open screen, then press Return/Enter at the welcome message. A new blank shell will appear (you're inside screen now).
Type the command: nc -l -p 1234 > /path/to/yourfile.bin
Then press CTRL+a, followed by d, to detach from screen.
On the sender side, disown the process; nc quits 1 second after reaching EOF:
cat /path/to/yourfile.bin | nc -q1 100.10.10.10 1234 & disown
I have a bash script with some scp commands inside.
It works very well, but if I try to redirect my stdout with "./myscript.sh >log", only my explicit echos are shown in the "log" file.
The scp output is missing.
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
Ok, what should I do now?
Thank you
scp uses an interactive terminal in order to print that fancy progress bar. Printing that output to a file makes no sense, so scp detects when its output is redirected to somewhere other than a terminal and disables the progress output.
What does make sense, however, is to redirect its error output into the file in case there are errors. You might also want to suppress standard output.
There are two possible ways of doing this. The first is to invoke your script with both stderr and stdout redirected into the log file:
./myscript.sh >log 2>&1
Second, is to tell bash to do this right in your script:
#!/bin/sh
exec 2>&1   # from here on, stderr goes wherever stdout goes
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
fi
...
If you need to check for errors, just verify that $? is 0 after the scp command is executed:
if $C_SFTP; then
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
RET=$?
if [ $RET -ne 0 ]; then
echo SOS >&2
exit $RET
fi
fi
Another option is to use set -e in your script, which tells bash to abort the script as soon as one of the commands in it fails:
#!/bin/bash
set -e
...
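A minimal sketch combining set -e with the scp from the question (the variables are the ones from your script):
#!/bin/bash
set -e   # any failing command aborts the script immediately
scp -r $C_SFTP_USER@$C_SFTP_HOST:$C_SOURCE "$C_TMPDIR"
echo "copy finished"   # only reached if scp succeeded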
Hope it helps. Good luck!
You can simply test your tty with:
[~]# echo "hello" >/dev/tty
hello
If that works, try:
[~]# scp <user>@<host>:<source> /dev/tty 2>/dev/null
This has worked for me...
Unfortunately, it seems scp's output can't simply be redirected to stdout.
I wanted to get the average transfer speed of my scp transfer, and the only way I could manage to do that was to send stderr and stdout to a file, and then echo the file to stdout again.
For example:
#!/bin/sh
echo "Starting with upload test at `date`:"
scp -v -i /root/.ssh/upload_test_rsa /root/uploadtest.tar.gz speedtest@myhost:/home/speedtest/uploadtest.tar.gz > /tmp/scp.log 2>&1
grep -i bytes /tmp/scp.log
rm -f /tmp/scp.log
echo "Done with upload test at `date`."
Which would result in the following output:
Starting with upload test at Thu Sep 20 13:04:44 SAST 2012:
Transferred: sent 10191920, received 5016 bytes, in 15.7 seconds
Bytes per second: sent 650371.2, received 320.1
Done with upload test at Thu Sep 20 13:05:04 SAST 2012.
I found a rough solution for scp:
$ scp -qv $USER@$HOST:$SRC $DEST
According to the scp man page, -q (quiet) disables the progress meter, as well as all other output. Add -v (verbose) as well and you get heaps of output... and the progress meter is still disabled! Disabling the progress meter is what allows you to redirect the output to a file.
If you don't need all the authentication debug output, redirect stderr to stdout and grep out the bits you don't want:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug
Final output is something like this:
Executing: program /usr/bin/ssh host myhost, user (unspecified), command scp -v -f ~/file.txt
OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
Warning: Permanently added 'myhost,10.0.0.1' (ECDSA) to the list of known hosts.
Authenticated to myhost ([10.0.0.1]:22).
Sending file modes: C0644 426 file.txt
Sink: C0644 426 file.txt
Transferred: sent 2744, received 2464 bytes, in 0.0 seconds
Bytes per second: sent 108772.7, received 97673.4
Plus, this can be redirected to a file:
$ scp -qv $USER@$HOST:$SRC $DEST 2>&1 | grep -v debug > scplog.txt
I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands go to a logfile, in case I need it later; I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses its context during I/O redirection: '2>&1' converts STDERR into STDOUT, so I can no longer tell the errors apart, and with a plain 2> error.log the errors are missing from the combined log.
Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to the log file.
Stderr is then converted to stdout, which is piped to tee, which appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and also sends it to its own stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that, regardless of the order in which the shell redirection and tee open the file, you won't blow away the original. That said, stderr and stdout may be interleaved in an unexpected order.
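With the Bash 4 shorthand mentioned above, the same pipeline reads:
(./doit >> log) |& tee -a log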
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 onto your logfile, executes the commands with standard error directed to standard out and standard out directed to fd 5, and pipes stdout (which now carries only the stderr stream) to tee, which duplicates it to /dev/fd/5, i.e. the log file.
Here is how to run one or more commands, capturing the standard output and error in the order in which they are generated, to a logfile, while displaying only the standard error on any terminal screen you like. Works in bash on Linux, and probably in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
In window 1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window 2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Replace /dev/pts/2 with whatever tty you want the stderr displayed on.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command 2>&1 > stdout.txt | program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
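For instance, to save stdout to one file while stderr goes both to its own file and on to another program (a sketch; the mail command is a placeholder):
command 2>&1 > stdout.txt | tee stderr.txt | mail -s "There was an error" stefanl@example.org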
Add this at the beginning of your script:
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> $outfile)           # stdout: append to the logfile
exec 2> >(tee -a $outfile >&2)      # stderr: append to the logfile and echo to the real stderr
# write your code here
STDOUT and STDERR will be written to $outfile; only STDERR will be seen on the console.
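A quick way to verify the behaviour (./yourscript.sh is a placeholder for a script that starts with the lines above):
./yourscript.sh   # only lines written to STDERR appear on the console
cat logfile       # contains both STDOUT and STDERR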