When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I am using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
# Run airodump-ng in the background, appending its STDERR to output.txt
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
# Let it capture for 5 seconds, then terminate it
sleep 5
kill -TERM "$PID"
cat output.txt
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a CSV file whose name is prefixed with [OUTPUT-PREFIX] and which is updated every 30 seconds. If you give a prefix like /var/log/test, the file will go in /var/log/ and will be named something like test-XX.csv.
You should then be able to access the output file(s) with any other tool while airodump is running.
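For example, to watch the file grow from another shell while the capture runs (assuming the /var/log/test prefix above; airodump-ng numbers the output files, so the first run should produce test-01.csv):
tail -f /var/log/test-01.csv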
As of airodump-ng 1.2 rc4, you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed, you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout. The following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better if you want to parse the output with other programs/applications.
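For instance, a rough sketch of pulling the BSSID and ESSID columns out of my-01.csv with awk (the column positions are an assumption and may vary between airodump-ng versions; the AP section ends where the 'Station MAC' section begins):
# print BSSID (field 1) and ESSID (field 14) for each access point row
awk -F',' '/Station MAC/ {exit} NR > 2 && NF > 13 {print $1, $14}' my-01.csv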
Bash 4.2, CentOS.
The script:
#!/bin/bash
LOG_FILE=$homedir/logs/result.log
exec 3>&1
exec > >(tee -a ${LOG_FILE}) 2>&1
echo
end_shell_number=10
for script in `seq -f "%02g_*.sh" 0 $end_shell_number`; do
    if ! bash $homedir/$script; then
        printf 'Script "%s" failed, terminating...\n' "$script" >&2
        exit 1
    fi
done
It basically runs through sub-scripts numbered 00 to 10 and logs everything to LOG_FILE while also displaying it on stdout.
I was watching the log grow with tail -F ./logs/result.log,
and it was working nicely until the log file suddenly got removed.
The sub-scripts do nothing related to file descriptors or the log file; they remotely restart Tomcats via ssh commands.
Question:
tee was writing to the log file successfully until the file got erased, and logging stopped from then on.
Is there a filesize limit or timeout in tee? Is there any known behavior of tee where it deletes a file?
On what occasion does tee delete the file it was writing to?
tee does not delete nor truncate the file once it has started writing.
Is there a filesize limit or timeout in tee?
No.
Is there any known behavior of tee where it deletes a file?
No.
Note that the file can be removed by another process while tee keeps writing to the open file descriptor; the data still goes to the (now unlinked) file, but it is no longer accessible by name (see man 3 unlink).
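A minimal demonstration of this on Linux (assumes a /proc filesystem; the file name is arbitrary):
yes | tee /tmp/demo.log > /dev/null &   # keep tee writing indefinitely
teepid=$!                               # PID of tee, the last command in the pipeline
rm /tmp/demo.log                        # removing the name does not stop tee
ls -l /proc/$teepid/fd/                 # an entry still shows '/tmp/demo.log (deleted)'
kill $teepid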
I need to capture the output of a bash command which prompts for a user's confirmation without altering its flow.
I know only 2 ways to capture a command's output:
- output=$(command)
- command > file
In both cases, the whole process is blocked without any output.
For instance, without --assume-yes:
output=$(apt purge 2>&1 some_package)
I cannot print the output back because the command has not finished yet.
Any suggestion?
Edit 1: The user must be able to answer the prompt.
EDIT 2: I used dash-o's answer to complete a bash script allowing a user to remove/purge all obsolete packages (which have no installation candidate) from any Debian/Ubuntu distribution.
To capture partial output from a command that is waiting for a prompt, one can use tail on a temporary file, potentially with tee to keep the output flowing to the terminal if needed. The downside of this approach is that stderr needs to be tied to stdout, making it hard to tell the two apart (if that is an issue).
#!/bin/bash
log=/path/to/log-file
echo > "$log"
(
    # Poll the log until the prompt string appears, then act on the
    # partial output collected so far
    while ! grep -q -F 'continue?' "$log" ; do sleep 2 ; done
    output=$(<"$log")
    echo do-something "$output"
) &
# Run the command with output to the terminal as well as the log
apt purge some_package 2>&1 | tee -a "$log"
# If output to the terminal is not needed, replace the above command with
apt purge some_package > "$log" 2>&1
There is no generic way to tell (from a script) when exactly a program prompts for input. The above code looks for the prompt string ('continue?'), so it will have to be customized per command.
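If you need this for several commands, the polling loop can be factored into a small helper (a sketch; the function name and the 2-second interval are arbitrary choices):
# wait until PATTERN appears (as a fixed string) in LOGFILE
wait_for_prompt() {
    local pattern=$1 logfile=$2
    while ! grep -q -F "$pattern" "$logfile"; do sleep 2; done
}
wait_for_prompt 'continue?' "$log"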
In my case I have to run openvpn before ssh'ing into a server, and the openvpn command echoes "Initialization Sequence Completed".
So, I want my script to setup the openvpn and then ssh in.
My question is: How do you execute a command in bash in the background and await it to echo "completed" before running another program?
My current way of doing this is having 2 terminal panes open, one running:
sudo openvpn --config FILE
and in the other I run:
ssh SERVER
once the first terminal pane has shown me the "Initialization Sequence Completed" text.
It seems like you want to run openvpn as a process in the background while processing its stdout in the foreground.
exec 3< <(sudo openvpn --config FILE)
sed '/Initialization Sequence Completed$/q' <&3 ; cat <&3 &
# VPN initialization is now complete and running in the background
ssh SERVER
Explanation
Let's break it into pieces:
echo <(sudo openvpn --config FILE) will print out something like /dev/fd/63
the <(..) runs openvpn in the background, and...
attaches its stdout to a file descriptor, which is printed out by echo
exec 3< /dev/fd/63
(where /dev/fd/63 is the file descriptor printed in step 1)
this tells the shell to open that file descriptor (/dev/fd/63) for reading, and...
make it available at the file descriptor 3
sed '/Initialization Sequence Completed$/q' <&3
now we run sed in the foreground, but make it read from the file descriptor 3 we just opened
as soon as sed sees that the current line ends with "Initialization Sequence Completed", it quits (the /q part)
cat <&3 &
openvpn will keep writing to file descriptor 3 and eventually block if nothing reads from it
to prevent that, we run cat in the background to read the rest of the output
The basic idea is to run openvpn in the background, but capture its output somewhere so that we can run a command in the foreground that will block until it reads the magic words, "Initialization Sequence Completed". The above code tries to do it without creating messy temporary files, but a simpler way might be just to use a temporary file.
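For completeness, a rough sketch of that temporary-file variant (untested here; it polls the log once per second for the same magic words):
log=$(mktemp)
sudo openvpn --config FILE > "$log" 2>&1 &
until grep -q 'Initialization Sequence Completed' "$log"; do sleep 1; done
ssh SERVER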
Use -m 1 together with --line-buffered in grep to terminate grep after the first match in a continuous stream. This should work:
sudo openvpn --config FILE | grep -m 1 "Initialization Sequence Completed" --line-buffered && ssh SERVER
I am trying to redirect output of a command to a file. The command I am using (zypper) downloads packages from the internet. The command I am using is
zypper -x -n in geany >> log.txt
The command gradually prints output to the console. The problem I am facing is that the above command writes the output to the file all at once, after the command finishes executing. How do I redirect the output to the file as it appears on the terminal, rather than having it all written at the end?
Not with bash itself, but via the tee command:
zypper -x -n in geany | tee log.txt
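Note that tee truncates log.txt on each run; since your original command appended with >>, add -a to get the same behavior:
zypper -x -n in geany | tee -a log.txt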
&>>FILE COMMAND
will append both the standard output and the standard error of COMMAND to FILE.
In your case:
&>>log.txt zypper -x -n in geany
If you want to pipe a command through a filter, you must ensure that the command writes to standard output (file descriptor 1); if it writes to standard error (file descriptor 2), you have to redirect 2 to 1 before the pipe. Take into account that only stdout passes through a pipe.
So you have to do so:
2>&1 COMMAND | FILTER
If you want to grep the output and at the same time keep it in a log file, you have to duplicate it with tee and use a filter like ... | tee log-file | grep options
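Putting those pieces together for the zypper command from the question (the grep pattern is just an illustration):
zypper -x -n in geany 2>&1 | tee log.txt | grep -i error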
Abstract: How do you run an interactive task in the background?
Details: I am trying to run this simple script under the ash shell (BusyBox) as a background task.
myscript.sh&
However the script stops immediately...
[1]+ Stopped (tty input) myscript.sh
The myscript.sh contents (only the relevant part; other than that, I trap SIGINT, SIGHUP, etc.):
#!/bin/sh
catpid=0
START_COPY()
{
    cat /dev/charfile > /path/outfile &
    catpid = $!
}
STOP_COPY()
{
    kill catpid
}
netcat SOME_IP PORT | while read EVENT
do
    case $EVENT in
        start) START_COPY;;
        stop) STOP_COPY;;
    esac
done
From simple command-line tests I found that both cat and netcat try to read from the tty.
Note that this netcat version does not have -e to suppress the tty.
Now what can be done to avoid myscript becoming stopped?
Things I have tried so far without any success:
1) netcat/cat ... < /dev/tty (or the output of tty)
2) Running the block containing cat and netcat in a subshell using (). This may work, but then how do I grab the PID of cat?
Over to you experts...
The problem still exists.
A simple test for you all to try:
1) In one terminal run netcat -l -p 11111 (without &)
2) In another terminal run netcat localhost 11111 & (this should stop after a while with the message Stopped (tty input))
How to avoid this?
You probably want netcat's -d option, which tells it not to read from STDIN.
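Applied to the loop from the question, that would look like this (a sketch reusing the asker's SOME_IP and PORT placeholders):
netcat -d SOME_IP PORT | while read EVENT
do
    case $EVENT in
        start) START_COPY;;
        stop) STOP_COPY;;
    esac
done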
I can confirm that -d will help netcat run in the background.
I was seeing the same issue with:
nc -ulk 60001 | nc -lk 60002 &
Every time I queried the jobs, the pipe input would stop.
Changing the command to the following fixed it:
nc -ulkd 60001 | nc -lk 60002 &
Are you sure you've given your script as is or did you just type in a rough facsimile meant to illustrate the general idea? The script in your question has many errors which should prevent it from ever running correctly, which makes me wonder.
The spaces around the = in catpid = $! make the line not a valid variable assignment (it should be catpid=$!). If that was in your original script, I am surprised you were not getting any errors.
The kill catpid line should fail because the literal word catpid is not a valid job id. You probably want kill "$catpid".
As for your actual question:
cat should be reading from /dev/charfile and not from stdin or anywhere else. Are you sure it was attempting to read tty input?
Have you tried redirecting netcat's input, like netcat < /dev/null, if you don't need netcat to read anything?
I have to use a netcat that doesn't have the -d option.
"echo -n | netcat ... &" seems to be an effective workaround: i.e. close the standard input to netcat immediately if you don't need to use it.
As it was not yet really answered: if you are using BusyBox and the -d option is not available, the following command will keep netcat "alive" when sent to the background:
tail -f /dev/null | netcat ...
netcat < /dev/null and echo -n | netcat did not work for me.
Combining screen and disowning the process works for me, as the -d option is no longer valid for netcat. I tried redirecting like nc < /dev/null, but the session ends prematurely (I need -q 1 to make sure the nc process stops when the file transfer is finished).
Set up the receiver side first.
On the receiver side, screen keeps stdin open for netcat so it won't be terminated.
EDIT: I was wrong, you need to enter the command INSIDE screen. If you redirect nc inline in the screen command, you'll end up with no file saved, or weird binary output flowing in your terminal while attached to the screen session. (For example, this is THE WRONG WAY: screen nc -l -p <listen port> -q 1 > /path/to/yourfile.bin)
Open screen, then press Return/Enter at the welcome message. A new blank shell will appear (you're inside screen now).
Type the command: nc -l -p 1234 > /path/to/yourfile.bin
Then press CTRL+a, followed by d, to detach from screen.
On the sender side, disown the process; -q1 makes nc quit 1 second after reaching EOF:
cat /path/to/yourfile.bin | nc -q1 100.10.10.10 1234 & disown