bash sequence: wait for output, then start next program

In my case I have to run openvpn before ssh'ing into a server, and the openvpn command echoes "Initialization Sequence Completed" when it is ready.
So, I want my script to set up the openvpn connection and then ssh in.
My question is: how do you run a command in bash in the background and wait for it to echo "completed" before running another program?
My current way of doing this is having 2 terminal panes open, one running:
sudo openvpn --config FILE
and in the other I run:
ssh SERVER
once the first terminal pane has shown me the "Initialization Sequence Completed" text.

It seems like you want to run openvpn as a process in the background while processing its stdout in the foreground.
exec 3< <(sudo openvpn --config FILE)
sed '/Initialization Sequence Completed$/q' <&3 ; cat <&3 &
# VPN initialization is now complete and running in the background
ssh SERVER
Explanation
Let's break it into pieces:
echo <(sudo openvpn --config FILE) will print out something like /dev/fd/63
the <(..) runs openvpn in the background, and...
attaches its stdout to a file descriptor, which is printed out by echo
exec 3< /dev/fd/63
(where /dev/fd/63 is the file descriptor printed in step 1)
this tells the shell to open that file descriptor for reading, and...
make it available as file descriptor 3
sed '/Initialization Sequence Completed$/q' <&3
now we run sed in the foreground, but make it read from the file descriptor 3 we just opened
as soon as sed sees that the current line ends with "Initialization Sequence Completed", it quits (the /q part)
cat <&3 &
openvpn will keep writing to the pipe behind file descriptor 3 and will eventually block if nothing reads from it
to prevent that, we run cat in the background to read the rest of the output
The basic idea is to run openvpn in the background, but capture its output somewhere so that we can run a command in the foreground that will block until it reads the magic words, "Initialization Sequence Completed". The above code tries to do it without creating messy temporary files, but a simpler way might be just to use a temporary file.
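The same pattern can be tried without openvpn at all. Here is a minimal sketch where a printf process substitution stands in for the VPN client (the messages are made up for the demo):

```shell
#!/bin/bash
# Stand-in for `sudo openvpn --config FILE`: a fake producer whose
# stdout is attached to file descriptor 3 via process substitution.
exec 3< <(printf 'starting up\nInitialization Sequence Completed\nkeepalive\n')

# Blocks until the magic line is read from fd 3.
sed '/Initialization Sequence Completed$/q' <&3

# Drain whatever the background process keeps writing so it never blocks.
cat <&3 >/dev/null &

echo "VPN is up, safe to ssh now"
```

Swap the printf back for the real openvpn invocation and the structure is identical.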

Use -m 1 together with --line-buffered in grep to terminate grep after the first match in a continuous stream. This should work:
sudo openvpn --config FILE | grep -m 1 --line-buffered "Initialization Sequence Completed" && ssh SERVER
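The termination behaviour is easy to check with a dummy stream in place of openvpn (the sleep simulates the stream staying open after the match):

```shell
# A fake stream stands in for openvpn: grep -m 1 exits at the first
# match, so the && part runs even though the producer is still alive.
(printf 'connecting...\nInitialization Sequence Completed\n'; sleep 1; printf 'never shown\n') |
  grep -m 1 --line-buffered "Initialization Sequence Completed" && echo "would ssh now"
```

Note that once grep exits, nothing is reading openvpn's output anymore, so the long-running real command may eventually block or die from SIGPIPE when it next writes.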


How to capture the output of a bash command which prompts for a user's confirmation without blocking the output nor the command

I need to capture the output of a bash command which prompts for a user's confirmation without altering its flow.
I know only 2 ways to capture a command output:
- output=$(command)
- command > file
In both cases, the whole process is blocked without any output.
For instance, without --assume-yes:
output=$(apt purge 2>&1 some_package)
I cannot print the output back because the command is not done yet.
Any suggestion?
Edit 1: The user must be able to answer the prompt.
EDIT 2: I used dash-o's answer to complete a bash script allowing a user to remove/purge all obsolete packages (which have no installation candidate) from any Debian/Ubuntu distribution.
To capture partial output from a command that is waiting for a prompt, one can tail a temporary file, potentially with tee to keep the output flowing to the terminal if needed. The downside of this approach is that stderr needs to be tied to stdout, making it hard to tell the two apart (if that is an issue).
#!/bin/bash
log=/path/to/log-file
echo > "$log"
(
while ! grep -q -F 'continue?' "$log"; do sleep 2; done
output=$(<"$log")
echo do-something "$output"
) &
# Run the command with output to the terminal
apt purge some_package 2>&1 | tee -a "$log"
# If output to the terminal is not needed, replace the command above with:
# apt purge some_package > "$log" 2>&1
There is no generic way to tell (from a script) when exactly a program prompts for input. The above code looks for the prompt string ('continue?'), so this will have to be customized per command.
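Here is a self-contained version of the idea, with a short fake command standing in for apt purge (the prompt text and timings are made up for the demo):

```shell
#!/bin/bash
log=$(mktemp)

# Stand-in for `apt purge`: prints some output, then a prompt-like line.
(printf 'Reading package lists...\n'; sleep 0.3; printf 'Do you want to continue? [Y/n] \n') >> "$log" &

# Poll the log until the prompt string appears, then read the partial output.
while ! grep -q -F 'continue?' "$log"; do sleep 0.1; done
partial=$(<"$log")
echo "captured before answering: $partial"

wait
rm -f "$log"
```

The polling loop is the important part: it lets the script act on output that was produced before the command finished, which plain `output=$(command)` cannot do.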

Print all script output to file from within another script

English is not my native language, please accept my apologies for any language issues.
I want to execute a script (bash / sh) through CRON, which will perform various maintenance actions, including backup. This script will execute other scripts, one for each function. And I want the entirety of what is printed to be saved in a separate file for each script executed.
The problem is that each of these other scripts executes commands like "duplicity", "certbot", "maldet", among others. The "ECHO" commands in each script are printed in the file, but the outputs of the "duplicity", "certbot" and "maldet" commands do not!
I want to avoid having to put "| tee --append" or another command on each line. But even doing this on each line, the "subscripts" do not save to the log file. Ideally, in the parent script, I could specify which file each script prints to.
Does not work:
sudo bash /duplicityscript > /path/log
or
sudo bash /duplicityscript >> /path/log
sudo bash /duplicityscript | sudo tee --append /path/log > /dev/null
or
sudo bash /duplicityscript | sudo tee --append /path/log
Using exec (like this):
exec > >(tee -i /path/log)
sudo bash /duplicityscript
exec > >(tee -i /dev/null)
Example:
./maincron:
sudo ./duplicityscript > /myduplicity.log
sudo ./maldetscript > /mymaldet.log
sudo ./certbotscript > /mycertbot.log
./duplicityscript:
echo "Exporting Mysql/MariaDB..."
{dump command}
echo "Exporting postgres..."
{dump command}
echo "Start duplicity data backup to server 1..."
{duplicity command}
echo "Start duplicity data backup to server 2..."
{duplicity command}
In the log file, this will print:
Exporting Mysql/MariaDB...
Exporting postgres...
Start duplicity data backup to server 1...
Start duplicity data backup to server 2...
In the example above, the "ECHO" commands in each script will be saved in the log file, but the output of the duplicity and dump commands will be printed on the screen and not on the log file.
I did some googling, and even saw this topic, but I could not adapt it to my needs.
There is no problem in that the output is also printed on the screen, as long as it is in its entirety, printed on the file.
Try adding 2>&1 at the end of each redirection line so that stderr is captured too; tools like duplicity, certbot and maldet often write their progress to stderr rather than stdout. Or run the script with sh -x to see what is causing the issue.
Hope this helps
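One way to combine the exec approach with the stderr hint is a single redirection at the top of the parent script. A sketch, with a hypothetical log path:

```shell
#!/bin/bash
# Sketch for the parent cron script: one exec line up top sends everything
# this script and every child script print (stdout AND stderr) through tee,
# so it shows on screen and lands in the log.
log=/tmp/maincron.log   # hypothetical path

exec > >(tee -a "$log") 2>&1

echo "starting maintenance"        # reaches terminal and log
ls /no/such/dir || true            # child stderr reaches the log too
```

Each child script inherits these file descriptors, so their output is captured without touching every line. To give each child its own file, redirect per invocation instead, e.g. `sudo bash /duplicityscript > /myduplicity.log 2>&1`.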

Failed to run scripts in multiple remote host by ssh

I wrote a deployAll.sh, which reads ip_host.list line by line, then adds a group on all the remote hosts.
When I run sh deployAll.sh, I get:
Group is added to 172.25.30.11
instead of the expected results:
Group is added to 172.25.30.11
Group is added to 172.25.30.12
Group is added to 172.25.30.13
Why does it only execute the first one? Please help, thanks a lot!
deployAll.sh
#!/bin/bash
function deployAll()
{
while read line;do
IFS=';' read -ra ipandhost<<< "$line"
ssh "${ipandhost[0]}" "groupadd -g 1011 test"
printf "Group is added to ${ipandhost[0]}\n"
done < ip_host.list
}
deployAll
ip_host.list
172.25.30.11;test-30-11
172.25.30.12;test-30-12
172.25.30.13;test-30-13
That's a frequent problem, caused by the special behavior of ssh, which sucks up stdin and starves the loop (i.e. while read line; do ...; done).
Please see Bash FAQ 89, which discusses this subject in detail.
I also just answered (and solved) a similar question regarding ffmpeg, which shows the same behavior as ssh in this case: When reading a file line by line, I only get to execute ffmpeg on the first line.
TL;DR :
There are three main options:
Using ssh's -n option. Quoted from man ssh:
-n  Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)
Adding a </dev/null at the end of ssh's line ( i.e. ssh ... </dev/null ) will fix the issue and will make ssh behave as expected.
Let read read from a File Descriptor which is unlikely to be used by a random program:
while IFS= read -r line <&3; do
# Here read is reading from FD 3, to which 'ip_host.list' is redirected.
done 3<ip_host.list
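Here is how that third option looks end-to-end, with cat standing in for ssh (it drains fd 0 exactly the way ssh would, yet the loop survives because the list is read from fd 3); the host list is inlined for the demo:

```shell
#!/bin/bash
{
  while IFS= read -r line <&3; do
    IFS=';' read -ra ipandhost <<< "$line"
    cat >/dev/null        # stand-in for ssh: eats fd 0, but not fd 3
    echo "Group is added to ${ipandhost[0]}"
  done 3< <(printf '172.25.30.11;a\n172.25.30.12;b\n172.25.30.13;c\n')
} </dev/null              # give the greedy command an empty stdin to eat
```

All three hosts are printed because read and the greedy command no longer compete for the same descriptor.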
Without the ssh command (which wouldn't make sense on my network), I get the expected output so I suspect that the ssh command is swallowing the remaining standard input. You should use the -n flag to prevent ssh from reading from stdin (equivalent to redirecting stdin from /dev/null):
ssh -n "${ipandhost[0]}" "groupadd -g 1011 test"
or
ssh "${ipandhost[0]}" "groupadd -g 1011 test" < /dev/null
See also How to keep script from swallowing all of stdin?
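The swallowing is easy to reproduce without any network, using cat as a stand-in for ssh (both read whatever is left on stdin):

```shell
#!/bin/bash
count=0
while read -r ip; do
  cat >/dev/null             # swallows the remaining lines of the list
  count=$((count + 1))
done < <(printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n')
echo "without fix: $count iteration(s)"     # only 1

count=0
while read -r ip; do
  cat >/dev/null </dev/null  # the fix: hand it an empty stdin instead
  count=$((count + 1))
done < <(printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n')
echo "with </dev/null: $count iterations"   # all 3
```

The first loop runs once because cat consumes lines 2 and 3; the second runs three times because cat sees immediate EOF on /dev/null.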
My solution was to generate ssh keys with the ssh-keygen command and replace the existing public key file (if any), after which the installation resumed.

Reading realtime output from airodump-ng

When I execute the command airodump-ng mon0 >> output.txt, output.txt is empty. I need to be able to run airodump-ng mon0, stop the command after about 5 seconds, and then have access to its output. Any thoughts on where I should begin to look? I am using bash.
Start the command as a background process, sleep 5 seconds, then kill the background process. You may need to redirect a different stream than STDOUT for capturing the output in a file. This thread mentions STDERR (which would be FD 2). I can't verify this here, but you can check the descriptor number with strace. The command should show something like this:
$ strace airodump-ng mon0 2>&1 | grep ^write
...
write(2, "...
The number in the write statement is the file descriptor airodump-ng writes to.
The script might look somewhat like this (assuming that STDERR needs to be redirected):
#!/bin/bash
{ airodump-ng mon0 2>> output.txt; } &
PID=$!
sleep 5
kill -TERM $PID
cat output.txt
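The same background-then-kill pattern can be tested anywhere with a harmless stderr writer in place of airodump-ng (names and timings are made up for the demo):

```shell
#!/bin/bash
out=$(mktemp)

# Stand-in for airodump-ng: writes its "output" to stderr in a loop.
( while :; do echo "BSSID scan line" >&2; sleep 0.1; done ) 2>> "$out" &
PID=$!

sleep 1                            # 5 seconds in the real case
kill -TERM "$PID"
wait "$PID" 2>/dev/null || true    # reap; wait reports the kill status

echo "captured $(wc -l < "$out") lines"
rm -f "$out"
```

The `2>>` redirection is what makes the capture work here, matching the observation that the interesting stream is stderr (FD 2), not stdout.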
You can write the output to a file using the following:
airodump-ng [INTERFACE] -w [OUTPUT-PREFIX] --write-interval 30 -o csv
This will give you a csv file whose name would be prefixed by [OUTPUT-PREFIX]. This file will be updated after every 30 seconds. If you give a prefix like /var/log/test then the file will go in /var/log/ and would look like test-XX.csv
You should then be able to access the output file(s) by any other tool while airodump is running.
With airodump-ng 1.2 rc4 you should use the following command:
timeout 5 airodump-ng -w my --output-format csv --write-interval 1 wlan1mon
After this command has completed you can access its output by viewing my-01.csv. Please note that the output file is in CSV format.
Your command doesn't work because airodump-ng writes its output to stderr instead of stdout. So the following command is a corrected version of yours:
airodump-ng mon0 &> output.txt
The first method is better if you want to parse the output with other programs/applications.
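The stdout-vs-stderr point is easy to verify with any command that writes to stderr; here ls errors play the role of airodump-ng's display output:

```shell
#!/bin/bash
# ls errors go to stderr, like airodump-ng's display output.
ls /no/such/place > only-stdout.txt 2>/dev/null || true   # file stays empty
ls /no/such/place &> both.txt || true                     # error text is captured

wc -c only-stdout.txt both.txt
rm -f only-stdout.txt both.txt
```

`> file` captures only stdout; `&> file` (bash shorthand for `> file 2>&1`) captures both streams, which is why the original `>> output.txt` produced an empty file.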

Using netcat/cat in a background shell script (How to avoid Stopped (tty input)? )

Abstract: How to run an interactive task in background?
Details: I am trying to run this simple script under ash shell (Busybox) as a background task.
myscript.sh&
However the script stops immediately...
[1]+ Stopped (tty input) myscript.sh
The myscript.sh contents... (only the relevant part; other than that I trap SIGINT, SIGHUP etc.)
#!/bin/sh
catpid=0
START_COPY()
{
cat /dev/charfile > /path/outfile &
catpid = $!
}
STOP_COPY()
{
kill catpid
}
netcat SOME_IP PORT | while read EVENT
do
case $EVENT in
start) START_COPY;;
stop) STOP_COPY;;
esac
done
From simple command line tests I found that both cat and netcat try to read from the tty.
Note that this netcat version does not have -e to suppress tty.
Now what can be done to avoid myscript becoming stopped?
Things I have tried so far without any success:
1) netcat/cat ... < /dev/tty (or the output of tty)
2) Running the block containing cat and netcat in a subshell using (). This may work, but then how do I grab the PID of cat?
Over to you experts...
The problem still exists.
A simple test for you all to try:
1) In one terminal, run netcat -l -p 11111 (without &)
2) In another terminal, run netcat localhost 11111 & (this should stop after a while with the message Stopped (tty input))
How to avoid this?
You probably want netcat's -d option, which tells it not to read from stdin.
I can confirm that -d will help netcat run in the background.
I was seeing the same issue with:
nc -ulk 60001 | nc -lk 60002 &
Every time I queried the jobs, the pipe input would stop.
Changing the command to the following fixed it:
nc -ulkd 60001 | nc -lk 60002 &
Are you sure you've given your script as is, or did you just type a rough facsimile meant to illustrate the general idea? The script in your question has many errors which should prevent it from ever running correctly, which makes me wonder.
The spaces around the = in catpid = $! mean the line is not a valid variable assignment. If that was in your original script, I am surprised you were not getting any errors.
The kill catpid line should fail because the literal word catpid is not a valid PID or job ID. You probably want kill "$catpid".
As for your actual question:
cat should be reading from /dev/charfile and not from stdin or anywhere else. Are you sure it was attempting to read tty input?
Have you tried redirecting netcat's input like netcat < /dev/null if you don't need netcat to read anything?
I have to use a netcat that doesn't have the -d option.
"echo -n | netcat ... &" seems to be an effective workaround: i.e. close the standard input to netcat immediately if you don't need to use it.
As it was not yet really answered: if you are using Busybox and the -d option is not available, the following command will keep netcat "alive" when sent to the background:
tail -f /dev/null | netcat ...
netcat < /dev/null and echo -n | netcat did not work for me.
Combining screen and disowning the process works for me, as the -d option is not valid anymore for my netcat. I tried redirecting like nc </dev/null but the session ends prematurely (I need -q 1 to make sure the nc process stops when the file transfer is finished).
Set up the receiver side first.
On the receiver side, screen keeps stdin open for netcat so it won't be terminated.
EDIT: I was wrong, you need to enter the command INSIDE screen. You'll end up with no file saved, or a weird binary stream flowing in your terminal while attached to screen, if you put the nc redirection inline in the screen command. (Example, this is THE WRONG WAY: screen nc -l -p <listen port> -q 1 > /path/to/yourfile.bin)
Open screen, then press Return/Enter on the welcome message. A new blank shell will appear (you're inside screen now).
Type the command: nc -l -p 1234 > /path/to/yourfile.bin
Then press CTRL + a, then d, to detach from screen.
On the sender side, disown the process; nc quits 1s after reaching EOF:
cat /path/to/yourfile.bin | nc -q1 100.10.10.10 1234 & disown
