Persistent connection in Bash script

I'm trying to create a persistent connection using bash. On terminal 1, I keep a netcat running as a server:
$ nc -vlkp 3000
Listening on [0.0.0.0] (family 0, port 3000)
On terminal 2, I create a fifo and keep a cat running:
$ mkfifo fifo
$ cat > fifo
On terminal 3, I feed the fifo into a client netcat:
$ cat fifo | nc -v localhost 3000
Connection to localhost 3000 port [tcp/*] succeeded!
On terminal 4, I send whatever I want:
$ echo command1 > fifo
$ echo command2 > fifo
$ echo command3 > fifo
Going back to terminal 1, I see the commands being received:
$ nc -vlkp 3000
Listening on [0.0.0.0] (family 0, port 3000)
Connection from [127.0.0.1] port 3000 [tcp/*] accepted (family 2, sport 41722)
command1
command2
command3
So, everything works. But when I put that in a script (I called it fifo.sh), bash is not able to write into the fifo:
On terminal 1, same listening server:
$ nc -vlkp 3000
Listening on [0.0.0.0] (family 0, port 3000)
On terminal 2, I run the script:
#!/bin/bash
rm -f fifo
mkfifo fifo
cat > fifo &
pid1=$!
cat fifo | nc -v localhost 3000 &
pid2=$!
echo sending...
echo comando1 > fifo
echo comando2 > fifo
echo comando3 > fifo
kill -9 $pid1 $pid2
The output in terminal 2 is:
$ ./fifo.sh
Connection to localhost 3000 port [tcp/*] succeeded!
sending...
On terminal 1 I see only the connection. No commands:
$ nc -vlkp 3000
Listening on [0.0.0.0] (family 0, port 3000)
Connection from [127.0.0.1] port 3000 [tcp/*] accepted (family 2, sport 42191)
Connection closed, listening again.
Any idea why it only works interactively? Or is there another way to create a persistent connection using only Bash? I don't want to go for Expect, because I have a bigger Bash script that does some work after sending command1, and command2 depends on command1's output, and so on.
Thank you!

When a process is started in the background in a script, its standard input is redirected from /dev/null (this is what the shell does for background commands when job control is off, as in a script). This means the first cat command reads EOF as soon as it executes and exits, which in turn causes netcat to exit immediately after starting, so the echoes later in the script never make it to the fifo: there is no reader left on it by that point.
In this case, when cat > fifo is evaluated, the shell forks a child process, redirects standard input from /dev/null, and attempts to open fifo for write. The child remains in a blocking open call at this time. Note that cat is not executed until after the open call completes.
Next, cat fifo | nc -v localhost 3000 is spawned. cat opens fifo for read, which allows the blocking open from the first child to complete and the first cat to execute.
The first cat inherits its parent's file descriptors, so its standard input is attached to /dev/null; it therefore reads EOF immediately and exits, closing the fifo's only write end. The second cat then also reads EOF (a fifo with no writers left reads as end-of-file) and exits, which closes the pipe attached to nc's standard input and causes netcat to exit.
By the time the echo statements are evaluated, the processes identified by $pid1 and $pid2 are finished. Since there is no longer a reader on the fifo, the first echo blocks forever in its attempt to open it.
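You can see the /dev/null behavior in isolation with a tiny script (a minimal demonstration; run it as a script, not interactively):
#!/bin/bash
# In a script, a background job's stdin is implicitly /dev/null, so this
# cat sees EOF right away and exits instead of waiting for input.
cat &
wait $!
echo "background cat exited immediately with status $?"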
I don't have a pure-shell fix, but you can use an external program like perl to hold the placeholder writer on the fifo instead of using shell redirection. As an aside, note that there is a race between nc starting and the echo statements running (the kill can arrive before netcat has had a chance to process its input and send output), so I added a delay after the cat | nc line. There is almost certainly a better solution out there, but here's what I came up with:
#!/bin/bash
rm -f fifo
mkfifo fifo
perl -e 'open(my $fh, ">", "fifo"); sleep 3600 while 1' &
pid1=$!
cat fifo | nc -v localhost 3000 &
pid2=$!
sleep 2
echo sending...
echo comando1 > fifo
echo comando2 > fifo
echo comando3 > fifo
kill -9 $pid1 $pid2
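For reference, a similar effect should be achievable in pure bash by holding the fifo open for writing on a file descriptor of the script itself. This is only a sketch, assuming bash 4.1+ for the automatic {fd} allocation; the reader has to be started first, or the exec would block:
#!/bin/bash
rm -f fifo
mkfifo fifo
cat fifo | nc -v localhost 3000 &   # start the reader first
pid=$!
exec {fdw}>fifo                     # the script itself now holds a writer
sleep 2                             # same startup race workaround as above
echo comando1 >&$fdw
echo comando2 >&$fdw
echo comando3 >&$fdw
sleep 1                             # give nc time to forward the data
exec {fdw}>&-                       # closing the last writer sends EOF downstream
kill $pid 2>/dev/null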
Hope this helps, great question!

Related

What is really happening in this bash code that creates a shell with netcat and pipes?

mkfifo /tmp/f ; cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f
I am new to bash, and I am trying to understand this piece of "code".
Why is a while loop not needed? How can this work? Is it itself a loop? Why? How?
Also, cat pipeFile by itself ONLY PRINTS ONE LINE and then exits (I just tested it); to make cat not exit I do: while cat pipeFile; do :; done. So how does the above work?
I don't get the order of execution... at the beginning /tmp/f is empty, so cat /tmp/f should "send" an empty stream to /bin/bash, which just sends it to nc, which opens a connection and "sends" the interactive bash to whoever connects... and the response of the client is sent to /tmp/f ... and then what? How can it go back and do the same things again?
When bash parses the line mkfifo /tmp/f ; cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f, several things happen. First, the fifo is created. Then, in no particular order, three things happen: cat is started, bash is started, and nc is started with its output stream connected to /tmp/f.
cat will now block until some other process opens /tmp/f for writing; the nc side is about to do that (or has already done it; we don't know whether cat starts before nc or nc before cat, nor in which order they open the fifo, but whichever does it first will block until the other completes the operation). Once all three processes are up, they just sit there waiting for data.
Eventually, some external process connects to port 1234 and sends data to nc, which writes it into /tmp/f. cat (eventually) reads that data and sends it downstream to bash, which processes the input and (probably) writes some data into nc, which sends it back across the socket connection.
If you have a test case in which cat /tmp/f only writes one line of data, that is simply because whatever process you used to write into /tmp/f only wrote a single line. Try:
printf 'foo\nbar\nbaz\n' > /tmp/f & cat /tmp/f
or
while sleep 1; do date; done > /tmp/f & cat /tmp/f
/tmp/f is NOT a regular (empty) file, but a fifo (named pipe): a rendezvous point that passes data from a writer to a reader.
Someone connects to port 1234, type something, which nc will forward to fifo which then feeds into bash.
bash runs the command and sends results back to nc.
1. You misunderstand what happens when you echo "string" >/path/fifo
a) When you echo something >/path/to/somewhere, you:
open (after a test for accessibility) the target somewhere for writing,
write something into the opened file descriptor (fd),
close the accessed file.
b) A fifo (First In, First Out) is not a regular file.
Try this:
# Window 1:
mkfifo /tmp/fifotest
cat /tmp/fifotest
# Window 2:
exec {fd2fifo}>/tmp/fifotest
echo >&$fd2fifo Foo bar
You will see that cat does not terminate.
echo >&$fd2fifo Baz
exec {fd2fifo}>&-
Now cat will terminate.
So there is no need for any loop!
2. The command cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f
could be written as follows (avoiding a useless use of cat):
bash -i 2>&1 </tmp/f | nc -l -p 1234 > /tmp/f
but you could do the same operation from a different point of view:
nc -l -p 1234 </tmp/f | bash -i >/tmp/f 2>&1
The goal is
to drive bash's STDIN from nc's STDOUT and
connect back bash's STDOUT and STDERR to nc's STDIN.
3. Going further: bashisms
Under bash, you can avoid creating a named fifo by using an unnamed fifo instead:
coproc nc -l -p 1234; bash -i >&${COPROC[1]} 2>&1 <&${COPROC[0]}
or
exec {ncin}<> <(:); nc -l -p 1234 <&$ncin | bash -i >&$ncin 2>&1
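Whichever variant you run, you can test it from a second terminal; whatever you type is executed by the interactive bash on the far end (a hypothetical session):
$ nc localhost 1234
echo hello
hello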

Test if netcat listener got a connection and run a command locally

I need a way to fire up a netcat listener from a shell script and, if a connection is received, run a command on the same local listener machine, without interrupting the netcat process / connection.
It's like the -e option, but I need to run a command locally while keeping the netcat connection running.
I don't really know if it can be done. I mean, after the shell process forks the netcat child, can it react to nc's output, for example, and run another command before netcat exits?
Edit: I figured out it's even easier to do on the client C code side, by checking the return value of an initial send() to determine whether the client connected successfully (i.e. the full message length was sent):
ssize_t sret = send(sock, message, strlen(message), 0);
if (sret == (ssize_t)strlen(message)) {
    /* we're connected: do something */
}
Thanks
This will check whether the initial nc process started listening; it will echo every line of input it receives and then send back a "Received" response:
rm -f input.txt
touch input.txt
# lines appended to input.txt are sent to the client by nc;
# everything the client sends lands in output.txt
tail -f input.txt | nc -l 5555 > output.txt &
if ! ps -p $! >/dev/null; then
    echo "Netcat didn't start. Exiting..."
    exit 1
fi
# watch for client input and acknowledge each line
tail -f output.txt | while read -r LINE; do
    echo "Received input: $LINE"
    echo "Received" >> input.txt
done
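To exercise it, connect from another terminal and send a line; each line should come back acknowledged (a hypothetical session):
$ nc localhost 5555
hello
Received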
See if you can adapt this to meet your needs.

Check whether named pipe/FIFO is open for writing

I've created a named pipe for some other process to write to and want to check that the other process started correctly, but I don't know its PID. The context is running a command in screen and making sure the command started correctly. I was hoping this might work:
mkfifo /tmp/foo
echo hello > /tmp/foo &
lsof /tmp/foo
Sadly, lsof does not report echo. inotifywait might be another option, but isn't always installed and I really want to poll just once, rather than block until some event.
Is there any way to check if a named pipe is open for writing? Even open in general?
UPDATE:
Once both ends are connected, lsof seems to work. This actually solves my problem, but for the sake of the question I'd be interested to know whether it's possible to detect the initial redirection to the named pipe, before there is a reader.
> mkfifo /tmp/foo
> yes > /tmp/foo &
> lsof /tmp/foo
> cat /tmp/foo > /dev/null &
> lsof /tmp/foo
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
yes 16915 user 1w FIFO 8,18 0t0 16660270 /tmp/foo
cat 16950 user 3r FIFO 8,18 0t0 16660270 /tmp/foo
Update 2: After playing with inotify-tools, there doesn't seem to be a way to get a notification that a named pipe has been opened for writing and is blocking. This is probably why lsof doesn't show the pipe until it has both a reader and a writer.
Update: After researching named pipes, I don't believe that there is any method that will work with named pipes by themselves.
Reasoning:
there is no way to limit the number of writers to a named pipe (without resorting to locking)
all writers block if there is no reader
no writers block if there is a reader (presumably as long as the kernel buffers aren't full)
You could try writing nothing to the pipe with a short timeout. If the timeout expires, then the write blocked, indicating that someone has already opened the pipe for writing.
Note: As pointed out in the comments, if a reader exists (and is presumably fast enough), our test write will not block and the test essentially fails. Comment out the cat line below to test this.
#!/bin/bash

is_named_pipe_already_opened_for_writing() {
    local named_pipe="$1"
    local pid

    # Make sure it's a named pipe
    if ! [ -p "$named_pipe" ]; then
        return 1
    fi

    # Try to write zero bytes in the background
    echo -n > "$named_pipe" &
    pid=$!

    # Wait a short amount of time
    sleep 0.1

    # Kill the background process. If kill succeeds, then
    # the write was blocked, indicating that someone
    # else is already writing to the named pipe.
    kill $pid 2>/dev/null
}
PIPE=/tmp/foo
# Ignore any bash messages from killing below
trap : TERM
mkfifo $PIPE
# a writer
yes > $PIPE &
# a reader
cat $PIPE >/dev/null &
if is_named_pipe_already_opened_for_writing "$PIPE"; then
echo "$PIPE is already being written to by another process"
else
echo "$PIPE is NOT being written to by another process"
fi
jobs -pr | xargs kill 2>/dev/null
rm -f $PIPE
You need two pipes, one for each direction:
one is used to wait for the "ready for new data" signal, the other just carries the data.
In my case it processes two files in lockstep, line by line:
mkfifo r w
# feeder: write one line to w, then wait on r before continuing
cat file1 | while read l; do echo "$l" >w; read <r; done &
# consumer: print own line plus partner's line, then signal on r
cat file2 | while read ln; do if read l <w; then echo "$ln"; echo "$l"; fi; echo >r; done
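For example, with hypothetical two-line input files, the loops interleave one line from each file per round:
printf 'a\nb\n' > file1
printf '1\n2\n' > file2
# the snippet above then prints: 1, a, 2, b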

How can I terminate netcat so my script can loop again?

I'm running a bash script that goes through the list of my remote server IPs, connects via netcat (telnet) for each line, and runs a few commands.
The problem is I can't seem to figure out how to terminate netcat so the script can loop to the next IP in the list.
Here's the relevant bit:
#!/bin/bash
while IFS= read -r line; do
(
sleep 3
printf 'command1'
sleep 3
printf 'command2'
sleep 3
) | nc $line
done < ~/servers.txt
The remote servers don't send an EOF, so is there something I can echo or printf at netcat to terminate it so the script can loop through again? I would rather not use the -w timeout flag, because I have quite a few servers to do this on, and a timeout would make it take much longer.
Specify a timeout after which nc will exit if it receives no further input, either from the remote end or via standard input.
... | nc "$line" -w 10 # Choose a value for -w as appropriate
It depends on your version of netcat, but -c should do what you're looking for. From the usage statement of GNU netcat (which is likely what you're running on Ubuntu):
-c, --close close connection on EOF from stdin
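With such a netcat, the loop body from the question might become (a sketch; $line is deliberately left unquoted so that a "host port" line splits into two arguments):
(
sleep 3
printf 'command1\n'
sleep 3
printf 'command2\n'
) | nc -c $line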

Starting a process over ssh using bash and then killing it on sigint

I want to start a couple of jobs on different machines using ssh. If the user then interrupts the main script I want to shut down all the jobs gracefully.
Here is a short example of what I'm trying to do:
#!/bin/bash
trap "aborted" SIGINT SIGTERM
aborted() {
kill -SIGTERM $bash2_pid
exit
}
ssh -t remote_machine /foo/bar.sh &
bash2_pid=$!
wait
However, the bar.sh process is still running on the remote machine. If I run the same commands interactively in a terminal window, the process on the remote host is shut down.
Is there an easy way to make this happen when I run the bash script? Or do I need to make it log on to the remote machine, find the right process and kill it that way?
Edit:
Seems like I have to go with option B, killing the remote script through another ssh connection.
So now I want to know: how do I get the remote PID?
I've tried something along the lines of:
remote_pid=$(ssh remote_machine '{ /foo/bar.sh & } ; echo $!')
This doesn't work since it blocks.
How do I wait for a variable to print and then "release" a subprocess?
It would definitely be preferable to keep your cleanup managed by the ssh that starts the process rather than moving in for the kill with a second ssh session later on.
When ssh is attached to your terminal, it behaves quite well. However, detach it from your terminal and it becomes (as you've noticed) a pain to signal or manage remote processes. You can shut down the link, but not the remote processes.
That leaves you with one option: Use the link as a way for the remote process to get notified that it needs to shut down. The cleanest way to do this is by using blocking I/O. Make the remote read input from ssh and when you want the process to shut down; send it some data so that the remote's reading operation unblocks and it can proceed with the cleanup:
command & read; kill $!
This is what we would want to run on the remote. We invoke our command that we want to run remotely; we read a line of text (blocks until we receive one) and when we're done, signal the command to terminate.
To send the signal from our local script to the remote, all we need to do now is send it a line of text. Unfortunately, Bash does not give you a lot of good options here. At least, not if you want to stay compatible with bash < 4.0.
With bash 4 we can use co-processes:
coproc ssh user@host 'command & read; kill $!'
trap 'echo >&"${COPROC[1]}"' EXIT
...
Now, when the local script exits (don't trap on INT, TERM, etc. Just EXIT) it sends a new line to the file in the second element of the COPROC array. That file is a pipe which is connected to ssh's stdin, effectively routing our line to ssh. The remote command reads the line, ends the read and kills the command.
Before bash 4 things get a bit harder since we don't have co-processes. In that case, we need to do the piping ourselves:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
This should work in pretty much any bash version.
Try this:
ssh -tt host command </dev/null &
When you kill the local ssh process, the remote pty will close and SIGHUP will be sent to the remote process.
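For example (a sketch; sleep stands in for the real remote job):
ssh -tt host 'sleep 1000' </dev/null &
sshpid=$!
# ... do local work ...
kill $sshpid   # the local ssh dies, the remote pty closes,
               # and the remote sleep receives SIGHUP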
Referencing the answer by lhunath and https://unix.stackexchange.com/questions/71205/background-process-pipe-input I came up with this script
run.sh:
#!/bin/bash
log="log"
eval "$#" \&
PID=$!
echo "running" "$#" "in PID $PID"> $log
{ (cat <&3 3<&- >/dev/null; kill $PID; echo "killed" >> $log) & } 3<&0
trap "echo EXIT >> $log" EXIT
wait $PID
The difference being that this version kills the process when the connection is closed, but also returns the exit code of the command when it runs to completion.
$ ssh localhost ./run.sh true; echo $?; cat log
0
running true in PID 19247
EXIT
$ ssh localhost ./run.sh false; echo $?; cat log
1
running false in PID 19298
EXIT
$ ssh localhost ./run.sh sleep 99; echo $?; cat log
^C130
running sleep 99 in PID 20499
killed
EXIT
$ ssh localhost ./run.sh sleep 2; echo $?; cat log
0
running sleep 2 in PID 20556
EXIT
For a one-liner:
ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
For convenience:
HUP_KILL="& PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
ssh localhost "sleep 99 $HUP_KILL"
Note: kill 0 may be preferred to kill $PID depending on the behavior needed with regard to spawned child processes. You can also kill -HUP or kill -INT if you desire.
Update:
A secondary job control channel is better than reading from stdin.
ssh -n -R9002:localhost:8001 -L8001:localhost:9001 localhost ./test.sh sleep 2
Set job control mode and monitor the job control channel:
set -m
trap "kill %1 %2 %3" EXIT
# job 1: keeps the outbound control channel open
(sleep infinity | netcat -l 127.0.0.1 9001) &
# job 2: when the inbound control connection drops, interrupt this script
(netcat -d 127.0.0.1 9002; kill -INT $$) &
# job 3: the actual command
"$@" &
wait %3
Finally, here's another approach and a reference to a bug filed on openssh:
https://bugzilla.mindrot.org/show_bug.cgi?id=396#c14
This is the best way I have found to do this. You want something on the server side that attempts to read stdin and then kills the process group when that fails, but you also want a stdin on the client side that blocks until the server-side process is done and that will not leave lingering processes the way <(sleep infinity) might.
ssh localhost "sleep 99 < <(cat; kill -INT 0)" <&1
It doesn't actually seem to redirect stdout anywhere, but it does function as a blocking input and avoids capturing keystrokes.
The solution for bash 3.2:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
doesn't work. The ssh command is not in the ps list on the "client" machine. Only after I echo something into the pipe does it appear in the process list of the client machine. The process that appears on the "server" machine is just the command itself, not the read/kill part.
Writing again into the pipe does not terminate the process.
So, summarizing: I need to write into the pipe for the command to start up, and if I write again, it does not kill the remote command, as expected.
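A workaround for the blocking described above is to hold the fifo open for writing from the local script itself, so that ssh's open-for-read completes immediately and the eventual echo only delivers the wake-up line (a sketch along the lines of the original answer; command stands in for the real remote job):
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
# keep a writer attached: this open completes as soon as ssh's
# open-for-read does, and it prevents a premature EOF on the fifo
exec 3> /tmp/mysshcommand
trap 'echo >&3; exec 3>&-; rm -f /tmp/mysshcommand' EXIT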
You may want to consider mounting the remote file system and running the script from the master box. For instance, if your kernel is compiled with fuse (you can check with the following):
/sbin/lsmod | grep -i fuse
You can then mount the remote file system with the following command:
sshfs user@remote_system: mount_point
Now just run your script on the file located in mount_point.
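For example (hypothetical paths):
mkdir -p /mnt/remote
sshfs user@remote_system: /mnt/remote
bash /mnt/remote/foo/bar.sh    # runs locally, against the remote files
fusermount -u /mnt/remote      # unmount when done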
