Bash Script Statement - bash

I'm trying to figure out what a line means in a bash script file:
mkfifo mypipe
nc -l 12345 < mypipe | /home/myprogram > mypipe
Here's what I understand: the nc -l part behaves like a server listening on port 12345; it takes its input from mypipe and pipes its output to a program, whose output is in turn written back into mypipe.
My questions: first, is my analysis correct? Second, what exactly is mkfifo creating, and what kind of file is it? I also don't understand what exactly nc -l outputs in order to pipe it into myprogram.
Thanks for any help.

mkfifo creates a pipe file. Here, FIFO means "first-in, first-out". Whatever one process writes into the pipe, the second process can read. It is not a "real" file - the data never gets saved to the disk; but Linux abstracts a lot of its mechanisms as files, to simplify things.
nc -l 12345 will bind to port 12345 and listen; when it accepts an incoming connection, it sends its standard input to the remote host and writes the remote host's incoming data to its standard output.
Thus, the architecture here is:
remote host -> nc -> regular pipe -> myprogram
myprogram -> mypipe -> nc -> remote host
effectively letting myprogram and remote host talk, even though myprogram was designed to read from stdin and write to stdout.
Since the bash pipe (|) only handles one direction of communication, you need an explicit second pipe (the FIFO) to get a bidirectional inter-process connection.
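To make the data flow concrete, here is a minimal sketch of the same pattern with cat standing in for /home/myprogram, which turns it into a simple echo server (option syntax differs between netcat implementations; some need nc -l -p 12345 instead of nc -l 12345):
mkfifo mypipe
nc -l 12345 < mypipe | cat > mypipe   # client data -> cat via |, cat's output -> back to nc via the FIFO
rm mypipe                             # clean up once nc exits
Connect from another terminal with nc localhost 12345 and type a few lines; each one comes straight back.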

Related

How can I communicate with a unix socket using one connection in a bash script?

I want to read/write to a unix socket in a bash script, but only do it with one connection. All of the examples I've seen using nc say to open different socket connections for every read/write.
Is there a way to do it using one connection throughout the script for every read/write?
(nc only lets me communicate with the socket in a one shot manner)
Run the whole script with output redirected:
{
command
command
command
} | nc -U /tmp/cppLLRBT-socket
However, pipes are one-way, so you can do this for reading or writing, but not both.
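If you also need to read replies over that same connection, the FIFO trick from the first answer applies here as well; a hedged sketch, reusing the socket path from the question (your nc variant may close as soon as stdin hits EOF, so check its options if replies get cut off):
mkfifo to_socket
nc -U /tmp/cppLLRBT-socket < to_socket | while read -r reply; do
    echo "got: $reply"          # handle each reply line here
done &
{
    echo "first command"
    echo "second command"
} > to_socket                   # every write shares the single nc connection
wait
rm to_socket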

SSH command within a script terminates prematurely

From myhost.mydomain.com, I start an nc listener. Then I log in to another host to start a netcat push back to my host:
nc -l 9999 > data.gz &
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
These two commands are part of a script. Only 32K bytes are sent to the host before the ssh command terminates; the nc listener then gets an EOF and terminates as well.
When I run the ssh command on the command line (i.e. not as part of the script) on myhost.mydomain.com, the complete file is downloaded. What's going on?
I think there is something else that happens in your script which causes this effect. For example, if you run the second command in the background as well and terminate the script, your OS might kill the background commands during script cleanup.
Also look at set -o pipefail, which makes a pipeline return a failure status when any command in it exits non-zero.
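If the script really is exiting before the background listener has finished, waiting for it explicitly avoids that; a sketch built from the two commands in the question:
nc -l 9999 > data.gz &
listener_pid=$!
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
wait "$listener_pid"    # don't leave the script before the listener has seen EOF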
On a second note, the approach looks overly complex to me. Try to reduce it to
ssh repo.mydomain.com "cat /path/to/file.gz" > data.gz
(ssh connects the remote command's stdout to the local one). It's clearer when you write it like this:
ssh > data.gz repo.mydomain.com "cat /path/to/file.gz"
That way, you can get rid of nc. As far as I know, nc is synchronous, so the second invocation (which sends the data) should only return after all the data has been sent and flushed.

Bash: Linking I/O of two processes

I have two programs A and B (in my case a C program and a Java program) that are supposed to communicate with each other. The invocation of those programs inside a bash script looks like this:
mkfifo fifo1
mkfifo fifo2
A < fifo1 > fifo2 &
java B < fifo2 > fifo1
I know that I could do it with one fifo, but I also want to be able to show the communication on the console. The following works fine:
mkfifo fifo1
mkfifo fifo2
A < fifo1 | tee fifo2 &
java B < fifo2 | tee fifo1
My question is: Why does the second script work while the first one just hangs?
Side question: While the second version works, as soon as I redirect the output of the script to a file, the communication is no longer interleaved but ordered by process. Is there a way to prevent this?
Why does the second script work while the first one just hangs?
man open:
When opening a FIFO with O_RDONLY or O_WRONLY set:
If O_NONBLOCK is set, an open() for reading-only shall return without delay. An open() for writing-only shall return an error if no process currently has the file open for reading.
If O_NONBLOCK is clear, an open() for reading-only shall block the calling thread until a thread opens the file for writing. An open() for writing-only shall block the calling thread until a thread opens the file for reading.
In the first script, A opens fifo1 and B opens fifo2, both with O_RDONLY; A blocks until B would open fifo1 for writing, while B blocks until A would open fifo2 for writing… a circular wait situation. (Actually, shells open the fifos, but the resulting circular waiting is the same.)
In the second script, A opens fifo1 and B opens fifo2, both with O_RDONLY - so far the same as above. But in parallel, the first tee opens fifo2 and the second tee opens fifo1 for writing, thus unblocking A and B.
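Following directly from those open() semantics, one way to break the circular wait without tee is to swap the redirections on one side so that its first open is a write-open; a sketch, not from the original question:
mkfifo fifo1 fifo2
A < fifo1 > fifo2 &
java B > fifo1 < fifo2    # write-open fifo1 first, so A's blocked read-open (and then everything else) can proceed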
While the second version works, as soon as I redirect the output of the script to a file, the communication is no longer interleaved but ordered by process. Is there a way to prevent this?
This may be due to stdout buffering; try … stdbuf -oL tee … or post your input and output.
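Reading the ellipses as the second script from the question, the adjusted version might look like this (assumes GNU coreutils stdbuf):
mkfifo fifo1 fifo2
A < fifo1 | stdbuf -oL tee fifo2 &     # line-buffer tee's stdout so the console/file output interleaves
java B < fifo2 | stdbuf -oL tee fifo1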

Script for sending a header to netcat

I work with a protocol that's easy to use simply with netcat. The protocol starts with a login message, so I thought I could bang out a little script which pipes the login message before stdin to netcat for me.
I was able to get close, but there's one problem I can't figure out. The following script works, in that it sends the login message and allows me to interact with netcat. But if netcat exits (because the server side closed the connection), the script just hangs there (presumably because cat is still reading stdin even though no one is reading stdout any more).
( echo "${LOGIN}"; cat ) | nc ${HOST} ${PORT}
It's a tricky problem, and you're right about the cause. Processes don't get an EPIPE error or a SIGPIPE until they actually try to write to the pipe.
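As a quick aside, you can see the write-time nature of SIGPIPE in any pipeline:
yes | head -n 3                            # head exits after 3 lines; yes is killed on its next write
echo "pipeline status: ${PIPESTATUS[*]}"   # typically "141 0", i.e. 128+13 (SIGPIPE) for yes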
If nothing else, you can use the interaction scripting tool expect:
expect <(echo '
spawn nc google.com 80
send "GET / HTTP/1.0\n"
send "Host: www.google.com\n"
interact
')
This will run nc, send some HTTP headers, and then give control to you. When nc exits, so does the command.

Redirect stdout to a running process

I know how to redirect stdout to a file, but is it possible to redirect stdout to a process (linux environment)?
For example, if I move an active SSH session to the background via "~^Z", is there a way to then start another program on my local host and redirect its stdout to the SSH session?
Thanks!
Sometimes a trick like
echo "something" > /proc/pid-of-process/fd/0
works, but if the process's stdin is attached to a pseudoterminal, it won't.
So, here's one way to do what you want.
1. (Optional) Configure your SSH connection to use certificates / passwordless login.
2. Create a named pipe (e.g. mkfifo mypipe).
3. Use tail to read from the pipe and feed that to the SSH process, e.g.:
tail -f mypipe | ssh -t -t user@somehost.com
4. Send whatever you want to run in the ssh session into the named pipe, e.g.:
echo "ls -l" > mypipe
So if you need to pipe stuff from another program, you'd just do
./my-program > /path/to/mypipe
You're done.
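Putting the steps together, a rough end-to-end sketch (host name and commands are placeholders):
mkfifo mypipe
tail -f mypipe | ssh -t -t user@somehost.com &   # -f keeps the pipe open; -t -t forces a tty
echo "ls -l" > mypipe                            # runs in the remote session
./my-program > mypipe                            # or feed a whole program's stdout through it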
Some notes:
Step 1 is optional, but if you skip it, you will have to type your password on the terminal where you start the SSH session; you can't pass it through the pipe. In fact, if you try, it will just show up as plain text sent through the pipe once the SSH connection completes.
Your SSH session is now only as secure as your named pipe. Just a heads up.
You won't be able to use the SSH session, once you connect, from the originating terminal. You'll have to use the named pipe.
You can always ctrl+c the SSH process in the original terminal to kill it and restore functionality of the terminal.
Any output will appear on the original terminal -- probably obvious but I just wanted to point it out.
The -f option to tail is necessary to prevent the SSH process from receiving an EOF when the pipe is empty. There are other ways to prevent the pipe from closing, but this is the easiest in my opinion.
The -t -t option to ssh forces tty allocation, otherwise it would complain since stdin is being piped in your case.
Lastly, there is almost definitely a better way to do what you want -- you should consider finding it if this is anything serious / long term.
