Why is bash closed while writing to a named pipe? - bash

In bash 1:
$ mkfifo /tmp/pipe
$ echo 'something' > /tmp/pipe
Now it hangs, waiting for that data to be read.
In bash 2:
$ </tmp/pipe
Now shell 1 goes away: it is closed, and my terminal is gone.
Why is this happening?
The bash manual says:
The command substitution $(cat file) can be replaced by the
equivalent but faster $(< file).
So I was experimenting to see whether a plain "< file" works in a similar way, i.e. whether it cats the file's content to the terminal.
$ bash --version | head -1
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
$ cat /proc/version
Linux version 3.16.0-71-generic (buildd@lgw01-46) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ) #92~14.04.1-Ubuntu SMP Thu May 12 23:31:46 UTC 2016
Edit
After seeing the initial comments and answers, I will add a bit of clarification.
I'm not concerned about different command line syntaxes.
What I was really after is this: when the reader shell runs $ < /tmp/pipe, the writer shell exits, but when the reader shell runs $ cat /tmp/pipe, the writer shell does not exit. Why?
I see that I did not really phrase that in the question title or body, so perhaps I should open a separate question?

From the pipe(7) manual page:
If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process.
What happens is that when the reading shell has finished reading and closes its end of the pipe, the writing shell will receive the SIGPIPE signal, and if it doesn't catch it then the shell will be terminated.
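For illustration, here is a minimal sketch of my own (not part of the original answer): if the writing shell traps SIGPIPE before writing, it survives the broken pipe instead of disappearing, and the failed write just reports an error:
trap 'echo "caught SIGPIPE" >&2' PIPE   # handle the signal instead of dying
echo 'something' > /tmp/pipe            # if the reader closes its end early,
echo "write finished with status $?"    # the shell now stays alive and you can
                                        # inspect echo's exit status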

In the manual, the $ sign is part of the variable/command substitution, not the command prompt.
Try the following scripts:
1)
#!/bin/bash
echo $(< /tmp/pipe);
2)
#!/bin/bash
echo $(cat /tmp/pipe);
Both work correctly.

When you type < /tmp/pipe, you connect the standard input of the current shell to the named pipe instead of the terminal. bash works by continuously reading from its input and executing what it reads as a command.
In shell 1, echo something > /tmp/pipe opens the pipe for writing, writes the string, then blocks until something reads it. As soon as echo completes, it will close its end of the pipe.
< /tmp/pipe opens the pipe for reading, and connects it to shell 2's standard input.
Shell 2 reads from the pipe (and tries to execute a command).
Back in shell 1, the echo, having unblocked after the 2nd shell read from the pipe, completes. The write end of the pipe closes.
With the write end of the pipe closed, shell 2 will read end-of-file when it tries to read another command, and then exit.
(An alternate possibility is that shell 2 exits if the command it reads from the pipe and tries to execute causes an error.)
$(< file), on the other hand, is a special case of command substitution. When bash sees that, it simply reads from file itself, rather than spawning a cat process and capturing its output.
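As a quick illustration of that special case, using a throwaway regular file (the path /tmp/demo is just an example) rather than the FIFO:
printf 'hello\n' > /tmp/demo
a=$(cat /tmp/demo)   # forks and execs a cat process
b=$(< /tmp/demo)     # bash opens and reads the file itself, no extra process
[ "$a" = "$b" ] && echo 'same contents either way'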

Related

Why does "(echo <Payload> && cat) | nc <link> <port>" creates a persistent connection?

I began by playing CTF challenges, and I encountered a problem where I needed to send an exploit to a binary and then interact with the spawned shell.
I found a solution to this problem which looks something like this:
(echo -ne "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xbe\xba\xfe\xca" && cat) | nc pwnable.kr 9000
Meaning:
without the "cat" sub-command, I couldn't interact with the shell, but with it, i now able to send commands into the spawned shell and get the returned output to my console stdout.
What exactly happens there? this command line confuses me
If you just type in cat at the command line, you'll be able to see that this command simply copies stdin to stdout one line at a time. It will carry on doing this until you either quit with Ctrl-C or send an EOF with Ctrl-D.
In this example you're running cat immediately after successfully printing the payload (the && operator tells the shell to run the second command only if the first command has an exit code of zero; i.e., no error). As a result, the remote shell won't see an EOF until you terminate cat as described above. When this is piped to nc, everything you type in is sent via cat to the remote server, and everything it sends back appears on your stdout.
So yes, in effect you end up with an interactive shell. You can get pretty much the same effect on your own machine by running cat | sh.
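If you want to see it in action without a remote host, here is a hedged local sketch with sh standing in for the shell that nc would otherwise be talking to:
(echo 'uname -a' && cat) | sh
# sh executes uname -a immediately (the scripted part), then keeps executing
# whatever you type, line by line, until Ctrl-D closes cat's stdin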

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile
and the terminal; this part works fine. But if I execute the script,
a new shell prompt only appears if I press enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee, see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
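This works because the shell waits for every command in the foreground pipeline, tee included, before the next prompt can appear. Applied to the original script, a minimal version might look like this:
#!/bin/bash
{
echo "output"
# ... rest of the script ...
} | tee logfile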
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as #chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
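If you'd rather keep stderr than throw it away, one variation (my own, with a hypothetical errlog filename) is to send it to its own file; it still won't show on the terminal, but at least it is preserved:
script -qc 'yourScript 2>errlog' /dev/null > logfile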
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make backups of the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Piping multiple commands to bash, pipe behavior question

I have this command sequence that I'm having trouble understanding:
[me#mine ~]$ (echo 'test'; cat) | bash
echo $?
1
echo 'this is the new shell'
this is the new shell
exit
[me#mine ~]$
As far as I can understand, here is what happens:
A pipe is created.
stdout of echo 'test' is sent to the pipe.
bash receives 'test' on stdin.
echo $? returns 1, which is what happens when you run test without args.
cat runs.
It is copying stdin to stdout.
stdout is sent to the pipe.
bash will execute whatever you type in, but stderr won't get printed to the screen (we used |, not |&).
I have three questions:
It looks like, even though we run two commands, we use the same pipe and bash process for both commands. Is that the case?
Where do the prompts go?
When something like cat uses stdin, does it take exclusive ownership of stdin as long as the shell runs, or can other things use it?
I suspect I'm missing some detail with ttys, but I'm not sure. Any help or details or man excerpt appreciated!
So...
Yes, there's a single pipe sending commands to a single instance of bash. Note:
$ echo 'date "+%T hello $$"; sleep 1; date "+%T world $$"' | bash
22:18:52 hello 72628
22:18:53 world 72628
There are no prompts. From the man page:
An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option whose standard input and error are both connected to terminals. PS1 is set and $- includes i if bash is interactive.
So a shell reading from a pipe is not an interactive shell, and therefore prints no prompt.
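You can check this yourself: when bash reads commands from a pipe, $- contains no i and PS1 is normally unset (the exact flag letters vary):
echo 'echo "flags: $-, PS1: ${PS1-unset}"' | bash
# typically prints something like: flags: hB, PS1: unset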
Stdin and stdout can only connect to one thing at a time. cat will take stdin from the process that ran it (for example, your interactive shell) and send its stdout through the pipe to bash. If you need multiple things to be able to submit to the stdin of that cat, consider using a named pipe.
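For completeness, a hedged sketch of that named-pipe idea (the pipe name /tmp/cmds is my own choice):
mkfifo /tmp/cmds
cat /tmp/cmds | bash &      # bash executes whatever arrives on the pipe
exec 3>/tmp/cmds            # hold one write end open so cat never sees EOF
echo 'date' > /tmp/cmds     # any process can now submit commands...
echo 'echo hello' >&3       # ...including this shell through fd 3
exec 3>&-                   # closing the last writer lets cat see EOF, so bash exits too
rm /tmp/cmds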
Does that cover it?

Bash redirection: named pipes and EOF

Take the following code:
rm -f pipe
mkfifo pipe
foo () {
echo 1
sleep 1
echo 2
}
#1
exec 3< <(foo &)
cat <&3 # works
#2
foo >pipe &
cat <pipe # works
#3
exec 3<>pipe
foo >&3 &
cat <&3 # hangs
#4 -- update: this is the correct approach for what I want to do
foo >pipe &
exec 3<pipe
rm pipe
cat <&3 # works
Why does approach #3 hang, while others do not? Is there a way to make approach #3 not hang?
Rationale: I wish to use quasi-unnamed pipes to connect several asynchronously running subprocesses; for this I need to delete the pipe after making a file descriptor point to it:
mkfifo pipe
exec {fd}<>pipe
rm pipe
# use &$fd only
The problem in approach 3 is that the FIFO then has two writers: the main shell script (because it has opened the pipe read/write with exec 3<>) and the sub-shell running foo. You'll read EOF only when all writers have closed their file descriptors. One writer (the sub-shell running foo) exits fairly quickly (after roughly 1s) and therefore closes its file descriptor. The other writer, however (the main shell), only closes the file descriptor when it exits, as file descriptor 3 is never explicitly closed anywhere. But it can't exit because it is waiting for cat to finish first. That's a deadlock:
cat is waiting for an EOF
the EOF only appears when the main shell closes the fd (or exits)
the main shell is waiting for cat to terminate
Therefore the script never finishes.
Case 2 works because the pipe only ever has one writer (the sub-shell running foo) which exits very quickly, therefore an EOF will be read. In case 1, there's also only ever one writer because you open fd 3 read-only (exec 3<).
EDIT: I have removed the nonsense about case 4 not being correct (see comments). It is correct, because the writer can't exit before the reader connects: the writer, too, is blocked when opening the file as long as no reader has opened it yet. (An earlier version of this answer claimed that the newly added case 4 was incorrect and racy, only working if foo didn't terminate or close the pipe before exec 3<pipe runs; that claim was wrong.)
Also check the fifo(7) man page:
The kernel maintains exactly one pipe object for each FIFO special file that is opened by at least one process. The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also.
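Putting the rationale together with approach #4 (the one that works), here is a sketch of the quasi-unnamed pipe, assuming the foo function from the question:
mkfifo pipe
foo >pipe &        # the writer blocks in open() until a reader shows up
exec {fd}<pipe     # open the read end; this unblocks the writer
rm pipe            # the name can go; the open descriptor keeps the pipe alive
cat <&$fd          # prints 1 and 2, then sees EOF once foo closes its end
exec {fd}<&-       # finally close our read end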

tee to a log within a bash script, while preserving stdout as a TTY

Similar to redirect COPY of stdout to log file from within bash script itself, but I'd also like to preserve stdout as a TTY device.
For example, I have the following scripts:
/tmp/teed-off$ cat some-script
#!/usr/bin/env ruby
if $stdout.tty?
puts "stdout is a TTY"
else
puts "stdout is NOT a TTY"
end
/tmp/teed-off$ cat wrapper
#!/usr/bin/env bash
exec > >(tee some-script.log)
./some-script
When I run them, the wrapper loses the TTY-ness of stdout:
/tmp/teed-off$ ./some-script
stdout is a TTY
/tmp/teed-off$ ./wrapper
stdout is NOT a TTY
How can I flip that behavior around so that the script believes that it's in a TTY even when executed via the wrapper?
It won't be trivial, but I think you can do it via pseudo-ttys. I'm not sure that there's any standard tool, other than perhaps expect, that would do it for you.
It takes a bit of thinking about. You'd have a control program that would open the pseudo-tty master, then the slave. The slave would be connected to the output of ./some-script. The master would be read by the control program, which would copy the data it reads from the master to the file and to standard output.
I've not tried coding that up. I'm not sure whether you could do it with standard shell commands; I can't think of any way. So, I think there will be some C coding to be done.
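That said, the script utility mentioned in an earlier answer already implements this master/slave copy loop; a hedged sketch using util-linux syntax (BSD/macOS script takes its arguments differently):
script -q -c ./some-script some-script.log
# script allocates a pseudo-tty, runs ./some-script on the slave side, and copies
# everything both to your terminal and to some-script.log, so the script sees a TTY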
Look for dup2; it duplicates a file descriptor:
int dup2(int oldfd, int newfd);
