create read/write environment using named pipes - bash

I am using RedHat EL 4. I am using Bash 3.00.15.
I am writing SystemVerilog and I want to emulate stdin and stdout. I can only use files as the normal stdin and stdout is not supported in the environment. I would like to use named pipes to emulate stdin and stdout.
I understand how to create a to_sv and a from_sv file using mkfifo, and how to open and use them in SystemVerilog.
By using "cat > to_sv" I can send strings to the SystemVerilog simulation, but whatever I type is also echoed back in the shell.
I would like, if possible, a single shell where it acts almost like a UART terminal. Whatever I type goes directly out to "to_sv", and whatever is written to "from_sv" gets printed out.
If I am going about this completely wrong, then by all means suggest the correct way! Thank you so much,
Nachum Kanovsky

Edit: You can write to one named pipe and read from another in the same terminal. You can also stop typed characters from being echoed to the terminal using stty -echo.
mkfifo /tmp/from
mkfifo /tmp/to
stty -echo
cat /tmp/from & cat > /tmp/to
With this command, everything you type goes to /tmp/to without being echoed, and everything written to /tmp/from is printed to the terminal.
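Note that stty -echo leaves the terminal silenced after these commands finish, so you will probably want to run stty echo afterwards, or set a trap so it is restored automatically; a minimal sketch of that variant:

trap 'stty echo' EXIT    # restore echoing when this shell exits
stty -echo
cat /tmp/from & cat > /tmp/to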
Update: I have found a way to send every character typed to /tmp/to one at a time. Instead of cat > /tmp/to use this command:
while IFS= read -n1 c; do
    # read -n1 returns an empty $c when Enter is pressed
    if [ -z "$c" ]; then
        printf "\n" >> /tmp/to
    else
        printf "%s" "$c" >> /tmp/to
    fi
done

You probably want to use exec as in:
exec > to_sv
exec < from_sv
See sections 19.1 and 19.2 of the Advanced Bash-Scripting Guide, "I/O Redirection".
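For instance, a minimal sketch assuming the to_sv and from_sv FIFOs from the question already exist; after the two execs, plain echo and read talk directly to the simulation:

exec > to_sv     # all later stdout goes into the pipe
exec < from_sv   # all later stdin comes from the pipe
echo "run_test"  # hypothetical command sent to the SystemVerilog side
read reply       # blocks until the simulation writes a line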

Instead of cat /tmp/from & you may use tail -f /tmp/from & (at least here on Mac OS X 10.6.7 this prevented a deadlock if I echo more than once to /tmp/from).
Based on Lynch's code:
# terminal window 1
(
    rm -f /tmp/from /tmp/to
    mkfifo /tmp/from
    mkfifo /tmp/to
    stty -echo
    #cat -u /tmp/from &
    tail -f /tmp/from &
    bgpid=$!
    trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
    # forward each typed character to /tmp/to as soon as it is entered
    while IFS= read -n1 c; do
        if [ -z "$c" ]; then
            printf "\n" >> /tmp/to
        else
            printf "%s" "$c" >> /tmp/to
        fi
    done
)
# terminal window 2
(
    tail -f /tmp/to &
    bgpid=$!
    trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
    wait
)
# terminal window 3
echo "hello from /tmp/from" > /tmp/from

Related

Fails to read lines from running process in bash

Using process substitution, we can get every line of output of a command.
# Echoes every second using process substitution
while read line; do
    echo $line
done < <(for i in $(seq 1 10); do echo $i && sleep 1; done)
In the same way, I want to get the stdout of the wpa_supplicant command while discarding its stderr.
But nothing can be seen on screen!
while read line; do
    echo $line
done < <(wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null)
I confirmed that typing the same command at the prompt shows its output normally.
$ wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null
What is the mistake? Any help would be appreciated.
Finally I found the answer here!
The problem was easy... the buffering. Using stdbuf (and piping), the original code can be modified as below.
stdbuf -oL wpa_supplicant -iwlan1 -Dwext -c${MY_CONFIG_FILE} | while read line; do
    echo "! $line"
done
'stdbuf -oL' makes the stream line-buffered, so each line from the running process arrives as soon as it is produced.
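If you prefer to keep the process-substitution form from the question (so the while loop runs in the current shell rather than in a pipeline subshell), the same stdbuf fix should work there as well; a sketch:

while read line; do
    echo "! $line"
done < <(stdbuf -oL wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null)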

Read full stdin until EOF when stdin comes from `cat` bash

I'm trying to read the full stdin into a variable:
script.sh
#!/bin/bash
input=""
while read line
do
    echo "$line"
    input="$input""\n""$line"
done < /dev/stdin
echo "$input" > /tmp/test
When I run ls | ./script.sh or most other commands, it works fine.
However, it doesn't work when I run cat | ./script.sh, enter my message, and then hit Ctrl-C to exit cat.
Any ideas?
I would stick to the one-liner
input=$(cat)
Of course, Ctrl-D should be used to signal end-of-file.
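A minimal rewrite of script.sh along those lines, keeping the /tmp/test path from the question:

#!/bin/bash
input=$(cat)               # read all of stdin until EOF (Ctrl-D when typing interactively)
echo "$input" > /tmp/test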

Best way to use Unix domain socket in bash script

I'm working on a simple bash script daemon that uses Unix domain sockets. I have a loop like this:
#!/bin/bash
while true
do
    rm /var/run/mysock.sock
    command=`nc -Ul /var/run/mysock.sock`
    echo $command > /tmp/command
done
I'm echoing the command out to /tmp/command just for debugging purposes.
Is this the best way to do this?
Looks like I'm late to the party. Anyway, here is the suggestion I employ successfully for one-shot messages with a response:
INPUT=$(mktemp -u)
mkfifo -m 600 "$INPUT"
OUTPUT=$(mktemp -u)
mkfifo -m 600 "$OUTPUT"

# connect nc to the socket, reading requests from one FIFO
# and writing responses to the other
(cat "$INPUT" | nc -U "$SKT_PATH" > "$OUTPUT") &
NCPID=$!

exec 4>"$INPUT"   # keep the request FIFO open for writing
exec 5<"$OUTPUT"  # keep the response FIFO open for reading

echo "$POST_LINE" >&4
read -u 5 -r RESPONSE
echo "Response: '$RESPONSE'"
Here I use two FIFOs to talk to nc(1) and fetch its response.
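One thing the snippet leaves out is cleanup; a sketch of what that might look like once the exchange is done (whether nc exits when its stdin closes can depend on the nc variant):

exec 4>&-                  # close the write end; cat sees EOF on $INPUT
exec 5<&-                  # close the read end of $OUTPUT
rm -f "$INPUT" "$OUTPUT"   # remove the temporary FIFOs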
You can also use a single FIFO for bidirectional communication.
mkfifo communicate_pipe
exec 3<> communicate_pipe
cat communicate_pipe - | python socket.py 127.0.0.1:8002 | while read line; do
    cmd="./something.sh '${line}' > communicate_pipe"
    eval $cmd
done

nice way to kill piped process?

I want to process each line of a command's stdout the moment it is produced. I want to grab the output of test.sh (a long-running process). My current approach is this:
./test.sh > tmp.txt &
PID=$!
tail -f tmp.txt | while read line; do
    echo $line
    ps ${PID} > /dev/null
    if [ $? -ne 0 ]; then
        echo "exiting.."
    fi
done
But unfortunately, this will print "exiting.." and then wait, as the tail -f is still running. I tried both break and exit.
I run this on FreeBSD, so I cannot use the --pid= option of GNU tail.
I can use ps and grep to get the pid of the tail and kill it, but that seems very ugly to me.
Any hints?
Why do you need the tail process at all?
Could you instead do something along the lines of
./test.sh | while read line; do
    # process $line
done
or, if you want to keep the output in tmp.txt:
./test.sh | tee tmp.txt | while read line; do
    # process $line
done
If you still want to use an intermediate tail -f process, maybe you could use a named pipe (fifo) instead of a regular pipe, to allow detaching the tail process and getting its pid:
./test.sh > tmp.txt &
PID=$!
mkfifo tmp.fifo
tail -f tmp.txt > tmp.fifo &
PID_OF_TAIL=$!
while read line; do
    # process $line
    kill -0 ${PID} 2>/dev/null || kill ${PID_OF_TAIL}
done < tmp.fifo
rm tmp.fifo
I should however mention that such a solution presents several serious race-condition problems:
the PID of test.sh could be reused by another process;
if the test.sh process is still alive when you read the last line, you won't have any other occasion to detect its death afterwards and your loop will hang.
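If you don't actually need tmp.txt, one way to sidestep both issues might be to drop tail entirely and let test.sh write straight into the FIFO, so the loop ends on EOF as soon as test.sh exits; a sketch:

mkfifo tmp.fifo
./test.sh > tmp.fifo &     # the loop below gets EOF when test.sh exits
while read line; do
    # process $line
done < tmp.fifo
rm tmp.fifo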

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script if its standard output is being sent to a terminal or if it's piped to another process?
The case in point: I'd like to add escape codes to colorize output, but only when running interactively, not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
    echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
    echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
    echo 'STDOUT is attached to a redirection'
But they don't always work
However, when executing this command as a non-TTY ssh command, STD streams always look like they are being piped. To demonstrate this, using STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
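For reference, that older check looked something like the following (only meaningful on ssh versions that still used sockets for non-TTY STDIO):

[[ -S /dev/stdin ]] && echo 'STDIN is attached to a socket'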
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail to solve the problem, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
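On Linux, for example, that /proc approach might look like the sketch below; this is not portable, and the /dev/pts pattern is an assumption about where pseudo-terminals live:

# inspect where this shell's STDIN actually points (Linux-only)
case "$(readlink /proc/$$/fd/0)" in
    pipe:*)     echo 'STDIN is a pipe' ;;
    /dev/pts/*) echo 'STDIN is a terminal' ;;
    *)          echo 'STDIN is something else (file, socket, ...)' ;;
esac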
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
The test command (a builtin in Bash) has an option to check whether a file descriptor is a tty.
if [ -t 1 ]; then
    # Standard output is a tty
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
    # stdout is a terminal
else
    # stdout is not a terminal
fi
On Solaris, the suggestion from Dejay Clayton mostly works; the -p test does not respond as desired.
The file bash_redir_test.sh looks like this:
[[ -t 1 ]] && \
    echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
    echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
    echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable nor recommended, but for the sake of completeness here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.
