Reading lines from piped input in infinite while loop - bash

I made a simple script in bash to serve as a http proxy.
#!/usr/bin/env bash
trap "kill 0" SIGINT SIGTERM EXIT # kill all subshells on exit
port="6000"
rm -f client_output client_output_for_request_forming server_output
mkfifo client_output client_output_for_request_forming server_output # create named pipes
# creating subshell
(
cat <server_output |
nc -lp $port | # awaiting a connection from the client on the specified port
tee client_output | # sending a copy of the output to the client_output pipe
tee client_output_for_request_forming # sending a copy of the output to the client_output_for_request_forming pipe
) & # starting subshell in a separate process
echo "OK!"
# creating another subshell (to feed client_output_for_request_forming to it)
(
while read line; # read input from client_output_for_request_forming line by line
do
echo "line read: $line"
if [[ $line =~ ^Host:[[:space:]]([[:alnum:].-_]*)(:([[:digit:]]+))?[[:space:]]*$ ]]
then
echo "match: $line"
server_port=${BASH_REMATCH[3]} # extracting server port from regular expression
if [[ "$server_port" -eq "" ]]
then
server_port="80"
fi
host=${BASH_REMATCH[1]} # extracting host from regular expression
nc $host $server_port <client_output | # connect to the server
tee server_output # send copy to server_output pipe
break
fi
done
) <client_output_for_request_forming
echo "OK2!"
rm -f client_output client_output_for_request_forming server_output
I start it in the first terminal, and it outputs OK!
And in the second I type:
netcat localhost 6000
and then start entering lines of text, expecting them to be displayed in the first terminal window, since there is a while read line loop. But nothing is displayed.
What is it that I'm doing wrong? How can I make it work?

If no process is reading from the client_output fifo, then the background pipeline cannot start. Since the process that reads client_output does not start until a line has been read from client_output_for_request_forming, your processes deadlock.
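The underlying blocking behavior is easy to see in isolation: opening a FIFO for writing blocks until some process opens it for reading. A minimal sketch (demo_fifo is an illustrative name, not from the script above):

```shell
#!/usr/bin/env bash
# Demonstration of why the pipeline deadlocks: open() on a FIFO blocks
# until the other end is opened too.
mkfifo demo_fifo
# No reader attached: the redirection itself blocks, and timeout kills it.
timeout 1 bash -c 'echo hello > demo_fifo'
echo "writer without reader: exit status $?"   # 124 = killed by timeout
# Reader attached first: the same write completes at once.
cat demo_fifo >/dev/null &
echo hello > demo_fifo
wait
rm -f demo_fifo
```

The writer's exit status 124 shows it never got past opening the FIFO; with a reader already attached, the identical command returns immediately.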


Redirect stdout and stderr separately over a socket connection

I am trying to run a script on the other side of a unix-socket connection. For this I am trying to use socat. The script is
#!/bin/bash
read MESSAGE1
echo "PID: $$"
echo "$MESSAGE1"
sleep 2
read MESSAGE2
echo "$MESSAGE2" 1>&2
As the listener for socat I have
socat unix-listen:my_socket,fork exec:./getmsg.sh,stderr
as the client I use:
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket 2> stderr.txt
and I get the output
PID: 57248
message 1
message 2
whereas the file stderr.txt is empty.
My expectation however was that
stdout from the script on the listener side would be piped to stdout on the client and
stderr on the listener side to stderr on the client side.
That is the file stderr.txt should have had the content message 2 instead of being empty.
Any idea on how I can achieve it that stdout and stderr are transferred separately and not combined?
Thanks
If the input and output are just text with reasonably finite line lengths, then you can easily write muxing and demuxing commands in pure Bash.
The only issue is how socat (mis)handles stderr; it basically either forces it to be the same file as stdout or ignores it completely. At which point it is better to use one’s own file descriptor convention in the handler script, with unusual file descriptors that don’t conflict with 0, 1 or 2.
Let’s pick 11 for stdout and 12 for stderr, for example. For stdin we can just keep using 0 as usual.
getmsg.sh
#!/bin/bash
set -e -o pipefail
read message
echo "PID: $$" 1>&11 # to stdout
echo "$message" 1>&11 # to stdout
sleep 2
read message
echo "$message" 1>&12 # to stderr
mux.sh
#!/bin/bash
"$#" \
11> >(while read line; do printf '%s\n' "stdout: ${line}"; done) \
12> >(while read line; do printf '%s\n' "stderr: ${line}"; done)
demux.sh
#!/bin/bash
set -e -o pipefail
declare -ri stdout="${1:-1}"
declare -ri stderr="${2:-2}"
while IFS= read -r line; do
if [[ "$line" = 'stderr: '* ]]; then
printf '%s\n' "${line#stderr: }" 1>&"$((stderr))"
elif [[ "$line" = 'stdout: '* ]]; then
printf '%s\n' "${line#stdout: }" 1>&"$((stdout))"
else
exit 3 # report malformed stream
fi
done
A few examples
#!/bin/bash
set -e -o pipefail
socat unix-listen:my_socket,fork exec:'./mux.sh ./getmsg.sh' &
declare -ir server_pid="$!"
trap 'kill "$((server_pid))"
wait -n "$((server_pid))" || :' EXIT
until [[ -S my_socket ]]; do :; done # ugly
echo '================= raw data from the socket ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket
echo '================= normal mode of operation ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket \
| ./demux.sh
echo '================= demux / mux test for fun ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket \
| ./mux.sh ./demux.sh 11 12
My expectation however
There is only one socket, and one socket carries one stream of data, not two. You can't send stdout and stderr (two streams) over one handle without inter-mixing them, i.e. without losing the information about which stream each piece of data came from. Also see the explanation of the stderr flag with exec in man socat, and see man dup. Both stderr and stdout of the script are redirected to the same output.
The expectation would be that stderr.txt is empty, because socat does not write anything to stderr.
how I can achieve it that stdout and stderr are transferred separately and not combined?
Use two sockets separately for each stream.
Transfer messages using a protocol that would differentiate two streams. For example a simple line-based protocol that prefixes the messages:
# script.sh
echo "stdout: this is stdout"
echo "stderr: this is stderr"
# client
... | socat ... | tee >(sed -n 's/stderr: //p' >&2) | sed -n 's/stdout: //p'
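The prefix protocol can be exercised locally, without socat, to watch the demultiplexing work. A sketch, where the produce function stands in for the remote script:

```shell
#!/bin/bash
# Local sketch of the prefix protocol: tagged lines on one stream are
# split back onto stdout and stderr by tee + sed.
produce() {
  echo "stdout: this is stdout"
  echo "stderr: this is stderr"
}
produce | tee >(sed -n 's/^stderr: //p' >&2) | sed -n 's/^stdout: //p'
```

Running it prints "this is stdout" on stdout and "this is stderr" on stderr, so a client redirection like 2> stderr.txt now separates them.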

How to monitor the stdout of a command with a timer?

I'd like to know when an application hasn't printed a line to stdout for N seconds.
Here is a reproducible example:
#!/bin/bash
dmesg -w | {
while IFS= read -t 3 -r line
do
echo "$line"
done
echo "NO NEW LINE"
}
echo "END"
I can see the NO NEW LINE, but the pipe doesn't stop and bash doesn't continue. END is never displayed.
How to exit from the braces' code?
Source: https://unix.stackexchange.com/questions/117501/in-bash-script-how-to-capture-stdout-line-by-line
How to exit from the braces' code?
Not all commands exit when they can't write to their output or when they receive SIGPIPE, and they will not exit until they actually try to write and notice the pipe is gone. Instead, run the command in the background. If the intention is not to wait on the process, in bash you could just use process substitution:
{
while IFS= read -t 3 -r line; do
printf "%s\n" "$line"
done
echo "end"
} < <(dmesg -w)
You could also use coprocess. Or just run the command in the background with a pipe and kill it when done with it.
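The coprocess variant mentioned above could look like the following sketch. It keeps the structure of the original script, but because coproc exposes the producer's PID (as producer_PID for a coproc named producer), the script can kill it after the timeout instead of waiting for it:

```shell
#!/bin/bash
# Coprocess sketch: dmesg -w runs as a coproc, so the shell keeps its PID
# and can kill it once the read timeout fires.
coproc producer { dmesg -w; }
while IFS= read -t 3 -r line <&"${producer[0]}"; do
  printf '%s\n' "$line"
done
echo "NO NEW LINE"
kill "$producer_PID"   # stop dmesg -w so nothing is left running
echo "END"
```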

How to get exit codes for different sections of a command in bash

Let's say I have a line in my bash script with ssh bad@location "find -name 'fruit.txt' | grep Apple" and I'm trying to retrieve the exit codes of ssh, find -name 'fruit.txt', and grep Apple to see which command went bad.
So far, I've tried something like echo $? ${PIPESTATUS[0]} ${PIPESTATUS[1]}, but it looks like $? returns the same thing as ${PIPESTATUS[0]} in this case. I only need to return the first non-zero exit code along with dmesg for debugging purposes.
I've also considered using set -o pipefail, which will return a failure exit code if any command errors, but I'd like to somehow know which command failed for debugging.
I'd like either get an exit code of 255 (from ssh) and its corresponding dmesg, or somehow get all of the exit codes.
ssh only returns one exit status (per channel) to the calling shell; if you want to get exit status for the individual pipeline components it ran remotely, you need to collect them remotely, put them in with the data, and then parse them back out. One way to do that, if you have a very new version of bash, is like so:
#!/usr/bin/env bash
# note <<'EOF' not just <<EOF; with the former, the local shell does not munge
# heredoc contents.
remote_script=$(cat <<'EOF'
tempfile=$(mktemp "${TMPDIR:-/tmp}/output.XXXXXX"); mktemp_rc=$?
find -name 'fruit.txt' | grep Apple >"$tempfile"
printf '%s\0' "$mktemp_rc" "${PIPESTATUS[@]}"
cat "$tempfile"
rm -f -- "$tempfile"
exit 0 # so a bad exit status will be from ssh itself
EOF
)
# note that collecting a process substitution PID needs bash 4.4!
exec {ssh_fd}< <(ssh bad@location "$remote_script" </dev/null); ssh_pid=$!
IFS= read -r -d '' mktemp_rc <&$ssh_fd # read $? of mktemp
IFS= read -r -d '' find_rc <&$ssh_fd # read $? of find
IFS= read -r -d '' grep_rc <&$ssh_fd # read $? of grep
cat <&$ssh_fd # spool output of grep to our own output
wait "$ssh_pid"; ssh_rc=$? # let ssh finish and read its $?
echo "mktemp exited with status $mktemp_rc" >&2
echo "find exited with status $find_rc" >&2
echo "grep exited with status $grep_rc" >&2
echo "ssh exited with status $ssh_rc" >&2
How does this work?
exec {fd_var_name}< <(...) uses the bash 4.1 automatic file descriptor allocation feature to generate a file descriptor number, and associate it with content read from the process substitution running ....
In bash 4.4 or newer, process substitutions also set $!, so their PIDs can be captured, to later wait for them and collect their exit status; this is what we're storing in ssh_pid.
IFS= read -r -d '' varname reads from stdin up to the next NUL (in read -d '', the first character of '' is treated as the end of input; as an empty string in a C-derived language, the first byte of the string is its NUL terminator).
This could theoretically be made easier by writing the output before the exit status values -- you wouldn't need a temporary file on the remote machine that way -- but the caveat there is that if there were a NUL anywhere in the find | grep output, then some of that output could be picked up by the reads. (Similarly, you could store output in a variable instead of a temporary file, but again, that would destroy any NULs in the stream's output).
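The NUL-delimited parsing step can be seen in isolation with a self-contained sketch that mirrors how the script above reads the mktemp, find, and grep statuses back out of the stream:

```shell
#!/usr/bin/env bash
# Three status values written NUL-delimited, then read back one at a time;
# read -d '' stops at each NUL rather than at a newline.
printf '%s\0' 0 1 2 | {
  IFS= read -r -d '' mktemp_rc
  IFS= read -r -d '' find_rc
  IFS= read -r -d '' grep_rc
  echo "mktemp=$mktemp_rc find=$find_rc grep=$grep_rc"
}
# prints: mktemp=0 find=1 grep=2
```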

Why does bash script stop working

The script monitors incoming HTTP messages and forwards them to a monitoring application called zabbix. It works fine; however, after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I see the script is still running
the logfile gets updated properly (first command of the script)
The FIFO pipe seems to be working
The problem must be somewhere in the WHILE loop or the tail command.
I'm new to scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &
while true
do
if read line <$pipe; then
unset sn
for ((c=1; c<=3; c++)) # c is no of max parameters x 2 + 1
do
URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
if [[ "$URL" == 'sn' ]]; then
((c++))
sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
fi
done
if [[ "$sn" ]]; then
hosttype="US2G_"
host=$hosttype$sn
zabbix_sender -z nuc -s $host -k serial -o $sn -vv
fi
fi
done
You're inputting from the fifo incorrectly. By writing:
while true; do read line < $pipe ....; done
you are closing and reopening the fifo on each iteration of the loop. The first time you close it, the producer writing to the pipe (the tail -F) gets a SIGPIPE and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every process inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly redirect stdin away from it for each.
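Putting both points together, a sketch of the corrected loop (the zabbix_sender line is shown only as a comment, taken from the original script; the || break just makes the sketch terminate cleanly at EOF):

```shell
#!/bin/bash
# Corrected structure: the FIFO is opened once, for the whole loop, so the
# producer writing into it never gets SIGPIPE between reads.
pipe=/tmp/fifopipe
while true; do
  read -r line || break    # EOF only when the last writer closes the FIFO
  printf 'processing: %s\n' "$line"
  # Commands in the body inherit the FIFO as stdin; detach them explicitly:
  # zabbix_sender -z nuc -s "$host" -k serial -o "$sn" </dev/null
done < "$pipe"
```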

output of dns-sd browse command not redirected to file in busybox shell(ash)

To check if mdnsd is in probing mode, we browse for a service with the command below and redirect its output to a file; if the hostname of the device is found in the output, we decide that mdnsd is in probing mode.
command used for publishing service
dns-sd -R "Test status" "_mytest._tcp." "local." "22"
To browse for the service, the following command is used (running in the background):
dns-sd -lo -Z _mytest._tcp > /tmp/myfile &
To display the content of the file, cat is used:
cat /tmp/myfile
myfile is empty. If > is replaced with tee, I see output on the console but myfile remains empty.
I am unable to understand what is going on. Any pointers would help.
EDIT
Just for completeness, adding the output, which I missed adding before.
# dns-sd -lo -Z _mytest._tcp local
Using LocalOnly
Using interface -1
Browsing for _mytest._tcp
DATE: ---Tue 25 Apr 2017---
11:09:24.775 ...STARTING...
; To direct clients to browse a different domain, substitute that domain in place of '#'
lb._dns-sd._udp PTR #
; In the list of services below, the SRV records will typically reference dot-local Multicast DNS names.
; When transferring this zone file data to your unicast DNS server, you'll need to replace those dot-local
; names with the correct fully-qualified (unicast) domain name of the target host offering the service.
_mytest._tcp PTR Test\032status._mytest._tcp
Test\032status._mytest._tcp SRV 0 0 22 DevBoard.local. ; Replace with unicast FQDN of target host
Test\032status._mytest._tcp TXT ""
You appear to have a program with behavior that differs based on whether its output is to a TTY. One workaround is to use a tool such as unbuffer or script to simulate a TTY.
Moreover, inasmuch as the use of a file at all is done as a workaround, I suggest using a FIFO to actually capture the line you want without needing to write to a file and poll that file's contents.
#!/bin/sh
newline='
'
# Let's define some helpers...
cleanup() {
[ -e /proc/self/fd/3 ] && exec 3<&- ## close FD 3 if it's open
rm -f "fifo.$$" ## delete the FIFO from disk
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then ## if our pid is still running...
kill "$pid" ## ...then shut it down.
fi
}
die() { cleanup; echo "$*" >&2; exit 1; }
# Create a FIFO, and start `dns-sd` in the background *thinking* it's writing to a TTY
# but actually writing to that FIFO
mkfifo "fifo.$$"
script -q -f -c 'dns-sd -lo -Z _mytest._tcp local' /proc/self/fd/1 |
tr -d '\r' >"fifo.$$" & pid=$!
exec 3<"fifo.$$"
while read -t 1 -r line <&3; do
case $line in
"Script started on"*|";"*|"") continue;;
"Using "*|DATE:*|[[:digit:]]*) continue;;
*) result="${result}${line}${newline}"; break
esac
done
if [ -z "$result" ]; then
die "Timeout before receiving a non-boilerplate line"
fi
printf '%s\n' "Retrieved a result:" "$result"
cleanup
