Best way to use Unix domain socket in bash script

I'm working on a simple bash script daemon that uses Unix domain sockets. I have a loop like this:
#!/bin/bash
while true
do
rm /var/run/mysock.sock
command=`nc -Ul /var/run/mysock.sock`
echo $command > /tmp/command
done
I'm echoing the command out to /tmp/command just for debugging purposes.
Is this the best way to do this?

Looks like I'm late to the party. Anyway, here is my suggestion, which I employ successfully for one-shot messages with a response:
INPUT=$(mktemp -u)
mkfifo -m 600 "$INPUT"
OUTPUT=$(mktemp -u)
mkfifo -m 600 "$OUTPUT"
(cat "$INPUT" | nc -U "$SKT_PATH" > "$OUTPUT") &
NCPID=$!
exec 4>"$INPUT"
exec 5<"$OUTPUT"
echo "$POST_LINE" >&4
read -u 5 -r RESPONSE;
echo "Response: '$RESPONSE'"
Here I use two FIFOs to talk to nc(1) and fetch its response.
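When the exchange is done it is worth cleaning up the descriptors, the background nc and the FIFOs; a minimal sketch, using the variables defined above:
exec 4>&-                  # close the write end; cat sees EOF and the nc pipeline can finish
exec 5<&-                  # close the read end
kill "$NCPID" 2>/dev/null  # in case nc is still hanging around
rm -f "$INPUT" "$OUTPUT"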

You can also use a single FIFO for bidirectional communication.
mkfifo communicate_pipe
exec 3<> communicate_pipe
cat communicate_pipe - | python socket.py 127.0.0.1:8002 | while read line; do
cmd="./something.sh '${line}' > communicate_pipe";
eval $cmd;
done
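A side note: exec 3<> communicate_pipe opens the FIFO read-write, so a writer stays attached and cat never sees EOF. To kick the loop off you can seed the pipe from another shell, for example (a sketch reusing the names above):
echo "first command" > communicate_pipe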

Related

Redirect stdout and stderr separately over a socket connection

I am trying to run a script on the other side of a unix-socket connection. For this I am trying to use socat. The script is
#!/bin/bash
read MESSAGE1
echo "PID: $$"
echo "$MESSAGE1"
sleep 2
read MESSAGE2
echo "$MESSAGE2" 1>&2
As the listener for socat I have
socat unix-listen:my_socket,fork exec:./getmsg.sh,stderr
as the client I use:
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket 2> stderr.txt
and I get the output
PID: 57248
message 1
message 2
whereas the file stderr.txt is empty.
My expectation however was that
stdout from the script on the listener side would be piped to stdout on the client and
stderr on the listener side to stderr on the client side.
That is, the file stderr.txt should have had the content 'message 2' instead of being empty.
Any idea on how I can achieve it that stdout and stderr are transferred separately and not combined?
Thanks
If the input and output are just text with reasonably finite line lengths, then you can easily write muxing and demuxing commands in pure Bash.
The only issue is how socat (mis)handles stderr; it basically either forces it to be the same file as stdout or ignores it completely. At which point it is better to use one’s own file descriptor convention in the handler script, with unusual file descriptors that don’t conflict with 0, 1 or 2.
Let’s pick 11 for stdout and 12 for stderr, for example. For stdin we can just keep using 0 as usual.
getmsg.sh
#!/bin/bash
set -e -o pipefail
read message
echo "PID: $$" 1>&11 # to stdout
echo "$message" 1>&11 # to stdout
sleep 2
read message
echo "$message" 1>&12 # to stderr
mux.sh
#!/bin/bash
"$#" \
11> >(while read line; do printf '%s\n' "stdout: ${line}"; done) \
12> >(while read line; do printf '%s\n' "stderr: ${line}"; done)
demux.sh
#!/bin/bash
set -e -o pipefail
declare -ri stdout="${1:-1}"
declare -ri stderr="${2:-2}"
while IFS= read -r line; do
if [[ "$line" = 'stderr: '* ]]; then
printf '%s\n' "${line#stderr: }" 1>&"$((stderr))"
elif [[ "$line" = 'stdout: '* ]]; then
printf '%s\n' "${line#stdout: }" 1>&"$((stdout))"
else
exit 3 # report malformed stream
fi
done
A few examples
#!/bin/bash
set -e -o pipefail
socat unix-listen:my_socket,fork exec:'./mux.sh ./getmsg.sh' &
declare -ir server_pid="$!"
trap 'kill "$((server_pid))"
wait -n "$((server_pid))" || :' EXIT
until [[ -S my_socket ]]; do :; done # ugly
echo '================= raw data from the socket ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket
echo '================= normal mode of operation ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket \
| ./demux.sh
echo '================= demux / mux test for fun ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket \
| ./mux.sh ./demux.sh 11 12
My expectation however
There is only one socket, and over one socket only one stream of data can be sent, not two. You can't send stdout and stderr (two streams) over one handle without inter-mixing them, i.e. without losing the information about which stream each piece of data came from. Also see the explanation of the stderr flag for exec in man socat, and see man dup. Both stderr and stdout of the script are redirected to the same output.
So the expected result is that stderr.txt is empty, because socat itself does not write anything to stderr.
how I can achieve it that stdout and stderr are transferred separately and not combined?
Use two sockets, one for each stream (a sketch of this option follows the example below).
Transfer messages using a protocol that differentiates the two streams. For example, a simple line-based protocol that prefixes the messages:
# script.sh
echo "stdout: this is stdout"
echo "stderr: this is stderr"
# client
... | socat ... | tee >(sed -n 's/stderr: //p' >&2) | sed -n 's/stdout: //p'
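And a rough sketch of the first option (two sockets, one per stream), with the listener from the question started without the ,stderr option. The second socket path my_socket_err and the modified handler are assumptions, not part of the original setup:
#!/bin/bash
# getmsg.sh, modified (hypothetical): route this script's stderr into a second Unix socket
exec 2> >(socat -u - unix-connect:my_socket_err)
read MESSAGE1
echo "PID: $$"
echo "$MESSAGE1"
sleep 2
read MESSAGE2
echo "$MESSAGE2" 1>&2
On the client side, each stream is then collected from its own socket:
socat -u unix-listen:my_socket_err - > stderr.txt &
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket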

How to not show my program's output on the terminal

I use GROMACS. I am thinking about how I can make my script faster.
This is my script
#!/bin/bash
number=1
for var1 in {1095000..1100000}
do
gmx hbond -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1 <<EOF
8 45
EOF
number=$((number+1))
done
My script makes 5000 files, and for each one GROMACS prints its output on the screen and asks me to choose two numbers (that is why I use <<EOF 8 45 EOF to supply them automatically for every file).
But I heard that printing something on the screen takes time, so how do I keep the GROMACS output from showing in the terminal?
I tried to use '>', but I still see most of the program's output (I just no longer see the part where I should choose the numbers):
#!/bin/bash
number=1
for var1 in {1095000..1100000}
do
gmx hbond >/dev/null -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1 >/dev/null <<EOF
8 45
EOF
>/dev/null
number=$((number+1))
done
Every program run from the terminal has 3 file descriptors by default:
Standard input, a.k.a. stdin
Standard output, a.k.a. stdout
Standard error, a.k.a. stderr
When redirecting >/dev/null it actually redirects the standard output to /dev/null, which is strictly equivalent to 1>/dev/null
However the program may also output to the standard error, in which case you may want to add 2>/dev/null to suppress stderr messages:
gmx hbond >/dev/null 2>/dev/null -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1
In Bash you may use &> to redirect both stdout and stderr at the same time:
gmx hbond &>/dev/null -f luteina_wertykalna_1095_1100.xtc -s sim_prz_lut_d.tpr -n kuba_index_sim_prz_lut_d.ndx -hbn eq3_test_$number.ndx -r 0.4 -contact yes -b $var1 -e $var1

Why does bash script stop working

The script monitors incoming HTTP messages and forwards them to a monitoring application called Zabbix. It works fine, but after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I see the script is still running
the logfile gets updated properly (first command of the script)
The FIFO pipe seems to be working
The problem must be somewhere in the while loop or the tail command.
I'm new at scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &
while true
do
if read line <$pipe; then
unset sn
for ((c=1; c<=3; c++)) # c is no of max parameters x 2 + 1
do
URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
if [[ "$URL" == 'sn' ]]; then
((c++))
sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
fi
done
if [[ "$sn" ]]; then
hosttype="US2G_"
host=$hosttype$sn
zabbix_sender -z nuc -s $host -k serial -o $sn -vv
fi
fi
done
You're inputting from the fifo incorrectly. By writing:
while true; do read line < $pipe ....; done
you are closing and reopening the fifo on each iteration of the loop. The first time you close it, the producer to the pipe (the tail -f) gets a SIGPIPE and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every process inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly close stdin for each.
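Redirecting each command's stdin to /dev/null is one way to do that. A hedged sketch of what this might look like with the loop from the question (only the redirections are new):
while true
do
if read line; then
# parse $line into sn and host exactly as in the original loop
zabbix_sender -z nuc -s "$host" -k serial -o "$sn" -vv < /dev/null  # stdin redirected away from the FIFO so it can't consume lines
fi
done < "$pipe"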

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed and copied to individual machines.
The files are forwarded when they are received by the master server. And files normally arrive in bursts. (Auth by ssh keys)
This script creates the sftp session, and uses a pipe to watch the head of a fifo pipe.
HOST=$1
pipe=/tmp/pipes/${HOST%%.*}
ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
echo "FTP is Running on this Server"
exit
else
pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
[[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >>$pipe #Sends Command to Host
echo "Started FTP to $HOST"
Update: I ended up changing the cleanup code to use "ps aux" to see if an FTP session is running, and subsequently whether the tail -f is still running, grepping for user@host and the name of the pipe respectively. This is done when the script is called, and the script is called whenever I try to upload a file.
IE:
FILENAME=`basename $1`
function transfer {
echo cd /apps/data >> $2 # For Safety
echo put $1 .$FILENAME >> $2
echo rename .$FILENAME $FILENAME >> $2
echo chmod 0666 $FILENAME >> $2
}
./ftp.sh host
[ -p $pipedir/host ] && transfer $1 $pipedir/host
Files received on the master server are caught by Incron, which writes a put command and the received file's location to the FIFO pipe, to be sent by sftp (a rename is also performed).
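For illustration, the Incron hook described here could be an incrontab entry roughly like this (the watched directory and forwarding script are hypothetical; $@ expands to the watched path and $# to the file name):
/srv/incoming IN_CLOSE_WRITE /usr/local/bin/forward.sh $@/$#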
My question is: is this safe? Could it crash on FTP errors/events? I'm not really worried about login errors.
The goal is to reduce the number of FTP logins to a single session per one-minute (or longer) interval,
and to allow files to be forwarded as they're received, with dynamic commands.
I'd prefer to use standard Ubuntu libraries, if possible.
EDIT: After testing and working through some issues, the server simply runs with:
[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) \
| tee >(sed "/tail:/ q" >/dev/null && kill $(cat $pipe.pid) |& rm -f $pipe >/dev/null; ) \
| sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock
It's rather simple but works nicely.
You might be interested in setting up a simpler (and more robust) synchronization infrastructure:
if a given host is not connected when a file arrives... it never receives it (if I understand your code correctly)
I would do something like
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
on the client machines, either periodically or by event... rsync intelligently synchronizes the files by their timestamp and size (-a contains -t)
The event would be some process termination, like:
the client does (configure private key usage in ~/.ssh/config for the host):
#!/bin/bash
while :;do
ssh user@host /srv/bin/sleepListener 600
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep.
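The link can be created once with, for example:
ln -s /bin/sleep /srv/bin/sleepListener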
After receiving a new file, the server runs:
killall sleepListener
Note: every 10 minutes a full check is performed... if nodes go offline/online it doesn't matter.

create read/write environment using named pipes

I am using RedHat EL 4. I am using Bash 3.00.15.
I am writing SystemVerilog and I want to emulate stdin and stdout. I can only use files, as the normal stdin and stdout are not supported in the environment. I would like to use named pipes to emulate stdin and stdout.
I understand how to create a to_sv and from_sv file using mkfifo, and how to open them and use them in SystemVerilog.
By using "cat > to_sv" I can output strings to the SystemVerilog simulation. But that also outputs what I'm typing in the shell.
I would like, if possible, a single shell where it acts almost like a UART terminal. Whatever I type goes directly out to "to_sv", and whatever is written to "from_sv" gets printed out.
If I am going about this completely wrong, then by all means suggest the correct way! Thank you so much,
Nachum Kanovsky
Edit: You can output to a named pipe and read from another one in the same terminal. You can also keep typed keys from being echoed to the terminal using stty -echo.
mkfifo /tmp/from
mkfifo /tmp/to
stty -echo
cat /tmp/from & cat > /tmp/to
With this command, everything you write goes to /tmp/to and is not echoed, and everything written to /tmp/from is displayed.
Update: I have found a way to send every character input to /tmp/to one at a time. Instead of cat > /tmp/to use this command:
while IFS= read -n1 c;
do
if [ -z "$c" ]; then
printf "\n" >> /tmp/to;
fi;
printf "%s" "$c" >> /tmp/to;
done
You probably want to use exec as in:
exec > to_sv
exec < from_sv
See sections 19.1. and 19.2. in the Advanced Bash-Scripting Guide - I/O Redirection
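A minimal sketch of that approach with the pipe names from this question (note that opening a FIFO blocks until the other end is opened, so the simulator must open its ends as well):
[ -p to_sv ] || mkfifo to_sv
[ -p from_sv ] || mkfifo from_sv
exec > to_sv      # this shell's stdout now goes to the simulation
exec < from_sv    # and its stdin comes from the simulation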
Instead of cat /tmp/from & you may use tail -f /tmp/from & (at least here on Mac OS X 10.6.7 this prevented a deadlock if I echo more than once to /tmp/from).
Based on Lynch's code:
# terminal window 1
(
rm -f /tmp/from /tmp/to
mkfifo /tmp/from
mkfifo /tmp/to
stty -echo
#cat -u /tmp/from &
tail -f /tmp/from &
bgpid=$!
trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
while IFS= read -n1 c;
do
if [ -z "$c" ]; then
printf "\n" >> /tmp/to
fi;
printf "%s" "$c" >> /tmp/to
done
)
# terminal window 2
(
tail -f /tmp/to &
bgpid=$!
trap "kill -TERM ${bgpid}; stty echo; exit" 1 2 3 13 15
wait
)
# terminal window 3
echo "hello from /tmp/from" > /tmp/from
