Write and read from a fifo from two different script - bash

I have two bash scripts.
One script writes to a fifo. The second one reads from the fifo, but AFTER the first one has finished writing.
But something does not work, and I do not understand where the problem is. Here is the code.
The first script is (the writer):
#!/bin/bash
fifo_name="myfifo"
# If it does not exist, create the fifo
[ -p "$fifo_name" ] || mkfifo "$fifo_name"
exec 3<> "$fifo_name"
echo "foo" > "$fifo_name"
echo "bar" > "$fifo_name"
The second script is (the reader):
#!/bin/bash
fifo_name="myfifo";
while true
do
    if read line < "$fifo_name"; then
        # if [[ "$line" == 'ar' ]]; then
        #     break
        # fi
        echo $line
    fi
done
Can anyone help me please?
Thank you

Replace the second script with:
#!/bin/bash
fifo_name="myfifo"
while true
do
    if read line; then
        echo $line
    fi
done <"$fifo_name"
This opens the fifo only once and reads every line from it.
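The writer side can use the same trick in reverse: open the fifo once and write every line through that single open. A minimal companion sketch (assuming the reader above is already running, since opening a fifo for writing blocks until there is a reader):
#!/bin/bash
fifo_name="myfifo"
[ -p "$fifo_name" ] || mkfifo "$fifo_name"
{
    echo "foo"
    echo "bar"
} > "$fifo_name"   # a single open; both lines are delivered before EOF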

The problem with your setup is that the fifo creation is in the wrong script if you wish to delay fifo access until the reader is actually running. To correct the problem you will need to do something like this:
reader: fifo_read.sh
#!/bin/bash
fifo_name="/tmp/myfifo" # fifo name
trap "rm -f $fifo_name" EXIT # set trap to rm fifo_name at exit
[ -p "$fifo_name" ] || mkfifo "$fifo_name" # if fifo not found, create
exec 3< $fifo_name # redirect fifo_name to fd 3
# (not required, but makes read clearer)
while :; do
    if read -r -u 3 line; then               # read line from fifo_name
        if [ "$line" = 'quit' ]; then        # if line is quit, quit
            printf "%s: 'quit' command received\n" "$fifo_name"
            break
        fi
        printf "%s: %s\n" "$fifo_name" "$line"   # print line read
    fi
done
exec 3<&- # reset fd 3 redirection
exit 0
writer: fifo_write.sh
#!/bin/bash
fifo_name="/tmp/myfifo"
# If it does not exist, exit :)
[ -p "$fifo_name" ] || {
    printf "\n Error fifo '%s' not found.\n\n" "$fifo_name"
    exit 1
}
[ -n "$1" ] &&
    printf "%s\n" "$1" > "$fifo_name" ||
    printf "pid: '%s' writing to fifo\n" "$$" > "$fifo_name"
exit 0
operation: (start reader in 1st terminal)
$ ./fifo_read.sh # you can background with & at end
(launch writer in second terminal)
$ ./fifo_write.sh "message from writer" # second terminal
$ ./fifo_write.sh
$ ./fifo_write.sh quit
output in 1st terminal:
$ ./fifo_read.sh
/tmp/myfifo: message from writer
/tmp/myfifo: pid: '28698' writing to fifo
/tmp/myfifo: 'quit' command received

The following script should do the job:
#!/bin/bash
FIFO="/tmp/fifo"
if [ ! -e "$FIFO" ]; then
    mkfifo "$FIFO"
fi
for script in "$@"; do
    echo "$script" > "$FIFO" &
done
while read script; do
    /bin/bash -c "$script"
done < "$FIFO"
Given two scripts a.sh and b.sh that print "a" and "b" to stdout, respectively, one will get the following result (given that the script above is called test.sh):
./test.sh /tmp/a.sh /tmp/b.sh
a
b
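For reference, the two helper scripts might look like this (a sketch; any executables that write to stdout will do):
cat > /tmp/a.sh <<'EOF'
#!/bin/bash
echo "a"
EOF
cat > /tmp/b.sh <<'EOF'
#!/bin/bash
echo "b"
EOF
chmod +x /tmp/a.sh /tmp/b.sh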
Best,
Julian

Related

/bin/sh: capture stderr into a variable

I am assigning the output of a command to variable A:
A=$(some_command)
How can I "capture" stderr into a variable B ?
I have tried some variations with 2>&1 and read but that does not work:
A=$(some_command) 2>&1 | read B
echo $B
Here's a code snippet that might help you
# capture stderr into a variable and print it
echo "capture stderr into a variable and print it"
var=$(lt -l /tmp 2>&1)   # "lt" is a deliberate typo, so the command fails
echo $var
capture stderr into a variable and print it
zsh: command not found: lt
# capture stdout into a variable and print it
echo "capture stdout into a variable and print it"
var=$(ls -l /tmp)
echo $var
# capture both stderr and stdout into a variable and print it
echo "capture both stderr and stdout into a variable and print it"
var=$(ls -l /tmp 2>&1)
echo $var
# A more classic way of executing a command, which I always follow, is shown
# below. This way I am always in control of what is going on and can act
# accordingly:
if somecommand ; then
echo "command succeeded"
else
echo "command failed"
fi
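If you only need stderr in a variable, which is what the question asks for, one common idiom is to swap the roles of the descriptors inside the command substitution. A sketch, reusing some_command from the question (stdout is discarded here, but it could be redirected to a file instead):
B=$(some_command 2>&1 >/dev/null)   # stderr goes to B, stdout is discarded
echo "$B"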
If you have to capture the output and stderr in different variables, then the following might help as well
## create a file using file descriptor for stdout
exec 3> stdout.txt
# create a file using file descriptor for stderr
exec 4> stderr.txt
A=$($1 /tmp 2>&4 >&3);
## close file descriptor
exec 3>&-
exec 4>&-
## open file descriptor for reading
exec 3< stdout.txt
exec 4< stderr.txt
## read from file using file descriptor
read line <&3
read line2 <&4
## close file descriptor
exec 3<&-
exec 4<&-
## print line read from file
echo "stdout: $line"
echo "stderr: $line2"
## delete file
rm stdout.txt
rm stderr.txt
You can try running it with the following
╰─ bash test.sh pwd
stdout: /tmp/somedir
stderr:
╰─ bash test.sh pwdd
stdout:
stderr: test.sh: line 8: pwdd: command not found
As noted in a comment, your use case may be better served by another scripting language. For example, in Perl you can achieve what you want quite simply:
#!/usr/bin/env perl
use v5.26; # or earlier versions
use Capture::Tiny 'capture'; # library is not in core
my $cmd = 'date';
my @arg = ('-R', '-u');
my ($stdout, $stderr, $exit) = capture {
    system( $cmd, @arg );
};
say "STDOUT: $stdout";
say "STDERR: $stderr";
say "EXIT: $exit";
I'm sure similar solutions are available in python, ruby, and all the rest.
I gave it another try using process substitution and came up with this:
# command with no error
date +%b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
September
# command with error
date b > >(read A; if [ "$A" = 'Sep' ]; then echo 'September'; fi ) 2> >(read B; if [ ! -z "$B" ]; then echo "$B"; fi >&2)
date: invalid date 'b'
# command with both at the same time should work too
I had no success "exporting" the variables from the subprocesses back to the original script. It might be possible though. I just couldn't figure it out.
But this gives you at least access to stdout and stderr as a variable. This means you can do whatever processing you want on them as variables. It depends on your use case if this is of any help to you. Good luck :-)
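Edit: if the subshell scoping is what blocks you from "exporting" the variables, a temporary file sidesteps it: the stdout assignment happens in the current shell, and stderr lands in a file you read back afterwards. A minimal sketch (some_command is a placeholder):
err_file=$(mktemp)
A=$(some_command 2>"$err_file")   # stdout -> A
B=$(<"$err_file")                 # stderr -> B
rm -f "$err_file"
echo "stdout: $A"
echo "stderr: $B"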

Test whether stdout has been written to

I have a script that prints in a loop. I want the loop to print differently the first time from all other times (i.e., it should print differently if anything has been printed at all). I am thinking a simple way would be to check whether anything has been printed yet (i.e., stdout has been written to). Is there any way to determine that?
I know I could also write to a variable and test whether it's empty, but I'd like to avoid a variable if I can.
I think this will do what you need. If you echo something between # THE SCRIPT ITSELF and # END, "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT" will be printed, followed by the data; otherwise "STDOUT HAS NOT BEEN TOUCHED" is printed.
#!/bin/bash
readonly TMP=$(mktemp /tmp/test_XXXXXX)
exec 3<> "$TMP" # open tmp file as fd 3
exec 4>&1 # save current value of stdout as fd 4
exec >&3 # redirect stdout to fd 3 (tmp file)
# THE SCRIPT ITSELF
echo Hello World
# END
exec >&4 # restore saved stdout
exec 3>&- # close tmp file
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
    echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
    echo
    cat "$TMP"
else
    echo "STDOUT HAS NOT BEEN TOUCHED"
fi
rm "$TMP"
So, output of the script as is:
THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT
Hello World
and if you remove the echo Hello World line:
STDOUT HAS NOT BEEN TOUCHED
And if you really want to test that while running the script itself, you can do that, too :-)
#!/bin/bash
#FIRST ELSE
function echo_fl() {
    TMP_SIZE=$(stat -f %z "$TMP")
    if [ $TMP_SIZE -gt 0 ]; then
        echo $2
    else
        echo $1
    fi
}
TMP=$(mktemp /tmp/test_XXXXXX)
exec 3<> "$TMP" # open tmp file as fd 3
exec 4>&1 # save current value of stdout as fd 4
exec >&3 # redirect stdout to fd 3 (tmp file)
# THE SCRIPT ITSELF
for f in fst snd trd; do
echo_fl "$(echo $f | tr a-z A-Z)" "$f"
done
# END
exec >&4 # restore saved stdout
exec 3>&- # close tmp file
TMP_SIZE=$(stat -f %z "$TMP")
if [ $TMP_SIZE -gt 0 ]; then
    echo "THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT"
    echo
    cat "$TMP"
else
    echo "STDOUT HAS NOT BEEN TOUCHED"
fi
rm "$TMP"
output is:
THE FOLLOWING DATA HAS BEEN WRITTEN TO STDOUT
FST
snd
trd
as you can see: only the first line (FST) has caps on. That's what the echo_fl function does for you: if it's the first line of output, it echoes the first argument; otherwise it echoes the second argument :-)
It's hard to tell what you are trying to do here, but if your script is printing to stdout, you could simply pipe it to perl:
yourcommand | perl -pe 'if ($. == 1) { print "First line is: $_" }'
It all depends on what kind of changes you are attempting to do.
Note for GNU stat users: there you cannot use the -f option with %z (-f switches stat to filesystem mode), so the line TMP_SIZE=$(stat -f %z "$TMP") produces a long string that fails the test in if [ $TMP_SIZE -gt 0 ]. The stat -f %z spelling above is the BSD/macOS syntax.
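A portable way to get the file size that works on both GNU and BSD userlands is wc:
TMP_SIZE=$(wc -c < "$TMP")        # portable: GNU and BSD
# TMP_SIZE=$(stat -c %s "$TMP")   # GNU coreutils equivalent of BSD's stat -f %z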

logging blocks of code to log files in bash

I have a huge bash script and I want to log specific blocks of code to specific, small log files (instead of just one huge log file).
I have the following two methods:
# in this case, 'log' is a bash function
# Using code block & piping
{
# ... bash code ...
} | log "file name"
# Using Process Substitution
log "file name" < <(
# ... bash code ...
)
Both methods may interfere with the proper execution of the bash script, e.g. when assigning values to a variable (like the problem presented here).
How do you suggest to log the output of commands to log files?
Edit:
This is what I tried to do (besides many other variations), but doesn't work as expected:
function log()
{
    if [ -z "$counter" ]; then
        counter=1
        echo "" >> "./General_Log_File"   # Create the summary log file
    else
        (( ++counter ))
    fi
    echo "" > "./${counter}_log_file"     # Create specific log file
    # Display text-to-be-logged on screen & add it to the summary log file
    # & write text-to-be-logged to its corresponding log file
    exec 1> >(tee "./${counter}_log_file" | tee -a "./General_Log_File") 2>&1
}
log # Logs the following code block
{
# ... Many bash commands ...
}
log # Logs the following code block
{
# ... Many bash commands ...
}
The results vary between executions: sometimes the log files are created and sometimes they aren't (which raises an error).
You could try something like this:
function log()
{
    local logfile=$1
    local errfile=$2
    exec > $logfile
    exec 2> $errfile   # if $errfile is not an empty string
}
log $fileA $errfileA
echo stuff
log $fileB $errfileB
echo more stuff
This would redirect all stdout/stderr from current process to a file without any subprocesses.
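One caveat: after log runs, the original stdout is gone for the rest of the script. If you also want to switch back to the terminal between blocks, you could save the original descriptors once and restore them later; a sketch along the same lines (endlog is a name made up here):
exec 5>&1 6>&2        # save the original stdout/stderr once

function log()
{
    exec > "$1"
    [ -n "$2" ] && exec 2> "$2"
}

function endlog()     # hypothetical helper: undo the redirection
{
    exec 1>&5 2>&6    # restore the saved descriptors
}

log fileA errfileA
echo stuff            # goes to fileA
endlog
echo "back on the terminal"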
Edit: The below might be a good solution then, but not tested:
pipe=$(mktemp -u)   # -u: generate a name only; mktemp without it would create
mkfifo $pipe        # a regular file and make mkfifo fail with "File exists"
exec 1>$pipe
function log()
{
    if ! [[ -z "$teepid2" ]]; then
        kill $teepid2
    else
        tee <$pipe general_log_file &
        teepid1=$!
        count=1
    fi
    tee <$pipe ${count}_logfile &
    teepid2=$!
    (( ++count ))
}
log
echo stuff
log
echo stuff2
if ! [[ -z "$teepid1" ]]; then kill $teepid1; fi
Thanks to Sahas, I managed to achieve the following solution:
function log()
{
    [ -z "$counter" ] && counter=1 || (( ++counter ))
    if [ -n "$teepid" ]; then
        exec 1>&- 2>&-   # close file descriptors to signal EOF to the `tee`
                         # command in the bg process
        wait $teepid     # wait for bg process to exit
    fi
    # Display text-to-be-logged on screen and
    # write it to the summary log & to its corresponding log file
    ( tee "${counter}.log" < "$pipe" | tee -a "Summary.log" 1>&4 ) &
    teepid=$!
    exec 1>"$pipe" 2>&1   # redirect stdout & stderr to the pipe
}
# Create temporary FIFO/pipe
pipe_dir=$(mktemp -d)
pipe="${pipe_dir}/cmds_output"
mkfifo "$pipe"
exec 4<&1 # save value of FD1 to FD4
log # Logs the following code block
{
# ... Many bash commands ...
}
log # Logs the following code block
{
# ... Many bash commands ...
}
if [ -n "$teepid" ]; then
    exec 1>&- 2>&-   # close file descriptors to signal EOF to the `tee`
                     # command in the bg process
    wait $teepid     # wait for bg process to exit
fi
It works - I tested it.
References:
Force bash script to use tee without piping from the command line # superuser.com - helped a lot
I/O Redirection # tldp.org
$! - PID Variable # tldp.org
TEST Operators: Binary Comparison # tldp.org
For simple redirection of bash code block, without using a dedicated function, do:
(
    echo "log this block of code"
    # commands ...
    # ...
    # ...
) &> output.log
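The same pattern extends to several blocks, each with its own file; &>> (bash 4+) appends instead of truncating:
(
    echo "first block"
) &>> block_1.log

(
    echo "second block"
) &>> block_2.log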

Send commands to a GNU screen

I have a GNU screen named demo, I want to send commands to it. How do I do this?
screen -S demo -X /home/aa/scripts/outputs.sh
yields "No screen session found."
and doing screen -ls shows that it isn't running.
If the Screen session isn't running, you won't be able to send things to it. Start it first.
Once you've got a session, you need to distinguish between Screen commands and keyboard input. screen -X expects a Screen command. The stuff command sends input, and if you want to run that program from a shell prompt, you'll have to pass a newline as well.
screen -S demo -X stuff '/home/aa/scripts/outputs.sh
'
Note that this may be the wrong approach. Are you sure you want to type into whatever is active in that session? To direct the input at a particular window, use
screen -S demo -p 1 -X stuff '/home/aa/scripts/outputs.sh
'
where 1 is the window number (you can use its title instead).
To start a new window in that session, use the screen command instead. (That's the screen Screen command, not the screen shell command.)
screen -S demo -p 1 -X screen '/home/aa/scripts/outputs.sh'
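Putting the pieces together for the question's script, the whole sequence might look like this (window 0 is the first window of a fresh session):
screen -dmS demo    # start the session detached, so it exists
screen -S demo -p 0 -X stuff '/home/aa/scripts/outputs.sh
'
screen -r demo      # attach to watch the output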
I put this together to capture the output from the commands. It also handles stdin if you want to pipe some input.
function xscreen {
    # Usage: xscreen <screen-name> command...
    local SCREEN_NAME=$1
    shift
    # Create screen if it doesn't exist
    if ! screen -list | grep $SCREEN_NAME >/dev/null ; then
        screen -dmS $SCREEN_NAME
    fi
    # Create I/O pipes
    local DIR=$( mktemp -d )
    local STDIN=$DIR/stdin
    local STDOUT=$DIR/stdout
    local STDERR=$DIR/stderr
    mkfifo $STDIN $STDOUT $STDERR
    trap 'rm -f $STDIN $STDOUT $STDERR; rmdir $DIR' RETURN
    # Print output and kill stdin when both pipes are closed
    { cat $STDERR >&2 & cat $STDOUT & wait ; fuser -s -PIPE -k -w $STDIN ; } &
    # Start the command (clear line with ^A^K, enter command with redirects, run with Enter ^M)
    screen -S $SCREEN_NAME -p0 -X stuff "$(echo -ne '\001\013') { $* ; } <$STDIN 1> >(tee $STDOUT) 2> >(tee $STDERR >&2)$(echo -ne '\015')"
    # Forward stdin
    cat > $STDIN
    # Just in case stdin is closed
    wait
}
Taking it a step further, it can be useful to call this function over ssh:
ssh user@host -n xscreen somename 'echo hello world'
Maybe combine it with something like ssh user@host "$(typeset -f xscreen); xscreen ..." so you don't have to have the function already defined on the remote host.
A longer version in a bash script that handles the return status and syntax errors:
#!/bin/bash
function usage {
    echo "$(basename $0) [[user@]server:[port]] <screen-name> command..." >&2
    exit 1
}
[[ $# -ge 2 ]] || usage
SERVER=
SERVERPORT="-p 22"
SERVERPAT='^(([a-z]+@)?([A-Za-z0-9.]+)):([0-9]+)?$'
if [[ "$1" =~ $SERVERPAT ]]; then
    SERVER="${BASH_REMATCH[1]}"
    [[ -n "${BASH_REMATCH[4]}" ]] && SERVERPORT="-p ${BASH_REMATCH[4]}"
    shift
fi
function xscreen {
    # Usage: xscreen <screen-name> command...
    local SCREEN_NAME=$1
    shift
    if ! screen -list | grep $SCREEN_NAME >/dev/null ; then
        echo "Screen $SCREEN_NAME not found." >&2
        return 124
        # Create screen if it doesn't exist
        #screen -dmS $SCREEN_NAME
    fi
    # Create I/O pipes
    local DIR=$( mktemp -d )
    mkfifo $DIR/stdin $DIR/stdout $DIR/stderr
    echo 123 > $DIR/status
    trap 'rm -f $DIR/{stdin,stdout,stderr,status}; rmdir $DIR' RETURN
    # Forward ^C to screen
    trap "screen -S $SCREEN_NAME -p0 -X stuff $'\003'" INT
    # Print output and kill stdin when both pipes are closed
    {
        cat $DIR/stderr >&2 &
        cat $DIR/stdout &
        wait
        [[ -e $DIR/stdin ]] && fuser -s -PIPE -k -w $DIR/stdin
    } &
    READER_PID=$!
    # Close all the pipes if the command fails to start (e.g. syntax error)
    {
        # Kill the sleep when this subshell is killed. Ugh.. bash.
        trap 'kill $(jobs -p)' EXIT
        # Try to write nothing to stdin. This will block until something reads.
        echo -n > $DIR/stdin &
        TEST_PID=$!
        sleep 2.0
        # If the write failed and we're not killed, it probably didn't start
        if [[ -e $DIR/stdin ]] && kill $TEST_PID 2>/dev/null; then
            echo 'xscreen timeout' >&2
            wait $TEST_PID 2>/dev/null
            # Send ^C to clear any half-written command (e.g. no closing braces)
            screen -S $SCREEN_NAME -p0 -X stuff $'\003'
            # Write nothing to output, triggers SIGPIPE
            echo -n 1> $DIR/stdout 2> $DIR/stderr
            # Stop stdin by creating a fake reader and sending SIGPIPE
            cat $DIR/stdin >/dev/null &
            fuser -s -PIPE -k -w $DIR/stdin
        fi
    } &
    CHECKER_PID=$!
    # Start the command (clear line with ^A^K, enter command with redirects, run with Enter ^M)
    screen -S $SCREEN_NAME -p0 -X stuff "$(echo -ne '\001\013') { $* ; echo \$? > $DIR/status ; } <$DIR/stdin 1> >(tee $DIR/stdout) 2> >(tee $DIR/stderr >&2)$(echo -ne '\015')"
    # Forward stdin
    cat > $DIR/stdin
    kill $CHECKER_PID 2>/dev/null && wait $CHECKER_PID 2>/dev/null
    # Just in case stdin is closed early, wait for output to finish
    wait $READER_PID 2>/dev/null
    trap - INT
    return $(cat $DIR/status)
}
if [[ -n $SERVER ]]; then
    ssh $SERVER $SERVERPORT "$(typeset -f xscreen); xscreen $@"
    RET=$?
    if [[ $RET == 124 ]]; then
        echo "To start screen: ssh $SERVER $SERVERPORT \"screen -dmS $1\"" >&2
    fi
    exit $RET
else
    xscreen "$1" "${@:2}"
fi
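Usage might look like this (assuming the script is saved as xscreen.sh and a session named somename already exists on the target):
./xscreen.sh somename 'echo hello world'        # local session
./xscreen.sh user@server:22 somename 'uptime'   # same, but over ssh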

Bash: How to end infinite loop with any key pressed?

I need to write an infinite loop that stops when any key is pressed.
Unfortunately this one loops only when a key is pressed.
Ideas please?
#!/bin/bash
count=0
while : ; do
    # dummy action
    echo -n "$count "
    let "count+=1"
    # detect any key press
    read -n 1 keypress
    echo $keypress
done
echo "Thanks for using this script."
exit 0
You need to put the standard input in non-blocking mode. Here is an example that works:
#!/bin/bash
if [ -t 0 ]; then
    SAVED_STTY="`stty --save`"
    stty -echo -icanon -icrnl time 0 min 0
fi
count=0
keypress=''
while [ "x$keypress" = "x" ]; do
    let count+=1
    echo -ne $count'\r'
    keypress="`cat -v`"
done
if [ -t 0 ]; then stty "$SAVED_STTY"; fi
echo "You pressed '$keypress' after $count loop iterations"
echo "Thanks for using this script."
exit 0
Edit 2014/12/09: Add the -icrnl flag to stty to properly catch the Return key, use cat -v instead of read in order to catch Space.
It is possible that cat reads more than one character if it is fed data fast enough; if not the desired behaviour, replace cat -v with dd bs=1 count=1 status=none | cat -v.
Edit 2019/09/05: Use stty --save to restore the TTY settings.
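To make the restore robust against the script being interrupted, the same stty save/restore pair can be hooked to an EXIT trap (same commands as above, just wrapped):
#!/bin/bash
if [ -t 0 ]; then
    SAVED_STTY="$(stty --save)"
    trap 'stty "$SAVED_STTY"' EXIT   # restore the terminal even on Ctrl-C
    stty -echo -icanon -icrnl time 0 min 0
fi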
read has a number-of-characters parameter, -n, and a timeout parameter, -t, which could be used.
From the bash manual:
-n nchars
read returns after reading nchars characters rather than waiting for a complete line of input, but honors a delimiter if fewer than nchars characters are read before the delimiter.
-t timeout
Cause read to time out and return failure if a complete line of input (or a specified number of characters) is not read within timeout seconds. timeout may be a decimal number with a fractional portion following the decimal point. This option is only effective if read is reading input from a terminal, pipe, or other special file; it has no effect when reading from regular files. If read times out, read saves any partial input read into the specified variable name. If timeout is 0, read returns immediately, without trying to read any data. The exit status is 0 if input is available on the specified file descriptor, non-zero otherwise. The exit status is greater than 128 if the timeout is exceeded.
However, the read builtin uses the terminal which has its own settings. So as other answers have pointed out we need to set the flags for the terminal using stty.
#!/bin/bash
old_tty=$(stty --save)
# Minimum required changes to terminal. Add -echo to avoid output to screen.
stty -icanon min 0
while true ; do
    if read -t 0; then   # Input ready
        read -n 1 char
        echo -e "\nRead: ${char}\n"
        break
    else                 # No input
        echo -n '.'
        sleep 1
    fi
done
stty $old_tty
Usually I don't mind breaking a bash infinite loop with a simple CTRL-C. This is the traditional way for terminating a tail -f for instance.
Pure bash: unattended user input over loop
I've done this without having to play with stty:
loop=true
while $loop; do
    trapKey=
    if IFS= read -d '' -rsn 1 -t .002 str; then
        while IFS= read -d '' -rsn 1 -t .002 chr; do
            str+="$chr"
        done
        case $str in
            $'\E[A') trapKey=UP ;;
            $'\E[B') trapKey=DOWN ;;
            $'\E[C') trapKey=RIGHT ;;
            $'\E[D') trapKey=LEFT ;;
            q | $'\E') loop=false ;;
        esac
    fi
    if [ "$trapKey" ]; then
        printf "\nDoing something with '%s'.\n" $trapKey
    fi
    echo -n .
done
This will:
- loop with a very small footprint (at most 2 milliseconds per read),
- react to the cursor keys left, right, up, and down,
- exit the loop on the Escape key or q.
Here is another solution. It works for any key pressed, including space, enter, arrows, etc.
The original solution tested in bash:
IFS=''
if [ -t 0 ]; then stty -echo -icanon raw time 0 min 0; fi
while [ -z "$key" ]; do
    read key
done
if [ -t 0 ]; then stty sane; fi
An improved solution tested in bash and dash:
if [ -t 0 ]; then
old_tty=$(stty --save)
stty raw -echo min 0
fi
while
    IFS= read -r REPLY
    [ -z "$REPLY" ]
do :; done
if [ -t 0 ]; then stty "$old_tty"; fi
In bash you could even leave out the REPLY variable in the read command, because it is the default variable there.
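That is, the loop condition shrinks to a bare read (a bash-only sketch):
while
    IFS= read -r
    [ -z "$REPLY" ]
do :; done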
I found this forum post and rewrote era's post into this pretty general use format:
# stuff before main function
printf "INIT\n\n"; sleep 2
INIT(){
    starting="MAIN loop starting"; ending="MAIN loop success"
    runMAIN=1; i=1; echo "0"
}; INIT
# exit script when MAIN is done, if ever (in this case counting out 4 seconds)
exitScript(){
    trap - SIGINT SIGTERM    # clear the trap
    kill -- -$$              # Send SIGTERM to child/sub processes
    kill $( jobs -p )        # kill any remaining processes
}; trap exitScript SIGINT SIGTERM    # set trap
MAIN(){
    echo "$starting"
    sleep 1
    echo "$i"; let "i++"
    if (($i > 4)); then printf "\nexiting\n"; exitScript; fi
    echo "$ending"; echo
}
# main loop running in subshell due to the '&' after 'done'
{ while ((runMAIN)); do
    if ! MAIN; then runMAIN=0; fi
done; } &
# --------------------------------------------------
tput smso
# echo "Press any key to return \c"
tput rmso
oldstty=`stty -g`
stty -icanon -echo min 1 time 0
dd bs=1 count=1 >/dev/null 2>&1
stty "$oldstty"
# --------------------------------------------------
# everything after this point will occur after user inputs any key
printf "\nYou pressed a key!\n\nGoodbye!\n"
Run this script
