Bash script redirecting stdin to a program and its output to another program

I'm just learning bash scripting. It's the first time I have to redirect output to another program and I don't know how to do it.
I have to write a script which connects a GUI program with zero, one, or two player programs - I need two players, each of which can be a computer or a human. The GUI gets output from both players (for humans, that means from stdin).
Let's assume there is one human and one comp_player. The human gives a command on stdin; this command has to be redirected to the running GUI program and to the running comp_player, both of which expect input. Then comp_player's output has to be redirected to the GUI (if there were a second computer player, this output would also have to be redirected to the second computer player's input). The turn ends.
I know how to create a file to read and write and redirect input or output from it. For example:
exec 3<>sometextfile
echo "anything" >&3
read line <&3
echo $line
But what I don't know is how to redirect, for example, a line I just read to a running program that expects input, and how to capture its output, which I can then redirect to the GUI and the other program.
I know it isn't as simple as the code above, and that I have to use something called named pipes, but I tried to read some tutorials and failed to write a working script.
Can you give me an example of a fragment of a script which, say:
(gui program and computer player program are running)
-reads line from stdin
-"sends" the line to gui program's and comp_player's inputs
-"reads" output from comp_player and writes it to stdout and also "sends" it to gui input

Named pipes are a special kind of file used to connect the input and output of two completely separate programs. Think of one as a temporary buffer, or an array that's shared between two programs that don't know about each other. This makes them an awesome tool to share messages between two programs and get them to communicate very effectively.
As a simple test to see how a named pipe works, open two terminals in the same directory, and type mkfifo mypipe in the first one to create the file. Now, to use it, just write something to it, for example: echo "A very important message" > mypipe
Now the message is waiting in the pipe; you will see the terminal is blocked, as if the echo hadn't finished. Go to the second terminal and get the contents of the pipe using: cat mypipe
You will print out the "very important message" you stored in the pipe from the first terminal. Notice the pipe is empty now, and you simply can't get the message again from it.
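The same exchange can be scripted in a single file; a minimal sketch (the reader is backgrounded so one script can play both sides):
mkfifo mypipe
cat mypipe &                               # consumer: blocks until data arrives
echo "A very important message" > mypipe   # producer: unblocks the reader
wait                                       # let the background cat print and exit
rm mypipe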
Now that you know how named pipes work, here's a very simple example of how three players would communicate. Notice that we can't use a single file for all of them; instead we will create separate pipes to connect player1 and player2, player1 and the gui, and player2 and the gui. I'm guessing the gui program is written in another language, but I will leave that to you.
PLAYER 1 (HUMAN)
player2pipe="pipe1"
guipipe="pipe2"
# First make sure we have our pipe files
if [ ! -p "$player2pipe" ]; then
    mkfifo "$player2pipe"
fi
if [ ! -p "$guipipe" ]; then
    mkfifo "$guipipe"
fi
while true; do # Or until the game ends
    echo -n "Do something: "
    read move
    # Send our move to the other two programs (quoted, in case it has spaces)
    echo "$move" > "$player2pipe"
    echo "$move" > "$guipipe"
    # Read the other player's move from the pipe file. Execution will pause
    # until there's something to read.
    playermove=$(cat "$player2pipe")
    # Do something about that move here
done
PLAYER2 (COMPUTER)
player1pipe="pipe1"
guipipe="pipe3"
if [ ! -p "$player1pipe" ]; then
    mkfifo "$player1pipe"
fi
if [ ! -p "$guipipe" ]; then
    mkfifo "$guipipe"
fi
while true; do
    playermove=$(cat "$player1pipe")
    # Do something about that move here
    move="A very good move made by a computer" # Obviously you will have to generate a real move
    echo "$move" > "$player1pipe"
    echo "$move" > "$guipipe"
done
GUI
player1pipe="pipe2"
player2pipe="pipe3"
if [ ! -p "$player1pipe" ]; then
    mkfifo "$player1pipe"
fi
if [ ! -p "$player2pipe" ]; then
    mkfifo "$player2pipe"
fi
while true; do # Or until the game ends
    # Read the players' moves from the pipe files. Notice the order here: if
    # player2 moved before player1, execution would block until player1's pipe
    # is emptied.
    player1move=$(cat "$player1pipe")
    player2move=$(cat "$player2pipe")
    # Print out their moves or whatever you need to do with them.
    echo "$player1move"
    echo "$player2move"
    # Do whatever else you need to do about those moves
done
Save the three files in the same directory and execute them from three different terminals to see how they work.
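Condensed to exactly the fragment you asked for, one human turn could look like the sketch below. The pipe names to_gui, to_comp and from_comp are made up here; they stand for whatever FIFOs your gui and comp_player are already reading and writing.
# One human turn - to_gui, to_comp and from_comp are hypothetical FIFOs
read -r line                     # read the human's move from stdin
echo "$line" > to_gui            # "send" it to the gui's input
echo "$line" > to_comp           # "send" it to comp_player's input
compmove=$(cat from_comp)        # wait for comp_player's answering move
echo "$compmove"                 # write it to stdout
echo "$compmove" > to_gui        # and "send" it to the gui as well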
Hope I helped.

Related

How can I start a subscript within a perpetually running bash script after a specific string has been printed in the terminal output?

Specifics:
I'm trying to build a bash script which needs to do a couple of things.
Firstly, it needs to run a third party script that I cannot manipulate. This script will build a project and then start a node server which outputs data to the terminal continually. This process needs to continue indefinitely so I can't have any exit codes.
Secondly, I need to wait for a specific line of output from the first script, namely 'Started your app.'.
Once that line has been output to the terminal, I need to launch a separate set of commands, either from another subscript or from an if or while block, which will change a few lines of code in the project that was built by the first script to resolve some dependencies for a later step.
So, how can I capture the output of the first subscript and use it to run another set of commands when a particular line is output to the terminal, all while the first script keeps running in the terminal, without using timers, and without creating a huge file from the output of subscript1 (since it will run indefinitely)?
Pseudo-code:
#!/usr/bin/env bash
# This script needs to stay running & will output to the terminal (at some point)
# a string that we need to wait/watch for to launch subscript2
sh subscript1
# This can't run until subscript1 has output a particular string to the terminal
# This could be another script, or an if or while block
sh subscript2
I have been beating my head against my desk for hours trying to get this to work. Any help would be appreciated!
I think this is a bad idea (much better to have subscript1 changed to be automation-friendly), but in theory you can write:
sh subscript1 \
| {
    while IFS= read -r line ; do
        printf '%s\n' "$line"    # pass each line through to the terminal
        if [[ "$line" = 'Started your app.' ]] ; then
            sh subscript2 &      # launch the second script once, in the background
            break
        fi
    done
    cat    # after the match, keep forwarding output without the per-line loop
}
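If you need this in more than one place, the same watch-and-trigger idea can be wrapped in a function; a sketch under the same assumptions (the name run_after_match is made up here):
# hypothetical helper - usage: producer | run_after_match 'needle' command...
run_after_match() {
    local needle=$1; shift
    while IFS= read -r line; do
        printf '%s\n' "$line"
        if [[ "$line" == "$needle" ]]; then
            "$@" &       # run the trigger command once, in the background
            break
        fi
    done
    cat                  # keep forwarding the producer's remaining output
}
sh subscript1 | run_after_match 'Started your app.' sh subscript2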

Background process appears to hang

Editor's note: The OP is ultimately looking to package the code from this answer
as a script. Said code creates a stay-open FIFO from which a background command reads data to process as it arrives.
It works if I type it in the terminal, but it won't work if I enter those commands in a script file and run it.
#!/bin/bash
cat >a&
pid=$!
it seems that the program is stuck at cat>a&
$pid has no value after running the script, but the cat process seems to exist.
cdarke's answer contains the crucial pointer: your script mustn't run in a child process, so you have to source it.
Based on the question you linked to, it sounds like you're trying to do the following:
Open a FIFO (named pipe).
Keep that FIFO open indefinitely.
Make a background command read from that FIFO whenever new data is sent to it.
See bottom for a working solution.
As for an explanation of your symptoms:
Running your script NOT sourced (NOT with .) means that the script runs in a child process, which has the following implications:
Variables defined in the script are only visible inside that script, and the variables cease to exist altogether when the script finishes running.
That's why you didn't see the script's $pid variable after running the script.
When the script finishes running, its background tasks (cat >a&) are killed (as cdarke explains, the SIGHUP signal is sent to them; any process that doesn't explicitly trap that signal is terminated).
This contradicts your claim that the cat process continues to exist, but my guess is that you mistook an interactively started cat process for one started by a script.
By contrast, any FIFO created by your script (with mkfifo) does persist after the script exits (a FIFO behaves like a file - it persists until you explicitly delete it).
However, when you write to that FIFO without another process reading from it, the writing command will block and thus appear to hang (the writing process blocks until another process reads the data from the FIFO).
That's probably what happened in your case: because the script's background processes were killed, no one was reading from the FIFO, causing an attempt to write to it to block. You incorrectly surmised that it was the cat >a& command that was getting "stuck".
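You can reproduce that write-side blocking in isolation; a two-line sketch:
mkfifo f
echo hi > f   # blocks here until some other process opens f for reading (e.g. cat f)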
The following script, when sourced, adds functions to the current shell for setting up and cleaning up a stay-open FIFO with a background command that processes data as it arrives. Save it as file bgfifo_funcs:
#!/usr/bin/env bash
[[ $0 != "$BASH_SOURCE" ]] || { echo "ERROR: This script must be SOURCED." >&2; exit 2; }
# Set up a background FIFO with a command listening for input.
# E.g.:
# bgfifo_setup bgfifo "sed 's/^/# /'"
# echo 'hi' > bgfifo # -> '# hi'
# bgfifo_cleanup
bgfifo_setup() {
  (( $# == 2 )) || { echo "ERROR: usage: bgfifo_setup <fifo-file> <command>" >&2; return 2; }
  local fifoFile=$1 cmd=$2
  # Create the FIFO file.
  mkfifo "$fifoFile" || return
  # Use a dummy background command that keeps the FIFO *open*.
  # Without this, it would be closed after the first time you write to it.
  # NOTE: This call inevitably outputs a job control message that looks
  # something like this:
  #     [1]+  Stopped   cat > ...
  { cat > "$fifoFile" & } 2>/dev/null
  # Note: The keep-the-FIFO-open `cat` PID is the only one we need to save
  # for later cleanup.
  # The background processing command launched below will terminate
  # automatically when the FIFO is closed, i.e., when the `cat` process is
  # killed.
  __bgfifo_pid=$!
  # Now launch the actual background command that should read from the FIFO
  # whenever data is sent.
  { eval "$cmd" < "$fifoFile" & } 2>/dev/null || return
  # Save the *full* path of the FIFO file in a global variable for reliable
  # cleanup later.
  __bgfifo_file=$fifoFile
  [[ $__bgfifo_file == /* ]] || __bgfifo_file="$PWD/$__bgfifo_file"
  echo "FIFO '$fifoFile' set up, awaiting input for: $cmd"
  echo "(Ignore the '[1]+ Stopped ...' message below.)"
}
# Cleanup function that you must call when done, to remove
# the FIFO file and kill the background commands.
bgfifo_cleanup() {
  [[ -n $__bgfifo_file ]] || { echo "(Nothing to clean up.)"; return 0; }
  echo "Removing FIFO '$__bgfifo_file' and terminating associated background processes..."
  rm "$__bgfifo_file"
  kill $__bgfifo_pid # Note: We let the job control messages display.
  unset __bgfifo_file __bgfifo_pid
  return 0
}
Then, source script bgfifo_funcs, using the . shell builtin:
. bgfifo_funcs
Sourcing executes the script in the current shell (rather than in a child process that terminates after the script has run), and thus makes the script's functions and variables available to the current shell. Functions by definition run in the current shell, so any background commands started from functions stay alive.
Now you can set up a stay-open FIFO with a background process that processes input as it arrives as follows:
# Set up FIFO 'bgfifo' in the current dir. and process lines sent to it
# with a sample Sed command that simply prepends '# ' to every line.
$ bgfifo_setup bgfifo "sed 's/^/# /'"
# Send sample data to the FIFO.
$ echo 'Hi.' > bgfifo
# Hi.
# ...
$ echo 'Hi again.' > bgfifo
# Hi again.
# ...
# Clean up when done.
$ bgfifo_cleanup
The reason that cat >a "hangs" is because it is reading from the standard input stream (stdin, file descriptor zero), which defaults to the keyboard.
Adding the & causes it to run in the background, which disconnects it from the keyboard. Normally that would leave a suspended job in the background but, since you exit your script, its background tasks are killed (a SIGHUP signal is sent to them).
EDIT: although I followed the link in the question, it was not stated originally that the OP was actually using a FIFO at that stage. So thanks to @mklement0.
I don't understand what you are trying to do here, but I suspect you need to run it as a "sourced" file, as follows:
. gash.sh
Where gash.sh is the name of your script. Note the preceding .
You need to specify a file with "cat":
#!/bin/bash
cat SOMEFILE >a &
pid=$!
echo PID $pid
Although that seems a bit silly - why not just "cp" the file (cp SOMEFILE a)?
Q: What exactly are you trying to accomplish?

Bash: How to redirect stdin, stdout and stderr to a log file for an install log

I currently have a bunch of installer scripts which log stderr/stdout to a log file, which works well, but I also need to redirect stdin (the user responses) to the same log file. The install scripts sometimes call functions in a shared library (an include), which may also read user input. I thought about adding a custom read function, but this would require altering the shared library, and I wondered if there's a way to do this from the calling script.
At the moment the scripts are similar to this:
#!/usr/bin/bash
. ./libInstall
INSTALL_LOG="./install.log"
( (
echo "INFO: Installing..."
# Run some arbitrary commands...
# Read some input...
read ANSWER1
read ANSWER2
# Call function in libInstall which will prompt the user...
funcWhichAsksAQuestion ANSWER3
echo "INFO: Installation Complete"
) 2>&1 ) | tee -a "${INSTALL_LOG}"
If I change "( (" to reflect the line below I can tee off stdin to the log file:
cat - 2> /dev/null | tee -a ${INSTALL_LOG} | ( (
This works but requires two carriage returns once the script ends, presumably because the pipe is broken.
It's almost there, but I'd like it to work without having to press Enter twice at the end to get back to the shell prompt.
These scripts have to be fairly portable to work on RHEL >=5, AIX >=5.1, Solaris >=9 with the lowest bash version being v2.05 I believe.
Any ideas how I can achieve this?
Thanks
Why not just add 'printf "\n\n"' after your "installation complete" line? Granted, you'll have two extra lines in your log file, but those seem relatively harmless.
I believe you have to press return twice because of how tee is implemented. It "uses" one return by itself, and the others come from the 'read' calls (well, one from read, one from funcWhichAsksAQuestion).
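For what it's worth, on a bash new enough for process substitution (which your portability list may rule out on some hosts), one approach is to send stdout/stderr through tee and wrap your own read calls; a hedged sketch (logged_read is a made-up name, and it does not cover the reads inside libInstall):
#!/usr/bin/bash
INSTALL_LOG="./install.log"
exec > >(tee -a "${INSTALL_LOG}") 2>&1        # stdout+stderr to screen and log
logged_read() {                               # hypothetical wrapper around read
    IFS= read -r "$1"                         # read into the named variable
    printf '%s\n' "${!1}" >> "${INSTALL_LOG}" # append the typed answer to the log
}
echo "INFO: Installing..."
logged_read ANSWER1
logged_read ANSWER2
echo "INFO: Installation Complete"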

How can I have output from one named pipe fed back into another named pipe?

I'm adding some custom logging functionality to a bash script, and can't figure out why it won't take the output from one named pipe and feed it back into another named pipe.
Here is a basic version of the script (http://pastebin.com/RMt1FYPc):
#!/bin/bash
PROGNAME=$(basename "$(readlink -f "$0")")
LOG="$PROGNAME.log"
PIPE_LOG="$PROGNAME-$$-log"
PIPE_ECHO="$PROGNAME-$$-echo"
# program output to log file and optionally echo to screen (if $1 is "-e")
log () {
    if [ "$1" = '-e' ]; then
        shift
        "$@" > $PIPE_ECHO 2>&1    # (note: "$@" runs the passed command)
    else
        "$@" > $PIPE_LOG 2>&1
    fi
}
# create named pipes if not exist
if [[ ! -p $PIPE_LOG ]]; then
    mkfifo -m 600 $PIPE_LOG
fi
if [[ ! -p $PIPE_ECHO ]]; then
    mkfifo -m 600 $PIPE_ECHO
fi
# cat pipe data to log file
while read data; do
    echo -e "$PROGNAME: $data" >> $LOG
done < $PIPE_LOG &
# cat pipe data to log file & echo output to screen
while read data; do
    echo -e "$PROGNAME: $data"
    log echo $data                      # this doesn't work
    echo -e $data > $PIPE_LOG 2>&1      # and neither does this
    echo -e "$PROGNAME: $data" >> $LOG  # so I have to do this
done < $PIPE_ECHO &
# clean up temp files & pipes
clean_up () {
    # remove named pipes
    rm -f $PIPE_LOG
    rm -f $PIPE_ECHO
}
# execute "clean_up" on exit
trap "clean_up" EXIT
log echo "Log File Only"
log -e echo "Echo & Log File"
I thought the two commands marked "# this doesn't work" and "# and neither does this" would take the $data from $PIPE_ECHO and output it to $PIPE_LOG. But it doesn't work. Instead I have to send that output directly to the log file, without going through $PIPE_LOG.
Why is this not working as I expect?
EDIT: I changed the shebang to "bash". The problem is the same, though.
SOLUTION: A.H.'s answer helped me understand that I wasn't using named pipes correctly. I have since solved my problem by not even using named pipes. That solution is here: http://pastebin.com/VFLjZpC3
It seems to me you do not understand what a named pipe really is. A named pipe is not one stream like a normal pipe. It is a series of normal pipes, because a named pipe can be closed, and a close on the producer side might show up as a close on the consumer side.
The "might" part is this: the consumer will read data until there is no more data. "No more data" means that, at the time of the read call, no producer has the named pipe open. This means that multiple producers can feed one consumer only if there is no point in time without at least one producer. Think of it as a door which closes automatically: if there is a steady stream of people keeping the door open, either by handing the doorknob to the next person or by squeezing multiple people through at the same time, the door stays open. But once the door is closed, it stays closed.
A little demonstration should make the difference a little clearer:
Open three shells. First shell:
1> mkfifo xxx
1> cat xxx
no output is shown because cat has opened the named pipe and is waiting for data.
Second shell:
2> cat > xxx
no output, because this cat is a producer which keeps the named pipe open until we explicitly tell it to close.
Third shell:
3> echo Hello > xxx
3>
This producer immediately returns.
First shell:
Hello
The consumer received the data, wrote it and - since one producer still keeps the door open - continues to wait.
Third shell
3> echo World > xxx
3>
First shell:
World
The consumer received the data, wrote it and - since one producer still keeps the door open - continues to wait.
Second Shell: write into the cat > xxx window:
And good bye!
(control-d key)
2>
First shell
And good bye!
1>
The ^D key closed the last producer, the cat > xxx, and hence the consumer exits as well.
For your case this means:
Your log function will try to open and close the pipes multiple times. Not a good idea.
Both your while loops exit earlier than you think. (Check this with ( while ... done < $PIPE_X; echo FINISHED ) &.)
Depending on the scheduling of your various producers and consumers, the door might slam shut sometimes and sometimes not - you have a race condition built in. (For testing you can add a sleep 1 at the end of the log function.)
Your test cases only try each possibility once - try to use them multiple times (you will block, especially with the sleeps), because your producer might not find any consumer.
So I can explain the problems in your code but I cannot tell you a solution because it is unclear what the edges of your requirements are.
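That said, if the requirement boils down to "one long-lived logging consumer, many short writes", a common pattern is to let the script itself hold a write end open, so the door never slams shut; a hedged sketch:
# Sketch: a single stay-open logging FIFO. The shell keeps fd 3 open for
# writing, so the background read loop never sees EOF between messages.
mkfifo logpipe
while IFS= read -r data; do
    echo "log: $data" >> program.log
done < logpipe &          # blocks on open until a writer appears
exec 3> logpipe           # hold a write end open for the whole script
echo "first message"  >&3
echo "second message" >&3
exec 3>&-                 # closing the last write end ends the read loop
wait
rm logpipe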
It seems the problem is in the "cat pipe data to log file" part.
Let's see: you use a "&" to put the loop in the background; I guess you mean it to run in parallel with the second loop.
But the problem is you don't even need the "&", because as soon as no more data is available in the fifo, the while..read stops (still, you've got to have some data at first for the first read to work). The next read doesn't hang if no more data is available (which would pose another problem: how does your program stop?).
I guess the while read checks if more data is available in the file before doing the read, and stops if that's not the case.
You can check with this sample:
mkfifo foo
while read data; do echo $data; done < foo
This script will hang, until you write anything from another shell (or bg the first one). But it ends as soon as a read works.
Edit:
I've tested on RHEL 6.2 and it works as you say (i.e., badly!).
The problem is that, after running the script (let's say script "a"), you've got an "a" process remaining. So, yes, in some way the script hangs as I wrote before (not such a stupid answer as I thought then :) ). Except if you write only one log (be it log-file-only or echo; in that case it works).
(It's the read loop from PIPE_ECHO that hangs when writing to PIPE_LOG and leaves a process running each time.)
I've added a few debug messages, and here is what I see:
only one line is read from PIPE_LOG and after that, the loop ends
then a second message is sent to PIPE_LOG (after being received from PIPE_ECHO), but the process no longer reads from PIPE_LOG => the write hangs.
When you ls -l /proc/[pid]/fd, you can see that the fifo is still open (but deleted).
In fact, the script exits and removes the fifos, but there is still one process using them.
If you don't remove the log fifo at cleanup and cat it, it will free the hanging process.
Hope it will help...

How to extend bash shell?

I would like to add new functionality to the bash shell. I need to have a queue for executions.
What is the easy way to add new functionality to the bash shell keeping all native functions?
I would like to process the command line, then let bash execute it. For users it should be transparent.
Thanks Arman
EDIT
I just discovered prll.sourceforge.net - it does exactly what I need.
It's easier than it seems:
#!/bin/sh
yourfunctiona(){ ...; }
...
yourfunctionz(){ ...; }
. /path/to/file/with/more/functions
while read COMMANDS; do
    eval "$COMMANDS"
done
You can use read -p if you need a prompt, or -t if you want the read to time out. A quick sketch of both (the prompt text and 60-second timeout are arbitrary):
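while read -r -p "> " -t 60 COMMANDS; do
    eval "$COMMANDS"
done
Or, if you want, you can even use your favorite dialog program in place of read and pipe the output to a tailbox: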
touch /tmp/mycmdline
Xdialog --tailbox /tmp/mycmdline 0 0 &
COMMANDS="echo "
while [ "$COMMANDS" != "" ]; do
    COMMANDS=$(Xdialog --stdout --inputbox "Text here" 0 0)
    eval "$COMMANDS"
done >>/tmp/mycmdline &
To execute commands in threads, you can use the following in place of the eval "$COMMANDS" line:
# this will need to be before the loop
NUMCORES=$(awk '/cpu cores/{sum += $4}END{print sum}' /proc/cpuinfo)
for ((i = 1; i <= NUMCORES; i++)); do   # note: {1..$NUMCORES} would not expand the variable
    if [ -d "/proc/${threadarray[$i]:-none}" ]; then   # this core already has a thread
        # note: each process gets a directory named /proc/<its_pid> - hacky, but works
        # (the :-none default makes the test fail for a slot that was never used;
        #  if $i equals $NUMCORES here, every slot is busy - see the comments below)
        continue
    else   # this core is free
        $COMMAND &
        threadarray[$i]=$!
        break
    fi
done
Then there is the case where you fill up all threads.
You can either put the whole thing in a while loop and add continues and breaks, or you can pick a core to wait for (probably the last) and wait for it.
To wait for a single thread to complete, use:
wait "${threadarray[$i]}"
To wait for all threads to complete, use:
wait
# I ended up using this to keep my load from getting too high for too long
Another note: you may find that some commands don't like to be threaded; if so, you can put the whole thing in a case statement.
I'll try to do some cleanup on this soon to put all of the little blocks together (sorry, I'm cobbling this together from random notes that I used to implement this exact thing, but can't seem to find them).
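In the meantime, here is one hedged way the blocks above could fit together - a minimal command-queue sketch, not a tested implementation (the variable names follow the snippets above):
#!/bin/bash
# Read commands and run each one in the first free per-core slot; when all
# slots are busy, wait on the last slot before trying again.
NUMCORES=$(awk '/cpu cores/{sum += $4}END{print sum}' /proc/cpuinfo)
declare -a threadarray
while IFS= read -r -p "queue> " COMMAND; do
    [ -z "$COMMAND" ] && continue
    placed=0
    while [ "$placed" -eq 0 ]; do
        for ((i = 1; i <= NUMCORES; i++)); do
            if [ ! -d "/proc/${threadarray[$i]:-none}" ]; then   # free slot
                eval "$COMMAND" &
                threadarray[$i]=$!
                placed=1
                break
            fi
        done
        # all slots busy: wait for the last one to finish, then rescan
        [ "$placed" -eq 0 ] && wait "${threadarray[$NUMCORES]}"
    done
done
wait   # let any remaining jobs finish before exiting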
