How can I run a non-blocking bash command and block on its output? - bash

I'm trying to write a script that runs a program, waits for a certain line of output from it, then continues execution (leaving the program running).
My current code never seems to get past the sed; sed never finds its match. The script echoes "Peerflix started." but that's it.
exec 3< <(peerflix $1 -p 8888)
echo "Peerflix started."
sed '/server$/q' <&3
echo 'Matched'

Use pipes!
Use mkfifo to create a named pipe and stream the program's output to it from a background job. Then use your blocking sed to read from that pipe.
Something like this (untested; I don't have peerflix):
mkfifo myfifo
peerflix "$1" -p 8888 > myfifo &
echo "Peerflix started."
sed '/server$/q' myfifo
echo 'Matched'
rm myfifo
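As a self-contained sketch of the same pattern, with a background subshell standing in for peerflix (which I don't have to test against) and a temp directory for the fifo:

```shell
#!/bin/bash
# Work in a temp directory so the fifo name doesn't collide.
tmpdir=$(mktemp -d)
mkfifo "$tmpdir/out.fifo"

# Stand-in for the long-running program: in the real script this would
# be `peerflix "$1" -p 8888 > "$tmpdir/out.fifo" &`.
{ echo "booting"; echo "listening on server"; sleep 2; } > "$tmpdir/out.fifo" &

echo "Program started."

# Blocks here until a line ending in "server" arrives, then continues
# while the background job keeps running.
matched=$(sed '/server$/q' "$tmpdir/out.fifo")
echo "Matched"
```

Note that sed prints every line up to and including the match; redirect its output to /dev/null if you only want the synchronization.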


How can I test whether stdout matches a pattern, but still print it?

I'd like to build something like a log analyzer for a running process. Say I run a server whose stdout is piped into a bash script containing an if statement. If the string "somethings" appears in the output, the script kills the server; otherwise it keeps running and prints the server's output normally.
Example:
./server | if.bash
The contents of if.bash:
if grep 'somethings'; then
    kill app
else
    echo server output
fi
The above code successfully runs the test, but doesn't print the original stdout. How can I ensure that content is still printed?
Read the output in a loop:
while read -r line; do
    if [[ $line =~ something ]]; then
        kill app
        break
    else
        printf '%s\n' "$line"
    fi
done
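As a runnable sketch of that loop used as a filter, with the server simulated by printf and the `kill app` step replaced by a plain `break` (both stand-ins):

```shell
#!/bin/bash
# Pass lines through until the pattern appears, then stop.
# In the real if.bash, the `break` branch would also kill the server.
filter() {
    while read -r line; do
        if [[ $line =~ something ]]; then
            break
        else
            printf '%s\n' "$line"
        fi
    done
}

# Simulated server output.
out=$(printf 'line one\nline two\nsomethings happened\nline four\n' | filter)
echo "$out"
```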
Another option is to use tee when running the script:
./server | tee /dev/tty | if.bash
tee will output the messages on the terminal and also send them to the pipe.

Read full stdin until EOF when stdin comes from `cat` - bash

I'm trying to read full stdin into a variable:
script.sh
#!/bin/bash
input=""
while read line
do
    echo "$line"
    input="$input""\n""$line"
done < /dev/stdin
echo "$input" > /tmp/test
When I run ls | ./script.sh or most other commands, it works fine.
However, it doesn't work when I run cat | ./script.sh, enter my message, and then hit Ctrl-C to exit cat.
Any ideas?
I would stick to the one-liner
input=$(cat)
Of course, Ctrl-D, not Ctrl-C, should be used to signal end-of-file: Ctrl-C sends SIGINT to the whole foreground pipeline, killing your script along with cat, so the script never reaches its final echo.
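For example, with a here-document standing in for the piped input (and noting that command substitution strips trailing newlines):

```shell
#!/bin/bash
# Read all of stdin into a variable in one go.
input=$(cat <<'EOF'
first line
second line
EOF
)
printf '%s\n' "$input"
```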

Copy *unbuffered* stdout to file from within bash script itself

I want to copy stdout to a log file from within a bash script, meaning I don't want to call the script with output piped to tee, I want the script itself to handle it. I've successfully used this answer to accomplish this, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command, and if running the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
    prog="$0"
    stdbuf -oL "$prog" __BUFFERED__ "$@"
else
    shift # discard __BUFFERED__
    exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
    exec 2>&1
    # <rest of script>
    echo "hello"
    sleep 1
    echo "world"
fi
This will mostly work:
if you run the script with ./test, it shows [] hello and [] world unbuffered.
if you run the script with ./test 123 456, it shows [123] hello and [123] world, like you want.
it won't work, however, if you run it with bash test: $0 is then set to test, which is not the path of your script. Fixing this is out of scope for this question, though.
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 > >(tee -a myscript.log) 2>&1
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
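Putting it together as a runnable sketch, logging to a temp file instead of myscript.log, and with a short sleep so the background tee finishes before the log is inspected:

```shell
#!/bin/bash
logfile=$(mktemp)

exec 6>&1                          # save the original stdout on fd 6
exec > >(tee -a "$logfile") 2>&1   # copy stdout and stderr into the log

echo "hello"
echo "world"

exec 1>&6 2>&6 6>&-                # undo: restore stdout/stderr, close fd 6

sleep 1                            # let the background tee flush and exit
```

Without the sed stage in the pipeline, tee writes each line to the log as it arrives.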

Redirect stdin in a script to another process

Say I have a bash script that gets some input via stdin. Now in that script I want to launch another process and have that process receive the same data on its stdin.
#!/bin/bash
echo STDIN | somecommand
Now the "echo STDIN" thing above is obviously bogus, the question is how to do that? I could use read to read each line from stdin, append it into a temp file, then
cat my_temp_file | somecommand
but that is somehow kludgy.
When you write a bash script, the standard input is automatically inherited by any command within it that tries to read it, so, for example, if you have a script myscript.sh containing:
#!/bin/bash
echo "this is my cat"
cat
echo "I'm done catting"
And you type:
$ myscript.sh < myfile
You obtain:
this is my cat
<... contents of my file...>
I'm done catting
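A self-contained check of this inheritance, generating the script on the fly in a temp file (a stand-in for myscript.sh) and piping input into it:

```shell
#!/bin/bash
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
echo "this is my cat"
cat
echo "I'm done catting"
EOF
chmod +x "$script"

# The inner `cat` reads the outer script's own stdin.
out=$(printf 'file contents\n' | "$script")
echo "$out"
```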
Can tee help you?
echo 123 | (tee >( sed s/1/a/ ) >(sed s/3/c/) >/dev/null )
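To see what that tee line does, here is a variant that captures each branch's output in a temp file (the process substitutions run asynchronously, hence the short wait before reading the files):

```shell
#!/bin/bash
tmp=$(mktemp -d)

# Duplicate one input stream to two sed pipelines at once.
echo 123 | tee >(sed s/1/a/ > "$tmp/first") >(sed s/3/c/ > "$tmp/second") > /dev/null

sleep 1   # let the asynchronous process substitutions finish
```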

nice way to kill piped process?

I want to process each line of a command's stdout the moment it is produced. Specifically, I want to grab the output of test.sh (a long-running process). My current approach is this:
./test.sh >tmp.txt &
PID=$!
tail -f tmp.txt | while read line; do
    echo $line
    ps ${PID} > /dev/null
    if [ $? -ne 0 ]; then
        echo "exiting.."
    fi
done
But unfortunately, this will print "exiting.." and then wait, as the tail -f is still running. I tried both break and exit.
I run this on FreeBSD, so I cannot use the --pid= option of some linux tails.
I can use ps and grep to get the pid of the tail and kill it, but thats seems very ugly to me.
Any hints?
Why do you need the tail process at all?
Could you instead do something along the lines of
./test.sh | while read line; do
    # process $line
done
or, if you want to keep the output in tmp.txt:
./test.sh | tee tmp.txt | while read line; do
    # process $line
done
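A sketch of the tee variant with a stand-in producer instead of test.sh (remember that in bash the `while` after a pipe runs in a subshell, so collect results from the loop's output rather than from variables set inside it):

```shell
#!/bin/bash
# Stand-in for ./test.sh.
produce() { echo alpha; echo beta; }

tmp=$(mktemp)

# Process each line as it arrives while keeping a copy in $tmp.
result=$(produce | tee "$tmp" | while read -r line; do
    echo "processed: $line"
done)
echo "$result"
```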
If you still want to use an intermediate tail -f process, maybe you could use a named pipe (fifo) instead of a regular pipe, to allow detaching the tail process and getting its pid:
./test.sh >tmp.txt &
PID=$!
mkfifo tmp.fifo
tail -f tmp.txt >tmp.fifo &
PID_OF_TAIL=$!
while read line; do
    # process $line
    kill -0 ${PID} >/dev/null || kill ${PID_OF_TAIL}
done <tmp.fifo
rm tmp.fifo
I should however mention that this solution has several serious race-condition problems:
the PID of test.sh could be reused by another process;
if the test.sh process is still alive when you read the last line, you won't get another chance to detect its death afterwards, and your loop will hang.
