How can I test whether stdout matches a pattern, but still print it? - bash

I would like to do something like a log analyzer for a running process. Say I run a server whose stdout passes through a pipe to a bash script containing an if statement: if the string "somethings" appears in the output, the script kills the server; otherwise it prints stdout as usual and keeps running.
Example:
./server | if.bash
The contents of if.bash:
if grep 'somethings'; then
     kill app
else
     echo server output
fi
The above code successfully runs the test, but doesn't print the original stdout. How can I ensure that content is still printed?

Read the output in a loop:
while IFS= read -r line; do
    if [[ $line =~ something ]]; then
        kill app
        break
    else
        printf '%s\n' "$line"
    fi
done
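As a sanity check, the loop can be exercised with a simulated server: here printf stands in for ./server, and break stands in for the kill.

```shell
# Simulated server output: two normal lines, the trigger line, then more.
out=$(printf 'line1\nline2\nsomething bad\nline3\n' | {
    while IFS= read -r line; do
        if [[ $line =~ something ]]; then
            break    # in the real script: kill the server here
        else
            printf '%s\n' "$line"
        fi
    done
})
printf '%s\n' "$out"   # line1 and line2; the trigger line and everything after are suppressed
```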
Another option is to use tee when running the script:
./server | tee /dev/tty | if.bash
tee will output the messages on the terminal and also send them to the pipe.
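In a non-interactive test, a temp file can stand in for /dev/tty to show that tee keeps a visible copy while the pipe still sees every line (the grep here plays the role of if.bash's test):

```shell
log=$(mktemp)
# tee duplicates the stream: one copy to $log, one down the pipe to grep
printf 'ok\nsomething here\n' | tee "$log" | grep -q 'something' && verdict=matched
seen=$(cat "$log")
rm -f "$log"
echo "$verdict"   # prints: matched
```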

Related

How to monitor the stdout of a command with a timer?

I'd like to know when an application hasn't printed a line to stdout for N seconds.
Here is a reproducible example:
#!/bin/bash
dmesg -w | {
    while IFS= read -t 3 -r line; do
        echo "$line"
    done
    echo "NO NEW LINE"
}
echo "END"
I can see the NO NEW LINE, but the pipeline doesn't stop and bash doesn't continue: END is never displayed.
How to exit from the braces' code?
Source: https://unix.stackexchange.com/questions/117501/in-bash-script-how-to-capture-stdout-line-by-line
Not all commands exit when they can't write to their output or when they receive SIGPIPE, and even those that do won't exit until they next try to write and notice the reader is gone; here bash keeps waiting on dmesg -w, which never exits. Instead, run the command in the background. If the intention is not to wait on the process, in bash you can simply use process substitution:
{
    while IFS= read -t 3 -r line; do
        printf '%s\n' "$line"
    done
    echo "end"
} < <(dmesg -w)
You could also use a coprocess, or just run the command in the background with a pipe and kill it when you are done with it.
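The process-substitution fix is easy to simulate: below, printf plus sleep stands in for dmesg -w. The loop drains the two lines, read -t 1 then times out, and END is reached even though the producer is still sleeping, because bash does not wait on a process substitution.

```shell
out=$(
    {
        while IFS= read -t 1 -r line; do
            printf 'got: %s\n' "$line"
        done
        echo "NO NEW LINE"
    } < <(printf 'a\nb\n'; sleep 3)   # stand-in for dmesg -w: two lines, then silence
    echo "END"
)
printf '%s\n' "$out"
```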

Kill dbus monitor script when application exits?

I am using a simple dbus-monitor script for gnote. The script starts when gnote starts; I modified the Exec parameter of the desktop file to achieve this.
The problem is that I haven't found any way to kill my script after the application (i.e. gnote) exits. If the application itself exits, there is no point keeping the script running in the background, as it is not going to fetch any output.
The script looks like this:
#!/bin/bash
OBJECT="'org.gnome.Gnote'"
IFACE="'org.gnome.Gnote.RemoteControl'"
DPATH="'/org/gnome/Gnote/RemoteControl'"
echo "$IFACE"
WATCH1="type='signal',sender=${OBJECT},interface=${IFACE},path=${DPATH},member='NoteAdded'"
WATCH2="type='signal',sender=${OBJECT},interface=${IFACE},path=${DPATH},member='NoteSaved'"
WATCH3="type='signal',sender=${OBJECT},interface=${IFACE},path=${DPATH},member='NoteDeleted'"
dbus-monitor "${WATCH2}" |
while read -r LINE; do
    echo "$LINE" | grep "note://"
done
I tried to modify it like this:
dbus-monitor "${WATCH2}" |
while read -r LINE; do
    echo "$LINE" | grep "note://"
    if pgrep "gnote" > /dev/null; then
        echo ""
    else
        break
    fi
done
pid=$(pidof -x "$(basename "$0")")
kill "$pid"
But it didn't work. I also tried using trap as explained in this question, but without success.
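A likely reason the modified loop doesn't help is that read blocks until dbus-monitor emits a line, so the pgrep check only runs after a signal arrives. One structure that avoids this (not from the original thread; a generic sketch in which sleep commands stand in for dbus-monitor and gnote, and which assumes the script can launch or otherwise wait on the application itself) is to background the monitor and block on the application instead:

```shell
sleep 100 &            # stand-in for: dbus-monitor ... | while read ...; done
monitor_pid=$!
sleep 1 &              # stand-in for gnote; if you can't launch it, poll pgrep in a loop instead
app_pid=$!
wait "$app_pid"                       # returns as soon as the application exits
kill "$monitor_pid"                   # then tear the monitor down
wait "$monitor_pid" 2>/dev/null || true   # reap it; its exit status is non-zero after kill
echo "monitor stopped"
```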

Read full stdin until EOF when stdin comes from `cat` bash

I'm trying to read full stdin into a variable :
script.sh
#!/bin/bash
input=""
while read -r line; do
    echo "$line"
    input="$input""\n""$line"
done < /dev/stdin
echo "$input" > /tmp/test
When I run ls | ./script.sh or mostly any other commands, it works fine.
However, it doesn't work when I run cat | ./script.sh, enter my message, and then hit Ctrl-C to exit cat.
Any ideas ?
I would stick to the one-liner
input=$(cat)
Of course, Ctrl-D should be used to signal end-of-file; Ctrl-C sends SIGINT to the whole foreground pipeline, your script included, killing it before it can write anything.
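The one-liner is easy to check non-interactively, with printf playing the part of the typed input. Note that command substitution strips trailing newlines but keeps interior ones:

```shell
# The inner block is the whole script: capture stdin, then use it.
out=$(printf 'hello\nworld\n' | { input=$(cat); printf '<%s>' "$input"; })
printf '%s\n' "$out"
```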

How can I run a non-blocking bash command and block on its output?

I'm trying to write a script that will run a program, wait for a certain output from it, then continue execution (and leave the program running).
My current code doesn't seem to ever output anything; the sed never finds its match.
This echoes "Peerflix started", but that's it.
exec 3< <(peerflix "$1" -p 8888)
echo "Peerflix started."
sed '/server$/q' <&3
echo 'Matched'
Use pipes!
Use mkfifo to create a named pipe and stream the program's output into it from a non-blocking background command. Then use your blocking sed to read from that pipe.
Something like this (untested; I don't have peerflix):
mkfifo myfifo
peerflix "$1" -p 8888 > myfifo &
echo "Peerflix started."
sed '/server$/q' myfifo
echo 'Matched'
rm myfifo
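The same mechanics can be tested without peerflix. Here a background block stands in for the server, printing the readiness line ("server") after a short delay and then staying alive until it is killed:

```shell
fifo=$(mktemp -u)                    # unique path for the named pipe
mkfifo "$fifo"
# Stand-in for peerflix: one line, a pause, the line sed waits for, then keep running.
# exec replaces the subshell with sleep so that kill reaches the lingering process.
{ echo starting; sleep 1; echo server; exec sleep 5; } > "$fifo" &
producer=$!
seen=$(sed '/server$/q' "$fifo")     # blocks until a line ending in "server" arrives
echo 'Matched'
kill "$producer" 2>/dev/null         # the stand-in server is still running; stop it
rm -f "$fifo"
```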

Append text to stderr redirects in bash

Right now I'm using exec to redirect stderr to an error log with
exec 2>> "${errorLog}"
The only downside is that I have to start each run with a timestamp, since exec just pushes the text straight into the log file. Is there a way to redirect stderr but still let me prepend text to each entry, such as a timestamp?
This is very interesting. I've asked a guy who knows bash quite well, and he told me this way:
foo() { while IFS='' read -r line; do echo "$(date) $line" >> file.txt; done; }
First, that creates a function that reads one line of raw input from stdin; the empty IFS assignment keeps it from stripping surrounding blanks. Having read a line, it writes it out with the date prepended. Then you have to tell bash to redirect stderr into that function:
exec 2> >(foo)
Everything you write to stderr will now go through the foo function. Note that when you do this in an interactive shell, you won't see the prompt anymore, because it's printed to stderr and the read in foo is line-buffered :)
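A self-contained variant of the same idea (a fixed "[ts]" tag stands in for the date call so the output is predictable, and the work happens in a subshell so the redirection doesn't disturb the surrounding shell):

```shell
log=$(mktemp)
stamp() { while IFS='' read -r line; do echo "[ts] $line"; done; }
# Route the subshell's stderr through stamp into the log, emit one error, close fd 2.
( exec 2> >(stamp >> "$log"); echo "disk on fire" >&2; exec 2>&- )
# The stamper runs asynchronously; poll briefly until it has flushed.
for _ in 1 2 3 4 5; do grep -q 'disk on fire' "$log" && break; sleep 1; done
entry=$(cat "$log")
rm -f "$log"
echo "$entry"   # prints: [ts] disk on fire
```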
You could simply use:
exec 1> >( sed "s/^/$(date '+[%F %T]'): /" | tee -a "${LOGFILE}" ) 2>&1
This will not completely solve your problem with the prompt not being shown (it will appear after a short delay, not in real time, since the pipe buffers some data), but it displays the output 1:1 on stdout as well as in the file. Note also that the $(date ...) inside the sed expression is expanded once, when exec runs, so every line gets that same timestamp.
The only problem I could not solve is doing this from a function, since calling one there opens a subshell in which the exec is useless for the main program...
This example redirects stdout and stderr without losing the original stdout and stderr. Errors in the stdout handler are also logged to the stderr handler. The file descriptors are saved in variables and closed in the child processes; bash takes care that no collisions occur.
#!/bin/bash

stamp ()
{
    local LINE
    while IFS='' read -r LINE; do
        echo "$(date '+%Y-%m-%d %H:%M:%S,%N %z') $$ $LINE"
    done
}

exec {STDOUT}>&1
exec {STDERR}>&2
exec 2> >(exec {STDOUT}>&-; exec {STDERR}>&-; exec &>> stderr.log; stamp)
exec > >(exec {STDOUT}>&-; exec {STDERR}>&-; exec >> stdout.log; stamp)

for n in $(seq 3); do
    echo loop $n >&$STDOUT
    echo o$n
    echo e$n >&2
done
This requires a current bash version, but thanks to Shellshock one can rely on that nowadays.
cat q23123 2> tmp_file; sed "s/^/$(date '+[%F %T]'): /" tmp_file >> output.log; rm -f tmp_file
(Here, too, the timestamp is expanded once, when sed starts, so all lines share it.)
