How to continuously monitor output of shell command? Should I use Bash or Ruby? - ruby

I am using the ios_webkit_debug_proxy remote server, which often shuts down or just disconnects.
I want to restart the server if the last line of output contains "Disconnected" or if the command is not running at all.

The problem may be that the output is buffered. You may have luck with the utility stdbuf, which can disable the buffering (another such tool is unbuffer). You can fully disable all buffers with:
stdbuf -i0 -o0 -e0 [command] # 0 is unbuffered and L is line-buffered
Your command might look like this:
stdbuf -oL -eL ios_webkit_debug_proxy |& tee -a proxy.log
tail -f -n0 proxy.log | grep --line-buffered "Disconnected" | while read line ; do [restart server] ; done
I tested it with this:
# This is one terminal
cd /tmp
echo > log
# This in another terminal
cd /tmp
tail -f -n0 log | grep --line-buffered "disconnect" | while read line ; do echo "found disconnect" ; done
# Then, in the first terminal
echo "test" >> log # second terminal does nothing
echo "disconnect" >> log # second terminal echos "found disconnect"
The -n0 is there because if tail reads a "Disconnected" line already present in an existing log file, the server would be restarted as soon as you run the command.
EDIT:
stdbuf is overridden by tee (see man tee). You may have more luck with a different arrangement, but here is some stuff to play around with:
stdbuf -oL -eL ios_webkit_debug_proxy 2>&1 >> proxy.log
# or
unbuffer ios_webkit_debug_proxy |& tee -a proxy.log | grep --line-buffered "Disconnected" | while read line ; do [restart server] ; done
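Putting the pieces together, here is a minimal, self-contained sketch of the detection logic. fake_proxy is a hypothetical stand-in so the example runs anywhere; in real use you would replace it with stdbuf -oL -eL ios_webkit_debug_proxy 2>&1.

```shell
#!/bin/bash
set -u
# fake_proxy is a hypothetical stand-in for the real command:
#   stdbuf -oL -eL ios_webkit_debug_proxy 2>&1
fake_proxy() {
    printf '%s\n' "Connected :9221 to com.apple.mobile.webinspector" \
                  "Disconnected :9221"
}

: > proxy.log        # start with a fresh log
status=ok
while IFS= read -r line; do
    printf '%s\n' "$line" >> proxy.log    # keep a log, like tee -a
    case $line in
        *Disconnected*) status=restart; break ;;  # relaunch the proxy here
    esac
done < <(fake_proxy)

echo "status=$status"
```

The process substitution (rather than a pipe) keeps the while loop in the main shell, so $status survives after the loop.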


watch dmesg, exit after first occurrence

I have a script which watches dmesg and kills a process after a specific log message
#!/bin/bash
while sleep 1;
do
# dmesg -w | grep --max-count=1 -q 'protocol'
dmesg -w | sed '/protocol/Q'
mkdir -p /home/user/dmesg/
dmesg -T > "/home/user/dmesg/dmesg-$(date +%d_%m_%Y-%H:%M).log"
dmesg -c
pkill -x -9 programm
done
The problem is that both sed and grep only trigger after two messages, so the script will not continue after a single message.
Is there anything I am missing?
You have a script that periodically executes dmesg. Instead, write a script that watches the output of dmesg -w as it arrives.
dmesg -w | while IFS= read -r line; do
case "$line" in
*protocol*)
echo "do something when line has protocol"
;;
esac
done
Consider reading https://mywiki.wooledge.org/BashFAQ/001.
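To see the pattern in action without needing kernel messages, here is a self-contained demo, with a hypothetical fake_dmesg standing in for dmesg -w:

```shell
#!/bin/bash
# fake_dmesg is a stand-in for `dmesg -w` so the demo is runnable anywhere.
fake_dmesg() {
    printf '%s\n' "usb 1-1: new full-speed device" \
                  "eth0: protocol error detected" \
                  "further messages"
}

hit=0
while IFS= read -r line; do
    case $line in
        *protocol*) hit=1; break ;;   # react to the first match only
    esac
done < <(fake_dmesg)

echo "hit=$hit"
```

Because the loop reacts to each line as it is read, a single matching message is enough; there is no waiting for a second message as in the polling script.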

How to take a single line of input from a long running command, then kill it?

Is there a way to take one line of input from a stream, pass it on as an argument and kill the stream?
In pseudo-bash code:
tail -f stream | filter | take-one-and-kill-tail | xargs use-value
Edit: actual script so far is:
i3-msg -t subscribe -m '["window"]'| stdbuf -o0 -e0 jq -r 'select(.change == "new") | "\(.container.window)\n"' | head -0
and it has following (undesirable) behaviour:
$ i3-msg -t subscribe -m '["window"]'| stdbuf -oL -eL jq -r 'select(.change == "new") | "\(.container.window)\n\n"' | head -1
# first event happens, window id is printed
79691787
# second event happens, head -1 quits
$
You could run the command in subshell and kill that shell.
In this example I'm killing the stream after the first info message:
#!/bin/bash
( sudo stdbuf -oL tail -f /var/log/syslog | stdbuf -oL grep -m1 info ; kill $$ )
Note: ( pipeline ) runs the pipeline in a subshell. $$ contains the PID of the script's main shell; bash does not reset $$ inside a subshell, so kill $$ terminates the whole script once grep has seen its match.
In the above example, grep -m1 ensures that only one line of output is read/written before killing the pipe.
If your filter program does not support such an option like -m1, you could pipe to awk and exit awk after the first line of input. The remaining concept stays the same:
( sudo stdbuf -oL tail -f /var/log/syslog \
| stdbuf -oL grep info \
| awk '{print;exit}' ; kill $$)
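If you only need the captured value itself, a command substitution around head -n1 is another sketch of the same idea: head exits after one line, and the upstream commands die of SIGPIPE on their next write. Here seq stands in for the long-running producer.

```shell
#!/bin/bash
# seq stands in for the long-running producer (e.g. tail -f ... | jq ...).
# head -n1 exits after the first match; the producer then gets SIGPIPE on
# its next write and dies, ending the pipeline.
value=$(seq 1 1000000 | grep --line-buffered '^42$' | head -n1)
printf 'got %s\n' "$value"
```

The captured value can then be passed on directly, without needing xargs at all.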

tee command piped in a grep and redirected to file

I would like to use the following command in bash:
(while true; do date; sleep 1;done) | tee out.out 2>&1 | grep ^[A-Z] >log.log 2>&1 &
unfortunately, until it is finished (by killing the ppid of sleep command for example), the file log.log is empty but the file out.out has the expected content.
I first want to understand what's happening
I would like to fix this.
In order to fix this, you need to make grep line-buffered. This might depend on the implementation, but on BSD grep (shipped with Mac OS X), you simply need to add the --line-buffered option to grep:
(while true; do date; sleep 1;done) | tee out.out 2>&1 | grep --line-buffered '^[A-Z]' >log.log 2>&1 &
From the grep man page:
--line-buffered
Force output to be line buffered. By default, output is line buffered when standard output is a terminal and block buffered otherwise.
You can actually validate that behavior by outputting to STDOUT instead:
(while true; do date; sleep 1;done) | tee out.out 2>&1 | grep '^[A-Z]' 2>&1 &
In that case, you don't need to buffer by line explicitly, because that's the default. However, when you redirect to a file, you must explicitly set that behaviour.
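For filters that lack a --line-buffered option, GNU coreutils' stdbuf can impose line buffering from the outside. This is a sketch with one caveat: stdbuf only affects programs that use stdio and do not adjust their own buffering.

```shell
#!/bin/bash
# tr has no --line-buffered flag; stdbuf -oL forces its stdout to be
# line-buffered, so each line reaches the file as soon as it is produced
# instead of sitting in a block buffer.
printf 'hello\nworld\n' | stdbuf -oL tr 'a-z' 'A-Z' > up.log
cat up.log
```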

bash command to grep something on stderr and save the result in a file

I am running a program called stm. I want to save only those stderr messages that contain the text "ERROR" in a text file. I also want the messages on the console.
How do I do that in bash?
Use the following pipeline if only messages containing ERROR should be displayed on the console (stderr):
stm |& grep ERROR | tee -a /path/to/logfile
Use the following command if all messages should be displayed on the console (stderr):
stm |& tee /dev/stderr | grep ERROR >> /path/to/logfile
Edit: Versions without connecting standard output and standard error:
stm 2> >( grep --line-buffered ERROR | tee -a /path/to/logfile >&2 )
stm 2> >( tee /dev/stderr | grep --line-buffered ERROR >> /path/to/logfile )
This looks like a duplicate of How to pipe stderr, and not stdout?
Redirect stderr to "&1", which means "the same place where stdout is going".
Then redirect stdout to /dev/null. Then use a normal pipe.
$ date -g
date: invalid option -- 'g'
Try `date --help' for more information.
$
$ (echo invent ; date -g)
invent (stdout)
date: invalid option -- 'g' (stderr)
Try `date --help' for more information. (stderr)
$
$ (echo invent ; date -g) 2>&1 >/dev/null | grep inv
date: invalid option -- 'g'
$
To copy the output from the above command to a file, you can use a > redirection or tee. The tee command prints one copy of the output to the console and a second copy to the file.
$ stm 2>&1 >/dev/null | grep ERROR > errors.txt
or
$ stm 2>&1 >/dev/null | grep ERROR | tee errors.txt
Are you saying that you want both stderr and stdout to appear in the console, but only stderr (not stdout) that contains "ERROR" to be logged to a file? It is that last condition that makes it difficult to find an elegant solution. If that is what you are looking for, here is my very ugly solution:
touch stm.out stm.err
stm 1>stm.out 2>stm.err & tail -f stm.out & tail -f stm.err & \
wait `pgrep stm`; pkill tail; grep ERROR stm.err > error.log; rm stm.err stm.out
I warned you about it being ugly. You could hide it in a function, use mktemp to create the temporary filenames, etc. If you don't want to wait for stm to exit before logging the ERROR text to a file, you could add tail -f stm.err | grep ERROR > error.log & after the other tail commands, and remove the grep command from the last line.
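As a sketch of that cleanup, wrapped in a function and using mktemp as suggested (fake_stm is a hypothetical stand-in for the real stm binary, and the console-mirroring part is elided for brevity):

```shell
#!/bin/bash
# fake_stm is a hypothetical stand-in for the real stm binary.
fake_stm() {
    echo "normal progress message"
    echo "ERROR: device not responding" >&2
}

log_errors() {
    local err
    err=$(mktemp) || return 1
    # A real version would also mirror stderr to the console, e.g. via
    # 2> >(tee "$err" >&2); a plain redirection keeps the sketch simple.
    fake_stm 2> "$err"
    grep ERROR "$err" >> error.log
    rm -f "$err"
}

log_errors
cat error.log
```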

How to do "tail this file until that process stops" in Bash?

I have a couple of scripts to control some applications (start/stop/list/etc). Currently my "stop" script just sends an interrupt signal to an application, but I'd like to have more feedback about what application does when it is shutting down. Ideally, I'd like to start tailing its log, then send an interrupt signal and then keep tailing that log until the application stops.
How to do this with a shell script?
For just tailing a log file until a certain process stops (using tail from GNU coreutils):
do_something > logfile &
tail --pid $! -f logfile
UPDATE: The above contains a race condition: if do_something spews many lines into logfile, tail will skip all but the last few. To avoid that and always have tail print the complete logfile, add -n +1 to the tail call (that option is even specified by POSIX tail(1)):
do_something > logfile &
tail --pid $! -n +1 -f logfile
Here's a Bash script that works without --pid. Change $log_file and $p_name to suit your needs:
#!/bin/bash
log_file="/var/log/messages"
p_name="firefox"
tail -n10 "$log_file"
last_line="$(tail -n1 "$log_file")"
while pgrep -x "$p_name" > /dev/null
do
curr_line="$(tail -n1 "$log_file")"
if [ "$curr_line" != "$last_line" ]
then
echo "$curr_line"
last_line=$curr_line
fi
sleep 0.1   # avoid a tight busy loop
done
echo "[*] $p_name exited !!"
If you need to tail the log until the process exits, but also watch its stdout/stderr at the same time, try this:
# Run some process in bg (background):
some_process &
# Get process id:
pid=$!
# Tail the log once it is created, but watch process stdout/stderr at the same time:
tail --pid=$pid -f --retry log_file_path &
# Since tail is running in bg also - wait until the process has completed:
tail --pid=$pid -f /dev/null
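The whole "tail, signal, keep tailing until exit" flow from the question can be sketched end to end. A trap-equipped background subshell stands in for the real application; SIGTERM is used instead of SIGINT because background jobs started without job control ignore SIGINT, so a trap on it would have no effect.

```shell
#!/bin/bash
logfile=$(mktemp)

# A fake app that logs on startup, then shuts down cleanly on SIGTERM.
( trap 'echo "shutting down" >> "$logfile"; kill "$spid" 2>/dev/null; exit 0' TERM
  echo "running" >> "$logfile"
  sleep 60 & spid=$!
  wait "$spid" ) &
app_pid=$!

sleep 0.2                                   # let the fake app start up
kill -TERM "$app_pid"                       # the "stop" part of the stop script
tail --pid="$app_pid" -n +1 -f "$logfile"   # follow the log until the app is gone
```

tail prints the log from the beginning (-n +1) and exits on its own once the watched PID disappears, which is exactly the feedback loop the question asks for.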
