How to take a single line of input from a long running command, then kill it? - bash

Is there a way to take one line of input from a stream, pass it on as an argument and kill the stream?
In pseudo-bash code:
tail -f stream | filter | take-one-and-kill-tail | xargs use-value
Edit: the actual script so far is:
i3-msg -t subscribe -m '["window"]' | stdbuf -o0 -e0 jq -r 'select(.change == "new") | "\(.container.window)\n"' | head -1
and it has the following (undesirable) behaviour:
$ i3-msg -t subscribe -m '["window"]'| stdbuf -oL -eL jq -r 'select(.change == "new") | "\(.container.window)\n\n"' | head -1
# first event happens, window id is printed
79691787
# second event happens, head -1 quits
$

You could run the command in a subshell and kill that shell.
In this example I'm killing the stream after the first info message:
#!/bin/bash
( sudo stdbuf -oL tail -f /var/log/syslog | stdbuf -oL grep -m1 info ; kill $$ )
Note: ( pipeline ) will run the pipeline in a subshell. In bash, $$ still contains the PID of the main script's shell rather than the subshell ($BASHPID would give the subshell's own PID), so kill $$ terminates the script, and the pipeline with it.
In the above example grep -m1 ensures that only one line of output is read/written before the pipeline is killed.
If your filter program does not support an option like -m1, you can pipe to awk and exit awk after the first line of input. The rest of the concept stays the same:
( sudo stdbuf -oL tail -f /var/log/syslog \
| stdbuf -oL grep info \
| awk '{print;exit}' ; kill $$)
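
Applied to the i3 example from the question, the same pattern might look like this (a sketch, assuming jq's --unbuffered flag in place of the stdbuf wrappers; note that kill $$ ends the enclosing script, so run it as a script rather than at an interactive prompt):
#!/bin/bash
# Print the window id of the first "new" window event, then kill the
# script so the subscription pipeline dies with it.
( i3-msg -t subscribe -m '["window"]' \
  | jq --unbuffered -r 'select(.change == "new") | .container.window' \
  | { read -r id; echo "$id"; } ; kill $$ )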

Related

Bash Script - tail to file

I have the following in a bash script file watcher.sh.
grep ERROR $ExampleLogFile > $ErrorLogFile
When I run this, it successfully copies the lines containing ERROR from $ExampleLogFile to $ErrorLogFile.
I need to make it so it continually monitors the ExampleLogFile for changes and writes those to the ErrorLogFile.
I was thinking of doing the following, but this doesn't work:
tail -f grep ERROR $ExampleLogFile > $ErrorLogFile
It does write some of the lines, but they're not the ones containing ERROR:
tail: grep: No such file or directory
tail: ERROR: No such file or directory
Any advice, please.
You can use the tee command here.
tail -f $ExampleLogFile | grep --line-buffered ERROR | tee $ErrorLogFile
It will write to $ErrorLogFile and print to stdout at the same time.
You need:
while :; do grep ERROR $ExampleLogFile > $ErrorLogFile; sleep 2; done
This should achieve what you want without needing the tail command.
If the file is ever cleared, though, this will not work as you might expect, because each pass of grep rewrites $ErrorLogFile with only the entries currently present in $ExampleLogFile.
You can arrange tail and grep in a pipeline:
tail -f $ExampleLogFile | grep ERROR > $ErrorLogFile
Remember that this command will never exit by itself (tail will continue to look for additional data). You will have to arrange for some other exit condition (e.g., timeout, explicit kill, etc).
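For example, coreutils timeout can supply that exit condition (a sketch: stop watching after ten minutes; once timeout kills tail, grep sees EOF and exits too):
timeout 10m tail -f "$ExampleLogFile" | grep ERROR > "$ErrorLogFile"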
tail -f $ExampleLogFile | grep --line-buffered ERROR > $ErrorLogFile
or, to be paranoid about buffering:
stdbuf -oL tail -f $ExampleLogFile | stdbuf -oL grep --line-buffered ERROR > $ErrorLogFile
But most probably you want to include existing lines too. In that case:
tail -n +1 -f $ExampleLogFile | grep --line-buffered ERROR > $ErrorLogFile
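
Put together, watcher.sh can then be as small as this (a sketch; the paths are hypothetical placeholders for whatever the script already defines, and -F is used so the watcher survives log rotation):
#!/bin/bash
ExampleLogFile=/var/log/example.log   # hypothetical paths for illustration
ErrorLogFile=/tmp/errors.log
# Read the whole file first (-n +1), keep following it (-F), and write
# matching lines out as they arrive.
tail -n +1 -F "$ExampleLogFile" | grep --line-buffered ERROR > "$ErrorLogFile"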

How to output bash command to stdout and pipe to another command at the same time?

I'm working on a server and to show detailed GPU information I use these commands:
nvidia-smi
ps -up `nvidia-smi |tail -n +16 | head -n -1 | sed 's/\s\s*/ /g' | cut -d' ' -f3`
However, as you can see, nvidia-smi is called twice. How can I make the output of nvidia-smi go to stdout and pipe into another command at the same time?
Use tee:
ps -up `nvidia-smi |tee /dev/stderr |tail -n +16 | head -n -1 | sed 's/\s\s*/ /g' | cut -d' ' -f3`
Since stdout is being piped, tee can't make a copy to it, so I picked stderr to display the output.
If /dev/stderr is not available, use /proc/self/fd/2.
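
If writing to stderr is undesirable, another option is to capture the output once in a variable and reuse it (a sketch):
# Run nvidia-smi once, show the full report, then feed the saved copy
# to the ps pipeline.
out=$(nvidia-smi)
printf '%s\n' "$out"
ps -up $(printf '%s\n' "$out" | tail -n +16 | head -n -1 | sed 's/\s\s*/ /g' | cut -d' ' -f3)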

How to continuously monitor output of shell command? Should I use Bash or Ruby?

I am using the ios_webkit_debug_proxy remote server, which usually shuts down or just disconnects.
I want to restart the server if the last line of output contains "Disconnected", or if the command is not running at all.
The problem may be that the output is buffered. You may have luck with the utility stdbuf to disable the buffering (another tool is unbuffer). You can fully disable all buffering with:
stdbuf -i0 -o0 -e0 [command] # 0 is unbuffered and L is line-buffered
Your command might look like this:
stdbuf -oL -eL ios_webkit_debug_proxy |& tee -a proxy.log
tail -f -n0 proxy.log | grep --line-buffered "Disconnected" | while read line ; do [restart server] ; done
I tested it with this:
# This is one terminal
cd /tmp
echo > log
# This in another terminal
cd /tmp
tail -f -n0 log | grep --line-buffered "disconnect" | while read line ; do echo "found disconnect" ; done
# Then, in the first terminal
echo "test" >> log # second terminal does nothing
echo "disconnect" >> log # second terminal echos "found disconnect"
The tail -n0 is there because if tail read a "disconnect" already present in the log file, it would restart the server as soon as you ran the command.
EDIT:
stdbuf is overridden by tee (see man tee). You may have more luck with a different arrangement; some variations to play around with:
stdbuf -oL -eL ios_webkit_debug_proxy 2>&1 >> proxy.log
# or
unbuffer ios_webkit_debug_proxy |& tee -a proxy.log | grep --line-buffered "Disconnected" | while read line ; do [restart server] ; done
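
A full supervisor loop built from those pieces might look like this (a sketch; it restarts the proxy when it logs "Disconnected" or exits on its own):
#!/bin/bash
while true; do
    # grep -m1 exits on the first "Disconnected"; the proxy then dies of
    # SIGPIPE on its next write. If the proxy exits by itself, grep sees
    # EOF and exits too, so either way the loop starts a fresh instance.
    stdbuf -oL -eL ios_webkit_debug_proxy 2>&1 |
        grep --line-buffered -m1 "Disconnected"
    sleep 1   # brief pause before restarting
done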

bash output redirect problem

I want to count the number of lines output by a command in a bash script, e.g.:
COUNT=ls | wc -l
But I also want the script to output the original output from ls. How to get this done? (My actual command is not ls and it has side effects. So I can't run it twice.)
The tee(1) utility may be helpful:
$ ls | tee /dev/tty | wc -l
CHANGES
qpi.doc
qpi.lib
qpi.s
4
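To capture the count in a variable, as the question's COUNT= attempt intends, the same idea can be wrapped in a command substitution (a sketch; the tee copy goes to the terminal, the count to the variable):
COUNT=$(ls | tee /dev/tty | wc -l)
echo "line count: $COUNT"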
info coreutils "tee invocation" includes the following example, which might be more instructive of tee(1)'s power:
wget -O - http://example.com/dvd.iso \
| tee >(sha1sum > dvd.sha1) \
>(md5sum > dvd.md5) \
> dvd.iso
That downloads the file once, sends output through two child processes (as started via bash(1) process substitution) and also tee(1)'s stdout, which is redirected to a file.
ls | tee tmpfile | first command
cat tmpfile | second command
Tee is a good way to do that, but you can make something simpler:
ls > __tmpfile
cat __tmpfile | wc -l
cat __tmpfile
rm __tmpfile
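A slightly safer variant of the temp-file approach uses mktemp, so concurrent runs don't clobber each other's files (a sketch):
tmpfile=$(mktemp)
ls > "$tmpfile"
wc -l < "$tmpfile"   # the count
cat "$tmpfile"       # the original output
rm -f "$tmpfile"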

Do a tail -F until matching a pattern

I want to do a tail -F on a file until matching a pattern. I found a way using awk, but IMHO my command is not really clean. The problem is that I need to do it in only one line, because of some limitations.
tail -n +0 -F /tmp/foo | \
awk -W interactive '{if ($1 == "EOF") exit; print} END {system("echo EOF >> /tmp/foo")}'
The tail will block until EOF appears in the file. It works pretty well. The END block is mandatory because awk's exit does not exit right away; it makes awk evaluate the END block before quitting. The END block then hangs on a read call (because of tail), so the last thing I need to do is write another line to the file to force tail to exit.
Does someone know a better way to do that?
Use tail's --pid option, and tail will stop when the shell dies. No need to append anything extra to the tailed file.
sh -c 'tail -n +0 --pid=$$ -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
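A quick way to convince yourself it works (a sketch, mirroring the two-terminal test further down the thread):
: > /tmp/foo                 # terminal window 1: start with an empty file
sh -c 'tail -n +0 --pid=$$ -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
echo EOF >> /tmp/foo         # terminal window 2: the pipeline prints EOF and exits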
Try this:
sh -c 'tail -n +0 -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
The whole command-line will exit as soon as the "EOF" string is seen in /tmp/foo.
There is one side-effect: the tail process will be left running (in the background) until anything is written to /tmp/foo.
I had no success with this solution:
sh -c 'tail -n +0 -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
There is a buffering issue: if no more lines are appended to the file, sed will not read its input. So, with a little more research, I came up with this:
sed '/EOF/q' <(tail -n 0 -f /tmp/foo)
The script is in https://gist.github.com/2377029
This is something Tcl is quite good at. If the following is "tail_until.tcl",
#!/usr/bin/env tclsh
proc main {filename pattern} {
    set pipe [open "| tail -n +0 -F $filename"]
    set pid [pid $pipe]
    fileevent $pipe readable [list handler $pipe $pattern]
    vwait ::until_found
    catch {exec kill $pid}
}
proc handler {pipe pattern} {
    if {[gets $pipe line] == -1} {
        if {[eof $pipe]} {
            set ::until_found 1
        }
    } else {
        puts $line
        if {[string first $pattern $line] != -1} {
            set ::until_found 1
        }
    }
}
main {*}$argv
Then you'd do tail_until.tcl /tmp/foo EOF
Does this work for you?
tail -n +0 -F /tmp/foo | sed '/EOF/q'
I'm assuming that 'EOF' is the pattern you're looking for. The sed command quits when it finds it, which means that the tail should quit the next time it writes.
I suppose there is an outside chance that tail could hang around if the pattern is found right at the end of the file, waiting for more output that will never appear. If that's really a concern, you could arrange to kill it; the pipeline as a whole terminates when sed terminates (unless you're using a funny shell that decides that isn't the correct behaviour).
Grump about Bash
As feared, bash (on MacOS X at least, but probably everywhere) is a shell that thinks it needs to hang around waiting for tail to finish even though sed has quit. Sometimes, more often than I like, I prefer the behaviour of good old Bourne shell, which wasn't so clever and therefore guessed wrong less often than Bash does. (dribbler is a program which dribbles out messages one per second, '1: Hello' etc in the example, with the output going to standard output.) In Bash, this command sequence hangs until I do 'echo pqr >>/tmp/foo' in a separate window:
date
{ timeout -t 2m dribbler -t -m Hello; echo EOF; } >/tmp/foo &
echo Hi
sleep 1 # Ensure /tmp/foo is created
tail -n +0 -F /tmp/foo | sed '/EOF/q'
date
Sadly, I don't immediately see an option to control this behaviour. I did find shopt lithist, but that's unrelated to this problem.
Hooray for Korn Shell
I note that when I run that script using Korn shell, it works as I'd expect, leaving a tail lurking around to be killed somehow. What works there is 'echo pqr >> /tmp/foo' after the second date command completes.
Here's an extended version of Jon's solution which uses sed instead of grep so that the output of tail goes to stdout:
sed -r '/EOF/q' <( exec tail -n +0 -f /tmp/foo ); kill $! 2> /dev/null
This works because bash sets $! to the PID of the process substitution, so kill $! kills the leftover tail.
The main advantage of this over the sh -c solutions is that killing an sh tends to print something such as 'Terminated' to the output, which is unwelcome.
sh -c 'tail -n +0 --pid=$$ -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
Here the main problem is with $$.
With the single quotes above, $$ is expanded by the inner sh, so both --pid and kill see that sh's PID. If you embed the command in a double-quoted string instead, $$ is expanded by the current shell where the command is typed, not by sh.
To make kill work in that case, you need to change kill $$ to kill \$$.
After that you can safely get rid of the --pid=$$ passed to the tail command.
Summarising, the following will work just fine:
/bin/sh -c "tail -n 0 -f /tmp/foo | { sed '/EOF/ q' && kill \$$ ;}"
Optionally you can pass -n to sed to keep it quiet :)
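The quoting difference is easier to see side by side (a sketch; both variants keep --pid, so neither leaves a stray tail behind):
# Single quotes: the inner sh expands $$ itself.
sh -c 'tail -n +0 --pid=$$ -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'
# Double quotes: the outer shell would expand $$, so it must be escaped
# for the inner sh.
sh -c "tail -n +0 --pid=\$\$ -f /tmp/foo | { sed '/EOF/ q' && kill \$\$ ;}"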
To kill the dangling tail process as well, you may execute the tail command in a (Bash) process substitution context, which can later be killed as if it had been a backgrounded process. (Code taken from How to read one line from 'tail -f' through a pipeline, and then terminate?)
: > /tmp/foo
grep -m 1 EOF <( exec tail -f /tmp/foo ); kill $! 2> /dev/null
echo EOF > /tmp/foo # terminal window 2
As an alternative you could use a named pipe.
(
: > /tmp/foo
rm -f pidfifo
mkfifo pidfifo
sh -c '(tail -n +0 -f /tmp/foo & echo $! > pidfifo) |
{ sed "/EOF/ q" && kill $(cat pidfifo) && kill $$ ;}'
)
echo EOF > /tmp/foo # terminal window 2
Ready to use for Tomcat:
sh -c 'tail -f --pid=$$ catalina.out | { grep -i -m 1 "Server startup in" && kill $$ ;}'
And for the scenario above:
sh -c 'tail -f --pid=$$ /tmp/foo | { grep -i -m 1 EOF && kill $$ ;}'
tail -f <filename> | grep -q "<pattern>"
grep -q exits as soon as the pattern matches; tail is then killed by SIGPIPE the next time it writes to the broken pipe.
