I'm interested in following several remote files at the same time while simultaneously aggregating stats over them. So far, I'm doing this as follows:
mkfifo mfifo
ssh -ft host1 'tail -f /path/to/log | grep something' > mfifo &
ssh -ft host2 'tail -f /path/to/log | grep something' > mfifo &
ssh -ft host3 'tail -f /path/to/log | grep something' > mfifo &
cat mfifo | awk '{x += $4; print $3} END {printf "total: %d", x}'
This pretty much works as expected: the grepped logs are aggregated and streamed through awk. However, I'm not sure how to get the final total printed. I gather that I need to close the writers of the fifo, but I'm not sure how to do that. Any suggestions on how to do this without storing the whole stream in a file?
Killing FIFO Writers
You can use fuser to kill the processes writing to a file. For example:
fuser -TERM -k -w mfifo; sleep 5; fuser -k -w mfifo
Note that fuser defaults to sending SIGKILL, so the example given sends an explicit SIGTERM first, then waits five seconds before forcefully terminating any remaining writers. This should allow your processes to clean up after themselves, but feel free to adjust the invocation to suit.
Also, note that we're passing the -w flag, so that fuser only kills processes with write access. Without this flag, you'll also be killing cat and awk.
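Putting the pieces together, a minimal end-to-end sketch might look like this (host names, the log path, and the grep pattern are placeholders taken from the question):
mkfifo mfifo
# start the remote readers; each one holds the fifo open for writing
ssh -ft host1 'tail -f /path/to/log | grep something' > mfifo &
ssh -ft host2 'tail -f /path/to/log | grep something' > mfifo &
ssh -ft host3 'tail -f /path/to/log | grep something' > mfifo &
# aggregate in the background so the shell stays free
awk '{x += $4; print $3} END {printf "total: %d\n", x}' < mfifo &
# ... later, when the total is wanted: terminate the writers so awk sees EOF
fuser -TERM -k -w mfifo; sleep 5; fuser -k -w mfifo
wait   # awk now prints the END total and exits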
Related
mkfifo /tmp/f ; cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f
I am new to bash, and I am trying to understand this piece of "code".
Why is a while loop not needed? How can this work? Is it itself a loop? Why? How?
Also, cat filePipe by itself ONLY PRINTS ONE LINE and then exits (I just tested it), and to make cat not exit I do: while cat pipeFile ; do : ; done. So how does the above work?
I don't get the order of execution... at the beginning /tmp/f is empty, so cat /tmp/f should "send" an empty stream to /bin/bash, which just sends it to nc, which opens a connection and "sends" the interactive bash to whoever connects... and the response of the client is sent to /tmp/f ... and then what? How can it go back and do the same things again?
When bash parses the line mkfifo /tmp/f ; cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f, several things happen. First, the fifo is created. Then, in no particular order, three things happen: cat is started, bash is started, and nc is started with its output stream connected to /tmp/f.

cat will block until some other process opens /tmp/f for writing; nc is about to do exactly that (or already has, but we don't know which of the two starts first, nor in which order they open the fifo, and whichever gets there first blocks until the other completes the operation). Once all three processes have started, they just sit there waiting for data.

Eventually, some external process connects to port 1234 and sends some data into nc, which writes it into /tmp/f. cat (eventually) reads that data and sends it downstream to bash, which processes the input and (probably) writes some data into nc, which sends it back across the socket connection.
If you have a test case in which cat /tmp/f only writes one line of data, that is simply because whatever process you used to write into /tmp/f only wrote a single line. Try:
printf 'foo\nbar\nbaz\n' > /tmp/f & cat /tmp/f
or
while sleep 1; do date; done > /tmp/f & cat /tmp/f
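As a quick way to see the blocking open described above, here is a small self-contained demo (the fifo path is just an example):
mkfifo /tmp/demo
cat /tmp/demo &          # the open for reading blocks until some writer opens the fifo
sleep 2                  # nothing is printed yet
echo hello > /tmp/demo   # a writer opens the fifo; cat unblocks and prints "hello"
wait                     # cat exits once the writer closes its end (EOF)
rm /tmp/demo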
/tmp/f is NOT an empty file; it is a fifo, a named pipe that passes whatever is written into it through to whoever reads from it.
Someone connects to port 1234 and types something, which nc forwards to the fifo, which then feeds into bash.
bash runs the command and sends results back to nc.
.1 You misunderstand what happens when you echo "string" >/path/fifo
.a) When you just echo something >/path/to/somewhere, you
(test accessibility, then) open the target somewhere for writing,
write something to the opened file descriptor (fd), and
close (release) the accessed file.
.b) A fifo (First In, First Out) is not a regular file.
Try this:
# Window 1:
mkfifo /tmp/fifotest
cat /tmp/fifotest
# Window 2:
exec {fd2fifo}>/tmp/fifotest
echo >&$fd2fifo Foo bar
You will see that cat does not terminate.
echo >&$fd2fifo Baz
exec {fd2fifo}>&-
Now, cat will terminate.
So there is no need for any loop!
.2 The command cat /tmp/f | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/f
could be written (avoiding a useless use of cat) as:
bash -i 2>&1 </tmp/f | nc -l -p 1234 > /tmp/f
but you could do the same operation from a different point of view:
nc -l -p 1234 </tmp/f | bash -i >/tmp/f 2>&1
The goal is
to drive bash's STDIN from nc's STDOUT and
connect back bash's STDOUT and STDERR to nc's STDIN.
.3 Going further: bashisms
Under bash, you could avoid creating the fifo by using an unnamed fifo:
coproc nc -l -p 1234; bash -i >&${COPROC[1]} 2>&1 <&${COPROC[0]}
or
exec {ncin}<> <(:); nc -l -p 1234 <&$ncin | bash -i >&$ncin 2>&1
I am trying to output to two different files using tee. My first file will basically be tail -f /myfile and my second output will be a subset of the first file. I have looked online and people were saying we can use:
tee >(proc1) >(proc2)
I have tried the above but both my files are blank.
Here is what I have so far:
myscript.sh
ssh root@server 'tail -f /my/dir/text.log' | tee >(/mydir/my.log) >(grep 'string' /mydir/my.log > /mydir/mysecond.log)
myexpect.sh
#!/usr/bin/expect -f
set pass password
spawn /my/dir/myscript.sh
expect {
"key fingerprint" {send "yes/r"; exp_contiue}
"assword: " {send "$pass\r"}
}
interact
In your script, there are some problems with the usage of tee:
tee >(/mydir/my.log): can be substituted with tee /mydir/my.log, since tee writes to stdout and to the files given as arguments, i.e. /mydir/my.log
grep 'string' /mydir/my.log > /mydir/mysecond.log: as I mentioned, tee also writes to stdout, so there is no need to grep the string from the file; you can grep from stdout directly. Use a pipeline to do it.
So the whole command should be modified as follows:
ssh root@server 'tail -f /my/dir/text.log | tee /mydir/my.log | grep --line-buffered "string" > /mydir/mysecond.log'
Edit:
Regarding your further question:
The command hangs because tail -f keeps waiting for the file to grow. If you don't want the command to hang, remove -f from tail.
Depending on whether tail is given the -f option, you need two different ways to let grep write the file (see the sketch below):
For the plain tail case: grep can write the file once the pipeline finishes.
For the tail -f case: --line-buffered makes grep use line buffering on its output, so matches are written as they arrive.
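As a side-by-side sketch of the two cases, reusing the paths and pattern from the question (note that, as in the command above, tee and grep now run on the remote host, so the output files are created there):
# plain tail: the pipeline ends on its own, and grep's buffered output
# is flushed to mysecond.log when it exits
ssh root@server 'tail /my/dir/text.log | tee /mydir/my.log | grep "string" > /mydir/mysecond.log'
# tail -f: the pipeline never ends on its own, so grep must flush each
# matching line as it arrives
ssh root@server 'tail -f /my/dir/text.log | tee /mydir/my.log | grep --line-buffered "string" > /mydir/mysecond.log'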
What is the cleanest (simplest, most efficient, shortest, quickest, easiest, most elegant) way to create a non-linear pipeline like this in Bash?
I have three commands: mksock, irclogin, and ircpingpong. I want to pipe stdin, irclogin, and ircpingpong into mksock, and pipe mksock into stdout and ircpingpong. This means that mksock and ircpingpong are in a loop. I drew a diagram:
irclogin only needs to be run once and be the first input into mksock. After that, ircpingpong and stdin should be accepted at any time. I am currently using pipes and a temporary file like this:
#!/bin/bash
server=127.0.0.1
port=6667
infifo=/tmp/ircin
outfifo=/tmp/ircout
pongfifo=/tmp/ircpong
rm $infifo
rm $outfifo
rm $pongfifo
mkfifo $infifo
mkfifo $outfifo
touch $pongfifo
( irclogin | cat - $infifo & tail -f $pongfifo; ) | mksock $server $port | tee $outfifo | stdbuf -oL ircpingpong > $pongfifo &
cat < $outfifo &
cat > $infifo
pkill tail
This works, but I want to know if there is a better way to do this. It bothers me that I am using a file rather than a pipe for looping back from ircpingpong to mksock using tail. Using a pipe didn't work because, to my understanding, something is written to the pipe before tail -f starts reading from it, and so it misses it.
It also bothers me that I have to kill tail at the end of the script, because it doesn't stop on its own and would leave the socket connected even after the script has ended.
I can suggest a version without temporary files, and with two fifos:
fifo1=/tmp/fifo1
fifo2=/tmp/fifo2
rm $fifo1
rm $fifo2
mkfifo $fifo1
mkfifo $fifo2
ircpingpong < $fifo2 > $fifo1 &    # reads mksock's output from fifo2, writes back into mksock's input via fifo1
(mksock <$fifo1 | tee $fifo2 ) &   # mksock reads fifo1; tee copies its output to stdout and into fifo2
irclogin >$fifo1 &                 # one-shot login, the first input into mksock
cat >$fifo1                        # forward the script's own stdin into mksock
The idea is to run all programs separately, and only ensure that input and output of each program is redirected properly according to this diagram:
Of course, ircpingpong must read stdin and write to stdout, irclogin must write to stdout, and mksock must read from stdin and write to stdout.
Here's something that uses just one fifo.
fifo=/tmp/myfifo
rm $fifo
mkfifo $fifo
((ircpingpong < $fifo &) && irclogin && cat) | mksock | tee $fifo
Add stdbuf as needed.
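For example, if ircpingpong's output should be line-buffered, as in the original script, one place stdbuf might go is the following (a sketch, not tested against your programs):
((stdbuf -oL ircpingpong < $fifo &) && irclogin && cat) | mksock | tee $fifo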
I don't know whether you will get your "something doesn't die on its own" problem; when I ctrl-c'ed the script, everything seemed to die.
As described in the title, I want to know the process id of tail and nc.
It is easy to use $! to get the pid of nc, but how about tail?
From the comments: you want to terminate these processes once a separate event occurs.
Try a subshell (cat -v below is just there for my testing).
e.g.
( tail -f /path_to_file/ | cat -v ) & echo $!
This will give you the pid of the spawned subshell, which you can use to take out its child processes at the same time.
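If you later need to stop the pipeline, one option is to signal the subshell's direct children by parent pid with pkill -P; a small sketch, with an example file path:
( tail -f /var/log/syslog | cat -v ) &   # example pipeline; the path is just a placeholder
pid=$!                                   # pid of the subshell wrapping the pipeline
# ... later, when the separate event occurs:
pkill -TERM -P "$pid"                    # signal the subshell's children: tail and cat
kill "$pid" 2>/dev/null                  # clean up the subshell itself if it is still around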
I have a couple of scripts to control some applications (start/stop/list/etc). Currently my "stop" script just sends an interrupt signal to an application, but I'd like to have more feedback about what the application does when it is shutting down. Ideally, I'd like to start tailing its log, then send an interrupt signal, and then keep tailing that log until the application stops.
How to do this with a shell script?
For just tailing a log file until a certain process stops (using tail from GNU coreutils):
do_something > logfile &
tail --pid $! -f logfile
UPDATE: The above contains a race condition: in case do_something spews many lines into logfile, tail will skip all of them but the last few. To avoid that and always have tail print the complete logfile, add a -n +1 parameter to the tail call (that is even POSIX tail(1)):
do_something > logfile &
tail --pid $! -n +1 -f logfile
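Applied to the stop-script scenario from the question, a minimal sketch might look like this (the log and pid-file locations are assumptions; adjust to your application):
logfile=/var/log/myapp.log                   # assumed log location
app_pid=$(cat /var/run/myapp.pid)            # assumed pid file
tail --pid="$app_pid" -n 0 -f "$logfile" &   # follow only new log lines until the app exits
kill -INT "$app_pid"                         # send the interrupt signal
wait                                         # returns once tail sees the application die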
Here's a Bash script that works without --pid. Change $log_file and $p_name to suit your needs:
#!/bin/bash
log_file="/var/log/messages"
p_name="firefox"
tail -n10 $log_file
curr_line="$(tail -n1 $log_file)"
last_line="$(tail -n1 $log_file)"
while [ "$(ps aux | grep "$p_name" | grep -v grep | wc -l)" -gt 0 ]
do
    curr_line="$(tail -n1 "$log_file")"
    if [ "$curr_line" != "$last_line" ]
    then
        echo "$curr_line"
        last_line="$curr_line"
    fi
done
echo "[*] $p_name exited !!"
If you need to tail a log until the process exits, but also watch its stdout/stderr at the same time, try this:
# Run some process in bg (background):
some_process &
# Get process id:
pid=$!
# Tail the log once it is created, but watch process stdout/stderr at the same time:
tail --pid=$pid -f --retry log_file_path &
# Since tail is running in bg also - wait until the process has completed:
tail --pid=$pid -f /dev/null