watch dmesg, exit after first occurrence - bash

I have a script which watches dmesg and kills a process after a specific log message:
#!/bin/bash
while sleep 1;
do
# dmesg -w | grep --max-count=1 -q 'protocol'
dmesg -w | sed '/protocol/Q'
mkdir -p /home/user/dmesg/
eval "dmesg -T > /home/user/dmesg/dmesg-`date +%d_%m_%Y-%H:%M`.log";
eval "dmesg -c";
pkill -x -9 programm
done
The problem is that both sed and grep only trigger after a second message arrives,
so the script does not continue after just one message.
Is there anything I am missing?

You have a script that periodically executes dmesg. Instead, write a script that watches the output of dmesg.
dmesg -w | while IFS= read -r line; do
    case "$line" in
        *protocol*)
            echo "do something when line has protocol"
            ;;
    esac
done
Consider reading https://mywiki.wooledge.org/BashFAQ/001 .
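As a minimal sketch (not part of the original answer), the actions from the question's script could be folded into that watch loop like this; the paths, date format, and the process name programm are copied from the question:
#!/bin/bash
# Sketch: act once on the first matching line of a followed dmesg.
mkdir -p /home/user/dmesg/
dmesg -w | while IFS= read -r line; do
    case "$line" in
        *protocol*)
            dmesg -T > "/home/user/dmesg/dmesg-$(date +%d_%m_%Y-%H:%M).log"
            dmesg -c > /dev/null   # clear the ring buffer, as in the original script
            pkill -x -9 programm
            break                  # leave the loop; dmesg itself exits on its next write (SIGPIPE)
            ;;
    esac
done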

Related

How do I prevent my bash script (tailing a file) from repeatedly acting on the same line?

I was working on a script that would keep monitoring logins to my server or laptop via ssh.
This is the code I was working with:
slackmessenger() {
curl -X POST -H 'Content-type: application/json' --data '{"text":"'"$1"'"}' myapilinkwashere
## removed the API link due to Slack restrictions
}
while true
do
tail /var/log/auth.log | grep sshd | head -n 1 | while read LREAD
do
echo ${LREAD}
var=$(tail -f /var/log/auth.log | grep sshd | head -n 1)
slackmessenger "$var"
done
done
The issue I'm facing is that it keeps sending the old logs because of the while loop. Can there be a condition so that the loop only sends the new/updated entries instead of sending the old ones over and over again? I could not think of a condition that would skip the old entries and only show the new ones.
Instead of using head -n 1 to extract a line at a time, iterate over the filtered output of tail -f /var/log/auth.log | grep sshd and process each line once as it comes through.
#!/usr/bin/env bash
# ^^^^- this needs to be a bash script, not a sh script!
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac
while IFS= read -r line; do
printf '%s\n' "$line"
slackmessenger "$line"
done < <(tail -f /var/log/auth.log | grep --line-buffered sshd)
See BashFAQ #9 describing why --line-buffered is necessary.
You could also write this as:
#!/usr/bin/env bash
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac
tail -f /var/log/auth.log |
grep --line-buffered sshd |
tee >(xargs -d $'\n' -n 1 slackmessenger)
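One caveat with the tee variant, not covered in the thread: xargs can only run external commands, so if slackmessenger stays a shell function it has to be exported and invoked through bash. A possible sketch; SLACK_WEBHOOK_URL is a placeholder for the webhook URL that was removed from the question:
#!/usr/bin/env bash
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac

# Same function as in the question; SLACK_WEBHOOK_URL stands in for the removed API link.
slackmessenger() {
    curl -X POST -H 'Content-type: application/json' --data '{"text":"'"$1"'"}' "$SLACK_WEBHOOK_URL"
}
export -f slackmessenger   # xargs runs external commands, so export the function for the child bash

tail -f /var/log/auth.log |
    grep --line-buffered sshd |
    tee >(xargs -d $'\n' -n 1 bash -c 'slackmessenger "$1"' _)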

Why does bash script stop working

The script monitors incoming HTTP messages and forwards them to a monitoring application called zabbix. It works fine, however after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I can see the script is still running
the logfile gets updated properly (first command of the script)
the FIFO pipe seems to be working
The problem must be somewhere in the while loop or the tail command.
I'm new to scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &
while true
do
    if read line <$pipe; then
        unset sn
        for ((c=1; c<=3; c++)) # c is no of max parameters x 2 + 1
        do
            URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            if [[ "$URL" == 'sn' ]]; then
                ((c++))
                sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            fi
        done
        if [[ "$sn" ]]; then
            hosttype="US2G_"
            host=$hosttype$sn
            zabbix_sender -z nuc -s $host -k serial -o $sn -vv
        fi
    fi
done
You're inputting from the fifo incorrectly. By writing:
while true; do read line < $pipe ....; done
you are closing and reopening the fifo on each iteration of the loop. The first time you close it, the producer to the pipe (the tail -f) gets a SIGPIPE and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every process inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly close stdin for each.
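Put together, the rewritten script might look like the sketch below. It keeps the question's commands and setup; the deliberate changes are the redirection moved to done, quoting, passing the column number to awk via -v instead of string splicing, and </dev/null on zabbix_sender so it cannot swallow lines meant for read.
#!/bin/bash
# Sketch of the corrected structure, same commands as in the question.
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
[[ -p $pipe ]] || mkfifo "$pipe"
tail -n0 -F /usr/local/bin/logfile > "$pipe" &

while true; do
    if read -r line; then
        unset sn
        for ((c=1; c<=3; c++)); do
            URL="$(echo "$line" | awk -F'[ =&?]' -v col="$c" '{print $col}')"
            if [[ "$URL" == 'sn' ]]; then
                ((c++))
                sn="$(echo "$line" | awk -F'[ =&?]' -v col="$c" '{print $col}')"
            fi
        done
        if [[ "$sn" ]]; then
            host="US2G_$sn"
            # </dev/null keeps zabbix_sender from reading lines meant for the loop
            zabbix_sender -z nuc -s "$host" -k serial -o "$sn" -vv </dev/null
        fi
    fi
done < "$pipe"   # the fifo is opened once here, not once per read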

Kill dbus monitor script when application exits?

I am using a simple dbus-monitor script for gnote. The script starts when gnote starts; I modified the Exec parameter of the desktop file to achieve this.
The problem is that I haven't found any way to kill my script after the application (i.e. gnote) exits. If the application itself exits, there is no point in keeping the script running in the background, as it is not going to fetch any output.
The script looks like this:
#!/bin/bash
OJECT="'org.gnome.Gnote'"
IFACE="'org.gnome.Gnote.RemoteControl'"
DPATH="'/org/gnome/Gnote/RemoteControl'"
echo $IFACE
WATCH1="type='signal',sender=${OJECT},interface=${IFACE},path=${DPATH},member='NoteAdded'"
WATCH2="type='signal',sender=${OJECT},interface=${IFACE},path=${DPATH},member='NoteSaved'"
WATCH3="type='signal', sender=${OJECT}, interface=${IFACE}, path=${DPATH}, member='NoteDeleted'"
dbus-monitor ${WATCH2} |
while read LINE; do
echo $LINE | grep "note://"
done
I tried to modify it like this :
dbus-monitor ${WATCH2} |
while read LINE; do
echo $LINE | grep "note://"
if pgrep "gnote" > /dev/null; then
echo ""
else
break;
fi
done
pid=`pidof -x $(basename $0)`
kill $pid
But it didn't work. I also tried using trap as explained in this question but without success.
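A likely reason the modified loop keeps running is that read blocks until dbus-monitor emits another line, so the pgrep check only happens when a new signal arrives. One possible workaround, sketched here rather than taken from the thread, is to keep the monitor in the background and poll for gnote separately:
#!/bin/bash
# Sketch only: background the monitor, then poll for gnote and kill the
# monitor once gnote is gone. Variable names mirror the script above.
OJECT="'org.gnome.Gnote'"
IFACE="'org.gnome.Gnote.RemoteControl'"
DPATH="'/org/gnome/Gnote/RemoteControl'"
WATCH2="type='signal',sender=${OJECT},interface=${IFACE},path=${DPATH},member='NoteSaved'"

dbus-monitor ${WATCH2} > >(grep --line-buffered "note://") &
MONITOR_PID=$!                      # PID of dbus-monitor itself

while pgrep gnote > /dev/null; do   # poll every few seconds for the application
    sleep 5
done
kill "$MONITOR_PID" 2>/dev/null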

Fails to read lines from running process in bash

Using process substitution, we can read every line of output from a command.
# Echo a number every second using process substitution
while read line; do
echo $line
done < <(for i in $(seq 1 10); do echo $i && sleep 1; done)
In the same way, I want to get the stdout output of the 'wpa_supplicant' command while discarding stderr.
But nothing can be seen on screen!
while read line; do
echo $line
done < <(wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null)
I confirmed that typing the same command at the prompt shows its output normally.
$ wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null
What is the mistake? Any help would be appreciated.
Finally I found the answer here!
The problem was simply buffering. Using stdbuf (and piping), the original code can be modified as below.
stdbuf -oL wpa_supplicant -iwlan1 -Dwext -c${MY_CONFIG_FILE} | while read line; do
echo "! $line"
done
'stdbuf -oL' makes the stream line-buffered, so I can get each line from the running process as it is produced.
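The same fix also works with the process-substitution form from the question; a minimal sketch, assuming MY_CONFIG_FILE is set as before:
while read -r line; do
    echo "! $line"
done < <(stdbuf -oL wpa_supplicant -Dwext -iwlan1 -c"${MY_CONFIG_FILE}" 2> /dev/null)
A small difference: the piped form runs the while loop in a subshell, while this form keeps it in the current shell, so variables set inside the loop remain visible afterwards.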

nice way to kill piped process?

I want to process each stdout line from a shell command the moment it is created. I want to grab the output of test.sh (a long-running process). My current approach is this:
./test.sh >tmp.txt &
PID=$!
tail -f tmp.txt | while read line; do
echo $line
ps ${PID} > /dev/null
if [ $? -ne 0 ]; then
echo "exiting.."
fi
done;
But unfortunately, this will print "exiting.." and then wait, as the tail -f is still running. I tried both break and exit.
I run this on FreeBSD, so I cannot use the --pid= option of some Linux tails.
I can use ps and grep to get the PID of the tail and kill it, but that seems very ugly to me.
Any hints?
Why do you need the tail process?
Could you instead do something along the lines of
./test.sh | while read line; do
# process $line
done
or, if you want to keep the output in tmp.txt :
./test.sh | tee tmp.txt | while read line; do
# process $line
done
If you still want to use an intermediate tail -f process, maybe you could use a named pipe (fifo) instead of a regular pipe, to allow detaching the tail process and getting its pid:
./test.sh >tmp.txt &
PID=$!
mkfifo tmp.fifo
tail -f tmp.txt >tmp.fifo &
PID_OF_TAIL=$!
while read line; do
# process $line
kill -0 ${PID} >/dev/null || kill ${PID_OF_TAIL}
done <tmp.fifo
rm tmp.fifo
I should however mention that such a solution has several serious race-condition problems:
the PID of test.sh could be reused by another process;
if the test.sh process is still alive when you read the last line, you won't get another chance to detect its death afterwards, and your loop will hang.
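To soften the second problem, one possible tweak (a sketch relying on bash's read -t timeout, not part of the original answer) is to let the liveness check run even when no new line arrives:
while true; do
    if read -t 1 line; then
        # process $line
        :
    fi
    # runs every second even when no output arrives, so the loop cannot hang
    if ! kill -0 ${PID} 2>/dev/null; then
        kill ${PID_OF_TAIL}
        break
    fi
done <tmp.fifo
rm tmp.fifo
This does not address the PID-reuse issue, and a line that is only partly written when the timeout fires can be lost, so it is only a mitigation.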
