script does not stop when arguments are passed - bash

I have the following which works perfectly.
#!/bin/bash
killall java
#program USB
make iris install.1 mib510,/dev/ttyUSB0
#listen serial port and write to file
java net.tinyos.tools.PrintfClient -comm serial#/dev/ttyUSB1:iris > foo.txt &
sleep 2
#if "Erase done" is printed to file, stop
if tail -f foo.txt | grep -n "Erase done" -q; then echo "Write ok";fi
killall java
But when I change my script to take arguments as below (run as sh test.sh USB0 USB1 foo.txt), it no longer ends. Although it writes the file, the process never terminates.
#!/bin/bash
killall java
#program USB
make iris install.1 mib510,/dev/tty$1
#listen serial port and write to file
java net.tinyos.tools.PrintfClient -comm serial#/dev/tty$2:iris > $3 &
sleep 2
#if "Erase done" is printed to file, stop
if tail -f $3 | grep -n "Erase done" -q; then echo "Write ok";fi
killall java
Am I doing something wrong?

The culprit is that tail -f does not quit when grep quits: it only gets SIGPIPE (and dies) the next time it writes after grep has exited, so the pipeline can hang indefinitely. The problem is with:
if tail -f $3 | grep -n "Erase done" -q; then echo "Write ok";fi
You can replace it with the following:
tail -f "$3" | while read LOGLINE
do
    [[ "${LOGLINE}" == *"Erase done"* ]] && echo "Write ok" && pkill -P $$ tail
done
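If you prefer to keep grep's exit status driving an if, a coprocess variant also avoids the hang (a sketch of mine, assuming bash 4+; not part of the original answer): grep stops at the first match, and tail is then killed by PID so nothing lingers.
#!/bin/bash
# Sketch: run tail as a coprocess so we hold both its PID and its output fd
coproc TAILPROC { tail -f "$3"; }
if grep -q "Erase done" <&"${TAILPROC[0]}"; then
    echo "Write ok"
fi
kill "$TAILPROC_PID" 2>/dev/null   # reap the lingering tail explicitly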


watch dmesg, exit after first occurrence

I have a script which watches dmesg and kills a process after a specific log message:
#!/bin/bash
while sleep 1;
do
    # dmesg -w | grep --max-count=1 -q 'protocol'
    dmesg -w | sed '/protocol/Q'
    mkdir -p /home/user/dmesg/
    eval "dmesg -T > /home/user/dmesg/dmesg-`date +%d_%m_%Y-%H:%M`.log";
    eval "dmesg -c";
    pkill -x -9 programm
done
The problem is that sed as well as grep only trigger after two messages, so the script will not continue after only one message.
Is there anything I am missing?
You have a script that periodically executes dmesg. Instead, write a script that watches the output of dmesg.
dmesg -w | while IFS= read -r line; do
    case "$line" in
        *protocol*)
            echo "do something when line has protocol"
            ;;
    esac
done
Consider reading https://mywiki.wooledge.org/BashFAQ/001.
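Folding the question's own actions into that loop might look like this (a sketch; dmesg -w is assumed available, as in the question):
#!/bin/bash
mkdir -p /home/user/dmesg/
dmesg -w | while IFS= read -r line; do
    case "$line" in
        *protocol*)
            # save and clear the ring buffer, then kill the process,
            # exactly as the original loop body did
            dmesg -T > "/home/user/dmesg/dmesg-$(date +%d_%m_%Y-%H:%M).log"
            dmesg -c > /dev/null
            pkill -x -9 programm
            ;;
    esac
done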

bash get command that was used before pipe symbol

For a half-finished script that already uses the output of a program, I also need the name and the parameters of the program that was used to pipe into my script.
So I run it like this:
yay something | ./myscript
Now I need to store "yay something" in a variable.
There is a way to get previously run commands, or the current one, by using set -o history -o histexpand and echo !! or echo $0, but that doesn't include what I wrote right before the pipe.
Maybe you would suggest passing the name of the program and its parameters to my script as arguments and then running it there, but I don't want this (pass a command as an argument to bash script).
UPDATED SOLUTION (old below):
#!/bin/bash -i
#get processes
processes=$(> >(ps -f))
echo beginning:
echo "$processes"
#filter bin/bash -i
pac=$(echo "$processes" | sed '1,/bin\/bash -i/!d')
pac=$(echo "$pac" | tail -2 | head -1)
#kill
delete=$(echo $pac | grep -oP "(?<=$USER\s)\w+")
pac=$(echo "$pac" | grep -o -P '(?<=00:00:00).*(?=)')
echo "$delete"
kill -9 "$delete"
#print
echo " "
echo end:
echo "${pac:1}"
Note: when you use echo, man or cat, $pac will be empty.
OLD Text:
Thanks to Charles for his enormous effort and his link that finally led me to processes=$(> >(ps -f)).
Here is a working example. You can e.g. use it with vi test | ./testprocesses (or with nano, or with package helpers like yay or trizen, but it won't work with echo, man or cat):
#!/bin/bash -i
#get processes
processes=$(> >(ps -f))
echo beginning:
echo $processes
#filter
pac=$(echo $processes | grep -o -P '(?<=CM).*(?=testprocesses)' | grep -o -P '(?<=D).*(?=testprocesses)' | grep -o -P "(?<=00:00:00).*(?=$USER)")
#kill
delete=$(echo $pac | grep -oP "(?<=$USER\s)\w+")
pac=$(echo $pac | grep -o -P '(?<=00:00:00).*(?=)')
kill -9 $delete
#print
echo " "
echo end:
echo $pac
The kill part is necessary to kill the vi instance; otherwise it would still be running and eventually interfere with future executions of the script.
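For completeness, a different Linux-only approach (my own sketch, not part of the answer above): both ends of the pipe feeding the script's stdin share one pipe inode, which /proc exposes, so the upstream command line can be read directly from the writer's /proc entry.
#!/bin/bash
# Sketch (Linux /proc assumed): find the process writing to our stdin pipe
stdin_pipe=$(readlink "/proc/$$/fd/0")   # e.g. "pipe:[123456]"
[[ $stdin_pipe == pipe:* ]] || { echo "stdin is not a pipe" >&2; exit 1; }
for fd1 in /proc/[0-9]*/fd/1; do
    # a writer's stdout resolves to the same pipe inode as our stdin
    if [ "$(readlink "$fd1" 2>/dev/null)" = "$stdin_pipe" ]; then
        pid=${fd1#/proc/}; pid=${pid%/fd/1}
        echo "piped from: $(tr '\0' ' ' < "/proc/$pid/cmdline")"
    fi
done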

Why does bash script stop working

The script monitors incoming HTTP messages and forwards them to a monitoring application called zabbix. It works fine; however, after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I see the script is still running
The logfile gets updated properly (first command of the script)
The FIFO pipe seems to be working
The problem must be somewhere in the WHILE loop or the tail command.
I'm new at scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi
tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &
while true
do
    if read line <$pipe; then
        unset sn
        for ((c=1; c<=3; c++)) # c is no of max parameters x 2 + 1
        do
            URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            if [[ "$URL" == 'sn' ]]; then
                ((c++))
                sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            fi
        done
        if [[ "$sn" ]]; then
            hosttype="US2G_"
            host=$hosttype$sn
            zabbix_sender -z nuc -s $host -k serial -o $sn -vv
        fi
    fi
done
You're inputting from the fifo incorrectly. By writing:
while true; do read line < $pipe ....; done
you are closing and reopening the fifo on each iteration of the loop. After the first close, the producer to the pipe (the tail) gets a SIGPIPE on its next write and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every process inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly close stdin for each.
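Applied to the script above, the fixed structure might look like this (a sketch; the sn extraction is collapsed into a single awk call, and it is untested against real traffic):
#!/bin/bash
pipe=/tmp/fifopipe
[[ -p $pipe ]] || mkfifo "$pipe"
while true
do
    if read -r line; then
        # pull the value following an "sn" field out of the request line
        sn=$(awk -F'[ =&?]' '{for (i = 1; i < NF; i++) if ($i == "sn") print $(i + 1)}' <<< "$line")
        if [[ "$sn" ]]; then
            # </dev/null keeps zabbix_sender from reading the fifo itself
            zabbix_sender -z nuc -s "US2G_$sn" -k serial -o "$sn" -vv </dev/null
        fi
    fi
done < "$pipe"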

Pass command via variable in shell

I have the following code in my build script:
if [ -z "$1" ]; then
make -j10 $1 2>&1 | tee log.txt && notify-send -u critical -t 7 "BUILD DONE"
else
make -j10 $1 2>&1 | tee log.txt | grep -i --color "Error" && notify-send -u critical -t 7 "BUILD DONE"
fi
I tried to optimize it to:
local GREP=""
[[ ! -z "$1" ]] && GREP="| grep -i --color Error" && echo "Grepping for ERRORS"
make -j10 $1 2>&1 | tee log.txt "$GREP" && notify-send -u critical -t 7 "BUILD DONE"
But an error is thrown on the make line if $1 isn't empty. I just can't figure out how to pass a command with a grep pipe through a variable.
Like others have already pointed out, you cannot, in general, expect a command in a variable to work. This is a FAQ.
What you can do is execute commands conditionally. Like this, for example:
( make -j10 $1 2>&1 && notify-send -u critical -t 7 "BUILD DONE" ) |
    tee log.txt |
    if [ -n "$1" ]; then
        grep -i --color "Error"
    else
        cat
    fi
This has the additional unexpected benefit that the notify-send is actually conditioned on the exit code of make (which is probably what you intended) rather than tee (which I would expect to succeed unless you run out of disk or something).
(Or if you want the notification regardless of the success status, change && to just ; -- I think this probably makes more sense.)
This is one of those rare Useful Uses of cat (although I still feel the urge to try to get rid of it!)
You can't put pipes in command variables:
$ foo='| cat'
$ echo bar $foo
bar | cat
The linked article explains how to do such things very well.
As mentioned in l0b0's answer, the | will not be interpreted as you are hoping.
If you wanted to cut down on repetition, you could do something like this:
if [ $(make -j10 "$1" 2>&1 > log.txt) ]; then
    [ "$1" ] && grep -i --color "error" log.txt
    notify-send -u critical -t 7 "BUILD DONE"
fi
The inside of the test is common to both branches. Instead of using tee so that the output can be piped, you can just redirect the output to log.txt. If "$1" isn't empty, grep for any errors in log.txt. Either way, do the notify-send.
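A third option, under the same assumptions (filter only when $1 is non-empty), is to hide the conditional behind a function so the pipeline's shape never changes. This is a sketch of mine rather than one of the answers above, and filter_errors is a hypothetical helper name:
#!/bin/bash
# Sketch: grep when an argument was given, pass the stream through otherwise
filter_errors() {
    if [ -n "$1" ]; then
        grep -i --color "Error"
    else
        cat
    fi
}
make -j10 "$1" 2>&1 | tee log.txt | filter_errors "$1"
# notify unconditionally: grep exits 1 on a clean build and would
# otherwise suppress the notification
notify-send -u critical -t 7 "BUILD DONE"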

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed and copied to individual machines.
The files are forwarded as they are received by the master server, and files normally arrive in bursts. (Auth is by ssh keys.)
This script creates the sftp session and uses tail -f on a fifo pipe to feed it commands:
HOST=$1
pipe=/tmp/pipes/${HOST%%.*}
ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
    echo "FTP is Running on this Server"
    exit
else
    pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
    [[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi
if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi
tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >> $pipe # Sends command to host
echo "Started FTP to $HOST"
Update: I ended up changing the cleanup code to use ps aux to see if an sftp session is running, and subsequently whether the tail -f is still running, grepping for user@$HOST and the name of the pipe respectively. This is done when the script is called, and the script is called whenever I try to upload a file.
IE:
FILENAME=`basename $1`
function transfer {
    echo cd /apps/data >> $2 # For safety
    echo put $1 .$FILENAME >> $2
    echo rename .$FILENAME $FILENAME >> $2
    echo chmod 0666 $FILENAME >> $2
}
./ftp.sh host
[ -p $pipedir/host ] && transfer $1 $pipedir/host
Files received on the master server are caught by incron, which writes a put command and the received file's location to the fifo pipe, to be sent by sftp (a rename is also performed).
My question is: is this safe? Could it crash on ftp errors/events? I'm not really worried about login errors.
The goal is to reduce the number of ftp logins to a single session per minute (or longer) interval, and to allow files to be forwarded as they're received, with dynamic commands.
I'd prefer to use standard Ubuntu libraries, if possible.
EDIT: After testing and working through some issues, the server simply runs with:
[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) \
    | tee >( sed "/tail:/ q" > /dev/null && kill $(cat $pipe.pid) |& rm -f $pipe > /dev/null; ) \
    | sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock
It's rather simple, but it works nicely.
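For reference, the incron hook described above could look something like this (a sketch; the watched path and the wrapper script name are hypothetical):
# Hypothetical incrontab entry: on every completed write into the
# incoming directory, hand the file to the forwarding wrapper
# ($@ is the watched directory and $# the file name, in incron syntax)
/apps/incoming IN_CLOSE_WRITE /usr/local/bin/forward.sh $@/$#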
You might be interested in setting up a simpler (and more robust) synchronization infrastructure:
If a given host is not connected when a file arrives, it never receives it (if I understand your code correctly).
I would do something like
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
on the client machines, either periodically or triggered by an event. rsync intelligently synchronizes the files by their timestamp and size (-a contains -t).
The event would be some process termination, like the following.
The client does (configure private key usage in ~/.ssh/config for the host):
#!/bin/bash
while :; do
    ssh user@host /srv/bin/sleepListener 600
    rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep.
After receiving a new file, the server runs:
killall sleepListener
Note: every 10 minutes a full check is performed, so it doesn't matter if nodes go offline or come back online.
