Bash Logic Check - Repeating While Loop with Nested if/then Statements

I'm writing a script to monitor my SIP trunk and attempt to fix it. If it fails to fix the issue 6 times, it reboots the server. The script is called by cron via @reboot. I first had nested while loops, but that didn't work correctly, so I switched to a never-ending while loop with two nested if statements to perform the functions of the script.
I was wondering if somebody could take a quick look and see whether the way I am attacking it makes sense and is a logical approach.
Thank You,
Script as it stands:
#!/bin/bash
pwd="/srv/scripts"
count=0
echo "Script Started on $(date -u) Failure.Count=$count" >> "$pwd/failures.count"
start=start
while [ $start = "start" ]; do
sleep 420
var="$(asterisk -rx "pjsip show registrations" | grep -o Registered)"
if [ "$var" != "Registered" ]; then
amportal restart
count=$(( $count + 1 ))
echo "Trunk Failure on $(date -u) Failure.Count=$count" >> "$pwd/failures.count"
fi
if [ "$count" -gt 5 ]; then
echo "Server Reboot due to Failure.Count=$count on $(date -u)" >> "$pwd/reboot.notification"
reboot
fi
done

There is no need to use a variable in the while loop, or to capture the grep output into a variable.
#!/bin/bash
pwd="/srv/scripts"
count=0
echo "Script Started on $(date -u) Failure.Count=$count" >> "$pwd/failures.count"
# No need for a variable here
while true; do
    # Fix indentation
    sleep 420
    # Again, no need for a variable; use grep -q
    if ! asterisk -rx "pjsip show registrations" | grep -q Registered
    then
        amportal restart
        count=$((count + 1))
        echo "Trunk Failure on $(date -u) Failure.Count=$count" >> "$pwd/failures.count"
    fi
    if [ "$count" -gt 5 ]; then
        echo "Server Reboot due to Failure.Count=$count on $(date -u)" >> "$pwd/reboot.notification"
        reboot
    fi
done
I would perhaps also collect all the log notices in a single log file, and use a more traditional log format with a time stamp and the script's name before each message.
Should the counter reset to zero if you see a success? Having the server reboot because you disconnected the network cable at the wrong time seems like something you'd want to avoid.
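For what it's worth, a minimal sketch of that reset, with the asterisk check replaced by a hard-coded list of simulated outcomes (everything below is illustrative, not the real commands):

```shell
#!/usr/bin/env bash
# Simulated check results, standing in for:
#   asterisk -rx "pjsip show registrations" | grep -q Registered
count=0
for result in ok ok fail fail ok fail; do
    if [ "$result" = fail ]; then
        count=$((count + 1))   # failed check: extend the failure streak
    else
        count=0                # success clears the streak, so only
                               # consecutive failures approach the limit
    fi
done
echo "count=$count"            # prints count=1: the ok wiped out the two earlier fails
```

With the reset in place, only six failures in a row (roughly 42 minutes of continuous trunk downtime at a 420-second poll interval) would trigger the reboot.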


Breaking bash script function after certain time [duplicate]

This question already has answers here:
Execute a shell function with timeout
(11 answers)
Elegant solution to implement timeout for bash commands and functions
(3 answers)
Closed last year.
#!/usr/bin/env bash
exec 3>&1
fun_1(){
    urlcount=$(wc -l < list.txt)
    loopcount=0
    for url in $(cat list.txt); do
        ((loopcount++))
        echo -e "\nProcessing URL #${loopcount} (of ${urlcount}) [ ${url} ]\n"
        # the curl command below is the problem: I need to cap it at 5 minutes,
        # then continue the loop (because sometimes it takes a massive time to complete)
        curl -s "http://localhost:5555/?url=$url"
        # check for api status percentage
        until [[ $(curl -s "http://localhost:5555/view/status" | jq -r '.status') == "100" ]]; do
            echo -e "\n[-] Waiting for command $url\n"
            sleep 5 || break
        done
        curl -s "http://localhost:5555/results" | jq -r '.results[]' >> results.txt
    done
}
for domain in "$@"; do
    fun_1 "$domain" 2>&1 >&3 | tee -a "$WORKING_DIR/error_log.txt"
done
This script has multiple functions like fun_1 which execute one after another.
The problem is that some functions contain a for or while loop that can run for a very long time,
which exhausts my server (VPS) and is, of course, a waste of time.
The question is: can I limit this function to run for a certain time, like one or two hours at maximum?
You could check how many seconds the script has been running, using the $SECONDS variable, and see if it's greater than some defined deadline.
I have replicated your execution flow while swapping the real work for dummy functionality.
#!/usr/bin/env bash
exec 3>&1
fun_1() {
    domain="$1"
    deadline="$2"
    for i in {1..100}; do
        echo "iteration $i"
        # try until deadline is exceeded or curl succeeds
        curl_success=0
        until [ "$SECONDS" -gt "$deadline" ] || [ "$curl_success" -eq 1 ]; do
            echo "retrying..."
            sleep 5
        done
        # if deadline is exceeded, break out of the loop
        if [ "$SECONDS" -gt "$deadline" ]; then
            echo "Deadline exceeded"
            break
        fi
        # curl results if deadline not exceeded
        echo "curling results..."
    done
}
deadline=$((SECONDS + 10))
for domain in "$@"; do
    # if deadline is exceeded, break out of the loop
    if [ "$SECONDS" -gt "$deadline" ]; then
        echo "Deadline exceeded"
        break
    fi
    fun_1 "$domain" "$deadline" 2>&1 >&3 | tee -a ./error_log.txt
done
You could also try some tricks with timeout, for example if you can't modify your function to build a deadline check into it.
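As a sketch of that trick: GNU coreutils timeout can wrap a function if you export it and run it in a child bash (the dummy fun_1 and the 3-second limit below are placeholders for the real function and the one-to-two-hour budget):

```shell
#!/usr/bin/env bash
fun_1() {
    # stand-in for the real work: would run for ~100 seconds if not stopped
    for i in {1..100}; do
        sleep 1
    done
}
export -f fun_1    # make the function visible to the child bash

# timeout kills the child (and fun_1 with it) after 3 seconds;
# exit status 124 means the time limit was hit
timeout 3 bash -c 'fun_1'
status=$?
if [ "$status" -eq 124 ]; then
    echo "fun_1 timed out"
fi
```

For a two-hour cap you would use something like timeout 2h instead of timeout 3.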

How to detect a non-rolling log file and pattern match in a shell script which is using tail, while, read, and?

I am monitoring a log file, and if PATTERN doesn't appear in it within THRESHOLD seconds, the script should print "error"; otherwise, it should print "clear". The script is working fine, but only while the log is rolling.
I've tried reading about 'timeout' but it didn't work.
log_file=/tmp/app.log
threshold=120
tail -Fn0 ${log_file} | \
while read line; do
    echo "${line}" | awk '/PATTERN/ { system("touch pattern.tmp") }'
    # code to calculate how long ago pattern.tmp was touched; result assigned to diff
    if [ ${diff} -gt ${threshold} ]; then
        echo "Error"
    else
        echo "Clear"
    fi
done
It is working as expected only when there is 'any' line printed in the app.log.
If the application got hung for any reason and the log stopped rolling, there won't be any output by the script.
Is there a way to detect the 'no output' of tail and do some command at that time?
It looks like the problem you're having is that the timing calculations inside your while loop never get a chance to run when read is blocking on input. In that case, you can pipe the tail output into a while true loop, inside of which you can do if read -t $timeout:
log_file=/tmp/app.log
threshold=120
timeout=10
tail -Fn0 "$log_file" | while true; do
    if read -t $timeout line; then
        echo "${line}" | awk '/PATTERN/ { system("touch pattern.tmp") }'
    fi
    # code to calculate how long ago pattern.tmp was touched; result assigned to diff
    if [ ${diff} -gt ${threshold} ]; then
        echo "Error"
    else
        echo "Clear"
    fi
done
As Ed Morton pointed out, all caps variable names are not a good idea in bash scripts, so I used lowercase variable names.
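To see the read -t behaviour in isolation, here is a tiny sketch where sleep stands in for a quiet tail: the pipe stays open but silent, so read -t 1 gives up after a second instead of blocking forever:

```shell
#!/usr/bin/env bash
# sleep 2 keeps the write end of the pipe open without sending any data,
# so read -t 1 hits its timeout rather than seeing EOF
out=$(sleep 2 | { if read -t 1 line; then echo "input: $line"; else echo "timeout"; fi; })
echo "$out"    # prints timeout
```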
How about something simple like:
sleep "$threshold"
grep -q 'PATTERN' "$log_file" && { echo "Clear"; exit; }
echo "Error"
If that's not all you need, then edit your question to clarify your requirements. Don't use all upper case for non-exported shell variable names, btw - google it.
To build further on your idea, it might be beneficial to run the awk part in the background and use a continuous loop to do the checking.
#!/usr/bin/env bash
log_file="log.txt"
# threshold in seconds
threshold=10
# run the following process in the background
stdbuf -oL tail -Fn0 "$log_file" \
    | awk '/PATTERN/{ system("touch pattern.tmp") }' &
while true; do
    match=$(find . -type f -iname "pattern.tmp" -newermt "-${threshold} seconds")
    if [[ -z "${match}" ]]; then
        echo "Error"
    else
        echo "Clear"
    fi
    sleep 1   # avoid busy-looping the find
done
This looks to me like a watchdog timer. I've implemented something like this by forcing a background process to update my log, so I don't have to worry about read -t. Here's a working example:
#!/usr/bin/env bash
threshold=10
grain=2
errorstate=0
while sleep "$grain"; do
    date '+[%F %T] watchdog timer' >> log
done &
trap "kill -HUP $!" 0 HUP INT QUIT TRAP ABRT TERM
printf -v lastseen '%(%s)T'
tail -F log | while read line; do
    printf -v now '%(%s)T'
    if (( now - lastseen > threshold )); then
        echo "ERROR"
        errorstate=1
    else
        if (( errorstate )); then
            echo "Recovered, yay"
            errorstate=0
        fi
    fi
    if [[ $line =~ .*PATTERN.* ]]; then
        lastseen=$now
    fi
done
Run this in one window, wait $threshold seconds for it to trigger, then in another window echo PATTERN >> log to see the recovery.
While this can be made as granular as you like (I've set it to 2 seconds in the example), it does pollute your log file.
Oh, and note that printf '%(%s)T' format requires bash version 4 or above.

How to check whether a long-running task is running properly? / How to launch a function after a given time while a command is running?

I'm writing a bash script to download some files regularly. I'd like to be informed when a successful download has started.
But I couldn't get it right.
#!/bin/bash
URL="http://testurl"
FILENAME="/tmp/test"
function is_downloading() {
    sleep 11
    echo -e "$DOWNLOADING" # 0 wanted here with a failed download but always get empty
    if [[ $DOWNLOADING -eq 1 ]]; then
        echo "Send Message"
        # send_msg
    fi
}
while [[ 0 ]]; do
    is_downloading &
    DOWNLOADING=1
    curl --connect-timeout 10 --speed-time 10 --speed-limit 1 --location -o "$FILENAME" "$URL"
    DOWNLOADING=0
    echo -e "$DOWNLOADING"
    sleep 3600
done
is_downloading is running in another process; the best it can see is a copy of your variables at the time it started. Variables are not shared; bash does not support multi-threading (yet).
So you need to arrange some form of Inter-Process Communication (IPC). There are many methods available; I favour a named pipe (also known as a FIFO). Something like this:
function is_downloading() {
    thepipe="$1"
    while :
    do
        read -r DOWNLOADING < "$thepipe"
        echo "$DOWNLOADING"
        if [[ $DOWNLOADING -eq 1 ]]; then
            echo "Send Message"
            # send_msg
        fi
    done
}
pipename="/tmp/$0$$"
mkfifo "$pipename"
is_downloading "$pipename" &
trap 'kill %1; rm "$pipename"' INT TERM EXIT
while :
do
    DOWNLOADING=1
    echo "$DOWNLOADING" > "$pipename"
    curl --connect-timeout 10 --speed-time 10 --speed-limit 1 --location -o "$FILENAME" "$URL"
    DOWNLOADING=0
    echo "$DOWNLOADING" > "$pipename"
    sleep 3600
done
Modifications: the function call has been taken out of the loop, and the tidy-up code has been put into a trap statement.
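Here is a stripped-down, self-contained demo of the same mechanism (the temp-file names are arbitrary): the background reader blocks on the FIFO until the main shell writes to it, which is exactly how is_downloading above wakes up:

```shell
#!/usr/bin/env bash
pipename=$(mktemp -u)    # unique, not-yet-existing name for the FIFO
mkfifo "$pipename"

# background reader: read blocks until a line arrives on the pipe
( read -r state < "$pipename"; echo "reader saw DOWNLOADING=$state" > "$pipename.out" ) &

echo 1 > "$pipename"     # writer side: this open/write unblocks the reader
wait                     # let the background job finish
result=$(cat "$pipename.out")
rm -f "$pipename" "$pipename.out"
echo "$result"           # prints: reader saw DOWNLOADING=1
```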

How to do while not grep line from file in folder exists in unix shell script

Given the following example:
#!/bin/bash
# var timeout = 5min.
while ( ! grep "Start" /var/log/azure/Microsoft.Azure.Extensions.DockerExtension/{ver}/extension.log); do
sleep 5
done
echo "hello world"
how would one change the script so that it loops while not finding the line "Start" in the extension.log file, with a timeout option?
Additionally, it should take into account that {ver} is not static; it is a semver version such as "2.3.4", and the script should use the highest version folder that exists.
Tested code:
#! /bin/bash
timeout=300 # seconds
now=$(date +%s)
let deadline=now+timeout
log=/tmp/log.txt
while [ ! -f "$log" ] || ! grep "Start" "$log"; do
    if [ "$(date +%s)" -gt "$deadline" ]; then
        echo "Timeout"
        break
    fi
    echo "Sleeping"
    sleep 3
done
echo "Done waiting"
Just change the log variable and it should work.
You're fairly close, apart from Bash syntax issues - the (...) wrapper isn't needed and just runs the condition in a subshell.
This should work:
logfile='/var/log/azure/Microsoft.Azure.Extensions.DockerExtension/{ver}/extension.log'
while ! grep -q "Start" "$logfile"; do
    sleep 5
done
echo "hello world"
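Neither answer covers the {ver} part of the question; one common approach is GNU sort -V (version sort) over the directory names. A sketch using a throwaway directory in place of the real extension path:

```shell
#!/usr/bin/env bash
base=$(mktemp -d)    # stands in for /var/log/azure/Microsoft.Azure.Extensions.DockerExtension
mkdir -p "$base"/2.3.4 "$base"/2.3.10 "$base"/2.10.1

# sort -V knows that 2.3.10 > 2.3.4 (a plain lexical sort would not);
# tail -n 1 keeps the highest version
ver=$(ls "$base" | sort -V | tail -n 1)
logfile="$base/$ver/extension.log"
echo "$ver"          # prints 2.10.1
rm -rf "$base"
```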

Bash Parallel processes in Netcat -c

I am using netcat in a bash script as a pseudo-server in order to run additional bash scripts from the inputs entered. It's been something of an enjoyable side project; however, I seem to have gotten stuck.
Essentially, the script and code run perfectly, but output is not displayed until after the server finishes the process; as this can be a 40-hour process, it's not desirable to have the client stuck on a loading screen with no prompt for the entire time.
Simply put, I would like to load a page based off the content up to a point, ignoring the output of everything that follows. The code I have thus far is as follows:
#!/bin/bash
while [ $? -eq 0 ]; do
    nc -vlp 8080 -c'(
        r=read
        $r a b c
        z=$r
        while [ ${#z} -gt 2 ]; do
            $r z
        done
        f=`echo $b|sed "s/[^a-z0-9_.-]//gi"`
        o="HTTP/1.0 200 OK\r\n"
        c="Content"
        if [ -z "$f" ]; then
            f="index.html"
            echo "$o$c-Type: `file -ib $f`\n$c-Length: `stat -c%s $f`"
            echo
            cat $f
        elif [ "$f" == Encrypt ]; then
            echo $o
            echo
            echo $(bash ~/webSupport.sh currentEncrypt "$b")
            bash ~/webSupport.sh pullVars "$b" &
        else
            echo -e "HTTP/1.0 404 Not Found\n\n404\n"
        fi
    )'
done
I've searched around, and cannot find any way to bypass it, any help would be appreciated.
It should probably be enough to redirect the output streams (to /dev/null, or to a file if you need to keep them):
bash ~/webSupport.sh pullVars "$b" >/dev/null 2>&1 &
or close them:
bash ~/webSupport.sh pullVars "$b" >&- 2>&- &
