Bash init script skips reading commands in if loop - bash

I am trying to create an init script for a program in bash. (rhel6)
It checks for the processes first. If processes are found, it echoes that the program is already online; if not, it moves on to start the program as a certain user via a launch script. After that it should tail the program's log file and check for a particular string. If the string is found, it should kill the tail and echo that the program is online.
Here's the start segment.
prog=someProg
user=someUser
threadCount=$(ps -ef | grep $prog |grep -v 'grep' |awk '{ print $2 }'| wc -l)

startb() {
    if [ "$threadCount" -eq 2 ]; then
        echo "$prog already online."
    else
        echo "Bringing $prog online."
        su $user -c "/path/to/start/script.sh"
        tail -f /path/to/$prog/log/file |
        while IFS=$'\n' read line
        do
            if [[ $line == *started\ up\ and\ registered\ in* ]]; then
                pkill tail
                echo "$prog now online."
            fi
        done
    fi
}
My problems:
The variable $prog doesn't get picked up in $threadCount no matter how I try (with single and double quotes).
The logic for tailing the log file works randomly. Sometimes it works perfectly: it tails and waits until the string is found before echoing that the program is online. At other times it just starts the script and then echoes that the program is online, without tailing or waiting.
It's unpredictable. I implemented the same logic in the stop segment too, to monitor the log and then echo, but even that behaves the same way as start. Just random.
I'm sure this might look dumb and broken; it was put together by picking up pieces here and there with my beginner bash skills.
Thanks in advance for suggestions and help.

I can't reproduce the error you are experiencing with the "grep $prog"...sorry.
But for the other part.
I will assume that the script starting your program (the line with su) starts something in the background and then ends by itself. If not, your example will wait indefinitely.
It may be a personal preference, but when I'm using something like tail to check lines, I use a named pipe (mkfifo).
That would give something like:
# Create the named pipe and get the tail going in the background
mkfifo some_fifo
tail -f /path/to/$prog/log/file > some_fifo &
# Getting the tail PID
tailPID=$!
while read line; do # You don't need to modify/use IFS here.
    if [[ $line == *started\ up\ and\ registered\ in* ]]; then
        kill -15 $tailPID # since you know the PID you won't kill another tail
        echo "$prog now online."
        break # don't like the possibility, even remote, of an infinite loop :)
    fi
done < some_fifo # reading from the named pipe
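For completeness, here is a rough sketch of how that could slot into the startb() function from the question (the paths and log message come from the question; using mktemp -u for the fifo path and counting the processes inside the function are my assumptions):

startb() {
    # Count the processes at call time, not at script load time
    local threadCount
    threadCount=$(ps -ef | grep "$prog" | grep -v 'grep' | awk '{ print $2 }' | wc -l)

    if [ "$threadCount" -eq 2 ]; then
        echo "$prog already online."
        return 0
    fi

    echo "Bringing $prog online."
    su "$user" -c "/path/to/start/script.sh"

    # Watch the log through a named pipe until the startup message appears
    fifo=$(mktemp -u)                   # hypothetical throwaway path for the fifo
    mkfifo "$fifo"
    tail -f "/path/to/$prog/log/file" > "$fifo" &
    tailPID=$!

    while read line; do
        if [[ $line == *started\ up\ and\ registered\ in* ]]; then
            kill -15 "$tailPID"         # kill only this tail, not every tail on the box
            echo "$prog now online."
            break
        fi
    done < "$fifo"

    rm -f "$fifo"
}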
Hope it can help you

Related

Need help in bash script for ffmpeg encoding [duplicate]

I'm writing a simple Bash script that simplifies calling HandBrakeCLI to render videos.
I also implemented a simple queue option: the queue file just stores the command line it has to call to start a render.
So I wrote a while loop to read one line at a time, eval $line, and repeat until the file ends.
if [[ ${QUEUE_MODE} = 'RUN' ]]; then
    QUEUE_LEN=`cat ${CONFIG_DIR}/queue | wc -l`
    QUEUE_POS='1'
    printf "Queue length:\t ${QUEUE_LEN}\n"
    while IFS= read line; do
        echo "--Running render ${QUEUE_POS} on ${QUEUE_LEN}..."
        echo "++" && echo "$line" && echo "++"
        eval "${line}"
        tail -n +2 "${CONFIG_DIR}/queue" > "${CONFIG_DIR}/queue.tmp" && mv "${CONFIG_DIR}/queue.tmp" "${CONFIG_DIR}/queue"
        echo "--Render ended"
        QUEUE_POS=`expr $QUEUE_POS + 1`
    done < "${CONFIG_DIR}/queue"
    exit 0
The problem is that the loop works fine with any harmless command (an empty line, echo "test"...), but as soon as a proper render is loaded, it is launched and finishes correctly, yet the loop also exits.
I am a newbie, so I tried some minor changes to see what effect they had, but nothing changed the result.
I commented out the command tail -n +2 "${CONFIG_DIR}/queue" > "${CONFIG_DIR}/queue.tmp" && mv "${CONFIG_DIR}/queue.tmp" "${CONFIG_DIR}/queue", added/removed IFS= in the while loop, and removed the -r in the read command.
Sorry if the question is trivial, but I'm really missing some major part of how this works, so I have no idea even how to search for the solution.
I'll put a sample of a general render in the queue file.
HandBrakeCLI -i "/home/andrea/Videos/done/Rap dottor male e mini me.mp4" -o "/hdd/Render/Output/Rap dottor male e mini me.mkv" -e x265 -q 23 --encoder-preset faster --all-audio -E av_aac -6 dpl2 --all-subtitles -x pmode:pools='16' --verbose=0 2>/dev/null
HandBrakeCLI reads from standard input, which steals the rest of the queue file before read line can see it. My favorite solution to this is to pass the file over something other than standard input, like file descriptor #3:
...
while IFS= read line <&3; do # The <&3 makes it read from FD #3
...
done 3< "${CONFIG_DIR}/queue" # The 3< redirects the file into FD #3
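Applied to the loop from the question, that would look roughly like this (a sketch reusing the variable names from the question; everything besides the redirections is unchanged):

while IFS= read line <&3; do
    echo "--Running render ${QUEUE_POS} on ${QUEUE_LEN}..."
    echo "++" && echo "$line" && echo "++"
    eval "${line}"   # HandBrakeCLI can now read stdin without eating the queue
    tail -n +2 "${CONFIG_DIR}/queue" > "${CONFIG_DIR}/queue.tmp" && mv "${CONFIG_DIR}/queue.tmp" "${CONFIG_DIR}/queue"
    echo "--Render ended"
    QUEUE_POS=`expr $QUEUE_POS + 1`
done 3< "${CONFIG_DIR}/queue"   # the queue now comes in on FD #3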
Another way to avoid the problem is to redirect input to the HandBrakeCLI command:
...
eval "${line}" </dev/null
...
There's some more info about this in BashFAQ #89: I'm reading a file line by line and running ssh or ffmpeg, only the first line gets processed!
Also, I'm not sure I trust the way you're using tail to remove lines from the queue file as they're executed. I'm not sure it's really wrong, it just looks fragile to me. Also, I'd recommend using lower- or mixed-case variable names, since there are a bunch of all-caps names with special meanings, and re-using one of them by mistake can have weird consequences. Finally, I'd recommend running your script through shellcheck.net, as it'll make some other good recommendations.
[BTW, this question is a duplicate of "Bash script do loop exiting early", but that doesn't have any upvoted or accepted answers.]

How do I continuously print a file(to terminal) being written to from another script?

Say I have a script that reads from a file that is being written to. In other words, I have Script A running, which prints its progress as a percentage, e.g. 40%. Script A does this until 100% is reached. The file being written to is File_A. Script B looks like the following:
source Script_A &
pid=$!
trap "kill $pid 2> /dev/null" EXIT
percent=$(cat File_A | tail -n 1)
while kill -0 $pid 2> /dev/null ; do printf "\r%s" "$percent" ; sleep 1 ; done
The result I get from Script B is a line that prints the same percentage; the percentage doesn't change. For example, the terminal looks like so:
40%
It stays at 40% (an arbitrary number I picked for the sake of the example), even though Script A is still running and printing to File_A; the percentage in the file is updating, but Script B won't print these new lines.
I am not sure how to get Script B to print the updated lines in File_A. I'm assuming it has something to do with different shell sessions, but I don't know exactly what question to ask in regard to that. So how can I solve this?
Edit: Here is the code that works.
source Script_A &
pid=$!
trap "kill $pid 2> /dev/null" EXIT
while kill -0 $pid 2> /dev/null ; do percent=$(tail -n 1 File_A) ;
printf "\r%s" "$percent" ; sleep 1 ; done
trap - EXIT
The while loop fits on one line; if you have to break the line, use \ like so:
while kill -0 $pid 2> /dev/null ; do percent=$(tail -n 1 File_A)\
; printf "\r%s" "$percent" ; sleep 1 ; done
The primary problem here is that you set percent once at the beginning, and never update it. You'd need to put percent=$(cat File_A | tail -n 1) inside the loop to get it to update each time.
But there's a second problem, which won't prevent it from working, but makes it inefficient (especially if it runs that command over and over and over). The construct cat somefile | somecommand is what's sometimes called a "useless use of cat", because cat isn't doing anything useful here -- the next program in the pipe can read from the file perfectly well by itself, it doesn't need cat to preprocess it. In this case, using cat here isn't just useless, it makes tail less efficient. Compare these two commands:
tail -n 1 File_A # tail reads directly from File_A
cat File_A | tail -n 1 # tail reads from a pipe from cat
In the first version, since tail has direct access to the file, it can basically start reading the file from the end until it has the last line, print that, and be done. In the version with cat, it cannot do this, since cat sends the file in order. So in the second version, tail (and cat) must read through the entire file, just to ignore all but the last line.
So this would be a much better way to do it:
n=1
while [ $n -le 100 ] ; do
    percent=$(tail -n 1 File_A)
    printf "\r%s" "$percent"
    sleep 1
    ((n++))
done
But there's another method that might work better. Rather than re-checking the file constantly, you could use tail -f to "follow" the file and get updates when it changes. However, there's a difficulty here that it doesn't know when to stop reading (i.e. when the output has finished), so you have to figure out how to detect that and exit the loop. Something like this might work:
tail -n 1 -f File_A | while read percent; do
    printf "\r%s" "$percent"
    [[ "$percent" = "100" ]] && break
done
(Note that if the actual final output is something like "100%" instead of just "100", you'd have to change that comparison appropriately.)
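For instance, a glob pattern handles a trailing percent sign (a minimal variation on the loop above; the exact format of Script A's output is an assumption):

tail -n 1 -f File_A | while read percent; do
    printf "\r%s" "$percent"
    [[ "$percent" == 100* ]] && break   # matches "100", "100%", "100.0%", etc.
done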

Monitor Change in File That's Not Being Appended To

I would like to write a shell script that monitors changes to a file. That is, another program I've written writes either a 1 or a 0 to a file depending on its state. I would like to create a script that runs indefinitely and monitors the state of this file. So far, I've found a close solution online using tail -f. However, that command expects the file to be continually appended to. When I run the following piece of code, I get tail: test.log: file truncated. Also, when I test this by running echo 1 > test.log and echo 0 > test.log back and forth in another terminal, it seems that periodically it will completely miss a change in the file. This is probably related to tail expecting to follow the file as it's being appended to, rather than having a single character changed (thus thinking the file has been truncated, I suppose).
Here's the code I've tried:
#!/bin/sh
# Monitor changes in file
tail -fn0 test.log | \
while read line; do
    if [ $line = 1 ]; then
        echo "TRUE!!!"
    elif [ $line = 0 ]; then
        echo "FALSE!!!"
    fi
done
The solution is probably incredibly easy, but I just can't manage to find it.
If you want to capture the state of the file in regular intervals, you could do something like this:
INTERVAL=2
while sleep "$INTERVAL"; do
    val="$(cat "test.log")"
    case "$val" in
        ...
    esac
done
Alternately, if you only want to act on the contents of the file whenever the file changes, you need to work with the "modification time". For example,
mtime () {
    ls "$1" -l --time-style=+%s | cut -d' ' -f6
}

FILE="test.log"
LAST_TIME=$(mtime "$FILE")
touch "$FILE" # Force first update
while sleep 2; do
    if [[ $(mtime "$FILE") -gt $LAST_TIME ]]; then
        LAST_TIME=$(mtime "$FILE")
        val="$(cat "$FILE")"
        ...
    fi
done
If 2 seconds is too big a delay for your purposes, use a smaller number. Alternately, use true instead of sleep for virtually zero delay.
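Putting the pieces together with the 1/0 states from the question, a sketch might look like this (the mtime helper is the one above; the TRUE/FALSE echoes are just the question's placeholders):

#!/bin/sh
# Poll test.log and react only when its modification time moves forward
mtime () {
    ls "$1" -l --time-style=+%s | cut -d' ' -f6
}

FILE="test.log"
LAST_TIME=$(mtime "$FILE")
touch "$FILE" # force the first read

while sleep 2; do
    if [ "$(mtime "$FILE")" -gt "$LAST_TIME" ]; then
        LAST_TIME=$(mtime "$FILE")
        val="$(cat "$FILE")"
        case "$val" in
            1) echo "TRUE!!!" ;;
            0) echo "FALSE!!!" ;;
        esac
    fi
done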

Shell Script Help--Accept Input and Run in BackGround?

I have a shell script in which, at the top, I ask the user to input how many minutes they want the script to run for:
#!/usr/bin/ksh
echo "How long do you want the script to run for in minutes?:\c"
read scriptduration

loopcnt=0
interval=1
date2=$(date +%H:%M%S)
(( intervalsec = $interval * 1 ))
totalmin=${1:-$scriptduration}
(( loopmax = ${totalmin} * 60 ))
ofile=/home2/s499929/test.log

echo "$date2 total runtime is $totalmin minutes at 2 sec intervals"

while(( $loopmax > $loopcnt ))
do
    date1=$(date +%H:%M:%S)
    pid=`/usr/local/bin/lsof | grep 16752 | grep LISTEN |awk '{print $2}'` > /dev/null 2>&1
    count=$(netstat -an|grep 16752|grep ESTABLISHED|wc -l| sed "s/ //g")
    process=$(ps -ef | grep $pid | wc -l | sed "s/ //g")
    port=$(netstat -an | grep 16752 | grep LISTEN | wc -l| sed "s/ //g")
    echo "$date1 activeTCPcount:$count activePID:$pid activePIDcount=$process listen=$port" >> ${ofile}
    sleep $intervalsec
    (( loopcnt = loopcnt + 1 ))
done
It works great if I kick it off and input the values manually. But if I want to run this for 3 hours I need to kick off the script to run in the background.
I have tried just running ./scriptname & and I get this:
$ How long do you want the test to run for in minutes:360
ksh: 360: not found.
[2] + Stopped (SIGTTIN) ./test.sh &
And the script dies. Is this possible? Any suggestions on how I can accept this one input and then run in the background? Thanks!!!
You could do something like this:
test.sh arg1 arg2 &
Just refer to arg1 and arg2 as $1 and $2, respectively, in the bash script. ($0 is the name of the script)
So,
test.sh 360 &
will pass 360 as the first argument to the bash or ksh script which can be referred to as $1 in the script.
So the first few lines of your script would now be:
#!/usr/bin/ksh
scriptduration=$1
loopcnt=0
...
...
With bash you can start the script in the foreground and, after you have finished with the user input, suspend it by hitting Ctrl-Z.
Then type
$ bg %
and the script will continue to run in the background.
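As a rough sequence (a sketch; test.sh is the script from the question, and bg % resumes the job you just suspended):

./test.sh   # start the script in the foreground and answer the prompt
            # ...then press Ctrl-Z to suspend it
bg %        # let the suspended job continue in the background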
Why You're Getting What You're Getting
When you run the script in the background, it can't take any user input. In fact, the program will freeze, if it expects user input, until it's put back in the foreground. However, output has to go somewhere, so the output goes to the screen even though the program is running in the background. Thus, you see the prompt.
The prompt you see your program displaying is meaningless because you can't type anything at it. Instead, you type in 360 and your shell interprets it as a command you want to run, because you're not feeding it to the program, you're entering it at the shell's command prompt.
You want your program to be in the foreground for the input, but run in the background. You can't do both at once.
Solutions To Your Dilemma
You can have two programs. The first takes the input, and the second runs the actual program in the background.
Something like this:
#! /bin/ksh
read time?"How long in seconds do you want to run the job? "
my_actual_job.ksh $time &
In fact, you could even have a mechanism to run the job in the background if the time is over a certain limit, but otherwise run the job in the foreground.
#! /bin/ksh

readonly MAX_FOREGROUND_TIME=30
read time?"How long in seconds do you want to run the job? "

if [ $time -gt $MAX_FOREGROUND_TIME ]
then
    my_actual_job.ksh $time &
else
    my_actual_job.ksh $time
fi
Also remember that if your job is in the background, its output still goes to the terminal. You can redirect the output elsewhere, but if you don't, it'll print to the screen at inopportune times. For example, you could be in vi editing a file, and suddenly have the output appear smack in the middle of your vi session.
I believe there's an easy way to tell if your job is in the background, but I can't remember it offhand. You could find your current process ID by looking at $$, then looking at the output of jobs -p and seeing if that process ID is in the list. However, I'm sure someone will come up with an easy way to tell.
It is also possible that a program could throw itself into the background via the bg $$ command.
Some Hints
If you're running Kornshell, you might consider taking advantage of many of Kornshell's special features:
print: The print command is more flexible and robust than echo. Take a look at the manpage for Kornshell and see all of its features.
read: You notice that you can use the read var?"prompt" form of the read command.
readonly: Use readonly to declare constants. That way, you don't accidentally change the value of that variable later. Besides, it's good programming technique.
typeset: Take a look at typeset in the ksh manpage. The typeset command can help you declare particular variables as integer or floating point, and can automatically do things like zero fill, right or left justify, etc. (there's a short sketch below this list).
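A few of those in one place (a small ksh sketch; the variable names are made up for illustration):

#!/usr/bin/ksh
readonly MAX_MINUTES=360              # a constant that can't be reassigned later
typeset -i minutes=0                  # forces an integer value
typeset -Z3 padded=7                  # zero-filled to width 3, i.e. "007"

read minutes?"How many minutes should the script run? "   # prompt built into read
print "Running for $minutes minutes (padded: $padded, max: $MAX_MINUTES)"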
Some things not specific to Kornshell:
The awk and sed commands can also do what grep does, so there's no reason to filter something through grep and then through awk or sed.
You can give grep several patterns at once with the -e parameter: grep -e foo -e bar matches lines containing either foo or bar. (Note that this is an OR, whereas grep foo | grep bar keeps only lines containing both.)
Hope this helps.
I've tested this with ksh and it worked. The trick is to let the script call itself with the time to wait as parameter:
if [ -z "$1" ]; then
    echo "How long do you want the test to run for in minutes:\c"
    read scriptduration
    echo "running task in background"
    $0 $scriptduration &
    exit 0
else
    scriptduration=$1
fi

loopcnt=0
interval=1
# ... and so on
So are you using bash or ksh? In bash, you can do this:
{ echo 360 | ./test.sh ; } &
It could work for ksh also.
