How can I change the sleep time in a running bash script?

Appendix: the code below runs fine, as Matthias pointed out; the error happened elsewhere. In short: if you want the sleep duration to change during script runtime, e.g. in response to a certain event, you can use the code below.
Original description:
My bash script ought to check a certain status - e.g. the existence of a file - every 5 minutes. If the status is as expected, everything is fine; otherwise, the checks ought to happen at shorter intervals until everything is back to normal.
Example:
NORMAL_SLEEP=300
SHORT_SLEEP=30
CUR_SLEEP=''
while :
do
    if [ -f /tmp/myfile ]; then
        logger "myfile still exists. Next check in 5min"
        CUR_SLEEP=$NORMAL_SLEEP
    else
        logger "myfile disappeared. Check again in 30s!"
        CUR_SLEEP=$SHORT_SLEEP
        echo "/tmp/myfile was removed. Check this!" \
            | mailx -s "alert: myfile missed" johndoe#somewhere.com
    fi
    trap 'kill $SLEEP_PID; exit 1' 15
    sleep $CUR_SLEEP &
    SLEEP_PID=$!
    wait
done
Problem: the sleep time does not adapt...
I had a look at Bash Script: While-Loop Subshell Dilemma, but unfortunately I can't see how it could solve my problem.

The code ran fine on my machine. Here's what I ran (changed the time values just to test):
./script.sh --> "myfile disappeared. Check again in 30s!" printed at 2 sec intervals
touch /tmp/myfile
./script.sh --> "myfile still exists. Next check in 5min" printed at 5 sec intervals
The file, script.sh:
#!/bin/bash
NORMAL_SLEEP=5
SHORT_SLEEP=2
CUR_SLEEP=''
while :
do
    if [ -f /tmp/myfile ]; then
        echo "myfile still exists. Next check in 5min"
        CUR_SLEEP=$NORMAL_SLEEP
    else
        echo "myfile disappeared. Check again in 30s!"
        CUR_SLEEP=$SHORT_SLEEP
        echo "/tmp/myfile was removed. Check this!" \
            | mailx -s "alert: myfile missed" johndoe#somewhere.com
    fi
    trap 'kill $SLEEP_PID; exit 1' 15
    sleep $CUR_SLEEP &
    SLEEP_PID=$!
    wait
done
And I probably sent some mail to johndoe#somewhere.com but that's okay.
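Worth noting why the "sleep $CUR_SLEEP & wait" pattern matters at all: bash postpones running a trap handler until the current foreground command finishes, so a plain "sleep 300" would delay the handler by up to five minutes, whereas the wait builtin returns immediately when a trapped signal arrives. A minimal sketch of using that to force an early re-check from the outside (the choice of SIGUSR1 here is my own, not part of the question):

#!/bin/bash
# A trapped USR1 kills the background sleep, which makes wait return
# at once, so the loop re-checks immediately instead of after 5 minutes.
interruptible_sleep() {
    sleep "$1" &
    SLEEP_PID=$!
    wait "$SLEEP_PID"
}
trap 'kill "$SLEEP_PID" 2>/dev/null' USR1

while :; do
    echo "checking status..."
    interruptible_sleep 300    # kill -USR1 <script pid> forces a re-check now
done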

Related

Why is the second bash script not printing its iteration?

I have two bash scripts:
a.sh:
echo "running"
doit=true
if [ $doit = true ];then
./b.sh &
fi
some-long-operation-binary
echo "done"
b.sh:
for i in {0..50}; do
    echo "counting"
    sleep 1
done
I get this output:
> ./a.sh
running
counting
Why do I only see the first "counting" from b.sh and then nothing anymore? (Currently some-long-operation-binary is just sleep 5 for this example.) I first thought that, because b.sh is put in the background, its STDOUT is lost, but then why do I see the first output? More importantly: is b.sh still running and doing its thing (its iteration)?
For context:
b.sh is going to poll a service provided by some-long-operation-binary, which is only available after some time the latter has run, and when ready, would write its content to a file.
Apologies if this is just rubbish, it's a bit late...
b.sh uses a Bash-specific expansion ({0..50}), so you should add #!/bin/bash or the like to make sure Bash actually runs the script. Otherwise /bin/sh may treat {0..50} as a literal word, and there will indeed be only one loop iteration.
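You can reproduce the single-iteration behavior directly. Assuming /bin/sh is dash (as on Debian/Ubuntu), which performs no brace expansion, the loop runs exactly once with the literal string:

$ dash -c 'for i in {0..50}; do echo "counting $i"; done'
counting {0..50}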
When you start a background process, it is usually a good practice to kill it and wait for it, no matter which way the script exits.
#!/bin/bash
set -e -o pipefail
declare -i show_counter=1

counter() {
    local -i i
    for ((i = 0;; ++i)); do
        echo "counting $((i))"
        sleep 1
    done
}

echo starting
if ((show_counter)); then
    counter &
    declare -i counter_pid="${!}"
    # note: wait -n with an explicit pid argument requires bash 5.1+
    trap 'kill "${counter_pid}"
          wait -n "${counter_pid}" || :
          echo terminating' EXIT
fi
sleep 10 # long-running process

applescript blocks shell script cmd when writing to pipe

The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep; then
    echo "killing blocking cmd..."
    kill -KILL $cpid
    # non-zero status to inform launch script of problem...
    exit 1
fi
But if the sleep command on the second line is swapped for the commented-out echo command on the third line (adjusting the grep in the if statement accordingly), the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only block when nothing is reading from the pipe...
OK, the following will do the trick. It basically kills the job using its job id; since there is only one, it's the current job, %%.
I was lucky that I came across this answer or it would have driven me crazy :)
#!/bin/sh
echo $1 > $2 &
sleep 1
# Following is necessary. Seems to need it or
# job will not complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ]; then
    echo "Taking too long. Killed..."
    exit 1
fi
exit 0
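On systems with GNU coreutils (an assumption; timeout is not POSIX), a terser variant is to let timeout enforce the deadline instead of managing the job by hand:

#!/bin/sh
# timeout kills the write and exits with status 124 if no reader
# opens the FIFO within one second.
if ! timeout 1 sh -c 'echo "$1" > "$2"' _ "$1" "$2"; then
    echo "Taking too long. Killed..."
    exit 1
fi
exit 0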

get the number of seconds left for the sleep command to end in a shell script

I built a shell script that sleeps for a specified number of minutes and shows a notification when it is done.
TIME=$(zenity --scale --title="Next Session in (?) minutes")
sleep $TIME'm'
BEEP="/usr/share/sounds/freedesktop/stereo/complete.oga"
paplay $BEEP
notify-send "Next Session" "Press <Ctrl><Shift><s> to run the script again"
I prevented multiple instances of the program from executing, using a file-based approach at the beginning of the code. When a user wants to run the script while another instance is running, it shows a notification that the script is already running.
LOCKFILE=/tmp/lock.txt
if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
    notify-send "Already Running" $SECONDS
    exit
fi
trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
echo $$ > ${LOCKFILE}
and finally remove the temporary file at the end of the script
rm -f ${LOCKFILE}
Now I want to add text to the notification that tells how many seconds are left before the sleep command in my shell script ends (changing the "already running" notification as follows):
notify-send "Already Running" $SECONDS
Implementing the sleep as my own controlled while loop would affect overall system performance; I think the sleep command is the better option, as it optimizes the process by putting it into a waiting state in the process queue.
Is there any way I can go around the problem?
Store the time when the script is supposed to end in the lock file.
if [ -e "$LOCKFILE" ]; then
read pid endtime < "$LOCKFILE"
if kill -0 "$pid"; then
notify-send "Already running" $(($(date +%s) - $endtime))
exit
fi
fi
trap "rm -f ${LOCKFILE}" EXIT # Use cascaded trap
trap 'exit 127' INT TERM
echo $$ $(($(date +%s) + (60 * $TIME))) >"$LOCKFILE"
There is a race condition here: if two scripts are started at almost the same time, the first could be inside the if but not yet at the echo when the second starts. If you really need to prevent that, use a lock directory instead of a file: directory creation is atomic, and either succeeds or fails at a single point in time. (You will then need to clean out a stale directory in the scenario where the directory exists but no live process owns it, maybe after a careless OOM killer or something.)
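A minimal sketch of that lock-directory variant (the directory name is illustrative):

LOCKDIR=/tmp/session_timer.lock.d
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    # mkdir is atomic: exactly one concurrent invocation can succeed
    notify-send "Already Running"
    exit
fi
trap 'rmdir "$LOCKDIR"' EXIT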
I think Triplee has a fine answer. Another way to handle it, which can be applied to any running process that may block, is to briefly start the process in the background to grab and save its assigned pid $! to a file, then fg the process back to the foreground.
From there you can do the math and get the seconds via ps:
TIME=$(zenity --scale --title="Next Session in (?) minutes")
SLEEP_PID_FILE="/tmp/__session_ui_sleep_pid__"
sleep $TIME'm' &
echo $! >> "${SLEEP_PID_FILE}"
fg
BEEP="/usr/share/sounds/freedesktop/stereo/complete.oga"
paplay $BEEP
notify-send "Next Session" "Press <Ctrl><Shift><s> to run the script again"
Then afterward you can find the current elapsed time with something like:
notify-send "Already running for $(($(date +%s)-$(date -d"$(ps -o lstart= -p$(< "${SLEEP_PID_FILE}"))" +%s))) seconds..."

In bash: processing every command line without using the debug trap?

I have a complicated mechanism built into my bash environment that requires a couple of scripts to be executed when the prompt is generated, and also when the user hits enter to begin processing a command. I'll give an oversimplified description:
The debug trap does this in a fairly limited way: it fires every time a statement is executed.
trap 'echo $BASH_COMMAND' DEBUG # example
Unfortunately, this means that when I type this:
sleep 1; sleep 2; sleep 3
rather than processing a $BASH_COMMAND that contains the entire line, I get the three sleeps in three different traps. Worse yet:
sleep 1 | sleep 2 | sleep 3
fires all three traps as the pipeline is set up; before sleep 1 even starts executing, the output might lead you to believe that sleep 3 is running.
I need a way to execute a script right at the beginning, processing the entire command, and I'd rather it not fire when the prompt command is run, but I can deal with that if I must.
THERE'S A MAJOR PROBLEM WITH THIS SOLUTION. COMMANDS WITH PIPES (|) WILL FINISH EXECUTING THE TRAP, BUT BACKGROUNDING A PROCESS DURING THE TRAP WILL CAUSE THE PROCESSING OF THE COMMAND TO FREEZE - YOU'LL NEVER GET A PROMPT BACK WITHOUT HITTING ^C. THE TRAP COMPLETES, BUT $PROMPT_COMMAND NEVER RUNS. THIS PROBLEM PERSISTS EVEN IF YOU DISOWN THE PROCESS IMMEDIATELY AFTER BACKGROUNDING IT.
This wound up being a little more interesting than I expected:
LOGFILE=~/logfiles/$BASHPID

start_timer() {
    if [ ! -e $LOGFILE ]; then
        # You may have to adjust this to fit with your history output format:
        CMD=`history | tail -1 | tr -s " " | cut -f2-1000 -d" "`
        # timer2 keeps updating the status line with how long the cmd has been running
        timer2 -p "$PROMPT_BNW $CMD" -u -q & echo $! > $LOGFILE
    fi
}

stop_timer() {
    # Unfortunately, killing a process always prints that nasty confirmation line,
    # and you can't silence it by redirecting stdout and stderr to /dev/null, so you
    # have to disown the process before killing it.
    disown `cat $LOGFILE`
    kill -9 `cat $LOGFILE`
    rm -f $LOGFILE
}

trap 'start_timer' DEBUG
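For the narrower goal of running a hook once per entered command line rather than once per statement, a common trick is to pair the DEBUG trap with PROMPT_COMMAND. A minimal sketch, assuming an interactive bash session (and, as the question anticipates, the hook can still fire once around the prompt command itself):

# preexec runs once per command line: the DEBUG trap fires for every
# statement, but the guard variable limits the hook to the first one;
# PROMPT_COMMAND clears the guard when the prompt is redrawn.
preexec() {
    [ -n "$PREEXEC_DONE" ] && return
    PREEXEC_DONE=1
    echo "about to run: $(history 1 | sed 's/^ *[0-9]* *//')"
}
trap 'preexec' DEBUG
PROMPT_COMMAND='PREEXEC_DONE='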

Bash script: `exit 0` fails to exit

So I have this Bash script:
#!/bin/bash
PID=`ps -u ...`
if [ "$PID" = "" ]; then
    echo $(date) Server off: not backing up
    exit
else
    echo "say Server backup in 10 seconds..." >> fifo
    sleep 10
    STARTTIME="$(date +%s)"
    echo nosave >> fifo
    echo savenow >> fifo
    tail -n 3 -f server.log | while read line
    do
        if echo $line | grep -q 'save complete'; then
            echo $(date) Backing up...
            OF="./backups/backup $(date +%Y-%m-%d\ %H:%M:%S).tar.gz"
            tar -czhf "$OF" data
            echo autosave >> fifo
            echo "$(date) Backup complete, resuming..."
            echo "done"
            exit 0
            echo "done2"
        fi
        TIMEDIFF="$(($(date +%s)-STARTTIME))"
        if ((TIMEDIFF > 70)); then
            echo "Save took too long, canceling backup."
            exit 1
        fi
    done
fi
Basically, the server takes input from a fifo and outputs to server.log. The fifo is used to send stop/start commands to the server for autosaves. At the end, once it receives the message from the server that the server has completed a save, it tar's the data directory and starts saves again.
It's at the exit 0 line that I'm having trouble. Everything executes fine, but I get this output:
srv:scripts $ ./backup.sh
Sun Nov 24 22:42:09 EST 2013 Backing up...
Sun Nov 24 22:42:10 EST 2013 Backup complete, resuming...
done
But it hangs there. Notice how "done" echoes but "done2" fails. Something is causing it to hang on exit 0.
ADDENDUM: Just to avoid confusion for people looking at this in the future, it hangs at the exit line and never returns to the command prompt. Not sure if I was clear enough in my original description.
Any thoughts? This is the entire script, there's nothing else going on and I'm calling it direct from bash.
Here's a smaller, self contained example that exhibits the same behavior:
echo foo > file
tail -f file | while read; do exit; done
The problem is that, since each part of the pipeline runs in a subshell, exit only exits the subshell running the while read loop, not the entire script.
It will then hang until tail finds a new line, tries to write it, and discovers that the pipe is broken.
To fix it, you can replace
tail -n 3 -f server.log | while read line
do
    ...
done
with
while read line
do
    ...
done < <(tail -n 3 -f server.log)
By redirecting from a process substitution instead, the script doesn't have to wait for tail to finish like it would in a pipeline, and the loop doesn't run in a subshell, so exit will actually exit the entire script.
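A minimal demonstration of the difference (bash-specific, since process substitution is not POSIX sh):

#!/bin/bash
# Pipeline: the loop body runs in a subshell, so exit only leaves the subshell.
echo foo | while read -r; do exit 1; done
echo "after pipeline: still running"    # this line prints

# Process substitution: the loop runs in the current shell.
while read -r; do exit 1; done < <(echo foo)
echo "never reached"                    # this line does not print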
But it hangs there. Notice how "done" echoes but "done2" fails.
done2 won't be printed at all, since exit 0 has already ended the subshell running the loop with return code 0 before that line is reached.
I don't know the details of bash subshells inside loops, but normally the appropriate way to exit a loop is to use the "break" command. In some cases that's not enough (you really need to exit the program), but refactoring that program may be the easiest (safest, most portable) way to solve that. It may also improve readability, because people don't expect programs to exit in the middle of a loop.
