I have a bash loop that looks like this:
something(){
echo "something"
}
while true; do
something
sleep 10s
done | otherCommand
When the loop is in the sleep state, I want to be able to run a function from the terminal that will skip the sleep step and go on to the next iteration of the loop.
For example, if the script has been sleeping for 5 seconds, I want the command to stop the script from sleeping, go on to the next iteration of the loop, run something, and continue sleeping again.
This is not foolproof, but may be robust enough:
To interrupt the sleep command, run:
pkill -f 'sleep 10s'
Note that the shell running the script prints something like the following to stderr when sleep is killed: <script-path>: line <line-number>: <pid> Terminated: 15 sleep 10s. Curiously, you cannot suppress this message with sleep 10s 2>/dev/null; to suppress it, you have to either apply 2>/dev/null to the while loop as a whole, or to the script invocation as a whole.
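For example, a minimal sketch of the whole-loop redirection (just the question's loop with the redirect added):
while true; do
  something
  sleep 10s
done 2>/dev/null | otherCommand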
As @miken32 points out in the comments, this command tries to kill all commands whose invocation command line matches sleep 10s; however - unless you're running as root - killing will only succeed for matches among your own processes, due to the lack of permission to kill other users' processes.
To be safer, you can explicitly restrict matches to your own processes:
pkill -u "$(id -u)" -f 'sleep 10s'
Truly safe use, however, requires that you capture your running script's PID (process ID), save it to a file, and use the PID from that file with pkill's -P option, which limits matches to child processes of the specified PID - see this answer (thanks, @miken32).
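A rough sketch of that safer variant (the path /tmp/myscript.pid is just a placeholder; note that because the loop sits on the left side of a pipe it runs in a subshell, so the PID worth recording is $BASHPID from inside the loop, not $$):
while true; do
  echo "$BASHPID" > /tmp/myscript.pid   # PID of the subshell that owns the sleep
  something
  sleep 10s
done | otherCommand
Then, from the terminal:
pkill -P "$(cat /tmp/myscript.pid)" -x sleep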
If you want to skip the sleep, use something like a file you can touch:
if [ ! -f /tmp/skipsleep ]; then
sleep 10
fi
When you want to interrupt the sleep command, kill it!
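Putting the two together with the loop from the question, a minimal sketch (the flag path /tmp/skipsleep is arbitrary):
while true; do
  something
  if [ ! -f /tmp/skipsleep ]; then
    sleep 10
  fi
  rm -f /tmp/skipsleep          # reset the flag for the next iteration
done | otherCommand
Running touch /tmp/skipsleep from another terminal makes the next iteration skip its sleep; to cut short a sleep that is already running, kill the sleep process as described above.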
You could read from a named pipe with read -t 10 instead of sleeping (credit to "that other guy" for the named-pipe idea):
something()
{
echo "something"
}
somethingelse()
{
echo "something else"
}
mkfifo ~/.myfifo
while cat ~/.myfifo; do true; done |
while true
do
something
read -t 10 && somethingelse
done
Now whenever another script writes to the fifo with echo > ~/.myfifo, the loop will skip its current wait and continue to the next iteration.
This way, different users or different scripts waiting ten seconds will not interfere with each other.
The solution below works if the script is running in the foreground and you are waiting at the terminal. You would, of course, need a valid loop exit condition. You can also check the value entered and act on it differently: in this case, pressing any key except 'j' iterates the loop, while pressing 'j' pipes the output of somethingelse to awk.
something()
{
echo "something"
}
somethingelse()
{
echo "something else"
}
while true; do
something |awk '{print "piping something: " $0 }'
read -t 3 -s -n 1 answer
if [ $? == 0 ]; then
echo "you didn't want to wait!"
fi
if [ "$answer" = "j" ]; then
somethingelse | awk '{print "piping something else: " $0 }'
fi
done
If it's interactive then just use "read -t #" instead of "sleep #".
You can then just press enter to skip the timeout.
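A sketch of that, reworking the loop from the question (stdin stays attached to the terminal even though stdout is piped):
while true; do
  something
  read -r -t 10    # waits up to 10 seconds; pressing Enter ends the wait early
done | otherCommand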
This may be old, but a very simple, elegant solution is to test a variable that starts out nonexistent, empty, or 0, and only run the body once it has a value. The && / || chain below works like a ternary operator:
for ((;;)) {
# skip one iteration (do_something is a placeholder for your command)
(( i )) && do_something || i=1
# skip 5 iterations
(( n >= 5 )) && do_something || (( n++ ))
}
Related
Lets say I have a loop in Bash:
for foo in `some-command`
do
do-something $foo
done
do-something is cpu bound and I have a nice shiny 4 core processor. I'd like to be able to run up to 4 do-something's at once.
The naive approach seems to be:
for foo in `some-command`
do
do-something $foo &
done
This will run all the do-somethings at once, but there are a couple of downsides: do-something may also have significant I/O, and performing it all at once might slow things down a bit. The other problem is that this code block returns immediately, so there is no way to do other work only after all the do-somethings have finished.
How would you write this loop so there are always X do-somethings running at once?
Depending on what you want to do, xargs can also help (here: converting documents with pdf2ps):
cpus=$( ls -d /sys/devices/system/cpu/cpu[[:digit:]]* | wc -w )
find . -name \*.pdf | xargs --max-args=1 --max-procs=$cpus pdf2ps
From the docs:
--max-procs=max-procs
-P max-procs
Run up to max-procs processes at a time; the default is 1.
If max-procs is 0, xargs will run as many processes as possible at a
time. Use the -n option with -P; otherwise chances are that only one
exec will be done.
With GNU Parallel http://www.gnu.org/software/parallel/ you can write:
some-command | parallel do-something
GNU Parallel also supports running jobs on remote computers. This will run one job per CPU core on the remote computers - even if they have a different number of cores:
some-command | parallel -S server1,server2 do-something
A more advanced example: Here we have a list of files that we want my_script to run on. The files have an extension (maybe .jpeg). We want the output of my_script to be put next to the files as basename.out (e.g. foo.jpeg -> foo.out). We want to run my_script once for each core the computer has, and we want to run it on the local computer, too. For the remote computers, we want the file to be processed to be transferred to the given computer. When my_script finishes, we want foo.out transferred back, and we then want foo.jpeg and foo.out removed from the remote computer:
cat list_of_files | \
parallel --trc {.}.out -S server1,server2,: \
"my_script {} > {.}.out"
GNU Parallel makes sure the output from each job does not mix, so you can use the output as input for another program:
some-command | parallel do-something | postprocess
See the videos for more examples: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
maxjobs=4
parallelize () {
while [ $# -gt 0 ] ; do
jobcnt=(`jobs -p`)
if [ ${#jobcnt[@]} -lt $maxjobs ] ; then
do-something $1 &
shift
else
sleep 1
fi
done
wait
}
parallelize arg1 arg2 "5 args to third job" arg4 ...
Here is an alternative solution that can be inserted into .bashrc and used as an everyday one-liner:
function pwait() {
while [ $(jobs -p | wc -l) -ge $1 ]; do
sleep 1
done
}
To use it, all one has to do is put & after the jobs and a pwait call, the parameter gives the number of parallel processes:
for i in *; do
do_something $i &
pwait 10
done
It would be nicer to use wait instead of busy-waiting on the output of jobs -p, but there doesn't seem to be an obvious solution to wait until any one of the given jobs is finished instead of all of them.
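For what it's worth, if your bash is new enough (4.3 or later), wait -n blocks until any one background job finishes, which avoids the busy wait; a hedged sketch:
for i in *; do
  do_something "$i" &
  while [ "$(jobs -pr | wc -l)" -ge 10 ]; do
    wait -n                     # bash 4.3+: returns as soon as one job exits
  done
done
wait                            # wait for the remaining jobs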
Instead of plain bash, use a Makefile, then specify the number of simultaneous jobs with make -jX, where X is the number of jobs to run at once.
Or you can use wait ("man wait"): launch several child processes, call wait - it will return when the child processes finish.
maxjobs=10

job() {
    # ... the real work goes here ...
    :
}

while read -r line; do
    jobsrunning=0
    while [ "$jobsrunning" -lt "$maxjobs" ]; do
        job "$line" &
        jobsrunning=$((jobsrunning + 1))
    done
    wait
done < file.txt
If you need to store each job's result, assign the result to a variable; after wait you just check what the variable contains.
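Since a variable assigned inside a backgrounded job lives in that job's subshell and never reaches the parent, one common workaround is to have each job write to its own temp file and read the files back after wait; a sketch (do_job is a hypothetical placeholder):
tmpdir=$(mktemp -d)
for i in 1 2 3; do
  do_job "$i" > "$tmpdir/$i.out" &     # each job writes its result to its own file
done
wait
for i in 1 2 3; do
  result=$(cat "$tmpdir/$i.out")       # now the result is in a variable
  echo "job $i produced: $result"
done
rm -rf "$tmpdir"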
If you're familiar with the make command, most of the time you can express the list of commands you want to run as a makefile. For example, if you need to run $SOME_COMMAND on files *.input, each of which produces *.output, you can use the makefile
INPUT = a.input b.input
OUTPUT = $(INPUT:.input=.output)
%.output : %.input
	$(SOME_COMMAND) $< $@
all: $(OUTPUT)
and then just run
make -j<NUMBER>
to run at most NUMBER commands in parallel.
While doing this right in bash is probably impossible, you can do a semi-right version fairly easily. bstark gave a fair approximation of right, but his has the following flaws:
Word splitting: You can't pass any jobs to it that use any of the following characters in their arguments: spaces, tabs, newlines, stars, question marks. If you do, things will break, possibly unexpectedly.
It relies on the rest of your script not backgrounding anything. If you do, or if you later add something to the script that gets sent to the background (forgetting that this snippet does not allow other backgrounded jobs), things will break.
Another approximation which doesn't have these flaws is the following:
scheduleAll() {
local job i=0 max=4 pids=()
for job; do
(( ++i % max == 0 )) && {
wait "${pids[#]}"
pids=()
}
bash -c "$job" & pids+=("$!")
done
wait "${pids[#]}"
}
Note that this one is easily adaptable to also check the exit code of each job as it ends, so you can warn the user if a job fails, or set an exit code for scheduleAll according to the number of jobs that failed, or something.
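A hedged sketch of that adaptation (renamed to scheduleAllChecked to keep it apart from the original; it keeps the same batching structure, so the caveat below still applies), waiting on each PID individually and counting failures:
scheduleAllChecked() {
    local job pid i=0 max=4 failed=0 pids=()
    for job; do
        (( ++i % max == 0 )) && {
            for pid in "${pids[@]}"; do
                wait "$pid" || failed=$((failed + 1))   # collect each job's exit status
            done
            pids=()
        }
        bash -c "$job" & pids+=("$!")
    done
    for pid in "${pids[@]}"; do
        wait "$pid" || failed=$((failed + 1))
    done
    return "$failed"    # number of failed jobs (return truncates to 0-255)
}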
The problem with this code is just that:
It schedules four (in this case) jobs at a time and then waits for all four to end. Some might be done sooner than others which will cause the next batch of four jobs to wait until the longest of the previous batch is done.
A solution that takes care of this last issue would have to use kill -0 to poll whether any of the processes have disappeared, instead of waiting, and then schedule the next job. However, that introduces a small new problem: there is a race condition between a job ending and kill -0 checking whether it has ended. If the job ended and another process on your system starts up at the same time, taking a random PID which happens to be that of the job that just finished, kill -0 won't notice your job having finished and things will break again.
A perfect solution isn't possible in bash.
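For completeness, a rough sketch of that kill -0 polling idea (subject to the PID-reuse race described above, so treat it as an approximation, not a fix):
pollSchedule() {
    local job pid max=4 pids=() alive=()
    for job; do
        # wait until a slot is free
        while :; do
            alive=()
            for pid in "${pids[@]}"; do
                kill -0 "$pid" 2>/dev/null && alive+=("$pid")   # still running?
            done
            pids=("${alive[@]}")
            (( ${#pids[@]} < max )) && break
            sleep 1
        done
        bash -c "$job" & pids+=("$!")
    done
    wait
}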
Maybe try a parallelizing utility instead of rewriting the loop? I'm a big fan of xjobs. I use xjobs all the time to mass-copy files across our network, usually when setting up a new database server.
http://www.maier-komor.de/xjobs.html
function for bash:
parallel ()
{
awk "BEGIN{print \"all: ALL_TARGETS\\n\"}{print \"TARGET_\"NR\":\\n\\t#-\"\$0\"\\n\"}END{printf \"ALL_TARGETS:\";for(i=1;i<=NR;i++){printf \" TARGET_%d\",i};print\"\\n\"}" | make $# -f - all
}
using:
cat my_commands | parallel -j 4
Really late to the party here, but here's another solution.
A lot of solutions don't handle spaces/special characters in the commands, don't keep N jobs running at all times, eat cpu in busy loops, or rely on external dependencies (e.g. GNU parallel).
With some inspiration taken for dead/zombie process handling, here's a pure bash solution:
function run_parallel_jobs {
local concurrent_max=$1
local callback=$2
local cmds=("${#:3}")
local jobs=( )
while [[ "${#cmds[#]}" -gt 0 ]] || [[ "${#jobs[#]}" -gt 0 ]]; do
while [[ "${#jobs[#]}" -lt $concurrent_max ]] && [[ "${#cmds[#]}" -gt 0 ]]; do
local cmd="${cmds[0]}"
cmds=("${cmds[#]:1}")
bash -c "$cmd" &
jobs+=($!)
done
local job="${jobs[0]}"
jobs=("${jobs[#]:1}")
local state="$(ps -p $job -o state= 2>/dev/null)"
if [[ "$state" == "D" ]] || [[ "$state" == "Z" ]]; then
$callback $job
else
wait $job
$callback $job $?
fi
done
}
And sample usage:
function job_done {
if [[ $# -lt 2 ]]; then
echo "PID $1 died unexpectedly"
else
echo "PID $1 exited $2"
fi
}
cmds=( \
"echo 1; sleep 1; exit 1" \
"echo 2; sleep 2; exit 2" \
"echo 3; sleep 3; exit 3" \
"echo 4; sleep 4; exit 4" \
"echo 5; sleep 5; exit 5" \
)
# cpus="$(getconf _NPROCESSORS_ONLN)"
cpus=3
run_parallel_jobs $cpus "job_done" "${cmds[@]}"
The output:
1
2
3
PID 56712 exited 1
4
PID 56713 exited 2
5
PID 56714 exited 3
PID 56720 exited 4
PID 56724 exited 5
For per-process output handling $$ could be used to log to a file, for example:
function job_done {
cat "$1.log"
}
cmds=( \
"echo 1 \$\$ >\$\$.log" \
"echo 2 \$\$ >\$\$.log" \
)
run_parallel_jobs 2 "job_done" "${cmds[@]}"
Output:
1 56871
2 56872
The project I work on uses the wait command to control parallel shell (ksh actually) processes. To address your concerns about IO, on a modern OS, it's possible parallel execution will actually increase efficiency. If all processes are reading the same blocks on disk, only the first process will have to hit the physical hardware. The other processes will often be able to retrieve the block from OS's disk cache in memory. Obviously, reading from memory is several orders of magnitude quicker than reading from disk. Also, the benefit requires no coding changes.
This might be good enough for most purposes, but is not optimal.
#!/bin/bash
n=0
maxjobs=10
for i in *.m4a ; do
# ( DO SOMETHING ) &
# limit jobs
if (( ++n % maxjobs == 0 )) ; then
wait # wait until all have finished (not optimal, but most times good enough)
echo $n wait
fi
done
Here is how I managed to solve this issue in a bash script:
#! /bin/bash
MAX_JOBS=32
FILE_LIST=($(cat ${1}))
echo Length ${#FILE_LIST[@]}
for ((INDEX=0; INDEX < ${#FILE_LIST[@]}; INDEX=$((${INDEX}+${MAX_JOBS})) ));
do
JOBS_RUNNING=0
while ((JOBS_RUNNING < MAX_JOBS))
do
I=$((${INDEX}+${JOBS_RUNNING}))
FILE=${FILE_LIST[${I}]}
if [ "$FILE" != "" ];then
echo $JOBS_RUNNING $FILE
./M22Checker ${FILE} &
else
echo $JOBS_RUNNING NULL &
fi
JOBS_RUNNING=$((JOBS_RUNNING+1))
done
wait
done
You can use a simple nested for loop (substitute appropriate integers for N and M below):
for i in {1..N}; do
(for j in {1..M}; do do_something; done & );
done
This will execute do_something N*M times in M rounds, each round executing N jobs in parallel. You can make N equal to the number of CPUs you have.
My solution to always keep a given number of processes running, keep track of errors, and handle uninterruptible/zombie processes:
function log {
echo "$1"
}
# Takes a semicolon-separated list of commands and runs them with up to numberOfProcesses commands running simultaneously
# Returns the number of non-zero exit codes from the commands
function ParallelExec {
local numberOfProcesses="${1}" # Number of simultaneous commands to run
local commandsArg="${2}" # Semi-colon separated list of commands
local pid
local runningPids=0
local counter=0
local commandsArray
local pidsArray
local newPidsArray
local retval
local retvalAll=0
local pidState
local commandsArrayPid
IFS=';' read -r -a commandsArray <<< "$commandsArg"
log "Runnning ${#commandsArray[#]} commands in $numberOfProcesses simultaneous processes."
while [ $counter -lt "${#commandsArray[@]}" ] || [ ${#pidsArray[@]} -gt 0 ]; do
while [ $counter -lt "${#commandsArray[@]}" ] && [ ${#pidsArray[@]} -lt $numberOfProcesses ]; do
log "Running command [${commandsArray[$counter]}]."
eval "${commandsArray[$counter]}" &
pid=$!
pidsArray+=($pid)
commandsArrayPid[$pid]="${commandsArray[$counter]}"
counter=$((counter+1))
done
newPidsArray=()
for pid in "${pidsArray[#]}"; do
# Handle uninterruptible sleep state or zombies by ommiting them from running process array (How to kill that is already dead ? :)
if kill -0 $pid > /dev/null 2>&1; then
pidState=$(ps -p $pid -o state= 2>/dev/null)
if [ "$pidState" != "D" ] && [ "$pidState" != "Z" ]; then
newPidsArray+=($pid)
fi
else
# pid is dead, get its exit code from the wait command
wait $pid
retval=$?
if [ $retval -ne 0 ]; then
log "Command [${commandsArrayPid[$pid]}] failed with exit code [$retval]."
retvalAll=$((retvalAll+1))
fi
fi
done
pidsArray=("${newPidsArray[#]}")
# Add a trivial sleep time so bash won't eat all CPU
sleep .05
done
return $retvalAll
}
Usage:
cmds="du -csh /var;du -csh /tmp;sleep 3;du -csh /root;sleep 10; du -csh /home"
# Execute 2 processes at a time
ParallelExec 2 "$cmds"
# Execute 4 processes at a time
ParallelExec 4 "$cmds"
$DOMAINS = "list of some domain in commands"
for foo in some-command
do
eval `some-command for $DOMAINS` &
job[$i]=$!
i=$(( i + 1))
done
Ndomains=echo $DOMAINS |wc -w
for i in $(seq 1 1 $Ndomains)
do
echo "wait for ${job[$i]}"
wait "${job[$i]}"
done
This concept will work for parallelizing. The important thing is the trailing '&' on the eval line,
which puts the commands into the background.
I have the following line in bash.
(sleep 1 ; echo "foo" ; sleep 1 ; echo "bar" ; sleep 30) | nc localhost 2222 \
| grep -m1 "baz"
This prints "baz" (if/when the other end of the TCP connection sends it) and exits after 32 seconds.
What I want it to do is to exit the sleep 30 early if it sees "baz". The -m flag exits grep, but does not kill the whole line.
How could I achieve this in bash (without using expect if possible)?
Update: the code above does quit, if and only if, the server tries to send something after baz. This does not solve this problem, as the server may not send anything for minutes.
If you like esoteric sides of Bash, you can use coproc for that.
coproc { { sleep 1; echo "foo"; sleep 1; echo "bar"; sleep 30; } | nc localhost 2222; }
grep -m1 baz <&${COPROC[0]}
[[ $COPROC_PID ]] && kill $COPROC_PID
Here, we're using coproc to run
{ { sleep 1; echo "foo"; sleep 1; echo "bar"; sleep 30; } | nc localhost 2222; }
in the background. coproc takes care to redirect the standard output and standard input of this compound command in the file descriptors set in ${COPROC[0]} and ${COPROC[1]}. Moreover, the PID of this job is in COPROC_PID. We then feed grep with the standard output of the background job. It's then easy to kill the job when we're done.
You can catch the pid of the subshell you are opening. Then, something like this should make:
( echo "start"; sleep 1; echo $BASHPID > /tmp/subpid; echo "hello"; sleep 20; ) \
| ( sleep 1; subpid=$(cat /tmp/subpid); grep -m1 hello && kill $subpid )
That is, you store the id of the subshell in a temp file and then continue with the scripting.
On the other side of the pipe, you read the content of the file (the sleep 1 is to make sure it has been written to the file by the initial subshell) and, when you find the content with grep, you kill the subshell.
From man bash:
BASHPID
Expands to the process ID of the current bash process. This differs
from $$ under certain circumstances, such as subshells that do not
require bash to be re-initialized.
Credits to:
Get pid of current subshell
How to get the process id of a bash subprocess on command line.
I suddenly found a solution based on Jidder's comment.
(sleep 1 ; echo "foo" ; sleep 1 ; echo "bar" ; for i in `seq 1 30`; do echo -n '.'; sleep 1; done) | grep -m1 "bar"
Just sleeping in a loop does not work, but after adding echo -n '.' it does. It seems that an attempt to write to a closed pipe causes the writer to abort. (I have tested this without nc, though.)
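That is consistent with SIGPIPE: once grep -m1 exits, the next write into the pipe kills the writer. A quick throwaway way to see the effect without nc (the loop below would otherwise run for 30 seconds):
(for i in `seq 1 30`; do echo "tick $i"; sleep 1; done) | grep -m1 "tick 3"
# prints "tick 3" and the whole pipeline ends about a second later,
# when "tick 4" hits the closed pipe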
I believe you really need to use expect ( http://expect.sourceforge.net/ , and there are packages for most OSes and distributions ).
Otherwise you'll have a hard time handling some cases and getting rid of buffering, etc. Expect does it for you (... well, once you have written the right expect script that handles all, or most, cases). For a first draft you can use autoexpect (http://linux.die.net/man/1/autoexpect), but you'll need to add variations (handling "wrong password" messages, etc.).
Expect is an old tool (and is based, IIRC, on Tcl), but there is not really a better tool for the job of "sending input and waiting for outputs (and reacting differently depending on the outputs)".
I'm currently writing a bash script to do tasks automatically. In my script I want it to display progress message when it is doing a task.
For example:
user@ubuntu:~$ Configure something
->
Configure something .
->
Configure something ..
->
Configure something ...
->
Configure something ... done
All the progress message should appear in the same line.
Below is my workaround so far:
echo -n "Configure something "
exec "configure something 2>&1 /dev/null"
//pseudo code for progress message
echo -n "." and sleep 1 if the previous exec of configure something not done
echo " done" if exec of the command finished successfully
echo " failed" otherwise
Will exec wait for the command to finish and then continue with the script lines later?
If so, then how can I echo message at the same time the exec of configure something is taking place?
How do I know when exec finishes the previous command and return true? use $? ?
Just to put the editorial hat on: what if something goes wrong? How are you, or a user of your script, going to know what went wrong? This is probably not the answer you're looking for, but having your script execute each build step individually may turn out to be better overall, especially for troubleshooting. Why not define a function to validate your build steps:
function validateCmd()
{
CODE=$1
COMMAND=$2
MODULE=$3
if [ ${CODE} -ne 0 ]; then
echo "ERROR Executing Command: \"${COMMAND}\" in Module: ${MODULE}"
echo "Exiting."
exit 1;
fi
}
./configure
validateCmd $? "./configure" "Configuration of something"
Anyway, yes, as you probably noticed above, use $? to determine the result of the last command. For example:
rm -rf ${TMP_DIR}
if [ $? -ne 0 ]; then
echo "ERROR Removing directory: ${TMP_DIR}"
exit 1;
fi
To answer your first question, you can use:
echo -ne "\b"
To delete a character on the same line. So to count to ten on one line, you can do something like:
for i in $(seq -w 1 10); do
echo -en "\b\b${i}"
sleep .25
done
echo
The trick with that is you'll have to know how much to delete, but I'm sure you can figure that out.
You cannot call exec like that; exec never returns, and the lines after an exec will not execute. The standard way to print progress updates on a single line is to simply use \r instead of \n at the end of each line. For example:
#!/bin/bash
i=0
sleep 5 & # Start some command
pid=$! # Save the pid of the command
while sleep 1; do # Produce progress reports
printf '\rcontinuing in %d seconds...' $(( 5 - ++i ))
test $i -eq 5 && break
done
if wait $pid; then echo done; else echo failed; fi
Here's another example:
#!/bin/bash
execute() {
eval "$#" & # Execute the command
pid=$!
# Invoke a shell to print status. If you just invoke
# the while loop directly, killing it will generate a
# notification. By trapping SIGTERM, we suppress the notice.
sh -c 'trap exit SIGTERM
while printf "\r%3d:%s..." $((++i)) "$*"; do sleep 1
done' 0 "$#" &
last_report=$!
if wait $pid; then echo done; else echo failed; fi
kill $last_report
}
execute sleep 3
execute sleep 2 \| false # Execute a command that will fail
execute sleep 1
I have a command that should take less than 1 minute to execute, but for some reason has an extremely long built-in timeout mechanism. I want some bash that does the following:
success = False
try(my_command)
while(!(success))
wait 1 min
if my command not finished
retry(my_command)
else
success = True
end while
How can I do this in Bash?
Look at the GNU timeout command. This kills the process if it has not completed in a given time; you'd simply wrap a loop around this to wait for the timeout to complete successfully, with delays between retries as appropriate, etc.
while timeout -k 70 60 -- my_command; [ $? = 124 ]
do sleep 2 # Pause before retry
done
If you must do it in pure bash (which is not really feasible - bash uses lots of other commands), then you are in for a world of pain and frustration with signal handlers and all sorts of issues.
Please expand on your answer a little. -k 70 is --kill-after= 70 seconds, 124 exit on timeout; what is the 60?
The linked documentation does explain the command; I don't really plan to repeat it all here. The synopsis is timeout [options] duration command [arg]...; one of the options is -k duration. The -k duration says "if the command does not die after the SIGTERM signal sent at 60 seconds, send a SIGKILL signal 70 seconds later" (and the command should die then). There are a number of documented exit statuses; 124 indicates that the command timed out, 137 that it died after being sent the SIGKILL signal, and so on. You can't tell if the command itself exits with one of the documented statuses.
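A throwaway example that shows the 124 status:
timeout 2 sleep 5
echo "exit status: $?"    # prints 124: sleep 5 was cut off after 2 seconds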
I found a script from:
http://fahdshariff.blogspot.com/2014/02/retrying-commands-in-shell-scripts.html
#!/bin/bash
# Retries a command on failure.
# $1 - the max number of attempts
# $2... - the command to run
retry() {
local -r -i max_attempts="$1"; shift
local -i attempt_num=1
until "$#"
do
if ((attempt_num==max_attempts))
then
echo "Attempt $attempt_num failed and there are no more attempts left!"
return 1
else
echo "Attempt $attempt_num failed! Trying again in $attempt_num seconds..."
sleep $((attempt_num++))
fi
done
}
# example usage:
retry 5 ls -ltr foo
I liked @Jonathan's answer, but tried to make it more straightforward for future use:
until timeout 1 sleep 2
do
echo "Happening after 1s of sleep"
done
Adapting @Shin's answer to use kill -0 rather than jobs, so that this should work even with a classic Bourne shell, and to allow for other background jobs. You may have to experiment with kill and wait depending on how my_command responds to those.
while true ; do
my_command &
sleep 60
if kill -0 $! 2>/dev/null; then
# Job took too long
kill $!
else
echo "Job is done"
# Reap exit status
wait $!
break
fi
done
You can run a command and retain control with the & background operator. Run your command in the background, sleep for as long as you wish in the foreground, and then, if the background job hasn't terminated, kill it and start over.
while true ; do
my_command &
sleep 60
if [[ $(jobs -r) == "" ]] ; then
echo "Job is done"
break
fi
# Job took too long
kill -9 $!
done
# Retries a given command a given number of times and stores the output in the given variable
# $1 : Command to be passed : handles both simple and piped commands
# $2 : Variable receiving the final output of the command (if successful)
# $3 : Number of retry attempts [Default: 5]
function retry_function() {
echo "Command to be executed : $1"
echo "Final output variable : $2"
echo "Total trials [Default:5] : $3"
counter=${3:-5}
local _my_output_=$2 #make sure passed variable is not same as this
i=1
while [ $i -le $counter ]; do
local my_result=$(eval "$1")
# this tests whether the output variable is populated and retries accordingly;
# it is not possible to provide error status/logs (STDOUT, STDERR) owing to subshell execution of the command
# if error logs are needed, execute the same code outside the function, in the same shell
if test -z "$my_result"
then
echo "Trial[$i/$counter]: Execution failed"
else
echo "Trial[$i/$counter]: Successfull execution"
eval $_my_output_="'$my_result'"
break
fi
let i+=1
done
}
retry_function "ping -c 4 google.com | grep \"min/avg/max\" | awk -F\"/\" '{print \$5}'" avg_rtt_time
echo $avg_rtt_time
- To pass in a lengthy command, pass a function that echoes the content, and take care of its expansion in the subshell at the appropriate place.
- A wait time can be added too - just before the increment!
- For a complex command, you'll have to take care of stringifying it (good luck).
Good day! Is there any way to include a timer (timestamp, or whatever the term is) in a script using bash? For instance: every 60 seconds, a specific function checks if the internet is down; if it is, it connects to the wifi device instead, and vice versa. In short, the program checks the internet connection from time to time.
Any suggestions/answers will be much appreciated. =)
Blunt version
while sleep 60; do
if ! check_internet; then
if is_wifi; then
set_wired
else
set_wifi
fi
fi
done
Using the sleep itself as loop condition allows you to break out of the loop by killing the sleep (i.e. if it's a foreground process, ctrl-c will do).
If we're talking minutes or hours intervals, cron will probably do a better job, as Montecristo pointed out.
You may want to do a man cron.
Or, if you just have to stick to bash, put the function call inside a loop with a sleep 60 in each iteration.
Please find here a script that you can use, first add an entry to your cron job like this:
$ sudo crontab -e
* * * * * /path/to/your/switcher
This is a simple method that relies on pinging a live server continuously, every minute; if the server is not reachable, it switches to the second router defined below.
Surely there are better ways to approach this.
$ cat > switcher
#!/bin/sh
route=`which route`
ip=`which ip`
# define your email here
mail="user#domain.tld"
# We define our pingable target like 'yahoo' or whatever, note that the host have to be
# reachable every time
target="www.yahoo.com"
# log file
file="/var/log/updown.log"
# your routers here
router1="192.168.0.1"
router2="192.168.0.254"
# default router
default=$($ip route | awk '/default/ { print $3 }')
# ping command
ping -c 2 ${target}
if [ $? -eq 0 ]; then
echo "`date +%Y%m%d-%H:%M:%S`: up" >> ${file}
else
echo "`date +%Y%m%d-%H:%M:%S`: down" >> ${file}
if [ "${default}" = "${router1}" ]; then
${route} del default gw ${router1}
${route} add default gw ${router2}
elif [ "${default}" = "${router2}" ]; then
${route} del default gw ${router2}
${route} add default gw ${router1}
fi
# sending a notification by mail or may be by sms
echo "Connection problem" |mail -s "Changing Routing table" ${mail}
fi
I liked William's answer, because it does not need polling. So I implemented the following script based on his idea. It works around the problem that control has to return to the shell.
#!/bin/sh
someproc()
{
sleep $1
return $2
}
run_or_timeout()
{
timeout=$1
shift
{
trap 'exit 0' 15
"$#"
} &
proc=$!
trap "kill $proc" ALRM
{
trap 'exit 0' 15
sleep $timeout
kill -ALRM $$
} &
alarm=$!
wait $proc
ret=$?
# cleanup
kill $alarm
trap - ALRM
return $ret
}
run_or_timeout 0 someproc 1 0
echo "exit: $? (expected: 142)"
run_or_timeout 1 someproc 0 0
echo "exit: $? (expected: 0)"
run_or_timeout 1 someproc 0 1
echo "exit: $? (expected: 1)"
You can do something like the following, but it is not reliable:
#!/bin/sh
trap handle_timer USR1
set_timer() { (sleep 2; kill -USR1 $$)& }
handle_timer() {
printf "%s:%s\n" "timer expired" "$(date)";
set_timer
}
set_timer
while true; do sleep 1; date; done
One problem with this technique is that the trap will not take effect until the current task returns to the shell (e.g., replace the sleep 1 with sleep 10 to see this). If the shell is in control most of the time (e.g. if all the commands it calls terminate quickly), this can work. One option, of course, is to run everything in the background.
Create a bash script that checks once whether the internet connection is down, and add the script as a crontab task that runs every 60 seconds.
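A hedged sketch of that: a one-shot check script plus a crontab entry (the path /usr/local/bin/checknet and the switching command are placeholders; cron's finest granularity is one minute, i.e. every 60 seconds):
#!/bin/sh
# /usr/local/bin/checknet - runs once and exits
if ! ping -c 2 -W 2 8.8.8.8 >/dev/null 2>&1; then
    logger "checknet: internet appears down, switching interface"
    # your-switch-to-wifi-command-here    # placeholder for the actual switching command
fi
And in crontab -e:
* * * * * /usr/local/bin/checknet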