I've seen a lot of methods to repeat a bash command once every n seconds, but none to repeat a command for n seconds.
Is there a way to repeat a command for n seconds? For example, with n = 10: if my command takes one second to execute, it'll run ten times; if it takes two seconds, it'll run five times.
If it takes seven seconds, it would execute twice (and no more), or perhaps the script would exit at that point.
Right now I'm doing it by looking at the amount of time it takes for my script to execute once, and then calculating how many times I need to repeat it for it to execute for n seconds. However, this is slightly unreliable as I've found that the time required to run the script deviates a bit.
timetorun=30   # in seconds
stoptime=$((timetorun + $(date +%s)))   # deadline as a Unix timestamp
while [ "$(date +%s)" -lt "$stoptime" ]; do
    something
done
Note that this will keep running the command until timetorun seconds have passed, so generally it'll actually run longer than that. For an extreme example, if timetorun is 30 seconds and the program takes 29 seconds, it'll run twice (and hence take 58 seconds).
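If overshooting the deadline matters, one option (a sketch, assuming GNU coreutils timeout is available and that something tolerates being killed) is to cap each iteration at the time remaining:
timetorun=30
stoptime=$((timetorun + $(date +%s)))
while :; do
    remaining=$((stoptime - $(date +%s)))   # seconds left before the deadline
    [ "$remaining" -le 0 ] && break
    timeout "$remaining" something          # kill the iteration if it would overrun
done
With this variant the last run is cut short rather than allowed to finish, so the loop never takes longer than timetorun seconds.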
You can use this if you know how many times you want it to be executed:
num_iter=15                   # note: no spaces around the = in an assignment
while [ "$num_iter" -gt 0 ]   # use -gt; inside [ ], > is a redirection
do
    sh script.sh
    sleep 2
    num_iter=$((num_iter - 1))
done
Or use watch -n to repeat a command every n seconds. You can then wrap the watch invocation and kill it after a specific amount of time.
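For example (a sketch, assuming GNU coreutils timeout is available; script.sh stands in for your command):
timeout 30 watch -n 2 sh script.sh   # rerun every 2 seconds, kill watch after 30 seconds
Bear in mind that watch takes over the terminal to display the command's output, so this is more suited to interactive use than to scripts.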
Related
I have a script that runs through a list of servers, connects to each one, and grabs files over SCP to store locally.
Occasionally, for various reasons, one of the servers crashes and my script gets stuck for around 4 hours before moving on through the list.
I would like to be able to detect a connection issue (or a certain period of time elapsed since the command started), kill that command, and move on to the next.
I suspect this would involve a wait or sleep and continue, but I am new to loops and bash.
#!/bin/bash
#
# Generate a list of backups to grab
df | grep backups | awk -F/ '{ print $NF }' > /tmp/backuplistsmb
# Get each backup in turn
for BACKUP in $(cat /tmp/backuplistsmb)
do
    cd "/srv/backups/$BACKUP" || continue   # skip this server if the local directory is missing
    scp -o StrictHostKeyChecking=no "$BACKUP":* .
    sleep 3h
done
The above script works fine, but it does get stuck for 4 hours should there be a connection issue. It is worth noting that some of the transfers take 10 minutes and some 2.5 hours.
Any ideas or help would be very much appreciated.
Try to use the timeout program for that:
Usage:
timeout [OPTION] DURATION COMMAND [ARG]...
E.g. time (timeout 3 sleep 5) will run for 3 secs.
So in your code you can use:
timeout 300 scp -o StrictHostKeyChecking=no "$BACKUP":* .
This limits the copy to 5 minutes. Since some of your transfers take up to 2.5 hours, you would want a duration comfortably above the longest expected one, e.g. timeout 3h.
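If you also want to know which server was the problem, GNU timeout exits with status 124 when it has to kill the command, so inside the loop you can check for that (a sketch; the log message is illustrative):
timeout 3h scp -o StrictHostKeyChecking=no "$BACKUP":* .
if [ $? -eq 124 ]; then
    echo "Transfer from $BACKUP timed out, moving on" >&2
    continue
fi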
Can I configure bash to report how long each command takes to execute, if it's longer than some threshold?
I thought I recalled some setting for this, but can't find it either in bash(1) or google.
The idea, in case it's not clear, would be something like this:
% SUBCMDTMOUT=30
% sleep 29 # 29 seconds elapse
% sleep 30 # 30 seconds elapse
% sleep 31 # 31 seconds elapse
bash: subcommand `sleep 31' took 31 seconds to complete.
%
Prepend time to your command, then parse time's output and apply whatever condition you like to produce the report you want.
Example:
$ time sleep 15
real 0m15.003s
user 0m0.000s
sys 0m0.002s
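Alternatively, a wrapper function can do the threshold check without parsing at all, using bash's built-in SECONDS counter (a minimal sketch; run_timed and SUBCMDTMOUT are illustrative names, not bash features):
SUBCMDTMOUT=30
run_timed() {
    local start=$SECONDS              # SECONDS counts elapsed wall-clock seconds
    "$@"
    local elapsed=$((SECONDS - start))
    if [ "$elapsed" -gt "$SUBCMDTMOUT" ]; then
        echo "bash: subcommand \`$*' took $elapsed seconds to complete" >&2
    fi
}
run_timed sleep 31   # warns; run_timed sleep 29 would stay quiet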
@chepner probably has it right: REPORTTIME in zsh (though that only tracks CPU time; I suspect my mystery problem is some kind of network wait). But since I'm not motivated enough to convert my login shell for this, the specific answer to my question is "nope."
I have a simple bash script that runs some tasks which can take varying amounts of time to complete (from 15 mins to 5 hours). The script loops using a for loop, so that I can run it an arbitrary number of times, normally back-to-back.
However, I have been requested to have each iteration of the script start at the top of the hour. Normally, I would use cron and kick it off that way, every hour, but since the runtime of the script is highly variable, that becomes trickier.
It is not allowable for multiple instances of the script to be running at once.
So, I'd like to include the logic to wait for 'top of the hour' within the script, but I'm not sure of the best way to do that, or if there's some way to (ab)use 'at' or something more elegant like that. Any ideas?
You can still use cron. Just make your script use a lock file. With the flock utility you can do:
#!/bin/bash
exec 42> /tmp/myscriptname.lock   # open file descriptor 42 on the lock file
flock -n 42 || { echo "Previous instance still running"; exit 1; }
# rest of your script here
Now, simply schedule your job every hour in cron, and the new instance will simply exit if the old one's still running. There is no need to clean up any lock files.
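A matching crontab entry (the script path is illustrative) would be:
0 * * * * /path/to/myscriptname.sh
The job fires at the top of every hour; if the previous run is still going, the new instance exits immediately, otherwise it starts exactly on the hour, as requested.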
I need to run a shell script once a day, at a random time (so once every day, somewhere between 00:00 and 23:59).
I know the sleep command, and cron too, but:
cron has no random times,
and the sleep solution isn't very nice: the idea would be to launch the script every midnight and sleep a random amount of time at the start of the script.
Is there something more elegant?
If you have the at command, you can combine cron and at.
Run from a cron every midnight the next script:
#!/bin/bash
script="/tmp/script.sh"   # insert the path to your script here
min=$(( 24 * 60 ))        # minutes in a day
rmin=$(( RANDOM % min ))  # random minute offset within the day
at -f "$script" now + ${rmin} minutes
The above, run from cron every midnight, will queue your script at a random time during the day. You should check your crontab for how often the atrun command is started. (atrun runs the commands stored with at.)
The main benefit in comparison with the sleep method: this "survives" the system reboot.
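The crontab entry for the scheduler itself (the path is illustrative) might look like:
0 0 * * * /path/to/schedule-random.sh   # queue the day's job at midnight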
I would simply launch your script at midnight and sleep for a random time between 0 and 86400 seconds. Since my bash's $RANDOM returns a number between 0 and 32767, a single call can't cover the whole range, so combine two:
sleep $(( (RANDOM % 1440) * 60 + (RANDOM % 60) ))   # a random second of the day (0-86399)
The best alternative to cron is probably at
See the at man page.
Usually, at reads commands from standard input, but you can give it a file of jobs with -f.
Time-wise, you can specify many formats. Maybe in your case the most convenient would be
at -f jobs now + xxx minutes
where your script supplies xxx as a random value from 1 to 1440 (the number of minutes in a day), and jobs contains the commands you want executed.
Nothing prevents you from running sed to patch your crontab as the last thing your program does and just changing the next start time. I wouldn't sleep well though.
You can use cron to launch a bash script which generates a pseudorandom timestamp and hands it to the Unix program at.
You seem familiar enough with bash and cron, so at will be a piece of cake for you. Documentation, as always, is in man at, or you can try the wiki:
http://en.wikipedia.org/wiki/At_(Unix)
Is there a simple way to do the equivalent of this, but run the two processes concurrently with bash?
$ time sleep 5; sleep 8
time should report a total of 8 seconds (or the amount of time of the longest task)
$ time (sleep 5 & sleep 8 & wait)
real 0m8.019s
user 0m0.005s
sys 0m0.005s
Without any arguments, the shell built-in wait waits for all backgrounded jobs to complete.
Using sleeps as examples.
If you want to only time the first process, then
time sleep 10 & sleep 20
If you want to time both processes, then
time (sleep 10 & sleep 20)
Note that this measures until the foreground command in the subshell finishes; if the backgrounded command could be the longer one, add a wait before the closing parenthesis, as in the answer above.
time sleep 8 & time sleep 5
The & operator causes the first command to run in the background, which practically means the two commands run concurrently; each time reports its own command's duration separately.
Sorry my question may not have been exactly clear the first time around, but I think I've found an answer, thanks to some direction given here.
time sleep 5 & time sleep 8
will time both processes while they run concurrently, then I'll just take the larger result.
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
time parallel sleep ::: 5 8
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1