This question already has answers here: How would I get a cron job to run every 30 minutes? (6 answers)
I want to schedule a command like ./example to run every 6 minutes; when the 6 minutes are up, the process should be killed and started again. How would I do that in Bash? I'm running CentOS.
I would make a cron job that runs every six minutes and use the timeout command to kill the process a little before the next run starts, say after 4 minutes and 50 seconds (290 seconds).
This is a sample crontab rule:
*/6 * * * * cd /path/to/your/file && timeout -s9 290s ./example
It changes the working directory to where your script lives and then executes the script. Note that it sends signal 9 (SIGKILL) via the -s9 flag, which means "terminate immediately" and gives the script no chance to clean up. In most cases you should consider sending SIGTERM instead, which asks the script to exit gracefully; if so, you may want to decrease the timeout value a bit further so the script has more time to shut down before the next run starts. To send SIGTERM (the default signal) instead of SIGKILL, just remove the -s9 flag.
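For example, the same rule relying on the default SIGTERM could look like this (a sketch, reusing the path and timeout from above):
*/6 * * * * cd /path/to/your/file && timeout 290s ./example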
You edit your crontab by running crontab -e.
Replace mycommand in the script below...
#!/bin/bash
## create an example command to launch for demonstration purposes
function mycommand { D=$(date); while true; do echo "$D"; sleep 1s; done; }

while true
do
    mycommand & PID=$!
    sleep 6m
    kill "$PID"; wait "$PID" 2>/dev/null
done
Every six minutes, this kills the command then restarts it.
Use Ctrl-C as one way to terminate this sequence.
Related
This question already has answers here: Timeout a command in bash without unnecessary delay (24 answers)
In my bash script I run a command that activates a script. I repeat this command many times in a for loop, so I want to wait until the script has finished before running it again. My bash script is as follows:
for k in $(seq 1 5)
do
    sed_param='s/mu = .*/mu = '${mu}';/'
    sed -i "$sed_param" brusselator.c
    make brusselator.tst &
done
As far as I know the & at the end lets the script know to wait until the command is finished, but this isn't working. Is there some other way?
Furthermore, sometimes the command can take very long; in that case I would want to wait at most 5 seconds. But if the command finishes earlier, I wouldn't want to wait the full 5 seconds. Is there some way to achieve this?
There is the timeout command. You would use it like this:
timeout 5 make brusselator.tst
Maybe you would also like to see whether it exited successfully, failed, or was killed because it timed out.
timeout 5 make brusselator.tst && echo OK || echo "Failed, status $?"
If the command times out and --preserve-status is not set, it exits with status 124. Any other non-zero status means that make itself failed for a different reason before the timeout.
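Applied to the loop above, a sketch could look like this (dropping the stray & so each make runs in the foreground; mu is assumed to be set earlier in the script):
for k in $(seq 1 5)
do
    sed_param='s/mu = .*/mu = '${mu}';/'
    sed -i "$sed_param" brusselator.c
    timeout 5 make brusselator.tst    # give up after 5 seconds
done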
I basically need to automate running 2 commands in separate terminals.
while :
do
    timeout 10 gnome-terminal --geometry=95x56 -e "COMMAND1" &&
    timeout 7 gnome-terminal -e "COMMAND2" &&
    sleep 30
done
Expected behavior:
A terminal opens, running COMMAND1 for 10 seconds, then closes
A second terminal opens, runs COMMAND2 for 7 seconds, then closes
30 seconds pass
The cycle repeats
Actual behavior:
Both COMMAND1 and COMMAND2 start at the same time
COMMAND1 is displayed in the terminal, but does not actually run.
What's going on here?
The following, as intended, lets only the first command stay open while running the second in a loop. The key difference is that timeout is applied to the command inside the terminal (gnome-terminal -- timeout 7 COMMAND2) rather than to gnome-terminal itself, which typically returns almost immediately after spawning the window:
COMMAND1 &
while :
do
    sleep 15
    gnome-terminal -- timeout 7 COMMAND2 &&
    sleep 30
done
This question already has answers here: Timeout a command in bash without unnecessary delay (24 answers)
This is a CentOS 6.x box. On it I have two things that I need to run one right after the other: a shell script and a .sql script.
I want to write a shell script that calls the first script, lets it run and then terminates it after a certain number of hours, and then calls the .sql script (they can't run simultaneously).
I'm unsure how to do the middle part, that is, terminating the first script after a certain time limit. Any suggestions?
script.sh &
sleep 4h && kill $!
script.sql
This will wait 4 hours then kill the first script and run the second. It always waits 4 hours, even if the script exits early.
If you want to move on as soon as the first script finishes, instead of always waiting the full 4 hours, that's a little trickier.
script.sh &
pid=$!
sleep 4h && kill "$pid" 2> /dev/null &
wait "$pid"
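Putting it together, a minimal sketch of the whole wrapper might look like this (the psql invocation is only a placeholder for however you actually run the .sql file):
#!/bin/bash
script.sh &
pid=$!

# Watchdog: kill script.sh if it is still running after 4 hours.
( sleep 4h && kill "$pid" 2> /dev/null ) &

# Returns as soon as script.sh exits on its own or is killed by the watchdog.
wait "$pid"

# Run the SQL step (placeholder; use whatever client you normally use).
psql -f script.sql
If script.sh finishes early, the watchdog's sleep keeps running in the background until the 4 hours are up, but its kill then targets a process that is almost certainly gone and any error is silenced by the redirect.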
This question already has answers here: Cron jobs and random times, within given hours (13 answers)
I need to run a shell script once a day, at a random time (so once every day, some time between 00:00 and 23:59).
I know the sleep command, and cron too, but
cron has no random times,
and the sleep solution is not very nice - the idea would be to launch the script every midnight and sleep for a random time at the start of the script.
Is there something more elegant?
If you have the at command, you can combine cron and at.
Run the following script from cron every midnight:
#!/bin/bash
script="/tmp/script.sh" #insert the path to your script here
min=$(( 24 * 60 ))
rmin=$(( $RANDOM % $min ))
at -f "$script" now+${rmin}min
The above runs the at command every midnight, and at then executes your script at a random time during the day. Check your crontab to see how often the atrun command is started (atrun runs the commands queued with at).
The main benefit in comparison with the sleep method: this "survives" a system reboot.
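The midnight cron entry for that wrapper could look like this (a sketch; /usr/local/bin/schedule-random.sh is just an assumed location for the script above):
0 0 * * * /usr/local/bin/schedule-random.sh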
I would simply launch your script at midnight and sleep for a random time between 0 and 86400 seconds. Since bash's $RANDOM only returns a number between 0 and 32767, I combine a random number of minutes with a random number of seconds:
sleep $(( ($RANDOM % 1440)*60 + ($RANDOM % 60) ))
The best alternative to cron is probably at. See the at man page.
Usually, at reads commands from standard input, but you can give a file of jobs with -f.
Time-wise, you can specify many formats. Maybe in your case the most convenient would be
at -f jobs now + xxx minutes
where your script gives xxx as a random value from 1 to 1440 (there are 1440 minutes in a day), and jobs contains the commands you want to be executed.
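For example, the random offset could be generated inline (a sketch, assuming the jobs file is in the current directory):
at -f jobs now + $(( RANDOM % 1440 + 1 )) minutes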
Nothing prevents you from running sed to patch your crontab as the last thing your program does and just changing the next start time. I wouldn't sleep well though.
You can use cron to launch a bash script, which generates a pseudorandom timestamp and gives it to the Unix program at.
I see you are familiar enough with bash and cron, so at will be a piece of cake for you. Documentation, as always: man at, or you can try the wiki:
http://en.wikipedia.org/wiki/At_(Unix)
This question already has answers here: Timeout a command in bash without unnecessary delay (24 answers)
I'm writing a script and would like to know how to make one of the commands exit after a few seconds. For example, let's suppose my script runs two application commands:
#!/bin/bash
for i in `cat servers`
do
    <command 1> $i >> Output_file    # Consistency command
    <command 2> $i >> Output_file    # Communication check
done
These commands check consistency and communication to/from the application. I want to know how to make sure that commands 1 and 2 each run for only a few seconds, and if there is no response from a particular host, move on to the next command.
GNU coreutils has the timeout command.
From the manual:
DESCRIPTION
Start COMMAND, and kill it if still running after NUMBER seconds. SUFFIX may be "s" for seconds (the default), "m" for
minutes, "h" for hours or "d" for days.
For example:
timeout 5 sleep 6
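Applied to the loop in the question, a minimal sketch could look like this (check_consistency and check_communication are hypothetical stand-ins for the two application commands, and 5 seconds is an arbitrary limit):
#!/bin/bash
# check_consistency and check_communication are hypothetical placeholders
# for the two application commands from the question.
for i in $(cat servers)
do
    timeout 5 check_consistency "$i" >> Output_file     # consistency check, give up after 5 seconds
    timeout 5 check_communication "$i" >> Output_file   # communication check, give up after 5 seconds
done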