The problem I want to tackle is as follows. I have a long-running (1 to 2 hours) task that has to be run every day. So the go-to option was cron. But the catch is that I have to leave a 24-hour gap between successive runs, so using cron would involve rewriting the cron job file after every run. This might be clearer after this example.
The long-running job 'LR' starts at 6 PM on Monday and finishes at 7:30 PM the same day.
On Tuesday it is supposed to start at 7:30 PM, not 6 PM (like it did on Monday), because there has to be a 24-hour gap between successive runs.
The obvious option here was to have a process running an infinite loop: start the LR job, then sleep for 24 hours, and continue with the loop. This works perfectly too. In my setup there is a bash script running this loop:
while [ 1 == 1 ]; do
    /bin/jobs/long_run.py    # the long-running (1-2 hour) job
    /bin/jobs/cleanup.sh
    sleep 86400              # wait 24 hours before the next run
done
So my question is: what is the total amount of CPU time spent, and what is the RAM usage?
Not sure if this affects the answer in any way; I'm running this on Termux on an Android phone.
Also, please recommend other lightweight options.
There is nothing to worry about regarding resources: while a script executes sleep, it really sleeps, consuming essentially no CPU and only a tiny amount of memory. What you should worry about is anything that happens between two executions, like a restart or downtime. This structure:
while true; do
    sh script.sh
    sleep 86400
done
does not resume after a reboot, and you don't save the time of the next execution anywhere. A similar structure is to use a wrapper; suppose f() is your job:
f() {
    echo working
}

wrapper() {
    f
    echo sleeping
    sleep 86400
    wrapper
}

wrapper
So now you call the wrapper, which works, sleeps, and calls itself. You could use just this if you are OK with what could go wrong; at the very least, print the datetime somewhere.
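For example, a minimal sketch of recording when the next run is due (the state-file path is illustrative, and a GNU-style date is assumed):
while true; do
    sh script.sh
    # remember when the next run is due, so after a reboot you can
    # tell whether a run was missed (path and format are just examples)
    date -d 'now + 24 hours' '+%F %T' > /var/tmp/next_run.txt
    sleep 86400
done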
You can replace the internal sleep and the recursive wrapper call with job scheduling via cron or at. at is probably not a standard package on all distributions (at least not on mine) while cron is; you can install it. For at, the wrapper would look like this:
wrapper() {
    f
    # at cannot run a shell function and does not take the command as an
    # argument; it reads commands from stdin (or a file with -f), so
    # schedule this same script to run itself again in one day
    echo "$0" | at now + 1 day
}
With cron, you could edit the crontab directly, but it is better to generate it from a script: what you have to do is parse the output of the date command, build the time-field prefix, and update the crontab.
Note: there may be other cron jobs for the user, existing or added later; you need to preserve those when you rewrite the crontab.
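A rough sketch of that idea, run at the end of each job; the job path and the marker comment are illustrative, and a standard crontab command is assumed:
JOB='/bin/jobs/long_run.py'   # assumed path from the question
MARK='# LR-job'               # marker so we can find and replace our own entry
# fire daily at the current minute and hour; each run rewrites this at its
# finish time, so the start time drifts later by the job's runtime
prefix=$(date '+%M %H')
{ crontab -l 2>/dev/null | grep -vF "$MARK"; echo "$prefix * * * $JOB $MARK"; } | crontab -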
Related
Can anyone give me a hint on how to write a timer shell script so that, e.g., once it gets to tomorrow midnight, it will run other scripts that are already prepared in the same directory? Thanks a lot.
If you don't have the ability to use cron or at, you can do this with a script. The key commands are sleep to kill time and date to get the current time. Something like (untested sh script):
while sleep 600; do
    time=$(date +%H)
    if [ "${time}" = '00' ]; then
        echo Now is the time
        break
    fi
done
This technique allows running scripts periodically on systems where access to cron and at is disabled. The break is there because this is a one-time run. Adjust the sleep time to meet your needs. date can output many different fields that can be used to decide whether the desired time has arrived. For simple periodic runs the if statement can be removed, as in the sketch below.
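A purely periodic variant might look like this (the job path is illustrative):
# no fixed time of day: just run the job every 10 minutes, forever
while sleep 600; do
    /path/to/job.sh
done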
I have a simple bash script that runs some tasks which can take varying amounts of time to complete (from 15 mins to 5 hours). The script loops using a for loop, so that I can run it an arbitrary number of times, normally back-to-back.
However, I have been requested to have each iteration of the script start at the top of the hour. Normally, I would use cron and kick it off that way, every hour, but since the runtime of the script is highly variable, that becomes trickier.
It is not allowable for multiple instances of the script to be running at once.
So, I'd like to include the logic to wait for 'top of the hour' within the script, but I'm not sure of the best way to do that, or if there's some way to (ab)use 'at' or something more elegant like that. Any ideas?
You can still use cron. Just make your script use a lock file. With the flock utility you can do:
#!/bin/bash
# open file descriptor 42 on the lock file (created if it does not exist)
exec 42> /tmp/myscriptname.lock
# try to take an exclusive lock without blocking; exit if another
# instance still holds it
flock -n 42 || { echo "Previous instance still running"; exit 1; }
# rest of your script here
Now, simply schedule your job every hour in cron; the new instance will just exit if the old one is still running. There is no need to clean up any lock files.
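For example, a crontab entry like this (the script path is illustrative) starts it at the top of every hour, and the lock makes overlapping runs impossible:
0 * * * * /path/to/myscriptname.sh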
I have written a script to initiate multi-processing
# launch $1 php workers in the background, then wait for all of them
for i in $(seq 1 "$1"); do
    /usr/bin/php index.php name &
done
wait
A cron job runs every minute: myscript.sh 3. Now three background processes get started, and after some time I see the list of processes via the ps command. I see that all the processes are together in "Sleep" or "Running" state. Now I want to achieve that when one goes to sleep, the other processes keep processing. How can I achieve that? Or is this normal?
This is normal. A program that can run will be given CPU time by the operating system when possible. If all three are sleeping, they are most likely waiting on I/O or some other event, or the system is busy and time is being given to other processes.
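If you want to confirm what state each worker is in, ps can show it directly; a sketch assuming a procps-style ps and that the workers are the php processes from the question:
# STAT: R = running, S = sleeping (waiting for an event), D = waiting on I/O
ps -o pid,stat,etime,cmd -C php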
Is there a way to create a bash script that will only run for X hours? I'm currently setting up a cron job to initiate a script every night. This script essentially runs until a certain condition is met, exporting its status to a holding variable to keep track of 'where it is' after each iteration. The intention is to start up the process every night, run for a few hours, and then stop, holding the status until the process starts up the next night.
Short of somehow collecting the start time, and checking it against the current time in each iteration of the loop, is there an easier way to do this? Bash scripting is not my forte (I know enough to get things done and be dangerous) and I have not done something like this before. Any help would be appreciated. Thanks.
Use GNU Coreutils
GNU coreutils contains an actual timeout binary, usually invoked like this:
# timeout after 5 seconds when sleeping for 30
/usr/bin/timeout 5s /bin/sleep 30
In your case, you'd want to specify hours instead of seconds, so to timeout in 2 hours use something like 2h instead of 5s. See timeout(1) or info coreutils 'timeout invocation' for additional options.
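For instance, the nightly cron entry could wrap the job directly (the start time and script path are just examples):
# start at 22:00 every night, but give up after at most 2 hours
0 22 * * * /usr/bin/timeout 2h /path/to/nightly_job.sh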
Hacks and Workarounds
Native timeouts or the GNU timeout command are really the best options. However, see the following for some ideas if you decide to roll your own:
How do I run a command, and have it abort (timeout) after N seconds?
The TMOUT variable using read and process or command substitution.
Do it as you described; it is the cleanest way.
But if for some strange reason you want to kill the process after a set time, you can use the following:
./long_runner &
# after 5 seconds, kill the job started above ($! holds its PID)
(sleep 5; kill $!; wait; exit 0) &
This will kill long_runner after 5 seconds.
By using the SIGALRM facility you can rig a signal to be sent after a certain time, but traditionally, this was not easily accessible from shell scripts (people would write small custom C or Perl programs for this). These days, GNU coreutils ships with a timeout command which does this by wrapping your command:
timeout 4h yourprogram
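If the program might ignore the default SIGTERM, timeout can also follow up with SIGKILL a little later, e.g.:
# TERM after 4 hours; if it is still alive 30 seconds later, KILL it
timeout -k 30s 4h yourprogram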
I need to run a shell script once a day at a random time (so once every day, between 00:00 and 23:59).
I know the sleep command and cron too, but:
cron has no random times,
and the sleep solution is not very nice; my idea is to launch the script every midnight and sleep for a random time at the start of the script.
Is there something more elegant?
If you have the at command, you can combine cron and at.
Run the following script from cron every midnight:
#!/bin/bash
script="/tmp/script.sh"    # insert the path to your script here
min=$(( 24 * 60 ))         # minutes in a day
rmin=$(( RANDOM % min ))   # random offset within the day, in minutes
at -f "$script" now + ${rmin} minutes
The above will run the at command every midnight and will execute your script at a random time. You should check in your crontab how often the atrun command is started (atrun runs the commands stored with at).
The main benefit in comparison with the sleep method: this "survives" a system reboot.
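The midnight crontab entry to drive it could look like this (the launcher path and name are illustrative):
0 0 * * * /usr/local/bin/random_at_launcher.sh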
I would simply launch your script at midnight, and sleep for a random time between 0 and 86400 seconds. Since my bash's $RANDOM returns a number between 0 and 32767:
sleep $(( ($RANDOM % 1440)*60 + ($RANDOM % 60) ))
The best alternative to cron is probably at
See at man page
Usually, at reads commands from standard input, but you can give a file of jobs with -f.
Time-wise, you can specify many formats. Maybe in your case the most convenient would be:
at -f jobs now + xxx minutes
where your script gives xxx as a random value from 1 to 1440 (1440 minutes in a day), and jobs contains the commands you want to be executed.
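A minimal sketch of the midnight launcher under those assumptions (the jobs file is assumed to sit in the launcher's working directory):
#!/bin/bash
# random offset of 1 to 1440 minutes, i.e. somewhere in the next 24 hours
xxx=$(( RANDOM % 1440 + 1 ))
at -f jobs now + ${xxx} minutes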
Nothing prevents you from running sed to patch your crontab as the last thing your program does and just changing the next start time. I wouldn't sleep well though.
You can use cron to launch a bash script, which generates a pseudorandom timestamp and gives it to the Unix program at.
I see you are familiar enough with bash and cron, so at will be a piece of cake for you. Documentation, as always: "man at", or you can try the wiki:
http://en.wikipedia.org/wiki/At_(Unix)