This question already has answers here:
Timeout a command in bash without unnecessary delay
(24 answers)
Closed 1 year ago.
In my bash script I run a command that starts another script. I repeat this command many times in a for loop, and I want to wait until that script is finished before running it again. My bash script is as follows:
for k in $(seq 1 5)
do
    sed_param='s/mu = .*/mu = '${mu}';/'
    sed -i "$sed_param" brusselator.c
    make brusselator.tst &
done
As far as I know the & at the end lets the script know to wait until the command is finished, but this isn't working. Is there some other way?
Furthermore, sometimes the command can take very, very long; in that case I would want to wait at most 5 seconds. But if the command is done earlier I wouldn't want to wait the full 5 seconds. Is there some way to achieve this?
There is the timeout command. You would use it like:
timeout 5 make brusselator.tst
Maybe you would also like to see whether it exited successfully, failed, or was killed because it timed out:
timeout 5 make brusselator.tst && echo OK || echo "Failed, status $?"
If the command times out and --preserve-status is not set, it exits with status 124. A different status means that make failed for some other reason before timing out.
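Put together with the loop from the question, a rough sketch (assuming mu is set somewhere above, as in the original script) could look like this; note that dropping the trailing & is what makes the loop wait for each make:
for k in $(seq 1 5)
do
    sed -i "s/mu = .*/mu = ${mu};/" brusselator.c
    # runs in the foreground, so the loop waits, but gives up after 5 seconds
    timeout 5 make brusselator.tst
done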
Related
I'm on FreeBSD 9.2 (I have to use this operating system). I want to run multiple scripts with the at command, but I want to prevent the same script from being run twice at the same time.
For example: I have 3 script files: 1.sh, 2.sh, 3.sh
I have a job to execute 1.sh today at 16:20. When I run the at command again with the same time and the same script, the number of jobs in /var/at/jobs changes to 2. I want that duplicate to be ignored, while a different script such as 2.sh should still be able to run at the same time. Do you have any idea what I should do?
I don't know if I understood the problem correctly, but maybe the lockf command could help.
For example try this in one terminal:
$ lockf -t 0 /tmp/a.lock sleep 5
In another terminal run:
$ lockf -t 0 /tmp/a.lock echo "sleep finished"
In this example, as long as the sleep 5 command has not exited, trying to take the same lock from another command will get you something like:
lockf: /tmp/a.lock: already locked
A cron example:
15 4 * * * lockf -t 0 /tmp/poudriere.lock /usr/local/etc/poudriere.d/cron 12amd64 default
This prevents the script/app from running if the lock is already held, so you can probably get an idea of how to use it with at.
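For instance, a minimal sketch (the lock file name and script path are placeholders) that submits the job through at while letting lockf guarantee that only one instance of 1.sh runs at a time:
echo 'lockf -t 0 /tmp/1.sh.lock /path/to/1.sh' | at 16:20
The duplicate job still appears in /var/at/jobs, but if it fires while the first instance is running, lockf exits immediately instead of starting 1.sh a second time.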
This question already has answers here:
Timeout a command in bash without unnecessary delay
(24 answers)
Closed 6 years ago.
This is a CentOS 6.x box, on it I have two things that I need to run one right after the other - a shell script and a .sql script.
I want to write a shell script that calls the first script, lets it run and then terminates it after a certain number of hours, and then calls the .sql script (they can't run simultaneously).
I'm unsure how to do the middle part, that is terminating the first script after a certain time limit, any suggestions?
script.sh &
sleep 4h && kill $!
script.sql
This will wait 4 hours then kill the first script and run the second. It always waits 4 hours, even if the script exits early.
If you want to move on immediately, that's a little trickier.
script.sh &
pid=$!
# kill the script after 4 hours if it is still running; the backgrounded sleep
# keeps running the full 4 hours even if script.sh exits early, so kill's
# error about a stale PID is silenced with 2>/dev/null
sleep 4h && kill "$pid" 2> /dev/null &
wait "$pid"
script.sql
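If the timeout command from coreutils is available (CentOS 6 normally ships it), a sketch of the same idea without managing the PID yourself:
# give script.sh at most 4 hours; control returns as soon as it exits or is killed
timeout 4h script.sh
script.sql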
This question already has answers here:
How would I get a cron job to run every 30 minutes?
(6 answers)
Closed 7 years ago.
I want to schedule a command like ./example to run every 6 minutes: when the 6 minutes are up, the process should be killed and started again. How would I do that in Bash? I run CentOS.
I would make a cron job that runs every sixth minute and uses the timeout command to kill the process after, say, 5 minutes and 50 seconds.
This is a sample crontab rule:
*/6 * * * * cd /path/to/your/file && timeout -s9 350s ./example
It changes the working directory to where you have your script and then executes it. Note that I send signal 9 (SIGKILL) using the -s9 flag, which means "terminate immediately". In most cases you might want to send SIGTERM instead, which tells the script to exit gracefully; in that case, consider lowering the timeout value a bit further so the script has time to shut down before the next run starts. To send SIGTERM instead of SIGKILL, just remove the -s9 flag.
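For example, a variant of the rule above (just a sketch, the exact numbers are up to you) that sends SIGTERM first and falls back to SIGKILL only if the script ignores it:
*/6 * * * * cd /path/to/your/file && timeout -k 10s 340s ./example
Here ./example gets SIGTERM after 340 seconds and SIGKILL 10 seconds later, so it is gone before the next run starts at the 6-minute mark.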
You edit your crontab by running crontab -e
Replace mycommand in the script below...
#! /bin/bash
## create an example command to launch for demonstration purposes
function mycommand { D=$(date) ; while true ; do echo "$D" ; sleep 1s ; done ; }

while true
do
    mycommand & PID=$!
    sleep 6m
    kill $PID ; wait $PID 2>/dev/null
done
Every six minutes, this kills the command then restarts it.
Use Ctrl-C as one way to terminate this sequence.
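If all you need is the restart-every-six-minutes behaviour, a shorter sketch of the same loop uses timeout (assuming ./example is a real executable, as in the question; timeout cannot run a shell function like the demo mycommand above):
#!/bin/bash
while true
do
    # timeout sends SIGTERM to ./example after 6 minutes, then the loop restarts it
    timeout 6m ./example
done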
I have a bash script with a loop that calls a heavy calculation routine on every iteration. I use the results from each calculation as input to the next, so I need to make bash stop reading further in the script until each calculation is finished.
for i in $(cat calculation-list.txt)
do
    ./calculation
    (other commands)
done
I know about the sleep program, and I used to use it, but now the time the calculations take varies greatly.
Thanks for any help you can give.
P.S.
"./calculation" is another program that opens a subprocess, so the script passes instantly to the next step, and I get an error in the calculation because the previous one is not finished yet.
If your calculation daemon will work with a precreated empty logfile, then the inotify-tools package might serve:
touch "$logfile"
# watch in the background for the logfile being closed
inotifywait -qqe close "$logfile" & ipid=$!
./calculation
wait $ipid
That works if it closes the file just once.
If it's doing an open/write/close loop, perhaps you can modify the daemon process to wrap some other filesystem event around the execution?
#!/bin/sh
# Uglier, but handles logfile being closed multiple times before exit:
# Have the ./calculation start this shell script, perhaps by substituting
# this for the program it's starting
trap 'echo >closed-on-calculation-exit' 0 1 2 3 15
./real-calculation-daemon-program
Well, guys, I've solved my problem with a different approach. When the calculation is finished, a logfile is created. I then wrote a simple until loop with a sleep command. Although this is very ugly, it works for me and it's enough.
for i in $(cat calculation-list.txt)
do
    (calculations routine)
    # poll until the calculation has written its logfile
    until [[ -f $logfile ]]; do
        sleep 60
    done
    (other commands)
done
Easy. Get the process ID (PID) via some awk magic and then use wait to wait for that PID to end. Here are the details on wait from the Advanced Bash-Scripting Guide:
Suspend script execution until all jobs running in background have
terminated, or until the job number or process ID specified as an
option terminates. Returns the exit status of waited-for command.
You may use the wait command to prevent a script from exiting before a
background job finishes executing (this would create a dreaded orphan
process).
And using it within your code should work like this:
for i in $(cat calculation-list.txt)
do
    ./calculation >/dev/null 2>&1 & CALCULATION_PID=(`jobs -l | awk '{print $2}'`);
    wait ${CALCULATION_PID}
    (other commands)
done
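As a side note, the same effect can be had without parsing the jobs output, because bash stores the PID of the most recent background job in $!; a sketch with the same structure:
for i in $(cat calculation-list.txt)
do
    ./calculation >/dev/null 2>&1 &
    wait $!    # block until this iteration's calculation exits
    (other commands)
done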
This question already has answers here:
Timeout a command in bash without unnecessary delay
(24 answers)
Closed 9 years ago.
I'm writing a script and would like to know how to ask one of the commands to exit after a few seconds. For example, suppose my script runs 2 application commands in it.
#!/bin/bash
for i in `cat servers`
do
    <command 1> $i >> Output_file   #Consistency command
    <command 2> $i >> Output_file   #Communication check
done
These commands check consistency and communication to/from the application. I want to make sure that commands 1 and 2 each run for only a few seconds, and if there is no response from a particular host, move on to the next command.
GNU coreutils has a timeout command.
From manual:
DESCRIPTION
Start COMMAND, and kill it if still running after NUMBER seconds. SUFFIX may be "s" for seconds (the default), "m" for minutes, "h" for hours or "d" for days.
for example:
timeout 5 sleep 6
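Applied to the loop from the question, a sketch might look like the following, where check_consistency and check_communication are just stand-ins for <command 1> and <command 2>:
#!/bin/bash
for i in `cat servers`
do
    # give each check at most 5 seconds per host, then move on
    timeout 5 check_consistency "$i" >> Output_file
    timeout 5 check_communication "$i" >> Output_file
done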