Start multiple processes in Bash and time how long they take

How do I start multiple processes in bash and time how long they take?
From this question I know how to start multiple processes in a bash script, but using time script.sh doesn't work because the spawned processes finish after the script itself ends.
I tried using wait but that didn't change anything.
Here is the script in its entirety:
for i in `seq $1`
do
    ( ./client & )
done
wait # This doesn't seem to change anything
I'm trying to get the total time for all the processes to finish and not the time for each process.

Why the parentheses around the client invocation? They run the command in a subshell. Since the background job doesn't belong to the top-level shell, the wait is ineffective (there are no jobs in this shell to wait for).
Drop the parentheses so the jobs are started directly in the script's shell; then wait works, and timing the whole script should work, as shown below.
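A minimal corrected version of the script, using the names from the question:
#!/bin/bash
for i in $(seq "$1")
do
    ./client &        # no subshell: the job belongs to this shell
done
wait                  # blocks until every client has exited
Timing all the processes together is then simply: time ./script.sh 10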

Related

Parallelize bash scripts and conditional interruption

I've seen many questions about parallelizing bash scripts, but so far I haven't found one that answers my question.
I have a bash script that runs two Python scripts sequentially (the fact that they are Python scripts is not important; they could be any other jobs):
python script_1.py
python script_2.py
Now, assume that script_1.py takes a certain (unknown) time to finish, while script_2.py has an infinite loop in it.
I'd like to run the two scripts in parallel, and when script_1.py finishes the execution I'd like to kill script_2.py as well.
Note that I'm not interested in doing this within the Python scripts; I want to do it from the bash side.
What I thought was to create 2 "sub" bash scripts: bash_1.sh and bash_2.sh, and to run them in parallel from a main_bash.sh script that looks like:
bash_1.sh & bash_2.sh
where each bash_i.sh job runs a script_i.py script.
However, this wouldn't terminate the second infinite loop once the first one is done.
Is there a way of doing this, adding some sort of condition that kills one script when the other one is done?
As an additional (less important) point, I'd be interested in monitoring the terminal output
of the first script, but not of the second one.
If your scripts need to start in that sequence, you can start both in the background and wait for bash_1.sh to finish:
bash_1.sh &
b1=$!         # PID of bash_1.sh
bash_2.sh &
b2=$!         # PID of bash_2.sh
wait $b1      # block until bash_1.sh exits
kill $b2      # then terminate bash_2.sh
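As a side note, wait $b1 also hands back bash_1.sh's exit status, so the main script can check whether the first job succeeded before killing the second.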
It's simpler than you think. When bash_1.sh finishes, just kill bash_2.sh. The trick is getting the process ID that kill will need to do this.
bash_2.sh &     # start the infinite loop in the background
b2_pid=$!       # remember its PID
bash_1.sh       # run the finite script in the foreground
kill $b2_pid    # bash_1.sh has finished, so kill bash_2.sh
You can also use job control, if enabled.
bash_2.sh &
bash_1.sh
kill %%         # %% refers to the current (most recently backgrounded) job
Note that you don't need wrapper bash scripts for this; you can run your Python scripts directly in the same fashion:
python script_2.py &
python script_1.py
kill %%
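For the side question about output: a minimal sketch that keeps the first script's output on the terminal while silencing the second (the redirection is the only addition here):
python script_2.py >/dev/null 2>&1 &   # discard the infinite loop's output
python script_1.py                     # its output stays on the terminal
kill %%                                # stop the background job once script_1.py is done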

How to make bash interpreter stop until a command is finished?

I have a bash script with a loop that calls a heavy calculation routine on every iteration. I use the results from each calculation as input to the next, so I need to make bash stop reading the script until each calculation has finished.
for i in $(cat calculation-list.txt)
do
    ./calculation
    (other commands)
done
I know about the sleep program, and I used to use it, but now the time the calculations take varies greatly.
Thanks for any help you can give.
P.S. ./calculation is another program that opens a subprocess and returns immediately, so the script passes instantly to the next step, and I get an error in the calculation because the previous one has not finished yet.
If your calculation daemon will work with a precreated empty logfile, then the inotify-tools package might serve:
touch "$logfile"                               # pre-create the empty logfile
inotifywait -qqe close "$logfile" & ipid=$!    # watch for the daemon closing it
./calculation                                  # returns at once; the daemon keeps running
wait $ipid                                     # blocks until the logfile is closed
This works if the daemon closes the file just once.
If it's doing an open/write/close loop, perhaps you can modify the daemon process to wrap some other filesystem event around the execution:
#!/bin/sh
# Uglier, but handles logfile being closed multiple times before exit:
# Have the ./calculation start this shell script, perhaps by substituting
# this for the program it's starting
trap 'echo >closed-on-calculation-exit' 0 1 2 3 15
./real-calculation-daemon-program
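The main script can then wait for the wrapper's sentinel file instead of the logfile. A sketch along the lines of the first snippet (the sentinel name matches the trap above; how ./calculation ends up running the wrapper is an assumption):
touch closed-on-calculation-exit                              # pre-create the sentinel
inotifywait -qqe close_write closed-on-calculation-exit & ipid=$!
./calculation                                                 # eventually runs the wrapper above
wait $ipid                                                    # returns when the wrapper's trap fires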
Well, guys, I've solved my problem with a different approach. When the calculation is finished, a logfile is created, so I wrote a simple until loop with a sleep command. Although this is very ugly, it works for me and it's enough.
for i in $(cat calculation-list.txt)
do
    (calculations routine)
    until [[ -f $logfile ]]; do    # poll until the calculation's logfile appears
        sleep 60
    done
    (other commands)
done
Easy. Get the process ID (PID) via some awk magic and then use wait to wait for that PID to end. Here are the details on wait from the Advanced Bash-Scripting Guide:
Suspend script execution until all jobs running in background have
terminated, or until the job number or process ID specified as an
option terminates. Returns the exit status of waited-for command.
You may use the wait command to prevent a script from exiting before a
background job finishes executing (this would create a dreaded orphan
process).
And using it within your code should work like this:
for i in $(cat calculation-list.txt)
do
    ./calculation >/dev/null 2>&1 &
    CALCULATION_PID=$(jobs -l | awk '{print $2}')    # or simply: CALCULATION_PID=$!
    wait ${CALCULATION_PID}
    (other commands)
done

Run variable length bash script at the top of the hour without cron

I have a simple bash script that runs some tasks which can take varying amounts of time to complete (from 15 mins to 5 hours). The script loops using a for loop, so that I can run it an arbitrary number of times, normally back-to-back.
However, I have been requested to have each iteration of the script start at the top of the hour. Normally, I would use cron and kick it off that way, every hour, but since the runtime of the script is highly variable, that becomes trickier.
It is not allowable for multiple instances of the script to be running at once.
So, I'd like to include the logic to wait for 'top of the hour' within the script, but I'm not sure of the best way to do that, or if there's some way to (ab)use 'at' or something more elegant like that. Any ideas?
You can still use cron. Just make your script use a lock file. With the flock utility you can do:
#!/bin/bash
exec 42> /tmp/myscriptname.lock       # open file descriptor 42 on the lock file
flock -n 42 || { echo "Previous instance still running"; exit 1; }
# ... rest of your script here
Now, simply schedule your job every hour in cron, and the new instance will simply exit if the old one's still running. There is no need to clean up any lock files.
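For completeness, the matching crontab entry fires at the top of every hour (the script path is a placeholder):
0 * * * * /path/to/myscriptname.sh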

How to make a shell script wait for another without using sleep

I want to know how to make a shell script wait till another script finishes its execution, without the help of the sleep command.
Suppose I have two scripts, run.sh and kill.sh, where run.sh brings all the processes up (i.e., starts running the image on the box), whereas kill.sh contains just the kill commands to kill all the running processes.
Whenever I run run.sh, it brings all the processes up and then exits. What happens then is that all the running processes become orphans (handled by init). Whenever we run kill.sh, some of the processes become zombies; that is, orphan processes become zombies.
To avoid this, I want to make run.sh wait till the end of the kill.sh script.
So, how do I make a shell script wait for another script? Please provide comments.
Thanks in advance.
You can use wait to let the first script finish without an explicit sleep. Note that wait only waits for children of the current shell, so the first script must be started in the background:
#!/bin/bash
./first_script.sh &    # run as a background child of this shell
wait                   # blocks until all background children have exited
./second_script.sh

How to Parse Values from output in BASH

I'm writing a script that should create a rotating series of debug logs as it runs over a period of time. My current problem is that when I ran it with -vx attached, I could see that it stops during the actual debugging command and doesn't proceed through the loop, which reflects how the command runs normally. So, to let the loop continue, I want to run the command with &.
The problem is that this will become increasingly messy over time (since none of the processes are stopping). So what I'm looking for is a way to capture the PID of the command started with & in a variable; then I will add a kill command at the start of the loop aimed at that variable.
Figuring out how to parse the output of commands will also be useful in the other part of my project, which is to terminate the while loop based on a particular free-space percentage from df -h for a selected partition.
No parsing needed. The PID of the most recent background process is stored in $!.
command &    # run command in background
pid=$!       # save its PID as $pid
...
kill $pid    # kill the command later
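For the second part of the question, a hedged sketch of reading a partition's free-space percentage out of df inside the loop (the mount point /data and the 10% threshold are assumptions):
while true; do
    # ... rotating-debug-log work from the question goes here ...
    used=$(df -P /data | awk 'NR==2 { sub(/%/, "", $5); print $5 }')   # Use% column, % stripped
    if [ $((100 - used)) -le 10 ]; then
        echo "less than 10% free on /data; terminating loop"
        break
    fi
done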
