I have 3 shell scripts: a.sh, b.sh and c.sh. The scripts a.sh and b.sh are to be run in parallel, and c.sh should run only if both a.sh and b.sh exit successfully (with exit code 0).
The parallel execution works fine, but the sequential execution of c.sh is out of order: it runs after a.sh and b.sh complete even when they do not return exit code 0. Below is the code I used.
#!/bin/sh
A.sh &
B.sh &
wait &&
C.sh
How can this be changed to meet my requirement?
#!/bin/bash
./a.sh & a=$!
./b.sh & b=$!
if wait "$a" && wait "$b"; then
    ./c.sh
fi
Hey, I did some testing: my a.sh had an exit 255 and my b.sh had an exit 0, and c.sh was executed only when both had an exit code of 0.
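That test can be reproduced without the real scripts; here is a self-contained sketch where subshells stand in for a.sh and b.sh (the exit codes are illustrative):

```shell
#!/usr/bin/env bash
# Stand-ins for a.sh (succeeds) and b.sh (fails with a non-zero status).
(exit 0) & a=$!
(exit 1) & b=$!

# wait PID returns that job's exit status, even if it already finished.
if wait "$a" && wait "$b"; then
    result="both succeeded: run c.sh"
else
    result="at least one failed: skip c.sh"
fi
echo "$result"
```

Swapping the `exit 1` for `exit 0` flips the outcome, which matches the behaviour described above.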
You can try this:
.....
wait &&
if [ -z "$(the script a.sh, or whatever command you need)" ]
then
    # do something
    C.sh
fi
I am trying to run a script that runs another script; after it has run I want to retrieve its exit code and do something with it, but I only get 0 every time.
I also tried different approaches, such as sourcing it or wrapping the command in a function, and I also gave both scripts execution rights with chmod +x.
a.sh
#!/usr/bin/env bash
mkdir data
mkdir -p data/test
b.sh
#!/usr/bin/env bash
set -x
(bash a.sh) &
pid=$!
wait $pid
exitCode=$?
echo $pid
echo $exitCode
Result with bash b.sh:
+ pid=7399
+ wait 7399
+ bash a.sh
mkdir: data: File exists
+ exitCode=0
+ echo 7399
7399
+ echo 0
0
Result with bash a.sh:
mkdir: data: File exists (no exit code)
Result with mkdir data:
mkdir: data: File exists (exit code 1)
I know I can do mkdir -p data but this is just a test to get any exit code that I can work with later in my script.
Bash version: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)
Solved it by using: set -e
Reference: https://ss64.com/bash/set.html
This makes the script exit immediately when a command fails. In my case the second command in a.sh returned 0, and that's why I got 0 every time.
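An alternative to set -e is to capture $? immediately after the command whose status you care about; if any other command runs in between, $? is overwritten. A small sketch (the mktemp scratch directory is just for the demo):

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)                # scratch directory for the demo

mkdir "$dir/data"               # first call succeeds
mkdir "$dir/data" 2>/dev/null   # second call fails: directory already exists
code=$?                         # capture the exit status immediately
echo "mkdir exit code: $code"

rm -rf "$dir"
```

Here `code` ends up as 1, because nothing ran between the failing mkdir and the `$?` capture.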
I have scripts a.sh and b.sh, to which I have to pass an IP address as an argument.
I tried running them as
sh -x a.sh 172.19.57.21 & b.sh 172.19.57.21 &
But I see that only the first script runs.
When you run sh -x a.sh 172.19.57.21 & b.sh 172.19.57.21 &:
sh -x a.sh 172.19.57.21 is one command; & sends it to the background immediately
b.sh 172.19.57.21 is another command; again & puts it in the background
The problem, it seems to me, is that b.sh is not executable, and since you are not running it as an argument to a shell (unlike a.sh), it fails the PATH search.
You can run b.sh as shell's argument as well e.g:
sh a.sh 172.19.57.21 & sh b.sh 172.19.57.21 &
Or if both scripts are executables and have proper shebang:
./a.sh 172.19.57.21 & ./b.sh 172.19.57.21 &
I would recommend a wrapper to get the argument IP address once, and call required scripts from the wrapper, something like a tiny function would do:
wrapper() {
    /path/to/a.sh "$@" &
    /path/to/b.sh "$@" &
}
Now, you can just do e.g.:
wrapper 172.19.57.21
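A runnable sketch of the wrapper idea, with shell functions standing in for a.sh and b.sh and a temp file (both are assumptions for the demo) collecting their output so it can be inspected after the background jobs finish:

```shell
#!/usr/bin/env bash
log=$(mktemp)   # scratch file so the background jobs' output can be checked

# Stand-ins for a.sh and b.sh: each just records which IP it received.
a() { echo "a got: $1" >> "$log"; }
b() { echo "b got: $1" >> "$log"; }

wrapper() {
    a "$@" &    # "$@" forwards the wrapper's arguments unchanged
    b "$@" &
    wait        # block until both background jobs finish
}

wrapper 172.19.57.21
sort "$log"     # sort, since the two jobs may complete in either order
```

The key detail is `"$@"`, which expands to all of the wrapper's arguments, each individually quoted; `"$#"` would only pass the argument count.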
Note that & already terminates a command, so adding ; after it is a syntax error; the two ampersands alone are enough:
sh -x a.sh 172.19.57.21 & sh -x b.sh 172.19.57.21 &
I have 4 shell scripts; I want to execute the first 3 in parallel. After successful completion of all 3, I want to execute the 4th.
Parallel execution:
sh script1.sh,
sh script2.sh,
sh script3.sh
script4.sh should execute after all 3 have completed.
bash 4.3 added a -n flag to wait that lets it wait for any one background job to complete. For a fixed number of background jobs, you could use something like
script1.sh &
script2.sh &
script3.sh &
wait -n && wait -n && wait -n && script4.sh
For a large or variable number of background jobs, Kurt's answer is better.
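A self-contained version of that chain, with subshells standing in for the three scripts (requires bash >= 4.3 for wait -n; the sleeps and the final echo are illustrative):

```shell
#!/usr/bin/env bash
(sleep 0.2; exit 0) &
(sleep 0.1; exit 0) &
(sleep 0.3; exit 0) &

# Each wait -n reaps one finished job and returns its status,
# so the chain only reaches the final step if all three succeeded.
if wait -n && wait -n && wait -n; then
    status="all three succeeded: run script4.sh"
else
    status="a job failed: skip script4.sh"
fi
echo "$status"
```

One caveat: if a job fails, the && chain stops early and the remaining jobs are not reaped, which is another reason the loop-over-pids approach below scales better.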
In bash you can do:
pids=
for s in script1.sh script2.sh script3.sh; do
    $s &
    pids="$pids $!"
done
JOBS_FAILED=false
for pid in $pids; do
    if ! wait $pid; then
        # script didn't exit successfully
        JOBS_FAILED=true
    fi
done
if [[ $JOBS_FAILED == false ]]; then
    script4.sh
fi
First it starts all the first 3 scripts in background and collects their pids. Then it runs through each pid waiting for it to exit and checking its return value. If any of the first three scripts fail, $JOBS_FAILED is set to the string true but all the processes are still waited on. Once all the first 3 scripts finish, the script checks if any jobs failed. If not, script4.sh is run.
I've got two working scripts: a.sh, b.sh.
I would like to create one script which will do the following:
1. Run a.sh
2. Running b.sh is dependent on the output of a.sh, so wait for a string on a.sh's standard output saying 'a.sh launched' and only then run b.sh. If this is too tricky to implement, then perhaps simply wait for, say, 2 minutes before running the second script.
What would be the best way of achieving this?
This continuously reads the output of a.sh and, when it encounters "a.sh launched", launches b.sh:
./a.sh | while read -r line; do
    echo "$line"   # if you want to see the output of a.sh
    [ "$line" == "a.sh launched" ] && ./b.sh &
done
If you want to match
a.sh launched at `date`
use bash's regex comparison:
[[ "$line" =~ ^"a.sh launched" ]]
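Here is a runnable sketch of this pattern, with a function standing in for a.sh (the function and its messages are assumptions for the demo):

```shell
#!/usr/bin/env bash
# Stand-in for a.sh: prints a startup banner, then keeps producing output.
fake_a() { echo "a.sh launched at some-date"; echo "a.sh doing work"; }

# Read fake_a's output line by line; the quoted part of the regex is
# matched literally, and ^ anchors it to the start of the line, so
# anything after "a.sh launched" (e.g. a timestamp) is still accepted.
trigger=$(fake_a | while read -r line; do
    [[ "$line" =~ ^"a.sh launched" ]] && echo "would start b.sh here"
done)
echo "$trigger"
```

Note that the loop runs in a pipeline subshell, so any variables it sets are lost afterwards; capturing its output with `$(...)`, as above, is one way around that.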
I have a little problem; probably it's a stupid question, but I started learning bash about a week ago...
I have 2 scripts, a.sh and b.sh. I need both to run constantly. b.sh should wait for a signal from a.sh
(I'm trying to explain:
a.sh and b.sh run --> a.sh sends a signal to b.sh -> b.sh traps signal, does something --> a.sh does something else and then sends another signal --> b.sh traps signal, does something --> etc.)
This is what I've tried:
a.sh:
#!/bin/bash
./b.sh &
bpid=$!;
# do something.....
while true
do
    #do something....
    if [ condition ]
    then
        kill -SIGUSR1 $bpid;
    fi
done
b.sh:
#!/bin/bash
while true
do
    trap "echo I'm here;" SIGUSR1;
done
When I run a.sh I get no output from b.sh, even if I redirect its standard output to a file...
However, when I run b.sh in the background from my interactive shell, it does respond to SIGUSR1 (sent with the same kill command, directly from the shell); I get the expected output.
What am I missing?
EDIT:
this is a simple example that I'm trying to run:
a.sh:
#!/bin/bash
./b.sh &
lastpid=$!;
if [ "$1" == "something" ]
then
    kill -SIGUSR1 $lastpid;
fi
b.sh:
#!/bin/bash
trap "echo testlog 1>temp" SIGUSR1;
while true
do
    wait
done
I don't get the file "temp" when running a.sh.
However, if I execute ./b.sh & and then kill -SIGUSR1 PIDOFB manually, everything works fine...
One possible solution is the following (perhaps a dirty one, but it works):
a.sh:
#!/bin/bash
BPIDFILE=b.pid
echo "a.sh: started"
echo "a.sh: starting b.sh.."
./b.sh &
sleep 1
BPID=$(cat "$BPIDFILE")
echo "a.sh: ok; b.sh pid: $BPID"
if [ "$1" == "something" ]; then
    kill -SIGUSR1 "$BPID"
fi
# cleaning up..
rm "$BPIDFILE"
echo "a.sh: quitting"
b.sh:
#!/bin/bash
BPIDFILE=b.pid
trap 'echo "got SIGUSR1" > b.log; echo "b.sh: quitting"; exit 0' SIGUSR1
echo "b.sh: started"
echo "b.sh: writing my PID to $BPIDFILE"
echo $$ > "$BPIDFILE"
while true; do
    sleep 3
done
The idea is simply to write the PID from within b.sh (the background script) and read it from a.sh (the main script).
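When the parent starts the child itself, the PID file is not strictly needed, since $! already holds the child's PID. A minimal sketch of the same signal round-trip, with a subshell standing in for b.sh (the sleep-based synchronization and log file are assumptions for the demo):

```shell
#!/usr/bin/env bash
log=$(mktemp)   # where the child records that the signal arrived

# Child: installs its trap first, then idles until signalled.
(
    trap 'echo "child: got SIGUSR1" > "$log"; exit 0' USR1
    while true; do sleep 1; done
) &
child=$!        # $! is the child's PID: no PID file needed

sleep 0.5       # crude: give the child time to install its trap
kill -USR1 "$child"
wait "$child"
childstatus=$?
echo "parent: child exited with $childstatus"
cat "$log"
```

The `sleep 0.5` papers over the same race the PID-file answer hides behind `sleep 1`: if the signal is sent before the trap is installed, the child is killed by the default SIGUSR1 action instead.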