Exiting whole bash script - bash

I have two scripts: script1 has a while loop, and script2 is called inside this loop. In script2 there is a case statement with an option to exit the whole program. However, when exit 0 is called it only exits script2; it does not exit the while loop in script1. Any idea how to do that? Script details below.
script1.sh
while read -r line
do
    bash script2.sh
done < list.txt
script2.sh
read input </dev/tty
case "$input" in
    e)
        exit 0
        ;;
    *)
        echo "$input"
        ;;
esac
To be clear, what I want is: when I press e, the whole program should stop, not just the current loop iteration.
Thank you for the help in advance

Script 2 runs in a different process from script 1, so script 2 can't kill script 1.
2 solutions:
Make script 2 run in the same process as script 1 (e.g. . script2.sh)
Make script 2 return a value that script 1 can check for, exiting on a match (see the sketch below).
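A minimal sketch of the second option, assuming both sides agree on an arbitrary exit code (10 here) meaning "stop everything":
while read -r line
do
    bash script2.sh
    # exit code 10 from script2 means "stop the whole program"
    if [ $? -eq 10 ]; then
        exit 0
    fi
done < list.txt
In script2.sh, the e) branch would then use exit 10 instead of exit 0.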

Put:
set -e
at the beginning of script 1. And in script 2, replace 'exit 0' with 'exit 1'.
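A sketch of this variant, using the same file names as above:
#!/bin/bash
set -e                # abort script 1 as soon as any command fails
while read -r line
do
    bash script2.sh   # a non-zero exit from script2 now stops the loop
done < list.txt
Note that set -e also aborts script 1 if anything else in the loop fails, so the dedicated-exit-code approach is the more precise of the two.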

Related

What does ": exit 0" mean in a bash script?

I found in some bash scripts, usually at the end of the file, there is one line of code, like below:
: exit 0
What's the meaning of it? Can I remove it directly?
The bash builtin : is basically a command that returns zero (success) after all its arguments are expanded by the shell (a). In this case, the expansion doesn't really do anything, so it's effectively a null operation. I suspect it's there just to indicate the effect of the : (b).
And the effect of that : has to do with what bash scripts return. They basically return the exit status of the last command that was run in the script. The : will therefore force the exit status of the script as a whole to be zero regardless of what the command before it returned.
You can see the effect with the following script:
ls /tmp/nosuchfile 2> /dev/null
If you run that followed by echo $?, you'll see an error code:
pax> ./script.sh ; echo $?
2
If you then change the script to:
ls /tmp/nosuchfile 2> /dev/null
: some arbitrary text
then you will see a success code from the script:
pax> ./script.sh ; echo $?
0
(a) I often use it for infinite loops, such as:
while : ; do somePeriodicThing ; sleep 60 ; done
(b) Of course, it's not quite the same as exit 0 since the exit will exit your current shell which will act differently depending on whether you ran it or sourced it:
./script.sh # runs it, exit will exit that script.
. ./script.sh # sources it, exit will exit your shell.
The : will not exit your current shell in either of those two cases.

Is there a way to stop scripts that are running simultaneously if one of them sends an echo?

I need to find if a value (actually it's more complex than that) is in one of 20 servers I have. And I need to do it as fast as possible. Right now I am sending the scripts simultaneously to all the servers. My main script is something like this (but with all the servers):
#!/bin/sh
#mainScript.sh
value=$1
c1=`cat serverList | sed -n '1p'`
c2=`cat serverList | sed -n '2p'`
sh find.sh $value $c1 & sh find.sh $value $c2
#!/bin/sh
#find.sh
#some code here .....
if [ $? -eq 0 ]; then
rm $tempfile
else
myValue=`sed -n '/VALUE/p' $tempfile | awk 'BEGIN{FS="="} {print substr($2, 8, length($2)-2)}'`
echo "$myValue"
fi
So the script only returns a response if it finds the value on the server. I would like to know if there is a way to stop executing the other scripts once one of them has already returned a value.
I tried adding an "exit" in the find.sh script, but it doesn't stop all the scripts. Can somebody please tell me if what I want to do is possible?
I would suggest that you use something that can handle this for you: GNU Parallel. From the linked tutorial:
If you are looking for success instead of failures, you can use success. This will finish as soon as the first job succeeds:
parallel -j2 --halt now,success=1 echo {}\; exit {} ::: 1 2 3 0 4 5 6
Output:
1
2
3
0
parallel: This job succeeded:
echo 0; exit 0
I suggest you start by modifying your find.sh so that its return code depends on its success; that will let us identify a successful call more easily. For instance:
myValue=`sed -n '/VALUE/p' $tempfile | awk 'BEGIN{FS="="} {print substr($2, 8, length($2)-2)}'`
success=$?
echo "$myValue"
exit $success
To terminate all the find.sh processes spawned by your script you can use pkill with a parent process ID criterion and a command name criterion:
pkill -P $$ find.sh # $$ refers to the current process' PID
Note that this requires that you start the find.sh script directly rather than passing it as a parameter to sh. Normally that shouldn't be a problem, but if you have a good reason to call sh rather than your script, you can replace find.sh in the pkill command by sh (assuming you're not spawning other scripts you wouldn't want to kill).
Now that find.sh exits with success only when it finds the expected string, you can chain the two actions with && and run the whole thing in the background:
{ find.sh $value $c1 && pkill -P $$ find.sh; } &
The first occurrence of find.sh that terminates with success will invoke the pkill command that will terminate all others (those killed processes will have non-zero exit codes and therefore won't run their associated pkill).
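Putting the pieces together, the main script might look like this sketch (it assumes find.sh is executable and in the current directory):
#!/bin/sh
#mainScript.sh
value=$1
c1=`sed -n '1p' serverList`
c2=`sed -n '2p' serverList`
{ ./find.sh "$value" "$c1" && pkill -P $$ find.sh; } &
{ ./find.sh "$value" "$c2" && pkill -P $$ find.sh; } &
wait   # returns once all branches have finished or been killed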

Waiting for multiple processes in bash with set -e

I have a bash script where I would like to run two processes in parallel, and have the script fail if either of the processes return non-zero. A minimal example of my initial attempt is:
#!/bin/bash
set -e
(sleep 3 ; true ) &
(sleep 4 ; false ) &
wait %1 && wait %2
echo "Still here, exit code: $?"
As expected this doesn't print the message because wait %1 && wait %2 fails and the script exits due to the set -e. However, if the waits are reversed such that the first one has the non-zero status (wait %2 && wait %1), the message is printed:
$ bash wait_test.sh
Still here, exit code: 1
Putting each wait on its own line works as I want and exits the script if either of the processes fail, but the fact that it doesn't work with && makes me suspect that I'm misunderstanding something here.
Can anyone explain what's going on?
You can achieve what you want quite elegantly with GNU Parallel and its "fail handling".
In general, it will run as many jobs in parallel as you have CPU cores.
In your case, try this, which says "exit with failed status if one or more jobs failed":
#!/bin/bash
cat <<EOF | parallel --halt soon,fail=1
echo Job 1; exit 0
echo Job 2; exit 1
EOF
echo GNU Parallel exit status: $?
Sample Output
Job 1
Job 2
parallel: This job failed:
echo Job 2; exit 1
GNU Parallel exit status: 1
Now run it such that no job fails:
#!/bin/bash
cat <<EOF | parallel --halt soon,fail=1
echo Job 1; exit 0
echo Job 2; exit 0
EOF
echo GNU Parallel exit status: $?
Sample Output
Job 1
Job 2
GNU Parallel exit status: 0
If you dislike the heredoc syntax, you can put the list of jobs in a file called jobs.txt like this:
echo Job 1; exit 0
echo Job 2; exit 0
Then run with:
parallel --halt soon,fail=1 < jobs.txt
From the bash manual's section on the usage of set:
-e Exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command (see SHELL GRAMMAR above), exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !. If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR, if set, is executed before the shell exits. This option applies to the shell environment and each subshell environment separately (see COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before executing all the commands in the subshell.
tl;dr
In a bash script, for a command list like this
command1 && command2
a non-zero exit status from command1 does not trigger set -e, because command1 is part of an && list and is not the command following the final &&. A non-zero exit status from command2, the command following the final &&, does make the shell exit.
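A small demonstration of that rule:
#!/bin/bash
set -e
false && true               # 'false' fails, but it precedes the final &&,
echo "reached, status $?"   # so set -e does not fire; prints "reached, status 1"
true && false               # here 'false' follows the final &&,
echo "never printed"        # so set -e exits the script before this line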

Run shell script after successful execution of parallel scripts

I have 4 shell scripts; I want to execute the first 3 in parallel. After all 3 scripts have completed successfully, I want to execute the 4th script.
Parallel execution:
sh script1.sh
sh script2.sh
sh script3.sh
script4.sh should execute after all 3 have finished.
bash 4.3 added a -n flag to wait that lets it wait for any one background job to complete. For a fixed number of background jobs, you could use something like
script1.sh &
script2.sh &
script3.sh &
wait -n && wait -n && wait -n && script4.sh
For a large or variable number of background jobs, Kurt's answer is better.
In bash you can do:
pids=
for s in script1.sh script2.sh script3.sh; do
    $s &
    pids="$pids $!"
done

JOBS_FAILED=false
for pid in $pids; do
    if ! wait $pid; then
        # script didn't exit successfully
        JOBS_FAILED=true
    fi
done

if [[ $JOBS_FAILED == false ]]; then
    script4.sh
fi
First it starts the 3 scripts in the background and collects their pids. Then it runs through each pid, waiting for it to exit and checking its return value. If any of the first three scripts fails, $JOBS_FAILED is set to the string true, but all the processes are still waited on. Once all 3 scripts have finished, the script checks whether any jobs failed. If not, script4.sh is run.

How can I run multiple bash scripts in unison?

I'm learning Bash for a Unix class, and I'm trying to figure out how to run a script, then run a second script while the first is running and have the two interact. To clarify, the scripts look like this:
#!/bin/bash
num = 1
trap exit 0 SIGINT SIGTERM
trap "{ echo &num ; num++; }" SIGUSR1
while :
do
sleep 2
done
and the second one:
#!/bin/bash
if ps | grep "$1" > /dev/null
then
kill -SIGUSR1 $1
else
echo "Process doesn't exist"
fi
exit 0
In case the code isn't correct, the general idea is for the first script to loop until it receives a SIGINT or SIGTERM, and to echo and increment a number whenever it receives a SIGUSR1. The second script takes a pid as an argument, checks whether it exists, and sends a SIGUSR1 to the given process. The problem is that when I run the first script, I can't do anything unless I move it to the background with ctrl-z, but when it's there it doesn't seem to respond to any signal except a kill signal. Any ideas on how to make this work?
You can use mycommand & to run a script in the background. Ctrl-Z stops the script, but you can then use bg to let it run in the background. In either case, you can use fg to bring it to the foreground again.
Also note that you can't have spaces around the = in assignments, and you can use let num++ to increment num. You should also single-quote the command in trap, to prevent "$num" from expanding prematurely.
All in all:
#!/bin/bash
num=1
trap exit 0 SIGINT SIGTERM
trap '{ echo $num ; let num++; }' SIGUSR1
while :
do
sleep 2
done
Finally, you can check whether a pid exists more easily by just using kill -0 pid, or by just attempting to SIGUSR1 it and checking the result; this avoids grep "123" matching a substring of pid "1234" and the like.
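For instance, the second script could shrink to this sketch:
#!/bin/bash
# kill -0 sends no signal; it only tests whether the pid exists
if kill -0 "$1" 2> /dev/null
then
    kill -SIGUSR1 "$1"
else
    echo "Process doesn't exist"
fi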
You need to make the first script run in the background. When you press Ctrl+Z it is suspended. Then you can type "bg" to make it run in the background (it will stop again if it tries to read from standard input, to allow you to switch back to it with the "fg" command).
Another way is to start script1 already in the background like this:
$ ./script1 &
The ampersand starts a job in the background and returns you to the prompt immediately.
Look in the bash man page under "JOB CONTROL" for more information on how this works. The key commands for dealing with jobs from an interactive shell are "jobs", "fg", and "bg".
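A typical session might look like this (job and process numbers are illustrative):
$ ./script1
^Z
[1]+  Stopped                 ./script1
$ bg
[1]+ ./script1 &
$ jobs
[1]+  Running                 ./script1 &
$ fg
./script1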
