parallel execution in shell scripting hangs - shell

My requirement is to run multiple shell scripts at a time.
After searching on Google, I concluded that I can append "&" to the command to run it in the background, like:
sh file.sh &
The thing is, I have a for loop that generates the values and passes them as runtime parameters to the shell script.
Sample code:
declare -a arr=("1" "2")
for ((i = 0; i < ${#arr[@]}; ++i));
do
    sh fileto_run.sh "${arr[i]}" &
done
This successfully triggers fileto_run.sh in parallel, but then it just hangs. Say the script contains nothing but an echo statement; this is what the hang looks like:
-bash-x.x$ 1
2
Until I press Ctrl+C, execution won't return to the prompt.
I thought of using a break statement, but that only breaks the loop.
Am I doing something wrong?
Please do correct me.
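For reference, a common pattern here (a minimal sketch, not the asker's actual code) is to background each run and then wait for all of them, so the loop returns to the prompt only after every job has finished:

#!/usr/bin/env bash
declare -a arr=("1" "2")

for ((i = 0; i < ${#arr[@]}; ++i)); do
    sh fileto_run.sh "${arr[i]}" &   # launch each instance in the background
done

wait    # block until all background jobs have exited
echo "all jobs done"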

Related

Writing a bash script, how do I stop my session from exiting when my script exits?

bash scripting noob here. I've found this article: https://www.shellhacks.com/print-usage-exit-if-arguments-not-provided/ that suggests putting
[ $# -eq 0 ] && { echo "Usage: $0 argument"; exit 1; }
at the top of a script to ensure arguments are passed. Seems sensible.
However, when I do that and test that the line works (by running the script without supplying any arguments: . myscript.sh), the script does indeed exit, but so does the bash session that I was calling the script from. This is very irritating.
Clearly I'm doing something wrong but I don't know what. Can anyone put me straight?
. myscript.sh is a synonym for source myscript.sh, which runs the script in the current shell (rather than as a separate process). So exit terminates your current shell. (return, on the other hand, wouldn't; it has special behaviour for sourced scripts.)
Use ./myscript.sh to run it "the normal way" instead. If that gives you a permission error, make it executable first, using chmod a+x myscript.sh. To inform the kernel that your script should be run with bash (rather than /bin/sh), add the following as the very first line in the script:
#!/usr/bin/env bash
You can also use bash myscript.sh if you can't make it executable, but this is slightly more error-prone (somebody might do sh myscript.sh instead).
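As a minimal sketch of the return-based guard (reusing the usage check from the question), a version that is safe to source could look like this; return succeeds when the file is sourced, and the || exit 1 fallback handles normal execution:

#!/usr/bin/env bash
# When sourced, return leaves the script without killing the calling shell;
# when executed normally, return fails and the || exit 1 branch runs instead.
[ $# -eq 0 ] && { echo "Usage: $0 argument"; return 1 2>/dev/null || exit 1; }
echo "got argument: $1"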
The question seems a little unclear. If you're sourcing a script (source script_name or . script_name), it is interpreted in the current bash process; running a function is the same, it runs in the current process. Otherwise, when you call a script, the calling bash forks a new bash process and waits until it terminates, so an exit inside it doesn't exit the caller's process. But running the exit builtin in the current bash exits the current process.
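A quick way to see the difference (a hedged demo, not from the thread):

bash -c 'exit 3'    # exit runs in a forked child process
echo "caller still alive, child exited with $?"    # prints 3
# In the current shell, a bare "exit 3" would instead end your session.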

Run sh scripts successively

I'd like to write a .sh script that runs several scripts in the same directory one by one, without running them concurrently (e.g. while the first one is still executing, the second one doesn't start executing).
Could you tell me what command I should put in front of each script's name to achieve this?
I've tried source but it gives the following message for every listed script
./outer_script.sh: source: not found
source is a non-standard extension supported by bash. POSIX specifies that you must use the . command. Other than the name, they are identical in bash.
However, you probably don't want to source, because that is only supposed to be used when you need the script to be able to change the state of the script calling it. It is like a #include or import statement in other languages.
You would usually want to just run the script directly as a command, i.e. do not prefix it with source nor with any other command.
As a quick example of not using source:
for script in scripts/*; do
    "$script"    # each script runs to completion before the next starts
done
If the above does not work, ensure that you've set the executable bit (chmod a+x) on the necessary scripts.
That is normal behavior of a bash script. For example, if you have three scripts:
script1.sh:
echo "starting"
./script2.sh
./script3.sh
echo "done"
script2.sh:
while true; do
    echo "script2"
    sleep 2
done
and script3.sh:
echo "script3"
The output is:
starting
script2
script2
script2
...
and script3.sh will never be executed, unless you modify script1.sh to be:
echo "starting"
./script2.sh &
./script3.sh &
echo "done"
in which case the output will be something like:
starting
done
script2
script3
script2
script2
...
So in this case, I assume your second-level scripts contain something that starts new processes.
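If the goal were instead to run both in the background but still print "done" last, a small variant (a sketch, assuming the scripts eventually terminate, unlike script2.sh above) is to wait for the background jobs:

echo "starting"
./script2.sh &
./script3.sh &
wait    # blocks until both background children have exited
echo "done"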
Have you included the line #!/bin/bash in your outer_script? Some systems don't use bash as the default shell, and source is a bash command. Otherwise, just call the scripts using ./path/to/script.sh inside the outer script.

Exiting a shell script with an error

Basically, I have written a shell script for a homework assignment that works fine; however, I am having issues with exiting. Essentially, the script reads numbers from the user until it reads a negative number and then does some output. I have the script set to exit and output an error code when it receives anything but a number, and that's where the issue is.
The code is as follows:
if test $number -eq $number >/dev/null 2>&1
then
    "do stuff"
else
    echo "There was an error"
    exit 1
fi
The problem is that we have to turn in our programs as text files using the script command, and whenever I record my program with script and test the error cases, it exits out of the script session as well. Is there a better way to do this?
The script is being run with the following command in the terminal
script "insert name of program here"
Thanks
If the program you're testing is invoked as a subprocess, then any exit command will only exit the command itself. The fact that you're seeing contrary behavior means you must be invoking it differently.
When invoking your script from the parent testing program, use:
# this runs "yourscript" as its own, external process.
./yourscript
...to invoke it as a subprocess, not
# this is POSIX-compliant syntax to run the commands in "yourscript" in the current shell.
. yourscript
...or...
# this is bash-extended syntax to run the commands in "yourscript" in the current shell.
source yourscript
...as either of the latter will run all the commands, including exit, inside your current shell, modifying its state or, in the case of exit, exec or similar, telling it to cease execution.
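A short check (a sketch, assuming an executable file named yourscript in the current directory) that the parent shell survives the child's exit and can observe its status:

./yourscript    # runs as a subprocess; its exit stays contained
echo "yourscript exited with status $?"    # the caller keeps going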

capture exit code from a script flow

I need help with some scripts I'm writing.
Scenario:
Script A is executed by a scheduling process. This script takes the arguments passed to it, parses them in some way and runs script B feeding it with those arguments;
Script B does sudo -u user ssh user@REMOTEMACHINE, runs some commands (on the remote machine) and finally runs script C (also on the remote machine). I am passing those commands using a here document. Also, I'm passing the previous arguments to this script too.
This "flow" runs correctly and the job completes successfully.
My problems are:
Since this "flow" is run by a scheduling process, I need to tell it whether the job completed successfully or not. I'm doing this via exit codes, so what I want is a chain of exit codes, returned from the last script back to the first in case of errors. I haven't been able to achieve this: exit codes work correctly for the individual scripts (I tried executing them individually and checking the exit codes), but they are not sent back to the parent script. In my opinion, the problem is that ssh is getting the exit code of the child shell, which in fact ended successfully, because there was no error executing it: it's the command inside it that went wrong.
While the process works correctly, I still get this line:
ssh: Could not resolve hostname : Name or service not known
But actually the script completes successfully.
I hope you understand what I wrote; I can post my scripts here if needed.
Thanks
O.
EDIT:
These are the scripts. There could be some problems with the variable names because I renamed them quickly to upload the files.
Since I can't upload 3 files because of my low reputation, I merged them into a single file:
SCRIPT FILE
I managed to solve the problem.
I followed olivier's advice and used the escape character so that the variable is expanded by the remote machine.
I also implemented different exit codes based on where the error occurred.
Finally, I modified the first script as follows, after launching the second script with sudo -u:
EXITCODEOFTHESECONDSCRIPT=$?
if [ $EXITCODEOFTHESECONDSCRIPT = 0 ]
then
    echo ""
    echo "Export job took $SECONDS seconds."
    echo ""
    exit 0
else
    exit $EXITCODEOFTHESECONDSCRIPT
fi
This way I am able to exit the main script while MAINTAINING the exit code provided by the second script.
In fact, I found that the process worked well even in case of errors; the problem was that I was running more commands after the second script failed (the echo command was enough), which produced other exit codes that overwrote the one I wanted.
Thanks to all !
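For illustration, a hedged sketch of what the script B side can look like (the command names here are made up): ssh exits with the status of the remote command sequence, so giving each failure its own exit code inside the here document lets the caller tell them apart:

#!/usr/bin/env bash
# hypothetical script B: the remote exit status becomes ssh's exit status
sudo -u user ssh user@REMOTEMACHINE 'bash -s' <<'EOF'
some_remote_command || exit 2    # distinct code: remote command failed
/path/to/scriptC    || exit 3    # distinct code: script C failed
EOF
exit $?    # propagate whatever the remote side returned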

abort execution of shell script on error

I am new to shell scripting and my question here might be very basic.
I have a script (.sh file) which makes calls to a couple of script files. Now suppose I get an error message on execution of script 1: I would like to abort the script and completely terminate the script flow. It should not go on to the next step.
Could you please tell me how I can achieve this?
EDIT
My script A makes a call to script B, which internally makes a call to some other script C.
If execution of the StopServer1.py script (part of script B) fails, the flow should terminate right there and should not reach StopServer2 and StopServer3; control then goes back to script A, which should also terminate.
Please let me know if set -e will help here.
cd /usr/oracle/WSAutomate/
java weblogic.WLST /usr/oracle/StopServer1.py >> "$logFileName"
java weblogic.WLST /usr/oracle/StopServer2.py >> "$logFileName"
java weblogic.WLST /usr/oracle/StopServer3.py >> "$logFileName"
You can use:
set -e
to abort the script on error.
main script snippet:
#!/bin/bash
set -e
# Any failure in these script calls will make main script to exit immediately
./script1
./script2
./script3
You can use the && operator this way: <execute_script1> && <execute_script2>.
It executes script1 and, only if everything goes right, executes script2. In case of error, script2 will not run.
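Applied to the snippet from the question, a sketch might look like this (with the caveat that set -e only helps if java itself returns a nonzero exit code when a StopServer script fails):

#!/bin/bash
set -e    # any failing command below aborts this script immediately

cd /usr/oracle/WSAutomate/
java weblogic.WLST /usr/oracle/StopServer1.py >> "$logFileName"
java weblogic.WLST /usr/oracle/StopServer2.py >> "$logFileName"
java weblogic.WLST /usr/oracle/StopServer3.py >> "$logFileName"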
