I have one bash script that runs another bash script, but the first script isn't waiting for the second one to complete before proceeding. How can I force it to wait?
For example:
#!/bin/bash
# first.sh
#call to secondary script
sh second.sh
echo "second.sh has completed"
echo "continuing with the rest of first.sh..."
The way it is now, it will run second.sh, and continue on, without waiting for second.sh to complete.
I use a scheme like this in a few scripts: just call the second script in the same shell instance using source.
In script-1:
source script2.sh
or:
. script2.sh
That way, no command in script-1 proceeds until script2.sh has finished all of its tasks.
A little example.
First script:
$ cat script-1.sh
#!/bin/bash
echo "I'm sccript $0."
echo "Runnig script-2..."
source script-2.sh
echo "script-2.sh finished!"
Second script:
$ cat script-2.sh
#!/bin/bash
echo "I'm script-2. Running wait operation..."
sleep 2
echo "I'm ended my task."
How it works:
$ ./script-1.sh
I'm script ./script-1.sh.
Running script-2...
I'm script-2. Running wait operation...
I've finished my task.
script-2.sh finished!
Normally it does wait; something else must be happening. Are you sure that the other script isn't running something in the background instead? You can try using wait regardless.
You can simply add the wait command after you execute the second script; it will wait for all processes launched from your main script.
You can even capture the PID of your second script with $! directly after you launch it, and then pass that PID as an argument to the wait command.
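For example, a minimal sketch of first.sh using that idea (it assumes second.sh really is started in the background; a plain foreground sh second.sh already blocks until it exits):
#!/bin/bash
# first.sh
sh second.sh &        # launch second.sh in the background
second_pid=$!         # PID of the backgrounded second.sh
wait "$second_pid"    # block here until that specific job has finished
echo "second.sh has completed"
echo "continuing with the rest of first.sh..."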
Try using bash second.sh, and check your second.sh to make sure it doesn't start any programs in the background.
Another way to do it is $(second.sh).
I made a simple script. The file name is sutest.
#!/bin/bash
cd ~/Downloads/redis-4.0.1/src
./redis-server
echo "uid is ${UID}"
echo "user is ${USER}"
echo "username is ${USERNAME}"
I ran the script: $ . sutest
But the script stops at ./redis-server,
so I can't see the echo messages.
I want to write script files like this. How can I do that?
I would appreciate your help.
Let's consider a more general case.
A myscript1 file starts a process like redis-server above.
Another myscript2 file starts a process like redis-server above.
Another myscript3 file starts a process like redis-server above.
How can I run the three script files simultaneously?
I want to do this over an ssh connection.
To make matters worse, what if I can't use screen or tmux?
Add a '&' char at the end of the row
./redis-server &
This makes the job run in the background, and the script continues.
Just do the echos first:
cd ~/Downloads/redis-4.0.1/src
echo "uid is ${UID}"
echo "user is ${USER}"
echo "username is ${USERNAME}"
exec ./redis-server
The use of exec is a small trick (which you can omit if you prefer): it replaces the shell script with redis-server, so the shell script is no longer running at all. Without exec, you end up with the shell script waiting around for redis-server to finish, which is unnecessary if the script will do nothing further.
If you don't like that for some reason, you can keep the original order:
cd ~/Downloads/redis-4.0.1/src
./redis-server & # run in background
echo "uid is ${UID}"
echo "user is ${USER}"
echo "username is ${USERNAME}"
wait # optional
I'd like to write a .sh script that runs several scripts in the same directory one by one, without running them concurrently (e.g. while the first one is still executing, the second one doesn't start executing).
Could you tell me what command to put in front of each script's name to make that happen?
I've tried source, but it gives the following message for every listed script:
./outer_script.sh: source: not found
source is a non-standard extension introduced by bash. POSIX specifies that you must use the . command. Other than the name, they are identical.
However, you probably don't want to source, because that is only supposed to be used when you need the script to be able to change the state of the script calling it. It is like a #include or import statement in other languages.
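As a quick illustration of that state-sharing (set_var.sh is a hypothetical name used only for this sketch):
$ cat set_var.sh
#!/bin/bash
MY_VAR="hello"
$ ./set_var.sh; echo "plain run: '$MY_VAR'"   # variable set only in a child shell
plain run: ''
$ . ./set_var.sh; echo "sourced: '$MY_VAR'"   # variable set in the calling shell
sourced: 'hello'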
You would usually want to just run the script directly as a command, i.e. do not prefix it with source nor with any other command.
As a quick example of not using source:
for script in scripts/*; do
"$script"
done
If the above does not work, ensure that you've set the executable bit (chmod a+x) on the necessary scripts.
That is the normal behavior of a bash script, i.e. if you have three scripts:
script1.sh:
echo "starting"
./script2.sh
./script3.sh
echo "done"
script2.sh:
while [ 1 ]; do
echo "script2"
sleep 2
done
and script3.sh:
echo "script3"
The output is:
starting
script2
script2
script2
...
and script3.sh will never be executed, unless you modify script1.sh to be:
echo "starting"
./script2.sh &
./script3.sh &
echo "done"
in which case the output will be something like:
starting
done
script2
script3
script2
script2
...
So in this case I assume your second-level scripts contain something that starts new processes in the background.
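If that is the case and you can edit those second-level scripts, one option is to make each of them wait for its own background jobs before exiting (a sketch; some_long_running_command is a placeholder for whatever is actually being started):
#!/bin/bash
# script2.sh (sketch)
some_long_running_command &   # hypothetical placeholder for the real background work
wait                          # block until all background jobs of this script finish
With that wait in place, a plain ./script2.sh call in script1.sh does not return until the background work is done.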
Have you included the line #!/bin/bash in your outer_script? Some systems don't use bash by default, and source is a bash builtin. Otherwise, just call the scripts using ./path/to/script.sh inside the outer_script.
I am new to shell scripting and my question here might be very basic.
I have a script (.sh file) which calls a couple of other script files. Now suppose execution of script 1 produces an error message; I would like to abort the script and completely terminate the script flow. It should not go on to the next step.
Could you please tell me how I can achieve this?
EDIT
My script A calls script B, which internally calls some other script C.
If execution of the StopServer1.py script (part of script B) fails, the flow should terminate right there: it should not continue to StopServer2 and StopServer3, and control should return to script A, which should also terminate.
Please let me know whether set -e will help here.
cd /usr/oracle/WSAutomate/
java weblogic.WLST /usr/oracle/StopServer1.py >> $logFileName
java weblogic.WLST /usr/oracle/StopServer2.py >> $logFileName
java weblogic.WLST /usr/oracle/StopServer3.py >> $logFileName
You can use:
set -e
to abort the script on error.
main script snippet:
#!/bin/bash
set -e
# Any failure in these script calls will make the main script exit immediately
./script1
./script2
./script3
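Applied to the StopServer calls from the question, script B might look like this (a sketch; it assumes java weblogic.WLST exits with a non-zero status when the .py script fails, which is what set -e reacts to):
#!/bin/bash
set -e    # abort this script as soon as any command fails
cd /usr/oracle/WSAutomate/
java weblogic.WLST /usr/oracle/StopServer1.py >> $logFileName
java weblogic.WLST /usr/oracle/StopServer2.py >> $logFileName
java weblogic.WLST /usr/oracle/StopServer3.py >> $logFileName
If script A also uses set -e and calls script B as a plain command, script B's non-zero exit status will terminate script A as well.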
You can use the && operator this way: <execute_script1> && <execute_script2>.
It executes script1 and, only if it succeeds, executes script2. In case of an error, script2 will not run.
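For example, chaining three scripts (generic names, just for illustration):
./script1.sh && ./script2.sh && ./script3.sh
script2.sh runs only if script1.sh exits with status 0, and script3.sh runs only if both of the previous scripts succeed.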
I have the following problem: I have a script that executes an internal background process:
====myinternalscript====
...
myinternalscript-program &
...
which is called from a script that waits for myinternalscript-program to terminate:
====mainscript====
...
myinternalscript
while [ "$(ps -u ${CURRENT_USER} | grep myinternalscript-program)" ];
...
The problem comes when I want to call mainscript again before the first call ends. If the myinternalscript-program started by the first call finishes before the one started by the second call, the wait condition in the first mainscript call is still true (because of the myinternalscript-program started by the second call), so the first mainscript call doesn't advance until the second call's myinternalscript-program also finishes.
My solution would be:
====mainscript====
...
myinternalscript
internalpid=  # some way to get the myinternalscript-program PID
while [ "$(ps -u ${CURRENT_USER} | grep myinternalscript-program | grep $internalpid)" ];
...
Where "internalpid" has the PID of the myinternalscript-program called during the execution of myinternalscript.
As suggested by one of the answers, $! right after starting myinternalscript-program could give me the PID I need inside myinternalscript, and I could imagine some way of passing that information to mainscript, but I don't have permission to edit myinternalscript. In this context the challenge is:
How to get the myinternalscript-program PID at mainscript
without editing myinternalscript?
Obviously I could also use some other way to pause the mainscript execution to work around the issue, but I wonder whether what I originally wanted can actually be achieved.
Any comments?
The PID of the current script is in the $$ bash variable. No need to parse ps output.
You can create a PID file from your script in the other terminal:
echo $$ > /tmp/script1.pid
and then check it in your script:
checkpid=$(cat /tmp/script1.pid)
while [ "x$(ps $checkpid | grep -v PID)" != "x" ];do echo still running;sleep 1;done
Or, if you can run the scripts in the same terminal (as far as I understand, you need to run them in parallel), you can run them in the background and use the wait builtin to wait for the background jobs:
script1.sh &
script2.sh &
wait
echo "scripts finished"