How to make a shell script wait for another without using sleep - shell

I want to know how to make a shell script wait until another script finishes executing, without using the sleep command.
Suppose I have two scripts, run.sh and kill.sh, where run.sh brings all the processes up (that is, it starts running the image on the box), whereas kill.sh contains just the kill commands to kill all the running processes.
Whenever I run run.sh, it brings all the processes up and then exits. What happens then is that all the running processes become orphans (re-parented to init). When we run kill.sh, some of those processes become zombies.
That is, the orphaned processes are turning into zombies.
To avoid this, I want run.sh to wait until the kill.sh script has finished.
So, how do I make a shell script wait for another script? Please share your suggestions.
Thanks in advance.

You can use wait to let the first script finish before the second one starts, without an explicit sleep. Note that wait only applies to background jobs; a command run in the foreground is waited for automatically.
#!/bin/bash
./first_script.sh &   # run the first script in the background
wait                  # block until all background jobs have exited
./second_script.sh
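If run.sh only needs to block until kill.sh has finished, a minimal sketch (the script names come from the question; the rest is illustrative) is to start kill.sh in the background, remember its PID, and wait on that PID:
#!/bin/bash
# inside run.sh, after the processes have been brought up
./kill.sh &                  # start kill.sh in the background
kill_pid=$!                  # $! holds the PID of the most recent background job
wait "$kill_pid"             # block here until kill.sh exits
echo "kill.sh finished with status $?"
wait with an explicit PID also returns that child's exit status, so run.sh can react if kill.sh fails.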

Related


Execute bash script that will continue through Apache restarts

I need to have a bash script triggered and run, but part of the script requires Apache to restart, which obviously stops the script from continuing. I can't move the restarts to the end of the script.
I have tried running the bash script through a PHP script using shell_exec() inside a GNU screen session to keep it going, but that doesn't work: as soon as Apache goes down, the script stops.
There has to be a way to do this, but I'm not seeing it.
How can I accomplish this?
Does nohup do the job?
nohup is a POSIX command which means "no hang up". Its purpose is to execute a command such that it ignores the HUP (hangup) signal and therefore does not stop when the user logs out.
Output that would normally go to the terminal goes to a file called nohup.out, if it has not already been redirected.
https://en.wikipedia.org/wiki/Nohup
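A hedged example of what that can look like here (the script and log paths are placeholders, not taken from the question):
nohup /path/to/deploy.sh > /tmp/deploy.log 2>&1 &
# nohup makes the script immune to the HUP signal, the redirection keeps
# its output, and the trailing & puts it in the background so the PHP
# shell_exec() call returns immediately while the script keeps running.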

Start multiple processes in Bash and time how long they take

How do I start multiple processes in bash and time how long they take?
From this question I know how to start multiple processes in a bash script, but using time script.sh doesn't work because the spawned processes finish after the script itself has ended.
I tried using wait but that didn't change anything.
Here is the script in its entirety:
for i in `seq $1`
do
    ( ./client & )
done
wait # This doesn't seem to change anything
I'm trying to get the total time for all the processes to finish and not the time for each process.
Why the parentheses around the client invocation? They run the command in a subshell. Since the background job doesn't belong to the top-level shell, the wait is ineffective (there are no jobs in this shell to wait for).
Drop the parentheses, and then putting time back around the script invocation should work.
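A sketch of the corrected script under that advice (same shape as the script in the question, just without the subshell):
#!/bin/bash
for i in $(seq "$1")
do
    ./client &          # background job owned by this shell, not a subshell
done
wait                    # now there are jobs for wait to wait on
Running it as time ./script.sh 10 then reports the wall-clock time until the slowest client exits.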

Running and killing servers and watchers from a bash script

For a webapp I am developing, I have a bash script to watch for changes in the source and update the running environment.
function app-serve {
    python runserver.py
}
function compile-coffee {
    inotifywait -e modify scripts | while read change; do
        coffee -o js scripts
    done
}
Now, I need these two functions to run simultaneously:
app-serve &
compile-coffee &
And wait for them, too:
wait
The problem is that when I want to stop these processes, a simple Ctrl-C isn't doing it. When I press Ctrl-C, I get the command prompt back, but the processes started by the functions are still alive.
Is there a way to tell bash to just wait until I hit Ctrl-C, and then kill all the subprocesses?
Edit: one clarification: the python process I start in the app-serve function is killed. Only the inotifywait and a couple of bash processes are left dangling.
You can catch the Ctrl-C signal in your bash script and have it execute a function that finds the processes and kills them.
See http://hacktux.com/bash/control/c
Note that app-serve and compile-coffee are shell functions rather than separate programs, so killall won't find them by those names; kill the real process names (for example killall inotifywait) or, better, the PIDs you recorded when starting them.
Don't forget to exit the bash script with a call to exit.
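A minimal sketch of that idea, assuming app-serve and compile-coffee are defined earlier in the same script: trap SIGINT and signal the script's whole process group, which takes down the dangling inotifywait and bash children as well.
#!/bin/bash
# kill 0 sends a signal to every process in the current process group,
# i.e. this script and all of its descendants. Background jobs in a
# non-interactive shell ignore SIGINT, but not the SIGTERM that kill
# sends by default, which is why a plain Ctrl-C leaves them running.
trap 'kill 0' INT

app-serve &
compile-coffee &
wait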

Is it possible for bash commands to continue before the result of the previous command?

When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
i.e.: If you run the following two commands from a bash script, is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
Yes, if you do nothing else, commands in a bash script are serialized. You can tell bash to run a bunch of commands in parallel, and then wait for them all to finish, by doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tell bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way, you'll be unable to get the exit status of the child commands (and set -e won't work), so you won't be able to tell whether they succeeded or failed in the usual way.
The bash manual has more information (search for wait, about two-thirds of the way down).
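If the exit statuses do matter, a common workaround (a sketch, not part of the answer above; command1 to command3 stand in for real commands) is to record each PID and wait on it individually, since wait "$pid" returns that child's exit status:
#!/bin/bash
pids=()
command1 & pids+=($!)
command2 & pids+=($!)
command3 & pids+=($!)

status=0
for pid in "${pids[@]}"; do
    wait "$pid" || status=$?    # keep the exit status of any child that failed
done
exit "$status"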
Add '&' at the end of a command to run it in parallel.
However, it is strange, because in your case the second command depends on the final result of the first one. Either keep the commands sequential, or copy to b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
wait    # let both copies finish before the script moves on
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
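A tiny illustration of the difference, with sleep standing in for a real command:
sleep 2            # foreground: the script pauses here for two seconds
sleep 2 &          # background: the script continues immediately
echo "printed while the second sleep is still running"
wait               # block until the background sleep has exited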
In general, unless explicitly sent to the background or forking themselves off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write two scripts and run them as separate processes, so they execute simultaneously. Be careful if they touch the same files, though: on most Unix systems you won't get an access error when one process writes a file another is reading, you'll just get a race and possibly inconsistent data.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
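For reference, a quick demonstration of what a subshell is: commands grouped in parentheses run in a child shell, so they can run concurrently with or independently of the parent, but their variable changes never propagate back.
x=1
( x=2; echo "inside the subshell: x=$x" )    # prints x=2
echo "back in the parent shell: x=$x"        # still prints x=1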
