Running shell script commands sequentially in Jenkins - bash

In Jenkins, I have created a job which runs many shell script commands:
command1
command2
...etc
command1 is an ssh command which calls a shell script file on another server machine. I have to wait until it is finished, and AFTER it, command2 should come.
So, how can I make sure that the script file on the other machine, started by command1, has already finished its jobs, when in the Jenkins job the next command (command2) is started?
Or, alternatively, how can I make sure that command2 won't start until the shell script on the other machine (started by command1) has finished?

You can check out "How to send many commands to shell and wait for the command behind ends" to see how to chain commands and wait for their completion.
When you execute a command through an ssh session, you might have to wrap that command in a script that waits for the command to complete.
See an example in "How can I make ssh wait until the command exits?".
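In the common case, ssh itself blocks until the remote command exits and propagates its exit status, so a sketch like the following is often enough (the host name and script path here are made up):
ssh jenkins@remote-server '/opt/scripts/long_job.sh'
status=$?                                 # exit status of the remote script
if [ "$status" -ne 0 ]; then
    echo "remote script failed with status $status" >&2
    exit "$status"                        # fail the build step so command2 never runs
fi
command2                                  # runs only after the remote script has finished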
Or (a simpler wrapper): How do I know when a command run over ssh has finished?
#!/bin/bash
"$@"                                      # run the arguments passed to this script as a command
echo "==== Command Output Finished ===="
Look for the string ==== Command Output Finished ==== in your I/O routines to determine where the boundaries between command outputs are.
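As a hypothetical invocation of that wrapper (wrapper.sh, the host, and the remote command are all assumptions), you could scan the ssh output for the sentinel line:
ssh user@remote-server '/opt/scripts/wrapper.sh long_job.sh arg1' | \
while IFS= read -r line; do
    echo "$line"
    [ "$line" = "==== Command Output Finished ====" ] && echo "remote command done"
done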
Or you can try isolating those commands in their own Jenkins shell build step.
(Not a different job, just a different build step within the same job)

jobs command result is empty when process is run through script

I need to run rsync in the background from a shell script, but once it has started, I need to monitor the status of those jobs from the shell.
The jobs command returns empty when it's run in the shell after the script exits. ps -ef | grep rsync shows that rsync is still running.
I can check the status from within the script, but I need to run the script multiple times, each using a different ip.txt file to push, so I can't keep the script running just to check the job status.
Here is the script:
for i in `cat $ip.txt`; do
    rsync -avzh $directory/ user@"$i":/cygdrive/c/test/$directory > /dev/null 2>&1 &
done
jobs  # shows the job status while still inside the script
exit 1
Output of the jobs command is empty after the shell script exits:
root@host001:~# jobs
root@host001:~#
What could be the reason, and how can I get the status of the jobs while rsync is running in the background? I can't find anything online about this.
Since your shell (the one from which you execute jobs) did not start rsync, it doesn't know anything about it. There are different approaches to fixing that, but it boils down to starting the background processes from your shell. For example, you can run the script with the bash source builtin (or its synonym .) instead of executing it in a separate process. Of course, you'd have to remove the exit 1 at the end, because otherwise it would exit your shell.
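A minimal sketch, assuming the script above is saved as push.sh and the exit 1 has been removed:
. ./push.sh   # source the script so the rsync jobs belong to the current shell
jobs          # the backgrounded rsync processes now show up here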

UNIX batch shell script - will current command execute only after previous command finishes execution

If I have a UNIX shell script which has some program on each line that needs to be run, like
#!/bin/bash
command1
command2
command3
command4
will command2 execute only after command1 finishes, or are they run in parallel without waiting for the previous command to finish, since each command is a separate process?
The commands are run serially. To run them in parallel, append & to each line:
#!/bin/bash
command1 &
command2 &
command3 &
command4 &
wait
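Note that plain newlines run each command regardless of whether the previous one succeeded. If you want sequential execution that also stops at the first failure, chaining with && is a common pattern:
command1 && command2 && command3 && command4
Each command starts only if the one before it exited with status 0.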

bash script order of execution

Do lines in a bash script execute sequentially? I can't see any reason why not, but I am really new to bash scripting, and I have a couple of commands that need to execute in order.
For example:
#!/bin/sh
# will this get finished before the next command starts?
./someLongCommand1 arg1
./someLongCommand2 arg1
Yes, they are executed sequentially. However, if you run a program in the background, the next command in your script is executed immediately after the backgrounded command is started.
#!/bin/sh
# will this get finished before the next command starts?
./someLongCommand1 arg1 &
./someLongCommand2 arg1 &
would result in near-instant completion of the script; however, the commands started in it will not have completed. (You start a command in the background by putting an ampersand (&) after it.)
Yes... unless you go out of your way to run one of the commands in the background, one will finish before the next one starts.
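A toy example with sleep makes the difference visible (the timestamps are illustrative):
date        # 12:00:00
sleep 5     # foreground: the script blocks here for five seconds
date        # 12:00:05
sleep 5 &   # background: returns immediately
date        # still 12:00:05
wait        # block until the background sleep finishes
date        # 12:00:10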

Is it possible for bash commands to continue before the result of the previous command?

When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
i.e.: if you run the following two commands from a bash script, is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
If you do nothing special, commands in a bash script are serialized, so the second cp above cannot start before the first finishes. You can tell bash to run a bunch of commands in parallel and then wait for them all to finish by doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of the first three lines tell bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way, a bare wait won't give you the exit status of the child commands (and set -e won't catch their failures), so you won't be able to tell whether they succeeded or failed in the usual way.
The bash manual has more information (search for wait, about two-thirds of the way down).
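If you do need each child's exit status, one workaround (not part of the original answer) is to record the PIDs with $! and wait on them individually:
command1 & pid1=$!
command2 & pid2=$!
wait "$pid1"; status1=$?   # exit status of command1
wait "$pid2"; status2=$?   # exit status of command2
echo "command1 exited with $status1, command2 with $status2"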
Add & at the end of a command to run it in parallel. However, that would be odd here, because the second command depends on the result of the first one. Either keep the commands sequential, or copy both b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
In general, unless explicitly sent to the background or forking themselves off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write two scripts and run them in separate processes so they execute simultaneously. Note, though, that on Unix-like systems you typically won't get an access error when one process writes a file that another is reading; the reader may just see incomplete data, so concurrent access needs explicit coordination.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
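For illustration, a minimal subshell sketch (the command names reuse the placeholders above): parentheses group commands into a child shell, which can be backgrounded as a unit:
( cd /tmp && command1 ) &   # the cd and the command both run in a subshell, in the background
echo "the parent shell continues immediately; its working directory is unchanged"
wait                        # block until the subshell exits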
