series of commands using nohup [duplicate] - shell

This question already has answers here:
Why can't I use Unix Nohup with Bash For-loop?
(3 answers)
Closed 7 years ago.
I have a series of commands that I want to use nohup with.
Each command can take a day or two, and sometimes I get disconnected from the terminal.
What is the right way to achieve this?
Method1:
nohup command1 && command2 && command3 ...
or
Method2:
nohup command1 && nohup command2 && nohup command3 ...
or
Method3:
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I can see method 3 might work, but it forces me to create a temp file. If I can choose from only method 1 or 2, what is the right way?

nohup command1 && command2 && command3 ...
The nohup will apply only to command1. When it finishes (assuming it doesn't fail), command2 will be executed without nohup, and will be vulnerable to a hangup signal.
nohup command1 && nohup command2 && nohup command3 ...
I don't think this would work. Each of the three commands will be protected by nohup, but the shell that handles the && operators will not. If you log out before command2 begins, I don't think it will be started; likewise for command3.
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I think this should work -- but there's another way that doesn't require creating a script:
nohup sh -c 'command1 && command2 && command3'
The shell is then protected from hangup signals, and I believe the three sub-commands are as well. If I'm mistaken on that last point, you could do:
nohup sh -c 'nohup command1 && nohup command2 && nohup command3'
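A quick way to convince yourself, using echo as a stand-in for the day-long commands: the entire && chain runs inside the one nohup-protected shell, and all output lands in a single log:

```shell
# One nohup protects the sh that runs the whole && chain.
# </dev/null keeps nohup quiet about a terminal stdin.
nohup sh -c 'echo step1 && echo step2 && echo step3' > chain.log 2>&1 < /dev/null
cat chain.log
```

The shell itself ignores SIGHUP, so a terminal disconnect between step1 and step2 does not break the sequence.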

Related

&& in bash script using GNU "parallel"

I would like to run my bash script my_script.sh using GNU parallel:
parallel < my_script.sh
where my_script.sh is:
command1 && command2
command3
command4
Will command1 and command2 run in parallel or sequentially?
In case I am not clear:
command1
command2
command3
command4
or
command1 -> command2
command3
command4
?
Thank you!
&& lets you do something based on whether the previous command completed successfully.
It will run like this:
command1 -> command2 (only if the exit status of "command1" is zero)
command3
command4
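The short-circuit behaviour is easy to verify with stand-in commands; the || true on the failing line is only there to keep the demo's own exit status clean:

```shell
# && runs its right-hand side only when the left side exits 0.
true  && echo "after success" >  demo.log
{ false && echo "after failure" >> demo.log; } || true   # echo is skipped
echo "unconditional" >> demo.log
cat demo.log
```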

What's the best way to switch user and run multiple commands?

What's the best way to switch user, run multiple commands, and send the output from all commands to a file?
Needs to be done in one line so something like:
sudo -u user (command1; command2; command3; command4; command5) > out.log 2>&1
Thanks in advance.
Running multiple commands separated by semicolons requires a shell, so you must invoke one explicitly, in this case bash:
sudo -u user bash -c 'command1; command2; command3; command4; command5' > out.log 2>&1
Here, sudo -u user bash runs bash under user. The -c option tells bash which commands to run.
sudo -u user command1 > outFile; command2 >> outFile; command3 >> outFile; command4 >> outFile; command5 >> outFile
If you want subsequent commands not to execute when a previous one fails, then:
command1 > outFile && command2 >> outFile && command3 >> outFile && command4 >> outFile && command5 >> outFile
Note that command1's output is redirected to outFile by '>' (which creates the file, or truncates it if it already exists), whereas all subsequent commands redirect with '>>' (which appends). Also note that in the first line sudo applies only to command1; the bash -c form above runs all of the commands as the target user.
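The truncate-versus-append distinction is worth a quick check; this sketch also shows that a single redirection on the child shell produces the same file:

```shell
# '>' creates or truncates the file; '>>' appends to it.
echo one   >  outFile
echo two   >> outFile
echo three >> outFile
# One redirection on the child shell captures everything at once:
bash -c 'echo one; echo two; echo three' > outFile2
cmp -s outFile outFile2 && echo "identical"
```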

Executing a command (only) when prior jobs are finished in Bash

I am trying to ensure that a command is run serially after parallel commands have been terminated.
command1 &
command2 &
command3
In the above example, command1 and command2 are launched in the background at the same time, but command3 runs immediately, without waiting for them. I know this is expected behaviour in Bash, but I was wondering if there was a way for command3 to be launched only after command1 and command2 have terminated.
It is probably possible to do:
(command1; touch done1) &
(command2; touch done2) &
while [ ! -f done1 ] || [ ! -f done2 ]; do sleep 1000; done
command3
...but if a more elegant solution is available I will take it. The join needs to be passive as these commands are destined to be used in PBS queues. Any ideas? Thanks in advance.
You can use wait without arguments to wait for all previous jobs to complete:
command1 &
command2 &
wait
command3
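A sketch of the pattern with short sleeps standing in for the real jobs; once wait returns, both background jobs are guaranteed to have finished:

```shell
# Two background jobs write marker files at different times.
(sleep 1; echo done1 > job1.flag) &
(sleep 2; echo done2 > job2.flag) &
wait                 # blocks until every background child has exited
# "command3": by this point both marker files must exist.
cat job1.flag job2.flag
```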

bash && operator prevents backgrounding over ssh

After trying to figure out why a Capistrano task (which tried to start a daemon in the background) was hanging, I discovered that using && in bash over ssh prevents a subsequent program from running in the background. I tried it on bash 4.1.5 and 4.2.20.
The following will hang (i.e. wait for sleep to finish) in bash:
ssh localhost "cd /tmp && nohup sleep 10 >/dev/null 2>&1 &"
The following won't:
ssh localhost "cd /tmp ; nohup sleep 10 >/dev/null 2>&1 &"
Neither will this:
cd /tmp && nohup sleep 10 >/dev/null 2>&1 &
Both zsh and dash will execute it in the background in all cases, regardless of && and ssh. Is this normal/expected behavior for bash, or a bug?
One easy solution is to use:
ssh localhost "(cd /tmp && nohup sleep 10) >/dev/null 2>&1 &"
(this also works if you use braces, see second example below).
I did not experiment further, but I am reasonably convinced it has to do with open file descriptors hanging around. Perhaps zsh and dash bind the && differently, so that the command means what in bash has to be spelled as:
{ cd /tmp && nohup sleep 10; } >/dev/null 2>&1
in bash. Nope: a quick experiment in dash shows that echo foo && echo bar >file only redirects the latter. Still, it has to have something to do with lingering open file descriptors causing ssh to wait for more output; I've run into this a lot in the past.
One more trick, not needed if you use the parentheses or braces for this particular case, but possibly useful in a more general context where the set of commands joined with && is more complex. Since bash seems to be hanging on to the file descriptor inappropriately with && but not with ;, you can turn a && b && c into a || exit 1; b || exit 1; c. This works with the test case:
ssh localhost "true || exit 1; echo going on; nohup sleep 10 >/dev/null 2>&1 &"
Replace true with false and the echo of "going on" is omitted.
(You can also set -e, although sometimes that is a bigger hammer than desired.)
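The transformation is easy to check with true/false as stand-ins (the || true on the failing invocation just keeps this demo's own exit status clean):

```shell
# `a && b` rewritten as `a || exit 1; b` keeps the short-circuit:
sh -c 'true  || exit 1; echo reached' > ok.log
sh -c 'false || exit 1; echo reached' > skipped.log || true
cat ok.log            # contains "reached"
wc -c < skipped.log   # 0 bytes: the exit 1 fired before the echo
```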
This seems to work:
ssh localhost "(exec 0>&- ; exec 1>&-; exec 2>&-; cd /tmp; sleep 20&)"

How to include nohup inside a bash script?

I have a large script called mandacalc which I want to always run with the nohup command. If I call it from the command line as:
nohup mandacalc &
everything runs swiftly. But if I try to include nohup inside my command, so I don't need to type it every time I execute it, I get an error message.
So far I tried these options:
nohup (
command1
....
commandn
exit 0
)
and also:
nohup bash -c "
command1
....
commandn
exit 0
" # and also with single quotes.
So far I only get error messages complaining about the implementation of the nohup command, or about other quotes used inside the script.
cheers.
Try putting this at the beginning of your script:
#!/bin/bash
case "$1" in
-d|--daemon)
$0 < /dev/null &> /dev/null & disown
exit 0
;;
*)
;;
esac
# do stuff here
If you now start your script with --daemon as an argument, it will restart itself detached from your current shell.
You can still run your script "in the foreground" by starting it without this option.
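A minimal, self-contained sketch of this pattern, redirecting to a hypothetical worker.log instead of /dev/null so the detached run can be observed:

```shell
# Write the self-daemonizing script to disk.
cat > self_daemon.sh <<'EOF'
#!/bin/bash
case "$1" in
  -d|--daemon)
    # Re-exec detached: stdin from /dev/null, output to worker.log,
    # backgrounded and disowned so the parent shell sends no SIGHUP.
    "$0" < /dev/null &> worker.log & disown
    exit 0
    ;;
esac
# "do stuff here"
echo "working as PID $$"
EOF
chmod +x self_daemon.sh
./self_daemon.sh --daemon   # returns immediately
sleep 2                     # give the detached copy time to finish
cat worker.log
```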
Just put trap '' HUP at the beginning of your script.
Also, if it creates child processes with someCommand&, you will have to change them to nohup someCommand& to work properly... I have been researching this for a long time, and only the combination of these two (the trap and nohup) works on my specific script, where xterm closes too fast.
Create an alias of the same name in your bash (or preferred shell) startup file:
alias mandacalc="nohup mandacalc &"
Why don't you just make a script containing nohup ./original_script?
There is a nice answer here: http://compgroups.net/comp.unix.shell/can-a-script-nohup-itself/498135
#!/bin/bash
### make sure that the script is called with `nohup nice ...`
if [ "$1" != "calling_myself" ]
then
# this script has *not* been called recursively by itself
datestamp=$(date +%F | tr -d -)
nohup_out=nohup-$datestamp.out
nohup nice "$0" "calling_myself" "$@" > $nohup_out &
sleep 1
tail -f $nohup_out
exit
else
# this script has been called recursively by itself
shift # remove the termination condition flag in $1
fi
### the rest of the script goes here
. . . . .
The clean way to handle this is to hand nohup a single child shell:
nohup bash -c 'command1; command2; ...' &
nohup expects exactly one command, and bash -c (or sh -c) makes the whole sequence that one command. (Note that $( ... ) is command substitution: it runs the commands up front and substitutes their output into the command line, which is not what you want here.)
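For completeness, a sketch of putting several commands under one nohup via a child shell, with echo standing in for the real commands:

```shell
# A single nohup protects the child shell running the whole list.
nohup bash -c 'echo first; echo second' > multi.log 2>&1 < /dev/null &
wait                  # wait for the backgrounded nohup job to finish
cat multi.log
```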
