Using nohup within a loop of a bash script - bash

I have a bash script that contains a loop over a list of subdirectories. Inside the loop I cd into each subdirectory, run a command using nohup and then cd back out. In the following example I have replaced the executable by an echo command for simplicity.
#!/bin/bash
dList=("dname1" "dname2" "dname3")
for d in $dList; do
cd $d
nohup echo $d &
cd ..
done
The above causes nohup to hang during the first loop with the following output:
$ ./script.sh
./dname1
$ nohup: appending output to `nohup.out'
The script does not continue through the loop, and in order to type on the command line again one must press the Enter key.
OK, this is normal nohup behaviour when one is using it on the shell, but obviously it doesn't work for my script. How can I get nohup to simply run and then gracefully allow the script to continue?
I have already (unsuccessfully) tried variations on the nohup command including
nohup echo $d < /dev/null &
but that didn't help.
Further, I tried including
trap "" HUP
at the top of the script too, but this did not help either.
Please help!
EDIT: As @anubhava correctly pointed out, my loop contained an error that caused the script to use only the first entry in the array. Here is the corrected version.
#!/bin/bash
dList=("dname1" "dname2" "dname3")
for d in ${dList[@]}; do
cd $d
nohup echo $d &
cd ..
done
So now the script achieves what I wanted. However, we still get the annoying output from nohup, which was part of my original question.

Problem is here:
for d in $dList; do
That will run the for loop only once, for the 1st element of the array.
To iterate over an array use:
for d in ${dList[@]}; do
Full working script:
dList=("dname1" "dname2" "dname3")
for d in "${dList[#]}"; do
cd "$d"
{ nohup echo "$d" & cd -; } 2>/dev/null
done
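Alternatively, if you would rather keep nohup's output rather than discard its message, redirecting it yourself also silences the notice; a minimal sketch using a subshell so no cd back is needed (output.log is just an illustrative name):
#!/bin/bash
dList=("dname1" "dname2" "dname3")
for d in "${dList[@]}"; do
  # a subshell keeps the cd local, so no `cd ..` or `cd -` is needed afterwards
  (
    cd "$d" || exit
    # redirecting stdout/stderr ourselves means nohup has nothing to announce,
    # so the "appending output to nohup.out" message disappears
    nohup echo "$d" > output.log 2>&1 &
  )
done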

Related

BASH Run multiple scripts from another script

I have a question about running multiple scripts from another one:
first.sh
#!/bin/bash
echo "script 1"
#... and also download a csv file from gdrive
second.sh
#!/bin/bash
echo "script 2"
third.awk
#!/usr/bin/awk -f
BEGIN {
print "script3"
}
I would like a 4th script that runs them in order. I've tried the following, but it only runs the first script.
#!/bin/bash
array=( first.sh second.sh )
for i in "${array[#]}"
do
chmod +x $i
echo $i
. $i
done
But it only runs the first script and nothing else.
Thank you very much for the support!
Santiago
You can't source an awk script into a shell script. Run the script instead of sourcing it.
. (aka source) executes commands from the file in the current shell; it disregards the shebang line.
What you need instead is ./, i.e. the path to the script, unless . is part of your $PATH (which is usually not recommended).
#!/bin/bash
array=( first.sh second.sh )
for i in "${array[#]}"
do
chmod +x "$i"
echo "$i"
./"$i" # <---
done
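The same loop also covers third.awk once it is executed rather than sourced, since each file then runs under its own interpreter. A sketch assuming all three files sit in the current directory:
#!/bin/bash
array=( first.sh second.sh third.awk )
for i in "${array[@]}"
do
  chmod +x "$i"
  echo "$i"
  ./"$i"   # each file runs under its own shebang: bash for the .sh files, awk for third.awk
done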
Why is the second script not running? My guess is that the first script contains an exit, which, when the script is sourced, exits the calling shell, so the outer wrapper never continues.
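You can check that guess with a tiny reproduction (wrapper.sh is a hypothetical name, and it assumes first.sh ends with an exit):
#!/bin/bash
# wrapper.sh -- hypothetical reproduction of the problem
echo "before"
. ./first.sh        # sourcing runs first.sh's `exit` in *this* shell, so the wrapper dies here
echo "after"        # never printed when first.sh is sourced and exits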

Bash script tail and export misbehaving

Lo there! I have a homework assignment where I have to ls -l any dir or file given in the arguments, with the following restrictions: I have to send the whole listing to stdout, then I have to tail the last 5 lines onto stderr, and finally I have to get the last line into a variable called LIST, which has to be exported.
Here is my code as far as I got:
#!/bin/bash
TMP="tmp"
echo "" > $TMP
ls -l $@ >>$TMP
cat $TMP
tail -n5 $TMP 1>&2
export LIST=$(tail -n1 $TMP)
Of course it doesn't work, and I don't know where I went wrong :[ Any suggestions?
If you run your script that way:
$ ./script.sh
then it'll run a new /bin/bash and the LIST variable will be exported to its env.
If you run it that way:
$ . script.sh
which is a shortcut for:
$ source script.sh
Then it will execute commands from the script in the current running shell and the LIST var will be exported to its env, so you'll be able to use it later.
Your script works fine (it could be much improved but it's good enough for a simple homework task) but it exports LIST to a new shell which ends its life when the script finishes.
The reason for all of this is that a child process cannot modify its parent's env. Another way to make it work is to execute one more bash at the end of your script (adding /bin/bash at the end). Then you would end up in it with LIST inherited from the parent (the script).
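To see the difference, compare the two invocations (the directory name is only an example):
$ ./script.sh /tmp
$ echo "$LIST"        # empty: LIST was exported into the child bash, which has exited
$ . ./script.sh /tmp  # same as: source ./script.sh /tmp
$ echo "$LIST"        # now prints the last line of the listing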
If you want to improve your script, then:
#!/bin/bash
TMP="tmp" # use `mktemp` for that and `trap` to clean it even if the script will be interrupted
echo "" > $TMP # `> $TMP` is enough to create an empty file. Another way is `touch`
ls -l $# >>$TMP # check Bash pitfalls webpage
cat $TMP # you can use `tee` at the begging
tail -n5 $TMP 1>&2 # you can ommit 1 here
export LIST=$(tail -n1 $TMP)
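Putting those comments together, the improved version could look roughly like this (a sketch only; mktemp, trap and tee are the suggestions above, not requirements of the assignment):
#!/bin/bash
TMP=$(mktemp) || exit 1         # safe temporary file instead of a fixed name
trap 'rm -f "$TMP"' EXIT        # clean up even if the script is interrupted
ls -l "$@" | tee "$TMP"         # whole listing to stdout and into the temp file
tail -n5 "$TMP" >&2             # last 5 lines to stderr
export LIST=$(tail -n1 "$TMP")  # last line into LIST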

Getting exit code of last shell command in another script

I am trying to beef up my notify script. The way the script works is that I put it behind a long-running shell command, and then all sorts of notifications get invoked after the long-running command finishes.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long running script. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware of the fact that if I change it to work more like time, e.g. my_notify longrunning_command... my problem would be solved, but I actually like that I can tack it at the end of a command and I fear complications of this second solution.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function in order to accomplish that. For a simple script like that it should be pretty easy to have it working in both zsh and bash. Just place the following in a file:
my_notify() {
echo "exit code: $?"
echo "PPID: $PPID"
}
Then source that file from your shell startup files. Since the function would run within your interactive shell, you may want to use $$ rather than $PPID.
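For example (the file name and startup file are just illustrations):
# ~/.my_notify.sh -- source this from ~/.bashrc or ~/.zshrc
my_notify() {
  echo "exit code: $?"
  echo "PID: $$"        # $$ is the interactive shell itself; $PPID would be its parent
}
Then ls nonexisting_file; my_notify reports exit code 1, because the function runs in the same shell that just ran ls.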
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
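A sketch of the environment-variable route, keeping the "tack it on at the end" style (LAST_STATUS is an arbitrary name):
# in my_notify (otherwise unchanged), read the status from the environment:
#   echo "exit code: ${LAST_STATUS:-unknown}"

# at the call site, copy $? into my_notify's environment for that one call:
ls nonexisting_file; LAST_STATUS=$? my_notify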
One method to implement this could be to use a heredoc (an EOF tag) and a master script which creates your my_notify script.
#!/bin/bash
if [ -f my_notify ] ; then
rm -rf my_notify
fi
if [ -f my_temp ] ; then
rm -rf my_temp
fi
retval=`ls non_existent_file &> /dev/null ; echo $?`
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"
cat >> my_notify << EOF # unquoted delimiter, so $retval and $ppid expand into the generated script
#!/bin/bash
echo "exit code: $retval"
echo " PPID =$ppid"
EOF
sh my_notify
You can refine this script for your purpose.

stop a calling script upon error

I have 2 shell scripts, namely script A and script B.
I have both of them "set -e", telling them to stop upon error.
However, when script A call script B, and script B had an error and stopped, script A didn't stop.
How can I stop the mother script when the child script dies?
It should work as you'd expect. For example:
In mother.sh:
#!/bin/bash
set -ex
./child.sh
echo "you should not see this (a.sh)"
In child.sh:
#!/bin/bash
set -ex
ls &> /dev/null # good cmd
ls /path/that/does/not/exist &> /dev/null # bad cmd
echo "you should not see this (b.sh)"
Calling mother.sh:
[me@home]$ ./mother.sh
++ ./child.sh
+++ ls
+++ ls /path/that/does/not/exist
Why is it not working for you?
One possible situation where it won't work as expected is if you specified -e in the shebang line (#!/bin/bash -e) and passed the script directly to bash, which will treat the shebang line as a comment.
For example, if we change mother.sh to:
#!/bin/bash -ex
./child.sh
echo "you should not see this (a.sh)"
Notice how it behaves differently depending on how you call it:
[me@home]$ ./mother.sh
+ ./child.sh
+ ls
+ ls /path/that/does/not/exist
[me@home]$ bash mother.sh
+ ls
+ ls /path/that/does/not/exist
you should not see this (mother.sh)
Explicitly calling set -e within the script will solve this problem.
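If you prefer not to depend on where set -e ends up taking effect at all, checking the child's status explicitly works too; a minimal sketch:
#!/bin/bash
./child.sh || exit $?     # propagate the child's failure without relying on set -e
echo "only reached when child.sh succeeded"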

How to include nohup inside a bash script?

I have a large script called mandacalc which I want to always run with the nohup command. If I call it from the command line as:
nohup mandacalc &
everything runs swiftly. But, if I try to include nohup inside my command, so I don't need to type it everytime I execute it, I get an error message.
So far I tried these options:
nohup (
command1
....
commandn
exit 0
)
and also:
nohup bash -c "
command1
....
commandn
exit 0
" # and also with single quotes.
So far I only get error messages complaining about the implementation of the nohup command, or about other quotes used inside the script.
cheers.
Try putting this at the beginning of your script:
#!/bin/bash
case "$1" in
-d|--daemon)
$0 < /dev/null &> /dev/null & disown
exit 0
;;
*)
;;
esac
# do stuff here
If you now start your script with --daemon as an argument, it will restart itself detached from your current shell.
You can still run your script "in the foreground" by starting it without this option.
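Usage then looks like this; note the snippet discards all output, so swap the &> /dev/null for a redirection to a log file if you want to keep it:
$ ./mandacalc --daemon     # restarts itself detached and returns to the prompt immediately
$ ./mandacalc              # still runs normally in the foreground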
Just put trap '' HUP at the beginning of your script.
Also, if it creates child processes with someCommand &, you will have to change them to nohup someCommand & for them to work properly... I have been researching this for a long time, and only the combination of these two (the trap and nohup) works on my specific script, where xterm closes too fast.
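A sketch of that combination (someCommand is a placeholder):
#!/bin/bash
trap '' HUP                 # the script itself now ignores SIGHUP
# ... long-running work here ...
nohup someCommand &         # background children still need their own nohup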
Create an alias of the same name in your bash (or preferred shell) startup file:
alias mandacalc="nohup mandacalc &"
Why don't you just make a script containing nohup ./original_script ?
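Such a wrapper only needs two lines (the wrapper's name is arbitrary; adjust the path to mandacalc as needed):
#!/bin/bash
nohup ./mandacalc "$@" &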
There is a nice answer here: http://compgroups.net/comp.unix.shell/can-a-script-nohup-itself/498135
#!/bin/bash
### make sure that the script is called with `nohup nice ...`
if [ "$1" != "calling_myself" ]
then
# this script has *not* been called recursively by itself
datestamp=$(date +%F | tr -d -)
nohup_out=nohup-$datestamp.out
nohup nice "$0" "calling_myself" "$@" > $nohup_out &
sleep 1
tail -f $nohup_out
exit
else
# this script has been called recursively by itself
shift # remove the termination condition flag in $1
fi
### the rest of the script goes here
. . . . .
The best way to handle this is to hand nohup a single command that runs them all:
nohup bash -c 'command1; command2; ...' &
nohup is expecting one command, and in that way you're able to execute multiple commands with one nohup.
