I have a question about running multiple scripts from a fourth one:
first.sh
#!/bin/bash
echo "script 1"
#... and also download a csv file from gdrive
second.sh
#!/bin/bash
echo "script 2"
third.awk
#!/usr/bin/awk -f
BEGIN {
print "script3"
}
I would like a fourth script that runs them in order. I've tried the following, but it only runs the first script.
#!/bin/bash
array=( first.sh second.sh )
for i in "${array[@]}"
do
    chmod +x $i
    echo $i
    . $i
done
Only the first script runs, and nothing else happens.
Thank you very much for the support!
Santiago
You can't source an awk script into a shell script. Run the script instead of sourcing it.
. (aka source) executes commands from the file in the current shell; it disregards the shebang line.
What you need instead is ./, i.e. the path to the script, unless . is part of your $PATH (which is usually not recommended).
#!/bin/bash
array=( first.sh second.sh third.awk )
for i in "${array[@]}"
do
    chmod +x "$i"
    echo "$i"
    ./"$i" # <---
done
Why is the second script not running? I guess the first script contains an exit, which, when sourced, exits the current shell, so the outer wrapper never continues.
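To see why, here is a minimal sketch (child.sh is a hypothetical name): sourcing a script that calls exit terminates the caller, while executing it does not.
#!/bin/bash
# Create a child script that ends with an explicit exit (illustrative only)
printf '%s\n' 'echo "child ran"' 'exit 0' > child.sh
chmod +x child.sh

./child.sh        # runs in its own process; exit only ends the child
echo "reached after ./child.sh"

. ./child.sh      # runs in the current shell; exit terminates this script
echo "never reached"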
In reference to https://stackoverflow.com/a/11886837/1996022 (whose title I also shamelessly stole), where the question is how to capture a script's output, I would like to know how I can additionally capture the script's input, mainly so that scripts that take user input also produce complete logs.
I tried things like
exec 3< <(tee -ia foo.log <&3)
exec <&3 <(tee -ia foo.log <&3)
But nothing seems to work. I'm probably just missing something.
Maybe it'd be easier to use the script command? You could either have your users run the script with script directly, or do something kind of funky like this:
#!/bin/bash
main() {
    read -r -p "Input string: "
    echo "User input: $REPLY"
}

if [ "$1" = "--log" ]; then
    # If the first argument is "--log", shift the arg
    # out and run main
    shift
    main "$@"
else
    # If run without --log, re-run this script within a
    # script command so all script I/O is logged
    script -q -c "$0 --log $*" test.log
fi
Unfortunately, you can't pass a function to script -c, which is why the double call is necessary in this method.
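A hypothetical session, assuming the script above is saved as logged.sh:
$ ./logged.sh     # prompts for input; the whole session is recorded
$ cat test.log    # holds the prompt, the typed input, and the output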
If it's acceptable to have two scripts, you could also have a user-facing script that just calls the non-user-facing script with script:
script_for_users.sh
--------------------
#!/bin/sh
script -q -c "/path/to/real_script.sh" <log path>
real_script.sh
---------------
#!/bin/sh
<Normal business logic>
It's simpler:
#! /bin/bash
tee ~/log | your_script
The wonderful thing is that your_script can be a function, a command, or a {} command block!
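For instance, a minimal sketch with a command block (the log path is illustrative):
#!/bin/bash
# Everything typed on stdin is appended to ~/log on its way to the block
tee -a ~/log | {
    read -r name
    echo "Hello, $name"
}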
I have a bash script that contains a loop over a list of subdirectories. Inside the loop I cd into each subdirectory, run a command using nohup, and then cd back out. In the following example I have replaced the executable with an echo command for simplicity.
#!/bin/bash
dList=("dname1" "dname2" "dname3")
for d in $dList; do
    cd $d
    nohup echo $d &
    cd ..
done
The above causes nohup to hang during the first loop with the following output:
$ ./script.sh
./dname1
$ nohup: appending output to `nohup.out'
The script does not continue through the loop, and one must press the enter key to get a usable prompt back.
OK, this is normal nohup behaviour when one is using it on the shell, but obviously it doesn't work for my script. How can I get nohup to simply run and then gracefully allow the script to continue?
I have already (unsuccessfully) tried variations on the nohup command including
nohup echo $d < /dev/null &
but that didn't help.
Further, I tried including
trap "" HUP
at the top of the script too, but this did not help either.
Please help!
EDIT: As @anubhava correctly pointed out, my loop contained an error that caused the script to use only the first entry in the array. Here is the corrected version.
#!/bin/bash
dList=("dname1" "dname2" "dname3")
for d in ${dList[@]}; do
    cd $d
    nohup echo $d &
    cd ..
done
So now the script achieves what I wanted. However, we still get the annoying output from nohup, which was part of my original question.
Problem is here:
for d in $dList; do
That will run the for loop only once, for the 1st element of the array.
To iterate over an array use:
for d in ${dList[@]}; do
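The difference is easy to see with echo (a quick sketch):
$ dList=("dname1" "dname2" "dname3")
$ echo $dList
dname1
$ echo "${dList[@]}"
dname1 dname2 dname3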
Full working script:
dList=("dname1" "dname2" "dname3")
for d in "${dList[@]}"; do
    cd "$d"
    { nohup echo "$d" & cd -; } 2>/dev/null
done
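An alternative sketch, if you would rather silence nohup by redirecting its output yourself (out.log is an illustrative name):
#!/bin/bash
dList=("dname1" "dname2" "dname3")
for d in "${dList[@]}"; do
    # The subshell avoids cd-ing back; with stdout/stderr redirected,
    # nohup prints no notice and creates no nohup.out
    ( cd "$d" && nohup echo "$d" > out.log 2>&1 & )
done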
#!/bin/bash
if [ ! -f readexportfile ]; then
    echo "readexportfile does not exist"
    exit 0
fi
The above is part of my script. When the current shell is /bin/csh my script fails with the following error:
If: Expression Syntax
Then: Command not found
If I run bash and then run my script, it runs fine(as expected).
So the question is: is there any way that my script can change the current shell and then interpret the rest of the code?
PS: If I keep bash in my script, it changes the current shell and the rest of the code in the script doesn't get executed.
The other replies are correct, however, to answer your question, this should do the trick:
[[ $(basename $SHELL) = 'bash' ]] || exec /bin/bash
The exec builtin replaces the current shell with the given command (in this case, /bin/bash).
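A related sketch that avoids inspecting $SHELL (which names your login shell, not the shell currently reading the script): test a bash-only variable and re-exec. This assumes bash is at /bin/bash and only helps when a POSIX-style shell is interpreting the file:
#!/bin/bash
# If a non-bash POSIX shell is reading this file, restart it under bash
if [ -z "$BASH_VERSION" ]; then
    exec /bin/bash "$0" "$@"
fi

if [ ! -f readexportfile ]; then
    echo "readexportfile does not exist"
    exit 0
fi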
You can use a shebang (#!) to overcome your issue.
In your code you are already using a shebang, but make sure it is the very first line of the file.
$ cat test.sh
#!/bin/bash
if [ ! -f readexportfile ]; then
    echo "readexportfile does not exist"
    exit 0
else
    echo "readexportfile exists"
fi
$ ./test.sh
readexportfile does not exist
$ echo $SHELL
/bin/tcsh
In the above code, even though my login shell is tcsh, the script is executed by bash because of the shebang. If there is no shebang, the script is interpreted by the shell you are logged into.
In your case, you can also check the location of the bash interpreter using
$ which bash
or
$ cat /etc/shells | grep bash
I am trying to beef up my notify script. The way the script works is that I put it behind a long-running shell command, and then all sorts of notifications get invoked after the long-running command finishes.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long running script. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware that if I changed it to work more like time, e.g. my_notify longrunning_command..., my problem would be solved, but I actually like that I can tack it onto the end of a command, and I fear the complications of the second solution.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function to accomplish that. For a simple script like that it should be pretty easy to have it work in both zsh and bash. Just place the following in a file:
my_notify() {
    echo "exit code: $?"
    echo "PPID: $PPID"
}
Then source that file from your shell startup files. Since the function runs within your interactive shell, you may want to use $$ rather than $PPID.
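Hypothetical usage, assuming the function was saved as ~/my_notify.sh:
# In ~/.zshrc or ~/.bashrc:
. ~/my_notify.sh

# In a new shell: the function runs in the current process,
# so $? still holds the previous command's exit code
false; my_notify      # prints "exit code: 1"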
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
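A minimal sketch of the environment-variable route; the name last_status is illustrative (avoid status, which is special in zsh). With my_notify as a plain /bin/sh script:
#!/bin/sh
# my_notify: read the exit code passed in via the environment
echo "exit code: ${last_status:-unset}"
invoked like this:
$ false; last_status=$? my_notify
exit code: 1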
One way to implement this is to use a here-document (EOF tag) in a master script that generates your my_notify script.
#!/bin/bash
if [ -f my_notify ] ; then
    rm -rf my_notify
fi
if [ -f my_temp ] ; then
    rm -rf my_temp
fi
retval=`ls non_existent_file &> /dev/null ; echo $?`
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"
# Unquoted EOF so $retval and $ppid are expanded when the file is written
cat >> my_notify << EOF
#!/bin/bash
echo "exit code: $retval"
echo "PPID = $ppid"
EOF
sh my_notify
You can refine this script for your purpose.
How can a script determine its path when it is sourced by ksh? i.e.
$ ksh ". foo.sh"
I've seen very nice ways of doing this in bash posted on Stack Overflow and elsewhere, but haven't yet found a ksh method.
Using "$0" doesn't work. This simply refers to "ksh".
Update: I've tried using the "history" command but that isn't aware of the history outside the current script.
$ cat k.ksh
#!/bin/ksh
. j.ksh
$ cat j.ksh
#!/bin/ksh
a=$(history | tail -1)
echo $a
$ ./k.ksh
270 ./k.ksh
I would want it to echo "* ./j.ksh".
If it's the AT&T ksh93, this information is stored in the .sh namespace, in the variable .sh.file.
Example
sourced.sh:
(
echo "Sourced: ${.sh.file}"
)
Invocation:
$ ksh -c '. ./sourced.sh'
Result:
Sourced: /var/tmp/sourced.sh
The .sh.file variable is distinct from $0. While $0 can be ksh or /usr/bin/ksh, or the name of the currently running script, .sh.file will always refer to the file for the current scope.
In an interactive shell, this variable won't even exist:
$ echo ${.sh.file:?}
-ksh: .sh.file: parameter not set
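If what you ultimately need is the directory of the sourced file, a ksh93-only sketch:
# Inside the sourced script:
sourced_dir=$(cd -- "$(dirname -- "${.sh.file}")" && pwd)
echo "Sourced from: $sourced_dir"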
I believe the only portable solution is to override the source command:
source() {
    sourced=$1
    . "$1"
}
And then use source instead of . (the script name will be in $sourced).
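Hypothetical usage, with the function above already defined in the calling script:
# In the calling script:
source ./j.ksh

# Inside j.ksh:
echo "I was sourced as: $sourced"      # prints "I was sourced as: ./j.ksh"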
The difference, of course, between sourcing and forking is that sourcing executes the invoked script within the calling process. Henk showed an elegant solution for ksh93, but if, like me, you're stuck with ksh88, you need an alternative. I'd rather not change the default ksh method of sourcing by using C-shell syntax, and at work it would be against our coding standards, so creating and using a source() function would be unworkable for me. ps output, $0, and $_ are unreliable, so here's an alternative:
$ cat b.sh ; cat c.sh ; ./b.sh
#!/bin/ksh
export SCRIPT=c.sh
. $SCRIPT
echo "PPID: $$"
echo "FORKING c.sh"
./c.sh
If we set the invoked script in a variable and source it using the variable, that variable will be available to the invoked script, since they share the same process space.
#!/bin/ksh
arguments=$_
pid=$$
echo "PID:$pid"
command=`ps -o args -p $pid | tail -1`
echo "COMMAND (from ps -o args of the PID): $command"
echo "COMMAND (from c.sh's \$_ ) : $arguments"
echo "\$SCRIPT variable: $SCRIPT"
echo dirname: `dirname $0`
echo ; echo
Output is as follows:
PID:21665
COMMAND (from ps -o args of the PID): /bin/ksh ./b.sh
COMMAND (from c.sh's $_ ) : SCRIPT=c.sh
$SCRIPT variable: c.sh
dirname: .
PPID: 21665
FORKING c.sh
PID:21669
COMMAND (from ps -o args of the PID): /bin/ksh ./c.sh
COMMAND (from c.sh's $_ ) : ./c.sh
$SCRIPT variable: c.sh
dirname: .
So when we set the SCRIPT variable in the calling script, the variable is accessible to the sourced script, and in the case of a forked process it is copied to the child along with the rest of the parent's environment. In either case, the SCRIPT variable can contain your command and arguments and will be accessible whether the script is sourced or forked.
You should find it as the last command in the history.