stop a calling script upon error - bash

I have 2 shell scripts, namely script A and script B.
Both of them use "set -e", telling them to stop upon error.
However, when script A calls script B, and script B hits an error and stops, script A doesn't stop.
How can I stop the mother script when the child script dies?

It should work as you'd expect. For example:
In mother.sh:
#!/bin/bash
set -ex
./child.sh
echo "you should not see this (a.sh)"
In child.sh:
#!/bin/bash
set -ex
ls &> /dev/null # good cmd
ls /path/that/does/not/exist &> /dev/null # bad cmd
echo "you should not see this (b.sh)"
Calling mother.sh:
[me@home]$ ./mother.sh
++ ./child.sh
+++ ls
+++ ls /path/that/does/not/exist
Why is it not working for you?
One possible situation where it won't work as expected is if you specified -e on the shebang line (#!/bin/bash -e) and then passed the script directly to bash (bash mother.sh), which treats the shebang line as a comment and therefore ignores the -e.
For example, if we change mother.sh to:
#!/bin/bash -ex
./child.sh
echo "you should not see this (a.sh)"
Notice how it behaves differently depending on how you call it:
[me@home]$ ./mother.sh
+ ./child.sh
+ ls
+ ls /path/that/does/not/exist
[me@home]$ bash mother.sh
+ ls
+ ls /path/that/does/not/exist
you should not see this (mother.sh)
Explicitly calling set -e within the script will solve this problem.
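If you would rather not rely on set -e in the parent at all, a minimal alternative (a sketch, not part of the original answer) is to check the child's exit status explicitly in mother.sh:
#!/bin/bash
./child.sh || exit $?  # stop here if the child fails, preserving its exit code
echo "only reached if child.sh succeeded"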

Related

How to pass argument in bash pipe from terminal

I have a bash script, shown below, in a file called test.sh:
#!/usr/bin/env bash
echo $1
echo "execution done"
When I execute this script using
Case 1
./test.sh "started"
started
execution done
it shows properly.
Case 2
If I execute it with
bash test.sh "started"
I get the output
started
execution done
But I would like to execute this using a cat or wget command with arguments.
For example:
Q1
cat test.sh |bash
Or using a command
Q2
wget -qO - "url contain bash" |bash
So in Q1 and Q2, how do I pass arguments?
Something similar to what is shown in this GitHub repo:
https://github.com/creationix/nvm
Please refer to its installation script.
$ bash <(curl -Ls url_contains_bash_script) arg1 arg2
Explanation:
$ echo -e 'echo "$1"\necho "done"' >test.sh
$ cat test.sh
echo "$1"
echo "done"
$ bash <(cat test.sh) "hello"
hello
done
$ bash <(echo -e 'echo "$1"\necho "done"') "hello"
hello
done
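Alternatively, when the script arrives on standard input (as in Q1 and Q2), bash's -s option reads commands from stdin and treats the remaining arguments as positional parameters (-- ends bash's own option parsing). Reusing the test.sh above:
$ cat test.sh | bash -s -- "hello"
hello
done
For Q2 it would be wget -qO - "url contain bash" | bash -s -- arg1 arg2.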
You don't need to pipe to bash; bash already runs as standard in your terminal.
If I had to use cat with a script, this is what I'd do:
cat script.sh > file.sh; chmod 755 file.sh; ./file.sh arg1 arg2 arg3
script.sh is the source script; you can replace that call with anything you want.
This has security implications, though: you are running arbitrary code in your shell, especially with wget, where the code comes from a remote location.

How can I redirect stdout and stderr with a variable?

Normally, we use
sh script.sh 1>t.log 2>t.err
to redirect the output.
How can I use a variable to hold the redirections, like this:
string="1>t.log 2>t.err"
sh script.sh $string
You need to use the eval shell builtin for this purpose. As per the bash man page:
eval [arg ...]
The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0.
Run your command like below:
eval sh script.sh $string
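For example, a short hypothetical transcript (assuming script.sh exists and writes to both streams):
$ string="1>t.log 2>t.err"
$ eval sh script.sh $string
$ ls t.log t.err
t.log  t.err
Without eval, the shell would pass 1>t.log and 2>t.err to script.sh as literal arguments, because redirections are parsed before variable expansion.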
However, do you really need to run script.sh through the sh command? If you instead put the interpreter line (#!/bin/sh as the first line) in the script itself and give it execute permission, running it directly gives you access to the return code of the command inside it. Below is an example with and without sh; notice the difference in exit codes.
Note: I had only one file, try1.sh, in my current directory, so the ls command was bound to exit with return code 2.
$ ls try1.sh try1.sh.backup 1>out.txt 2>err.txt
$ echo $?
2
$ eval sh ls try1.sh try1.sh.backup 1>out.txt 2>err.txt
$ echo $?
127
In the second case, the exit code is that of the sh shell; in the first case, it is that of the ls command. Choose depending on your needs.
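As an aside, if you control script.sh, a sketch of an eval-free alternative (the positional parameters here are illustrative) is to redirect inside the script with exec, so callers only pass file names:
#!/bin/sh
logfile=${1:-t.log}  # hypothetical arguments holding the file names
errfile=${2:-t.err}
exec 1>"$logfile" 2>"$errfile"  # everything from here on goes to the files
echo "this line ends up in $logfile"
Callers then run ./script.sh my.log my.err with no eval needed.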
I figured out one way, but it's ugly:
echo script.sh $string | sh
I think you can just put the file name into a string variable and then use redirection:
file_name="file1"
outfile="$file_name"".log"
errorfile="$file_name"".err"
sh script.sh 1> $outfile 2> $errorfile

Bash script: how to get the whole command line which ran the script

I would like to run a bash script and be able to see the command line used to launch it:
sh myscript.sh arg1 arg2 1> output 2> error
in order to know if the user used the "std redirection" '1>' and '2>', and therefore adapt the output of my script.
Is it possible with built-in variables?
Thanks.
On Linux and some Unix-like systems, /proc/self/fd/1 and /proc/self/fd/2 are symlinks to wherever your standard streams are pointing. Using readlink, we can check whether they were redirected by comparing them with the parent process's file descriptors.
We will, however, not use self but $$, because a command substitution such as $(readlink /proc/self/fd/1) spawns a new process, so self would no longer refer to the current bash script but to that subprocess.
$ cat test.sh
#!/usr/bin/env bash
#errRedirected=false
#outRedirected=false
parentStderr=$(readlink /proc/"$PPID"/fd/2)
currentStderr=$(readlink /proc/"$$"/fd/2)
parentStdout=$(readlink /proc/"$PPID"/fd/1)
currentStdout=$(readlink /proc/"$$"/fd/1)
[[ "$parentStderr" == "$currentStderr" ]] || errRedirected=true
[[ "$parentStdout" == "$currentStdout" ]] || outRedirected=true
echo "$0 ${outRedirected:+>$currentStdout }${errRedirected:+2>$currentStderr }$#"
$ ./test.sh
./test.sh
$ ./test.sh 2>/dev/null
./test.sh 2>/dev/null
$ ./test.sh arg1 2>/dev/null # You will lose the argument order!
./test.sh 2>/dev/null arg1
$ ./test.sh arg1 2>/dev/null >file ; cat file
./test.sh >/home/camusensei/file 2>/dev/null arg1
$
Do not forget that the user can also redirect to a 3rd file descriptor which is open on something else...!
Not really possible. You can check whether stdout and stderr are pointing to a terminal: [ -t 1 -a -t 2 ]. But if they do, it doesn't necessarily mean they weren't redirected (think >/dev/tty5). And if they don't, you can't distinguish between stdout and stderr being closed and them being redirected. And even if you know for sure they are redirected, you can't tell from the script itself where they point after redirection.
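For completeness, a minimal sketch of that terminal test:
#!/bin/bash
if [ -t 1 ] && [ -t 2 ]; then
  echo "stdout and stderr appear to be a terminal"
else
  echo "at least one of stdout/stderr is redirected, piped, or closed"
fi
As noted above, this only detects "not a terminal"; it cannot prove a redirection happened or say where the stream went.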

Getting exit code of last shell command in another script

I am trying to beef up my notify script. The way the script works is that I put it after a long-running shell command, and then all sorts of notifications get invoked after the long-running command finishes.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long running script. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware of the fact that if I change it to work more like time, e.g. my_notify longrunning_command... my problem would be solved, but I actually like that I can tack it at the end of a command and I fear complications of this second solution.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function in order to accomplish that. For a simple script like that it should be pretty easy to have it working in both zsh and bash. Just place the following in a file:
my_notify() {
echo "exit code: $?"
echo "PPID: $PPID"
}
Then source that file from your shell startup files. Although since that would be run from within your interactive shell, you may want to use $$ rather than $PPID.
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
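For example, a minimal sketch of the environment-variable route (RC is a name chosen here for illustration; my_notify would read it instead of $?):
sleep 100; RC=$? my_notify
with my_notify containing:
#!/bin/sh
echo "exit code: ${RC:-unknown}"
The RC=$? prefix captures the previous command's status and exports it only into my_notify's environment; this works in both bash and zsh.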
One way to implement this is to use a here-document (EOF tag) in a master script that generates your my_notify script.
#!/bin/bash
if [ -f my_notify ] ; then
rm -rf my_notify
fi
if [ -f my_temp ] ; then
rm -rf my_temp
fi
retval=`ls non_existent_file &> /dev/null ; echo $?`
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"
cat >> my_notify << EOF # EOF unquoted so that $retval and $ppid expand here
#!/bin/bash
echo "exit code: $retval"
echo " PPID =$ppid"
EOF
sh my_notify
You can refine this script for your purpose.

How to invoke bash, run commands inside the new shell, and then give control back to user?

This must either be really simple or really complex, but I couldn't find anything about it... I am trying to open a new bash instance, then run a few commands inside it, and give the control back to the user inside that same instance.
I tried:
$ bash -lic "some_command"
but this executes some_command inside the new instance, then closes it. I want it to stay open.
One more detail which might affect answers: if I can get this to work I will use it in my .bashrc as alias(es), so bonus points for an alias implementation!
bash --rcfile <(echo '. ~/.bashrc; some_command')
This dispenses with the creation of temporary files. Related questions on other sites:
https://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
https://unix.stackexchange.com/questions/123103/how-to-keep-bash-running-after-command-execution
This is a late answer, but I had the exact same problem and Google sent me to this page, so for completeness here is how I got around the problem.
As far as I can tell, bash does not have an option to do what the original poster wanted to do. The -c option will always return after the commands have been executed.
Broken solution: The simplest and most obvious attempt is:
bash -c 'XXXX ; bash'
This partly works (albeit with an extra sub-shell layer). However, the problem is that while a sub-shell will inherit the exported environment variables, aliases and functions are not inherited. So this might work for some things but isn't a general solution.
Better: The way around this is to dynamically create a startup file and call bash with this new initialization file, making sure that your new init file calls your regular ~/.bashrc if necessary.
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
echo "source ~/.bashrc" > $TMPFILE
echo "<other commands>" >> $TMPFILE
echo "rm -f $TMPFILE" >> $TMPFILE
# Start the new bash shell
bash --rcfile $TMPFILE
The nice thing is that the temporary init file will delete itself as soon as it is used, reducing the risk that it is not cleaned up correctly.
Note: I'm not sure if /etc/bashrc is usually called as part of a normal non-login shell. If so you might want to source /etc/bashrc as well as your ~/.bashrc.
You can pass --rcfile to Bash to cause it to read a file of your choice. This file will be read instead of your .bashrc. (If that's a problem, source ~/.bashrc from the other script.)
Edit: So a function to start a new shell with the stuff from ~/.more.sh would look something like:
more() { bash --rcfile ~/.more.sh ; }
... and in .more.sh you would have the commands you want to execute when the shell starts. (I suppose it would be elegant to avoid a separate startup file -- you cannot use standard input because then the shell will not be interactive, but you could create a startup file from a here document in a temporary location, then read it.)
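For instance, a rough sketch of that here-document/temporary-file variant (the extra command is illustrative):
more() {
  local tmp
  tmp=$(mktemp) || return
  {
    echo 'source ~/.bashrc'
    echo 'echo "extra setup done"'  # replace with the commands you want
    echo "rm -f '$tmp'"             # let the rcfile clean itself up
  } > "$tmp"
  bash --rcfile "$tmp"
}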
bash -c '<some command> ; exec /bin/bash'
avoids the additional shell sublayer.
You can get the functionality you want by sourcing the script instead of running it, e.g.:
$cat script
cmd1
cmd2
$ . script
$ # at this point cmd1 and cmd2 have been run inside this shell
Append to ~/.bashrc a section like this:
if [ "$subshell" = 'true' ]
then
# commands to execute only on a subshell
date
fi
alias sub='subshell=true bash'
Then you can start the subshell with sub.
The accepted answer is really helpful! Just to add that process substitution (i.e., <(COMMAND)) is not supported in some shells (e.g., dash).
In my case, I was trying to create a custom action (basically a one-line shell script) in Thunar file manager to start a shell and activate the selected Python virtual environment. My first attempt was:
urxvt -e bash --rcfile <(echo ". $HOME/.bashrc; . %f/bin/activate;")
where %f is the path to the virtual environment handled by Thunar.
I got an error (by running Thunar from command line):
/bin/sh: 1: Syntax error: "(" unexpected
Then I realized that my sh (essentially dash) does not support process substitution.
My solution was to invoke bash at the top level to interpret the process substitution, at the expense of an extra level of shell:
bash -c 'urxvt -e bash --rcfile <(echo "source $HOME/.bashrc; source %f/bin/activate;")'
Alternatively, I tried to use here-document for dash but with no success. Something like:
echo -e " <<EOF\n. $HOME/.bashrc; . %f/bin/activate;\nEOF\n" | xargs -0 urxvt -e bash --rcfile
In accordance with the answer by daveraja, here is a bash script that will serve the purpose.
Consider a situation where you are using C shell and you want to execute a command
without leaving the C-shell context/window, as follows.
Command to be executed: search for the exact word 'Testing' recursively in the current directory, only in *.h and *.c files:
grep -nrs --color -w --include="*.{h,c}" Testing ./
Solution 1: Enter into bash from C-shell and execute the command
bash
grep -nrs --color -w --include="*.{h,c}" Testing ./
exit
Solution 2: Write the intended command into a text file and execute it using bash
echo 'grep -nrs --color -w --include="*.{h,c}" Testing ./' > tmp_file.txt
bash tmp_file.txt
Solution 3: Run command on the same line using bash
bash -c 'grep -nrs --color -w --include="*.{h,c}" Testing ./'
Solution 4: Create a script (one time) and use it for all future commands
alias ebash './execute_command_on_bash.sh'
ebash grep -nrs --color -w --include="*.{h,c}" Testing ./
The script is as follows,
#!/bin/bash
# =========================================================================
# References:
# https://stackoverflow.com/a/13343457/5409274
# https://stackoverflow.com/a/26733366/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://www.linuxquestions.org/questions/other-%2Anix-55/how-can-i-run-a-command-on-another-shell-without-changing-the-current-shell-794580/
# https://www.tldp.org/LDP/abs/html/internalvariables.html
# https://stackoverflow.com/a/4277753/5409274
# =========================================================================
# Enable the following line to see the script commands
# printed along with their execution. This helps with debugging.
#set -o verbose
E_BADARGS=85
if [ ! -n "$1" ]
then
echo "Usage: `basename $0` grep -nrs --color -w --include=\"*.{h,c}\" Testing ."
echo "Usage: `basename $0` find . -name \"*.txt\""
exit $E_BADARGS
fi
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
#echo "echo Hello World...." >> $TMPFILE
#initialize the variable that will contain the whole argument string
argList=""
#iterate on each argument
for arg in "$#"
do
#if an argument contains a white space, enclose it in double quotes and append to the list
#otherwise simply append the argument to the list
if echo "$arg" | grep -q " "; then
argList="$argList \"$arg\""
else
argList="$argList $arg"
fi
done
#remove possible leading spaces at the beginning of the list
argList=$(echo $argList | sed 's/^ *//')
# Echoing the command to be executed to tmp file
echo "$argList" >> $TMPFILE
# Note: This should be your last command
# Important last command which deletes the tmp file
last_command="rm -f $TMPFILE"
echo "$last_command" >> $TMPFILE
#echo "---------------------------------------------"
#echo "TMPFILE is $TMPFILE as follows"
#cat $TMPFILE
#echo "---------------------------------------------"
check_for_last_line=$(tail -n 1 $TMPFILE | grep -o "$last_command")
#echo $check_for_last_line
#if tail -n 1 $TMPFILE | grep -o "$last_command"
if [ "$check_for_last_line" == "$last_command" ]
then
#echo "Okay..."
bash $TMPFILE
exit 0
else
echo "Something is wrong"
echo "Last command in your tmp file should be removing itself"
echo "Aborting the process"
exit 1
fi
$ bash --init-file <(echo 'some_command')
$ bash --rcfile <(echo 'some_command')
In case you can't or don't want to use process substitution:
$ cat script
some_command
$ bash --init-file script
Another way:
$ bash -c 'some_command; exec bash'
$ sh -c 'some_command; exec sh'
sh-only way (dash, busybox):
$ ENV=script sh
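For example (a hypothetical transcript; dash and busybox ash read the file named by $ENV when they start interactively):
$ echo 'echo hello from the ENV script' > script
$ ENV=script sh
hello from the ENV script
$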
Here is yet another (working) variant:
This opens a new gnome terminal, then in the new terminal it runs bash. The user's rc file is read first, then a command ls -la is sent for execution to the new shell before it turns interactive.
The last echo adds an extra newline that is needed to finish execution.
gnome-terminal -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
I also find it useful sometimes to decorate the terminal, e.g. with color, for better orientation.
gnome-terminal --profile green -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
