Basically I have written a shell script for a homework assignment that works fine; however, I am having issues with exiting. Essentially the script reads numbers from the user until it reads a negative number, and then does some output. I have the script set to exit with an error code when it receives anything but a number, and that's where the issue is.
The code is as follows:
if test "$number" -eq "$number" >/dev/null 2>&1
then
    # do stuff
else
    echo "There was an error"
    exit 1
fi
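For context, the check above sits inside a read loop roughly like the following sketch (simplified and illustrative, not the actual assignment code):

#!/bin/sh
# keep reading numbers until a negative one is entered
while true; do
    printf "Enter a number: "
    read -r number
    # test fails for anything that is not an integer; errors are silenced
    if test "$number" -eq "$number" >/dev/null 2>&1; then
        if [ "$number" -lt 0 ]; then
            break
        fi
        # do stuff with the number here
    else
        echo "There was an error" >&2
        exit 1
    fi
done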
The problem is that we have to turn in our programs as text files using the script command, and whenever I run my program under script and test the error cases, it exits out of the script session as well. Is there a better way to do this?
The script is being run with the following command in the terminal
script "insert name of program here"
Thanks
If the program you're testing is invoked as a subprocess, then any exit command will only exit the command itself. The fact that you're seeing contrary behavior means you must be invoking it differently.
When invoking your script from the parent testing program, use:
# this runs "yourscript" as its own, external process.
./yourscript
...to invoke it as a subprocess, not
# this is POSIX-compliant syntax to run the commands in "yourscript" in the current shell.
. yourscript
...or...
# this is bash-extended syntax to run the commands in "yourscript" in the current shell.
source yourscript
...as either of the latter will run all the commands -- including exit -- inside your current shell, modifying its state or, in the case of exit, exec or similar, telling it to cease execution.
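If you want to see the difference for yourself, here is a minimal demonstration (the file name yourscript is just a placeholder):

# create a tiny script that exits with a non-zero status
cat > yourscript <<'EOF'
#!/bin/sh
echo "before exit"
exit 3
echo "never reached"
EOF
chmod +x yourscript

# run it as a subprocess: only the child exits; your shell keeps going
./yourscript
echo "parent still alive, child exit status: $?"

# sourcing it instead would run the exit in your current shell and end your session:
# . ./yourscript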
Related
Let's say there's this script
#!/bin/zsh
python -c 'a'
which will fail since a isn't defined. Just before the shell script exits, I want to run a command, say echo bye. How can that be achieved?
Flow is to be:
Python command above fails.
bye appears in terminal.
The zsh script exits.
I'd prefer it to affect the python command as little as possible, e.g. indenting it, putting it in an if block, checking its exit code, etc. In real life, the command is in fact multiple commands.
In the script you posted, the fact that the shell exits is unrelated to any error. The shell would exit anyway, because the last command has been executed. Take for instance the script
#!/bin/zsh
python -c 'a'
echo This is the End
The final echo will always be executed, regardless of the outcome of the python command. To make the script exit when python returns a non-zero exit code, you would write something like
#!/bin/zsh
python -c 'a' || exit $?
echo Successful
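If you also want the bye message from your question printed at that point, you can combine the two; note that the exit status has to be saved before echo overwrites $? (a small sketch):

#!/bin/zsh
python -c 'a' || { ret=$?; echo bye; exit $ret; }
echo Successful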
If you want to exit a script as soon as any command produces a non-zero exit status, AND at the same time want to print a message, you can use the TRAPZERR callback:
#!/bin/zsh
TRAPZERR() {
    # $? holds the failing command's status on entry; save it before echo resets it
    ret=$?
    echo You have an unhandled non-zero exit code in your otherwise fabulous script
    exit $ret
}
python -c 'a'
echo Only Exit Code 0 encountered
I am calling a bash script (say child.sh) within my bash script (say parent.sh), and I would like to have errors in the child script (child.sh) trapped in my parent script (parent.sh).
I read through the Medium article and the Stack Exchange post on this. Based on that, I thought I should use set -E in my parent script so that the traps are inherited by the subshell. Accordingly, my code is as follows:
parent.sh
#!/bin/bash
set -E
error() {
    echo -e "$0: \e[0;33mERROR: The Zero Touch Provisioning script failed while running the command $BASH_COMMAND at line $BASH_LINENO.\e[0m" >&2
    exit 1
}
trap error ERR
./child.sh
child.sh
#!/bin/bash
ls -al > /dev/null
cd non_exisiting_dir #To simulate error
echo "$0: I am still continuing after error"
Output
./child.sh: line 5: cd: non_exisiting_dir: No such file or directory
./child.sh: I am still continuing after error
Can you please let me know what I am missing, so that I can inherit the traps defined in the parent script?
./child.sh does not run in a "subshell".
A subshell is not just any child process of your shell that happens to be a shell too; it is the special environment in which the commands inside (...), $(...) or ...|... are run, and it is usually implemented by forking the current shell without executing another shell.
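To illustrate, a minimal example you can paste into an interactive bash session:

x=1
# the parentheses fork the current shell; changes made inside do not leak out
( x=2; cd /tmp; echo "inside:  x=$x, pwd=$PWD" )
echo "outside: x=$x, pwd=$PWD"    # x is still 1 and the directory is unchanged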
If you want to run child.sh in a subshell, then source that script from a subshell you can create with (...):
(. ./child.sh)
which will inherit your ERR trap because of set -E.
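Applied to the parent.sh from the question, that looks like this (a sketch; only the last line changes):

#!/bin/bash
set -E

error() {
    echo -e "$0: \e[0;33mERROR: The Zero Touch Provisioning script failed while running the command $BASH_COMMAND at line $BASH_LINENO.\e[0m" >&2
    exit 1
}

trap error ERR

# source child.sh inside a (...) subshell so the ERR trap (with set -E) applies to it
(. ./child.sh)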
Notes:
There are other places where bash runs the commands in a subshell: process substitutions (<(...)), coprocesses, the command_not_found_handle function, etc.
In some shells (but not in bash) the leftmost command from a pipeline is not run in a subshell. For instance ksh -c ':|a=2; echo $a' will print 2. Also, not all shells implement subshells by forking a separate process.
Even if bash infamously allows functions to be exported to other bash scripts via the environment (with export -f funcname), that's AFAIK not possible with traps ;-)
I've got a Tcl script with two ways of executing a bash script:
#exec bash ./run.sh
open "|bash ./run.sh r"
The bash script is shown below:
#!/bin/bash
ls
if [ "$?" != "0" ]; then
echo "ERROR: Status failed!" > status
else
echo "Everything is OK!" > status
fi
I'm using tclsh for Windows with bash from git bash. When I use:
exec bash ./run.sh
I've got in status file:
Everything is OK!
otherwise:
open "|bash ./run.sh r"
got:
ERROR: Status failed!
Is there any way to correctly detect the exit code when the script is run via an open Tcl pipe?
You don't describe whether you get different results out of the ls part of the script. That matters; the ls command is most certainly capable of changing its behaviour according to the environment in which it is invoked. This matters because Tcl executes subprocesses (on Windows) directly using the CreateProcess() system call, rather than the various wrapped versions that Cygwin and git bash use. Other possibilities are that you're launching the script in a different directory and so on.
However, in general we'd expect a script to behave very similarly when launched via exec or via open |… r as they share a common core of functionality. The only differences are to do with how output and termination are waited for.
If you create a subprocess pipeline, by default you won't get to find out about errors from it until you close the pipeline. exec generates any errors “immediately” because it doesn't return control to you until the subprocess has terminated and all output has been read.
bash scripting noob here. I've found this article: https://www.shellhacks.com/print-usage-exit-if-arguments-not-provided/ that suggests putting
[ $# -eq 0 ] && { echo "Usage: $0 argument"; exit 1; }
at the top of a script to ensure arguments are passed. Seems sensible.
However, when I do that and test that the line does indeed work (by running the script without supplying any arguments: . myscript.sh), the script does indeed exit, but so does the bash session that I was calling the script from. This is very irritating.
Clearly I'm doing something wrong but I don't know what. Can anyone put me straight?
. myscript.sh is a synonym for source myscript.sh, which runs the script in the current shell (rather than as a separate process). So exit terminates your current shell. (return, on the other hand, wouldn't; it has special behaviour for sourced scripts.)
Use ./myscript.sh to run it "the normal way" instead. If that gives you a permission error, make it executable first, using chmod a+x myscript.sh. To inform the kernel that your script should be run with bash (rather than /bin/sh), add the following as the very first line in the script:
#!/usr/bin/env bash
You can also use bash myscript.sh if you can't make it executable, but this is slightly more error-prone (somebody might do sh myscript.sh instead).
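If you want the argument check itself to be safe whether the script is sourced or executed, one common idiom (a sketch, not the only way) is to try return first and fall back to exit:

#!/usr/bin/env bash
# When sourced, "return" leaves the script without killing the calling shell.
# When executed normally, "return" fails outside a function (error suppressed),
# so the "exit" after || runs instead.
[ $# -eq 0 ] && { echo "Usage: $0 argument" >&2; return 1 2>/dev/null || exit 1; }

echo "got argument: $1"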
The question seems unclear. If you're sourcing the script (source script_name or . script_name), it is interpreted in the current bash process; running a function is the same, it runs in the current process. Otherwise, when you call a script, the calling bash forks a new bash process and waits until it terminates (so running exit there doesn't exit the caller's process), but when you run the exit builtin in the current bash, it exits the current process.
I'd like to write a .sh script that runs several scripts in the same directory one by one, without running them concurrently (e.g. while the first one is still executing, the second one doesn't start executing).
Could you tell me what command I could write in front of each script's name to make that happen?
I've tried source but it gives the following message for every listed script
./outer_script.sh: source: not found
source is a non-standard extension available in bash but not specified by POSIX. POSIX specifies that you must use the . command instead. Other than the name, they are identical.
However, you probably don't want to source, because that is only supposed to be used when you need the script to be able to change the state of the script calling it. It is like a #include or import statement in other languages.
You would usually want to just run the script directly as a command, i.e. do not prefix it with source nor with any other command.
As a quick example of not using source:
for script in scripts/*; do
"$script"
done
If the above does not work, ensure that you've set the executable bit (chmod a+x) on the necessary scripts.
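If you also want the outer script to stop as soon as one of the scripts fails (an assumption about your intent; the question doesn't say), a small variation:

for script in scripts/*; do
    # stop the whole loop with the failing script's exit status
    "$script" || exit $?
done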
That is the normal behavior of a bash script, i.e. if you have three scripts:
script1.sh:
echo "starting"
./script2.sh
./script3.sh
echo "done"
script2.sh:
while [ 1 ]; do
echo "script2"
sleep 2
done
and script3.sh:
echo "script3"
The output is:
starting
script2
script2
script2
...
and script3.sh will never be executed, unless you modify script1.sh to be:
echo "starting"
./script2.sh &
./script3.sh &
echo "done"
in which case the output will be something like:
starting
done
script2
script3
script2
script2
...
So in this case I assume your second-level scripts contain something that starts new processes in the background.
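If that is the case and the second-level scripts really do start background jobs, one fix (a sketch under that assumption; the task names are placeholders) is to have each inner script wait for its own jobs before returning:

#!/bin/bash
# e.g. inside script2.sh
long_running_task &    # placeholder for whatever is started in the background
another_task &         # placeholder
wait                   # block until all background jobs of this script have finished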
Have you included the line #!/bin/bash in your outer_script? On some systems the default shell is not bash, and source is a bash builtin. Otherwise, just call the scripts using ./path/to/script.sh inside the outer_script.