I need to differentiate two cases: ( …subshell… ) vs $( …command substitution… )
I already have the following function which differentiates between being run in either a command substitution or a subshell and being run directly in the script.
#!/bin/bash
set -e
function setMyPid() {
    # A child's $PPID is the PID of whichever shell (or subshell) runs this function
    myPid="$(bash -c 'echo $PPID')"
}

function echoScriptRunWay() {
    local myPid
    setMyPid
    if [[ $myPid == $$ ]]; then
        echo "function run directly in the script"
    else
        echo "function run from subshell or substitution"
    fi
}
echoScriptRunWay
echo "$(echoScriptRunWay)"
( echoScriptRunWay; )
Example output:
function run directly in the script
function run from subshell or substitution
function run from subshell or substitution
Desired output
But I want to update the code so it differentiates between command substitution and subshell. I want it to produce the output:
function run directly in the script
function run from substitution
function run from subshell
P.S. I need to differentiate these cases because Bash has different behavior for the built-in trap command when run in command substitution and in a subshell.
P.P.S. I also care about the echoScriptRunWay | cat case, but that's a new question for me, which I created here.
I don't think one can reliably test if a command is run inside a command substitution.
You could test if stdout differs from the stdout of the main script, and if it does, boldly infer it might have been redirected. For example
samefd() {
    # Test if the passed file descriptors share the same inode
    perl -MPOSIX -e "exit 1 unless (fstat($1))[1] == (fstat($2))[1]"
}

# Keep a duplicate of the script's original stdout in a new file descriptor
exec {mainstdout}>&1

whereami() {
    if ((BASHPID == $$))
    then
        echo "In parent shell."
    elif samefd 1 $mainstdout
    then
        echo "In subshell."
    else
        echo "In command substitution (I guess so)."
    fi
}
whereami
(whereami)
echo $(whereami)
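Running the script should print (assuming stdout is a terminal, so the inode heuristic holds):
In parent shell.
In subshell.
In command substitution (I guess so).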
I have a function that runs a set of scripts that set variables, functions, and aliases in the current shell.
reloadVariablesFromScript() {
    for script in "${scripts[@]}"; do
        . "$script"
    done
}
If one of the scripts has an error, I want to exit the script and then exit the function, but not to kill the shell.
reloadVariablesFromScript() {
    for script in "${scripts[@]}"; do
        (
            set -e
            . "$script"
        )
        if [[ $? -ne 0 ]]; then
            >&2 echo "$script failed. Skipping remaining scripts."
            return 1
        fi
    done
}
This would do what I want, except that it doesn't set the variables in the calling shell, whether the script succeeds or fails.
Without the subshell, set -e causes the whole shell to exit, which is undesirable.
Is there a way I can either prevent the called script from continuing on an error without killing the shell or else set/export variables, aliases, and functions from within a subshell?
The following script simulates my problem:
test() {
    (
        set -e
        export foo=bar
        false
        echo Should not have gotten here!
        export bar=baz
    )
    local errorCode=$?
    echo foo="'$foo'". It should equal 'bar'.
    echo bar="'$bar'". It should not be set.
    if [[ $errorCode -ne 0 ]]; then
        echo Script failed correctly. Exiting function.
        return 1
    fi
    echo Should not have gotten here!
}
test
If worst comes to worst, since these scripts don't actually edit the filesystem, I can run each script in a subshell, check the exit code, and, if it succeeds, run it again outside of a subshell.
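That fallback might look something like this (a sketch, assuming, as stated, the scripts have no side effects outside the shell; the double-sourcing is the price of the dry run):
reloadVariablesFromScript() {
    local script rc
    for script in "${scripts[@]}"; do
        # Dry run in a throwaway subshell first, discarding its output
        ( set -e; . "$script" ) >/dev/null 2>&1
        rc=$?
        if [[ $rc -ne 0 ]]; then
            >&2 echo "$script failed. Skipping remaining scripts."
            return 1
        fi
        # The dry run passed, so source again in the current shell for real
        . "$script"
    done
}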
Note that set -e has a number of surprising behaviors -- relying on it is not universally considered a good idea. That caveat having been given, though: we can shuffle environment variables, aliases, and shell functions out as text:
envTest() {
    local errorCode newVars
    newVars=$(
        set -e
        {
            export foo=bar
            false
            echo Should not have gotten here!
            export bar=baz
        } >&2
        # Print generated code which, when eval'd, recreates our variables,
        # functions, and aliases (skipping read-only variables)
        declare -p | egrep -v '^declare -[^[:space:]]*r'
        declare -f
        alias -p
    ); errorCode=$?
    if (( errorCode == 0 )); then
        eval "$newVars"
    fi
    printf 'foo=%q. It should equal %q\n' "$foo" "bar"
    printf 'bar=%q. It should not be set.\n' "$bar"
    if [[ $errorCode -ne 0 ]]; then
        echo 'Script failed correctly. Exiting function.'
        return 1
    fi
    echo 'Should not have gotten here!'
}
envTest
Note that this code only applies the exports if the entire script segment succeeds; the question text and comments appear to indicate that this is acceptable, if not desired.
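For reference, with the false in place, running envTest should print:
foo=''. It should equal bar
bar=''. It should not be set.
Script failed correctly. Exiting function.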
I'm currently working with the following version of Bash:
GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu)
My current script:
#!/usr/bin/env bash

function main() {
    local commands=$@
    for command in ${commands[@]} ; do
        echo "command arg: $command"
    done
}

if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    set -e
    main $@
fi
In simple terms, this script will only exec main if it's the script being called, similar to Python's if __name__ == '__main__' convention.
In the main function, I'm simply looping over all the command variables, but quote escaping isn't happening as expected:
$ tests/simple /bin/bash -c 'echo true'
command arg: /bin/bash
command arg: -c
command arg: echo
command arg: true
The last argument here should get parsed by Bash as a single argument, nevertheless it is split into individual words.
What am I doing wrong? I want echo true to show up as a single argument.
You are getting the right output except for the 'echo true' part which is getting word split. You need to use double quotes in your code:
main "$#"
And in the function:
function main() {
    local commands=("$@") # need () and double quotes here
    for command in "${commands[@]}" ; do
        echo "command arg: $command"
    done
}
The function gets its own copy of $@, and hence you don't really need to make a local copy of it.
With these changes, we get this output:
command arg: /bin/bash
command arg: -c
command arg: echo true
In general, it is not good to store shell commands in a variable. See BashFAQ/050.
See also:
How to copy an array in Bash?
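To see what the quoting changes, here is a tiny standalone demo (printargs is a hypothetical helper, not part of the question's script):
printargs() { for a in "$@"; do echo "arg: $a"; done; }

set -- /bin/bash -c 'echo true'  # simulate the script's arguments
printargs $@                     # unquoted: 'echo true' is word-split -> 4 args
printargs "$@"                   # quoted: 'echo true' stays one word -> 3 args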
You'll likely want to do something more like this:
function main() {
    while [ $# -gt 0 ]
    do
        echo "$1"
        shift
    done
}
main /bin/bash -c "echo true"
The key really is $#, the special parameter that the shell automatically sets to the number of positional arguments (not counting the invocation name $0). If the function/script was called with the following command line:
$ main /bin/bash -c "echo true"
$# would have the value "3" for the arguments: "/bin/bash", "-c", and "echo true". The last one counts as one argument because it is enclosed in quotes.
The shift command "shifts" all positional arguments one position to the left; the leftmost argument is discarded. Inside a function, shift operates on the function's own arguments.
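With the invocation above, the loop prints each argument on its own line:
/bin/bash
-c
echo true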
Quoting of $@ passed to main was your issue, but I thought I would mention that you also do not need to assign the value inside main to use it.
You could do the following:
main()
{
    for command
    do
        ...
    done
}
main "$@"
Let's imagine we have this code (script.sh):
#!/bin/bash
set -e
f() {
    echo "[f] Start" >&2
    echo "f:before-false1"
    echo "f:before-false2"
    false
    echo "f:after-false"
    echo "[f] Fail! I don't want this executed" >&2
}
out=$(f)
The output:
$ bash script.sh
[f] Start
[f] Fail! I don't want this executed
I understand that $(...) starts a sub-shell where set -e is not propagated, so my question is: what's the idiomatic way to make this run as expected without too much clutter? I can see 3 solutions, none of which I like (nor am I actually sure they indeed work):
1) Add set -e to the start of f (and every other function in the app).
2) Run $(set -e && f).
3) Add ... || return 1 to every command that may fail.
It's not the prettiest solution, but it does allow you to emulate set -e for the current shell as well as any functions and subshells:
#!/bin/bash
# Set up an ERR trap that unconditionally exits with a nonzero exit code.
# Similar to -e, this trap is invoked when a command reports a nonzero
# exit code (outside of a conditional / test).
# Note: This trap may be called *multiple* times.
trap 'exit 1' ERR
# Now ensure that the ERR trap is called not only by the current shell,
# but by any subshells too:
# set -E (set -o errtrace) causes functions and subshells to inherit
# ERR traps.
set -E
f() {
    echo "[f] Start" >&2
    echo "f:before-false1"
    echo "f:before-false2"
    false
    echo "f:after-false"
    echo "[f] Fail! I don't want this executed" >&2
}
out=$(f)
Output (to stderr) if you call this script (the exit code afterward will be 1). Note how the 2nd echo to stderr (>&2) is not printed, proving that the failure of false aborted the command substitution:
[f] Start
Note:
By design, set -e / trap ERR only respond to failures that aren't part of conditionals (see man bash, under the description of set (search for literal " set ["), for the exact rules, which changed slightly between Bash 3.x and 4.x).
Thus, for instance, f does NOT trigger the trap in commands such as if ! f; then ... or f && echo ok. The following triggers the trap in the subshell (the command substitution $(...)) but not in the enclosing conditional ([[ ... ]]): [[ $(f) == 'foo' ]] && echo ok, so the script as a whole doesn't abort.
To exit a function / subshell explicitly in such cases, use something like || return 1 / || exit 1, or call the function / subshell separately, outside of a conditional first; e.g., in the [[ $(f) == 'foo' ]] case: res=$(f); [[ $res == 'foo' ]]. The res=$(f) will then trigger the trap for the current shell too.
As for why the trap code may be invoked multiple times: In the case at hand, false inside f() first triggers the trap, and then, because the trap's exit 1 exits the subshell ($(f)), the trap is triggered again for the current shell (the one running the script).
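Here is a minimal sketch of the conditional exemption described above, using the same trap 'exit 1' ERR / set -E setup:
#!/bin/bash
trap 'exit 1' ERR
set -E

f() { false; }

if ! f; then       # failure inside a conditional: the ERR trap is NOT invoked
    echo "f failed; script continues"
fi

f                  # bare call: the ERR trap fires and the script exits with 1
echo "never reached"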
Instead of command substitution, you should use process substitution to call your function so that set -e remains in effect:
mapfile arr < <(f) # call function f using process substitution
out="${arr[*]}" # convert array content into a string
declare -p out # check output
Output:
[f] Start
declare -- out="f:before-false1
f:before-false2
"
I am writing a script in Bash. I have a function within the script that I want to provide progress feedback to the user. The only problem is that the echo command does not print to the terminal; instead, all echoes are concatenated together and returned at the end.
Considering the following simplified code how do I get the first echo to print in the users terminal and have the second echo as the return value?
function test_function {
    echo "Echo value to terminal"
    echo "return value"
}
return_val=$(test_function)
Yet another solution, other than sending to stderr (this may be preferable if your stderr has other uses, or might be redirected by the caller).
This solution prints directly to the terminal's tty:
function test_function {
    echo "Echo value to terminal" > /dev/tty
    echo "return value"
}
-- update --
If your system supports the tty command, you can obtain your tty device from it, and thus you may:
echo "this prints to the terminal" > `tty`
Send terminal output to stderr:
function test_function {
    echo "Echo value to terminal" >&2
    echo "return value"
}
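The call site from the question then works unchanged; only where the progress message lands differs:
return_val=$(test_function)       # the progress message goes to stderr (the screen)
echo "return_val='$return_val'"   # captures only "return value"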
Don't use command substitution to obtain the return status from the function.
The return status is always available in the $? variable, so you can use that rather than command substitution.
Test
$ function test_function {
> return_val=10;
> echo "Echo value to terminal $return_val";
> return $return_val;
> }
$ test_function
Echo value to terminal 10
$ return_value=$?
$ echo $return_value
10
If you don't know which terminal/device you are on:
function print_to_terminal(){
    echo "Value" >$(tty)
}
I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
    cmnd=$1
    $($cmnd)
    if [ $? != 0 ]; then
        printf "Error when executing command: '$command'"
        exit $ERROR_CODE
    fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
    typeset cmnd="$*"
    typeset ret_code

    echo cmnd=$cmnd
    eval $cmnd
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error: [%d] when executing command: '$cmnd'" $ret_code
        exit $ret_code
    fi
}
command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is a good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is good practice to store the return code in a variable (and store it ASAP), so that you can use it later like I did in printf "Error: [%d] when executing command: '$cmnd'" $ret_code
pass the command with quotes surrounding it, as in safeRunCommand "$command". If you don't, then cmnd will get only the value ls and not ls -l. And it is even more important if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending upon how complex your command argument is.
eval is used to evaluate the command string, so that a command containing pipes works correctly
Note: Do remember that some commands return 1 even when there isn't any error. grep, for example, returns 0 if it found something, else 1.
I had tested with KornShell and Bash. And it worked fine. Let me know if you face issues running this.
Try
safeRunCommand() {
    "$@"
    if [ $? != 0 ]; then
        printf "Error when executing command: '$1'"
        exit $ERROR_CODE
    fi
}
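With this variant, pass the command as separate words rather than as one string (and note that, without eval, pipes cannot be embedded in the arguments):
safeRunCommand ls -l /tmp        # runs: ls -l /tmp
safeRunCommand grep p myfile.txt # hypothetical file name, for illustration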
It should be $cmnd instead of $($cmnd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmd="$1"; $cmd with "$@". And do not run your script as command="some cmd"; safeRun command. Run it as safeRun some cmd.
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return() but not exit() from your subroutine to allow the calling block to test the success or failure of a particular command. That aside, you don't capture 'ERROR_CODE' so that is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh
command="/bin/date -u" #...Example Only
safeRunCommand() {
    cmnd="$@" #...ensure whitespace is passed and preserved
    $cmnd
    ERROR_CODE=$? #...so we have it for the command we want
    if [ ${ERROR_CODE} != 0 ]; then
        printf "Error when executing command: '${command}'\n"
        exit ${ERROR_CODE} #...consider 'return()' here
    fi
}
safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed, the exit code is printed by the function instead, so it can be captured with command substitution.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash
# $ec is set inside the command substitution's subshell, so capture the
# code that ec -h prints instead of reading $ec afterwards
exit_code="$(ec -h 'ls -l | grep p')"
if [[ "$exit_code" != "0" ]]; then
    echo "Error when executing command: 'grep p' [$exit_code]"
    exit "$exit_code"
fi
You should also note that the exit code you will see is for the grep command being run, as it is the last command in the pipeline, not the ls.