Does PowerShell have an equivalent to the bash subshell? - bash

One thing that's really great in the Linux bash shell is that you can define variables inside a subshell, and once that subshell completes, the variables defined within it are simply gone, provided you defined them inside the subshell and without exporting them.
for example:
$ ( bob=4 )
$ echo $bob
$
No variable exists so no output.
I was also recently writing some PowerShell scripts and noticed that I kept having to null out my variables and objects at the end of each script; a subshell equivalent in PowerShell would clear this up.

I've not heard of such functionality before, but you can get the same effect by running something like the following:
Clear-Host
$x = 3
& {
$a = 5
"inner a = $a"
"inner x = $x"
$x++
"inner x increment = $x"
}
"outer a = $a"
"outer x = $x"
Output:
inner a = 5
inner x = 3
inner x increment = 4
outer a =
outer x = 3
That is, this uses the call operator (&) to run a script block ({...}).
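For comparison, a sketch of the same experiment as a genuine bash subshell, which produces the same output:
x=3
(
a=5
echo "inner a = $a"
echo "inner x = $x"
x=$((x + 1))
echo "inner x increment = $x"
)
echo "outer a = $a"
echo "outer x = $x"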

Related

bash: set -o errexit is not exiting when a command has an error. How do you change the code to make it exit when there is an error? [duplicate]

In the bash man page, it states:
Exit immediately if a pipeline (which may consist of a single simple command),
a subshell command enclosed in parentheses, or one of the commands executed as
part of a command list enclosed by braces...
So I assumed that a function should be considered a command list enclosed by braces. However, if you apply a conditional to the function call, errexit no longer persists inside the function body and it executes the entire command list before returning. Even if you explicitly create a subshell inside the function with errexit enabled for that subshell, all commands in the command list are executed. Here is a simple example that demonstrates the issue:
function a() { b ; c ; d ; e ; }
function ap() { { b ; c ; d ; e ; } ; }
function as() { ( set -e ; b ; c ; d ; e ) ; }
function b() { false ; }
function c() { false ; }
function d() { false ; }
function e() { false ; }
( set -Eex ; a )
+ a
+ b
+ false
( set -Eex ; ap )
+ ap
+ b
+ false
( set -Eex ; as )
+ as
+ set -e
+ b
+ false
Now if I apply a conditional to each of them...
( set -Eex ; a || false )
+ a
+ b
+ false
+ c
+ false
+ d
+ false
+ e
+ false
+ false
( set -Eex ; ap || false )
+ ap
+ b
+ false
+ c
+ false
+ d
+ false
+ e
+ false
+ false
( set -Eex ; as || false )
+ as
+ set -e
+ b
+ false
+ c
+ false
+ d
+ false
+ e
+ false
+ false
You started to quote the manual but then you cut the bit that explained this behaviour, which was in the very next sentence:
-e Exit immediately if a pipeline, which may consist of a single simple command, a subshell command enclosed in parentheses, or one of the commands executed as part of a command list enclosed by braces returns a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command’s return status is being inverted with !.
The bug-bash mailing list has an explanation by Eric Blake that is more explicit about functions:
Short answer: historical compatibility.
...
Indeed, the correct behavior mandated by POSIX (namely, that 'set -e' is
completely ignored for the duration of the entire body of f(), because f
was invoked in a context that ignores 'set -e') is not intuitive. But
it is standardized, so we have to live with it.
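A minimal sketch of that ignored context (f here is an illustrative function): because f is invoked on the left-hand side of ||, set -e is suspended for its entire body:
set -e
f() { false ; echo "still running despite set -e" ; }
f || echo "f failed" # never prints: f returns the echo's status, 0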
And some words about whether set -e can be used to achieve the desired behavior:
Because once you are in a context that ignores 'set -e', the historical
behavior is that there is no further way to turn it back on, for that
entire body of code in the ignored context. That's how it was done 30
years ago, before shell functions were really thought about, and we are
stuck with that poor design decision.
Not an answer to the original question, but a work-around for the underlying problem: set up a trap on errors:
function on_error() {
echo "error happened!"
exit 1 # without this (or an active set -e), execution would continue past the trap
}
trap on_error ERR
echo "OK so far"
false
echo "this line should not execute"
The reason for the behavior itself is properly explained in other answers (basically it's expected bash behavior as per the manual and POSIX): https://stackoverflow.com/a/19789651/1091436
Not an answer, but you might fix this counterintuitive behavior by defining this helper function:
fixerrexit() {
# Run "$*" in a subshell; if the caller has errexit on ($- contains 'e'),
# turn it on inside the subshell too before evaluating the command.
( eval "expr '$-' : '.*e' >/dev/null && set -e; $*" )
}
Then call your functions via fixerrexit. For example:
f1()
{
mv unimportant-0.txt important.txt
rm unimportant-*.txt
}
set -e
if fixerrexit f1
then
echo here is the important file: important.txt
echo unimportant files are deleted
fi
If the outer context has errexit on, then fixerrexit turns on errexit inside f1() as well, so you don't need to worry about commands being executed after a failure occurs.
The only downside is that you cannot set variables in the caller's scope, since it runs f1 inside a subshell.
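If you do need a value out, a possible workaround (g and result are illustrative names) is to have the function print its result and capture it with command substitution, which is itself a subshell anyway:
g() { echo "computed-value" ; }
result=$(fixerrexit g)
echo "got: $result" # got: computed-value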

Update global variable from while loop

I'm having trouble updating a global variable in this shell script.
I have read that variables inside loops run in a subshell. Can someone clarify how this works and what steps I should take?
USER_NonRecursiveSum=0.0
while [ $lineCount -le $(($USER_Num)) ]
do
thisTime="$STimeDuration.$NTimeDuration"
USER_NonRecursiveSum=`echo "$USER_NonRecursiveSum + $thisTime" | bc`
done
That particular style of loop does not run in a sub-shell; it will update the variable just fine. You can see that in the following code, which is equivalent to yours other than adding the things you haven't included in the question:
USER_NonRecursiveSum=0
((USER_Num = 4)) # Add this to set loop limit.
((lineCount = 1)) # Add this to set loop control variable initial value.
while [ $lineCount -le $(($USER_Num)) ]
do
thisTime="1.2" # Modify this to provide specific thing to add.
USER_NonRecursiveSum=`echo "$USER_NonRecursiveSum + $thisTime" | bc`
(( lineCount += 1)) # Add this to limit loop.
done
echo ${USER_NonRecursiveSum} # Add this so we can see the final value.
That loop runs four times, adding 1.2 each time to a value that starts at zero, and you can see it ends up as 4.8 after the loop is done.
While the echo command does run in a sub-shell, that's not an issue: the backticks explicitly capture its output and "deliver" it to the current shell.
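For contrast, the loop style that does run in a sub-shell is the pipeline form; a sketch of the difference:
sum=0
printf '1.2\n1.2\n' | while read line
do
sum=$(echo "$sum + $line" | bc)
done
echo "$sum" # prints 0: the loop body ran in a subshell, so the updates were lost

sum=0
while read line
do
sum=$(echo "$sum + $line" | bc)
done < <(printf '1.2\n1.2\n')
echo "$sum" # prints 2.4: the loop ran in the current shell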

Bash passing array as reference to function gets clobbered

I am using GNU bash version 4.4.12 and have come across an unusual situation. I am trying to pass an array by reference to a function. The following code works as expected.
function test() {
local -n varK=${2}
local varJ=$(( ${1} + 10 ))
echo "${varJ}, ${varK[#]}"
}
varI=( 1 2 )
varJ=3
echo "result = '$( test 1 varI )'" # result = '11, 1 2'
echo "varI = '${varI[#]}'" # varI = '1 2'
echo "varJ = '${varJ}'" # varJ = '3'
The weird situation is that if I use the variable varI within the function, even if I define it as a local variable, then the variable varI gets clobbered.
function test() (
local -n varK=${2}
local varI=$(( ${1} + 10 ))
echo "${varI}, ${varK[#]}"
)
varI=( 1 2 )
varJ=3
echo "result = '$( test 1 varI )'" # result = '11, 11'
echo "varI = '${varI[#]}'" # varI = '1 2'
echo "varJ = '${varJ}'" # varJ = '3'
Doesn't the command local -n varK=${2} make a local copy of the array passed by reference? Also, since the function body is in parentheses, I am running it in a subshell; shouldn't that leave everything outside it unaffected, as all the documentation claims?
The nameref doesn't know anything about the name it refers to; it is a simple alias. local -n varK=varI simply states that whenever you use the parameter name varK, pretend it is really varI, whatever that name means. By the time you use varK, varI is a local variable with the value 11, so that's what varK is as well. (Part of the confusion may also lie in how bash treats a non-array name used with array syntax; ${varI[@]} and ${varI} are equivalent whether or not the array attribute is set on the name.)
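A minimal sketch of that name-level aliasing (f is an illustrative name): the nameref resolves whatever variable the name happens to mean at expansion time:
varI="outer"
f() {
local -n varK=varI
local varI="inner"
echo "$varK" # prints "inner": the nameref follows the *name* varI, which now means the local
}
f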
So in order to use varI within my function and still have access to the referenced array, I made a local copy of the referenced array:
local varJ=( "${varK[@]}" )
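Putting that together, a sketch of the fixed function (varCopy is an illustrative name):
function test() {
local -n varK=${2}
local varCopy=( "${varK[@]}" ) # snapshot the referenced array before shadowing anything
local varI=$(( ${1} + 10 ))
echo "${varI}, ${varCopy[@]}"
}
varI=( 1 2 )
echo "result = '$( test 1 varI )'" # result = '11, 1 2'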

set var in Csh script

I'm creating a script like this
#!/bin/csh
set h=1;
while [h=1] do echo "hi"; h=2; done;
but when I execute it (csh test.sh) I get this:
[h=1]: No match.
Try:
set h = 1
while ( $h == 1 )
echo "hi"
set h = 2
end
You seem to be trying to mix Bourne shell syntax into your C shell script.
Csh is generally a lousy language for scripting; try to avoid it if at all possible:
http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/
UPDATE:
The csh equivalent to read h is:
set h = $<

using 'eval' in a function produces different results than when using it in a script

Say in my current directory I have files: a.1, a.2, b.1, b.2.
I have this script in file 'x':
echo `eval echo "$1"`
echo `eval echo "$2"`
If I do:
> set -f; x a* b*
I get:
> a.1 a.2
> b.1 b.2
... which is very nice: I can access the command line arguments (as typed) and expand them at my pleasure.
Alas! If I put the contents of my script 'x' inside a function, it no longer works; the wildcards refuse to expand. Of course I can remove the 'set -f', but then the list of arguments expands 'out of control', which is to say that I don't know where the 'b*' arguments start, since I don't know how many arguments 'a*' will expand to.
Can I make the above work inside a function? And, for that matter, why does it behave differently than the same code in a script?
Thanks
The reason for the difference is that when you put it in a separate script x, and call that script using
x ...
then you're launching a separate instance of Bash to run the script (that is, it's equivalent to
bash x ...
), and that separate instance doesn't inherit the set -f setting, whereas when you put it in a shell function, it runs within the same instance.
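A quick sketch of that (the flags in $- show whether noglob is active):
set -f
echo $- # includes 'f' in this shell
bash -c 'echo $-' # a fresh instance: no 'f', so globbing is enabled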
One solution is to drop the whole set -f approach, and just quote your arguments when you call the function:
function x () { for arg in "$@" ; do echo $arg ; done ; }
x 'a.*' 'b.*' # prints a.1 a.2 on one line, b.1 b.2 on the next.
Another solution is to define the function to run in a subshell (but still within the same instance of Bash), and then cancel the set -f within that function by using set +f:
set -f
function x () ( set +f ; for arg in "$@" ; do echo $arg ; done )
x a.* b.* # prints a.1 a.2 on one line, b.1 b.2 on the next.
(The reason for using the subshell, i.e. for using (...) instead of {...} is that otherwise, the set +f would "leak out" of the function call. But maybe that's O.K.?)
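To see the difference the subshell makes, a sketch (braces and parens are illustrative names):
set -f
braces() { set +f ; }
braces
case $- in *f*) echo "noglob still on" ;; *) echo "noglob leaked off" ;; esac
# prints "noglob leaked off": the brace body ran in the current shell

set -f
parens() ( set +f )
parens
case $- in *f*) echo "noglob still on" ;; *) echo "noglob leaked off" ;; esac
# prints "noglob still on": the subshell kept the change to itself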
Yet another solution is to have the function invoke a new instance of Bash:
set -f
function x () { for arg in "$@" ; do bash -c "echo $arg" ; done ; }
x a.* b.* # prints a.1 a.2 on one line, b.1 b.2 on the next.
Your initial set -f affects only the current shell instance. When you launch your script, a new shell is started, and that shell has globbing enabled. If you replace the script with the function, the function runs in the context of the shell where set -f has effect, and so no globbing takes place.
You actually don't need to disable globbing to get the behaviour you want. You need only quote the arguments to your function:
myfunc()
{
echo $1
echo $2
}
myfunc "a.*" "b.*"
