In Bash I can easily do something like
command1 && command2 || command3
which means to run command1 and if command1 succeeds to run command2 and if command1 fails to run command3.
What's the equivalent in PowerShell?
Update: && and || have finally come to PowerShell (Core), namely in v7, termed pipeline-chain operators; see this answer for details.
Many years after the question was first asked, let me summarize the behavior of Windows PowerShell, whose latest and final version is v5.1:
Bash's / cmd's && and || control operators have NO PowerShell equivalents, and since you cannot define custom operators in PowerShell, there are no good workarounds:
Use separate commands (on separate lines or separated with ;), and explicitly test the success status of each command via automatic $? variable, e.g.:
# Equivalent of &&
command1 -arg1 -arg2; if ($?) { command2 -arg1 }
# Equivalent of ||
command1 -arg1 -arg2; if (-not $?) { command2 -arg1 }
Better alternative for external programs: use the automatic $LASTEXITCODE variable. This is preferable because $? in Windows PowerShell can yield false negatives if a 2> redirection is involved (no longer the case in PowerShell (Core) 7.2+) - see this answer:
# Equivalent of &&
command1 -arg1 -arg2; if ($LASTEXITCODE -eq 0) { command2 -arg1 }
# Equivalent of ||
command1 -arg1 -arg2; if ($LASTEXITCODE -ne 0) { command2 -arg1 }
See below for why PowerShell's -and and -or are generally not a solution.
[Since implemented in PowerShell (Core) 7+] There was talk about adding them a while back, but it seemingly never made the top of the list.
Now that PowerShell has gone open-source, an issue has been opened on GitHub.
The tokens && and || are currently reserved for future use in PowerShell, so there's hope that the same syntax as in Bash can be implemented.
(As of PSv5.1, attempting something like 1 && 1 yields error message The token '&&' is not a valid statement separator in this version.)
Why PowerShell's -and and -or are no substitute for && and ||:
Bash's control operators && (short-circuiting logical AND) and || (short-circuiting logical OR) implicitly check the success status of commands by their exit codes, without interfering with their output streams; e.g.:
ls / nosuchfile && echo 'ok'
Whatever ls outputs -- both stdout output (the files in /) and stderr output (the error message from attempting to access non-existent file nosuchfile) -- is passed through, but && checks the (invisible) exit code of the ls command to decide if the echo command - the RHS of the && control operator - should be executed.
ls reports exit code 1 in this case, signaling failure -- because file nosuchfile doesn't exist -- so && decides that ls failed and, by applying short-circuiting, decides that the echo command need not be executed.
Note that it is exit code 0 that signals success in the world of cmd.exe and bash, whereas any nonzero exit code indicates failure.
In other words: Bash's && and || operate completely independently of the commands' output and only act on the success status of the commands.
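A minimal, self-contained illustration of this behavior, using `true` and `false` in place of real commands (runs in bash or any POSIX shell):

```shell
# && runs the RHS only if the LHS exits with 0; output streams are untouched
false && echo "not printed"   # RHS skipped: false exits with 1
true  && echo "printed"       # RHS runs: true exits with 0

# || is the mirror image: the RHS runs only on failure
false || echo "fallback"      # RHS runs
true  || echo "not printed"   # RHS skipped
```

Only `printed` and `fallback` appear; the operators inspect the exit status alone and never touch what the commands write to stdout or stderr.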
PowerShell's -and and -or, by contrast, act only on the commands' standard (success) output, consume it and then output only the Boolean result of the operation; e.g.:
(Get-ChildItem \, nosuchfile) -and 'ok'
The above:
uses and consumes the success (standard) output -- the listing of \ -- and interprets it as a Boolean; a non-empty input collection is considered $true in a Boolean context, so if there's at least one entry, the expression evaluates to $true.
However, the error information resulting from nonexistent file nosuchfile is passed through, because errors are sent to a separate stream.
Given that Get-ChildItem \, nosuchfile returns non-empty success output, the LHS evaluated to $true, so -and also evaluates the RHS, 'ok', but, again, consumes its output and interprets it as a Boolean, which, as a nonempty string, also evaluates to $true.
Thus, the overall result of the -and expression is $true, which is (the only success) output.
The net effect is:
The success output from both sides of the -and expression is consumed during evaluation and therefore effectively hidden.
The expression's only (success) output is its Boolean result, which is $true in this case (which renders as True in the terminal).
What Bash must be doing is implicitly casting the exit code of the commands to a Boolean when passed to the logical operators. PowerShell doesn't do this - but a function can be made to wrap the command and create the same behavior:
> function Get-ExitBoolean($cmd) { & $cmd | Out-Null; $? }
($? is an automatic Boolean variable that reflects whether the last command succeeded.)
Given two batch files:
#pass.cmd
exit
and
#fail.cmd
exit /b 200
...the behavior can be tested:
> if (Get-ExitBoolean .\pass.cmd) { write pass } else { write fail }
pass
> if (Get-ExitBoolean .\fail.cmd) { write pass } else { write fail }
fail
The logical operators should be evaluated the same way as in Bash. First, set an alias:
> Set-Alias geb Get-ExitBoolean
Test:
> (geb .\pass.cmd) -and (geb .\fail.cmd)
False
> (geb .\fail.cmd) -and (geb .\pass.cmd)
False
> (geb .\pass.cmd) -and (geb .\pass.cmd)
True
> (geb .\pass.cmd) -or (geb .\fail.cmd)
True
You can do something like this, where you discard the Boolean output with [void] and keep only the side effect of the assignments. In this case, if $a and $b are both empty (and therefore falsy), $result ends up assigned from $c. An assignment can be used as an expression.
$a = ''
$b = ''
$c = 'hi'
[void](
($result = $a) -or
($result = $b) -or
($result = $c))
$result
Output:
hi
We can use a try/catch/finally block instead of && in PowerShell:
try {hostname} catch {echo err} finally {ipconfig /all | findstr bios}
Related
How do I get -e / errexit to work in bash functions, so that the first failed command* within a function causes the function to return with an error code (just as -e works at top-level).
* not part of boolean expression, if/elif/while/etc etc etc
I ran the following test-script, and I expect any function containing f in its name (i.e. a false line in its source) to return error-code, but they don't if there's another command after. Even when I put the function body in a subshell with set -e specified again, it just blindly steamrolls on through the function after a command fails instead of exiting the subshell/function with the nonzero status code.
Environments / interpreters tested:
I get the same result in all of these envs/shells, which is to be expected I guess unless one was buggy.
Arch:
bash test.sh (bash 5.1)
zsh test.sh (zsh 5.8)
sh test.sh (just a symlink to bash)
Alpine:
docker run -v /test.sh:/test.sh alpine:latest sh /test.sh (BusyBox 1.34)
Ubuntu:
docker run -v /test.sh:/test.sh ubuntu:21.04 sh /test.sh
docker run -v /test.sh:/test.sh ubuntu:21.04 bash /test.sh (bash 5.1)
docker run -v /test.sh:/test.sh ubuntu:21.04 dash /test.sh (dash)
Test script
set -e
t() {
true
}
f() {
false
}
tf() {
true
false
}
ft() {
false
true
}
et() {
set -e
true
}
ef() {
set -e
false
}
etf() {
set -e
true
false
}
eft() {
set -e
false
true
}
st() {( set -e
true
)}
sf() {( set -e
false
)}
stf() {( set -e
true
false
)}
sft() {( set -e
false
true
)}
for test in t f tf ft _ et ef etf eft _ st sf stf sft; do
if [ "$test" = '_' ]; then
echo ""
elif "$test"; then
echo "$test: pass"
else
echo "$test: fail"
fi
done
Output on my machine
t: pass
f: fail
tf: fail
ft: pass
et: pass
ef: fail
etf: fail
eft: pass
st: pass
sf: fail
stf: fail
sft: pass
Desired output
Without significantly changing source in functions themselves, i.e. not adding an if/then or || return to every line in the function.
t: pass
f: fail
tf: fail
ft: fail
et: pass
ef: fail
etf: fail
eft: fail
st: pass
sf: fail
stf: fail
sft: fail
Or at least pass/fail/fail/fail in one group, so I can use that approach for writing robust functions.
With accepted solution
With the accepted solution, the first four tests give the desired result. The other two groups don't, but are irrelevant since those were my own attempts to work around the issue. The root cause is that the "don't errexit the script when if/while-condition fails" behaviour propagates into any functions called in the condition, rather than just applying to the end-result.
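The propagation described above is easy to reproduce in isolation; a minimal sketch (POSIX sh):

```shell
set -e

f() {
  false              # set -e is suppressed here: the caller is an if condition
  echo "still running"
}

# Because f is invoked as an if condition, errexit is disabled for the
# whole call, including inside the function body - so execution continues
# past the failing command and f returns the status of the final echo (0).
if f; then
  echo "f reported success"
fi
```

Both lines print, demonstrating that the failing `false` neither aborts the function nor makes it report failure.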
How to make errexit behaviour work in bash functions
Don't call the function inside an if condition.
I expect any function containing f in its name (i.e. a false line in its source) to return error-code
Weeeeeell, your expectation does not match reality. That is not how it is implemented. And the errexit flag will exit your script, not make the function return an error code.
Desired output
You can:
Download the Bash (or another shell's) sources, or write your own shell, and implement the behavior you want to have;
consider extending the behavior with something like shopt -s errexit_but_return_nonzero_in_if_contexts, or something shorter.
Run it in a separate shell.
elif bash -ec "$(declare -f "$test"); $test"; then
Write a custom loadable to "clear" the CMD_IGNORE_RETURN flag when Bash enters contexts in which set -e should be ignored. I think for that static COMMAND *currently_executing_command; needs to be made extern, and then just currently_executing_command->flags &= ~CMD_IGNORE_RETURN;.
You can do a wrapper where you temporarily disable errexit and get return status, but you have to call it outside of if:
errreturn() {
declare -g ERRRETURN
local flags=$(set +o)
set +e
(
set -e
"$@"
)
ERRRETURN=$?
eval "$flags"
}
....
echo ""
else
errreturn "$test"
if (( ERRRETURN == 0 )); then
echo "$test: pass"
else
echo "$test: fail"
fi
fi
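Put together, a runnable sketch of the wrapper applied to the tf case from the test script (bash; note the fixed "$@" in the wrapper body):

```shell
#!/bin/bash

# Run a function in a set -e subshell and record its status in ERRRETURN,
# without letting errexit kill the calling script.
errreturn() {
  declare -g ERRRETURN
  local flags=$(set +o)   # snapshot current shell options
  set +e
  (
    set -e
    "$@"                  # run the function; first failure exits the subshell
  )
  ERRRETURN=$?
  eval "$flags"           # restore the original options
}

tf() {
  true
  false
}

errreturn tf
if (( ERRRETURN == 0 )); then
  echo "tf: pass"
else
  echo "tf: fail"
fi
```

This prints `tf: fail`: the subshell's errexit aborts at the `false`, and the non-zero status lands in ERRRETURN instead of terminating the caller.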
I am running the following script using tcsh. In my while loop, I'm running a C++ program that I created and will return a different exit code depending on certain things. While it returns an exit code of 0, I want the script to increment counter and run the program again.
#!/bin/tcsh
echo "Starting the script."
set counter = 0
while ($? == 0)
@ counter++
./auto $counter
end
I have verified that my program is definitely returning with exit code = 1 after a certain point. However, the condition in the while loop keeps evaluating to true for some reason and running.
I found that if I stick the following line at the end of my loop and then replace the condition check in the while loop with this new variable, it works fine.
while ($return_code == 0)
# counter ++
./auto $counter
set return_code = $?
end
Why is it that I can't just use $? directly? Is another operation underneath the hood performed in between running my custom program and checking the loop condition that's causing $? to change value?
That is peculiar.
I've altered your example to something that I think illustrates the issue more clearly. (Note that $? is an alias for $status.)
#!/bin/tcsh -f
foreach i (1 2 3)
false
# echo false status=$status
end
echo Done status=$status
The output is
Done status=0
If I uncomment the echo command in the loop, the output is:
false status=1
false status=1
false status=1
Done status=0
(Of course the echo in the loop would break the logic anyway, because the echo command completes successfully and sets $status to zero.)
I think what's happening is that the end that terminates the loop is executed as a statement, and it sets $status ($?) to 0.
I see the same behavior with both tcsh and bsd-csh.
Saving the value of $status in another variable immediately after the command is a good workaround -- and arguably just a better way of doing it, since $status is extremely fragile, and will almost literally be clobbered if you look at it.
Note that I've added a -f option to the #! line. This prevents tcsh from sourcing your init file(s) (.cshrc or .tcshrc) and is considered good practice. (That's not the case for sh/bash/ksh/zsh, which assign a completely different meaning to -f.)
A digression: I used tcsh regularly for many years, both as my interactive login shell and for scripting. I would not have anticipated that end would set $status. This is not the first time I've had to find out how tcsh or csh behaves by trial and error and been surprised by the result. It is one of the reasons I switched to bash for interactive and scripting use. I won't tell you to do the same, but you might want to read Tom Christiansen's classic "csh.whynot".
Slightly shorter/simpler explanation:
Recall that with tcsh/csh, EACH command (including shell builtins) returns a status. Therefore $? (an alias for $status) is updated by 'if' statements, 'for' loops, assignments, ...
From a practical point of view, it's better to limit direct use of $? to an if statement immediately after the command execution:
do-something
if ( $status == 0 )
...
endif
In all other cases, capture the status in a variable, and use only that variable
do-something
set something_status = $?
if ( $something_status == 0 )
...
endif
To expand on $status: even a condition test in an if statement will modify the status, so the following repeated test on $status will never hit '$status == 5', even when do-something returns a status of 5:
do-something
if ( $status == 2 ) then
echo FOO
else if ( $status == 5 ) then
echo BAR
endif
I want to write a script with several commands and get the combination result of all them:
#!/bin/bash
command1; RET_CMD1=$(echo $?)
command2; RET_CMD2=$(echo $?)
command3; RET_CMD3=$(echo $?)
#result is error if any of them fails
#could I do something like:
RET=RET_CMD1 && RET_CMD2 && RET_CMD3  # <- this is the part that I can't remember how I did in the past
echo $RET
Thanks for your help!
I think you're just looking for this:
if ! { command1 && command2 && command3; }; then
echo "one of the commands failed"
fi
The result of the block { command1 && command2 && command3; } will be 0 (success) only if all of the commands exited successfully. The semicolon is needed if the block is all written on one line.
There is no need to save the return codes to variables, or even to refer to $?, since if works based on the return code of a command (or list of commands).
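A runnable sketch of this pattern, with `true`/`false` standing in for the real commands:

```shell
# Wrap the chained commands so the same test can be exercised twice
check() {
  if ! { "$1" && "$2" && "$3"; }; then
    echo "one of the commands failed"
  else
    echo "all commands succeeded"
  fi
}

check true true true    # all succeed -> the else branch
check true false true   # second command fails -> short-circuits, if branch
```

The second call never even runs its third command: `&&` short-circuits on the first failure, and the block's overall status is what `if !` inspects.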
So to think about this...
we want to return 0 on success... or some other positive integer if an error occurred with one of the commands.
If no error occurred with any 3, they would all return 0, which means you would also return 0 in your script. Some simple addition can resolve this.
RET=$(( RET_CMD1 + RET_CMD2 + RET_CMD3 )) # !
echo $RET
You can also replace the addition in the first line (!) with a bitwise OR:
RET=$(( RET_CMD1 | RET_CMD2 | RET_CMD3 ))
Note that addition and bitwise OR are different in nature, but either result is non-zero exactly when at least one command failed - which matches the "logical or" you seemed to want.
Disadvantages of this setup: Not being able to trace where the error occurred from the return value. Tracing errors from either 3 commands will need to rely on other error output generated. (This is just a forewarning.)
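A tiny runnable sketch of the addition approach, again with `true`/`false` standing in for the real commands:

```shell
true;  RET_CMD1=$?   # succeeds: status 0
false; RET_CMD2=$?   # fails: status 1
true;  RET_CMD3=$?   # succeeds: status 0

# Sum is 0 only if every command returned 0
RET=$(( RET_CMD1 + RET_CMD2 + RET_CMD3 ))
echo "$RET"          # prints 1
```

Capturing $? immediately after each command is essential: any intervening command (even an echo) would overwrite it.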
I have a rather complex series of commands in bash that ends up returning a meaningful exit code. Various places later in the script need to branch conditionally on whether the command set succeed or not.
Currently I am storing the exit code and testing it numerically, something like this:
long_running_command | grep -q trigger_word
status=$?
if [ $status -eq 0 ]; then
: stuff
else
: more code
if [ $status -eq 0 ]; then
: stuff
else
For some reason it feels like this should be simpler. We have a simple exit code stored, and now we are repeatedly typing out numerical test operations to run on it. For example, I can cheat and use the string output instead of the return code, which is simpler to test for:
status=$(long_running_command | grep trigger_word)
if [ $status ]; then
: stuff
else
: more code
if [ $status ]; then
: stuff
else
On the surface this looks more straightforward, but I realize it's dirty.
If the other logic wasn't so complex and I was only running this once, I realize I could embed it in place of the test operator, but this is not ideal when you need to reuse the results in other locations without re-running the test:
if long_running_command | grep -q trigger_word; then
: stuff
else
The only thing I've found so far is assigning the code as part of command substitution:
status=$(long_running_command | grep -q trigger_word; echo $?)
if [ $status -eq 0 ]; then
: stuff
else
Even this is not technically a one shot assignment (although some may argue the readability is better) but the necessary numerical test syntax still seems cumbersome to me. Maybe I'm just being OCD.
Am I missing a more elegant way to assign an exit code to a variable then branch on it later?
The simple solution:
output=$(complex_command)
status=$?
if (( status == 0 )); then
: stuff with "$output"
fi
: more code
if (( status == 0 )); then
: stuff with "$output"
fi
Or more eleganter-ish
do_complex_command () {
# side effects: global variables
# store the output in $g_output and the status in $g_status
g_output=$(
command -args | commands | grep -q trigger_word
)
g_status=$?
}
complex_command_succeeded () {
test $g_status -eq 0
}
complex_command_output () {
echo "$g_output"
}
do_complex_command
if complex_command_succeeded; then
: stuff with "$(complex_command_output)"
fi
: more code
if complex_command_succeeded; then
: stuff with "$(complex_command_output)"
fi
Or
do_complex_command () {
# side effects: global variables
# store the output in $g_output and the status in $g_status
g_output=$(
command -args | commands
)
g_status=$?
}
complex_command_output () {
echo "$g_output"
}
complex_command_contains_keyword () {
complex_command_output | grep -q "$1"
}
if complex_command_contains_keyword "trigger_word"; then
: stuff with "$(complex_command_output)"
fi
If you don't need to store the specific exit status, just whether the command succeeded or failed (e.g. whether grep found a match), I'd use a fake boolean variable to store the result:
if long_running_command | grep trigger_word; then
found_trigger=true
else
found_trigger=false
fi
# ...later...
if ! $found_trigger; then
# stuff to do if the trigger word WASN'T found
fi
#...
if $found_trigger; then
# stuff to do if the trigger WAS found
fi
Notes:
The shell doesn't really have boolean (true/false) variables. What's actually happening here is that "true" and "false" are stored as strings in the found_trigger variable; when if $found_trigger; then executes, it runs the value of $found_trigger as a command, and it just happens that the true command always succeeds and the false command always fails, thus causing "the right thing" to happen. In if ! $found_trigger; then, the "!" toggles the success/failure status, effectively acting as a boolean "not".
if long_running_command | grep trigger_word; then is equivalent to running the command, then using if [ $? -ne 0 ]; then to check its exit status. I find it a little cleaner, but you have to get used to thinking of if as checking the success/failure of a command, not just testing boolean conditions. If "active" if commands aren't intuitive to you, use a separate test instead.
As Charles Duffy pointed out in a comment, this trick executes data as a command, and if you don't have full control over that data... you don't have control over what your script is going to do. So never set a fake-boolean variable to anything other than the fixed strings "true" and "false", and be sure to set the variable before using it. If you have any nontrivial execution flow in the script, set all fake-boolean variables to sane default values (i.e. "true" or "false") before the execution flow gets complicated.
Failure to follow these rules can lead to security holes large enough to drive a freight train through.
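Following those rules, a minimal sketch of the pattern with the default set up front:

```shell
# Initialize the fake boolean to a safe, fixed value before any branching
found_trigger=false

# Stand-in for: long_running_command | grep -q trigger_word
if echo "hello trigger_word" | grep -q trigger_word; then
  found_trigger=true
fi

# Later: the variable's value ("true" or "false") runs as a command
if $found_trigger; then
  echo "trigger word found"
fi
```

Because found_trigger can only ever hold the literal strings "true" or "false", executing it as a command is safe here.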
Why don't you set flags for the stuff that needs to happen later?
cheeseballs=false
nachos=false
guppies=false
command
case $? in
42) cheeseballs=true ;;
17 | 31) cheeseballs=true; nachos=true; guppies=true;;
66) guppies=true; echo "Bingo!";;
esac
$cheeseballs && java -crash -burn
$nachos && python ./tex.py --mex
if $guppies; then
aquarium --light=blue --door=hidden --decor=squid
else
echo SRY
fi
As pointed out by @CharlesDuffy in the comments, storing an actual command in a variable is slightly dubious, and vaguely triggers Bash FAQ #50 warnings; the code reads (slightly & IMHO) more naturally like this, but you have to be really careful that you have total control over the variables at all times. If you have the slightest doubt, perhaps just use string values and compare against the expected value at each junction.
[ "$cheeseballs" = "true" ] && java -crash -burn
etc etc; or you could refactor to some other implementation structure for the booleans (an associative array of options would make sense, but isn't portable to POSIX sh; a PATH-like string is flexible, but perhaps too unstructured).
Based on the OP's clarification that it's only about success v. failure (as opposed to the specific exit codes):
long_running_command | grep -q trigger_word || failed=1
if ((!failed)); then
: stuff
else
: more code
if ((!failed)); then
: stuff
else
Sets the success-indicator variable only on failure (via ||, i.e., if a non-zero exit code is returned).
Relies on the fact that variables that aren't defined evaluate to false in an arithmetic conditional (( ... )).
Care must be taken that the variable ($failed, in this example) hasn't accidentally been initialized elsewhere.
(On a side note, as @nos has already mentioned in a comment, you need to be careful with commands involving a pipeline; from man bash (emphasis mine):
The return status of a pipeline is the exit status of the last command,
unless the pipefail option is enabled. If pipefail is enabled, the
pipeline's return status is the value of the last (rightmost) command
to exit with a non-zero status, or zero if all commands exit successfully.
To set pipefail (which is OFF by default), use set -o pipefail; to turn it back off, use set +o pipefail.)
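A quick demonstration of the difference (bash):

```shell
false | true
echo "without pipefail: $?"   # 0 - status of the last command (true)

set -o pipefail
false | true
echo "with pipefail: $?"      # 1 - rightmost non-zero status in the pipeline

set +o pipefail               # restore the default
```

Without pipefail, the failing left-hand side of the pipeline is invisible to $?; with it, the failure propagates.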
If you don't care about the exact error code, you could do:
if long_running_command | grep -q trigger_word; then
success=1
: success
else
success=0
: failure
fi
if ((success)); then
: success
else
: failure
fi
Using 0 for false and 1 for true is my preferred way of storing booleans in scripts. if ((flag)) mimics C nicely.
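For instance (bash-specific arithmetic conditional):

```shell
success=1                      # 1 = true, 0 = false, C-style

if ((success)); then           # non-zero is truthy inside (( ))
  echo "yes"
fi

success=0
if ((success)); then
  echo "yes"
else
  echo "no"                    # taken: 0 is falsy
fi
```

This prints `yes` then `no`; unlike the string-based true/false trick, nothing is ever executed as a command, so there is no injection risk.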
If you do care about the exit code, then you could do:
if long_running_command | grep -q trigger_word; then
status=0
: success
else
status=$?
: failure
fi
if ((status == 0)); then
: success
else
: failure
fi
I prefer an explicit test against 0 rather than using !, which doesn't read right.
(And yes, $? does yield the correct value here.)
Hmm, the problem is a bit vague - if possible, I suggest considering refactoring/simplifying, i.e.:
function check_your_codes {
# ... run all 'checks' and store the results in an array
}
###
function process_results {
# do your 'stuff' based on array values
}
###
create_My_array
check_your_codes
process_results
Also, unless you really need to save the exit code then there is no need to store_and_test - just test_and_do, i.e. use a case statement as suggested above or something like:
run_some_commands_and_return_EXIT_CODE_FROM_THE_LAST_ONE
if [[ $? -eq 0 ]] ; then do_stuff else do_other_stuff ; fi
:)
Dale
There's already question addressing my issue (Can I get && to work in Powershell?), but with one difference. I need an OUTPUT from both commands. See, if I just run:
(command1 -arg1 -arg2) -and (command2 -arg1)
I won't see any output, but stderr messages. And, as expected, just typing:
command1 -arg1 -arg2 -and command2 -arg1
Gives syntax error.
2019: the Powershell team are considering adding support for && to Powershell - weigh in at this GitHub PR
Try this:
$(command -arg1 -arg2 | Out-Host;$?) -and $(command2 -arg1 | Out-Host;$?)
The $() is a subexpression allowing you to specify multiple statements within including a pipeline. Then execute the command and pipe to Out-Host so you can see it. The next statement (the actual output of the subexpression) should output $? i.e. the last command's success result.
The $? works fine for native commands (console exe's) but for cmdlets it leaves something to be desired. That is, $? only seems to return $false when a cmdlet encounters a terminating error. Seems like $? needs at least three states (failed, succeeded and partially succeeded). So if you're using cmdlets, this works better:
$(command -arg1 -arg2 -ev err | Out-Host;!$err) -and
$(command -arg1 -ev err | Out-Host;!$err)
This kind of blows still. Perhaps something like this would be better:
function ExecuteUntilError([scriptblock[]]$Scriptblock)
{
foreach ($sb in $scriptblock)
{
$prevErr = $error[0]
. $sb
if ($error[0] -ne $prevErr) { break }
}
}
ExecuteUntilError {command -arg1 -arg2},{command2 -arg1}
Powershell 7 preview 5 has them.
https://devblogs.microsoft.com/powershell/powershell-7-preview-5/ This will give the output of both commands, as the question requested.
echo 'hello' && echo 'there'
hello
there
echo 'hello' || echo 'there'
hello
To simplify multistep scripts where doThis || exit 1 would be really useful, I use something like:
function ProceedOrExit {
if ($?) { echo "Proceed.." } else { echo "Script FAILED! Exiting.."; exit 1 }
}
doThis; ProceedOrExit
doNext
# or, for longer sequences
doThis
ProceedOrExit
doNext
Update: PowerShell [Core] 7.0 introduced && and || support - see this answer.
Bash's / cmd's && and || control operators have NO Windows PowerShell equivalents, and since you cannot define custom operators, there are no good workarounds.
The | Out-Host-based workaround in Keith Hill's answer is severely limited in that it can only send normal command output to the console (terminal), preventing that output from being sent on through the pipeline or captured in a variable or file.
Find background information in this answer.
The simplest solution is to use
powershell command1 && powershell command2
in a cmd shell. Of course, you can't use this in a .ps1 script, so there's that limitation.
A slightly longer way is shown below:
try {
hostname
if ($lastexitcode -eq 0) {
ipconfig /all | findstr /i bios
}
} catch {
echo err
} finally {}
With Powershell 7.0 released, && and || are supported
https://devblogs.microsoft.com/powershell/announcing-powershell-7-0/
New operators:
Ternary operator: a ? b : c
Pipeline chain operators: || and &&
Null coalescing operators: ?? and ??=