There's already a question addressing my issue (Can I get && to work in PowerShell?), but with one difference: I need the OUTPUT from both commands. If I just run:
(command1 -arg1 -arg2) -and (command2 -arg1)
I see no regular output, only stderr messages. And, as expected, just typing:
command1 -arg1 -arg2 -and command2 -arg1
gives a syntax error.
2019: the PowerShell team is considering adding support for && to PowerShell - weigh in at this GitHub PR
Try this:
$(command -arg1 -arg2 | Out-Host;$?) -and $(command2 -arg1 | Out-Host;$?)
$() is a subexpression that lets you specify multiple statements within it, including a pipeline. Execute the command and pipe it to Out-Host so you can see it; the next statement (the actual output of the subexpression) outputs $?, i.e. the last command's success result.
$? works fine for native commands (console EXEs), but for cmdlets it leaves something to be desired: $? only seems to return $false when a cmdlet encounters a terminating error. It seems $? needs at least three states (failed, succeeded, and partially succeeded). So if you're using cmdlets, this works better:
$(command -arg1 -arg2 -ev err | Out-Host;!$err) -and
$(command -arg1 -ev err | Out-Host;!$err)
This still kind of blows, though. Perhaps something like this would be better:
function ExecuteUntilError([scriptblock[]]$Scriptblock)
{
    foreach ($sb in $Scriptblock)
    {
        $prevErr = $error[0]
        . $sb
        if ($error[0] -ne $prevErr) { break }
    }
}
ExecuteUntilError {command -arg1 -arg2},{command2 -arg1}
PowerShell 7 Preview 5 has them.
https://devblogs.microsoft.com/powershell/powershell-7-preview-5/
This gives the output of both commands, as the question requested:
echo 'hello' && echo 'there'
hello
there
echo 'hello' || echo 'there'
hello
To simplify multistep scripts where doThis || exit 1 would be really useful, I use something like:
function ProceedOrExit {
    if ($?) { echo "Proceed.." } else { echo "Script FAILED! Exiting.."; exit 1 }
}
doThis; ProceedOrExit
doNext
# or, for longer sequences
doThis
ProceedOrExit
doNext
Update: PowerShell [Core] 7.0 introduced && and || support - see this answer.
Bash's / cmd's && and || control operators have NO Windows PowerShell equivalents, and since you cannot define custom operators, there are no good workarounds.
The | Out-Host-based workaround in Keith Hill's answer is severely limited in that it can only send normal command output to the console (terminal), preventing the output from being sent on through the pipeline or captured in a variable or file.
Find background information in this answer.
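If you also need to capture the output rather than just display it, one possible tweak (my addition, not from the original answers) is to insert Tee-Object ahead of Out-Host: Tee-Object stores the pipeline input in a variable and passes it through, while Out-Host still keeps that output out of the subexpression's value, so the $? test keeps working. A minimal sketch, with command1/command2 standing in for the question's commands:

$(command1 -arg1 -arg2 | Tee-Object -Variable out1 | Out-Host; $?) -and
$(command2 -arg1 | Tee-Object -Variable out2 | Out-Host; $?)

Afterwards $out1 and $out2 hold each command's success-stream output. Note this still only captures the success stream; error output goes to the console as before.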
The simplest solution is to use
powershell command1 && powershell command2
in a cmd shell. Of course, you can't use this in a .ps1 script, so there's that limitation.
A slightly longer way is shown below:
try {
    hostname
    if ($LASTEXITCODE -eq 0) {
        ipconfig /all | findstr /i bios
    }
} catch {
    echo err
} finally {}
With PowerShell 7.0 released, && and || are supported:
https://devblogs.microsoft.com/powershell/announcing-powershell-7-0/
New operators:
Ternary operator: a ? b : c
Pipeline chain operators: || and && (illustrated below)
Null coalescing operators: ?? and ??=
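For example, the new pipeline chain operators make the Bash-style one-liners from the question work directly (a quick illustration; the particular commands and paths are arbitrary stand-ins):

# PowerShell 7+: run the second command only if the first one succeeds
Get-Item C:\Windows && Write-Output 'first command succeeded'

# Run the fallback only if the first command fails
Get-Item C:\NoSuchFolder || Write-Output 'first command failed'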
Related
I want to write a script with several commands and get the combined result of all of them:
#!/bin/bash
command1; RET_CMD1=$(echo $?)
command2; RET_CMD2=$(echo $?)
command3; RET_CMD3=$(echo $?)
# result is error if any of them fails
# could I do something like:
RET=RET_CMD1 && RET_CMD2 && RET_CMD3   # <- this is the part that I can't remember how I did in the past..
echo $RET
Thanks for your help!
I think you're just looking for this:
if ! { command1 && command2 && command3; }; then
echo "one of the commands failed"
fi
The result of the block { command1 && command2 && command3; } will be 0 (success) only if all of the commands exited successfully. The semicolon is needed if the block is all written on one line.
There is no need to save the return codes to variables, or even to refer to $?, since if works based on the return code of a command (or list of commands).
So to think about this...
we want to return 0 on success... or some other positive integer if an error occurred with one of the commands.
If no error occurred with any of the 3, they would all return 0, which means you would also return 0 in your script. Some simple addition can resolve this:
RET=$(( RET_CMD1 + RET_CMD2 + RET_CMD3 ))   # !
echo $RET
You can also replace the addition in the first line (!) with the bitwise OR operator, as you mentioned:
RET=$(( RET_CMD1 | RET_CMD2 | RET_CMD3 ))
Note that addition and bitwise OR are different in nature, but you seemed to want the OR; either way, the result is 0 only when all three commands succeeded.
Disadvantage of this setup: you cannot trace which command the error came from using the return value alone; tracing errors from any of the 3 commands will have to rely on other error output they generate. (This is just a forewarning.)
I'm trying to build a Jenkins Pipeline for which a parameter is
optional:
parameters {
    string(
        name: 'foo',
        defaultValue: '',
        description: 'foo is foo'
    )
}
My purpose is calling a shell script and providing foo as argument:
stages {
    stage('something') {
        sh "some-script.sh '${params.foo}'"
    }
}
The shell script will do the Right Thing™ if the provided value is the empty
string.
Unfortunately I can't just get an empty string. If the user does not provide
a value for foo, Jenkins will set it to null, and I will get null
(as string) inside my command.
I found this related question but the only answer is not really helpful.
Any suggestion?
OP here: I realized a wrapper script can be helpful. I ironically called it junkins-cmd, and I call it like this:
stages {
    stage('something') {
        sh "junkins-cmd some-script.sh '${params.foo}'"
    }
}
Code:
#!/bin/bash
helpme() {
    cat <<EOF
Usage: $0 <command> [parameters to command]
This command is a wrapper for jenkins pipeline. It tries to overcome jenkins
idiotic behaviour when calling programs without polluting the remaining part
of the toolkit.
The given command is executed with the fixed version of the given
parameters. Current fixes:
- 'null' is replaced with ''
EOF
} >&2

trap helpme EXIT
command="${1:?Missing command}"; shift
trap - EXIT

typeset -a params
for p in "$@"; do
    # Jenkins pipeline uses 'null' when the parameter is undefined.
    [[ "$p" = 'null' ]] && p=''
    params+=("$p")
done
exec "$command" "${params[@]}"
Beware: params+=("$p") seems not to be portable among shells; hence this ugly script runs under #!/bin/bash.
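If portability beyond bash ever matters, a POSIX-sh variant (my sketch, not part of the original script) can rebuild the positional parameters instead of using an array:

#!/bin/sh
command="${1:?Missing command}"; shift
# The 'for' word list is expanded once up front, so rewriting "$@" inside is safe.
for p in "$@"; do
    shift
    [ "$p" = 'null' ] && p=''
    set -- "$@" "$p"   # re-append the fixed parameter at the end
done
exec "$command" "$@"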
I'm trying to implement a small bash script in AIX, but I'm having some problems. Below you can find an example. I also have another question: if I want to add the script to crontab, I think I'll have problems calling IBM's serverStatus.sh. How can I avoid this problem?
#!/usr/bin/sh
WAS_HOME="/usr/IBM/WebSphere/AppServer/profiles/bpmnprd01/"
function StatusCheck()
{
    $WAS_HOME/bin/serverStatus.sh BPM.AppTarget.bpmnprd01.0 -username admin -password admin
    status=$(cat /usr/IBM/WebSphere/AppServer/profiles/bpmnprd01/logs/BPM.AppTarget.xxxxx/serverStatus.log | awk '{ if (NF > 0) { last = $NF } } END { print last }' "$@")
    text="STOPPED"
    if [[ $text == $status ]]
    then
        echo "OK"
    else
        echo "NOK"
    fi
}
function start()
{
    StatusCheck
}
start
-----------------------
When I try to execute the script above, I get the following error:
[root@bpmnprd01]/root/health_check# ./servers_check.sh
./servers_check.sh[7]: 0403-057 Syntax error at line 7 : `(' is not expected.
After this I searched on Google and found some examples without "()" on the subroutine, but then I got this:
[root@bpmnprd01]/root/health_check# ./servers_check.sh
./servers_check.sh[30]: 0403-057 Syntax error at line 33 : `StatusCheck' is not expected.
Thanks in Advance
Tiago
AIX has a true Bourne shell living in /bin/sh. Not sure about /usr/bin/sh, but I would expect that to be a Bourne shell as well.
Change your script's heading line (the shebang) to
#!/usr/bin/bash
or to the result of which bash.
IHTH
You are using bash-specific syntax but calling the script with sh, which has more limited capabilities. Since you want to use sh, you can use a tool like checkbashisms or shellcheck to help uncover non-portable syntax.
The immediate problem is that function foo() { ..; } is not a POSIX-compliant function definition; you should drop the keyword function and use just foo() { ..; }.
Your shell may also be lacking [[ ]], in which case you should use [ ] instead, with = instead of ==. A corrected sketch is shown below.
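Applied to the script in the question, a minimal POSIX-compliant version of the check could look like this (a sketch; paths, server name, and credentials are kept verbatim from the question):

#!/bin/sh
WAS_HOME="/usr/IBM/WebSphere/AppServer/profiles/bpmnprd01"

# POSIX function definition: no 'function' keyword, no '[[ ]]'
StatusCheck() {
    "$WAS_HOME"/bin/serverStatus.sh BPM.AppTarget.bpmnprd01.0 -username admin -password admin
    # print the last field of the last non-empty line of the status log
    status=$(awk '{ if (NF > 0) { last = $NF } } END { print last }' \
        "$WAS_HOME"/logs/BPM.AppTarget.xxxxx/serverStatus.log)
    if [ "$status" = "STOPPED" ]; then
        echo "OK"
    else
        echo "NOK"
    fi
}

StatusCheck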
I have a rather complex series of commands in bash that ends up returning a meaningful exit code. Various places later in the script need to branch conditionally on whether the command set succeeded or not.
Currently I am storing the exit code and testing it numerically, something like this:
long_running_command | grep -q trigger_word
status=$?
if [ $status -eq 0 ]; then
    : stuff
else
    : more code
fi
if [ $status -eq 0 ]; then
    : stuff
else
    : ...
fi
For some reason it feels like this should be simpler. We have a simple exit code stored, and now we are repeatedly typing out numerical test operations to run on it. For example, I can cheat and use the string output instead of the return code, which is simpler to test for:
status=$(long_running_command | grep trigger_word)
if [ $status ]; then
    : stuff
else
    : more code
fi
if [ $status ]; then
    : stuff
else
    : ...
fi
On the surface this looks more straightforward, but I realize it's dirty.
If the other logic weren't so complex and I were only running this once, I could embed the command in place of the test operator, but this is not ideal when you need to reuse the result in other locations without re-running the command:
if long_running_command | grep -q trigger_word; then
    : stuff
else
    : ...
fi
The only thing I've found so far is assigning the code as part of command substitution:
status=$(long_running_command | grep -q trigger_word; echo $?)
if [ $status -eq 0 ]; then
    : stuff
else
    : ...
fi
Even this is not technically a one-shot assignment (although some may argue the readability is better), and the necessary numerical test syntax still seems cumbersome to me. Maybe I'm just being OCD.
Am I missing a more elegant way to assign an exit code to a variable then branch on it later?
The simple solution:
output=$(complex_command)
status=$?
if (( status == 0 )); then
    : stuff with "$output"
fi
: more code
if (( status == 0 )); then
    : stuff with "$output"
fi
Or, more eleganter-ish:
do_complex_command () {
    # side effects: global variables
    # store the output in $g_output and the status in $g_status
    g_output=$(command -args | commands | grep -q trigger_word)
    g_status=$?
}
complex_command_succeeded () {
    test $g_status -eq 0
}
complex_command_output () {
    echo "$g_output"
}

do_complex_command
if complex_command_succeeded; then
    : stuff with "$(complex_command_output)"
fi
: more code
if complex_command_succeeded; then
    : stuff with "$(complex_command_output)"
fi
Or
do_complex_command () {
    # side effects: global variables
    # store the output in $g_output and the status in $g_status
    g_output=$(command -args | commands)
    g_status=$?
}
complex_command_output () {
    echo "$g_output"
}
complex_command_contains_keyword () {
    complex_command_output | grep -q "$1"
}

if complex_command_contains_keyword "trigger_word"; then
    : stuff with "$(complex_command_output)"
fi
If you don't need to store the specific exit status, just whether the command succeeded or failed (e.g. whether grep found a match), I'd use a fake Boolean variable to store the result:
if long_running_command | grep trigger_word; then
    found_trigger=true
else
    found_trigger=false
fi

# ...later...
if ! $found_trigger; then
    # stuff to do if the trigger word WASN'T found
fi

#...
if $found_trigger; then
    # stuff to do if the trigger WAS found
fi
Notes:
The shell doesn't really have boolean (true/false) variables. What's actually happening here is that "true" and "false" are stored as strings in the found_trigger variable; when if $found_trigger; then executes, it runs the value of $found_trigger as a command, and it just happens that the true command always succeeds and the false command always fails, thus causing "the right thing" to happen. In if ! $found_trigger; then, the "!" toggles the success/failure status, effectively acting as a boolean "not".
if long_running_command | grep trigger_word; then is equivalent to running the command, then using if [ $? -eq 0 ]; then to check its exit status. I find it a little cleaner, but you have to get used to thinking of if as checking the success/failure of a command, not just testing Boolean conditions. If "active" if commands aren't intuitive to you, use a separate test instead.
As Charles Duffy pointed out in a comment, this trick executes data as a command, and if you don't have full control over that data... you don't have control over what your script is going to do. So never set a fake-boolean variable to anything other than the fixed strings "true" and "false", and be sure to set the variable before using it. If you have any nontrivial execution flow in the script, set all fake-boolean variables to sane default values (i.e. "true" or "false") before the execution flow gets complicated.
Failure to follow these rules can lead to security holes large enough to drive a freight train through.
Why don't you set flags for the stuff that needs to happen later?
cheeseballs=false
nachos=false
guppies=false

command
case $? in
    42) cheeseballs=true ;;
    17 | 31) cheeseballs=true; nachos=true; guppies=true ;;
    66) guppies=true; echo "Bingo!" ;;
esac

$cheeseballs && java -crash -burn
$nachos && python ./tex.py --mex
if $guppies; then
    aquarium --light=blue --door=hidden --decor=squid
else
    echo SRY
fi
As pointed out by @CharlesDuffy in the comments, storing an actual command in a variable is slightly dubious, and vaguely triggers Bash FAQ #50 warnings; the code reads (slightly & IMHO) more naturally like this, but you have to be really careful that you have total control over the variables at all times. If you have the slightest doubt, perhaps just use string values and compare against the expected value at each junction.
[ "$cheeseballs" = "true" ] && java -crash -burn
etc. etc.; or you could refactor to some other implementation structure for the booleans (an associative array of options would make sense, but isn't portable to POSIX sh - see the sketch below; a PATH-like string is flexible, but perhaps too unstructured).
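For illustration, the associative-array option could look like this in bash 4+ (my sketch; as noted, not POSIX-portable):

declare -A flag=([cheeseballs]=false [nachos]=false [guppies]=false)

command
case $? in
    42) flag[cheeseballs]=true ;;
esac

# compare strings instead of executing the variable as a command
[ "${flag[cheeseballs]}" = "true" ] && java -crash -burn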
Based on the OP's clarification that it's only about success v. failure (as opposed to the specific exit codes):
long_running_command | grep -q trigger_word || failed=1

if ((!failed)); then
    : stuff
else
    : more code
fi
if ((!failed)); then
    : stuff
else
    : ...
fi
Sets the failure-indicator variable only on failure (via ||, i.e., if a nonzero exit code is returned).
Relies on the fact that variables that aren't defined evaluate to false in an arithmetic conditional (( ... )).
Care must be taken that the variable ($failed, in this example) hasn't accidentally been initialized elsewhere; the defensive variant below guards against that.
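A defensive variant (my addition, not part of the original answer) resets the flag explicitly before the command runs:

failed=0   # explicit reset guards against an earlier assignment
long_running_command | grep -q trigger_word || failed=1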
(On a side note, as @nos has already mentioned in a comment, you need to be careful with commands involving a pipeline; from man bash (emphasis mine):
The return status of a pipeline is the exit status of the last command,
unless the pipefail option is enabled. If pipefail is enabled, the
pipeline's return status is the value of the last (rightmost) command
to exit with a non-zero status, or zero if all commands exit successfully.
To set pipefail (which is OFF by default), use set -o pipefail; to turn it back off, use set +o pipefail.)
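A quick demonstration of the difference, using false and cat as harmless stand-ins:

false | cat
echo $?     # -> 0: by default the pipeline reports cat's (successful) status

set -o pipefail
false | cat
echo $?     # -> 1: with pipefail, the failing first command is reflected
set +o pipefail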
If you don't care about the exact error code, you could do:
if long_running_command | grep -q trigger_word; then
    success=1
    : success
else
    success=0
    : failure
fi

if ((success)); then
    : success
else
    : failure
fi
Using 0 for false and 1 for true is my preferred way of storing booleans in scripts. if ((flag)) mimics C nicely.
If you do care about the exit code, then you could do:
if long_running_command | grep -q trigger_word; then
    status=0
    : success
else
    status=$?
    : failure
fi

if ((status == 0)); then
    : success
else
    : failure
fi
I prefer an explicit test against 0 rather than using !, which doesn't read right.
(And yes, $? does yield the correct value here.)
Hmm, the problem is a bit vague - if possible, I suggest considering refactoring/simplifying, i.e.:
function check_your_codes {
    # ... run all 'checks' and store the results in an array
}
###
function process_results {
    # do your 'stuff' based on array values
}
###
create_My_array
check_your_codes
process_results
Also, unless you really need to save the exit code, there is no need to store-and-test - just test-and-do, i.e. use a case statement as suggested above, or something like:
run_some_commands_and_return_EXIT_CODE_FROM_THE_LAST_ONE
if [[ $? -eq 0 ]] ; then do_stuff else do_other_stuff ; fi
:)
Dale
In Bash I can easily do something like
command1 && command2 || command3
which means to run command1 and if command1 succeeds to run command2 and if command1 fails to run command3.
What's the equivalent in PowerShell?
Update: && and || have finally come to PowerShell (Core), namely in v7, termed pipeline-chain operators; see this answer for details.
Many years after the question was first asked, let me summarize the behavior of Windows PowerShell, whose latest and final version is v5.1:
Bash's / cmd's && and || control operators have NO PowerShell equivalents, and since you cannot define custom operators in PowerShell, there are no good workarounds:
Use separate commands (on separate lines or separated with ;), and explicitly test the success status of each command via automatic $? variable, e.g.:
# Equivalent of &&
command1 -arg1 -arg2; if ($?) { command2 -arg1 }
# Equivalent of ||
command1 -arg1 -arg2; if (-not $?) { command2 -arg1 }
Better alternative for external programs, using the automatic $LASTEXITCODE variable; this is preferable, because $? in Windows PowerShell can yield false negatives if a 2> redirection is involved (no longer the case in PowerShell (Core) 7.2+) - see this answer:
# Equivalent of &&
command1 -arg1 -arg2; if ($LASTEXITCODE -eq 0) { command2 -arg1 }
# Equivalent of ||
command1 -arg1 -arg2; if ($LASTEXITCODE -ne 0) { command2 -arg1 }
See below for why PowerShell's -and and -or are generally not a solution.
[Since implemented in PowerShell (Core) 7+] There was talk about adding them a while back, but it seemingly never made the top of the list.
Now that PowerShell has gone open-source, an issue has been opened on GitHub.
The tokens && and || are currently reserved for future use in PowerShell, so there's hope that the same syntax as in Bash can be implemented.
(As of PSv5.1, attempting something like 1 && 1 yields error message The token '&&' is not a valid statement separator in this version.)
Why PowerShell's -and and -or are no substitute for && and ||:
Bash's control operators && (short-circuiting logical AND) and || (short-circuiting logical OR) implicitly check the success status of commands by their exit codes, without interfering with their output streams; e.g.:
ls / nosuchfile && echo 'ok'
Whatever ls outputs -- both stdout output (the files in /) and stderr output (the error message from attempting to access non-existent file nosuchfile) -- is passed through, but && checks the (invisible) exit code of the ls command to decide if the echo command - the RHS of the && control operator - should be executed.
ls reports exit code 1 in this case, signaling failure -- because file nosuchfile doesn't exist -- so && decides that ls failed and, by applying short-circuiting, decides that the echo command need not be executed.
Note that it is exit code 0 that signals success in the world of cmd.exe and bash, whereas any nonzero exit code indicates failure.
In other words: Bash's && and || operate completely independently of the commands' output and only act on the success status of the commands.
PowerShell's -and and -or, by contrast, act only on the commands' standard (success) output, consume it and then output only the Boolean result of the operation; e.g.:
(Get-ChildItem \, nosuchfile) -and 'ok'
The above:
uses and consumes the success (standard) output -- the listing of \ -- and interprets it as a Boolean; a non-empty input collection is considered $true in a Boolean context, so if there's at least one entry, the expression evaluates to $true.
However, the error information resulting from nonexistent file nosuchfile is passed through, because errors are sent to a separate stream.
Given that Get-ChildItem \, nosuchfile returns non-empty success output, the LHS evaluated to $true, so -and also evaluates the RHS, 'ok', but, again, consumes its output and interprets it as a Boolean, which, as a nonempty string, also evaluates to $true.
Thus, the overall result of the -and expression is $true, which is (the only success) output.
The net effect is:
The success output from both sides of the -and expression is consumed during evaluation and therefore effectively hidden.
The expression's only (success) output is its Boolean result, which is $true in this case (which renders as True in the terminal).
What Bash must be doing is implicitly casting the exit code of the commands to a Boolean when passed to the logical operators. PowerShell doesn't do this - but a function can be made to wrap the command and create the same behavior:
> function Get-ExitBoolean($cmd) { & $cmd | Out-Null; $? }
($? is a Boolean reflecting whether the last command exited successfully)
Given two batch files:
pass.cmd:
exit
and fail.cmd:
exit /b 200
...the behavior can be tested:
> if (Get-ExitBoolean .\pass.cmd) { write pass } else { write fail }
pass
> if (Get-ExitBoolean .\fail.cmd) { write pass } else { write fail }
fail
The logical operators should be evaluated the same way as in Bash. First, set an alias:
> Set-Alias geb Get-ExitBoolean
Test:
> (geb .\pass.cmd) -and (geb .\fail.cmd)
False
> (geb .\fail.cmd) -and (geb .\pass.cmd)
False
> (geb .\pass.cmd) -and (geb .\pass.cmd)
True
> (geb .\pass.cmd) -or (geb .\fail.cmd)
True
You can do something like this, where you hide the Boolean output with [void] and only get the side effect. In this case, if $a and $b are both empty (and therefore falsy), then $c gets assigned to $result. An assignment can be an expression.
$a = ''
$b = ''
$c = 'hi'
[void](
    ($result = $a) -or
    ($result = $b) -or
    ($result = $c))
$result
output
hi
We can use try/catch/finally instead of && in PowerShell (note that a finally block runs whether or not the try block succeeds):
try {hostname} catch {echo err} finally {ipconfig /all | findstr bios}