How to run multiple commands on success - Windows

In bash & CMD you can do rm not-exists && ls to string together multiple commands, each running only if the previous command succeeded.
In PowerShell you can do rm not-exists; ls, but ls will always run, even when rm fails.
How do I easily replicate (in one line) the behaviour that bash & CMD give me?

Most errors in PowerShell are "non-terminating" by default, that is, they do not cause your script to cease execution when they are encountered. That's why ls is executed even after an error in the rm command.
You can change this behavior in a couple of ways, though. You can change it globally via the $ErrorActionPreference variable (e.g. $ErrorActionPreference = 'Stop'), or change it only for a particular command by setting the -ErrorAction parameter, which is common to all cmdlets. This is the approach that makes the most sense for you.
# setting ErrorAction to Stop will cause all errors to be "Terminating"
# i.e. execution will halt if an error is encountered
rm 'not-exists' -ErrorAction Stop; ls
Or, using some common shorthand
rm 'not-exists' -ea 1; ls
The -ErrorAction parameter is explained in the help: type Get-Help about_CommonParameters

To check whether the previous command succeeded in PowerShell you can use $?.
For example, the following will try to remove not-exists and, only if that succeeds, run ls.
rm not-exists; if($?){ ls }
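If you need to gate more than two commands, the same $? check can simply be nested (a sketch; mkdir tmp is just a placeholder second step):
rm not-exists; if($?){ mkdir tmp; if($?){ ls } }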

Related

How to run bash commands from pwsh?

We have a PowerShell script which is 99% cross-platform, but occasionally we need an IF LINUX THEN branch because of how different Windows and Linux service management is.
We would like to run the kill command from bash, but kill is an alias of PowerShell's Stop-Process.
How do we run native Linux commands like ps, kill and ls from PowerShell?
Note that sh ps or bash ps do not work:
PS > bash ps
/usr/bin/ps: /usr/bin/ps: cannot execute binary file
Assuming that running bash from pwsh actually starts bash, you would want bash -c "ps". Normally the argument to bash is a script that it tries to execute, hence the error "cannot execute binary file": ps is not a bash script but an executable binary. With -c, bash instead runs arbitrary shell code provided as a command-line argument, which can of course run programs like ps.
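So, assuming bash is on the PATH of your pwsh session, the calls would look like this (123 is a placeholder PID):
PS > bash -c "ps aux"
PS > bash -c "kill 123"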
If you know the location of the command you can just run it:
/usr/bin/kill
Usage:
kill [options] <pid|name>...
...
Otherwise, which can find it, and Invoke-Expression (iex) can run it:
which kill | iex
This can get tricky, since which could return multiple lines; you then have to pick one (here, simply the first). You also need to somehow add parameters (e.g. a PID like 123) to your command:
which kill | select -first 1 | % {iex "$_ 123"}
kill: sending signal to 123 failed: No such process
I had lots of trouble running ant -version from pwsh, but this works:
Invoke-Expression "/bin/bash ant -version"
Cross-Platform Function
function RunCommand($Command) {
    if ($env:OS -eq 'Windows_NT') {
        # on Windows, hand the command line to cmd.exe
        CMD /c $Command
    } else {
        # elsewhere, hand it to bash
        Invoke-Expression "/bin/bash $Command"
    }
}
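A hypothetical call, quoting the whole command line as a single string (which is what both branches expect):
RunCommand "ant -version"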

Proper syntax for bash script line

Writing a script to retrieve various environment parameters back from a list of servers. My script returns no value when run, but the same command returns the desired value outside of a script.
I have tried using a couple of variations to retrieve the same data. One of the commands fails because of restrictions placed on the accounts I have access to. The second command works but only if executed in an elevated mode.
This fails with access denied (pwdx is restricted)
dzdo pgrep -f /some/path | xargs pwdx
This works outside of a script but returns no value within a script
dzdo /bin/readlink -e /proc/"$(pgrep -f /some/path)"/cwd
When using "bash -x" to execute my scriipt, I see the "readlink" code is blank.
Ideally, I would like to return the PID and path of the process running as the "pgrep" command does. I can work with the path alone as returned by the "readlink" version returns. The end goal is to gather the information from several servers for audit purposes. (version, etc.)
Am I using the wrong syntax for the "readlink" command? I'm fairly new to coding bash scripts so I appreciate any guidance to help understand when to to what if I'm using a command in a script vs command line.
If pwdx is the restricted program, you need to run that with dzdo, not pgrep.
pgrep -f /some/path | dzdo xargs pwdx
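If you also want the PID along with the path (the question's stated end goal), one option is to loop over pgrep's output (a sketch; it assumes dzdo lets you run pwdx):
for pid in $(pgrep -f /some/path); do
    dzdo pwdx "$pid"    # prints "PID: /current/working/directory"
done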

Check if possible to run command as sudo in Bourne shell?

I'm writing a Bourne shell deployment script, which runs some commands as root and some as the current user. I don't want to run all commands as root, and I want to check upfront whether the commands I'll need are available to root (to prevent aborted, half-done deployments).
In order to do this, I want to make a function that checks if a command can be run as root. My idea was to do this:
sudo_command() {
    sudo sh -c 'type "$1"'
}
And then to use it like so:
required_sudo_commands="cp rm apt"
for command in $required_sudo_commands; do
    sudo_command "$command" || (
        echo "missing required command: $command";
        exit 1;
    )
done
As you might guess from my question: it doesn't work. Does anyone see what I'm doing wrong here?
I tried running the command inside sudo_command by itself, but that miraculously (to me) did work. But when I put the command into a separate file, it didn't work.
There are two immediate problems:
The $1 not expanding in single quotes.
You can semi-fix this by expanding it in double quotes instead: sudo sh -c "type '$1'"
Your command not exiting. That's easily fixed by replacing your || (..) with || {..}.
(..) creates a subshell that limits the scope of everything inside it, including exit. To group commands, use {..} instead.
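Putting the two fixes together, a corrected version of the question's snippet might look like this (a sketch, still subject to the caveat below):
sudo_command() {
    sudo sh -c "type '$1'"
}

required_sudo_commands="cp rm apt"
for command in $required_sudo_commands; do
    sudo_command "$command" || {
        echo "missing required command: $command"
        exit 1
    }
done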
However, there is also the fundamental problem of trying to use sh -c 'type "$1"' to do anything.
One of the major points of sudo is the ability to limit what a user can and can't do. You're assuming that a user has complete, unrestricted access to run arbitrary commands as root, and that any problems are due to root not having these commands available.
That may be a valid assumption for you, but you may want to instead run e.g. sudo apt --version to get a better (but still incomplete) picture of whether you're allowed and able to run apt with sudo without requiring complete and unrestricted access.
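For example, a per-command probe along those lines might look like this (a sketch; it assumes each required command supports a GNU-style --version flag, which cp, rm and apt do):
for command in $required_sudo_commands; do
    sudo "$command" --version > /dev/null 2>&1 || {
        echo "cannot run required command as root: $command"
        exit 1
    }
done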

$LastExitCode=0, but $?=False in PowerShell. Redirecting stderr to stdout gives NativeCommandError

Why does PowerShell show the surprising behaviour in the second example below?
First, an example of sane behaviour:
PS C:\> & cmd /c "echo Hello from standard error 1>&2"; echo "`$LastExitCode=$LastExitCode and `$?=$?"
Hello from standard error
$LastExitCode=0 and $?=True
No surprises. I print a message to standard error (using cmd's echo) and inspect the variables $? and $LastExitCode. They equal True and 0 respectively, as expected.
However, if I ask PowerShell to redirect standard error to standard output on the first command, I get a NativeCommandError:
PS C:\> & cmd /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
cmd.exe : Hello from standard error
At line:1 char:4
+ cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
+ CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
$LastExitCode=0 and $?=False
My first question, why the NativeCommandError?
Secondly, why is $? False when cmd ran successfully and $LastExitCode is 0? PowerShell's documentation about automatic variables doesn't explicitly define $?. I always supposed it is True if and only if $LastExitCode is 0, but my example contradicts that.
Here's how I came across this behaviour in the real world (simplified). It really is FUBAR. I was calling one PowerShell script from another. The inner script:
cmd /c "echo Hello from standard error 1>&2"
if (! $?)
{
echo "Job failed. Sending email.."
exit 1
}
# Do something else
Running this simply as .\job.ps1, it works fine and no email is sent. However, I was calling it from another PowerShell script, logging to a file: .\job.ps1 2>&1 > log.txt. In this case, an email is sent! What you do outside the script with the error stream affects the internal behaviour of the script. Observing a phenomenon changes the outcome. This feels like quantum physics rather than scripting!
[Interestingly: .\job.ps1 2>&1 may or may not blow up, depending on where you run it]
(I am using PowerShell v2.)
The '$?' variable is documented in about_Automatic_Variables:
$?
Contains the execution status of the last operation
This is referring to the most recent PowerShell operation, as opposed to the last external command, which is what you get in $LastExitCode.
In your example, $LastExitCode is 0, because the last external command was cmd, which was successful in echoing some text. But the 2>&1 causes messages to stderr to be converted to error records in the output stream, which tells PowerShell that there was an error during the last operation, causing $? to be False.
To illustrate this a bit more, consider this:
> java -jar foo; $?; $LastExitCode
Unable to access jarfile foo
False
1
$LastExitCode is 1, because that was the exit code of java.exe. $? is False, because the very last thing the shell did failed.
But if all I do is switch them around:
> java -jar foo; $LastExitCode; $?
Unable to access jarfile foo
1
True
... then $? is True, because the last thing the shell did was print $LastExitCode to the host, which was successful.
Finally:
> &{ java -jar foo }; $?; $LastExitCode
Unable to access jarfile foo
True
1
...which seems a bit counter-intuitive, but $? is True now, because the execution of the script block was successful, even if the command run inside of it was not.
Returning to the 2>&1 redirect... it causes an error record to go into the output stream, which is what produces that long-winded blob about the NativeCommandError. The shell is dumping the whole error record.
This can be especially annoying when all you want to do is pipe stderr and stdout together so they can be combined in a log file or something. Who wants PowerShell butting in to their log file??? If I do ant build 2>&1 >build.log, then any errors that go to stderr have PowerShell's nosey $0.02 tacked on, instead of getting clean error messages in my log file.
But, the output stream is not a text stream! Redirects are just another syntax for the object pipeline. The error records are objects, so all you have to do is convert the objects on that stream to strings before redirecting:
From:
> cmd /c "echo Hello from standard error 1>&2" 2>&1
cmd.exe : Hello from standard error
At line:1 char:4
+ cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1
+ CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
To:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" }
Hello from standard error
...and with a redirect to a file:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" } | tee out.txt
Hello from standard error
...or just:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" } >out.txt
This bug is an unforeseen consequence of PowerShell's prescriptive design for error handling, so most likely it will never be fixed. If your script plays only with other PowerShell scripts, you're safe. However if your script interacts with applications from the big wide world, this bug may bite.
PS> nslookup microsoft.com 2>&1 ; echo $?
False
Gotcha! Still, after some painful head-scratching, you'll never forget the lesson.
Use ($LastExitCode -eq 0) instead of $?
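A minimal sketch of that check, reusing the nslookup example above (the success message is just a placeholder):
PS> nslookup microsoft.com 2>&1 ; if ($LastExitCode -eq 0) { echo "lookup succeeded" }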
(Note: This is mostly speculation; I rarely use many native commands in PowerShell and others probably know more about PowerShell internals than me)
I guess you found a discrepancy in the PowerShell console host.
If PowerShell picks up stuff on the standard error stream it will assume an error and throw a NativeCommandError.
PowerShell can only pick this up if it monitors the standard error stream.
The PowerShell ISE has to monitor it, because the ISE is not a console application, so a native console command has no console to write to. This is why this fails in the PowerShell ISE regardless of the 2>&1 redirection operator.
The console host will monitor the standard error stream if you use the 2>&1 redirection operator because output on the standard error stream has to be redirected and thus read.
My guess here is that the console PowerShell host is lazy and just hands native console commands the console if it doesn't need to do any processing on their output.
I would really believe this to be a bug, because PowerShell behaves differently depending on the host application.
Update: The problems have been fixed in v7.2 - see this answer.
A summary of the problems as of v7.1:
The PowerShell engine still has bugs with respect to 2> redirections applied to external-program calls:
The root cause is that using 2> causes the stderr (standard error) output to be routed via PowerShell's error stream (see about_Redirection), which has the following undesired consequences:
If $ErrorActionPreference = 'Stop' happens to be in effect, using 2> unexpectedly triggers a script-terminating error, i.e. aborts the script (even in the form 2>$null, where the intent is clearly to ignore stderr lines). See GitHub issue #4002.
Workaround: (Temporarily) set $ErrorActionPreference = 'Continue'
Since 2> currently touches the error stream, the automatic success-status variable $? is invariably set to $False if at least one stderr line was emitted, and then no longer reflects the true success status of the command. See this GitHub issue.
Workaround, as recommended in your answer: only ever use $LASTEXITCODE -eq 0 to test for success after calls to external programs.
With 2>, stderr lines are unexpectedly recorded in the automatic $Error variable (the variable that keeps a log of all errors that occurred in the session) - even if you use 2>$null. See this GitHub issue.
Workaround: Short of keeping track of how many error records were added and removing them with $Error.RemoveAt() one by one, there is none.
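A sketch of that bookkeeping, reusing the cmd example from the question (it only works if you know nothing else wrote to $Error in between; $Error[0] is the most recent record, so removing at index 0 drops the newly added ones):
$before = $Error.Count
cmd /c "echo Hello from standard error 1>&2" 2>$null
while ($Error.Count -gt $before) { $Error.RemoveAt(0) }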
Generally, unfortunately, some PowerShell hosts by default route stderr output from external programs via PowerShell's error stream, i.e. treat it as error output, which is inappropriate, because many external programs use stderr also for status information, or more generally, for anything that is not data (git being a prime example): Not every stderr line can be assumed to represent an error, and the presence of stderr output does not imply failure.
Affected hosts:
The obsolescent Windows PowerShell ISE and possibly other, older GUI-based IDEs other than Visual Studio Code.
When executing external programs via PowerShell remoting or in a background job (these two invocation mechanisms share the same infrastructure and use the ServerRemoteHost host that ships with PowerShell).
Hosts that DO behave as expected in non-remoting, non-background invocations (they pass stderr lines through to the display and print them normally):
Terminals (consoles), including Windows Terminal.
Visual Studio Code with the PowerShell extension; this cross-platform editor (IDE) is meant to supersede the Windows PowerShell ISE.
This inconsistency across hosts is discussed in this GitHub issue.
For me it was an issue with ErrorActionPreference.
When running from the ISE I had set $ErrorActionPreference = "Stop" in the first lines, and that was intercepting everything, even with *>&1 added as a parameter to the call.
So first I had this line:
& $exe $parameters *>&1
Which, as I said, didn't work because I had $ErrorActionPreference = "Stop" earlier in the file (it can also be set globally in the profile of the user launching the script).
So I tried wrapping it in Invoke-Expression to force the ErrorAction:
Invoke-Expression -Command "& `"$exe`" $parameters *>&1" -ErrorAction Continue
And this doesn't work either.
So I had to fall back to a hack that temporarily overrides ErrorActionPreference:
$old_error_action_preference = $ErrorActionPreference
try
{
    $ErrorActionPreference = "Continue"
    & $exe $parameters *>&1
}
finally
{
    $ErrorActionPreference = $old_error_action_preference
}
Which is working for me.
And I've wrapped that into a function:
<#
.SYNOPSIS
Executes a native executable, optionally in a specified working directory,
and optionally overriding the global $ErrorActionPreference.
#>
function Start-NativeExecutable
{
    [CmdletBinding(SupportsShouldProcess = $true)]
    Param
    (
        [Parameter (Mandatory = $true, Position = 0, ValueFromPipelinebyPropertyName=$True)]
        [ValidateNotNullOrEmpty()]
        [string] $Path,

        [Parameter (Mandatory = $false, Position = 1, ValueFromPipelinebyPropertyName=$True)]
        [string] $Parameters,

        [Parameter (Mandatory = $false, Position = 2, ValueFromPipelinebyPropertyName=$True)]
        [string] $WorkingDirectory,

        [Parameter (Mandatory = $false, Position = 3, ValueFromPipelinebyPropertyName=$True)]
        [string] $GlobalErrorActionPreference,

        [Parameter (Mandatory = $false, Position = 4, ValueFromPipelinebyPropertyName=$True)]
        [switch] $RedirectAllOutput
    )

    if ($WorkingDirectory)
    {
        $old_work_dir = Resolve-Path .
        cd $WorkingDirectory
    }

    if ($GlobalErrorActionPreference)
    {
        $old_error_action_preference = $ErrorActionPreference
        $ErrorActionPreference = $GlobalErrorActionPreference
    }

    try
    {
        Write-Verbose "& $Path $Parameters"
        if ($RedirectAllOutput)
        { & $Path $Parameters *>&1 }
        else
        { & $Path $Parameters }
    }
    finally
    {
        if ($WorkingDirectory)
        { cd $old_work_dir }
        if ($GlobalErrorActionPreference)
        { $ErrorActionPreference = $old_error_action_preference }
    }
}
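A hypothetical invocation (the ant path is a placeholder; adjust for your system):
Start-NativeExecutable -Path '/usr/bin/ant' -Parameters '-version' -GlobalErrorActionPreference 'Continue' -RedirectAllOutput -Verbose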

bash script + rsync: bash won't sync to host?

I've only been writing actual .sh scripts since sometime this morning, and I'm a bit stuck. I'm trying to write a script to check to see if a process is running, and to start it if it isn't. (I plan to run this script once every 10 to 15 minutes with cron.)
Here's what I have so far:
#!/bin/bash
APPCHK=$(ps aux | grep -c "/usr/bin/rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv#192.168.0.140:~/webapps/pavlick_container/public/images
")
RUNSYNC=$(rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv#192.168.0.140:~/webapps/pavlick_container/public/images)
if [ $APPCHK < '2' ];
then
$RUNSYNC
fi
exit
Here's the error that I'm getting:
$ ./image_sync.sh
rsync: mkdir "/home/i/webapps/pavlick_container/public/images" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(595) [Receiver=3.0.7]
rsync: connection unexpectedly closed (9 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]
./image_sync.sh: line 8: 2: No such file or directory
TRTWF is that
rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv#192.168.0.140:~/webapps/pavlick_container/public/images
runs just fine from a terminal window.
What am I doing wrong?
Your grep call is wrong on two counts. The pattern shouldn't include a newline. To look for an exact string, use grep -F 'substring' or grep -xF 'exact whole line'.
Finding if a process is running with ps | grep is highly brittle. On most unices (at least Solaris, Linux and *BSD), use pgrep: pgrep -f 'PATTERN' returns true if there's a running process whose command line matches PATTERN.
Every program returns a status code, either 0 to indicate success or a number between 1 and 255 to indicate failure. In the shell, any command is a valid boolean expression; the status code 0 is treated as true and anything else as false.
$(…) means run the command inside the parentheses and capture its output. So rsync is executed as soon as the shell hits the definition of the RUNSYNC variable. To store a block of shell code, use a function (example below, although you don't actually need a function here, you could just write the code directly).
Your test [ $APPCHK < 2 ] should be [ $APPCHK -lt 2 ]: < means input redirection. (In bash, you can also write [[ foo < bar ]], but that's string comparison, not numeric comparison.)
~/ at the beginning of the remote rsync path is optional. Also, -e ssh is the default unless your version of rsync is really old.
exit at the end of the script is useless, the script will exit anyway.
Here's a script taking the above into account:
#!/bin/bash
run_rsync () {
    rsync -rvz '/home/e-smith/files/ibays/drive-i/files/Warehouse Pics/organized_pics' \
        imgserv#192.168.0.140:webapps/pavlick_container/public/images
}
process_pattern='/usr/bin/rsync -rvz /home/e-smith/files/ibays/drive-i/files/Warehouse Pics/organized_pics imgserv#192\.168\.0\.140:webapps/pavlick_container/public/images'
# start the sync only if no matching rsync process is already running
if ! pgrep -xf "$process_pattern" > /dev/null; then
    run_rsync
fi
It looks like some directory along this path in your rsync command is wrong: ~/webapps/pavlick_container/public/images
Have you checked on the server 192.168.0.140 in imgserv's home directory to see if "pavlick_container/public" exists? That's my guess.
You have a number of problems. First you are running the commands instead of putting the commands in variables. There is also a much easier way.
RUNSYNC="rsync -rvz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/organized_pics imgserv#192.168.0.140:~/webapps/pavlick_container/public/images"
if ! pgrep -f "rsync.*organized_pics"; then $RUNSYNC; fi
First of all, the way you check whether the program is running is fragile; it may or may not work. You should rely on a special file that you create when your script starts and delete when it ends: checking whether that file exists tells you whether the script is running.
Then, try to either put a \ before the ~ or remove the ~/ completely. If cron runs as a different user, the tilde will be expanded on the client to that user's home directory. It works from the command line perhaps because your user's home directory is the same on both machines, but it won't be for the user cron runs as. A guess at this point, but again, try removing the ~/ and see if it works.
If your real code is missing a closing double-quote on the grep target, you're going to get weird results from the get-go.
Also, ps aux will not list a complete command line like you show (at least not on any of the ps implementations I have used); you need to make it ps auxwww. You will often see people add | grep -v grep | (you'll see why at some point); this can be avoided by changing your static search target slightly, e.g. "/usr/bin/rsync" to "/usr/bin/[r]sync ".
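Applied to the question's check, that would look something like this (a sketch; the bracket keeps grep from matching its own command line, so the count only reflects real rsync processes):
APPCHK=$(ps auxwww | grep -c "/usr/bin/[r]sync -rvz")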
Other users are also helping with their comments. Using a flag file as @DiegoSevilla mentions is a slightly outdated approach; use mkdir /tmp/MyWatcher_flagDir for your flag instead. Directory creation is an atomic operation (whereas file creation is not), so this eliminates the errors you might hit when two copies of your monitor try to create the flag at the same time. Only one process will succeed in making or removing a flag directory.
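A sketch of that mkdir-based lock (run_rsync stands in for whatever command you are guarding, e.g. the function from the first answer):
lockdir=/tmp/MyWatcher_flagDir
if mkdir "$lockdir" 2>/dev/null; then
    # we own the lock: do the work, then release it
    run_rsync
    rmdir "$lockdir"
else
    echo "another instance appears to be running" >&2
fi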
I hope this helps.
