How to output something in PowerShell - Windows

I am running a PowerShell script from within a batch file. The script fetches a web page and checks whether the page's content is the string "OK".
The PowerShell script returns an error level to the batch script.
The batch script is executed by ScriptFTP, an FTP automation program. If an error occurs, I can have ScriptFTP send the full console output to the administrator via E-Mail.
In the PowerShell script, I would like to output the return value from the web site if it is not "OK", so the error message gets included in the console output, and thus in the status mail.
I am new to PowerShell and not sure which output function to use for this. I can see three:
Write-Host
Write-Output
Write-Error
What would be the right thing to use to write to the Windows equivalent of stdout?

Simply outputting something in PowerShell is a thing of beauty - and one of its greatest strengths. For example, the common Hello, World! application is reduced to a single line:
"Hello, World!"
This creates a string object with that value; being the last item on the command pipeline, it has its .ToString() method called and the result is written to STDOUT (by default). A thing of beauty.
The other Write-* commands are specific to writing text to their associated streams, and have their place as such.
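For reference, a quick sketch of which stream each of the main Write-* cmdlets targets (stream numbers per about_Redirection; the information stream requires PowerShell 5+):
Write-Output      'data'       # stream 1 (success) - the pipeline / stdout
Write-Error       'oops'       # stream 2 (error)
Write-Warning     'careful'    # stream 3 (warning)
Write-Verbose     'details'    # stream 4 (verbose; shown with -Verbose)
Write-Debug       'internals'  # stream 5 (debug; shown with -Debug)
Write-Host        'for humans' # stream 6 (information; PowerShell 5+)
Write-Information 'info'       # stream 6 (information; PowerShell 5+)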

I think in this case you will need Write-Output.
If you have a script like
Write-Output "test1";
Write-Host "test2";
"test3";
then, if you call the script with redirected output, something like yourscript.ps1 > out.txt, you will get test2 on the screen and test1\ntest3\n in out.txt.
Note that "test3" and the Write-Output line will always append a newline to your text, and there is no way in PowerShell to stop this (that is, echo -n is impossible in PowerShell with the native commands). If you want the echo -n functionality (somewhat basic and easy in Bash), see samthebest's answer.
If a batch file runs a PowerShell command, it will most likely capture the Write-Output command. I have had "long discussions" with system administrators about what should be written to the console and what should not. We have now agreed that only the information about whether the script executed successfully or died gets Write-Host'ed, and everything that the script's author might need to know about the execution (what items were updated, what fields were set, et cetera) goes to Write-Output. This way, when you submit a script to the system administrator, he can easily run thescript.ps1 > someredirectedoutput.txt and see on the screen whether everything is OK, then send someredirectedoutput.txt back to the developers.
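A minimal sketch of that convention (the item-processing logic and file names are made up):
# Status for whoever is watching the console - not captured by > redirection:
Write-Host "Updating items..."
# Data / execution details for the caller - this is what ends up in the redirected file:
Get-ChildItem *.txt | ForEach-Object {
    Write-Output "Updated $($_.Name)"
}
Write-Host "Script finished successfully."
When run as thescript.ps1 > someredirectedoutput.txt, only the Write-Host lines stay on screen, while the per-item details land in the file.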

I think the following is a good exhibit of Echo vs. Write-Host. Notice how test() actually returns an array of ints, not a single int as one could easily be led to believe.
function test {
    Write-Host 123
    echo 456 # AKA 'Write-Output'
    return 789
}
$x = test
Write-Host "x of type '$($x.GetType().name)' = $x"
Write-Host "`$x[0] = $($x[0])"
Write-Host "`$x[1] = $($x[1])"
Terminal output of the above:
123
x of type 'Object[]' = 456 789
$x[0] = 456
$x[1] = 789

You can use any of these in your scenario since they write to the default streams (output and error). If you were piping output to another cmdlet you would want to use Write-Output, which will eventually terminate in Write-Host.
This article describes the different output options: PowerShell O is for Output

What would be the right thing to use to write to the Windows equivalent of stdout?
In effect, but very unfortunately, both Windows PowerShell and PowerShell Core as of v7.2 send all of their 6(!) output streams to stdout when called from the outside, via PowerShell's CLI.
See GitHub issue #7989 for a discussion of this problematic behavior, which likely won't get fixed, so as to preserve backward compatibility.
In practice, this means that whatever PowerShell stream you send output to will be seen as stdout output by an external caller:
E.g., if you run the following from cmd.exe, you'll see no output, because the stdout redirection to NUL applies equally to all PowerShell streams:
C:\>powershell -noprofile -command "Write-Error error!" >NUL
However - curiously - if you redirect stderr, PowerShell does send its error stream to stderr, so that with 2> you can capture the error-stream output selectively; the following outputs just 'hi' - the success-stream output - while capturing the error-stream output in file err.txt:
C:\>powershell -noprofile -command "'hi'; Write-Error error!" 2>err.txt
The desirable behavior is:
Send PowerShell's success output stream (number 1) to stdout.
Send output from all other streams to stderr, which is the only option, given that between processes only 2 output streams exist - stdout (standard output) for data, and stderr (standard error) for error messages and all other types of messages - such as status information - that aren't data.
It's advisable to make this distinction in your code, even though it currently isn't being respected.
Inside PowerShell:
Write-Host is for display output, and bypasses the success output stream - as such, its output can neither be (directly) captured in a variable nor suppressed nor redirected.
Its original intent was simply to provide user feedback and to create simple, console-based user interfaces (colored output).
Because Write-Host output previously could not be captured or redirected, PowerShell version 5 made Write-Host write to the newly introduced information stream (number 6), so since then it is possible to capture and redirect Write-Host output (see the sketch below).
Write-Error is meant for writing non-terminating errors to the error stream (number 2); conceptually, the error stream is the equivalent of stderr.
Write-Output writes to the success [output] stream (number 1), which is conceptually equivalent to stdout; it is the stream to write data (results) to.
However, explicit use of Write-Output is rarely needed due to PowerShell's implicit output feature:
Output from any command or expression that isn't explicitly captured, suppressed or redirected is automatically sent to the success stream; e.g., Write-Output "Honey, I'm $HOME" and "Honey, I'm $HOME" are equivalent, with the latter not only being more concise, but also faster.
See this answer for more information.
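A small sketch of these points (the Write-Host capture requires PowerShell 5+, as described above):
# Implicit output and Write-Output are interchangeable:
$a = "Honey, I'm $HOME"               # implicit (expression) output, captured in $a
$b = Write-Output "Honey, I'm $HOME"  # same value, more verbose and slower
# Because Write-Host now writes to the information stream (6),
# its output can be captured or redirected after all:
$captured = Write-Host 'status message' 6>&1    # $captured holds an information record
Write-Host 'status message' 6> host-output.txt  # or redirect it to a file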

Write-Host "Found file - " + $File.FullName -ForegroundColor Magenta
Magenta can be one of the "System.ConsoleColor" enumerator values - Black, DarkBlue, DarkGreen, DarkCyan, DarkRed, DarkMagenta, DarkYellow, Gray, DarkGray, Blue, Green, Cyan, Red, Magenta, Yellow, White.
The + $File.FullName part is optional and shows how to combine a variable with the string; the parentheses are needed so that Write-Host receives the concatenated result as a single argument.

You simply cannot get PowerShell to omit those pesky newlines. There is no script or cmdlet that does this.
Of course Write-Host is absolute nonsense because you can't redirect/pipe from it! You simply have to write your own:
using System;

namespace WriteToStdout
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args != null)
            {
                // Join all arguments with single spaces and write them
                // without appending a trailing newline
                Console.Write(string.Join(" ", args));
            }
        }
    }
}
E.g.
PS C:\> writetostdout finally I can write to stdout like echo -n
finally I can write to stdout like echo -nPS C:\>
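If you'd rather not open Visual Studio for this, a hedged sketch of compiling the snippet from Windows PowerShell itself (Add-Type in PowerShell 7+ no longer supports emitting console executables, so this is Windows PowerShell only; the source-file name is made up):
$source = Get-Content -Raw .\WriteToStdout.cs    # the C# code shown above
Add-Type -TypeDefinition $source -OutputAssembly .\writetostdout.exe -OutputType ConsoleApplication
.\writetostdout.exe finally I can write to stdout like echo -n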

Related

Powershell output messages intertwined with other messages from other sources

I basically have something like
$outputMsg1="foo`nbla`n"
Write-host $outputMsg1
in a Powershell script which I call in Visual Studio 2022 Post Build Event. The script is called more than once, sometimes simultaneously. The result sometimes is
foo
foo
bla
bla
so the output is interlaced, intertwined. Interestingly enough, the split is never done in the middle of a word.
How can I make PowerShell write a message to output in one chunk?
Even better, is there a way to group all the writes in a script into one chunk, so that even when I have several Write-Host statements they are written one after another, not interrupted by messages from another instance of the script? In other words, for a script to get exclusive output access for as long as it runs.
You can try adding all logic into a function and store the function output into a variable, so messages will be grouped:
function Do-Stuff {
    Write-Output '1'
    Write-Output '2'
}
$out = Do-Stuff | Out-String
$out
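If the real goal is to keep output from simultaneously running instances from interleaving at all, one option is to serialize the writes with a named, system-wide mutex - a hedged sketch (the mutex name is made up; every instance of the script must use the same name):
$mutex = New-Object System.Threading.Mutex($false, 'Global\MyPostBuildLog')
$null = $mutex.WaitOne()           # block until no other instance holds the lock
try {
    Write-Host $outputMsg1         # everything written here comes out as one uninterrupted chunk
}
finally {
    $mutex.ReleaseMutex()
    $mutex.Dispose()
}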

powershell stdin pipes and redirection

Hello, I have been making a small cross-platform script that I can curl and pipe into bash or PowerShell. The basic idea is that the server sends commands to the interpreter and then issues a command that redirects everything after it to stdout. An example in bash is
#some commands
aplay rick.wav
cat -
random text
that will be redirected to stdout by cat...
bash will never see this
I would then pipe this to stdin of bash
But for Powershell I can do cat test.ps1 | iex or cat test.ps1 | powershell -
But I can't redirect stdin to stdout continuously in one command like cat -, because PowerShell's cat (an alias for Get-Content) doesn't read from stdin.
Also, as a side note: after trying a lot of random things, it seems like there are several stdin types on Windows, one being the keyboard and another being pipes.
You can pipe lines of text to powershell.exe, the Windows PowerShell CLI, via -Command - (-c -), and it will interpret them one by one.
Here's an interactive demonstration from inside PowerShell; it works the same with input piped (provided via stdin) from the outside:
# Repeatedly prompt for a line of input and execute it as a PowerShell command.
# Press Ctrl-C to exit.
& { while ($true) { Read-Host } } | powershell -noprofile -c -
Note:
-Command - has problematic aspects, notably with commands that span multiple lines (an additional Enter keystroke / newline is then needed for the command to be recognized) and so does -File -, whose behavior is even stranger - see this answer and GitHub issue #3223.
Another demonstration, simulating outside stdin input via 2 lines piped to powershell -c -:
'get-date', 'get-item /' | powershell -noprofile -c -
The two commands are executed and their output is printed; powershell.exe then exits, because no more stdin input is available; however, with indefinite stdin input (analogous to cat - on Unix-like platforms) the PowerShell process would be kept alive indefinitely too.
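As for the cat - part itself: inside a PowerShell script you can build a similar "copy stdin to stdout until EOF" stage with [Console]::In - a hedged sketch (it reads the process's real stdin, so it only makes sense when something is actually piped to the PowerShell process):
while ($null -ne ($line = [Console]::In.ReadLine())) {
    $line    # implicit output -> success stream -> stdout
}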

How to output a constantly active CMD

I am running an application which opens CMD and connects via an API service. Throughout the day new output shows up in the CMD window, and I would like to export that information to a txt file somewhere; every time something new shows up, it should be appended to the same file, or a new one created. It doesn't really matter.
App.exe > /file.txt doesn't really work
Redirection examples
command > filename # Redirect command output to a file (overwrite)
command >> filename # APPEND into a file
command 2> filename # Redirect Errors from operation to a file(overwrite)
command 2>> filename # APPEND errors to a file
command 2>&1 # Add errors to results
command 1>&2 # Add results to errors
command | command # This is the basic form of a PowerShell Pipeline
# In PowerShell 3.0+
command 3> warning.txt # Write warning output to warning.txt
command 4>> verbose.txt # Append verbose.txt with the verbose output
command 5>&1 # Writes debug output to the output stream
command *> out.txt # Redirect all streams (output, error, warning, verbose, and debug) to out.txt
You are not showing any code as to how you are starting/using cmd.exe for your use case, which just leaves the folks trying to help you to guess. So, a redirect of cmd.exe, for example:
$MyOutputFile = 'C:\MyOutputFile.txt'
Start-Process -FilePath c:\windows\system32\cmd.exe -ArgumentList '/c C:\YourCommand.bat' -Wait -NoNewWindow -RedirectStandardOutput $MyOutputFile
Lastly, since you've left us to guess: if you're launching Process A from PowerShell, but Process A is, in turn, launching Process B, then it is up to Process A to capture or redirect the output of Process B. There's no way for PowerShell to sub-capture if Process A isn't doing it.
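If PowerShell itself is what launches the console program, and you want to watch the output and append it to a file as it arrives, a hedged sketch (program and file paths are placeholders; Tee-Object's -Append parameter requires a reasonably recent PowerShell version):
# Merge all of the program's output streams, show them in the console,
# and append them to the log file at the same time:
.\App.exe *>&1 | Tee-Object -FilePath C:\logs\app-output.txt -Append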
Resources
About Redirection
How-to: Redirection
PowerShell Redirection Operators
Understanding Streams, Redirection, and Write-Host in PowerShell
Use PowerShell Redirection Operators for Script Flexibility

Output shows up in console, but disappears when redirected to file

I'm using a tool that tests hard disks, fstest.exe. It runs fine from the command line, displaying how long it took to do various file-creation/-deletion/-mangling tasks. The usual output, when run from the command line as fstest.exe otherParams, looks like this:
---
CPU Usage: 0.0%
Disk reads/sec: 0
Disk writes/sec: 0
Disk bytes/read: 0
Disk bytes/write: 0
Test duration: 0 milliseconds, 1153 ticks (3507177 ticks/sec)
---
The trouble is that when I redirect the output to file, it doesn't display anything:
fstest.exe otherParams > out.txt creates an empty out.txt file, even though the command otherwise executed just fine (and created a few test-files as part of its execution).
How can I force this application to redirect output to a file? I've tried looking at it more closely with PowerShell (via Start-Process), and both the standard-out and standard-error streams are just empty.
Other things I've tried:
cmd /c "fstest.exe otherParams > out.txt"
fstest.exe otherParams 2>&1 >> out.txt
fstest.exe otherParams | sort
powershell Start-Process -FilePath .\fstest.exe -ArgumentList @("create2", "-openexisting") -RedirectStandardOutput out.txt -RedirectStandardError err.txt -wait
(That creates both out.txt and err.txt, both empty.)
What would cause an application to change its output depending on whether it's redirected, and is there any way I can make it redirect to file?
UPDATE: I've gotten my hands on the source code. It's C++, and the output is just straightforward printf statements.
It turns out the program in question wasn't flushing stdout after doing a printf to it. Apparently, cmd is willing to flush that buffer when printing to the console, but not when redirecting output to a file. (Or perhaps the file handle is closed before the console could force a flush.) The program was exiting normally (via return, exit code always 0).
To answer the question: I had to fix the program; there was nothing I could do with command-line switches or redirects to change it.
CMD can handle up to 10 file descriptors. Try redirecting them to separate files to identify the descriptor your program writes to:
fstest.exe {params} 0>out0.txt 3>out3.txt 4>out4.txt 5>out5.txt ...
If it's writing to standard error, instead of standard output, redirect thusly:
fstest.exe otherParams 2> out.txt

$LastExitCode=0, but $?=False in PowerShell. Redirecting stderr to stdout gives NativeCommandError

Why does PowerShell show the surprising behaviour in the second example below?
First, an example of sane behaviour:
PS C:\> & cmd /c "echo Hello from standard error 1>&2"; echo "`$LastExitCode=$LastExitCode and `$?=$?"
Hello from standard error
$LastExitCode=0 and $?=True
No surprises. I print a message to standard error (using cmd's echo). I inspect the variables $? and $LastExitCode. They equal to True and 0 respectively, as expected.
However, if I ask PowerShell to redirect standard error to standard output over the first command, I get a NativeCommandError:
PS C:\> & cmd /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
cmd.exe : Hello from standard error
At line:1 char:4
+ cmd <<<< /c "echo Hello from standard error 1>&2" 2>&1; echo "`$LastExitCode=$LastExitCode and `$?=$?"
+ CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
$LastExitCode=0 and $?=False
My first question, why the NativeCommandError?
Secondly, why is $? False when cmd ran successfully and $LastExitCode is 0? PowerShell's documentation about automatic variables doesn't explicitly define $?. I always supposed it is True if and only if $LastExitCode is 0, but my example contradicts that.
Here's how I came across this behaviour in the real-world (simplified). It really is FUBAR. I was calling one PowerShell script from another. The inner script:
cmd /c "echo Hello from standard error 1>&2"
if (! $?)
{
echo "Job failed. Sending email.."
exit 1
}
# Do something else
Running this simply as .\job.ps1, it works fine, and no email is sent. However, I was calling it from another PowerShell script, logging to a file .\job.ps1 2>&1 > log.txt. In this case, an email is sent! What you do outside the script with the error stream affects the internal behaviour of the script. Observing a phenomenon changes the outcome. This feels like quantum physics rather than scripting!
[Interestingly: .\job.ps1 2>&1 may or not blow up depending on where you run it]
(I am using PowerShell v2.)
The '$?' variable is documented in about_Automatic_Variables:
$?
Contains the execution status of the last operation
This is referring to the most recent PowerShell operation, as opposed to the last external command, which is what you get in $LastExitCode.
In your example, $LastExitCode is 0, because the last external command was cmd, which was successful in echoing some text. But the 2>&1 causes messages to stderr to be converted to error records in the output stream, which tells PowerShell that there was an error during the last operation, causing $? to be False.
To illustrate this a bit more, consider this:
> java -jar foo; $?; $LastExitCode
Unable to access jarfile foo
False
1
$LastExitCode is 1, because that was the exit code of java.exe. $? is False, because the very last thing the shell did failed.
But if all I do is switch them around:
> java -jar foo; $LastExitCode; $?
Unable to access jarfile foo
1
True
... then $? is True, because the last thing the shell did was print $LastExitCode to the host, which was successful.
Finally:
> &{ java -jar foo }; $?; $LastExitCode
Unable to access jarfile foo
True
1
...which seems a bit counter-intuitive, but $? is True now, because the execution of the script block was successful, even if the command run inside of it was not.
Returning to the 2>&1 redirect.... that causes an error record to go in the output stream, which is what gives that long-winded blob about the NativeCommandError. The shell is dumping the whole error record.
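You can verify that what travels down the merged stream really is an object rather than plain text (a quick check; the type name in the comment is what Windows PowerShell reports):
$merged = cmd /c "echo Hello from standard error 1>&2" 2>&1
$merged.GetType().FullName    # System.Management.Automation.ErrorRecord - not a string
"$merged"                     # stringifying it yields just the message text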
This can be especially annoying when all you want to do is pipe stderr and stdout together so they can be combined in a log file or something. Who wants PowerShell butting in to their log file??? If I do ant build 2>&1 >build.log, then any errors that go to stderr have PowerShell's nosey $0.02 tacked on, instead of getting clean error messages in my log file.
But, the output stream is not a text stream! Redirects are just another syntax for the object pipeline. The error records are objects, so all you have to do is convert the objects on that stream to strings before redirecting:
From:
> cmd /c "echo Hello from standard error 1>&2" 2>&1
cmd.exe : Hello from standard error
At line:1 char:4
+ cmd &2" 2>&1
+ CategoryInfo : NotSpecified: (Hello from standard error :String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
To:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" }
Hello from standard error
...and with a redirect to a file:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" } | tee out.txt
Hello from standard error
...or just:
> cmd /c "echo Hello from standard error 1>&2" 2>&1 | %{ "$_" } >out.txt
This bug is an unforeseen consequence of PowerShell's prescriptive design for error handling, so most likely it will never be fixed. If your script plays only with other PowerShell scripts, you're safe. However if your script interacts with applications from the big wide world, this bug may bite.
PS> nslookup microsoft.com 2>&1 ; echo $?
False
Gotcha! Still, after some painful head-scratching, you'll never forget the lesson.
Use ($LastExitCode -eq 0) instead of $?
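A small sketch of that pattern, using cmd to simulate a native command that writes to stderr but still succeeds:
# 2>&1 can make $? report False even on success, so test the exit code instead:
cmd /c "echo just a warning 1>&2" 2>&1 | ForEach-Object { "$_" }
if ($LASTEXITCODE -eq 0) {
    "Command succeeded (exit code 0), despite the stderr output."
}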
(Note: This is mostly speculation; I rarely use many native commands in PowerShell and others probably know more about PowerShell internals than me)
I guess you found a discrepancy in the PowerShell console host.
If PowerShell picks up stuff on the standard error stream it will assume an error and throw a NativeCommandError.
PowerShell can only pick this up if it monitors the standard error stream.
PowerShell ISE has to monitor it, because it is not a console application, and thus a native console application has no console to write to. This is why in the PowerShell ISE this fails regardless of the 2>&1 redirection operator.
The console host will monitor the standard error stream if you use the 2>&1 redirection operator because output on the standard error stream has to be redirected and thus read.
My guess here is that the console PowerShell host is lazy and just hands native console commands the console if it doesn't need to do any processing on their output.
I would really believe this to be a bug, because PowerShell behaves differently depending on the host application.
Update: The problems have been fixed in v7.2 - see this answer.
A summary of the problems as of v7.1:
The PowerShell engine still has bugs with respect to 2> redirections applied to external-program calls:
The root cause is that using 2> causes the stderr (standard error) output to be routed via PowerShell's error stream (see about_Redirection), which has the following undesired consequences:
If $ErrorActionPreference = 'Stop' happens to be in effect, using 2> unexpectedly triggers a script-terminating error, i.e. aborts the script (even in the form 2>$null, where the intent is clearly to ignore stderr lines). See GitHub issue #4002.
Workaround: (Temporarily) set $ErrorActionPreference = 'Continue'
Since 2> currently touches the error stream, $?, the automatic success-status variable, is invariably set to $False if at least one stderr line was emitted, and then no longer reflects the true success status of the command. See this GitHub issue.
Workaround, as recommended in your answer: only ever use $LASTEXITCODE -eq 0 to test for success after calls to external programs.
With 2>, stderr lines are unexpectedly recorded in the automatic $Error variable (the variable that keeps a log of all errors that occurred in the session) - even if you use 2>$null. See this GitHub issue.
Workaround: Short of keeping track how many error records were added and removing them with $Error.RemoveAt() one by one, there is none.
Generally, unfortunately, some PowerShell hosts by default route stderr output from external programs via PowerShell's error stream, i.e. treat it as error output, which is inappropriate, because many external programs use stderr also for status information, or more generally, for anything that is not data (git being a prime example): Not every stderr line can be assumed to represent an error, and the presence of stderr output does not imply failure.
Affected hosts:
The obsolescent Windows PowerShell ISE and possibly other, older GUI-based IDEs other than Visual Studio Code.
When executing external programs via PowerShell remoting or in a background job (these two invocation mechanisms share the same infrastructure and use the ServerRemoteHost host that ships with PowerShell).
Hosts that DO behave as expected in non-remoting, non-background invocations (they pass stderr lines through to the display and print them normally):
Terminals (consoles), including Windows Terminal.
Visual Studio Code with the PowerShell extension; this cross-platform editor (IDE) is meant to supersede the Windows PowerShell ISE.
This inconsistency across hosts is discussed in this GitHub issue.
For me it was an issue with ErrorActionPreference.
When running from the ISE I've set $ErrorActionPreference = "Stop" in the first lines, and that was intercepting everything even with *>&1 added as parameters to the call.
So first I had this line:
& $exe $parameters *>&1
Which, like I've said, didn't work because I had $ErrorActionPreference = "Stop" earlier in the file (or it can be set globally in the profile of the user launching the script).
So I've tried to wrap it in Invoke-Expression to force ErrorAction:
Invoke-Expression -Command "& `"$exe`" $parameters *>&1" -ErrorAction Continue
And this doesn't work either.
So I had to fall back to a hack, temporarily overriding ErrorActionPreference:
$old_error_action_preference = $ErrorActionPreference
try
{
    $ErrorActionPreference = "Continue"
    & $exe $parameters *>&1
}
finally
{
    $ErrorActionPreference = $old_error_action_preference
}
Which is working for me.
And I've wrapped that into a function:
<#
.SYNOPSIS
Executes a native executable in the specified directory (if specified),
optionally overriding the global $ErrorActionPreference.
#>
function Start-NativeExecutable
{
    [CmdletBinding(SupportsShouldProcess = $true)]
    Param
    (
        [Parameter (Mandatory = $true, Position = 0, ValueFromPipelinebyPropertyName=$True)]
        [ValidateNotNullOrEmpty()]
        [string] $Path,
        [Parameter (Mandatory = $false, Position = 1, ValueFromPipelinebyPropertyName=$True)]
        [string] $Parameters,
        [Parameter (Mandatory = $false, Position = 2, ValueFromPipelinebyPropertyName=$True)]
        [string] $WorkingDirectory,
        [Parameter (Mandatory = $false, Position = 3, ValueFromPipelinebyPropertyName=$True)]
        [string] $GlobalErrorActionPreference,
        [Parameter (Mandatory = $false, Position = 4, ValueFromPipelinebyPropertyName=$True)]
        [switch] $RedirectAllOutput
    )

    if ($WorkingDirectory)
    {
        $old_work_dir = Resolve-Path .
        cd $WorkingDirectory
    }

    if ($GlobalErrorActionPreference)
    {
        $old_error_action_preference = $ErrorActionPreference
        $ErrorActionPreference = $GlobalErrorActionPreference
    }

    try
    {
        Write-Verbose "& $Path $Parameters"
        if ($RedirectAllOutput)
        { & $Path $Parameters *>&1 }
        else
        { & $Path $Parameters }
    }
    finally
    {
        if ($WorkingDirectory)
        { cd $old_work_dir }
        if ($GlobalErrorActionPreference)
        { $ErrorActionPreference = $old_error_action_preference }
    }
}
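A hypothetical invocation of that wrapper (paths and arguments are placeholders; note that $Parameters is passed to the executable as a single string):
Start-NativeExecutable -Path 'C:\Tools\mytool.exe' -Parameters '--full-output' `
    -WorkingDirectory 'C:\Temp' -GlobalErrorActionPreference Continue -RedirectAllOutput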

Resources