PowerShell: calling a function if the script has mandatory parameters - Windows

I'm pretty new to PowerShell but loving it for automating loads of tasks on our Windows machines. I love that you can call functions from other scripts; however, the scripts I have written all take parameters the user can provide (so it's easier for colleagues to use them).
There is one parameter in particular that is usually mandatory in my scripts. The problem I'm facing is calling functions from scripts with mandatory parameters.
Here's a simple example:
Param(
    [Parameter()]
    [ValidateNotNullOrEmpty()]
    [string]$VirtualMachine = $(throw "Machine name missing!"),

    [int]$Attempts = 150
)

Function DoSomething($VirtualMachine, $Attempts){
    write("$VirtualMachine and $Attempts")
}
Running this as a script, you would provide -VirtualMachine "VMnameHere" -Attempts 123, and it would produce VMnameHere and 123. Perfect! However, if I try to call this as a function from another script...
Example here:
. ".\Manage-Machine.ps1"
DoSomething -VirtualMachine "nwb-thisisamachine" -Attempts 500
This produced an error:
Machine name missing!
At C:\Users\something\Desktop\Dump\play\Manage-Machine.ps1:33 char:28
+ [string]$VirtualMachine=$(throw "Machine name missing!"),
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (Machine name missing!:String) [], RuntimeException
+ FullyQualifiedErrorId : Machine name missing!
This is clearly because the parameter is effectively mandatory. Am I doing something wrong in how I'm calling the function in this case? Is there an alternative way to call the function if the script it belongs to has mandatory parameters? If I remove the validation for the parameter, it all works.
Would love some input,
Thank you!

I would use [Parameter(Mandatory = $true)] and remove = $(throw "Machine name missing!").
You can then run powershell with the -NonInteractive flag, and any missing mandatory parameter will cause an error and a non-zero exit code.
That exit code can then be picked up by your CI process, which handles the error itself.
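For example, a minimal sketch of that approach, reusing the parameter names from the question:

Param(
    [Parameter(Mandatory = $true)]
    [ValidateNotNullOrEmpty()]
    [string]$VirtualMachine,

    [int]$Attempts = 150
)

Invoked as powershell -NonInteractive -File .\Manage-Machine.ps1 with no arguments, this fails with an error and a non-zero exit code instead of prompting for the missing value.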

I'm not sure it's such a great idea to do this, but it sounds like the following would work:
Param(
    [ValidateNotNullOrEmpty()]
    # Do NOT use = $(Throw ...) or [Parameter(Mandatory)].
    [string]$VirtualMachine,

    [int]$Attempts = 150
)

# Determine if the script is being "dot-sourced".
# Note: The `$MyInvocation.Line -eq ''` part detects being run from the
# ISE or Visual Studio Code, which implicitly perform sourcing too.
$isDotSourced = $MyInvocation.InvocationName -eq '.' -or $MyInvocation.Line -eq ''

# NOT sourced? Enforce mandatory parameters.
if (-not $isDotSourced) {
    if (-not $VirtualMachine) { Throw "Machine name missing!" }
}

Function DoSomething($VirtualMachine, $Attempts) {
    "$VirtualMachine and $Attempts"
}

# NOT sourced? Call the default function or
# do whatever you want the script to do when invoked as a whole.
if (-not $isDotSourced) {
    DoSomething $VirtualMachine $Attempts
}
. .\Manage-Machine.ps1 will then merely define the functions (DoSomething in this case), for later invocation;
since none of the script parameters are technically declared as mandatory, invocation without parameters will succeed (unlike in your attempt, where the throw statement invariably kicked in - whether directly invoked or dot-sourced).
.\Manage-Machine.ps1, by contrast, will enforce the presence of a $VirtualMachine parameter value and instantly call DoSomething, passing the parameter values through.
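For example:

. .\Manage-Machine.ps1                                       # dot-sourced: defines DoSomething only
DoSomething -VirtualMachine "nwb-thisisamachine" -Attempts 500

.\Manage-Machine.ps1 -VirtualMachine "nwb-thisisamachine"    # direct: enforces the parameter, then runs DoSomething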
Note that, of course, your functions could benefit from typing your parameters and adding validation attributes, too.
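For instance, a minimal sketch of the same function with typed and validated parameters (the [ValidateRange()] bounds here are purely illustrative):

Function DoSomething {
    param(
        [ValidateNotNullOrEmpty()]
        [string]$VirtualMachine,

        [ValidateRange(1, 1000)]
        [int]$Attempts = 150
    )
    "$VirtualMachine and $Attempts"
}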

Related

Can't call a Powershell script through the registry properly. A positional parameter cannot be found that accepts argument '$null'

Here is a simple test function called RegistryBoundParams.ps1:
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string]
    $Target,

    [Parameter(Mandatory = $false)]
    [switch]
    $MySwitch
)

if (!(Test-IsAdmin)) {
    Request-AdminRights -NoExit
    Exit
}

if ($MySwitch) {
    "Do something" | Out-Host
} else {
    "Do something else" | Out-Host
}

Show-AllArguments
If I call it via the PS terminal, everything works as expected:
Exact call: C:\Tools\scripts> .\RegistryBoundParams.ps1 -Target "C:\Test\" -MySwitch
If I call it through the registry (adding the command to a context menu), I get:
pwsh -noexit -file "C:\Tools\scripts\RegistryBoundParams.ps1" -Target "C:\Program Files\Python39\python.exe" -MySwitch
Plaintext of the error: RegistryBoundParams.ps1: A positional parameter cannot be found that accepts argument '$null'.
Here's a reg file that shows exactly what I added in the registry:
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell\1_TestRegistry]
@="Test Powershell Script from Registry"
"Icon"="C:\\Tools\\icons\\apps\\Powershell 1.ico,0"
"NeverDefault"=""

[HKEY_CLASSES_ROOT\*\shell\1_TestRegistry\command]
@="pwsh -noexit -file \"C:\\Tools\\scripts\\RegistryBoundParams.ps1\" -Target \"C:\\Program Files\\Python39\\python.exe\""
So somewhere along the line $null is being passed to the script, and I have no idea why.
I could really, really use some help here.
Thanks so much for any guidance.
Edit:
I found that if I add a new string variable called $catchall, the script works. I suspect that when the script is called from the registry, a null value is appended for some reason, which is why it works when I define an additional "catch all" parameter.
This is definitely not an ideal solution, so I am still looking for a proper fix. Really appreciate any help!
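For reference, a sketch of that workaround ($CatchAll is a hypothetical name; [Parameter(ValueFromRemainingArguments = $true)] makes it absorb any stray trailing arguments):

[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string]$Target,

    [switch]$MySwitch,

    # Hypothetical catch-all: soaks up unexpected extra arguments such as a stray $null
    [Parameter(ValueFromRemainingArguments = $true)]
    $CatchAll
)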
Edit2:
It turns out that the Request-AdminRights function I was using, which mklement0 authored, had a bug that has now been fixed. Anyone who wants one-line, cross-platform self-elevation with bound/unbound parameter support... go get it!
The problem was a (since-fixed) bug in the code that you based your self-elevating function Request-AdminRights on:
The bug was that in the case of an advanced script such as yours, $args - which is never bound in advanced scripts - was mistakenly serialized as $null instead of translated to @(), resulting in that $null getting passed as an extra argument on re-invocation.
If you redefine your Request-AdminRights function based on the now updated body of the Ensure-Elevated function in the original answer, your problem should go away - no need to modify the enclosing script.
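A minimal sketch of the idea behind the fix (not the actual code from the answer): before re-invoking, translate an unbound $args into an empty array rather than letting it serialize as $null.

# In an advanced script $args is never bound, so guard against $null
$argList = if ($null -eq $args) { @() } else { @($args) }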

Call perl function from another perl script with different Active perl versions

We have two versions of ActivePerl: 5.6 and 5.24. We have web services that have to run under ActivePerl 5.24 (to support TLS 1.2), and they need to be invoked from ActivePerl 5.6. We are using the Windows operating system.
Steps followed :
The caller code, which runs under 5.6, invokes the 5.24 version using a system/require command.
Problem:
How can I call the 5.24 Perl function (example: webservicecall(arg1) { return "xyz" }) from the 5.6 Perl script through the system command, require, etc.?
Also, how do I get the return value of the 5.24 Perl function?
Note:
It's a temporary workaround to have two Perl versions, and we plan to upgrade to a higher version later.
Here Perl 5.6 is installed in "C:\Perl\bin\perl\" and Perl 5.24 in "D:\Perl\bin\perl\".
"**p5_6.pl**"
print "Hello Perl5_6\n";
system('D:\Perl\bin\perl D:\sample_program\p5.24.pl');
print $OUTFILE;
$retval = Mul(25, 10);
print ("Return value is $retval\n" );
"**p5_24.pl**"
print "Hello Perl5_24\n";
our $OUTFILE = "Hello test";
sub Mul($$)
{
    my ($a, $b) = @_;
    my $c = $a * $b;
    return ($c);
}
I have written this sample program to illustrate calling the Perl 5.24 version from the Perl 5.6 script. During execution I didn't get the expected output. How do I get the "return $c" value and the "our $OUTFILE" value of p5_24.pl in the p5_6.pl script?
Note: The above is the sample program based on this I will modify the actual program using serialized data.
Place the code for the function that needs v5.24 in a wrapper script, written just so that it runs that function (and prints its result). Actually, I'd recommend writing a module with that function and then loading that module in the wrapper script.
Then run that script under the wanted (5.24) interpreter, by invoking it via its full path. (You may need to be careful to make sure that all libraries and environment are right.)   Do this in a way that allows you to pick up its output. That can be anything from backticks (qx) to pipe-open or, better, to good modules. There is a range of modules for this, like IPC::System::Simple, Capture::Tiny, IPC::Run3, or IPC::Run. Which to use would depend on how much you need out of that call.
You can't directly call a function that lives in a program run by a different interpreter; all you can do is have it somehow run under that other program.
Also, variables (like $OUTFILE) defined in one program cannot be seen in another one. You can print them from the v5.24 program, along with that function result, and then parse that whole output in the v5.6 program. Then the two programs would need a little "protocol" -- to either obey an order in which things are printed, or to have prints labeled in some way.
Much better, write a module with functions and variables that need be shared. Then the v5.24 program can load the module and import the function it needs and run it, while the v5.6 program can load the same module but only to pick up that variable (and also run the v5.24 program).
Here is a sketch of all this. The package file SharedBetweenPerls.pm
package SharedBetweenPerls;
use warnings;
use strict;
use Exporter qw(import);
our @EXPORT_OK = qw(Mul export_vars);
my $OUTFILE = 'test_filename';
sub Mul { return $_[0] * $_[1] }
sub export_vars { return $OUTFILE }
1;
and then the v5.24 program (used below as program_for_5.24.pl) can do
use warnings;
use strict;
# Require this to be run by at least v5.24.0
use v5.24;
# Add path to where the module is, relative to where this script is
# In our demo it's the script's directory ($RealBin)
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(Mul);
my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
while the v5.6 program can do
use warnings;
use strict;
use feature 'say';
use FindBin qw($RealBin);
use lib $RealBin;
use SharedBetweenPerls qw(export_vars);
my $outfile = export_vars(); #--> 'test_filename'
# Replace "path-to-perl..." with an actual path to a perl
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10); #--> 250
say "Got variable: $outfile, and return from function: $from_5_24";
where $outfile has the string test_filename while the $from_5_24 variable is 250.†
This is tested to work as it stands if both programs, and the module, are in the same directory, with names as in this example. (And with path-to-perl-5.24 replaced with the actual path to your v5.24 executable.) If they are at different places you need to adjust paths, probably the package name and the use lib line. See lib pragma.
Please note that there are better ways to run an external program --- see the recommended modules above. All this is a crude demo since many details depend on what exactly you do.
Finally, the programs can also connect via a socket and exchange all they need but that is a bit more complex and may not be needed.
† The question's been edited, and we now have D:\Perl\bin\perl for path-to-perl-5.24 and D:\sample_program\p5.24.pl for program_for_5.24.
Note that with such a location of the p5.24.pl program you'd have to come up with a suitable location for the module and then its name would need to have (a part of) that path in it and to be loaded with such name. See for example this post.
A crude demo without a module (originally posted)
As a very crude sketch, in your program that runs under v5.6 you could do
my $from_5_24 = qx(path-to-perl-5.24 program_for_5.24.pl 25 10);
where the program_for_5.24.pl then could be something like
use warnings;
use strict;

sub Mul { return $_[0] * $_[1] }

my ($v1, $v2) = @ARGV;
print Mul($v1, $v2);
and the variable $from_5_24 ends up being 250 in my test.
You cannot directly call a Perl function running with another Perl version. You would need to create a program which explicitly invokes the function. The input and output need to be explicitly serialized in order to be transported between these two programs.
Serializing could be done with Data::Dumper, Storable, or similar. If lower performance is acceptable you could invoke the program which provides the function with system and share the serialized data with temporary files or pipes. Or you could create some client-server architecture and share the serialized data with sockets. The latter is faster since it skips the repeated start and teardown of the other process but instead keeps it running.

How can I automatically syntax check a powershell script file?

I want to write a unit-test for some code which generates a powershell script and then check that the script has valid syntax.
What's a good way to do this without actually executing the script?
A .NET code solution is ideal, but a command line solution that I could use by launching an external process would be good enough.
I stumbled onto Get-Command -syntax 'script.ps1' and found it concise and useful.
ETA from the comment below: This gives a detailed syntax error report, if any; otherwise it shows the calling syntax (parameter list) of the script.
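For example:

Get-Command -Syntax .\script.ps1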
You could run your code through the Parser and observe if it raises any errors:
# Empty collection for errors
$Errors = @()

# Define input script
$inputScript = 'Do-Something -Param 1,2,3,'

[void][System.Management.Automation.Language.Parser]::ParseInput($inputScript, [ref]$null, [ref]$Errors)

if ($Errors.Count -gt 0) {
    Write-Warning 'Errors found'
}
This could easily be turned into a simple function:
function Test-Syntax
{
    [CmdletBinding(DefaultParameterSetName='File')]
    param(
        [Parameter(Mandatory=$true, ParameterSetName='File', Position = 0)]
        [string]$Path,

        [Parameter(Mandatory=$true, ParameterSetName='String', Position = 0)]
        [string]$Code
    )

    $Errors = @()
    if ($PSCmdlet.ParameterSetName -eq 'String') {
        [void][System.Management.Automation.Language.Parser]::ParseInput($Code, [ref]$null, [ref]$Errors)
    } else {
        [void][System.Management.Automation.Language.Parser]::ParseFile($Path, [ref]$null, [ref]$Errors)
    }

    return [bool]($Errors.Count -lt 1)
}
Then use like:
if (Test-Syntax C:\path\to\script.ps1) {
    Write-Host 'Script looks good!'
}
PSScriptAnalyzer is a good place to start for static analysis of your code.
PSScriptAnalyzer provides script analysis and checks for potential
code defects in the scripts by applying a group of built-in or
customized rules on the scripts being analyzed.
It also integrates with Visual Studio Code.
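For example, assuming the module is installed (Install-Module PSScriptAnalyzer):

Invoke-ScriptAnalyzer -Path C:\path\to\script.ps1

Each finding is reported with a rule name, severity, and line number, which makes it easy to fail a unit test on any result.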
There are a number of strategies for mocking PowerShell as part of unit tests, and also have a look at Pester.
The Scripting Guy's Unit Testing PowerShell Code With Pester
PowerShellMagazine's Get Started With Pester (PowerShell unit testing framework)
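A minimal Pester sketch, assuming the Test-Syntax function defined above has been dot-sourced:

Describe 'Generated script' {
    It 'has valid PowerShell syntax' {
        Test-Syntax -Path 'C:\path\to\script.ps1' | Should -Be $true
    }
}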

how to track all function calls of some modules?

I'd like to have some usage statistics for a bunch of my modules.
It would be handy if I could run code whenever a function is called from a set of modules. Is it doable? Does PowerShell generate internal events we can hook into? I cannot find any guidance yet.
It's not completely clear to me whether you're more interested in logging events or executing code (hooking).
Logging
There are two places in the event log where PowerShell writes logs:
Applications and Services > Windows PowerShell
Applications and Services > Microsoft > Windows > PowerShell
On a per-module level, you can enable the LogPipelineExecutionDetails property. To do it on load:
$mod = Import-Module ActiveDirectory -PassThru  # -PassThru is needed so the module object is returned
$mod.LogPipelineExecutionDetails = $true
Or for an already loaded module:
$mod = Get-Module ActiveDirectory
$mod.LogPipelineExecutionDetails = $true
After that you check the first of the event log locations I listed (Windows PowerShell) and you'll see logs that show the calls to various cmdlets with the bound parameters.
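One way to pull those entries back out of the log (in the classic Windows PowerShell log, event ID 800 carries the pipeline execution details):

Get-WinEvent -LogName 'Windows PowerShell' -MaxEvents 20 |
    Where-Object Id -eq 800 |
    Select-Object TimeCreated, Message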
You can also enable this via Group Policy as a Computer or User setting:
Administrative Templates > Windows Components > Windows PowerShell > Turn On Module Logging
You can specify the module(s) you want to enable logging for.
In PowerShell v5, there will be even more detailed logging available (see the link).
Source: you can see more detailed information about the logging settings (current and upcoming) on Boe Prox's blog: More New Stuff in PowerShell V5: Extra PowerShell Auditing
Hooking
As far as I know there is no direct way to hook calls in an existing module, but I have a crappy workaround.
You can effectively override existing cmdlets/functions by creating functions or aliases with the same name as the original.
Using this method, you could create wrappers around the specific functions you want to track. Consider something like this:
# Override Get-Process: aliases take precedence over functions and cmdlets,
# so pointing an alias named Get-Process at this wrapper hooks all calls.
function Track-GetProcess {
    [CmdletBinding()]
    param(
        # All the parameters that the original function takes
    )

    # Run pre-execution hook here
    & { "before" }

    # Rebuild the effective parameters for splatting
    $params = @{}
    foreach ($h in $MyInvocation.MyCommand.Parameters.GetEnumerator()) {
        try {
            $key = $h.Key
            $val = Get-Variable -Name $key -ErrorAction Stop | Select-Object -ExpandProperty Value -ErrorAction Stop
            if (([String]::IsNullOrEmpty($val) -and (!$PSBoundParameters.ContainsKey($key)))) {
                throw "A blank value that wasn't supplied by the user."
            }
            Write-Verbose "$key => '$val'"
            $params[$key] = $val
        } catch {}
    }

    # Call the original with splatting, module-qualified so the alias doesn't recurse
    Microsoft.PowerShell.Management\Get-Process @params

    # Run post-execution hook here
    & { "after" }
}

Set-Alias -Name Get-Process -Value Track-GetProcess
The middle section uses splatting: it collects the given parameters and passes them on to the real cmdlet.
The hardest part is manually recreating the parameter block. There are ways you could likely do that programmatically if you wanted to quickly run something to hook any function, but that's a bit beyond the scope of this answer. If you wanted to go that route, have a look at some of the code in this New-MofFile.ps1 function, which parses PowerShell code using PowerShell's own parser.

The result of the same script is displayed in two formats when called using powershell.invoke and pipeline.invoke

I am calling following script to display the local user accounts in a machine :
$adsi = [ADSI]'WinNT://localhost';
$adsi.Children | where { $_.SchemaClassName -eq 'user' } | Select-Object @{n='UserName'; e={$_.Name}};
When the above script is executed using powershell.invoke, the result is
@{UserName=account17}
When the same script is executed using pipeline.invoke, the result is :
UserName
--------
account17
Why is there a difference in the output for the same script when invoked using powershell and pipeline?
Not sure, but the powershell.invoke output looks like the object's string representation (as Write-Host would render it), while the pipeline.invoke output looks like formatted Write-Output, even though both should return a PSObject.
More code would be helpful
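One plausible explanation is the difference between an object's string representation and PowerShell's formatting system. A quick sketch to compare the two renderings:

$result = [pscustomobject]@{ UserName = 'account17' }
"$result"              # stringified: @{UserName=account17}
$result | Out-String   # rendered by the formatter, like the pipeline.invoke output

So powershell.invoke's result was likely stringified somewhere along the way, while pipeline.invoke's result went through the formatters.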
