I'm trying to understand how .GetNewClosure() works within the context of a script cmdlet in PowerShell 2.
In essence I have a function that returns an object like so:
function Get-AnObject {
    [CmdletBinding()]
    param(
        [Parameter(....)]
        [String[]]$Id
        ..
        [ValidateSet('Option1','Option2')]
        [String[]]$Options
    )
    ...
    $T = New-Object PSCustomObject -Property @{ ..... }
    $T | Add-Member -MemberType ScriptProperty -Name ExpensiveScriptProperty -Value {
        $this | Get-ExpensiveStuff
    }.GetNewClosure()
    ..
}
Provided I do not include the ValidateSet attribute, the closure appears to work fine. If it is included, however, GetNewClosure() fails with the following error:
Exception calling "GetNewClosure" with "0" argument(s): "Attribute cannot be added because it would cause the variable Options with value to become invalid."
Presumably the closure is trying to capture the context of the call to the cmdlet. Since the parameter "Options" is not bound at all, this does not interact nicely with the parameter validation.
I imagine it's possible to avoid this by placing validation as code within the body of the Cmdlet instead of making use of the [Validate*()] decorators -- but this seems nasty and quite obscure. Is there a way of fusing these two ideas?
The "Attribute cannot be added" message is (or was) a PowerShell bug, I've submitted it to Microsoft with this bug report. That particular issue seems to have been fixed, (perhaps around V5.1. but anyone interested in Powershell Closures may still find info below interesting.
There is a workaround which works in earlier versions, but first here's a simplified repro case that produces the same error:
function Test-ClosureWithValidation {
    [CmdletBinding()]
    param(
        [Parameter()]
        [ValidateSet('Option1','Option2')]
        [String[]]$Options
    )
    [scriptblock] $closure = {"OK"}.GetNewClosure();
    $closure.Invoke()
}
Test-ClosureWithValidation -Options Option1
The workaround depends on the fact that GetNewClosure() works by iterating over the local variables in the calling script's context and binding them into the script block's context. The bug occurs because it's copying the $Options variable, including the validation attribute. You can work around the bug by creating a new context with only the local variables you need. In the simple repro above, it is a one-line workaround:
[scriptblock] $closure = &{ {"OK"}.GetNewClosure();}
The line above now creates a scope with no local variables. That may be too simple for your case; if you need some values from the outer scope, you can just copy them into local variables in the new scope, e.g.:
[scriptblock] $closure = & {
    $options = $options;
    {"OK $options"}.GetNewClosure();
}
Note that the second line above creates a new $options variable, assigning it the value of the outer variable; the attributes don't propagate to the copy.
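To see that effect in isolation, here is a minimal sketch (the function and values are illustrative, not from the original question): reassigning the validated parameter would fail, while a plain copy of it can be reassigned freely because only the value is copied.
function Show-AttributeDrop {
    param(
        [ValidateSet('Option1','Option2')]
        [string]$Options
    )
    $copy = $Options          # a new variable; no ValidateSet attribute attached
    $copy = 'SomethingElse'   # succeeds
    # $Options = 'SomethingElse' would throw: the attribute stays on the parameter variable
    "copy is now '$copy'"
}
Show-AttributeDrop -Options Option2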
Finally, I'm not sure why you need to call GetNewClosure at all in your example. The variable $this isn't a normal local variable; it will be available in your script property whether or not you create a closure. Example:
function Test-ScriptPropertyWithoutClosure {
    [CmdletBinding()]
    param(
        [Parameter()]
        [ValidateSet('Option1','Option2')]
        [String[]]$Options
    )
    [pscustomobject]@{ Timestamp = Get-Date } |
        Add-Member ScriptProperty ExpensiveScriptProperty {
            $this | Get-Member -MemberType Properties | % Name
        } -PassThru
}
Test-ScriptPropertyWithoutClosure -Options Option1 | fl
I believe this might work:
function Get-AnObject {
    [CmdletBinding()]
    param(
        [Parameter(....)]
        [String[]]$Id
        ..
        [ValidateSet('Option1','Option2')]
        [String[]]$Options
    )
    ...
    $sb = [scriptblock]::Create('$this | Get-ExpensiveStuff')
    $T = New-Object PSCustomObject -Property @{ ..... }
    $T | Add-Member -MemberType ScriptProperty -Name ExpensiveScriptProperty -Value $sb
    ..
}
That delays creation of the script block until run time.
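As a quick illustration of that point (a sketch with made-up object and property names): a block built with [scriptblock]::Create() carries no captured variables, and $this binds to the owning object only when the property is read.
$obj = [pscustomobject]@{ Name = 'Demo' }
$sb  = [scriptblock]::Create('$this.Name.ToUpper()')
$obj | Add-Member -MemberType ScriptProperty -Name LoudName -Value $sb
$obj.LoudName   # returns 'DEMO'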
We are working on a PowerShell script that, among other things, performs a job import of multiple computers via a REST API. The normal job import works flawlessly and gets an XML file with all necessary information passed as a parameter.
Now we want to parallelize this job import, so that several of these imports can take place at the same time to reduce the overall time when importing a large number of computers.
For this purpose, we use a runspace pool and pass a worker - which contains the code for the job import - as well as all necessary parameters to the respective PowerShell instance. Unfortunately, this doesn't seem to work: when measuring the import time, we couldn't see any speedup from the parallelization. The measured time is always about the same as if we performed the job import sequentially, i.e. without parallelization.
Here is the relevant code snippet:
function changeApplicationSequenceFromComputer {
    param (
        [Parameter(Mandatory=$True)]
        [string]$tenant = $(throw "Parameter tenant is missing"),
        [Parameter(Mandatory=$True)]
        [string]$newSequenceName = $(throw "Parameter newSequenceName is missing")
    )
    # Other things before parallelization

    # Passing all local functions and imported modules in runspace pool to call it from worker
    $InitialSessionState = [initialsessionstate]::CreateDefault()
    Get-ChildItem function:/ | Where-Object Source -like "" | ForEach-Object {
        $functionDefinition = Get-Content "Function:\$($_.Name)"
        $sessionStateFunction = New-Object System.Management.Automation.Runspaces.SessionStateFunctionEntry -ArgumentList $_.Name, $functionDefinition
        $InitialSessionState.Commands.Add($sessionStateFunction)
    }

    # Using a synchronized Hashtable to pass necessary global variables for logging purpose
    $Configuration = [hashtable]::Synchronized(@{})
    $Configuration.ScriptPath = $global:ScriptPath
    $Configuration.LogPath = $global:LogPath
    $Configuration.LogFileName = $global:LogFileName

    $InitialSessionState.ImportPSModule(@("$global:ScriptPath\lib\MigrationFuncLib.psm1"))

    # Worker for parallelized job-import in for-each loop below
    $Worker = {
        param($currentComputerObjectTenant, $currentComputerObjectDisplayName, $newSequenceName, $Credentials, $Configuration)
        $global:ScriptPath = $Configuration.ScriptPath
        $global:LogPath = $Configuration.LogPath
        $global:LogFileName = $Configuration.LogFileName
        try {
            # Function handleComputerSoftwareSequencesXml creates the xml that has to be uploaded for each computer
            # We already tried to create the xml outside of the worker and pass it as an argument, so that the worker just imports it. Same result.
            $importXml = handleComputerSoftwareSequencesXml -tenant $currentComputerObjectTenant -computerName $currentComputerObjectDisplayName -newSequence $newSequenceName -Credentials $Credentials
            $Result = job-import $importXml -Server localhost -Credentials $Credentials
            # sleep 1 just for testing purpose
            Log "Result from Worker: $Result"
        } catch {
            $Result = $_.Exception.Message
        }
    }

    # Preparatory work for parallelization
    $cred = $Credentials
    # Cast to [int]: $env:NUMBER_OF_PROCESSORS is a string, and string * float would repeat the string rather than multiply
    $MaxRunspacesProcessors = [int]$env:NUMBER_OF_PROCESSORS * $multiplier # we tried it with just the number of processors as well as with a multiplied version
    Log "Number of Processors: $MaxRunspacesProcessors"
    $RunspacePool = [runspacefactory]::CreateRunspacePool(1, $MaxRunspacesProcessors, $InitialSessionState, $Host)
    $RunspacePool.Open()
    $Jobs = New-Object System.Collections.ArrayList

    foreach ($computer in $computerWithOldApplicationSequence) {
        # Different things to do before parallelization, i.e. define some variables

        # Parallelized job-import
        Log "Creating or reusing runspace for computer '$currentComputerObjectDisplayName'"
        $PowerShell = [powershell]::Create()
        $PowerShell.RunspacePool = $RunspacePool

        Log "Before worker"
        $PowerShell.AddScript($Worker).AddArgument($currentComputerObjectTenant).AddArgument($currentComputerObjectDisplayName).AddArgument($newSequenceName).AddArgument($cred).AddArgument($Configuration) | Out-Null
        Log "After worker"

        $JobObj = New-Object -TypeName PSObject -Property @{
            Runspace = $PowerShell.BeginInvoke()
            PowerShell = $PowerShell
        }
        $Jobs.Add($JobObj) | Out-Null

        # For logging in Worker
        $JobIndex = $Jobs.IndexOf($JobObj)
        Log "$($Jobs[$JobIndex].PowerShell.EndInvoke($Jobs[$JobIndex].Runspace))"
    }

    <#
    while ($Jobs.Runspace.IsCompleted -contains $false) {
        Log "Still running..."
        Start-Sleep 1
    }
    #>

    # Closing/Disposing pool
} # End of the function
The rest of the script looks like this (simplified):
# Parameter passed when calling the script
param (
    [Parameter(Mandatory=$True)]
    [string]$newSequenceName = $(throw "Parameter target is missing"),
    [Parameter(Mandatory=$True)]
    [float]$multiplier = $(throw "Parameter multiplier is missing")
)

# 'main' block
$timeToRun = (Measure-Command {
    changeApplicationSequenceFromComputer -tenant "testTenant" -newSequenceName $newSequenceName
}).TotalSeconds
Log "Total time to run with multiplier $($multiplier) is $timeToRun"
Any ideas why the job import is evidently executed only sequentially, despite the runspace pool and the corresponding parallelization code?
We have found the error. The foreach contained the following code block:
# For logging in Worker
$JobIndex = $Jobs.IndexOf($JobObj)
Log "$($Jobs[$JobIndex].PowerShell.EndInvoke($Jobs[$JobIndex].Runspace))"
This had to be moved outside the foreach, so that the code looks like this:
function changeApplicationSequenceFromComputer {
    param (
        [Parameter(Mandatory=$True)]
        [string]$tenant = $(throw "Parameter tenant is missing"),
        [Parameter(Mandatory=$True)]
        [string]$newSequenceName = $(throw "Parameter newSequenceName is missing")
    )
        # ... Everything as before
        $Jobs.Add($JobObj) | Out-Null
    } # end of foreach

    $Results = @()
    foreach ($Job in $Jobs) {
        $Results += $Job.PowerShell.EndInvoke($Job.Runspace)
    }
So the EndInvoke() has to be called outside the foreach: it blocks until the job it waits on has finished, so calling it right after BeginInvoke() forced each import to complete before the next one was even started, which is exactly sequential execution.
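For anyone reading along, here is a minimal sketch of this start-all-then-harvest pattern (the pool size and the worker body are illustrative):
$pool = [runspacefactory]::CreateRunspacePool(1, 4)
$pool.Open()
$jobs = foreach ($n in 1..8) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    [void]$ps.AddScript({ param($i) Start-Sleep 1; "job $i done" }).AddArgument($n)
    [pscustomobject]@{ PowerShell = $ps; Handle = $ps.BeginInvoke() }   # start without waiting
}
$results = foreach ($j in $jobs) {
    $j.PowerShell.EndInvoke($j.Handle)   # safe to block now: all jobs are already running
    $j.PowerShell.Dispose()
}
$pool.Close()
$pool.Dispose()
$results   # eight one-second jobs finish in roughly two seconds, not eight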
I've spent some time trying to find a bottleneck in a PowerShell application, without ever suspecting it was just slow parameter validation. The sample code illustrates the problem:
function Test-ValidatePerformance
{
    param(
        [ValidateNotNullOrEmpty()]
        [Byte[]]
        $Data
    )
    $sw.Stop()
    Write-Host "Executing after $([Math]::Round($sw.Elapsed.TotalMilliseconds))ms"
}

function Test-NoValidatePerformance
{
    param(
        [Byte[]]
        $Data
    )
    $sw.Stop()
    Write-Host "Executing after $([Math]::Round($sw.Elapsed.TotalMilliseconds))ms"
}

$buf = [IO.File]::ReadAllBytes('C:\17MB_FILE.bin')

Write-Host "Calling with validation..."
$sw = [Diagnostics.Stopwatch]::StartNew()
Test-ValidatePerformance $buf

Write-Host "`nCalling without validation..."
$sw = [Diagnostics.Stopwatch]::StartNew()
Test-NoValidatePerformance $buf
Output:
Calling with validation...
Executing after 1981ms
Calling without validation...
Executing after 3ms
My question is: Why is [ValidateNotNullOrEmpty()] so slow considering that (as its name states) it just checks for a null or empty parameter?
When you add (most) validation attributes to a collection parameter, the validation is applied to each item in the collection, not to the collection as a whole, so here it is run against every individual byte.
mklement0 brought up an open issue on GitHub about this very thing.
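You can see the per-element behavior with any small collection parameter; a quick sketch (not the original code):
function Test-PerItem {
    param(
        [ValidateRange(1,10)]
        [int[]]$Numbers
    )
    "OK: $Numbers"
}
Test-PerItem -Numbers 1,5,10   # passes: every element is in range
Test-PerItem -Numbers 1,42     # fails: validation runs against the element 42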
The easiest way to test that it's not empty is just to make the parameter mandatory; an empty array won't be accepted then:
function Test-ValidatePerformance
{
    param(
        [Parameter(Mandatory)]
        [Byte[]]
        $Data
    )
    $sw.Stop()
    Write-Host "Executing after $([Math]::Round($sw.Elapsed.TotalMilliseconds))ms"
}
Note: as the original poster pointed out, and Patrick Meinecke confirmed in this GitHub issue, there is a bug in Windows PowerShell (fixed in PowerShell Core) regarding the performance of Mandatory parameters.
If you want the parameter to be optional, but if supplied it must not be empty, you can use [ValidateCount()] instead, which should be quick:
function Test-ValidatePerformance
{
    param(
        [ValidateCount(1,[int]::MaxValue)]
        [Byte[]]
        $Data
    )
    $sw.Stop()
    Write-Host "Executing after $([Math]::Round($sw.Elapsed.TotalMilliseconds))ms"
}
Or you can just do the check in code instead of using validation attributes.
function Test-ValidatePerformance
{
    param(
        [Byte[]]
        $Data
    )
    if (-not $Data -and $PSBoundParameters.ContainsKey('Data')) {
        throw [System.ArgumentException]'An empty array is not allowed'
    }
    $sw.Stop()
    Write-Host "Executing after $([Math]::Round($sw.Elapsed.TotalMilliseconds))ms"
}
I have a few scripts that create multiple instances of PSDrive to remote instances. I want to make certain that each instance of PSDrive created is cleaned up.
I have a PowerShell module like the following. This is a simplified version of what I actually run:
function Connect-PSDrive {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        $Root,

        [String]
        $Name = [Guid]::NewGuid().ToString(),

        [ValidateSet("Registry","Alias","Environment","FileSystem","Function","Variable","Certificate","WSMan")]
        [String]
        $PSProvider = "FileSystem",

        [Switch]
        $Persist = $false,

        [System.Management.Automation.PSCredential]
        $Credential
    )
    $parameters = @{
        Root = $Root;
        Name = $Name;
        PSProvider = $PSProvider;
        Persist = $Persist;
    }
    $drive = $script:drives | Where-Object {
        ($_.Name -eq $Name) -or ($_.Root -eq $Root)
    }
    if (!$drive) {
        if ($Credential) {
            $parameters.Add("Credential", $Credential)
        }
        $script:drives += @(New-PSDrive @parameters)
        if (Get-PSDrive | Where-Object { $_.Name -eq $Name }) {
            Write-Host "The drive '$Name' was created successfully."
        }
    }
}

function Disconnect-PSDrives {
    [CmdletBinding()]
    param ()
    $script:drives | Remove-PSDrive -Force
}
Each time I invoke the function Connect-PSDrive, I can see that a new drive is successfully created and a reference is added to $script:drives. At the end of the calling script, I have a finally block that invokes Disconnect-PSDrives and this fails with the following exception.
Remove-PSDrive : Cannot find drive. A drive with the name 'mydrive' does not exist.
At C:\git\ops\release-scripts\PSModules\PSDriveWrapper\PSDriveWrapper.psm1:132 char:22
+ $script:drives | Remove-PSDrive -Force
+ ~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (mydrive:String) [Remove-PSDrive], DriveNotFoundException
+ FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.RemovePSDriveCommand
I want to know why references to the PSDrive objects I created are available in $script:drives, and yet Remove-PSDrive fails to locate the objects.
I also want to know how I can manage these PSDrive instances without needing to return each instance to the calling script such that Disconnect-PSDrives works.
A few extra notes:
I'm creating these drives with the Persist flag as false.
Running the script multiple times fails with an error about too many connections being made to a machine. This is why I think the connections are not being cleaned up. If my assumption is wrong, please kindly explain why the connections are in fact cleaned up.
I am a little surprised that it cannot remove the drives via the object references; but I assume that your issue is with scope. PSDrives are local-scoped by default, so when your function exits, they are no longer visible. Use the -Scope parameter of New-PSDrive and you will likely be successful. (As a side note: during Disconnect-PSDrives you will likely want to clear the list, in case of multiple calls.)
That being said, you should never need to clean up PSDrives the way you are doing. The reason you are experiencing too many connections is likely, once again, a scoping issue (that is, the drives still exist but you no longer see them). Try running the script multiple times, closing PowerShell and starting a new instance each time: you will no longer see too many connections. Why? Because PowerShell cleans up all non-persistent drives at the end of your session. You do not need to clean up the drives between sessions/instances; and within a session/instance (assuming you have proper scoping) you can re-use the drives, so there is no need to create duplicates; ergo, you should never really need this functionality. That being said, I assume you might have some niche use case for it?
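To make that concrete, here is a minimal sketch of both suggestions (the rest of the module is assumed to stay as it was):
function Connect-PSDrive {
    param(
        [Parameter(Mandatory=$true)]
        $Root,
        [String]$Name = [Guid]::NewGuid().ToString()
    )
    # Global scope keeps the drive visible (and removable) after this function returns
    $script:drives += @(New-PSDrive -Name $Name -PSProvider FileSystem -Root $Root -Scope Global)
}

function Disconnect-PSDrives {
    $script:drives | Remove-PSDrive -Force
    $script:drives = @()   # clear the list so a second call doesn't hit stale entries
}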
Starting premise: very restrictive environment, Windows 7 SP1, PowerShell 3.0. Limited or no possibility of using external libraries.
I'm trying to re-write a bash tool I created previously, this time using PowerShell. In bash I implemented autocompletion to make the tool more user friendly and I want to do the same thing for the PowerShell version.
The bash version worked like this:
./launcher <Tab> => ./launcher test (or dev, prod, etc.)
./launcher test <Tab> => ./launcher test app1 (or app2, app3, etc.)
./launcher test app1 <Tab> => ./launcher test app1 command1 (or command2, command3, etc.).
As you can see, everything was dynamic. The list of environments was dynamic, the list of applications was dynamic depending on the environment selected, and the list of commands was also dynamic.
The problem is with the test → application connection. I want to show the correct application based on the environment already selected by the user.
Using PowerShell's DynamicParam I can get a dynamic list of environments based on a folder listing. I can't however (or at least I haven't found out how to) do another folder listing but this time using a variable based on the existing user selection.
Current code:
function ParameterCompletion {
    $RuntimeParameterDictionary = New-Object Management.Automation.RuntimeDefinedParameterDictionary

    # Block 1.
    $AttributeCollection = New-Object Collections.ObjectModel.Collection[System.Attribute]
    $ParameterName = "Environment1"
    $ParameterAttribute = New-Object Management.Automation.ParameterAttribute
    $ParameterAttribute.Mandatory = $true
    $ParameterAttribute.Position = 1
    $AttributeCollection.Add($ParameterAttribute)
    # End of block 1.

    $parameterValues = $(Get-ChildItem -Path ".\configurations" -Directory | Select-Object -ExpandProperty Name)
    $ValidateSetAttribute = New-Object Management.Automation.ValidateSetAttribute($parameterValues)
    $AttributeCollection.Add($ValidateSetAttribute)
    $RuntimeParameter = New-Object Management.Automation.RuntimeDefinedParameter($ParameterName, [string], $AttributeCollection)
    $RuntimeParameterDictionary.Add($ParameterName, $RuntimeParameter)

    # Block 2: same thing as in block 1, just with 2 at the end of the variable names.
    # Problem section: how can I change this line to include ".\configurations\${myVar}"?
    # And what's the magic incantation to fill $myVar with the info I need?
    $parameterValues2 = $(Get-ChildItem -Path ".\configurations" -Directory | Select-Object -ExpandProperty Name)
    $ValidateSetAttribute2 = New-Object Management.Automation.ValidateSetAttribute($parameterValues2)
    $AttributeCollection2.Add($ValidateSetAttribute2)
    $RuntimeParameter2 = New-Object Management.Automation.RuntimeDefinedParameter($ParameterName2, [string], $AttributeCollection2)
    $RuntimeParameterDictionary.Add($ParameterName2, $RuntimeParameter2)

    return $RuntimeParameterDictionary
}

function App {
    [CmdletBinding()]
    Param()
    DynamicParam {
        return ParameterCompletion "Environment1"
    }
    Begin {
        $Environment = $PsBoundParameters["Environment1"]
    }
    Process {
    }
}
I would recommend using argument completers, which are semi-exposed in PowerShell 3 and 4, and fully exposed in version 5.0 and higher. For v3 and v4, the underlying functionality is there, but you have to override the TabExpansion2 built-in function to use them. That's OK for your own session, but it's generally frowned upon to distribute tools that do that to other people's sessions (imagine if everyone tried to override that function). A PowerShell team member has a module that does this for you called TabExpansionPlusPlus. I know I said overriding TabExpansion2 was bad, but it's OK if this module does it :)
When I needed to support versions 3 and 4, I would distribute my commands in modules, and have the modules check for the existence of the 'Register-ArgumentCompleter' command, which is a cmdlet in v5+ and is a function if you have the TE++ module. If the module found it, it would register any completer(s), and if it didn't, it would notify the user that argument completion wouldn't work unless they got the TabExpansionPlusPlus module.
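The check itself is a one-liner; a sketch of what I mean (the completer body here is just a placeholder):
if (Get-Command -Name Register-ArgumentCompleter -ErrorAction SilentlyContinue) {
    Register-ArgumentCompleter -CommandName launcher -ParameterName Environment1 -ScriptBlock {
        param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)
        'dev', 'test', 'prod' | Where-Object { $_ -like "${wordToComplete}*" }
    }
}
else {
    Write-Warning "Tab completion for 'launcher' requires PowerShell v5+ or the TabExpansionPlusPlus module."
}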
Assuming you have the TE++ module or PSv5+, I think this should get you on the right track:
function launcher {
    [CmdletBinding()]
    param(
        [string] $Environment1,
        [string] $Environment2,
        [string] $Environment3
    )
    $PSBoundParameters
}

1..3 | ForEach-Object {
    Register-ArgumentCompleter -CommandName launcher -ParameterName "Environment${_}" -ScriptBlock {
        param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)

        $PathParts = $fakeBoundParameter.Keys | where { $_ -like 'Environment*' } | sort | ForEach-Object {
            $fakeBoundParameter[$_]
        }

        Get-ChildItem -Path ".\configurations\$($PathParts -join '\')" -Directory -ErrorAction SilentlyContinue | select -ExpandProperty Name | where { $_ -like "${wordToComplete}*" } | ForEach-Object {
            New-Object System.Management.Automation.CompletionResult (
                $_,
                $_,
                'ParameterValue',
                $_
            )
        }
    }
}
For this to work, your current working directory will need a 'configurations' directory in it, and you'll need at least three levels of subdirectories (reading through your example, it looked like you were going to enumerate a directory and go deeper into that structure as parameters were added). The enumerating of the directory isn't very smart right now, and you can fool it pretty easily if you just skip a parameter, e.g., launcher -Environment3 <TAB> would try to give you completions for the first subdirectory.
This works if you will always have three parameters available. If you need a variable # of parameters, you could still use completers, but it might get a little trickier.
The biggest downside would be that you'd still have to validate the users' input since completers are basically just suggestions, and users don't have to use those suggestions.
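In other words, you'd still want something like this inside the function itself (a sketch; the path layout is the same one assumed above):
function launcher {
    [CmdletBinding()]
    param([string] $Environment1)
    if ($Environment1 -and -not (Test-Path ".\configurations\$Environment1")) {
        throw "Unknown environment '$Environment1'."
    }
    # ... real work here
}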
If you want to use dynamic parameters, it gets pretty crazy. There may be a better way, but I've never been able to see the value of dynamic parameters at the commandline without using reflection, and at that point you're using functionality that could change at the next release (the members usually aren't public for a reason). It's tempting to try to use $MyInvocation inside the DynamicParam {} block, but it's not populated at the time the user is typing the command into the commandline, and it only shows one line of the command anyway without using reflection.
The below was tested on PowerShell 5.1, so I can't guarantee that any other version has these exact same class members (it's based on something I first saw Garrett Serack do). Like the previous example, it depends on a .\configurations folder in the current working directory (if there isn't one, you won't see any -Environment parameters).
function badlauncher {
    [CmdletBinding()]
    param()
    DynamicParam {
        #region Get the arguments
        # In its current form, this will ignore parameter names, e.g., '-ParameterName ParameterValue' would ignore '-ParameterName',
        # and only 'ParameterValue' would be in $UnboundArgs
        $BindingFlags = [System.Reflection.BindingFlags] 'Instance, NonPublic, Public'
        $Context = $PSCmdlet.GetType().GetProperty('Context', $BindingFlags).GetValue($PSCmdlet)
        $CurrentCommandProcessor = $Context.GetType().GetProperty('CurrentCommandProcessor', $BindingFlags).GetValue($Context)
        $ParameterBinder = $CurrentCommandProcessor.GetType().GetProperty('CmdletParameterBinderController', $BindingFlags).GetValue($CurrentCommandProcessor)
        $UnboundArgs = @($ParameterBinder.GetType().GetProperty('UnboundArguments', $BindingFlags).GetValue($ParameterBinder) | where { $_ } | ForEach-Object {
            try {
                if (-not $_.GetType().GetProperty('ParameterNameSpecified', $BindingFlags).GetValue($_)) {
                    $_.GetType().GetProperty('ArgumentValue', $BindingFlags).GetValue($_)
                }
            }
            catch {
                # Don't do anything??
            }
        })
        #endregion

        $ParamDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary

        # Create an Environment parameter for each argument specified, plus one extra as long as there
        # are valid subfolders under .\configurations
        for ($i = 0; $i -le $UnboundArgs.Count; $i++) {
            $ParameterName = "Environment$($i + 1)"

            $ParamAttributes = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
            $ParamAttributes.Add((New-Object Parameter))
            $ParamAttributes[0].Position = $i

            # Build the path that will be enumerated based on previous arguments
            $PathSb = New-Object System.Text.StringBuilder
            $PathSb.Append('.\configurations\') | Out-Null
            for ($j = 0; $j -lt $i; $j++) {
                $PathSb.AppendFormat('{0}\', $UnboundArgs[$j]) | Out-Null
            }

            $ValidParameterValues = Get-ChildItem -Path $PathSb.ToString() -Directory -ErrorAction SilentlyContinue | Select-Object -ExpandProperty Name
            if ($ValidParameterValues) {
                $ParamAttributes.Add((New-Object ValidateSet $ValidParameterValues))
                $ParamDictionary[$ParameterName] = New-Object System.Management.Automation.RuntimeDefinedParameter (
                    $ParameterName,
                    [string[]],
                    $ParamAttributes
                )
            }
        }

        return $ParamDictionary
    }

    process {
        $PSBoundParameters
    }
}
The cool thing about this one is that it can keep going as long as there are folders, and it automatically does parameter validation. Of course, you're breaking the laws of .NET by using reflection to get at all those private members, so I would consider this a terrible and fragile solution, no matter how fun it was to come up with.
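For completeness, a hypothetical invocation (the folder names are assumptions about your .\configurations layout):
badlauncher -Environment1 dev -Environment2 app1   # tab completion and ValidateSet both work per level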
I really like the way that ValidateSet works. It proposes the options as a list while you type your Cmdlet in the PowerShell ISE.
I would like to know if it's possible to retrieve values from a CSV file (Import-Csv) and use them in the Param block, so they become available in the drop-down box of the PowerShell ISE when constructing the cmdlet arguments? A bit in the same way that $Type works now, but with values from the import file.
Function New-Name {
    Param (
        [parameter(Position=0, Mandatory=$true)]
        [ValidateSet('Mailbox','Distribution','Folder','Role')]
        [String]$Type,

        [parameter(Position=1, Mandatory=$true)]
        [String]$Name
    )
    Process { 'Foo' }
}
Here is something you can start with:
function New-Name {
    param (
        [parameter(Position=0, Mandatory=$true)]
        [String]$Name
    )
    dynamicparam {
        $attributes = New-Object System.Management.Automation.ParameterAttribute
        $attributes.ParameterSetName = "__AllParameterSets"
        $attributes.Mandatory = $true
        $attributeCollection = New-Object -Type System.Collections.ObjectModel.Collection[System.Attribute]
        $attributeCollection.Add($attributes)

        $values = @('MailBox', 'Tralala', 'Trilili') # your Import-Csv here
        $ValidateSet = New-Object System.Management.Automation.ValidateSetAttribute($values)
        $attributeCollection.Add($ValidateSet)

        $dynParam1 = New-Object -Type System.Management.Automation.RuntimeDefinedParameter("Type", [string], $attributeCollection)
        $paramDictionary = New-Object -Type System.Management.Automation.RuntimeDefinedParameterDictionary
        $paramDictionary.Add("Type", $dynParam1)
        return $paramDictionary
    }
    process { 'Foo' }
}
Credits where credits are due, this largely comes from the following article from the Scripting Guy.
The code isn't pretty, but it does what you want.
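The Import-Csv swap flagged in the comment above would look something like this (the file name and 'Type' column are assumptions about your CSV layout):
$values = Import-Csv -Path '.\Types.csv' | Select-Object -ExpandProperty Type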
I know this post is quite old, but with PowerShell 6.2 and above you can use a .NET class at the beginning of the script and have the set controlled by a CSV, for example.
This article here does an excellent job of explaining it:
https://adamtheautomator.com/powershell-validateset/
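A minimal sketch of the class-based approach that article covers, using the IValidateSetValuesGenerator interface from PowerShell 6.2+ (the CSV path and 'Type' column are assumptions):
class TypeValues : System.Management.Automation.IValidateSetValuesGenerator {
    [string[]] GetValidValues() {
        return [string[]](Import-Csv -Path '.\Types.csv').Type
    }
}

function New-Name {
    param(
        [Parameter(Mandatory)]
        [ValidateSet([TypeValues])]
        [string]$Type,

        [Parameter(Mandatory)]
        [string]$Name
    )
    process { 'Foo' }
}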
I prefer the TabExpansion++ module; though this doesn't technically validate, it has some nice functionality...
Here's an example of overloading the msbuild command to add some IntelliSense for project targets:
Register-ArgumentCompleter -CommandName "msbuild" -ParameterName "target" -ScriptBlock {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)

    $projectName = $fakeBoundParameter['project']
    $projectFile = Join-Path (Get-Location) $projectName
    $projectXml = [xml](Get-Content $projectFile)
    $targets = $projectXml.Project.Target | Where-Object { $_.Name.ToString().StartsWith($wordToComplete) }

    foreach ($target in $targets)
    {
        New-CompletionResult -CompletionText "$($target.Name)"
    }
}