Change value in app.config within TeamCity

Within the Visual Studio solution that contains all our unit tests we have some text files. These text files are checked against results generated by our unit tests.
In order to load the files we have an app.config with:
<appSettings>
  <add key="BaseTestDataPath" value="D:\MyPath\MySolution\" />
</appSettings>
Within TeamCity on each build run I want to:
Change BaseTestDataPath to the specific work path of the agent, e.g.
C:\TeamCity\buildAgent\work\1ca1a73fe3dadf57\MySolution\
I know the physical layout within the agent work folder so what I need to know is:
How do I change the app.config file prior to the NUnit run against the solution in my TeamCity build steps?

There are a couple of approaches to this.
Just choose one of the following scripts, add it to your source control, and set up a PowerShell build runner in your build configuration to run the script with the required parameters before you run the NUnit step. If you choose option two, you'll also need to consider the transform DLL.
AppSettingReplace.ps1
If you only want to change a single value, you can achieve this with some simple PowerShell that loads the config file into an XML document, iterates the appSettings entries, and changes the one that matches.
# -----------------------------------------------
# Config Transform
# -----------------------------------------------
#
# Ver Who When What
# 1.0 Evolve Software Ltd 13-05-16 Initial Version
# Script Input Parameters
param (
    [ValidateNotNullOrEmpty()]
    [string] $ConfigurationFile = $(throw "-ConfigurationFile is mandatory, please provide a value."),
    [ValidateNotNullOrEmpty()]
    [string] $ApplicationSetting = $(throw "-ApplicationSetting is mandatory, please provide a value."),
    [ValidateNotNullOrEmpty()]
    [string] $ApplicationSettingValue = $(throw "-ApplicationSettingValue is mandatory, please provide a value.")
)

function Main()
{
    $CurrentScriptVersion = "1.0"
    Write-Host "================== Config Transform - Version"$CurrentScriptVersion": START =================="

    # Log input variables passed in
    Log-Variables
    Write-Host

    try {
        $xml = [xml](get-content($ConfigurationFile))
        $conf = $xml.configuration
        $conf.appSettings.add | foreach { if ($_.key -eq $ApplicationSetting) { $_.value = $ApplicationSettingValue } }
        $xml.Save($ConfigurationFile)
    }
    catch [System.Exception] {
        Write-Output $_
        Exit 1
    }

    Write-Host "================== Config Transform - Version"$CurrentScriptVersion": END =================="
}

function Log-Variables
{
    Write-Host "ConfigurationFile: " $ConfigurationFile
    Write-Host "ApplicationSetting: " $ApplicationSetting
    Write-Host "ApplicationSettingValue: " $ApplicationSettingValue
    Write-Host "Computername:" (gc env:computername)
}

Main
Usage
AppSettingReplace.ps1 "D:\MyPath\app.config" "BaseTestDataPath" "%teamcity.build.workingDir%"
XdtConfigTransform.ps1
The alternative approach is to provide full config transformation support using XDT. This does require Microsoft.Web.XmlTransform.dll to end up on the server somehow (I normally put it into source control).
The following script will transform one config file with another one.
# -----------------------------------------------
# Xdt Config Transform
# -----------------------------------------------
#
# Ver Who When What
# 1.0 Evolve Software Ltd 14-05-16 Initial Version
# Script Input Parameters
param (
    [ValidateNotNullOrEmpty()]
    [string] $ConfigurationFile = $(throw "-ConfigurationFile is mandatory, please provide a value."),
    [ValidateNotNullOrEmpty()]
    [string] $TransformFile = $(throw "-TransformFile is mandatory, please provide a value."),
    [ValidateNotNullOrEmpty()]
    [string] $LibraryPath = $(throw "-LibraryPath is mandatory, please provide a value.")
)

function Main()
{
    $CurrentScriptVersion = "1.0"
    Write-Host "================== Xdt Config Transform - Version"$CurrentScriptVersion": START =================="

    # Log input variables passed in
    Log-Variables
    Write-Host

    if (!$ConfigurationFile -or !(Test-Path -path $ConfigurationFile -PathType Leaf)) {
        throw "File not found. $ConfigurationFile";
        Exit 1
    }
    if (!$TransformFile -or !(Test-Path -path $TransformFile -PathType Leaf)) {
        throw "File not found. $TransformFile";
        Exit 1
    }

    try {
        Add-Type -LiteralPath "$LibraryPath\Microsoft.Web.XmlTransform.dll"

        $xml = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument;
        $xml.PreserveWhitespace = $true
        $xml.Load($ConfigurationFile);

        $xmlTransform = New-Object Microsoft.Web.XmlTransform.XmlTransformation($TransformFile);
        if ($xmlTransform.Apply($xml) -eq $false)
        {
            throw "Transformation failed."
        }
        $xml.Save($ConfigurationFile)
    }
    catch [System.Exception] {
        Write-Output $_
        Exit 1
    }

    Write-Host "================== Xdt Config Transform - Version"$CurrentScriptVersion": END =================="
}

function Log-Variables
{
    Write-Host "ConfigurationFile: " $ConfigurationFile
    Write-Host "TransformFile: " $TransformFile
    Write-Host "LibraryPath: " $LibraryPath
    Write-Host "Computername:" (gc env:computername)
}

Main
Usage
XdtConfigTransform.ps1 "D:\MyPath\app.config" "D:\MyPath\app.transform.config" "%teamcity.build.workingDir%\Library"
Note: The last parameter is the path to the directory that contains Microsoft.Web.XmlTransform.dll
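For reference, a transform file for this scenario might look something like the following sketch (the value shown is illustrative); it uses the standard xdt:Transform and xdt:Locator attributes to overwrite the matching appSettings entry:
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="BaseTestDataPath"
         value="C:\TeamCity\buildAgent\work\1ca1a73fe3dadf57\MySolution\"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>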
Github Repository - teamcity-config-transform
Hope this helps

You can use the File Content Replacer build feature to perform regular expression replacements in text files before a build. After the build, it restores the file content to its original state.

Optionally you can use the NuGet package SlowCheetah, which adds transformation support for app.config.
The transform runs at build time, so there is no need for extra scripts or DLLs.

Related

No parallelization despite the use of a runspace pool with PowerShell 5.1

We are working on a PowerShell script that, among other things, performs a job import of multiple computers via a REST API. The normal job import works flawlessly and gets an XML with all the necessary information passed as a parameter.
Now we want to parallelize this job import, so that several of these imports can take place at the same time to reduce the time of the import with a high number of computers.
For this purpose, we use a runspace pool and pass a worker - which contains the code for the job import - as well as all necessary parameters to the respective PowerShell instance. Unfortunately, this doesn't seem to work: even after measuring the import time, we couldn't see any speedup from parallelizing the job import. The measured time is always about the same as if we performed the job import sequentially, i.e. without parallelization.
Here is the relevant code snippet:
function changeApplicationSequenceFromComputer {
param (
[Parameter(Mandatory=$True )]
[string]$tenant = $(throw "Parameter tenant is missing"),
[Parameter(Mandatory=$True)]
[string]$newSequenceName = $(throw "Parameter newSequenceName is missing")
)
# Other things before parallelization
# Passing all local functions and imported modules in runspace pool to call it from worker
$InitialSessionState = [initialsessionstate]::CreateDefault()
Get-ChildItem function:/ | Where-Object Source -like "" | ForEach-Object {
$functionDefinition = Get-Content "Function:\$($_.Name)"
$sessionStateFunction = New-Object System.Management.Automation.Runspaces.SessionStateFunctionEntry -ArgumentList $_.Name, $functionDefinition
$InitialSessionState.Commands.Add($sessionStateFunction)
}
# Using a synchronized Hashtable to pass necessary global variables for logging purpose
$Configuration = [hashtable]::Synchronized(@{})
$Configuration.ScriptPath = $global:ScriptPath
$Configuration.LogPath = $global:LogPath
$Configuration.LogFileName = $global:LogFileName
$InitialSessionState.ImportPSModule(@("$global:ScriptPath\lib\MigrationFuncLib.psm1"))
# Worker for parallelized job-import in for-each loop below
$Worker = {
param($currentComputerObjectTenant, $currentComputerObjectDisplayName, $newSequenceName, $Credentials, $Configuration)
$global:ScriptPath = $Configuration.ScriptPath
$global:LogPath = $Configuration.LogPath
$global:LogFileName = $Configuration.LogFileName
try {
# Function handleComputerSoftwareSequencesXml creates the xml that has to be uploaded for each computer
# We already tried to create the xml outside of the worker and pass it as an argument, so that the worker just imports it. Same result.
$importXml = handleComputerSoftwareSequencesXml -tenant $currentComputerObjectTenant -computerName $currentComputerObjectDisplayName -newSequence $newSequenceName -Credentials $Credentials
$Result = job-import $importXml -Server localhost -Credentials $Credentials
# sleep 1 just for testing purpose
Log "Result from Worker: $Result"
} catch {
$Result = $_.Exception.Message
}
}
# Preparatory work for parallelization
$cred = $Credentials
$MaxRunspacesProcessors = ($env:NUMBER_OF_PROCESSORS) * $multiplier # we tried it with just the number of processors as well as with a multiplied version.
Log "Number of Processors: $MaxRunspacesProcessors"
$RunspacePool = [runspacefactory]::CreateRunspacePool(1, $MaxRunspacesProcessors, $InitialSessionState, $Host)
$RunspacePool.Open()
$Jobs = New-Object System.Collections.ArrayList
foreach ($computer in $computerWithOldApplicationSequence) {
# Different things to do before parallelization, i.e. define some variables
# Parallelized job-import
Log "Creating or reusing runspace for computer '$currentComputerObjectDisplayName'"
$PowerShell = [powershell]::Create()
$PowerShell.RunspacePool = $RunspacePool
Log "Before worker"
$PowerShell.AddScript($Worker).AddArgument($currentComputerObjectTenant).AddArgument($currentComputerObjectDisplayName).AddArgument($newSequenceName).AddArgument($cred).AddArgument($Configuration) | Out-Null
Log "After worker"
$JobObj = New-Object -TypeName PSObject -Property @{
Runspace = $PowerShell.BeginInvoke()
PowerShell = $PowerShell
}
$Jobs.Add($JobObj) | Out-Null
# For logging in Worker
$JobIndex = $Jobs.IndexOf($JobObj)
Log "$($Jobs[$JobIndex].PowerShell.EndInvoke($Jobs[$JobIndex].Runspace))"
}
<#
while ($Jobs.Runspace.IsCompleted -contains $false) {
Log "Still running..."
Start-Sleep 1
}
#>
# Closing/Disposing pool
} # End of the function
The rest of the script looks like this (simplified):
# Parameter passed when calling the script
param (
[Parameter(Mandatory=$True)]
[string]$newSequenceName = $(throw "Parameter target is missing"),
[Parameter(Mandatory=$True)]
[float]$multiplier= $(throw "Parameter multiplier is missing")
)
# 'main' block
$timeToRun = (Measure-Command{
changeApplicationSequenceFromComputer -tenant "testTenant" -newSequenceName $newSequenceName
}).TotalSeconds
Log "Total time to run with multiplier $($multiplier) is $timeToRun"
Any ideas why the job import is obviously only executed sequentially despite runspace pool and corresponding parallelization?
We have found the error. The foreach contained the following code block:
# For logging in Worker
$JobIndex = $Jobs.IndexOf($JobObj)
Log "$($Jobs[$JobIndex].PowerShell.EndInvoke($Jobs[$JobIndex].Runspace))"
This had to be created outside the foreach so that the code looks like this:
function changeApplicationSequenceFromComputer {
param (
[Parameter(Mandatory=$True )]
[string]$tenant = $(throw "Parameter tenant is missing"),
[Parameter(Mandatory=$True)]
[string]$newSequenceName = $(throw "Parameter newSequenceName is missing")
)
# ... Everything as before
$Jobs.Add($JobObj) | Out-Null
} #end of foreach
$Results = @()
foreach($Job in $Jobs ){
$Results += $Job.PowerShell.EndInvoke($Job.Runspace)
}
So the EndInvoke() has to be called outside the foreach.
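To make the corrected pattern explicit, here is a minimal sketch (using the variable names from the snippets above, and assuming the pool is created as before): queue all the BeginInvoke() calls first, collect the results with EndInvoke() in a second loop, and only then dispose of everything.
# Queue the work: one BeginInvoke() per computer; nothing blocks here
foreach ($computer in $computerWithOldApplicationSequence) {
    $PowerShell = [powershell]::Create()
    $PowerShell.RunspacePool = $RunspacePool
    $PowerShell.AddScript($Worker).AddArgument($currentComputerObjectTenant).AddArgument($currentComputerObjectDisplayName).AddArgument($newSequenceName).AddArgument($cred).AddArgument($Configuration) | Out-Null
    $JobObj = New-Object -TypeName PSObject -Property @{
        Runspace   = $PowerShell.BeginInvoke()
        PowerShell = $PowerShell
    }
    $Jobs.Add($JobObj) | Out-Null
}
# Collect the results: EndInvoke() blocks, but by now all jobs are already running in parallel
$Results = @()
foreach ($Job in $Jobs) {
    $Results += $Job.PowerShell.EndInvoke($Job.Runspace)
    $Job.PowerShell.Dispose()
}
$RunspacePool.Close()
$RunspacePool.Dispose()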

Octopus Deploy Windows Scheduled Task

This question is for all who are using Octopus Deploy to run scheduled tasks.
https://library.octopusdeploy.com/step-template/actiontemplate-windows-scheduled-task-create
Has anyone encountered situation where you have to specify "Start in (optional):" parameter in the scheduled task?
I am wondering if this is possible with Octopus Deploy or if there is any work around?
Octopus Deploy community steps are just PowerShell scripts with variables. You can edit the PowerShell to set up a variable for the "Start in" path and pass that to the scheduled task. I can give you an example if you need one.
Update
After looking more closely at the PoSh script for the task, I think a better option would be to add a single parameter for the XML file that defines the task parameters and set that inside your Octopus deployment steps. That will give you the most flexibility in case you need to provide any other parameters besides the "Start in" parameter.
Update 2
So I wrote a custom step to do what you wanted, then looked at the community feed - silly me. There is already a step template to create a scheduled task from an XML file. The XML will let you set the working directory. The step template is called "Create Scheduled Tasks From XML" and you can find it at http://library.octopusdeploy.com/step-templates/26c779af-4cce-447e-98bb-4741c25e0b3c/actiontemplate-create-scheduled-tasks-from-xml.
In addition, here is where I was going with the custom step; it's just PowerShell:
$ErrorActionPreference = "Stop";
Set-StrictMode -Version "Latest";
function New-ScheduledTask {
param (
[Parameter(Mandatory = $true)][hashtable] $octopusParameters
)
$arguments = @{
TaskName = $octopusParameters['TaskName']
User = $octopusParameters['RunAsUser']
}
if ($octopusParameters['RunAsPassword']) {
$arguments.Password = $octopusParameters['RunAsPassword']
}
if ($octopusParameters.ContainsKey('RunWithElevatedPermissions')) {
if ([boolean]::Parse($octopusParameters['RunWithElevatedPermissions'])) {
$arguments.RunLevel = 'Highest'
}
}
$triggerArguments = @{}
switch ($octopusParameters['Schedule']) {
'Once' {
$triggerArguments.Once = $true
$triggerArguments.At = $runAt
}
'Daily' {
$triggerArguments.Daily = $true
$triggerArguments.At = $runAt
if ($interval) {
$triggerArguments.DaysInterval = $octopusParameters['Interval']
}
}
'Weekly' {
$triggerArguments.Weekly = $true
$triggerArguments.At = $runAt
if ($interval) {
$triggerArguments.WeeksInterval = $octopusParameters['Interval']
}
}
'Startup' {
$triggerArguments.AtStartup = $true
}
'Logon' {
$triggerArguments.AtLogOn = $true
}
}
$actionArguments = @{
Execute = $octopusParameters['Executable']
Argument = $octopusParameters['Arguments']
WorkingDirectory = $octopusParameters['WorkingDirectory']
}
$arguments.Action = New-ScheduledTaskAction @actionArguments
$arguments.Trigger = New-ScheduledTaskTrigger @triggerArguments
Write-Output "Creating Scheduled Task - $taskName"
Unregister-ScheduledTask -TaskName $taskName -Confirm:$false -ErrorAction:SilentlyContinue
Register-ScheduledTask #arguments | Out-Null
Write-Output "Successfully Created $taskName"
}
# only execute the step if it's called from octopus deploy,
# and skip it if we're running inside a Pester test
if (Test-Path -Path "Variable:octopusParameters") {
New-ScheduledTask $octopusParameters
}
After hitting the same problem I found that schtasks.exe does not take a Working Directory (Start In (optional)) parameter.
I did the following:
Created the scheduled task using the Octopus template (Windows Scheduled Task - Create - With Password)
Saved the scheduled task as XML using PowerShell
Edited the XML to add the working directory
Used the Octopus template (Create Scheduled Tasks From XML) using the updated XML to create the scheduled task.
Here is the PowerShell I used in Octopus to get the scheduled task as XML and insert the Working Directory Node:
$scheduleFolder = $OctopusParameters["ScheduledTaskFolder"]
$scheduleName = $OctopusParameters["ScheduledTaskName"]
$scheduleWorkingDirectory = $OctopusParameters["ScheduledTaskWorkingDirectory"]
$scheduleXmlFileName = $OctopusParameters["ScheduledTaskXmlFileName"]
$installFolder = $OctopusParameters["InstallFolder"]
Write-Output "Connecting to Schedule Service"
$schedule = New-Object -Com("Schedule.Service")
$schedule.Connect()
Write-Output "Getting $scheduleName task in folder $scheduleFolder as xml"
$task = $schedule.GetFolder($scheduleFolder).GetTasks(0) | Where {$_.Name -eq $scheduleName}
$xml = [xml]$task.Xml
# Parent node
$execNode = $xml.Task.Actions.Exec
# Create WorkingDirectory node
$workingDirectoryElement = $xml.CreateElement("WorkingDirectory", $execNode.NamespaceURI)
$workingDirectoryElement.InnerText = $scheduleWorkingDirectory
# Insert the WorkingDirectory node after the last child node
Write-Output "Inserting WorkingDirectory node in $execNode.Name node"
$numberExecNodes = $execNode.ChildNodes.Count
$execNode.InsertAfter($workingDirectoryElement, $execNode.ChildNodes[$numberExecNodes
- 1])
# Output the xml to a file
Write-Output "Saving $installFolder\$scheduleXmlFileName"
$xml.Save("$installFolder\$scheduleXmlFileName")
Another option is to save the XML file (with the working directory nodes) as part of your project and just deploy this using the Octopus template (Create Scheduled Tasks From XML).
...
<Actions Context="Author">
<Exec>
<Command>"C:\Program Files\Test Application\Application.exe"</Command>
<WorkingDirectory>C:\Program Files\Test Application</WorkingDirectory>
</Exec>
</Actions>
</Task>

Dynamic parameter value depending on another dynamic parameter value

Starting premise: very restrictive environment, Windows 7 SP1, PowerShell 3.0. Limited or no possibility of using external libraries.
I'm trying to re-write a bash tool I created previously, this time using PowerShell. In bash I implemented autocompletion to make the tool more user friendly and I want to do the same thing for the PowerShell version.
The bash version worked like this:
./launcher <Tab> => ./launcher test (or dev, prod, etc.)
./launcher test <Tab> => ./launcher test app1 (or app2, app3, etc.)
./launcher test app1 <Tab> => ./launcher test app1 command1 (or command2, command3, etc.).
As you can see, everything was dynamic. The list of environments was dynamic, the list of applications was dynamic depending on the environment selected, and the list of commands was also dynamic.
The problem is with the test → application connection. I want to show the correct application based on the environment already selected by the user.
Using PowerShell's DynamicParam I can get a dynamic list of environments based on a folder listing. I can't however (or at least I haven't found out how to) do another folder listing but this time using a variable based on the existing user selection.
Current code:
function ParameterCompletion {
$RuntimeParameterDictionary = New-Object Management.Automation.RuntimeDefinedParameterDictionary
# Block 1.
$AttributeCollection = New-Object Collections.ObjectModel.Collection[System.Attribute]
$ParameterName = "Environment1"
$ParameterAttribute = New-Object Management.Automation.ParameterAttribute
$ParameterAttribute.Mandatory = $true
$ParameterAttribute.Position = 1
$AttributeCollection.Add($ParameterAttribute)
# End of block 1.
$parameterValues = $(Get-ChildItem -Path ".\configurations" -Directory | Select-Object -ExpandProperty Name)
$ValidateSetAttribute = New-Object Management.Automation.ValidateSetAttribute($parameterValues)
$AttributeCollection.Add($ValidateSetAttribute)
$RuntimeParameter = New-Object Management.Automation.RuntimeDefinedParameter($ParameterName, [string], $AttributeCollection)
$RuntimeParameterDictionary.Add($ParameterName, $RuntimeParameter)
# Block 2: same thing as in block 1 just with 2 at the end of variables.
# Problem section: how can I change this line to include ".\configurations\${myVar}"?
# And what's the magic incantation to fill $myVar with the info I need?
$parameterValues2 = $(Get-ChildItem -Path ".\configurations" -Directory | Select-Object -ExpandProperty Name)
$ValidateSetAttribute2 = New-Object Management.Automation.ValidateSetAttribute($parameterValues2)
$AttributeCollection2.Add($ValidateSetAttribute2)
$RuntimeParameter2 = New-Object Management.Automation.RuntimeDefinedParameter($ParameterName2, [string], $AttributeCollection2)
$RuntimeParameterDictionary.Add($ParameterName2, $RuntimeParameter2)
return $RuntimeParameterDictionary
}
function App {
[CmdletBinding()]
Param()
DynamicParam {
return ParameterCompletion "Environment1"
}
Begin {
$Environment = $PsBoundParameters["Environment1"]
}
Process {
}
}
I would recommend using argument completers, which are semi-exposed in PowerShell 3 and 4, and fully exposed in version 5.0 and higher. For v3 and v4, the underlying functionality is there, but you have to override the TabExpansion2 built-in function to use them. That's OK for your own session, but it's generally frowned upon to distribute tools that do that to other people's sessions (imagine if everyone tried to override that function). A PowerShell team member has a module that does this for you called TabExpansionPlusPlus. I know I said overriding TabExpansion2 was bad, but it's OK if this module does it :)
When I needed to support versions 3 and 4, I would distribute my commands in modules, and have the modules check for the existence of the 'Register-ArgumentCompleter' command, which is a cmdlet in v5+ and is a function if you have the TE++ module. If the module found it, it would register any completer(s), and if it didn't, it would notify the user that argument completion wouldn't work unless they got the TabExpansionPlusPlus module.
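A minimal sketch of that guard, assuming it lives in the module's .psm1:
# Register completers only when the command exists (PowerShell 5+ cmdlet or TabExpansionPlusPlus function)
if (Get-Command -Name 'Register-ArgumentCompleter' -ErrorAction SilentlyContinue) {
    Register-ArgumentCompleter -CommandName launcher -ParameterName Environment1 -ScriptBlock { <# ... #> }
}
else {
    Write-Warning "Tab completion for launcher requires PowerShell 5+ or the TabExpansionPlusPlus module."
}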
Assuming you have the TE++ module or PSv5+, I think this should get you on the right track:
function launcher {
[CmdletBinding()]
param(
[string] $Environment1,
[string] $Environment2,
[string] $Environment3
)
$PSBoundParameters
}
1..3 | ForEach-Object {
Register-ArgumentCompleter -CommandName launcher -ParameterName "Environment${_}" -ScriptBlock {
param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)
$PathParts = $fakeBoundParameter.Keys | where { $_ -like 'Environment*' } | sort | ForEach-Object {
$fakeBoundParameter[$_]
}
Get-ChildItem -Path ".\configurations\$($PathParts -join '\')" -Directory -ErrorAction SilentlyContinue | select -ExpandProperty Name | where { $_ -like "${wordToComplete}*" } | ForEach-Object {
New-Object System.Management.Automation.CompletionResult (
$_,
$_,
'ParameterValue',
$_
)
}
}
}
For this to work, your current working directory will need a 'configurations' directory contained in it, and you'll need at least three levels of subdirectories (reading through your example, it looked like you were going to enumerate a directory, and you would go deeper into that structure as parameters were added). The enumerating of the directory isn't very smart right now, and you can fool it pretty easily if you just skip a parameter, e.g., launcher -Environment3 <TAB> would try to give you completions for the first subdirectory.
This works if you will always have three parameters available. If you need a variable # of parameters, you could still use completers, but it might get a little trickier.
The biggest downside would be that you'd still have to validate the users' input since completers are basically just suggestions, and users don't have to use those suggestions.
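Since completers don't enforce anything, a minimal validation sketch inside the function body (reusing the parameter names and the .\configurations layout from above) could look like this:
function launcher {
    [CmdletBinding()]
    param(
        [string] $Environment1,
        [string] $Environment2,
        [string] $Environment3
    )
    # Completers only suggest values, so re-check that the chosen values exist as a folder chain under .\configurations
    $parts = @('.\configurations') + (($Environment1, $Environment2, $Environment3) -ne '')
    $path = $parts -join '\'
    if (-not (Test-Path -Path $path -PathType Container)) {
        throw "Unknown combination of environment/application/command: $path"
    }
    # ... real work goes here
}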
If you want to use dynamic parameters, it gets pretty crazy. There may be a better way, but I've never been able to see the value of dynamic parameters at the commandline without using reflection, and at that point you're using functionality that could change at the next release (the members usually aren't public for a reason). It's tempting to try to use $MyInvocation inside the DynamicParam {} block, but it's not populated at the time the user is typing the command into the commandline, and it only shows one line of the command anyway without using reflection.
The below was tested on PowerShell 5.1, so I can't guarantee that any other version has these exact same class members (it's based off of something I first saw Garrett Serack do). Like the previous example, it depends on a .\configurations folder in the current working directory (if there isn't one, you won't see any -Environment parameters).
function badlauncher {
[CmdletBinding()]
param()
DynamicParam {
#region Get the arguments
# In its current form, this will ignore parameter names, e.g., '-ParameterName ParameterValue' would ignore '-ParameterName',
# and only 'ParameterValue' would be in $UnboundArgs
$BindingFlags = [System.Reflection.BindingFlags] 'Instance, NonPublic, Public'
$Context = $PSCmdlet.GetType().GetProperty('Context', $BindingFlags).GetValue($PSCmdlet)
$CurrentCommandProcessor = $Context.GetType().GetProperty('CurrentCommandProcessor', $BindingFlags).GetValue($Context)
$ParameterBinder = $CurrentCommandProcessor.GetType().GetProperty('CmdletParameterBinderController', $BindingFlags).GetValue($CurrentCommandProcessor)
$UnboundArgs = @($ParameterBinder.GetType().GetProperty('UnboundArguments', $BindingFlags).GetValue($ParameterBinder) | where { $_ } | ForEach-Object {
try {
if (-not $_.GetType().GetProperty('ParameterNameSpecified', $BindingFlags).GetValue($_)) {
$_.GetType().GetProperty('ArgumentValue', $BindingFlags).GetValue($_)
}
}
catch {
# Don't do anything??
}
})
#endregion
$ParamDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
# Create an Environment parameter for each argument specified, plus one extra as long as there
# are valid subfolders under .\configurations
for ($i = 0; $i -le $UnboundArgs.Count; $i++) {
$ParameterName = "Environment$($i + 1)"
$ParamAttributes = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
$ParamAttributes.Add((New-Object Parameter))
$ParamAttributes[0].Position = $i
# Build the path that will be enumerated based on previous arguments
$PathSb = New-Object System.Text.StringBuilder
$PathSb.Append('.\configurations\') | Out-Null
for ($j = 0; $j -lt $i; $j++) {
$PathSb.AppendFormat('{0}\', $UnboundArgs[$j]) | Out-Null
}
$ValidParameterValues = Get-ChildItem -Path $PathSb.ToString() -Directory -ErrorAction SilentlyContinue | Select-Object -ExpandProperty Name
if ($ValidParameterValues) {
$ParamAttributes.Add((New-Object ValidateSet $ValidParameterValues))
$ParamDictionary[$ParameterName] = New-Object System.Management.Automation.RuntimeDefinedParameter (
$ParameterName,
[string[]],
$ParamAttributes
)
}
}
return $ParamDictionary
}
process {
$PSBoundParameters
}
}
The cool thing about this one is that it can keep going as long as there are folders, and it automatically does parameter validation. Of course, you're breaking the laws of .NET by using reflection to get at all those private members, so I would consider this a terrible and fragile solution, no matter how fun it was to come up with.

Use ValidateSet with the contents loaded from a CSV file

I really like the way that ValidateSet works. It proposes the options as a list while you type your Cmdlet in the PowerShell ISE.
I would like to know if it's possible to retrieve values from a CSV file (Import-Csv) and use them in the Param block so they become available in the drop-down box of the PowerShell ISE when constructing the Cmdlet arguments? A bit in the same way that $Type works now, but with values from the import file.
Function New-Name {
Param (
[parameter(Position=0, Mandatory=$true)]
[ValidateSet('Mailbox','Distribution','Folder','Role')]
[String]$Type,
[parameter(Position=1,Mandatory=$true)]
[String]$Name
)
Process { 'Foo' }
}
Here is something you can start with:
function New-Name {
param (
[parameter(Position=0, Mandatory=$true)]
[String]$Name
)
dynamicparam {
$attributes = new-object System.Management.Automation.ParameterAttribute
$attributes.ParameterSetName = "__AllParameterSets"
$attributes.Mandatory = $true
$attributeCollection = new-object -Type System.Collections.ObjectModel.Collection[System.Attribute]
$attributeCollection.Add($attributes)
$values = @('MailBox', 'Tralala', 'Trilili') # your Import-Csv here
$ValidateSet = new-object System.Management.Automation.ValidateSetAttribute($values)
$attributeCollection.Add($ValidateSet)
$dynParam1 = new-object -Type System.Management.Automation.RuntimeDefinedParameter("Type", [string], $attributeCollection)
$paramDictionary = new-object -Type System.Management.Automation.RuntimeDefinedParameterDictionary
$paramDictionary.Add("Type", $dynParam1)
return $paramDictionary
}
process { 'Foo' }
}
Credits where credits are due, this largely comes from the following article from the Scripting Guy.
The code isn't pretty, but it does what you want.
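To pull the set from a CSV instead of the hard-coded array, the "your Import-Csv here" line could be replaced with something like this (Types.csv and its Type column are illustrative):
# Types.csv is assumed to have a header row with a Type column:
# Type
# Mailbox
# Distribution
$values = (Import-Csv -Path 'C:\Scripts\Types.csv').Type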
I know this post is quite old, but with PowerShell 6.2 and above you can use a .NET class at the beginning of the script and have the set controlled by a CSV, for example.
This article here does an excellent job of explaining it:
https://adamtheautomator.com/powershell-validateset/
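A minimal sketch of that approach (the class name and CSV path are illustrative): the class implements IValidateSetValuesGenerator, which ValidateSet accepts as a type argument in PowerShell 6.2 and above.
# Requires PowerShell 6.2+
class TypeValues : System.Management.Automation.IValidateSetValuesGenerator {
    [string[]] GetValidValues() {
        # Hypothetical CSV with a Type column
        return (Import-Csv -Path 'C:\Scripts\Types.csv').Type
    }
}
function New-Name {
    param (
        [Parameter(Position = 0, Mandatory = $true)]
        [ValidateSet([TypeValues])]
        [string]$Type,
        [Parameter(Position = 1, Mandatory = $true)]
        [string]$Name
    )
    process { 'Foo' }
}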
I prefer the TabExpansion++ module; though this doesn't technically validate, it has some nice functionality...
Here's an example of an overloaded msbuild command to add some IntelliSense for project targets:
Register-ArgumentCompleter -CommandName "msbuild" -ParameterName "target" -ScriptBlock {
param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameter)
$projectName = $fakeBoundParameter['project']
$projectFile = Join-Path (Get-Location) $projectName
$projectXml = [xml](Get-Content $projectFile)
$targets = $projectXml.Project.Target | Where-Object { $_.Name.ToString().StartsWith($wordToComplete) }
foreach($target in $targets)
{
New-CompletionResult -CompletionText "$($target.Name)"
}
}

Creating a zipped/compressed folder in Windows using Powershell or the command line

I am creating a nightly database schema file and would like to put all the files created each night, one for each database, into a folder and compress that folder.
I have a PowerShell script that creates the schema-only creation scripts for the databases and then adds all the files to a new folder. The problem lies within the compression portion of this process.
Does anybody have any idea if this can be accomplished with the pre-installed Windows utility that handles folder compression?
It would be best to use that utility if possible rather than something like 7zip (I don't feel like installing 7zip on every customer's server and it may take IT years to do it if I ask them).
A native way with the .NET 4.5 framework, but entirely feature-less:
Creation:
Add-Type -Assembly "System.IO.Compression.FileSystem" ;
[System.IO.Compression.ZipFile]::CreateFromDirectory("c:\your\directory\to\compress", "yourfile.zip") ;
Extraction:
Add-Type -Assembly "System.IO.Compression.FileSystem" ;
[System.IO.Compression.ZipFile]::ExtractToDirectory("yourfile.zip", "c:\your\destination") ;
As mentioned, totally feature-less, so don't expect an overwrite flag.
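If you do need overwrite behaviour, a minimal workaround sketch is to delete any existing archive first (the paths are illustrative):
Add-Type -Assembly "System.IO.Compression.FileSystem"
$zip = "c:\your\out.zip"
if (Test-Path $zip) { Remove-Item $zip }   # CreateFromDirectory fails if the target file already exists
[System.IO.Compression.ZipFile]::CreateFromDirectory("c:\your\directory\to\compress", $zip)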
Here's a couple of zip-related functions that don't rely on extensions: Compress Files with Windows PowerShell.
The main function that you'd likely be interested in is:
function Add-Zip
{
param([string]$zipfilename)
if(-not (test-path($zipfilename)))
{
set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
(dir $zipfilename).IsReadOnly = $false
}
$shellApplication = new-object -com shell.application
$zipPackage = $shellApplication.NameSpace($zipfilename)
foreach($file in $input)
{
$zipPackage.CopyHere($file.FullName)
Start-sleep -milliseconds 500
}
}
Usage:
dir c:\demo\files\*.* -Recurse | Add-Zip c:\demo\myzip.zip
There is one caveat: the shell.application object's NameSpace() function fails to open up the zip file for writing if the path isn't absolute. So, if you passed a relative path to Add-Zip, it'll fail with a null error, so the path to the zip file must be absolute.
Or you could just add a $zipfilename = resolve-path $zipfilename at the beginning of the function.
As of PowerShell 5 there is a Compress-Archive cmdlet that does the task out of the box.
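For example (paths are illustrative), this would zip the nightly schema folder in one line:
# -Force overwrites an existing archive
Compress-Archive -Path 'C:\SchemaDump\*' -DestinationPath 'C:\SchemaDump.zip' -Force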
This compresses .\in contents to .\out.zip with System.IO.Packaging.ZipPackage following the example here
$zipArchive = $pwd.path + "\out.zip"
[System.Reflection.Assembly]::Load("WindowsBase,Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35")
$ZipPackage=[System.IO.Packaging.ZipPackage]::Open($zipArchive, [System.IO.FileMode]"OpenOrCreate", [System.IO.FileAccess]"ReadWrite")
$in = gci .\in | select -expand fullName
[array]$files = $in -replace "C:","" -replace "\\","/"
ForEach ($file In $files) {
$partName=New-Object System.Uri($file, [System.UriKind]"Relative")
$part=$ZipPackage.CreatePart($partName, "application/zip", [System.IO.Packaging.CompressionOption]"Maximum")
$bytes=[System.IO.File]::ReadAllBytes($file)
$stream=$part.GetStream()
$stream.Write($bytes, 0, $bytes.Length)
$stream.Close()
}
$ZipPackage.Close()
I used voithos' answer to zip files up in PowerShell, but had one problem with the Add-Zip function: the Start-Sleep -milliseconds 500 caused problems if the file couldn't be fully zipped up in that time. The next one starting before the previous was complete caused errors, and some files were not zipped.
After playing around for a bit, I first tried keeping a counter and only continuing once $zipPackage.Items().Count increased, which did not work because it would return 0 in some cases when it should not. It turns out it returns 0 while the package is still zipping/copying the files (I think, haha). Adding a simple while loop with the Start-Sleep inside of it, waiting for $zipPackage.Items().Count to be non-zero before continuing, seems to solve the problem.
function Add-Zip
{
param([string]$zipfilename)
if(-not (test-path($zipfilename)))
{
set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
(dir $zipfilename).IsReadOnly = $false
}
$shellApplication = new-object -com shell.application
$zipPackage = $shellApplication.NameSpace($zipfilename)
foreach($file in $input)
{
$zipPackage.CopyHere($file.FullName)
do
{
Start-sleep -milliseconds 250
}
while ($zipPackage.Items().count -eq 0)
}
}
Using PowerShell Version 3.0:
Copy-ToZip -File ".\blah" -ZipFile ".\blah.zip" -Force
Hope this helps.
