How do I prevent users from accidentally deploying to too many tenants in Octopus? - octopus-deploy

Over the years we’ve had a few instances of Octopus Deploy users accidentally releasing a version to multiple tenants based on a tenant tag.
I usually tell users to check that the Tenant list under ‘Preview and customize’ only contains the single tenant intended, but some still slip through, and Octopus deploys to 20+ tenants that I then need to roll back.
Is there a way to alert users when there is more than one tenant in the list, so they can sense-check it before proceeding with the deployment?

This can be achieved with a PowerShell script added as a deployment step. The script checks how many tenants are included in the deployment and, if it detects more than one, sets an output variable that triggers a manual intervention.
To have the script work successfully, you must perform the following:
Create a sensitive project variable named APIKey whose value is an API key with access to the deployments in that space.
Change the $octopusURL variable in the script to match your Octopus hostname.
Place the script at the beginning of your deployment process inside a "Run a Script" step.
Create a Manual Intervention step as the second step in your process, with the run condition set to: #{Octopus.Action[Run a Script].Output.MultipleTenants}
If you rename the step that contains the script below, be sure to put the new step name inside the square brackets ([]) of the run condition variable.
$ErrorActionPreference = "Stop"

# Define working variables
$octopusURL = "http://OctopusURL"
$octopusAPIKey = $OctopusParameters["APIKey"]
$header = @{ "X-Octopus-ApiKey" = $octopusAPIKey }
$spaceId = $OctopusParameters["Octopus.Space.Id"]

# Get the deployments for this release
$releaseData = Invoke-RestMethod -Method GET -Uri "$($octopusURL)/api/$($spaceId)/releases/#{Octopus.Release.Id}/deployments/" -Headers $header

# Get the DateTime this deployment was created
$checkDate = Get-Date $OctopusParameters["Octopus.Deployment.Created"] -Format "yyyy-MM-dd HH:mm:ss"
Write-Host "The following tenants are being deployed to at $($checkDate):"

# Instantiate the list
$tenantList = @()

# For each deployment of this release:
foreach ($item in $releaseData.Items) {
    # Generate a DateTime compatible for comparison with the deployment time
    $date = Get-Date $item.Created.Substring(0, 19) -Format "yyyy-MM-dd HH:mm:ss"
    # If the creation times are equal, the tenant is part of this deployment
    if ($date -eq $checkDate) {
        Write-Host "The tenant with ID $($item.TenantId) is included in this deployment at $($date)."
        # Add the tenant to the list
        $tenantList += $item.TenantId
    }
    # If a release is redeployed, previous deployments remain in the JSON items; this branch
    # reports tenants that were deployed to previously but are not part of THIS deployment.
    elseif ($tenantList -notcontains $item.TenantId) {
        Write-Host "The tenant with ID $($item.TenantId) is not included in this deployment as it was deployed at $($date)."
    }
}

# If more than one tenant is targeted, create the output variable that triggers the manual intervention
if ($tenantList.Count -gt 1) {
    Set-OctopusVariable -name "MultipleTenants" -value "True"
}

Related

Windows Audit Policy/Registry Key Command Check To Only Apply On Domain Controllers

I am trying to craft a command to run against all of my Windows machines to check whether the "Audit Distribution Group Management" audit policy setting is set to "Success and Failure". I would like to apply this check only to domain controllers and, for any other server type, echo something like "NoCheckRequired". Is this possible?
I tried to create an if-else statement in PowerShell for this, but it was not successful.
I tried using the "wmic.exe ComputerSystem get DomainRole" command to find out the type of machine (values 4 and 5 mean a DC server, to my understanding), and with an IF statement I tried to match those values and check whether the group policy audit settings were set, handling any returned value other than 4 or 5 separately.
wmic.exe ComputerSystem get DomainRole outputs the property name on a separate line before the actual value, so comparing the output to the number 4 (as an example) will not work.
Instead, use the Get-CimInstance cmdlet:
$CS = Get-CimInstance Win32_ComputerSystem
if ($CS.DomainRole -in 4, 5) {
    # We're on a domain controller
}
elseif ($CS.DomainRole -in 1, 3) {
    # We're on a domain member
}
else {
    # We're on a workgroup machine
}
Get-ADComputer -Filter 'primarygroupid -eq "516"'
will return the domain controllers; their computer accounts have primary group RID 516 ("Domain Controllers").
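To tie the two pieces together, here is a minimal sketch of the full check the question asks for, assuming auditpol.exe is available and the English subcategory name "Distribution Group Management" (adjust on localized systems):
$CS = Get-CimInstance Win32_ComputerSystem
if ($CS.DomainRole -in 4, 5) {
    # Domain controller: /r emits CSV, and "Inclusion Setting" holds the effective value
    $result = auditpol /get /subcategory:"Distribution Group Management" /r | ConvertFrom-Csv
    if ($result.'Inclusion Setting' -eq 'Success and Failure') {
        Write-Output "Compliant"
    }
    else {
        Write-Output "NonCompliant: $($result.'Inclusion Setting')"
    }
}
else {
    # Any non-DC machine: nothing to verify
    Write-Output "NoCheckRequired"
}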

Dynamic Template using dynamic variable group causing an issue when downloading a Secure File

I have a CI/CD multistage template where my CD stages depend on a parameter I provide in a YAML file.
The pipeline points to pipeline.yml:
servers:
  DEV:
  - srv-apimgmt37p
In my template I have a loop that checks the servers and passes the value, so it can dynamically produce my CI/CD pipeline depending on the above parameter. In my CD stage I pass the following variable groups:
variables:
  - group: ${{ variables['Build.DefinitionName'] }}_MS_${{env.key}}
  - group: DevSecOps_${{ variables['Build.DefinitionName'] }}_MS_${{env.key}}
In one of those groups there is a variable holding the name of a file stored in my secure files. Back in my CD template, I have a Download Secure File task that downloads the secure file using that variable, $(test):
- task: DownloadSecureFile@1
  displayName: 'Download kafka keytab'
  condition: "eq(ne(variables['test'], ''), true)"
  inputs:
    secureFile: "$(test)"
    retryCount: 5
The problem is that when the pipeline starts running, it tries to download the secure file first, but it cannot find it because it doesn't yet know the value of $(test). What should I do as a best practice in this scenario? I'm a little stuck on what a good solution would be.
The DownloadSecureFile task runs as a pre-job step. You could try using PowerShell to download the secure file instead, as described in the case Download secure file with PowerShell.
I was able to download secure files using a REST API, the task's access token, and an Accept header for application/octet-stream. I enabled "Allow scripts to access the OAuth token". Here my task.json is using a secureFile named "SecureFile."
$secFileId = Get-VstsInput -Name SecureFile -Require
$secTicket = Get-VstsSecureFileTicket -Id $secFileId
$secName = Get-VstsSecureFileName -Id $secFileId
$tempDirectory = Get-VstsTaskVariable -Name "Agent.TempDirectory" -Require
$collectionUrl = Get-VstsTaskVariable -Name "System.TeamFoundationCollectionUri" -Require
$project = Get-VstsTaskVariable -Name "System.TeamProject" -Require
$filePath = Join-Path $tempDirectory $secName
$token = Get-VstsTaskVariable -Name "System.AccessToken" -Require
$user = Get-VstsTaskVariable -Name "Release.RequestedForId" -Require
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $token)))
$headers = @{
    Authorization = ("Basic {0}" -f $base64AuthInfo)
    Accept = "application/octet-stream"
}
Invoke-RestMethod -Uri "$($collectionUrl)$project/_apis/distributedtask/securefiles/$($secFileId)?ticket=$($secTicket)&download=true&api-version=5.0-preview.1" -Headers $headers -OutFile $filePath
I am using "$(Build.QueuedById)" to get the user id in build tasks,
but honestly I don't think it matters what string you use there.
If you don't have the Accept header, you'll get JSON metadata back for
the file you're attempting to download.
I decided to add a parameter to the YAML file that resides in my repo. So in my template, I use the task as follows:
- ${{ if parameters.keytab[env.key] }}:
  - task: DownloadSecureFile@1
    name: kafkakeytab
    displayName: 'Download kafka keytab'
    inputs:
      secureFile: ${{parameters.keytab[env.key]}}
      retryCount: 5
and in my YAML file, I just reference a parameter as such:
keytab:
  DEV: bobbobob.keytab
  UAT: blablauat.keytab
This means that if I don't pass this parameter, the task is simply not included in the pipeline, which is what I want. This way I didn't have to create my own PowerShell task to achieve this!

Issue with Set-CMTaskSequenceDeployment

It seems that the New-CMTaskSequenceDeployment / Set-CMTaskSequenceDeployment cmdlet option -DeploymentOption does not work as expected.
I'm trying to automate a task sequence deployment via PowerShell. I use the New-CMTaskSequenceDeployment cmdlet to create the deployment; the content of the TS should be downloaded before the TS starts.
This works, but -DeploymentOption DownloadAllContentLocallyBeforeStartingTaskSequence seems to have no effect: when I check the deployment after the script has run, the option "pre-download content for this task sequence" isn't checked.
The same happens when I try Set-CMTaskSequenceDeployment.
Any hint from the community on what I'm doing wrong?
...
# Create deployment for all waves now
foreach ($StrCollectionName in $ArrCollectionName)
{
    $SchedulePhase2 = New-CMSchedule -Nonrecurring -Start $DateScheduleStartPhase2
    Try {
        $Deployment = New-CMTaskSequenceDeployment -CollectionName $StrCollectionName -TaskSequencePackageId $StrTaskSequenceID -DeployPurpose Required -AvailableDateTime $DateAvailablePhase1 -DeploymentOption DownloadAllContentLocallyBeforeStartingTaskSequence -SoftwareInstallation $False -SystemRestart $False -Schedule $SchedulePhase2 -RerunBehavior RerunIfFailedPreviousAttempt -AllowUsersRunIndependently $True -SendWakeUpPacket $True
        Write-Host "Success - Deployment $Deployment created!"
    }
    Catch {
        Write-Host "Error - Exception caught in creating deployment : $($error[0])"
        Exit
    }
}
...
It looks like, unfortunately (and unexpectedly), the pre-download behavior differs between package/program deployments and task sequence deployments.
For a package/program deployment, the content download starts after the start time if the deployment has a mandatory time defined.
A TS deployment behaves differently: it starts downloading once the mandatory time (schedule) has been reached, and the start time is ignored.
This difference is independent of how the deployment was created (console or PowerShell cmdlet), so it is not an issue with the cmdlet.
First of all, check the picture below to make sure you are not confusing these two options:
(Image: difference between the "Pre-download content" checkbox and the "Download all content locally before starting task sequence" option)
Once that's done, here is my proposition:
Try retrieving the properties of your TS deployment before and after clicking the checkbox. You will see that one property changes: AdvertFlags.
PS MUZ:\> (Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5).AdvertFlags
PS MUZ:\> [Convert]::ToString((Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5).AdvertFlags, 2)
Output:
34275328
10000010110000000000000000
From there, you can look up the meaning of each bit in the Microsoft documentation: https://learn.microsoft.com/en-us/configmgr/develop/reference/core/servers/configure/sms_advertisement-server-wmi-class
From this, I learned that I need to set the 12th bit (0x00001000) like this:
$advertflag = Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5
$advertflag.AdvertFlags = $advertflag.AdvertFlags -bor 0x00001000
$advertflag.Put()
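As a quick sanity check, a small follow-up sketch (using the same example deployment ID) to confirm the bit is now set:
# Verify that bit 12 (0x00001000) is set on the deployment
$flags = (Get-CMTaskSequenceDeployment -DeploymentID MUZ200C5).AdvertFlags
if ($flags -band 0x00001000) {
    Write-Host "Pre-download before starting the task sequence is enabled."
}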
I hope it will help someone someday :)

Is there a way to modify TeamCity system properties in a shell script?

I'm trying to figure out how to modify some custom system properties that I've defined in the build configuration's parameters.
For example, I have a system property named system.TestProperty with value 0 and I want to modify its value from a shell script. I've tried using ##teamcity[setParameter name='system.TestProperty' value='1'] as explained here, but the next time I read its value, it gives me 0 again.
The script I'm using to test:
Write-Host "-------------"
$testProperty = "%system.TestProperty%"
Write-Host "system.TestProperty: $testProperty"
Write-Host "##teamcity[setParameter name='system.TestProperty' value='1']"
$testProperty = "%system.TestProperty%"
Write-Host "system.TestProperty: $testProperty"
Write-Host "-------------"
What I'm getting:
-------------
system.TestProperty: 0
##teamcity[setParameter name='system.TestProperty' value='1']
system.TestProperty: 0
-------------
You won't see the parameter updated within the same build step, because %-references are resolved before the step starts. If you read the value in a subsequent build step, you should see the updated value there.
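For illustration, a minimal sketch of that two-step setup (both steps are PowerShell build steps; the values shown are from the question):
# Build step 1: set the parameter via a service message
Write-Host "##teamcity[setParameter name='system.TestProperty' value='1']"

# Build step 2: %-references are resolved when this step starts,
# so this step sees the updated value
Write-Host "system.TestProperty: %system.TestProperty%"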

How do I gracefully take a web app offline during Octopus deployment?

I was a bit dismayed to find that Octopus, as amazing as it is, doesn't do anything cute or clever about shutting down your web app before it is upgraded.
In our solution we have two web apps (a website and a separate API web app) that rely on the same database, so while one is being upgraded the other is still live, and web or API requests may still be serviced while the database is being upgraded.
Not clean!
Clean would be for Octopus to shut down the web apps, wait until they have shut down, go ahead with the upgrade, and bring the app pools back online once complete.
How can that be achieved?
Selfie-answer!
It is easy to make Octopus Deploy take a little extra care with your deployments; all you need is a couple of extra Execute-PowerShell steps in your deployment routine.
Add a new first step to stop the app pool:
# Settings
#---------------
$appPoolName = "PushpayApi" # Or we could set this from an Octopus environment setting.

# Installation
#---------------
Import-Module WebAdministration
# see http://technet.microsoft.com/en-us/library/ee790588.aspx
cd IIS:\

if ((Get-WebAppPoolState -Name $appPoolName).Value -eq "Stopped")
{
    Write-Host "AppPool already stopped: $appPoolName"
    exit 0
}

Write-Host "Shutting down the AppPool: $appPoolName"
Write-Host (Get-WebAppPoolState $appPoolName).Value

# Signal the app pool to stop.
Stop-WebAppPool -Name $appPoolName

# Wait for the app pool to shut down.
do
{
    Write-Host (Get-WebAppPoolState $appPoolName).Value
    Start-Sleep -Seconds 1
}
until ((Get-WebAppPoolState -Name $appPoolName).Value -eq "Stopped")
And then add another step at the end to restart the app pool:
# Settings
#---------------
$appPoolName = "PushpayApi"

# Installation
#---------------
Import-Module WebAdministration
# see http://technet.microsoft.com/en-us/library/ee790588.aspx
cd IIS:\

if ((Get-WebAppPoolState -Name $appPoolName).Value -eq "Started")
{
    Write-Host "AppPool already started: $appPoolName"
    exit 0
}

Write-Host "Starting the AppPool: $appPoolName"
Write-Host (Get-WebAppPoolState $appPoolName).Value

# Start the app pool and report its state.
Start-WebAppPool -Name $appPoolName
Get-WebAppPoolState -Name $appPoolName
The approach we took was to deploy an _app_offline.htm (App Offline) file with the application. That way we get a nice message explaining why the site is down.
Then, when it is time to deploy, we use Microsoft's Web Deploy (msdeploy) to rename it to app_offline.htm. We put the rename code in a PowerShell script that runs as the first step of our Octopus deployment.
write-host "Website: $WebSiteName"
# Take Website Offline
$path = "$WebDeployPath";
$path
$verb = "-verb:sync";
$verb
# Take root Website offline
$src = "-source:contentPath=```"$WebSiteName/_app_offline.htm```"";
$src
$dest = "-dest:contentPath=```"$WebSiteName/app_offline.htm```"";
$dest
Invoke-Expression "&'$path' $verb $src $dest";
# Take Sub Website 1 offline
$src = "-source:contentPath=```"$WebSiteName/WebApp1/_app_offline.htm```"";
$dest = "-dest:contentPath=```"$WebSiteName/WebApp1/app_offline.htm```"";
Invoke-Expression "&'$path' $verb $src $dest";
$WebSiteName is usually "Default Web Site". Also note that the ` characters are not single quotes but backticks (usually on the same key as the tilde).
Now if Octopus is deploying your web site to a new location, your web site will come back online automatically. If you don't want that, you can deploy the new website with the app_offline.htm file already in place, then use the following script to remove it:
# Alternatively, call msdeploy.exe directly:
# & "c:\Program Files (x86)\IIS\Microsoft Web Deploy V2\msdeploy.exe" -verb:delete -dest:contentPath="$WebSiteName/app_offline.htm"
write-host "Website: $WebSiteName"
# Put Web app Online.
$path = "$WebDeployPath";
$path
$verb = "-verb:delete";
$verb
$dest = "-dest:contentPath=```"$WebSiteName/app_offline.htm```"";
$dest
Invoke-Expression "&'$path' $verb $dest";
# Put Sub Website Online
$dest = "-dest:contentPath=```"$WebSiteName/WebApp1/app_offline.htm```"";
Invoke-Expression "&'$path' $verb $dest";
Stopping the app pool and/or setting an App_Offline file was not enough for me. Neither gave clients a proper explanation of why the site was down, especially App_Offline. I also need to clean up the bin folder, which causes a YSOD (http://blog.kurtschindler.net/more-app_offline-htm-woes/).
My solution:
The first task redirects the deployed site to a different folder containing only an index.html with a proper message; the last task points the site back at the original folder.
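A minimal sketch of that idea (the site name and folder paths here are assumptions, not values from the original setup):
Import-Module WebAdministration

# First task: point the site at a maintenance folder containing only index.html
Set-ItemProperty "IIS:\Sites\Default Web Site" -Name physicalPath -Value "C:\inetpub\maintenance"

# ... deployment steps run here ...

# Last task: point the site back at the real application folder
Set-ItemProperty "IIS:\Sites\Default Web Site" -Name physicalPath -Value "C:\inetpub\wwwroot\MyApp"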
A better solution is to use a network load balancer such as the F5 LTM. You can set up multiple servers to receive traffic for your site and then, when you are deploying, disable one node in the load balancer so that all traffic goes to the other machines.
I like the F5 because it is very scriptable. When we deploy to our websites we take no outage whatsoever; all traffic to the site is simply pointed at the server that is not currently being upgraded.
There are caveats (see the sketch after this list):
You must script the disabling of the pool member in the load balancer so that it works with your site. If your site requires sessions (such as depending on session state or shared objects), you have to bleed the traffic from the nodes; in F5 you can disable them and then watch for the connection count to drop to zero (also scriptable).
You must enforce a policy with your developers/DBAs that database changes MUST NOT cause degradation or failure of the existing code. This means you have to be very careful with databases and configuration, so that you can apply database updates before you even start deploying to the first pool of your website.
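For example, here is a hedged sketch of draining and re-enabling a pool member through the F5 iControl REST API from PowerShell; the hostname, pool, and member names are assumptions:
# Assumed F5 hostname, pool, and member names; adjust for your environment.
$f5 = "https://f5.example.com"
$cred = Get-Credential
$pool = "~Common~web-pool"
$member = "~Common~web01.example.com:80"

# Gracefully disable the member: existing connections drain, no new ones arrive.
$body = @{ session = "user-disabled" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "$f5/mgmt/tm/ltm/pool/$pool/members/$member" -Credential $cred -ContentType "application/json" -Body $body

# Deploy, then re-enable the member.
$body = @{ session = "user-enabled" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "$f5/mgmt/tm/ltm/pool/$pool/members/$member" -Credential $cred -ContentType "application/json" -Body $body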
