Octopus Deploy - variable replacement for deployment target machine name

Problem:
I've got a manual intervention step with textual steps for the person performing the deployment to follow.
I'd like to pass in the name of the target server so that the person doesn't need to look up the server name being targeted.
For example as you see below, I need them to unzip to a location on the target server.
**SECTION 1: (Main installation)**
1. Navigate to: #{InstallationZipLocation}.
2. Download zip file named: #{ZipFileName}
3. Unzip to the desktop on: #{DeploymentTargetMachineName} --need help here
4. Run executable named: #{ExecutableName}
5. Accept default settings
What I have tried:
Octopus Deploy - System Variables Documentation offers:
#{Octopus.Deployment.Machines} results in: Machines-6
#{Octopus.Deployment.SpecificMachines} results in: (empty string)
What I expect to see:
3. Unzip to the desktop on: FTPServer05
Additional Comment:
I realize I could set the name of the target server in my variables list for each target environment/scope, resulting in only 4 variables (not a big deal, and easy to maintain), but I was curious if there was a way to simplify it. We are running Octopus Deploy 3.12.9.

So I was looking for an easier approach, but stumbled on something I found rather interesting, so I went ahead and implemented it.
Output variables... "After a step runs, Octopus captures the output variables, and keeps them for use in subsequent steps."
What I did to resolve my issue:
I set up a custom step template whose sole purpose is to set "output variables" for use in my subsequent step. You could make this the first step in your project, or at a minimum have it come before the step that references the variable you are setting.
Custom step setup:
PowerShell:
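# $TargetDeploymentMachineName is assumed here to come from a step template parameter of the same name (see Parameters below)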
Write-Host "TargetDeploymentMachineName $TargetDeploymentMachineName"
Set-OctopusVariable -name "TargetDeploymentMachineName" -value $TargetDeploymentMachineName
Parameters: one step template parameter, TargetDeploymentMachineName, referenced by the script above.
Then in my Manual Intervention step, I use the output value like so:
3. Unzip to the desktop on: #{Octopus.Action[MyProject-Set-Output-Variables].Output.TargetDeploymentMachineName}
(Where [MyProject-Set-Output-Variables] represents the name of the step in my deployment project which is responsible for assigning the output variables)
Explanation for why I was having trouble in my question:
It turns out the variable binding syntax to use for my question would have been:
Octopus.Machine.Name = The name that was used to register the machine in Octopus. Not the same as Hostname.
However, the Manual Intervention step specifically does not have a "Deployment Target"; it instead just runs on the "Octopus Server".
So I am pretty sure that is why I was not getting a value for the "target". For example, I simply tested another new basic step that used the "Deployment Targets" radio buttons, which resulted in the FTPServer05 value I was expecting.
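As a follow-up, a minimal sketch of where that leads (the step name Capture-Target-Name below is just an illustrative example, not a step I actually created): a small script step scoped to the deployment targets can capture Octopus.Machine.Name itself and expose it as the output variable:
# Runs on the deployment targets, so Octopus.Machine.Name is populated
Write-Host "Capturing deployment target name: $($OctopusParameters['Octopus.Machine.Name'])"
Set-OctopusVariable -name "TargetDeploymentMachineName" -value $OctopusParameters["Octopus.Machine.Name"]
The Manual Intervention step would then reference #{Octopus.Action[Capture-Target-Name].Output.TargetDeploymentMachineName} as before. (With a single target this resolves directly; with several targets, Octopus also records the value per machine as Octopus.Action[Capture-Target-Name].Output[MachineName].TargetDeploymentMachineName.)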

Related

How to change or specify a DVC experiment name?

How do I change the name of the experiment? I tried to use dvc exp run -n to name the experiment and then used git to push to GitHub. However, the experiment name is still the SHA.
Expected: the experiment name to be displayed on the Iterative Studio interface.
Actually happened: the GitHub SHA value is shown instead of the name.
There are two types of experiments in the DVC ecosystem that we need to distinguish, and there are a few different approaches to naming them.
First, there is what we sometimes call "ephemeral" experiments: those that do not create commits in your Git history unless you explicitly say so. They are described in this Get Started section. For each of those experiments a name is auto-generated (e.g. angry-upas in one of the examples from the doc), or you can use dvc exp run -n ... to pass a particular name to it.
Another way to create those experiments (and send them to Studio) is to use the DVC logger (DVCLive), e.g. as described here. Those experiments will be visible in Studio with auto-generated names (or a name that was provided when they were created).
Now, we have another type of experiment: a commit. Something that someone decided to make persistent and share with the team via a PR and/or a regular commit. Those are the ones presented in the image shared in the question.
Since they are regular commits, the regular Git rules apply to them: they have a hash, they have descriptions, and, most relevant to this discussion, they can have tags. All of this information will be reflected in the Studio UI.
E.g. in this public example-get-started repo:
I think in your case, tags would be the most natural way to rename those. Maybe we can introduce a way to push the experiment name as a Git tag along with the experiment when it's being saved. WDYT?
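To make both cases concrete, a rough sketch of the commands involved (the experiment and tag names are just examples):
# Ephemeral experiment: name it up front, then push it to the Git remote / Studio
dvc exp run -n my-named-exp
dvc exp push origin my-named-exp
# Persistent experiment (a regular commit): a Git tag is what Studio can show
git tag -a exp-baseline -m "baseline experiment"
git push origin exp-baseline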
Let me know if that answers your question.

Setting a DevOps task drop down option from a variable

I am writing a release pipeline to upload an apk (installer) to Google Play. I am using the Google Play - Release task to do this. This is in a classic pipeline (our code is in TFS.)
One of the options is the Track to upload the apk to. The options are:
I want to set this option based on a variable that is set in one of the previous tasks. I have a previous task that sets the release.task variable to either Internal test or Production based on whether it is a public release or not. I am using it in the Google Play task like this:
However when I run the pipeline it does not recognise the value, even though it is one of the valid options:
Is there a way to get around this? I need to control which track the pipeline writes to based on a value in our code base.
I've found the issue with this scenario - you can set the value from a variable but the values are different (for this task) from those on display in the UI. For the two tracks I am using the values are:
Production -> production (note the difference in the case)
Internal test -> internal
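In other words, the earlier task just has to emit the internal value rather than the UI label. A minimal sketch of how that earlier PowerShell task might set the variable (the $isPublicRelease check stands in for whatever logic you already use to decide the release type):
if ($isPublicRelease) {
    # internal value for the "Production" track
    Write-Host "##vso[task.setvariable variable=release.task]production"
}
else {
    # internal value for the "Internal test" track
    Write-Host "##vso[task.setvariable variable=release.task]internal"
}
The Google Play - Release task then reads it as $(release.task).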

Good practice GCE + Windows: computer name

I have some Windows Server 2016 instances on GCE (for Jenkins agents).
I'm wondering what is the best/good practice when it comes to computer name.
Currently, when I want to create a new node, I clone an instance (create images from disks + create template + create instance from template).
On this clone, I change the computer name (in Windows) so that it has the same name as on GCE. Is it useful? Recommended? Bad? Needed?
I know that the name of the Jenkins node needs to be the same as the name of the GCE instance (to be picked up easily). However, I don't think the Windows computer name matters.
So, should I pick an identical generic name for all of them? A prefix+random generated name? Continue with the instance=computer=node name?
The node name that I use in Jenkins is always retrieved from env.NODE_NAME (when needed), so that should not break any pipeline. Not sure though, as I may be missing something (internal to Jenkins).
Bonus question: After cloning, I have to do some modifications on the clone for Perforce (p4) to work.
I temporarily set some env variables
I duplicate the workspace: p4 client -t prefix-buildX-suffix prefix-buildY-suffix
I set up the stream (not sure if this is doable in one step)
Then regenerate the list of files: p4 sync -k <root_folder_to_be_generated>/...#YYYY/MM/DD
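Put together, those clone-time steps look roughly like this (the workspace names are the placeholders from above; which env variables I set temporarily depends on the setup, so P4CLIENT below is just an example):
$env:P4CLIENT = "prefix-buildY-suffix"                     # assumed temporary env variable
p4 client -t prefix-buildX-suffix prefix-buildY-suffix     # new workspace spec copied from the template workspace
# ...set the stream on the new workspace here (separate step in my case)...
p4 sync -k <root_folder_to_be_generated>/...#YYYY/MM/DD    # marks files as synced without transferring them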
So, here also, there's a name prefix-buildY-suffix which is the same as the one from the instance=computer=node (buildY). It may be a separate question, but as it's still part of the same context, I'm putting it here: should I recreate a new workspace every time? Knowing that it's on several machines, I'd say yes; otherwise, I imagine p4 would have contradictory information about the state of this workspace. So, here also, I currently need to customize the name. Even if I make the Windows computer name generic, I would still need to customize the p4 workspace name, wouldn't I?
Jenkins must have the same computer name as the one on the network.
So, all three names must be identical.

Octopus - SQL Deploy DACPAC Community Contributed Step

I am using the SQL Deploy DACPAC community contributed step to deploy my dacpac to the server within Octopus.
It has been setup correctly and has been working fine until the below situation occurs.
I have a situation where I am dropping columns but the deploy keeps failing due to rows being detected. I am attempting to use /p:BlockOnPossibleDataLoss=false as an "Additional deployment contributor arguments" but it seems to be ignored.
Can anyone guide me to what is wrong?
The publish properties should have DropObjectsNotInSource, try to set it to True.
You might want to fine tune it, to avoid dropping users, permissions, etc.
After multiple updates by the original author, this issue was still not resolved. The parameter has actually since been completely removed as of version 11.
Initially, I added a pre-deployment script that copied all the data from the tables that were expected to fail and then deleted that data, allowing the table schema to update as normal, plus a post-deployment script that re-inserted all the data into the new structure. The problem with this was that a pre-deployment and a post-deployment script were required even for data that was acceptable to lose, when they weren't really needed.
Finally, I got around this by duplicating the community step "SQL - Deploy DACPAC" (https://library.octopus.com/step-templates/58399364-4367-41d5-ad35-c2c6a8258536/actiontemplate-sql-deploy-dacpac), saving it as a copy from within Octopus. I then went into the code, into the function Invoke-DacPacUtility, and added the following code (shown in context in the sketch after this list):
[bool]$BlockOnPossibleDataLoss into the parameter list
Write-Debug (" Block on possible data loss: {0}" -f $BlockOnPossibleDataLoss) into the list of debugging
if (!$BlockOnPossibleDataLoss) { $dacProfile.DeployOptions.BlockOnPossibleDataLoss = $BlockOnPossibleDataLoss; } into the list of deployment options
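In context, those three additions sit inside Invoke-DacPacUtility roughly like this (the existing parameters and the code that loads the publish profile into $dacProfile belong to the community template and are only elided here):
function Invoke-DacPacUtility {
    param (
        # ...existing parameters from the community step template...
        [bool]$BlockOnPossibleDataLoss
    )
    Write-Debug ("    Block on possible data loss: {0}" -f $BlockOnPossibleDataLoss)
    # ...existing code that loads the DAC publish profile into $dacProfile...
    if (!$BlockOnPossibleDataLoss) {
        $dacProfile.DeployOptions.BlockOnPossibleDataLoss = $BlockOnPossibleDataLoss
    }
    # ...existing code that performs the dacpac deployment...
}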
Then, I went into the list of parameters and added as follows:
Variable name: BlockOnPossibleDataLoss
Label: Block on possible data loss
Help text: True to stop deployment if possible data loss is detected; otherwise, false. Default is true.
Control type: Checkbox
Default value: true
With this, I am able to change the value of this parameter with the checkbox when using the step in the process of the project.

How to use environment variables in a Jenkins pipeline job?

I posted this in the Jenkins users Google group, but thought I'd post it here too.
I have a Jenkins Pipeline job, and in its Configuration page, I use "Pipeline script from SCM" as my pipeline. One of this block's parameters is, of course, "Branch to build". How can I use an environment variable for that text field? I tried, for example, $branchToBuild, ${branchToBuild}, and "${branchToBuild}", and it just takes those as literal values and does not interpolate the string. I do have that variable defined, and I use it in other jobs.
Someone suggested using ${env.branchToBuild}, so I tried env.branchToBuild, $env.branchToBuild, ${env.branchToBuild}, and "${env.branchToBuild}", all to no avail; they are also just taken as literal strings and not interpolated.
Is it just not possible to do this?
You have to uncheck the Lightweight checkout box in order to use a variable as the branch name to build.
It's a known Jenkins bug; here is more information: How to pass project parameter as branch name to build in Jenkins
Apparently the code path is very different if you are using the lightweight checkout, and that has not been resolved.
Another source: https://cleverbuilder.com/notes/jenkins-dynamic-git-branch/
