I need to pass variables into templates. For example, the output of Small (shown below) should be passed as input to Medium. How is this possible?
template: windows-tests.yml
parameters:
  integration_tests: ["Small", "Medium"]
  check_status: "$(Powershell1.var)"
In addition to the above, I would like to add:
The output from the build step will be stored in the check_status variable and passed to the template.
Powershell1.var will be used by the Small test, which then generates a value that needs to be used by Medium.
I need to pass the variables into templates.
You can refer to this document for instructions on how to pass variables in templates.
the output of Small (found below) should be passed as input to Medium.
Here is the script:
Template:
parameters:
- name: Medium
  type: {data type of `Medium`}
  default: {default value of `Medium`}
Pipeline:
extends:
  template: {template file}
  parameters:
    Medium: $(Small)
I would like to add: the output from the build step will be stored in the check_status variable and passed to the template.
Here is the script:
Template:
parameters:
- name: check_status
  type: string
  default: "$(Powershell1.var)"
Pipeline:
extends:
  template: {template file}
  parameters:
    check_status: "$(Powershell1.var)"
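For context, $(Powershell1.var) only has a value if a step actually publishes it. A minimal sketch of how the Powershell1 step might expose var as an output variable (the step and variable names come from the question; the value is a placeholder):
steps:
- powershell: |
    # compute a value and publish it as an output variable
    Write-Host "##vso[task.setvariable variable=var;isOutput=true]some-value"
  name: Powershell1
- script: echo "check_status will receive $(Powershell1.var)"
With isOutput=true and the step named Powershell1, later steps in the same job can read the value via the $(Powershell1.var) macro used above.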
I have a strange problem that I can't seem to get my head around. I am trying to define some variables for use in a job that deploys Bicep files via Azure CLI and runs PowerShell tasks.
I get this validation error when I try to execute the pipeline: While parsing a block mapping, did not find expected key
The line that it refers to is: - name: managementResourceDNSPrivateResolverName
From the research I have done on this problem, it sounds like an indentation problem, but on the face of it the YAML looks fine.
jobs:
- job: 'Deploy_Management_Resources'
  pool:
    vmImage: ${{ parameters.buildAgent }}
  variables:
  - name: managementResourceDNSPrivateResolverName
    value: 'acme-$[ lower(parameters['environmentCode']) ]-$[ lower(variables['resourceLocationShort']) ]-private-dns-resolver'
  - name: managementResourceGroupManagement
    value: 'acme-infrastructure-rg-management'
  - name: managementResourceRouteTableName
    value: 'acme-$[ lower(variables['subscriptionCode']) ]-$[ lower(variables['resourceLocationShort']) ]-route-table'
  - name: managementResourceVirtualNetworkName
    value: 'acme-$[ lower(variables['subscriptionCode']) ]-$[ lower(variables['resourceLocationShort']) ]-vnet-internal-mng'
Thanks!
The error message ...parsing a block mapping, did not find expected key is usually a side-effect of malformed YAML. You'll see it often with variables if you have mixed formats of array and property elements:
variables: # an array of objects
  # variable group reference object
  - group: myvariablegroup
  # variable template reference object
  - template: my-variables.yml
  # variable object
  - name: myVariable
    value: 'value1'
  # variable shorthand syntax
  myVariable: 'value1' # this fails because it's a property instead of an array element
While it doesn't appear that the sample you've provided is malformed, I am curious about the use of $[ ], which is a runtime expression. The expression $[ lower(parameters['environmentCode']) ] refers to parameters, which are only available at compile time.
Change:
$[ lower(parameters['environmentCode']) ] to ${{ lower(parameters.environmentCode) }}
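Applied to the variables above, a sketch of the corrected definitions. Two caveats: this assumes resourceLocationShort and subscriptionCode are defined by the time the template is expanded, and a runtime $[ ] expression would not work embedded mid-string anyway, since runtime expressions must span the entire value:
variables:
- name: managementResourceDNSPrivateResolverName
  value: 'acme-${{ lower(parameters.environmentCode) }}-${{ lower(variables.resourceLocationShort) }}-private-dns-resolver'
- name: managementResourceRouteTableName
  value: 'acme-${{ lower(variables.subscriptionCode) }}-${{ lower(variables.resourceLocationShort) }}-route-table'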
I need Rundeck to parse an option based on the value selected in another option. I have an option ${option.env} and other options like ${option.id_dev} and ${option.id_qa}.
I want to achieve something like the below for extra-vars, so that the "env" option value determines which id (dev or qa) to read.
ansible-playbook /build.yml -e id=${option.id_${option.env.value}}
Is this possible, or could I pass extra-vars as a conditional case based on the env value? I'm using Rundeck 3.0.X.
Update:
To be clear: if I select 'dev' for the option 'env', I need to use its value like ${option.id_${option.env.value}}, so that it translates to ${option.id_dev} and picks up the other option on the command line.
You can use cascading remote options in a tricky way. The explanation is at the end of this answer.
I made a little example to see how to achieve this:
The branches.json file (referenced on the job options as "branches"):
[
  {"name":"branch1", "value":"branch1.json"},
  {"name":"branch2", "value":"branch2.json"},
  {"name":"branch3", "value":"branch3.json"}
]
The branch1.json is the first tentative value of the branches option:
[
  {"name":"v1", "value":"1"},
  {"name":"v2", "value":"2"},
  {"name":"v3", "value":"3"}
]
The branch2.json is the second tentative value of the branches option:
[
  {"name":"v4", "value":"4"},
  {"name":"v5", "value":"5"},
  {"name":"v6", "value":"6"}
]
The branch3.json is the third tentative value of the branches option:
[
  {"name":"v7", "value":"7"},
  {"name":"v8", "value":"8"},
  {"name":"v9", "value":"9"}
]
Full job definition to test:
- defaultTab: summary
  description: ''
  executionEnabled: true
  id: ed0d84fe-135b-41ee-95b6-6daeaa94894b
  loglevel: INFO
  name: CascadeTEST
  nodeFilterEditable: false
  options:
  - enforced: true
    name: branches
    valuesUrl: file:/Users/myuser/branches.json
  - enforced: true
    name: level2
    valuesUrl: file:/Users/myuser/${option.branches.value}
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |+
        #!/bin/sh
        # getting the options
        first=#option.branches#
        second=#option.level2#
        # this just an example
        echo "this an example: ${first%%.*}.$second"
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: ed0d84fe-135b-41ee-95b6-6daeaa94894b
Explanation
As you can see, the value of the first option is always a file name; if you take the value directly, you always obtain a string like "branch1.json". So the trick here is to cut off the file extension and take only the name as the value; for that, I used this approach.
With the first selected value and the second one, you can then do anything later in a script, for example, launch the ansible-playbook command in an inline script.
UPDATED ANSWER:
The closest way is just to use two options and concatenate them in the command step; it isn't possible to resolve one option's value inside another option reference, as you describe.
Just like this:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 4e8df698-c7ca-4a10-9f70-bc68c1007a10
  loglevel: INFO
  name: NewJob
  nodeFilterEditable: false
  options:
  - enforced: true
    name: env
    value: qa
    values:
    - qa
    - prod
    - stage
    valuesListDelimiter: ','
  - enforced: true
    name: id_dev
    value: '1'
    values:
    - '1'
    - '2'
    - '3'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.env}_${option.id_dev}
    keepgoing: false
    strategy: node-first
  uuid: 4e8df698-c7ca-4a10-9f70-bc68c1007a10
(the cascade option is another approach)
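If the end goal is still the ansible-playbook call from the question, the two options can also be combined in a script step rather than an exec step. A sketch (option names are taken from the question; Rundeck exposes job options to scripts as RD_OPTION_* environment variables):
sequence:
  commands:
  - script: |
      #!/bin/sh
      # pick the id option that matches the selected env
      case "$RD_OPTION_ENV" in
        dev) ID="$RD_OPTION_ID_DEV" ;;
        qa)  ID="$RD_OPTION_ID_QA" ;;
      esac
      ansible-playbook /build.yml -e id="$ID"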
This is my first time working with YAML, and I am running into an issue: if I want to include a variable group (e.g., a signing-certificate password) alongside local pipeline variables, it seems I cannot use the simplified syntax where a variable's name and value are both defined on the same line.
For example, what I want is something similar to this (I made sure the spacing is correct in the YAML):
variables:
  solutionName: Foo.sln
  projectName: Bar
  buildPlatform: x64
  buildConfiguration: development
  major: '1'
  minor: '0'
  build: '0'
  revision: $[counter('rev', 0)]
  vhdxSize: '200'
  - group: legacy-pipeline
  signingCertPwd: $[variables.SigningCertificatePassword]
But this results in a parsing error. As a result, I have to use a denser but more bloated-looking format:
variables:
- name: solutionName
  value: Foo.sln
- name: projectName
  value: Bar
- name: buildPlatform
  value: x64
- name: buildConfiguration
  value: development
- name: major
  value: '1'
- name: minor
  value: '0'
- name: build
  value: '0'
- name: revision
  value: $[counter('rev', 0)]
- name: vhdxSize
  value: '200'
- group: legacy-pipeline
- name: signingCertPwd
  value: $[variables.SigningCertificatePassword]
It seems like the simplified naming format is only available for local variables; as soon as I add a variable group, the simplified format goes away. I have tried searching the web for a solution, but I am not able to find anything useful. Is what I am trying to achieve possible? If yes, how can it be done?
Unfortunately mixing the styles is not possible, but you can work around that using templates:
# pipeline.yaml
stages:
- stage: declare_vars
  variables:
  - template: templates/vars.yaml
  - group: my-group
  - template: templates/inline-vars.yaml
    parameters:
      vars:
        inline_var: yes!
        and_more: why not
  jobs:
  - job:
    steps:
    - pwsh: |
        echo 'foo=$(foo)'
        echo 'bar=$(bar)'
        echo 'var1=$(var1)'
        echo 'inline_var=$(inline_var)'
# templates/vars.yaml
variables:
  foo: bar
  bar: something else
# templates/inline-vars.yaml
parameters:
- name: vars
  type: object
  default: {}
variables:
  ${{ each var in parameters.vars }}:
    ${{ var.key }}: ${{ var.value }}
templates/vars.yaml simply moves the variables to another file.
templates/inline-vars.yaml lets you define inline variables using the denser syntax together with referencing groups, but there's the additional ceremony of writing template:, parameters:, and vars:.
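For illustration, with the parameters passed in pipeline.yaml above, the each loop in templates/inline-vars.yaml expands at compile time to roughly:
variables:
  inline_var: yes!
  and_more: why not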
My question: How can CodePipeline read the value of a field in a JSON file that is in SourceCodeArtifact?
I have a GitHub repo that contains a file imageManifest.json, which looks like this:
{
  "image_id": "docker.pkg.github.com/my-org/my-repo/my-app",
  "image_version": "1.0.1"
}
I want my AWS CodePipeline Source stage to be able to read the value of image_version from imageManifest.json and pass it as a parameter to a CloudFormation action in a subsequent stage of my pipeline.
For reference, here is my source stage.
Stages:
  - Name: GitHubSource
    Actions:
      - Name: SourceAction
        ActionTypeId:
          Category: Source
          Owner: ThirdParty
          Version: '1'
          Provider: GitHub
        OutputArtifacts:
          - Name: SourceCodeArtifact
        Configuration:
          Owner: !Ref GitHubOwner
          Repo: !Ref GitHubRepo
          OAuthToken: !Ref GitHubAuthToken
And here is my deploy stage:
  - Name: DevQA
    Actions:
      - Name: DeployInfrastructure
        InputArtifacts:
          - Name: SourceCodeArtifact
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: CloudFormation
          Version: '1'
        Configuration:
          StackName: !Ref AppName
          Capabilities: CAPABILITY_NAMED_IAM
          RoleArn: !GetAtt [CloudFormationRole, Arn]
          ParameterOverrides: !Sub '{"ImageId": "${image_version??}"}'
Note that image_version in the last line above is just my aspirational placeholder to illustrate how I hope to use the image_version JSON value.
How can CodePipeline read the value of a field in a JSON file that is in SourceCodeArtifact?
StepFunctions? Lambda? CodeBuild?
You can use a CodeBuild step in between the Source and Deploy stages.
In the CodeBuild step, read image_version from SourceCodeArtifact (the artifact produced by the Source stage) and write it to a 'Template configuration' file, which is a configuration property of the CloudFormation action. This file can hold parameter values for your CloudFormation stack. Use it instead of the ParameterOverrides you are currently using.
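A minimal buildspec sketch for that CodeBuild step (the jq call and the template-config.json file name are assumptions for illustration; the {"Parameters": {...}} layout is the template configuration format the CloudFormation action expects):
# buildspec.yml (sketch)
version: 0.2
phases:
  build:
    commands:
      # read image_version from the manifest in the source artifact
      - IMAGE_VERSION=$(jq -r '.image_version' imageManifest.json)
      # write a CloudFormation template configuration file
      - printf '{"Parameters":{"ImageId":"%s"}}' "$IMAGE_VERSION" > template-config.json
artifacts:
  files:
    - template-config.json
The CloudFormation action's TemplateConfiguration property would then point at template-config.json in the CodeBuild output artifact.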
Fn::GetParam is what you want. It returns a value from a key-value pair in a JSON-formatted file, and the JSON file must be included in an artifact.
Here is the documentation and it gives you some examples: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-parameter-override-functions.html#w2ab1c13c20b9
It should be something like:
ParameterOverrides: |
{
"ImageId" : { "Fn::GetParam" : ["SourceCodeArtifact", "imageManifest.json", "image_id"]}
}
When I run puppet agent --test I get no error output, but the user is not created.
My Puppet hiera.yaml configuration is:
---
version: 5
datadir: "/etc/puppetlabs/code/environments"
data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
users.yaml is:
accounts::user:
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
I use this module
Nothing in Hiera data itself causes anything to be applied to target nodes. Some kind of declaration is required in a manifest somewhere or in the output of an external node classifier script. Moreover, the puppetlabs/accounts module provides only defined types, not classes. You can store defined-type data in Hiera and read it back, but automated parameter binding via Hiera applies only to classes, not defined types.
In short, then, no user is created (and no error is reported) because no relevant resources are declared into the target node's catalog. You haven't given Puppet anything to do.
If you want to apply the stored user data presented to your nodes, you would want something along these lines:
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
$user_data.each |$user, $props| {
  accounts::user { $user: * => $props }
}
That would go into the node block matched to your target node, or, better, into a class that is declared by that node block or an equivalent. It's fairly complicated for so few lines, but in brief:
- the lookup function looks up the key 'accounts::user' in your Hiera data,
  - performing a hash merge of results appearing at different levels of the hierarchy,
  - expecting the result to be a hash with string keys and hash values,
  - and defaulting to an empty hash if no results are found;
- the mappings in the result hash are iterated, and for each one, an instance of the accounts::user defined type is declared,
  - using the (outer) hash key as the user name,
  - and the value associated with that key as a mapping from parameter names to parameter values.
There are a few problems here.
You are missing a line in your hiera.yaml, namely the defaults key. It should be:
---
version: 5
defaults: ## add this line
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
I detected that using the puppet-syntax gem (included if you use PDK, which is recommended):
▶ bundle exec rake validate
Syntax OK
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
ERROR: Failed to parse hiera.yaml: (hiera.yaml): mapping values are not allowed in this context at line 3 column 10
Also, in addition to what John mentioned, the simplest class to read in your data would be this:
class test (Hash[String,Hash] $users) {
  create_resources(accounts::user, $users)
}
Or if you want to avoid using create_resources*:
class test (Hash[String,Hash] $users) {
  $users.each |$user, $props| {
    accounts::user { $user: * => $props }
  }
}
Note that I have relied on the Automatic Parameter Lookup feature for that. See the link below.
Then, in your Hiera data, you would have a key named test::users to correspond (class name "test", key name "users"):
---
test::users: ## Note that this line changed.
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
Use of automatic parameter lookup is generally the more idiomatic way of writing Puppet code compared to calling the lookup function explicitly.
For more info:
PDK
Automatic Parameter Lookup
create_resources
(*Note that create_resources is "controversial". Many in the Puppet community prefer not to use it.)