With Microsoft breaking 2 tasks this week, taking down many pipelines until the fix was released, I am looking into workarounds. I have already documented a few here.
One option I'm investigating is the use of a YAML transformation in a Pipeline Decorator.
I've crafted the following YAML transformation to try to replace the version of the npmAuthenticate@0 task so that it is exactly and always 0.200.0:
- steps:
  - ${{ each step in job.steps }}:
    - ${{ if in(step.task.id, 'npmAuthenticate', 'ad884ca2-732e-4b85-b2d3-ed71bcbd2788')}}
      - ${{ each task in step }}:
        - ${{ each pair in task }}:
            ${{ if eq(pair.key, 'version') }}:
              ${{ pair.key }}: 0.200.0
            ${{ if ne(pair.key, 'version') }}:
              ${{ pair.key }}: ${{ pair.value }}
But it's not working. Unfortunately I haven't found a way to debug these transformations locally, so the current process of trying out new versions takes forever.
My ask:
Is this even possible from a pipeline decorator?
What's wrong with the syntax above?
Is there a way to debug these things locally?
Here's the Pipeline Decorator Context debug information:
2023-01-13T09:38:23.6524788Z job={
2023-01-13T09:38:23.6525354Z "steps": [
...
2023-01-13T09:38:23.6538541Z {
2023-01-13T09:38:23.6539075Z "task": {
2023-01-13T09:38:23.6539824Z "id": "ad884ca2-732e-4b85-b2d3-ed71bcbd2788",
2023-01-13T09:38:23.6540412Z "name": "npmAuthenticate",
2023-01-13T09:38:23.6541005Z "version": "0.208.1"
2023-01-13T09:38:23.6542056Z },
2023-01-13T09:38:23.6542581Z "env": {},
2023-01-13T09:38:23.6543177Z "inputs": {
2023-01-13T09:38:23.6545385Z "workingFile": "npmrc",
2023-01-13T09:38:23.6545995Z "customEndpoint": ""
2023-01-13T09:38:23.6546519Z },
2023-01-13T09:38:23.6547100Z "condition": "succeeded()",
2023-01-13T09:38:23.6547694Z "continueOnError": false,
2023-01-13T09:38:23.6548265Z "name": "npmAuthenticate",
2023-01-13T09:38:23.6548890Z "displayName": "npm Authenticate npmrc",
2023-01-13T09:38:23.6549457Z "enabled": true
2023-01-13T09:38:23.6549971Z },
...
2023-01-13T09:38:23.6786723Z ]
2023-01-13T09:38:23.6787219Z }
2023-01-13T09:38:23.7641252Z ##[section]Finishing: Pipeline decorator context (Windows)
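(For reference: a context dump like the one above can be produced by a decorator step along these lines. This is my sketch, not the actual step that produced the log; convertToJson is a documented pipeline expression function.)

steps:
- script: echo "job=${{ convertToJson(job) }}"
  displayName: 'Pipeline decorator context (Windows)'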
No dice either:
- steps:
  - ${{ each task in job.steps.*.task }}:
    - ${{ if eq(task.id, 'ad884ca2-732e-4b85-b2d3-ed71bcbd2788')}}
        ${{ if eq(pair.key, 'version') }}:
          ${{ pair.key }}: 0.200.0
        ${{ if ne(pair.key, 'version') }}:
          ${{ pair.key }}: ${{ pair.value }}
nor:
- steps:
  - ${{ each step in job.steps.* }}:
    - ${{ if eq(step.task.id, 'ad884ca2-732e-4b85-b2d3-ed71bcbd2788')}}:
      - ${{ each prop in step.task }}:
          ${{ if eq(prop.key, 'version') }}:
            ${{ prop.key }}: 0.200.0
          ${{ if ne(prop.key, 'version') }}:
            ${{ prop.key }}: ${{ prop.value }}
The vss-extension.json:
{
  "manifestVersion": 1,
  "id": "replace-npmauthenticate-decorator",
  "publisher": "jessehouwing",
  "version": "1.0.0",
  "name": "replace-npmauthenticate-decorator",
  "description": "Azure DevOps Extension",
  "categories": [
    "Azure Pipelines"
  ],
  "icons": {
    "default": "logo.png"
  },
  "targets": [
    {
      "id": "Microsoft.VisualStudio.Services"
    }
  ],
  "contributions": [
    {
      "id": "replace-npmauthenticate-decorator",
      "type": "ms.azure-pipelines.pipeline-decorator",
      "targets": [
        "ms.azure-pipelines-agent-job.pre-job-tasks"
      ],
      "properties": {
        "template": "replace-npmauthenticate-decorator.yml"
      }
    }
  ],
  "files": [
    {
      "path": "replace-npmauthenticate-decorator.yml",
      "addressable": true,
      "contentType": "text/plain"
    }
  ]
}
It is not possible to debug a decorator, since debugging of agents has been removed:
https://github.com/microsoft/azure-pipelines-agent/issues/2479
Alternatives are:
split up and peel off the loop
use your own pre/post AzDo task instead (sketched below)
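As a sketch of the second alternative (my illustration, not from the original post): a decorator template does not have to rewrite existing steps; it can simply contribute a fixed step of its own to the pre-job target. The script step below is a placeholder for whatever your own task would do:

steps:
- script: echo "Injected by replace-npmauthenticate-decorator"
  displayName: 'My pre-job decorator step'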
I've experimented with over 40 different versions of YAML transformations and tried 8 suggestions by ChatGPT, but ultimately must conclude that this approach does not seem to be viable.
I've seen many different tools to validate YAML structures, but nothing that validates a transform template or allows me to run one with a given context.
Azure DevOps' feedback on errors is practically non-existent. Even if the YAML is syntactically correct, it will say there is an error, but it won't tell you what the error is. As such, I must conclude that most of my 40 attempts are valid YAML, but that they fail for some other, undisclosed reason.
I've looked through most of Azure DevOps Server in ILSpy to see if there is a place I could hook into to run the decorator locally, but so far no such luck: the code that interprets the YAML is firmly embedded in the execution context of the pipeline and would require quite a bit of work to run locally, out of the server.
Related
I'm getting the weird error below when trying to add a disk to a RHV VM. I have checked the documentation and all parameters seem legit to me. I need an extra pair of eyes to validate this case. Thank you in advance.
The error message is as follows:
"msg": "Unsupported parameters for (ovirt_disk) module: activate Supported parameters include: auth, bootable, description, download_image_path, fetch_nested, force, format, id, image_provider, interface, logical_unit, name, nested_attributes, openstack_volume_type, poll_interval, profile, quota_id, shareable, size, sparse, sparsify, state, storage_domain, storage_domains, timeout, upload_image_path, vm_id, vm_name, wait"
}
The parameters in the role are as follows:
"module_args": {
"vm_name": "Jxyxyxyxy01",
"activate": true,
"storage_domain": "Data-xxx-Txxx",
"description": "Created using Jira ticket CR-329",
"format": "cow",
"auth": {
"timeout": 0,
"url": "https://xxxxxx.com/ovirt-engine/api",
"insecure": true,
"kerberos": false,
"compress": true,
"headers": null,
"token": "xxcddsvsdvdsvsdvdEFl0910KES84qL8Ff5NReA",
"ca_file": null
},
"state": "present",
"sparse": true,
"interface": "virtio_scsi",
"wait": true,
"size": "20GiB",
"name": "Jxyxyxyxy01_123"
}
The playbook is as follows:
- name: Create New Disk size of {{ disk_size }} on {{ hostname }} using storage domain {{ vm_storage_domain }}
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    description: "Created using Jira ticket {{ issueKey }}"
    storage_domain: "{{ vm_storage_domain }}"
    name: "{{ hostname }}_123"  # name of the disk
    vm_name: "{{ hostname }}"   # name of the virtual machine
    interface: "virtio_scsi"
    size: "{{ disk_size }}GiB"
    sparse: yes
    format: cow
    activate: yes
    wait: yes
    state: present
  register: vm_disk_results
The activate parameter was added in Ansible 2.8.
Upgrade your Ansible installation or drop that parameter.
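If upgrading is not an option, here is a minimal sketch of the same task with the unsupported parameter dropped (variable names as in the question):

- name: Create New Disk size of {{ disk_size }} on {{ hostname }} using storage domain {{ vm_storage_domain }}
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    description: "Created using Jira ticket {{ issueKey }}"
    storage_domain: "{{ vm_storage_domain }}"
    name: "{{ hostname }}_123"
    vm_name: "{{ hostname }}"
    interface: "virtio_scsi"
    size: "{{ disk_size }}GiB"
    sparse: yes
    format: cow
    # activate: yes  # only supported on Ansible >= 2.8
    wait: yes
    state: present
  register: vm_disk_results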
The terraform module provided by Ansible works well for creating AWS resources with an S3 backend config for the state file, but I am not able to get the terraform plan output using this module.
We want the output to list something like:
Plan: 1 to add, 0 to change, 0 to destroy.
and give details of the resources to be created/changed/destroyed.
I have tried the tasks below in Ansible, but am not able to generate the output as expected. Below is the Ansible task for creating the plan:
- name: "create file"
shell: "touch {{playbook_dir}}/tfplan && ls -larth ../terraform/{{role_name}} "
- name: "Run terraform project with plan file"
terraform:
state: planned
backend_config:
bucket: "{{bootstrap_prefix}}-{{aws_account_type}}-{{caller_facts.account}}"
region: "{{ bootstrap_aws_region }}"
kms_key_id: "{{ kms_id.stdout }}"
encrypt: true
workspace_key_prefix: "{{ app_parent }}-{{ app_name }}"
key: "terraform.tfstate"
force_init: true
project_path: "../terraform/{{role_name}}"
plan_file: "{{playbook_dir}}/tfplan"
variables:
app_name: "{{ app_name }}"
workspace: "{{ app_env }}"
The output of the above Ansible task:
ok: [localhost] => {
    "changed": false,
    "command": "/usr/local/bin/terraform -lock=true /root/project/ansible/tfplan",
    "invocation": {
        "module_args": {
            "backend_config": {
                "bucket": "XXXXXXXX2440728499",
                "encrypt": true,
                "key": "terraform.tfstate",
                "kms_key_id": "XXXXXXXX",
                "region": "XXXXXXXX",
                "workspace_key_prefix": "XXXXXX"
            },
            "binary_path": null,
            "force_init": true,
            "lock": true,
            "lock_timeout": null,
            "plan_file": "/root/project/ansible/tfplan",
            "project_path": "../terraform/applications",
            "purge_workspace": false,
            "state": "planned",
            "state_file": null,
            "targets": [],
            "variables": {
                "app_name": "application"
            },
            "variables_file": null,
            "workspace": "uat"
        }
    },
    "outputs": {},
    "state": "planned",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "",
    "stdout_lines": [],
    "workspace": "uat"
}
It works fine with state: present (terraform apply), but I want it to work with state: planned (terraform plan).
From the current Ansible documentation: to just run a terraform plan, use check mode.
In addition, you should add check_mode: true to the task:

- name: "Run terraform project with plan file"
  terraform:
    state: planned
  check_mode: true
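To actually surface the plan, one option (my sketch, not part of the original answer) is to register the module result and print its stdout_lines; the module's return payload includes stdout/stdout_lines, as the raw output above shows:

- name: "Run terraform project with plan file"
  terraform:
    state: planned
    project_path: "../terraform/{{ role_name }}"
    plan_file: "{{ playbook_dir }}/tfplan"
    force_init: true
  check_mode: true
  register: tf_plan

# Print whatever plan text the module captured
- name: Show terraform plan output
  debug:
    var: tf_plan.stdout_lines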
I am trying to deregister EC2 instances from target groups using an Automation document in SSM, which I am attempting to write in YAML, but I am having major issues getting my head around YAML lists and arrays.
Here are the relevant parts of the code:
parameters:
  DeregisterInstanceId:
    type: StringList
    description: (Required) Identifies EC2 instances for patching
    default: ["i-xxx","i-yyy"]
Further down I am trying to read this DeregisterInstanceId as a list, but it's not working: I'm getting various errors about expecting one type of variable but receiving another.
name: RemoveLiveInstancesFromTG
action: aws:executeAwsApi
inputs:
  Service: elbv2
  Api: DeregisterTargets
  TargetGroupArn: "{{ TargetGroup }}"
  Targets: "{{ DeregisterInstanceId }}"
isEnd: true
What the Targets input really needs to look like is this:
Targets:
- Id: "i-xxx"
- Id: "i-yyy"
...but I am not sure how to pass my StringList to create the above.
I tried:
Targets:
- Id: "{{ DeregisterInstanceId }}"
and
Targets:
  Id: "{{ DeregisterInstanceId }}"
But no go.
I had the exact same problem, although I created the document in JSON.
Please check out the following working script to deregister an instance from a load balancer target group.
Automation document v. 74
{
  "description": "LoadBalancer deregister targets",
  "schemaVersion": "0.3",
  "assumeRole": "{{ AutomationAssumeRole }}",
  "parameters": {
    "TargetGroupArn": {
      "type": "String",
      "description": "(Required) TargetGroup of LoadBalancer"
    },
    "Target": {
      "type": "String",
      "description": "(Required) EC2 Instance(s) to deregister"
    },
    "AutomationAssumeRole": {
      "type": "String",
      "description": "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf.",
      "default": ""
    }
  },
  "mainSteps": [
    {
      "name": "DeregisterTarget",
      "action": "aws:executeAwsApi",
      "inputs": {
        "Service": "elbv2",
        "Api": "DeregisterTargets",
        "TargetGroupArn": "{{ TargetGroupArn }}",
        "Targets": [
          {
            "Id": "{{ Target }}"
          }
        ]
      }
    }
  ]
}
Obviously the point of interest is the Targets parameter: it needs a JSON array to work (forget about the CLI format, it seems to need JSON).
It also allows specifying multiple targets, as well as ports and Availability Zones, but all I need it for is to choose one instance and pull it out.
Hope it might be of use for someone.
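Since the question was written in YAML, here is the same main step translated from the JSON above (a direct transliteration on my part, untested):

mainSteps:
- name: DeregisterTarget
  action: aws:executeAwsApi
  inputs:
    Service: elbv2
    Api: DeregisterTargets
    TargetGroupArn: "{{ TargetGroupArn }}"
    Targets:
    - Id: "{{ Target }}"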
I've been doing Ansible for a while, but have recently started doing more advanced things, such as pulling data from outside sources to drive actions. This has meant digging a little deeper into how Ansible allows logic and parsing of variables, and therefore a little into Jinja2.
In my playbook I am attempting to pull data in from etcd, allowing me to construct authorized sudo spec files, which I then pass on to a role to add to the appropriate systems.
My datasource, in addition to storing the data needed to construct the specs, has metadata used for auditing and logging purposes.
A key aspect of my datasource is that no metadata, e.g. that user XYZ had passwordless sudo for a period of 10 days, should be deleted when access is removed. Therefore many objects have a state field that may be active or inactive, or, in the case of sudo specs, grant or revoke.
I have successfully constructed a lookup that pulls back a dictionary similar to the one below, which I then parse using the subsequent Ansible statements. I am able to successfully process and extract all data except for groups/users that have their specs in a grant state.
When specs are in a "grant" state, I need to extract the linuxName field, to be passed on to the role that configures sudo.
I have tried a number of variations of filters, most of which end with me getting a reject or similar error message, or a NULL value instead of the list of values that I want.
Anyone have an idea on how to accomplish this?
Thanks in advance.
Sample Data
ok: [serverName] => {
    "sudoInfraSpecs": [
        {
            "infra_admins": {
                "addedBy": "someUser",
                "commands": "FULL_SUDO",
                "comment": "platform support admins",
                "dateAdded": "20180720",
                "defaults": "!requiretty",
                "hosts": "SERVERS",
                "name": "infra_admins",
                "operators": "ROOT",
                "state": "active",
                "tags": "PASSWD",
                "users": {
                    "admingroup1": {
                        "addedBy": "someUser",
                        "dateAdded": "20180719",
                        "linuxName": "%admingroup1",
                        "name": "admingroup1",
                        "state": "grant"
                    },
                    "admingroup2": {
                        "addedBy": "someUser",
                        "dateAdded": "20180719",
                        "linuxName": "%admingroup2",
                        "name": "admingroup2",
                        "state": "grant"
                    }
                }
            },
            "ucp_service_account": {
                "addedBy": "someUser",
                "commands": "FULL_SUDO",
                "comment": "platform service account",
                "dateAdded": "20180720",
                "defaults": "!requiretty",
                "hosts": "SERVERS",
                "name": "platform_service_account",
                "operators": "ROOT",
                "state": "active",
                "tags": "NOPASSWD,LOG_OUTPUT",
                "users": {
                    "platformUser": {
                        "addedBy": "someUser",
                        "dateAdded": "20180719",
                        "linuxName": "platformUser",
                        "name": "platformUser",
                        "state": "grant"
                    }
                }
            }
        }
    ]
}
Ansible snippet
- name: Translate infraAdmins sudoers specs from etcd into a list for processing [1]
  set_fact:
    tempInfraSpecs:
      name: "{{ item.value.name }}"
      comment: "{{ item.value.comment }}"
      users: "{{ item.value.users | list }}"
      hosts: "{{ item.value.hosts.split(',') }}"
      operators: "{{ item.value.operators.split(',') }}"
      tags: "{{ item.value.tags.split(',') }}"
      commands: "{{ item.value.commands.split(',') }}"
      defaults: "{{ item.value.defaults.split(',') }}"
  with_dict: "{{ sudoInfraSpecs }}"
  when: item.value.state == 'active'
  register: tempsudoInfraSpecs
- name: Translate infraAdmins sudoers specs from etcd into a list for processing [2]
  set_fact:
    sudoInfraSpecs_fact: "{{ tempsudoInfraSpecs.results | selectattr('ansible_facts','defined') | map(attribute='ansible_facts.tempInfraSpecs') | list }}"
Rough Desired output dictionary:
sudoInfraSpecs:
- infra_admins:
    addedBy: someUser
    commands: FULL_SUDO
    comment: platform support admins
    dateAdded: '20180720'
    defaults: "!requiretty"
    hosts: SERVERS
    name: infra_admins
    operators: ROOT
    state: active
    tags: PASSWD
    users:
    - "%admingroup1"
    - "%admingroup2"
- ucp_service_account:
    addedBy: someUser
    commands: FULL_SUDO
    comment: platform service account
    dateAdded: '20180720'
    defaults: "!requiretty"
    hosts: SERVERS
    name: platform_service_account
    operators: ROOT
    state: active
    tags: NOPASSWD,LOG_OUTPUT
    users:
    - "platformUser"
I ended up doing this by creating a custom filter for use in my playbook that parsed the nested dictionary that makes up users:
#!/usr/bin/python

def getSpecActiveMembers(my_dict):
    # Collect the linuxName of every user entry whose state is 'grant'
    thisSpecActiveMembers = []
    for _, value in my_dict.items():  # items() works on Python 2 and 3; iteritems() is Python 2 only
        if value['state'] == 'grant':
            thisSpecActiveMembers.append(value['linuxName'])
    return thisSpecActiveMembers

class FilterModule(object):
    def filters(self):
        return {
            'getSpecActiveMembers': getSpecActiveMembers
        }
This ends up flattening the users from the source listed above, to the desired output.
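For example, dropped into a filter_plugins/ directory next to the playbook (that location is the standard Ansible convention; the exact wiring below is my assumption), it can replace the plain | list in the first translation task:

users: "{{ item.value.users | getSpecActiveMembers }}"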
Here is my example. I'm not sure if this can be done, but I would like to output the value from the sites array, specifically item[1].site, in item[0].dest. However, it looks like it is escaping {{ item[1].site }} to {# item[1].site #}. Is there a way to prevent it from escaping the string?
- name: Put files into docker directory
  template: src={{ item[0].src }} dest={{ item[0].src }}
  with_nested:
    - [
        { src: 'Dockerfile.j2', dest: "/opt/docker-apache2-fpm/{{ item[1].site }}/Dockerfile" },
      ]
    - sites
Here is the output:
failed: [192.168.200.87] => (item=[{'dest': u'/opt/docker-apache2-fpm/{# item[1].site #}/Dockerfile', 'src': 'Dockerfile.j2'}, {'site': 'admin.mysite.com', 'user': 'mysite', 'uid': 11004}]) => {"failed": true, "item": [{"dest": "/opt/docker-apache2-fpm/{# item[1].site #}/Dockerfile", "src": "Dockerfile.j2"}, {"site": "admin.mysite.com", "uid": 11004, "user": "mysite"}]}
msg: Destination directory does not exist
That's not working because Ansible is not re-evaluating the elements. But you can work with the replace filter to achieve your goal.
- name: Put files into docker directory
  template: src={{ item[0].src }} dest={{ item[0].dest | replace("%site%", item[1].site) }}
  with_nested:
    - [
        { src: 'Dockerfile.j2', dest: "/opt/docker-apache2-fpm/%site%/Dockerfile" },
      ]
    - sites
In that example I assumed you meant item[0].dest and not item[0].src in your dest assignment.
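For completeness, the sites variable assumed in these examples would look something like this (reconstructed from the failure output above):

sites:
- { site: 'admin.mysite.com', user: 'mysite', uid: 11004 }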