How to add a Custom Script Extension to an Azure VM using Ansible - Windows

Following the advice at https://superuser.com/questions/1210215/how-to-bootstrap-windows-hosts-with-remote-powershell-for-use-with-ansible, I am trying to add a Custom Script Extension to an existing VM.
Below is my playbook:
- name: Create VM playbook
  hosts: localhost
  connection: local
  tasks:
    - name: Custom Script Extension
      azure_rm_deployment:
        state: present
        location: 'uk west'
        resource_group_name: 'AnsibleRG'
        template: "{{ lookup('file', '/etc/ansible/playbooks/extension.json') | from_json }}"
        deployment_mode: incremental
This is extension.json:
{
  "publisher": "Microsoft.Compute",
  "type": "CustomScriptExtension",
  "typeHandlerVersion": "1.4",
  "settings": {
    "fileUris": [
      "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
    ],
    "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File ConfigureRemotingForAnsible.ps1"
  }
}
When I run the playbook, I get the following error from Azure:
The request content was invalid and could not be deserialized: 'Could
not find member 'publisher' on object of type 'Template'. Path
'properties.template.publisher', line 1, position 64.'.
Can anyone please point me in right direction?
Thanks

You still need to provide a valid template.
You need to provide a proper type for the resource; an extension body on its own isn't a valid template.
The name has to include the VM name, as this is how the template figures out which VM to apply the extension to.
Example:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": {
      "type": "string"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(parameters('vmName'),'/ConfigureRemotingForAnsible')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.8",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": [
            "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
          ],
          "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File ConfigureRemotingForAnsible.ps1"
        }
      }
    }
  ]
}
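With this template, the playbook side also has to supply the vmName parameter. A minimal sketch (the VM name below is a placeholder for your own):

```yaml
- name: Custom Script Extension
  azure_rm_deployment:
    state: present
    location: 'uk west'
    resource_group_name: 'AnsibleRG'
    deployment_mode: incremental
    template: "{{ lookup('file', '/etc/ansible/playbooks/extension.json') | from_json }}"
    parameters:
      vmName:
        value: my-windows-vm  # placeholder; must match an existing VM in AnsibleRG
```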

join multiple arrays in complex data structure with jmespath

I'm trying to transform NFS exports, described in a complex data structure, into the config option accepted by the nfs-server daemon, which will later be used in Ansible.
I have:
nfs_exports:
  - path: /export/home
    state: present
    options:
      - clients: "192.168.0.0/24"
        permissions:
          - "rw"
          - "sync"
          - "no_root_squash"
          - "fsid=0"
  - path: /export/public
    state: present
    options:
      - clients: "192.168.0.0/24"
        permissions:
          - "rw"
          - "sync"
          - "root_squash"
          - "fsid=0"
      - clients: "*"
        permissions:
          - "ro"
          - "async"
          - "all_squash"
          - "fsid=1"
which must become:
[
  {
    "options": "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)",
    "path": "/export/home",
    "state": "present"
  },
  {
    "options": "192.168.0.0/24(rw,sync,root_squash,fsid=0) *(ro,async,all_squash,fsid=1)",
    "path": "/export/public",
    "state": "present"
  }
]
So far, using {{ nfs_exports | json_query(query) }} with
query: "[].{path:path,state:state,options:options.join(` `,[].join(``,[clients,`(`,join(`,`,permissions),`)`]))}"
I was able to get:
{
  "options": "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)",
  "path": "/export/home",
  "state": "present"
},
{
  "options": "192.168.0.0/24(rw,sync,root_squash,fsid=0)*(ro,async,all_squash,fsid=1)",
  "path": "/export/public",
  "state": "present"
}
It's probably simple, but I can't get past that last options join; the space ' ' gets removed.
So if someone knows the correct query, additional explanation would be much appreciated.
Given the query:
[].{ path: path, state: state, options: join(' ', options[].join('', [clients, '(', join(',', permissions), ')'])) }
On the JSON
{
  "nfs_exports": [
    {
      "path": "/export/home",
      "state": "present",
      "options": [
        {
          "clients": "192.168.0.0/24",
          "permissions": [
            "rw",
            "sync",
            "no_root_squash",
            "fsid=0"
          ]
        }
      ]
    },
    {
      "path": "/export/public",
      "state": "present",
      "options": [
        {
          "clients": "192.168.0.0/24",
          "permissions": [
            "rw",
            "sync",
            "root_squash",
            "fsid=0"
          ]
        },
        {
          "clients": "*",
          "permissions": [
            "ro",
            "async",
            "all_squash",
            "fsid=1"
          ]
        }
      ]
    }
  ]
}
It would give you your expected output:
[
  {
    "path": "/export/home",
    "state": "present",
    "options": "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)"
  },
  {
    "path": "/export/public",
    "state": "present",
    "options": "192.168.0.0/24(rw,sync,root_squash,fsid=0) *(ro,async,all_squash,fsid=1)"
  }
]
Please mind: the backtick literal ` ` won't work for a space character, because, as pointed out in the documentation, it will be parsed as JSON:
A literal expression is an expression that allows arbitrary JSON objects to be specified.
Source: https://jmespath.org/specification.html#literal-expressions
This is quite easy when you get to the point of:
[].{ path: path, state: state, options: options[].join('', [clients, '(', join(',', permissions), ')']) }
Which is something you seem to have achieved; that gives:
[
  {
    "path": "/export/home",
    "state": "present",
    "options": [
      "192.168.0.0/24(rw,sync,no_root_squash,fsid=0)"
    ]
  },
  {
    "path": "/export/public",
    "state": "present",
    "options": [
      "192.168.0.0/24(rw,sync,root_squash,fsid=0)",
      "*(ro,async,all_squash,fsid=1)"
    ]
  }
]
Because you are then just left with joining the whole array in options with a space as the glue character.
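Back in Ansible, the corrected query (note the single-quoted ' ' raw string literal instead of backticks) could be wired up like this. This is a sketch; the task and variable names are illustrative, and it assumes the jmespath Python library is installed on the controller:

```yaml
- name: Build nfs-server export strings
  vars:
    query: >-
      [].{path: path, state: state,
      options: join(' ', options[].join('', [clients, '(', join(',', permissions), ')']))}
  debug:
    msg: "{{ nfs_exports | json_query(query) }}"
```

The >- block scalar folds the query onto one line, which keeps the YAML readable without affecting the JMESPath expression.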

Find the LVM and VG name from Ansible inputting the Mount point name

I'm trying to find out whether setup or any combination of Ansible modules can return the LV / VG name when we give the mount point name as input to the playbook. As of now, the only option I can see in setup is to fetch the device name ("device": "/dev/mapper/rhel-root") using ansible_mounts.device, but splitting the LV and VG name out of "/dev/mapper/rhel-root" is another challenge. Kindly suggest any options.
This has actually been possible since 2015, through the ansible_lvm facts returned by setup as part of the hardware subset.
To get a result, you need to run setup as root and the lvm utilities must be installed on the target.
You can make a quick test on your local machine (if relevant, adapt to whatever target where you have privilege escalation rights):
ansible localhost -b -m setup \
    -a 'gather_subset=!all,!min,hardware' -a 'filter=ansible_lvm'
Here is an example output from the first test vm I could connect to:
localhost | SUCCESS => {
    "ansible_facts": {
        "ansible_lvm": {
            "lvs": {
                "docker_data": {
                    "size_g": "80.00",
                    "vg": "docker"
                },
                "root": {
                    "size_g": "16.45",
                    "vg": "system"
                },
                "swap": {
                    "size_g": "3.00",
                    "vg": "system"
                }
            },
            "pvs": {
                "/dev/sda2": {
                    "free_g": "0.05",
                    "size_g": "19.50",
                    "vg": "system"
                },
                "/dev/sdb": {
                    "free_g": "0",
                    "size_g": "80.00",
                    "vg": "docker"
                }
            },
            "vgs": {
                "docker": {
                    "free_g": "0",
                    "num_lvs": "1",
                    "num_pvs": "1",
                    "size_g": "80.00"
                },
                "system": {
                    "free_g": "0.05",
                    "num_lvs": "2",
                    "num_pvs": "1",
                    "size_g": "19.50"
                }
            }
        }
    },
    "changed": false
}
This would be a very simple .yaml file, doing the same as what @Zeitounator suggested:
---
- name: detect lvm setup
  setup:
    filter: "ansible_lvm"
    gather_subset: "!all,!min,hardware"
  register: lvm_facts

- debug: var=lvm_facts
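Putting the two fact trees together answers the original question of going from a mount point to its LV and VG. The following is an untested sketch (it assumes ansible_mounts and ansible_lvm facts have both been gathered; note that hyphens inside VG or LV names are doubled in /dev/mapper names, which the replace filters account for):

```yaml
- name: Find LV and VG backing a given mount point
  vars:
    mp: "/"
    # Device backing the mount point, e.g. /dev/mapper/rhel-root
    dev: "{{ (ansible_mounts | selectattr('mount', 'equalto', mp) | first).device }}"
  debug:
    msg: >-
      {% for lv, info in ansible_lvm.lvs.items() -%}
      {% if dev == '/dev/mapper/' ~ (info.vg | replace('-', '--')) ~ '-' ~ (lv | replace('-', '--')) -%}
      LV={{ lv }} VG={{ info.vg }}
      {%- endif %}{% endfor %}
```

Matching the reconstructed /dev/mapper name against the device is safer than trying to split "rhel-root" on the hyphen, which is ambiguous when VG or LV names themselves contain hyphens.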

Dynamic inventory groups from ansible plugin: nmap

I'm trying to use the nmap plugin in ansible to create a dynamic inventory, and then group things that the plugin returns. Unfortunately, I'm missing something, because I can't seem to get a group to be created.
In this scenario, I have a couple hosts named unknownxxxxxxxx that I would like to group.
plugin: nmap
strict: false
address: 10.0.1.0/24
ports: no
groups:
  unknown: "'unknown' in hostname"
I run my plugin:
ansible-inventory -i nmap.yml --export --output=inv --list
but the return is always the same...
By now, I've resorted to guessing possible var names:
host, hosts, hostnames, hostname, inventory_hostname, hostvars, host.fqdn - and the list goes on and on.
I'm obviously missing something basic, but nothing I've found via search has yielded any results.
Can someone help me understand what I'm doing wrong with Jinja?
Perhaps I need to use compose: and keyed_groups:?
I'm obviously missing something basic...
I'm not sure that you are. I agree that according to the documentation the nmap plugin is supposed to work the way you're trying to use it, but like you I'm not able to get the groups or compose keys to work as described.
Fortunately, we can work around that problem by directly using the constructed inventory plugin.
We'll need to use an inventory directory, rather than an inventory file, since we need multiple inventory files. We'll put the following into our ansible.cfg:
[defaults]
inventory = inventory
And then we'll create a directory inventory, into which we'll place two files. First, we'll put your nmap inventory in inventory/10nmap.yml. It will look like this:
plugin: nmap
strict: false
address: 10.0.1.0/24
ports: false
And then we'll put the configuration for the constructed plugin to inventory/20constructed.yml:
plugin: constructed
strict: false
groups:
  unknown: "'unknown' in inventory_hostname"
We've named the files 10nmap.yml and 20constructed.yml because we need to ensure that the constructed plugin runs after the nmap plugin (also, we're checking against inventory_hostname here because that's the canonical name of a host in your Ansible inventory).
With all this in place, you should see the behavior you're looking for: hosts with unknown in the inventory_hostname variable will end up in the unknown group.
I believe that groups apply only to attributes that the plugin returns, so you should be looking at its output.
For example, running ansible-inventory -i nmap-inventory.yml --list with
---
plugin: nmap
address: 192.168.122.0/24
strict: false
ipv4: yes
ports: yes
sudo: true
groups:
  without_hostname: "'192.168.122' in name"
  with_ssh: "ports | selectattr('service', 'equalto', 'ssh')"
produces
{
    "_meta": {
        "hostvars": {
            "192.168.122.102": {
                "ip": "192.168.122.102",
                "name": "192.168.122.102",
                "ports": [
                    {
                        "port": "22",
                        "protocol": "tcp",
                        "service": "ssh",
                        "state": "open"
                    }
                ]
            },
            "192.168.122.204": {
                "ip": "192.168.122.204",
                "name": "192.168.122.204",
                "ports": [
                    {
                        "port": "22",
                        "protocol": "tcp",
                        "service": "ssh",
                        "state": "open"
                    },
                    {
                        "port": "8080",
                        "protocol": "tcp",
                        "service": "http",
                        "state": "open"
                    }
                ]
            },
            "fedora": {
                "ip": "192.168.122.1",
                "name": "fedora",
                "ports": [
                    {
                        "port": "53",
                        "protocol": "tcp",
                        "service": "domain",
                        "state": "open"
                    },
                    {
                        "port": "6000",
                        "protocol": "tcp",
                        "service": "X11",
                        "state": "open"
                    }
                ]
            }
        }
    },
    "all": {
        "children": [
            "ungrouped",
            "with_ssh",
            "without_hostname"
        ]
    },
    "ungrouped": {
        "hosts": [
            "fedora"
        ]
    },
    "with_ssh": {
        "hosts": [
            "192.168.122.102",
            "192.168.122.204"
        ]
    },
    "without_hostname": {
        "hosts": [
            "192.168.122.102",
            "192.168.122.204"
        ]
    }
}
As you can see, I'm using name and ports because the entries have these attributes. I could've also used ip.
To further clarify the point: when I run the plugin with ports: no, the with_ssh grouping filter doesn't produce anything, because there are no ports in the output.
{
    "_meta": {
        "hostvars": {
            "192.168.122.102": {
                "ip": "192.168.122.102",
                "name": "192.168.122.102"
            },
            "192.168.122.204": {
                "ip": "192.168.122.204",
                "name": "192.168.122.204"
            },
            "fedora": {
                "ip": "192.168.122.1",
                "name": "fedora"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped",
            "without_hostname"
        ]
    },
    "ungrouped": {
        "hosts": [
            "fedora"
        ]
    },
    "without_hostname": {
        "hosts": [
            "192.168.122.102",
            "192.168.122.204"
        ]
    }
}

YAML lists and variables

I am trying to deregister EC2 instances from target groups using an Automation document in SSM, which I am attempting to write in YAML, but I am having major issues getting my head around YAML lists and arrays.
Here are the relevant parts of the code:
parameters:
  DeregisterInstanceId:
    type: StringList
    description: (Required) Identifies EC2 instances for patching
    default: ["i-xxx","i-yyy"]
Further down, I am trying to read this DeregisterInstanceId as a list, but it's not working; I'm getting various errors about expecting one type of variable but receiving another.
- name: RemoveLiveInstancesFromTG
  action: aws:executeAwsApi
  inputs:
    Service: elbv2
    Api: DeregisterTargets
    TargetGroupArn: "{{ TargetGroup }}"
    Targets: "{{ DeregisterInstanceId }}"
  isEnd: true
What the Targets input really needs to look like is this:
Targets:
  - Id: "i-xxx"
  - Id: "i-yyy"
...but I am not sure how to pass my StringList to create the above.
I tried:
Targets:
  - Id: "{{ DeregisterInstanceId }}"
and
Targets:
  Id: "{{ DeregisterInstanceId }}"
But no go.
I used to have the exact same problem, although I created the document in JSON.
Please check out the following working script to deregister an instance from a load balancer target group.
Automation document v. 74
{
  "description": "LoadBalancer deregister targets",
  "schemaVersion": "0.3",
  "assumeRole": "{{ AutomationAssumeRole }}",
  "parameters": {
    "TargetGroupArn": {
      "type": "String",
      "description": "(Required) TargetGroup of LoadBalancer"
    },
    "Target": {
      "type": "String",
      "description": "(Required) EC2 Instance(s) to deregister"
    },
    "AutomationAssumeRole": {
      "type": "String",
      "description": "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf.",
      "default": ""
    }
  },
  "mainSteps": [
    {
      "name": "DeregisterTarget",
      "action": "aws:executeAwsApi",
      "inputs": {
        "Service": "elbv2",
        "Api": "DeregisterTargets",
        "TargetGroupArn": "{{ TargetGroupArn }}",
        "Targets": [
          {
            "Id": "{{ Target }}"
          }
        ]
      }
    }
  ]
}
Obviously the point of interest is the Targets parameter; it needs a JSON array to work (forget about the CLI format, it seems to need JSON).
It also allows specifying multiple targets, as well as ports and availability zones, but all I need it for is to choose one instance and pull it out.
Hope it might be of use for someone.
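For the original StringList requirement, one possible workaround (an untested sketch; the step names, handler, and output wiring are illustrative) is to build the Targets array in an aws:executeScript step and pass its output to the DeregisterTargets call:

```yaml
mainSteps:
  - name: BuildTargets
    action: aws:executeScript
    inputs:
      Runtime: python3.8
      Handler: build_targets
      Script: |
        # Turn ["i-xxx", "i-yyy"] into [{"Id": "i-xxx"}, {"Id": "i-yyy"}]
        def build_targets(events, context):
            return {"Targets": [{"Id": i} for i in events["ids"]]}
      InputPayload:
        ids: "{{ DeregisterInstanceId }}"
    outputs:
      - Name: Targets
        Selector: $.Payload.Targets
        Type: MapList
  - name: RemoveLiveInstancesFromTG
    action: aws:executeAwsApi
    inputs:
      Service: elbv2
      Api: DeregisterTargets
      TargetGroupArn: "{{ TargetGroup }}"
      Targets: "{{ BuildTargets.Targets }}"
    isEnd: true
```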

ARM Template with DSC extension: PowerShell file location from local desktop

I would like to send the ARM template project with related PS scripts for DSC to a third party. They would probably use the deploy option in VS to do the deployment. Is it possible to attach the DSC script as part of the ARM project, so that on deploy it picks up the DSC script from local disk? Under settings we have "ModulesUrl"; is it possible to replace this with another parameter which points to local disk, something like c:\myproject\IISInstall.ps1.zip?
{
  "apiVersion": "2015-06-15",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('webSrvVmName'))]"
  ],
  "location": "[resourceGroup().location]",
  "name": "qawebsrv/iisinstall",
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.19",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "ModulesUrl": "https://dscscript.blob.core.windows.net/dscscripts/IISInstall.ps1.zip",
      "ConfigurationFunction": "[variables('configurationFunction')]",
      "Properties": {},
      "SasToken": "",
      "wmfVersion": "4.0"
    },
    "protectedSettings": {}
  },
  "tags": {
    "displayName": "VM Extensions"
  },
  "type": "Microsoft.Compute/virtualMachines/extensions"
}
If you're planning to use VS to do the deployment, then VS can stage the DSC package for you - the deployment script in VS does this... it can actually build the DSC package as well, but that has some limitations.
There's nothing magic about the VS script - this repo has a DSC sample that uses the same script used by VS - see: https://github.com/bmoore-msft/AzureRM-Samples/tree/master/VMDSCInstallFile
for a "Hello World" example...
No, this is not possible. The closest you can get is to upload your script to a publicly available place, and the VM will pull it.
"properties": {
  "publisher": "Microsoft.Powershell",
  "type": "DSC",
  "typeHandlerVersion": "2.20",
  "autoUpgradeMinorVersion": true,
  "settings": {
    "configuration": {
      "url": "https://github.com/xxx.zip",
      "script": "scriptname.ps1",
      "function": "main"
    },
    "configurationArguments": {}
  },
  "protectedSettings": {}
}
