Ansible: use of wildcards in with_items and loop

Team,
Using a single fully defined string in with_items, my task runs fine. However, at scale I would like to loop with the string inside with_items changing. Any hints?
- name: "Fetch all CPU nodes from clusters using K8s beta.kubernetes.io/instance-type"
k8s_facts:
kind: Node
label_selectors:
- "beta.kubernetes.io/instance-type=e1.xlarge"
verify_ssl: no
register: cpu_class_list
failed_when: cpu_class_list == ''
output:
ok: [localhost] => {
    "nodes_class_label": [
        {
            "instanceType": "e1.xlarge",
            "nodeType": "cpu",
            "node_name": "hostA"
        },
        {
            "instanceType": "e1.xlarge",
            "nodeType": "cpu",
            "node_name": "hostB"
        }
    ]
}
I would like to pull all the nodes matching any name with a wildcard.
label_selectors:
  - "beta.kubernetes.io/instance-type=e1.xlarge"
  - "beta.kubernetes.io/instance-type=f1.xlarge"
  - "beta.kubernetes.io/instance-type=g1.xlarge"
expected output:
output listing all e1-labeled nodes
output listing all f1-labeled nodes
output listing all g1-labeled nodes
my attempted solution:
- name: "Fetch all CPU nodes from clusters using K8s beta.kubernetes.io/instance-type"
k8s_facts:
kind: Node
label_selectors:
- "beta.kubernetes.io/instance-type=*.xlarge"
verify_ssl: no
register: cpu_class_list
failed_when: cpu_class_list == ''

Unfortunately this is not possible. It is a limitation of the Kubernetes API itself, unrelated to Ansible or the k8s_facts module.
There are a couple of workarounds, but ultimately I think using a set-based selector is your best option. This would look similar to:
- k8s_facts:
    kind: Node
    label_selectors:
      - "beta.kubernetes.io/instance-type in (e1.xlarge, f1.xlarge, g1.xlarge)"
You should also be able to pull the instance types out into a variable for code readability and maintenance purposes.
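For example, a minimal sketch, assuming the list lives in a variable named instance_types (the variable name is mine):

vars:
  instance_types:
    - e1.xlarge
    - f1.xlarge
    - g1.xlarge

tasks:
  - k8s_facts:
      kind: Node
      label_selectors:
        - "beta.kubernetes.io/instance-type in ({{ instance_types | join(', ') }})"
    register: cpu_class_list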
The other option is to just loop the k8s_facts task over each instance type, which I'm guessing you have already considered: "beta.kubernetes.io/instance-type={{ item }}".
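A sketch of that loop variant (note that registering inside a loop collects the per-item output under cpu_class_list.results):

- k8s_facts:
    kind: Node
    label_selectors:
      - "beta.kubernetes.io/instance-type={{ item }}"
  loop:
    - e1.xlarge
    - f1.xlarge
    - g1.xlarge
  register: cpu_class_list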
Finally, one of your examples in the question will not work:
label_selectors:
  - "beta.kubernetes.io/instance-type=e1.xlarge"
  - "beta.kubernetes.io/instance-type=f1.xlarge"
  - "beta.kubernetes.io/instance-type=g1.xlarge"
This is looking for nodes that meet all of those criteria (i.e. e1.xlarge && f1.xlarge && g1.xlarge), which will always be none.

Related

Using parse_xml in an Ansible playbook

I've been trying to parse XML data in Ansible. I can get it to work using the xml module, but I think that parse_xml would better suit my needs.
However, I can't seem to match any of the data in the XML with my spec file.
Here is the xml data:
<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
  <ntp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper">
    <nodes>
      <node>
        <node>0/0/CPU0</node>
        <associations>
          <is-ntp-enabled>true</is-ntp-enabled>
          <sys-leap>ntp-leap-no-warning</sys-leap>
          <peer-summary-info>
            <peer-info-common>
              <host-mode>ntp-mode-client</host-mode>
              <is-configured>true</is-configured>
              <address>10.1.1.1</address>
              <reachability>0</reachability>
            </peer-info-common>
            <time-since>-1</time-since>
          </peer-summary-info>
          <peer-summary-info>
            <peer-info-common>
              <host-mode>ntp-mode-client</host-mode>
              <is-configured>true</is-configured>
              <address>172.16.252.29</address>
              <reachability>255</reachability>
            </peer-info-common>
            <time-since>991</time-since>
          </peer-summary-info>
        </associations>
      </node>
    </nodes>
  </ntp>
</data>
This is what the spec file looks like:
---
vars:
  ntp_peers:
    address: "{{ item.address }}"
    reachability: "{{ item.reachability }}"

keys:
  result:
    value: "{{ ntp_peers }}"
    top: data/ntp/nodes/node/associations
    items:
      address: peer-summary-info/peer-info-common/address
      reachability: peer-summary-info/peer-info-common/reachability
and the task in the YAML file:
- name: parse ntp reply
  set_fact:
    parsed_ntp_data: "{{ NTP_STATUS.stdout | parse_xml('specs/iosxr_ntp.yaml') }}"
but the data does not return any results:
TASK [debug parsed_ntp_data] ***************************************************
ok: [core-rtr01] => {
    "parsed_ntp_data": {
        "result": []
    }
}
ok: [dist-rtr01] => {
    "parsed_ntp_data": {
        "result": []
    }
}
I had never seen parse_xml before, so this was a fun adventure.
There appear to be two things conspiring against you: the top: key is evaluated from the root Element, and your XML (unlike the rest of the examples) uses XML namespaces (the xmlns= bit), which means your XPaths have to be encoded in the Element.findall manner.
For the first part: since Element.findall runs while sitting on the <data> Element, one cannot reference data/... in an XPath, because that would only match a structure like <data><data>. I tried being sneaky by making the XPath absolute (/data/...), but Python's XPath library throws up in that circumstance. So, at the very least, your top: key must not start with data at all.
Then, the xmlns= in your snippet stood out to me, because it means those elements' names are actually NS+":"+localName for every element, and thus an XPath of ntp does NOT match ns0:ntp; they're considered completely separate names (that being the point of the namespace, after all). It may well be possible to use enough //*[local-name() = "ntp"] silliness to avoid specifying the namespace over and over, but I didn't try it.
Again, as a concession to Python's XPath library, the fully qualified name is encoded in an XPath as {the-namespace}local-name, and there does not seem to be any way short of modifying network.py to pass in namespaces :-(
Thus, the "hello world" version that I used to confirm my theory:
vars:
  ntp_peers:
    address: "{{ item.address }}"

keys:
  result:
    value: "{{ ntp_peers }}"
    top: '{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}ntp/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}nodes/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}node'
    items:
      address: '{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}node'
cheerfully produced
ok: [localhost] => {
    "msg": {
        "result": [
            {
                "address": "0/0/CPU0"
            }
        ]
    }
}
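Extending that sketch to the peer fields you were originally after would presumably look like this (untested, with the same namespace-encoding caveat applied to every path segment):

vars:
  ntp_peers:
    address: "{{ item.address }}"
    reachability: "{{ item.reachability }}"

keys:
  result:
    value: "{{ ntp_peers }}"
    top: '{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}ntp/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}nodes/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}node/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}associations'
    items:
      address: '{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}peer-summary-info/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}peer-info-common/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}address'
      reachability: '{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}peer-summary-info/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}peer-info-common/{http://cisco.com/ns/yang/Cisco-IOS-XR-ip-ntp-oper}reachability'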

Multiple ports and mount points in AWS ECS Fargate Task Definition using Ansible

I went through the documentation provided here:
https://docs.ansible.com/ansible/latest/collections/community/aws/ecs_taskdefinition_module.html
It gives nice examples of setting up a Fargate task definition, but it shows only a single port mapping, and no mount points at all.
I want to add port mappings (depending on my app) and volumes/mount points dynamically.
For that I am defining my host_vars for the app as below (there can be many such apps, with different mount points and ports):
---
task_count: 4
task_cpu: 1028
task_memory: 2056
app_port: 8080
My task definition YAML looks like this:
- name: Create/Update Task Definition
  ecs_taskdefinition:
    aws_access_key: "{{....}}"
    aws_secret_key: "{{....}}"
    security_token: "{{....}}"
    region: "{{....}}"
    launch_type: FARGATE
    network_mode: awsvpc
    execution_role_arn: "{{ ... }}"
    task_role_arn: "{{ ... }}"
    containers:
      - name: "{{...}}"
        environment: "{{...}}"
        essential: true
        image: "{{ .... }}"
        logConfiguration: "{{....}}"
        portMappings:
          - containerPort: "{{ app_port }}"
            hostPort: "{{ app_port }}"
    cpu: "{{ task_cpu }}"
    memory: "{{ task_memory }}"
    state: present
I am able to create/update the task definition.
The new requirements are:
Instead of one port, we can now have multiple (or no) port mappings.
We will have multiple (or no) mount points and volumes as well.
Here is what I think the modified Ansible host_vars should look like for the ports:
task_count: 4
task_cpu: 1028
task_memory: 2056
# [container_port1:host_port1, container_port2:host_port2, container_port3:host_port3]
app_ports: [8080:80, 8081:8081, 5703:5703]
I am not sure what to do in the Ansible playbook to run through this list of ports.
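What I imagine might work is reshaping app_ports into a list of dicts, since portMappings already takes a list (this is purely a sketch, and the variable shape is my own guess):

app_ports:
  - containerPort: 8080
    hostPort: 80
  - containerPort: 8081
    hostPort: 8081
  - containerPort: 5703
    hostPort: 5703

and then in the container definition:

portMappings: "{{ app_ports | default([]) }}"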
The other part of the problem is that, although I was able to create a volume and mount it in the container through the AWS console, I was not able to do the same using Ansible.
Here is a snippet of the JSON that AWS Fargate shows for the volume part. There can be many such mounts depending on the application; I want to achieve that dynamically by defining mount points and volumes in host_vars.
...
"mountPoints": [
    {
        "readOnly": null,
        "containerPath": "/mnt/downloads",
        "sourceVolume": "downloads"
    }
],
...
"volumes": [
    {
        "efsVolumeConfiguration": {
            "transitEncryptionPort": null,
            "fileSystemId": "fs-ecdg222d",
            "authorizationConfig": {
                "iam": "ENABLED",
                "accessPointId": null
            },
            "transitEncryption": "ENABLED",
            "rootDirectory": "/vol/downloads"
        },
        "name": "downloads",
        "host": null,
        "dockerVolumeConfiguration": null
    }
]
...
I am not sure how to do that, and the official documentation offers very little help.
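For completeness, here is the kind of host_vars/task shape I am imagining for the volumes (a sketch only: app_mounts and app_volumes are names I made up, and it assumes the module passes the volume dictionaries through to the API as-is):

app_mounts:
  - containerPath: /mnt/downloads
    sourceVolume: downloads
    readOnly: false
app_volumes:
  - name: downloads
    efsVolumeConfiguration:
      fileSystemId: fs-ecdg222d
      rootDirectory: /vol/downloads
      transitEncryption: ENABLED

- name: Create/Update Task Definition
  ecs_taskdefinition:
    launch_type: FARGATE
    network_mode: awsvpc
    containers:
      - name: "{{...}}"
        essential: true
        image: "{{ .... }}"
        portMappings: "{{ app_ports | default([]) }}"
        mountPoints: "{{ app_mounts | default([]) }}"
    volumes: "{{ app_volumes | default([]) }}"
    state: present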

Override Ansible hosts on a specific role

I have a playbook like below
- name: Do something
hosts: "view-servers"
roles:
- { role: role1, var1: "abc" }
- { role: role2, var2: "def" }
- { role: role2, var2: "ghi" }
The servers in view-servers are identical and replicated, so from a variable point of view there is no difference except the host name.
For role1 above, I need to run it against just one of the view servers, something like view-servers[0].
Is there a way to do it?
A playbook YAML file is actually a list of plays, which is why they all start with - hosts: (err, or - name: in your case, but most plays aren't named).
Thus:
- hosts: view-servers
  roles:
    - role: role1

- hosts: view-servers[0]
  roles:
    - role: role1
And because they are a list, Ansible runs them in the order they appear in the file; so if you want view-servers[0] to run first, move that play before the - hosts: view-servers one. Otherwise it will run the roles against all of them, then re-connect to the first host of the group and apply the specified roles to it.
Be forewarned that view-servers[0] is highly dependent upon your inventory, so be careful that the 0th item in that group is always the server you intend. If you need more exacting control, you can use a dynamic inventory script, or you can use the add_host: task to choose, or create, a host and add it to a (new or existing) group as a side-effect of your playbook.
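If it helps, a minimal sketch of the add_host approach (the play layout and the primary_view_servers group name are mine):

- hosts: localhost
  gather_facts: false
  tasks:
    # Pick one member of view-servers and expose it as its own group
    - add_host:
        name: "{{ groups['view-servers'][0] }}"
        groups: primary_view_servers

- hosts: primary_view_servers
  roles:
    - role: role1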

Sending messages to multiple Elasticsearch indices

We are running an ELK stack to aggregate logs from multiple systems. Currently, we have Filebeat configured to log to specific indices based on the system (SystemA, SystemB, SystemC).
I would additionally like to send all logs with level ERROR to another index, where I would collect all errors across systems, but I can't figure out how to get Filebeat to send one message to multiple indices.
According to the documentation, the first condition that matches defines the index to be used, which sounds to me as if it's not possible to send a message that matches multiple patterns to multiple indices.
What I want to do:
output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS}'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  index: "filebeat-external-%{+yyyy.MM.dd}"
  indices:
    - index: "filebeat-error-logs-%{+yyyy.MM.dd}"
      when:
        or:
          - equals:
              level: "ERROR"
          - equals:
              level: "error"
    - index: "filebeat-service-a-%{+yyyy.MM.dd}"
      when:
        regexp:
          container.name: "^service-a-"
    - index: "filebeat-service-b-%{+yyyy.MM.dd}"
      when:
        regexp:
          container.name: "^service-b-"
The only way I currently see is to have multiple indices per system and aggregate them in Kibana:
output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS}'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  index: "filebeat-external-%{+yyyy.MM.dd}"
  indices:
    - index: "error-log-service-a-%{+yyyy.MM.dd}"
      when:
        and:
          - equals:
              level: "ERROR"
          - regexp:
              container.name: "^service-a-"
    - index: "service-log-service-a-%{+yyyy.MM.dd}"
      when:
        and:
          - not:
              equals:
                level: "ERROR"
          - regexp:
              container.name: "^service-a-"
But this would double our number of indices and duplicate configuration. Am I missing something here? Is there an easier way to have a general error index while still having errors go to the service-specific indices as well?

Ansible Dict and Tags

I have a playbook that creates EC2 instances from a dictionary declared in vars:, then registers the IPs into a group to be used later on.
The dict looks like this:
servers:
  serv1:
    name: tag1
    type: t2.small
    region: us-west-1
    image: ami-****
  serv2:
    name: tag2
    type: t2.medium
    region: us-east-1
    image: ami-****
  serv3:
    [...]
I would like to apply tags to this playbook in the simplest way, so I can create just some of the servers using tags. For example, running the playbook with --tags tag1,tag3 would only start the EC2 instances matching serv1 and serv3.
Applying tags to the dictionary doesn't seem possible, and I would like to avoid multiplying tasks like:
Creating the EC2 instance
Registering infos
Getting the private IP from the previously registered infos
Adding the host to a group
While I already have a working loop for the case where I create all EC2 instances at once, is there any way to achieve this (without relying on --extra-vars, which would need key=value)? For example, by filtering the dictionary to keep only what is tagged before running the EC2 loop?
I doubt you can do this out of the box, and I'm not sure it's a good idea at all, because tags are meant to filter tasks in Ansible, so you would have to mark all tasks with tags: always.
That said, you can accomplish it with a custom filter plugin, for example (./filter_plugins/apply_tags.py):
try:
    # 'cli' is only importable when running under the ansible-playbook CLI
    from __main__ import cli
except ImportError:
    cli = False

def apply_tags(src):
    # Keep only the entries whose 'name' matches one of the --tags values
    if cli:
        tags = cli.options.tags.split(',')
        res = {}
        for k, v in src.items():
            keep = True
            if 'name' in v:
                if v['name'] not in tags:
                    keep = False
            if keep:
                res[k] = v
        return res
    else:
        return src

class FilterModule(object):
    def filters(self):
        return {
            'apply_tags': apply_tags
        }
And in your playbook:
- debug: msg="{{ servers | apply_tags }}"
  tags: always
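With that in place, an invocation like the following (the playbook name is hypothetical) should print only the entries whose name matches a requested tag:

ansible-playbook provision.yml --tags tag1,tag3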
I found a way to match my needs without touching the rest, so I'm sharing it in case others have a similar need.
I needed to combine dictionaries depending on tags, so my "main" dictionary wouldn't be static.
The variables became:
serv1:
  - name: tag1
    type: t2.small
    region: us-west-1
    image: ami-****
serv2:
  - name: tag2
    type: t2.medium
    region: us-east-1
    image: ami-****
serv3:
  [...]
So instead of duplicating my tasks, I used set_fact with tags like this:
- name: Combined dict
# Declaring empty dict
set_fact:
servers: []
tags: ['always']
- name: Add Server 1
set_fact:
servers: "{{ servers + serv1 }}"
tags: ['tag1']
- name: Add Server 2
set_fact:
servers: "{{ servers + serv2 }}"
tags: ['tag2']
[..]
Twenty lines instead of multiplying tasks for each server: change the vars from a dictionary to lists, add a few tags, and all good :) Now if I add a new server, it only takes a few lines.
