Ansible deep merge hash array in vars

I want to declare a hash/array in vars/main.yml or defaults/main.yml of some role, e.g.:
mysql:
  instances:
    new:
      port: 3306
      dir: /mydir
      config:
        innodb_log_file_size: '128M'
        tmp_table_size: '128M'
        innodb_buffer_pool_size: '10G'
        ...
And I want to change only some of the keys' values in the YAML inventory or group_vars; the other values must be taken from the role's vars/main.yml:
mysql:
  instances:
    new:
      config:
        innodb_buffer_pool_size: '2G'
The result I want to use in a Jinja2 template is:
mysql:
  instances:
    new:
      port: 3306
      dir: /mydir
      config:
        innodb_log_file_size: '128M'
        tmp_table_size: '128M'
        innodb_buffer_pool_size: '2G'
        ...

Question: "Want to change only some values of keys in YAML inventory or group_vars. Other values must be taken from vars/main.yml of some role".
1) Service role (nginx, mysql, pgsql ...). In this role, I describe the default settings, ...
2) Then I create a project role in which I can include the service role and I will use most of the default settings described in the service role. Only a small part of the service settings can be changed in the project role.
Answer:
In the "service" roles create in defaults special variables for parameters that might be changed later. For example
mysql_port: "3306"
mysql_dir: "mydir"
mysql_innodb_log_file_size: "128M"

mysql:
  instances:
    new:
      port: "{{ mysql_port }}"
      dir: "{{ mysql_dir }}"
      config:
        innodb_log_file_size: "{{ mysql_innodb_log_file_size }}"
        ...
In the "project" role any variable with higher precedence will override the role's defaults.

Related

Ansible - Passing a dictionary to a module parameter

I'm using the fortinet.fortios.fortios_system_global module as described here: https://docs.ansible.com/ansible/latest/collections/fortinet/fortios/fortios_system_global_module.html#ansible-collections-fortinet-fortios-fortios-system-global-module
My goal is to pass a dictionary to the system_global parameter with the allowed sub-parameters. I have a dictionary like this, for example:
forti:
  admin-concurrent: enable
  admin-console-timeout: 0
  admin-hsts-max-age: 15552000
  <more key:value>
This dictionary lives in a separate file called forti.yml.
I then use vars_files to pull this yml file into my play as follows:
vars_files:
  - /path/to/forti.yml
And then I use the system_global module:
- name: Configure system_global task
  fortios_system_global:
    access: "{{ access_token }}"
    system_global: "{{ forti }}"
However, when I run the play it throws an error like so:
"msg": "Unsupported parameters for (fortios_system_global) module: system_global.admin-concurrent, system_global.admin-console-timeout, system_global.admin-hsts-max-age,<and so on>. Supported parameters include: member_path, member_state, system_global, vdom, enable_log, access_token."
I tried putting the key:value pairs under vars: at the play level and passing them to the module the same way, and it worked.
vars:
  forti:
    admin-concurrent: enable
    admin-console-timeout: 0
    admin-hsts-max-age: 15552000
    <more key: value>
What am I missing? They're both type: dict, the data are exactly the same. Not sure what I'm missing here. Can someone please help?
You have - (dashes) in your keys, and the module parameters are supposed to use _ (underscores), so the module is telling you those parameters do not exist.
vars:
  forti:
    admin-concurrent: enable
    admin-console-timeout: 0
    admin-hsts-max-age: 15552000
    <more key: value>
should be
vars:
  forti:
    admin_concurrent: enable
    admin_console_timeout: 0
    admin_hsts_max_age: 15552000
    <more key: value>
Keep on automating!
Just look at module examples here: https://docs.ansible.com/ansible/latest/collections/fortinet/fortios/fortios_system_global_module.html#ansible-collections-fortinet-fortios-fortios-system-global-module
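If renaming the keys inside forti.yml is not an option, one way to convert them on the fly is to rebuild the dictionary with set_fact before calling the module. This is only a sketch; the forti_fixed variable name is made up for illustration, and access_token is taken from the module's supported-parameter list in the error message above:

# Rebuild the dict with underscores instead of dashes in the keys,
# then pass the converted dict to the module.
- name: Build an underscore-keyed copy of forti
  set_fact:
    forti_fixed: "{{ forti_fixed | default({}) | combine({item.key | replace('-', '_'): item.value}) }}"
  loop: "{{ forti | dict2items }}"

- name: Configure system_global task
  fortios_system_global:
    access_token: "{{ access_token }}"
    system_global: "{{ forti_fixed }}"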

Ansible Fact - Parsing Ansible Fact Variable to Dictionary

I'm using the Ansible os_project_facts module to gather the admin project id of OpenStack.
This is the ansible_fact log:
ansible_facts:
  openstack_projects:
  - description: Bootstrap project for initializing the cloud.
    domain_id: default
    enabled: true
    id: <PROJECT_ID>
    is_domain: false
    is_enabled: true
    location:
      cloud: envvars
      project:
        domain_id: default
        domain_name: null
        id: default
        name: null
      region_name: null
      zone: null
    name: admin
    options: {}
    parent_id: default
    properties:
      options: {}
      tags: []
    tags: []
Apparently this is not a dictionary but a list, so I can't access openstack_projects.id directly. How can I retrieve PROJECT_ID and use it in other tasks?
Since the openstack_projects fact contains a single list element holding a dictionary, you can use array indexing to get the id, i.e. openstack_projects[0]['id'].
You can use it directly, or use something like set_fact:
- name: get the project id
  set_fact:
    project_id: "{{ openstack_projects[0]['id'] }}"
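The fact can then be referenced in any later task; a minimal, made-up example:

- name: Show the admin project id
  debug:
    msg: "Admin project id is {{ project_id }}"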

Multiple ports and mount points in AWS ECS Fargate Task Definition using Ansible

I went through the documentation provided here
https://docs.ansible.com/ansible/latest/collections/community/aws/ecs_taskdefinition_module.html
It gives nice examples of setting up a Fargate task definition, but it only showcases a single port mapping and shows no mount points.
I want to dynamically add port mappings (depending on my app) and volumes/mount points.
For that I am defining my host_vars for the app as below (there can be many such apps with different mount points and ports):
---
task_count: 4
task_cpu: 1028
task_memory: 2056
app_port: 8080
My task definition YAML file looks like below:
- name: Create/Update Task Definition
  ecs_taskdefinition:
    aws_access_key: "{{....}}"
    aws_secret_key: "{{....}}"
    security_token: "{{....}}"
    region: "{{....}}"
    launch_type: FARGATE
    network_mode: awsvpc
    execution_role_arn: "{{ ... }}"
    task_role_arn: "{{ ...}}"
    containers:
      - name: "{{...}}"
        environment: "{{...}}"
        essential: true
        image: "{{ ....}}"
        logConfiguration: "{{....}}"
        portMappings:
          - containerPort: "{{app_port}}"
            hostPort: "{{app_port}}"
    cpu: "{{task_cpu}}"
    memory: "{{task_memory}}"
    state: present
I am able to create/update the task definition.
The new requirements are:
Instead of one port, we can now have multiple (or no) port mappings.
We will have multiple (or no) mount points and volumes as well.
Here is what I think the modified host_vars should look like for the ports:
task_count: 4
task_cpu: 1028
task_memory: 2056
#[container_port1:host_port1, container_port2:host_port2, container_port3:host_port3]
app_ports: [8080:80, 8081:8081, 5703:5703]
I am not sure what to do in the Ansible playbook to loop through this list of ports.
Another part of the problem: although I was able to create a volume and mount it in the container through the AWS console, I was not able to do the same using Ansible.
Here is what the JSON for the AWS Fargate task looks like (for the volume part). There can be many such mounts depending on the application, and I want to define them dynamically via mount points and volumes in host_vars:
...
"mountPoints": [
    {
        "readOnly": null,
        "containerPath": "/mnt/downloads",
        "sourceVolume": "downloads"
    }
],
...
"volumes": [
    {
        "efsVolumeConfiguration": {
            "transitEncryptionPort": null,
            "fileSystemId": "fs-ecdg222d",
            "authorizationConfig": {
                "iam": "ENABLED",
                "accessPointId": null
            },
            "transitEncryption": "ENABLED",
            "rootDirectory": "/vol/downloads"
        },
        "name": "downloads",
        "host": null,
        "dockerVolumeConfiguration": null
    }
]
I am not sure how to do that.
Official documentation offers very little help.
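One possible approach, sketched here under stated assumptions (the list shapes app_ports, app_mounts and app_volumes are made up, and the container sub-keys reuse the same camelCase naming as the task above), is to keep the ports and mounts as lists of dicts in host_vars and template the lists straight into the module arguments; an empty or absent list then simply means "no ports" or "no mounts":

# host_vars -- hypothetical shape instead of [8080:80, ...]
app_ports:
  - { containerPort: 8080, hostPort: 80 }
  - { containerPort: 8081, hostPort: 8081 }
  - { containerPort: 5703, hostPort: 5703 }
app_mounts:
  - { sourceVolume: downloads, containerPath: /mnt/downloads, readOnly: false }
app_volumes:
  - name: downloads
    efsVolumeConfiguration:
      fileSystemId: fs-ecdg222d
      rootDirectory: /vol/downloads
      transitEncryption: ENABLED

# task -- the lists are passed through as-is; check the ecs_taskdefinition
# docs for the exact volume sub-key casing your module version expects
- name: Create/Update Task Definition
  ecs_taskdefinition:
    launch_type: FARGATE
    network_mode: awsvpc
    containers:
      - name: "{{...}}"
        essential: true
        image: "{{ ....}}"
        portMappings: "{{ app_ports | default([]) }}"
        mountPoints: "{{ app_mounts | default([]) }}"
    volumes: "{{ app_volumes | default([]) }}"
    cpu: "{{task_cpu}}"
    memory: "{{task_memory}}"
    state: present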

Override Ansible hosts on a specific role

I have a playbook like the one below:
- name: Do something
  hosts: "view-servers"
  roles:
    - { role: role1, var1: "abc" }
    - { role: role2, var2: "def" }
    - { role: role2, var2: "ghi" }
The servers in view-servers are identical and replicated, so there is no difference between them from a variables point of view except the host name.
However, role1 above needs to run on just one of the view servers, something like view-servers[0].
Is there a way to do it?
A playbook YAML file is actually a list of plays, which is why they all start with - hosts: (or - name: in your case, but most plays aren't named).
Thus:
- hosts: view-servers
  roles:
    - role: role1

- hosts: view-servers[0]
  roles:
    - role: role1
And because they are a list, Ansible runs them in the order they appear in the file; so if you want the view-servers[0] play to run first, move it before - hosts: view-servers. Otherwise it will run against them all and then re-connect to the first host of the group and apply the specified roles to it.
Be forewarned that view-servers[0] is highly dependent upon your inventory, so be careful that the 0th item in that group is always the server you intend. If you need more exacting control, you can use a dynamic inventory script, or you can use the add_host: task to choose, or create, a host and add it to a (new or existing) group as a side-effect of your playbook.
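A minimal sketch of that add_host idea, assuming the view-servers group name from the question (the group name view_server_primary is made up):

- hosts: view-servers
  gather_facts: false
  tasks:
    # add_host bypasses the host loop, so this effectively runs once per play
    - name: Pick one view server for role1
      add_host:
        name: "{{ groups['view-servers'][0] }}"
        groups: view_server_primary

- hosts: view_server_primary
  roles:
    - role: role1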

Ansible Dict and Tags

I have a playbook creating EC2 instances from a dictionary declared in vars:, then registering the IPs into a group to be used later on.
The dict looks like this:
servers:
  serv1:
    name: tag1
    type: t2.small
    region: us-west-1
    image: ami-****
  serv2:
    name: tag2
    type: t2.medium
    region: us-east-1
    image: ami-****
  serv3:
    [...]
I would like to apply tags to this playbook in the simplest way possible, so that I can create just some of the instances using tags. For example, running the playbook with --tags tag1,tag3 would only start the EC2 instances matching serv1 and serv3.
Applying tags to the dictionary doesn't seem possible, and I would like to avoid multiplying tasks like:
Creating EC2
Registering infos
Getting the private IP from the previously registered infos
Adding the host to a group
While I already have a working loop for the case where I create all the EC2 instances at once, is there any way to achieve this (without relying on --extra-vars, which would need key=value)? For example, filtering the dictionary to keep only what is tagged before running the EC2 loop?
I doubt you can do this out of the box, and I'm not sure it is a good idea at all: tags are used to filter tasks in Ansible, so you would have to mark all tasks with tags: always.
You can accomplish this with a custom filter plugin, for example (./filter_plugins/apply_tags.py):
try:
    # Older Ansible versions expose the CLI object here; this hack reads
    # the --tags passed on the command line from inside a filter plugin.
    from __main__ import cli
except ImportError:
    cli = False

def apply_tags(src):
    if cli:
        tags = cli.options.tags.split(',')
        res = {}
        # Keep only the entries whose 'name' matches one of the requested tags
        for k, v in src.items():
            keep = True
            if 'name' in v:
                if v['name'] not in tags:
                    keep = False
            if keep:
                res[k] = v
        return res
    else:
        return src

class FilterModule(object):
    def filters(self):
        return {
            'apply_tags': apply_tags
        }
And in your playbook:
- debug: msg="{{ servers | apply_tags }}"
  tags: always
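On more recent Ansible versions a similar filtering can be done without a plugin by using the ansible_run_tags magic variable, which holds the tags passed via --tags. A sketch only; servers_to_create is a made-up name:

# Note: when no --tags are passed, ansible_run_tags is ['all'],
# so that case may need separate handling.
- name: Keep only the servers whose name matches a requested tag
  set_fact:
    servers_to_create: >-
      {{ servers | dict2items
                 | selectattr('value.name', 'in', ansible_run_tags)
                 | items2dict }}
  tags: always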
I found a way to match my needs without touching the rest, so I'm sharing it in case others have a similar need.
I needed to combine dictionaries depending on tags, so that my "main" dictionary wouldn't be static.
The variables became:
serv1:
  - name: tag1
    type: t2.small
    region: us-west-1
    image: ami-****
serv2:
  - name: tag2
    type: t2.medium
    region: us-east-1
    image: ami-****
serv3:
  [...]
So instead of duplicating my tasks, I used set_fact with tags like this:
- name: Combined dict
  # Declaring an empty list
  set_fact:
    servers: []
  tags: ['always']

- name: Add Server 1
  set_fact:
    servers: "{{ servers + serv1 }}"
  tags: ['tag1']

- name: Add Server 2
  set_fact:
    servers: "{{ servers + serv2 }}"
  tags: ['tag2']
[..]
Twenty lines instead of multiplying tasks for each server: change the vars from a dictionary to lists, add a few tags, and all good :) Now if I add a new server it will only take a few lines.
