Define which users belong to which host - ansible

I have an inventory:
[dbs]
server1.intranet
[webservices]
server2.intranet
[apps]
server3.intranet
And a file with variables:
users:
- { name: user1, ssh_key: <SSH_KEY> }
- { name: user2, ssh_key: <SSH_KEY> }
(1) My first question is: How can I tell in the inventory which user is part of each server? (without having to copy and duplicate the user information at every host) Note that users can change and can belong to multiple servers.
(2) The final objective is to do some tasks at each host. For example, to create the users at each host, and add the corresponding user SSH key at each server, something like:
- name: SSH
  ansible.posix.authorized_key:
    user: "{{ item.name }}"
    state: present
    key: "{{ item.ssh_key }}"
  with_items: "{{ users[??] }}"
Of course, the users variable should only contain the users for the specific host being iterated over.
How can I do this?

I didn't understand your second point, but this solution could be helpful.
Define destination hosts as an array:
users:
  - { name: user1, ssh_key: <SSH_KEY>, hosts: ['test-001', 'test-002'] }
  - { name: user2, ssh_key: <SSH_KEY>, hosts: ['test-002'] }
Use the selectattr filter in your loop to search for the running hostname in the hosts list defined in the vars:
- name: SSH
  ansible.posix.authorized_key:
    user: "{{ item.name }}"
    state: present
    key: "{{ item.ssh_key }}"
  loop: "{{ users | selectattr('hosts', 'search', inventory_hostname) }}"
ok: [test-001] => (item={'name': 'user1', 'ssh_key': '<SSH_KEY1>', 'hosts': ['test-001', 'test-002']}) => {
"msg": "user1"
}
ok: [test-002] => (item={'name': 'user1', 'ssh_key': '<SSH_KEY1>', 'hosts': ['test-001', 'test-002']}) => {
"msg": "user1"
}
ok: [test-002] => (item={'name': 'user2', 'ssh_key': '<SSH_KEY2>', 'hosts': ['test-002']}) => {
"msg": "user2"
}
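The msg lines in the output above look like they come from a debug run over the same loop rather than from authorized_key itself; a minimal sketch to verify the filter on each host before touching any keys:

- name: Show which users apply to this host (verification only)
  ansible.builtin.debug:
    msg: "{{ item.name }}"
  # the extra | list keeps older Ansible versions happy when loop receives a generator
  loop: "{{ users | selectattr('hosts', 'search', inventory_hostname) | list }}"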

Related

How to loop over a list of dict elements deep 3 in Ansible

I have the following variables in my var file:
repo_type:
  hosted:
    data:
      - name: hosted_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: hosted_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
  proxy:
    data:
      - name: proxy_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: proxy_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
  group:
    data:
      - name: group_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: group_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
I want to configure a task that loops over the top-level keys (hosted, proxy and group) and sends each entry of the data dict as the request body.
Here is the task:
- name: Create pypi hosted Repos
  uri:
    url: "{{ nexus_api_scheme }}://{{ nexus_api_hostname }}:{{ nexus_api_port }}\
      {{ nexus_api_context_path }}{{ nexus_rest_api_endpoint }}/repositories/pypi/{{ item.key }}"
    user: "{{ nexus_api_user }}"
    password: "{{ nexus_default_admin_password }}"
    headers:
      accept: "application/json"
      Content-Type: "application/json"
    body_format: json
    method: POST
    force_basic_auth: yes
    validate_certs: "{{ nexus_api_validate_certs }}"
    body: "{{ item }}"
    status_code: 201
  no_log: no
  with_dict: "{{ repo_type }}"
I have tried with_items, with_dict and with_nested but nothing helped.
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'data'
Any help would be appreciated!
If your goal is to loop over the contents of the data keys as a flat list, you could do it like this:
- debug:
    msg: "repo {{ item.name }} write_policy {{ item.storage.write_policy }}"
  loop_control:
    label: "{{ item.name }}"
  loop: "{{ repo_type | json_query('*.data[]') }}"
That uses a JMESPath expression to get the data key from each
top-level dictionary, and then flattens the resulting nested list. In
other words, it transforms your original structure into:
- name: hosted_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: hosted_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: proxy_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: proxy_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: group_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: group_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
When run against your example data, this produces the following output:
TASK [debug] *********************************************************************************************************
ok: [localhost] => (item=hosted_repo1) => {
"msg": "repo hosted_repo1 write_policy allow_once"
}
ok: [localhost] => (item=hosted_repo2) => {
"msg": "repo hosted_repo2 write_policy allow_once"
}
ok: [localhost] => (item=proxy_repo1) => {
"msg": "repo proxy_repo1 write_policy allow_once"
}
ok: [localhost] => (item=proxy_repo2) => {
"msg": "repo proxy_repo2 write_policy allow_once"
}
ok: [localhost] => (item=group_repo1) => {
"msg": "repo group_repo1 write_policy allow_once"
}
ok: [localhost] => (item=group_repo2) => {
"msg": "repo group_repo2 write_policy allow_once"
}
If you're trying to do something else, please update your question so
that it clearly shows the values you expect for each iteration of your
loop.
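As a side note, json_query needs the jmespath Python library on the control node. If that is not available, roughly the same flat list can be built with core filters only; an untested sketch:

- debug:
    msg: "repo {{ item.name }} write_policy {{ item.storage.write_policy }}"
  loop_control:
    label: "{{ item.name }}"
  # dict2items turns repo_type into a key/value list, map pulls out each value.data
  # list, and flatten(levels=1) merges them into one flat list of repo definitions
  loop: "{{ repo_type | dict2items | map(attribute='value.data') | flatten(levels=1) }}"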
As noted by @larsk, you didn't really manage to explain clearly how you are trying to loop over your data and what your API call actually expects.
But you are in luck this time, since I have messed around with Nexus quite a bit (and I think I actually recognize those variable names and the overall task layout).
The Nexus Repository POST /v1/repositories/pypi/[hosted|proxy|group] API endpoints are expecting one call for each of your repos in data. To implement your requirement you need to loop over the keys in repo_type to select the appropriate endpoint and then loop again over each element in data to send the repo definition to create.
This is possible by combining the dict2items and subelements filters in your loop, as in the task below (not directly tested).
The basics of the transformation are as follows:
Transform your dict into a key/value list using dict2items, e.g. (shortened example):
- key: hosted
  value:
    data:
      - <repo definition 1>
      - <repo definition 2>
[...]
Use the subelements filter to combine each top element with each element in value.data, e.g.:
- # <- This is the entry for the first repo i.e. `item` in your loop
  - key: hosted # <- this is the start of the corresponding top element i.e. `item.0` in your loop
    value:
      data:
        - <repo definition 1>
        - <repo definition 2>
  - <repo definition 1> # <- this is the sub-element i.e. `item.1` in your loop
- # <- this is the entry for the second repo
  - key: hosted
    value:
      data:
        - <repo definition 1>
        - <repo definition 2>
  - <repo definition 2>
[...]
Following one of your comments and from my experience, I added to my example an explicit json serialization of the repo definition using the to_json filter.
Putting it all together this gives:
- name: Create pypi hosted Repos
  uri:
    url: "{{ nexus_api_scheme }}://{{ nexus_api_hostname }}:{{ nexus_api_port }}\
      {{ nexus_api_context_path }}{{ nexus_rest_api_endpoint }}/repositories/pypi/{{ item.0.key }}"
    user: "{{ nexus_api_user }}"
    password: "{{ nexus_default_admin_password }}"
    headers:
      accept: "application/json"
      Content-Type: "application/json"
    body_format: json
    method: POST
    force_basic_auth: yes
    validate_certs: "{{ nexus_api_validate_certs }}"
    body: "{{ item.1 | to_json }}"
    status_code: 201
  no_log: no
  loop: "{{ repo_type | dict2items | subelements('value.data') }}"

Ansible how to run task only on groups mentioned in playbook and skip other groups even though same host part of other group

I have a role that is common to both mongo replicas and arbiters, and separate host groups for the replicas and the arbiter, because the role has to support running the arbiter either on the same host as a replica or on a different one.
hosts:
[replicas]
127.0.0.1
127.0.0.2
[arbiter]
127.0.0.2
the tasks inside the role:
- name: Run only on replicas
  debug: msg=" Only on replica"
  when: '"replicas" in group_names'

- name: Run only on the arbiter
  debug: msg="Only on the arbiter"
  when: '"arbiter" in group_names'
playbook:
- hosts: replicas
  roles:
    - role: "common"
    - role: "replica"

- hosts: arbiter
  roles:
    - role: "common"
    - role: "arbiter"
Expected output while running on replicas:
TASK [debug] *********************************************************************************************************************************************
ok: [127.0.0.1] => {
"msg": " Only on replica"
}
ok: [127.0.0.2] => {
"msg": " Only on replica"
}
TASK [debug(arbiter)] *********************************************************************************************************************************************
skipping: [127.0.0.1]
skipping: [127.0.0.2]
But the arbiter task is not being skipped as expected, because the same host is also part of the replicas group. Below is the actual output.
Actual output:
TASK [debug] *********************************************************************************************************************************************
ok: [127.0.0.1] => {
"msg": " Only on replica"
}
ok: [127.0.0.2] => {
"msg": " Only on replica"
}
TASK [debug(arbiter)] *********************************************************************************************************************************************
skipping: [127.0.0.1]
ok: [127.0.0.2] => {
"msg": " Only on replica"
}
How can I run a task only on the group that the current play actually targets?
Hello, you can use this method:
Playbook:
- hosts: replicas
  roles:
    - { role: common, vars: { group: "replicas" } }
    - { role: replica, vars: { group: "replicas" } }

- hosts: arbiter
  roles:
    - { role: common, vars: { group: "arbiter" } }
    - { role: arbiter, vars: { group: "arbiter" } }
and inside your role:
- name: Run only on replicas
  debug: msg=" Only on replica"
  when: group == "replicas"

- name: Run only on the arbiter
  debug: msg="Only on the arbiter"
  when: group == "arbiter"
I hope that can help you to resolve your issue.
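If you prefer not to repeat the variable on every role entry, the same marker can also be set once per play; an equivalent sketch:

- hosts: replicas
  vars:
    group: "replicas"   # play-scoped, visible to every role in this play
  roles:
    - common
    - replica

- hosts: arbiter
  vars:
    group: "arbiter"
  roles:
    - common
    - arbiter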

Ansible play is not able to take variable

C:\CYGWIN64\ETC\ANSIBLE\ANSIBLE-ACI-CONFIG
├───environments
│ ├───houston
│ └───munich
├───group_vars
├───plays
├───plugins
│ └───filter
│ └───__pycache__
└───roles
├───aci-fabric-onboarding
│ └───tasks
variable file:
oob_nodes:
- { node_id: "101", obb_address: "10.10.10.10", obb_cidr: "27" , obb_gateway: "10.10.10.1" }
- { node_id: "102", obb_address: "10.10.10.11", obb_cidr: "27" , obb_gateway: "10.10.10.1" }
- { node_id: "201", obb_address: "10.10.10.12", obb_cidr: "27" , obb_gateway: "10.10.10.1" }
play
========
- name: Setup ACI Fabric
  hosts: "{{ target }}"
  gather_facts: no
  any_errors_fatal: true
  tasks:
    - include_vars:
        file: "{{ ACI_SSoT_path }}/fabricsetup.yml"
    - include_vars:
        file: "{{ ACI_SSoT_path }}/oob.yml"
    # Intent Statement
    - include_role:
        name: aci-fabric-onboarding
roles
==============
# Adding OBB address
- name: Add OBB address
  delegate_to: localhost
  aci_rest:
    host: "{{ aci_ip }}"
    username: ansible
    private_key: ansible.key
    certificate_name: ansible
    use_ssl: yes
    validate_certs: false
    path: /api/node/mo/uni/tn-mgmt/mgmtp-default/oob-default/rsooBStNode-[topology/pod-1/node-"{{item.node_id}}"].json
    method: post
    content:
      {
        "mgmtRsOoBStNode": {
          "attributes": {
            "tDn": "topology/pod-1/node-101",
            "addr": "25.96.131.61/27",
            "gw": "25.96.131.33",
            "status": "created"
          },
          "children": []
        }
      }
    with_items: "{{ oob_nodes }}"
error:
TASK [aci-fabric-onboarding : Add OBB address] *****************************************************************************************************************************************************
task path: /etc/ansible/Ansible-Aci-config/roles/aci-fabric-onboarding/tasks/apply-oob-config.yml:4
fatal: [25.96.131.30]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'item' is undefined\n\nThe error appears to be in '/etc/ansible/Ansible-Aci-config/roles/aci-fabric-onboarding/tasks/apply-oob-config.yml': line 4, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Adding OBB address\n- name: Add OBB address\n ^ here\n"
}
That looks like an indentation error to me: with_items is indented as if it were one of the aci_rest parameters, but it needs to sit at the task level, at the same indentation as aci_rest:
# Adding OBB address
- name: Add OBB address
  delegate_to: localhost
  aci_rest:
    host: "{{ aci_ip }}"
    username: ansible
    private_key: ansible.key
    certificate_name: ansible
    use_ssl: yes
    validate_certs: false
    path: /api/node/mo/uni/tn-mgmt/mgmtp-default/oob-default/rsooBStNode-[topology/pod-1/node-"{{ item.node_id }}"].json
    method: post
    content:
      {
        "mgmtRsOoBStNode": {
          "attributes": {
            "tDn": "topology/pod-1/node-101",
            "addr": "25.96.131.61/27",
            "gw": "25.96.131.33",
            "status": "created"
          },
          "children": []
        }
      }
  with_items: "{{ oob_nodes }}"
Have a look at the documentation as well.
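As a side note (an assumption beyond the original answer), once the loop works, the hardcoded attributes in content could reuse the loop item and the fields defined in oob_nodes, along these lines:

    content:
      {
        "mgmtRsOoBStNode": {
          "attributes": {
            "tDn": "topology/pod-1/node-{{ item.node_id }}",
            "addr": "{{ item.obb_address }}/{{ item.obb_cidr }}",
            "gw": "{{ item.obb_gateway }}",
            "status": "created"
          },
          "children": []
        }
      }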

Using url module with jinja2 templates

I know how to process jinja2 template files and let them create files. I also know how to POST to webservices using the uri module.
For now I use some code like this, which successfully posts hardcoded JSON to my remote service:
tasks:
  - name: GSA app definition
    uri:
      url: "http://localhost:8764/api/apps?relatedObjects=false"
      method: POST
      force_basic_auth: yes
      user: "{{ admin_name }}"
      password: "{{ admin_pass }}"
      body_format: json
      body: "{\"name\":\"My new app\", \"description\":\"A really great new app\" }"
      follow_redirects: all
      status_code: 200
      timeout: 15
    register: app_gsa_cfg
But the JSON is static. How can I process a jinja2 template and POST its content? I would prefer not to create temporary files on disk and POST those; what I am looking for is a direct connection, or an approach that puts the template processing result into a string.
For starters, a jinja2 template could look like this; later I will add variables too:
{#
This file creates the basic GSA app in Fusion. See https://doc.lucidworks.com/fusion-server/4.2/reference-guides/api/apps-api.html#create-a-new-app for details
#}
{
  "name": "GSA",
  "description": "Contains all configuration specific to the migrated GSA legacy searches"
}
(I know that this has little advantage over static JSON included in the playbook, but it is easier to edit and gives me the opportunity to have (jinja-style) comments in JSON, which is normally not possible.)
In my case, I have an API and do the following:
- name: Change API Status
  uri:
    url: "{{ enpoint }}/v1/requests/{{ whatever }}"
    method: PATCH
    user: "{{ tokenid }}"
    password: x
    headers:
      X-4me-Account: "myaccount"
    body: '{ "status":"{{ reqstatus }}" }'
    body_format: json
    status_code:
      - 201
      - 200
    force_basic_auth: true
    validate_certs: false
    return_content: true
The reqstatus variable then controls the status value that gets sent.
You can even keep the whole payload as YAML, import it into a variable and convert it with filters: {{ some_variable | to_json }}.
Note: have a look at the body formatting above; it avoids escaping quotes, which helps.
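A minimal sketch of that last suggestion, reusing the variable names from the question (gsa_app_body is a hypothetical variable holding the payload as YAML):

vars:
  gsa_app_body:
    name: "My new app"
    description: "A really great new app"
tasks:
  - name: GSA app definition
    uri:
      url: "http://localhost:8764/api/apps?relatedObjects=false"
      method: POST
      force_basic_auth: yes
      user: "{{ admin_name }}"
      password: "{{ admin_pass }}"
      body_format: json
      # to_json serializes the YAML structure into the JSON string the API expects
      body: "{{ gsa_app_body | to_json }}"
      status_code: 200
      timeout: 15
    register: app_gsa_cfg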
It makes no sense to create a file with jinja2 if you are not going to copy it to a remote host. Ansible supports jinja natively, and its strength is the possibility of using plugins for better maintainability. There is no point in the template (or win_template) modules unless (as said) you copy the resulting file somewhere. Look at this example:
---
- name: Adhoc Jinja
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    mytemplate:
      - name: "GSA"
        description: "Contains all configuration specific to the migrated GSA legacy searches"
      - name: "Another Name"
        description: "Contains Another Var"
  tasks:
    - name: Read Vars Loop
      debug:
        msg: "{{ item | to_json }}"
      with_items: "{{ mytemplate }}"

    - name: Include Vars
      include_vars: adhocjinja2.yml

    - name: Read Vars Loop
      debug:
        msg: "{{ item | to_json }}"
      with_items: "{{ mytemplate }}"
And adhocjinja2.yml:
mytemplate:
  - name: "GSA2"
    description: "Contains all configuration specific to the migrated GSA legacy searches"
  - name: "Another Name 2"
    description: "Contains Another Var"
The output is:
TASK [Read Vars Loop] **************************************************************************************
ok: [localhost] => (item={'name': 'GSA', 'description': 'Contains all configuration specific to the migrated GSA legacy searches'}) => {
"msg": "{\"name\": \"GSA\", \"description\": \"Contains all configuration specific to the migrated GSA legacy searches\"}"
}
ok: [localhost] => (item={'name': 'Another Name', 'description': 'Contains Another Var'}) => {
"msg": "{\"name\": \"Another Name\", \"description\": \"Contains Another Var\"}"
}
TASK [Include Vars] ****************************************************************************************
ok: [localhost]
TASK [Read Vars Loop] **************************************************************************************
ok: [localhost] => (item={'name': 'GSA2', 'description': 'Contains all configuration specific to the migrated GSA legacy searches'}) => {
"msg": "{\"name\": \"GSA2\", \"description\": \"Contains all configuration specific to the migrated GSA legacy searches\"}"
}
ok: [localhost] => (item={'name': 'Another Name 2', 'description': 'Contains Another Var'}) => {
"msg": "{\"name\": \"Another Name 2\", \"description\": \"Contains Another Var\"}"
}
You can manage your variables as you want and create your JSON on the fly, as Ansible has jinja and JSON at its heart.
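If the payload really has to live in a .j2 file (for instance to keep the jinja-style comments from the question), the template lookup renders it into a string on the control node without writing any temporary file; a sketch, with the template filename being an assumption:

- name: GSA app definition from a template rendered in memory
  uri:
    url: "http://localhost:8764/api/apps?relatedObjects=false"
    method: POST
    force_basic_auth: yes
    user: "{{ admin_name }}"
    password: "{{ admin_pass }}"
    body_format: json
    # the template lookup renders the file on the controller; the result is a plain string
    body: "{{ lookup('template', 'gsa_app.json.j2') }}"
    status_code: 200
    timeout: 15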

ansible 2.0.0-0.3.beta1 ec2_vpc_route_table ec2_vpc_subnet

I am creating an ansible playbook using version 2.0.0-0.3.beta1.
I'd like to get the subnet id after I create a subnet. I'm referring to the official ansible docs: http://docs.ansible.com/ansible/ec2_vpc_route_table_module.html
- name: Create VPC Public Subnet
  ec2_vpc_subnet:
    state: present
    resource_tags: '{"Name":"{{ prefix }}_subnet_public_0"}'
    vpc_id: "{{ vpc.vpc_id }}"
    az: "{{ az0 }}"
    cidr: 172.16.0.0/24
  register: public

- name: Create Public Subnet Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    tags:
      Name: Public
    subnets:
      - "{{ public.subnet_id }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"
After running the playbook I received the following error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! 'dict object' has no attribute 'subnet_id'"}
Try using public.subnet.id instead of public.subnet_id.
It's useful to debug by running this task:
- debug: msg="{{ public }}"
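With that change, the route table task would reference the registered result like this (sketch applying the suggestion above):

- name: Create Public Subnet Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    tags:
      Name: Public
    subnets:
      - "{{ public.subnet.id }}"   # the registered result exposes the subnet under 'subnet'
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"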
