Ansible: how to set_fact with a condition, or set a var based on a condition

I have 3 types of servers: dev, qa and prod. I need to send files to server-specific home directories, i.e.:
Dev Home Directory: /dev/home
QA Home Directory: /qa/home
PROD Home Directory: /prod/home
I have vars set as booleans to determine the server type and am thinking of using set_fact with a condition to assign home directories for the servers. My playbook looks like this:
---
- hosts: localhost
  var:
    dev: "{{ True if <hostname matches dev> | else False }}"
    qa: "{{ True if <hostname matches qa> | else False }}"
    prod: "{{ True if <hostname matches prod> | else False }}"
  tasks:
    - set_facts:
        home_dir: "{{'/dev/home/' if dev | '/qa/home' if qa | default('prod')}}"
However, when I ran the playbook, I got a template error: "expected token 'name', got string". Anyone know what I did wrong? Thanks!

Use the match test. For example, the playbook
shell> cat pb.yml
- hosts: localhost
  vars:
    dev_pattern: '^dev_.+$'
    qa_pattern: '^qa_.+$'
    prod_pattern: '^prod_.+$'
    dev: "{{ hostname is match(dev_pattern) }}"
    qa: "{{ hostname is match(qa_pattern) }}"
    prod: "{{ hostname is match(prod_pattern) }}"
  tasks:
    - set_fact:
        home_dir: /prod/home
    - set_fact:
        home_dir: /dev/home
      when: dev|bool
    - set_fact:
        home_dir: /qa/home
      when: qa|bool
    - debug:
        var: home_dir
gives (abridged)
shell> ansible-playbook pb.yml -e hostname=dev_007
home_dir: /dev/home
Notes:
The variable prod is not used because /prod/home is the default.
/prod/home is assigned to home_dir first because it is the default; the next tasks can conditionally overwrite home_dir.
Without the variable hostname defined, the playbook will crash.
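If the playbook may be run without hostname being passed, one option (my own addition, not part of the original answer) is to give the variable a default before the test, e.g.
dev: "{{ hostname|default('') is match(dev_pattern) }}"
An empty string matches none of the patterns, so home_dir stays at the default /prod/home.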
A simpler solution that gives the same result is to create a dictionary of 'pattern: home_dir' pairs. For example,
- hosts: localhost
  vars:
    home_dirs:
      dev_: /dev/home
      qa_: /qa/home
      prod_: /prod/home
  tasks:
    - set_fact:
        home_dir: "{{ home_dirs|
                      dict2items|
                      selectattr('key', 'in', hostname)|
                      map(attribute='value')|
                      list|
                      first|default('/prod/home') }}"
    - debug:
        var: home_dir
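Run with the same extra variable as before, this should again report (output not shown in the original):
shell> ansible-playbook pb.yml -e hostname=dev_007
home_dir: /dev/home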

Adding an alternative method to achieve this. This kind of extends the group_vars suggestion given by @mdaniel in his comment.
Ansible has a ready-made mechanism to build the variables based on the inventory hosts and groups. If you organize your inventory, you can avoid a lot of complication in trying to match host patterns.
Below is a simplified example; please go through the link above for more options.
Consider an inventory file /home/user/ansible/hosts:
[dev]
srv01.dev.example
srv02.dev.example
[qa]
srv01.qa.example
srv02.qa.example
[prod]
srv01.prod.example
srv02.prod.example
Using group_vars:
Then you can have the below group_vars files in /home/user/ansible/group_vars/ (matching the inventory group names):
dev.yml
qa.yml
prod.yml
In dev.yml:
home_dir: "/dev/home"
In qa.yml:
home_dir: "/qa/home"
In prod.yml:
home_dir: "/prod/home"
Using host_vars:
Or you can have variables specific to hosts in the host_vars directory /home/user/ansible/host_vars/:
srv01.dev.example.yml
srv01.prod.example.yml
# and so on
In srv01.dev.example.yml:
home_dir: "/dev/home"
In srv01.prod.example.yml:
home_dir: "/prod/home"
These variables will be picked up based on which hosts you run the playbook against, for example with the below playbook:
---
- hosts: dev
  tasks:
    - debug:
        var: home_dir    # will be "/dev/home"

- hosts: prod
  tasks:
    - debug:
        var: home_dir    # will be "/prod/home"

- hosts: srv01.dev.example
  tasks:
    - debug:
        var: home_dir    # will be "/dev/home"

- hosts: srv01.prod.example
  tasks:
    - debug:
        var: home_dir    # will be "/prod/home"
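The group_vars and host_vars directories are picked up automatically when the playbook is run against that inventory, e.g. (site.yml here is just an illustrative playbook name):
ansible-playbook -i /home/user/ansible/hosts site.yml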

Related

Ansible playbook: print custom message if hosts variable is not defined

The simple playbook
---
- name: NTP client
  # hosts: all:!ntpserver
  hosts: "{{ target_hosts }}"
  roles:
    - ntp_client
can be executed using
ansible-playbook -i hosts.yml -e target_hosts=<target host> <path to playbook> --ask-become-pass
This works just fine when target_hosts is set. In case target_hosts isn't set, it obviously fails with
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'target_hosts' is undefined
How do I print a custom message such as
The variable 'target_hosts' is not set, use e.g. '-e target_hosts=all:!ntpserver'.
Note, I am aware that a default can be set via a hosts line like
hosts: "{{ target_hosts | default('all:!ntpserver') }}"
Add a localhost play for validating input arguments as given below. This will fail with a custom message if the 'target_hosts' var is not defined.
---
- name: Validate target_hosts variable
  hosts: localhost
  tasks:
    - fail:
        msg: The variable 'target_hosts' is not set, use e.g. '-e target_hosts=all:!ntpserver'.
      when: target_hosts is not defined

- name: NTP client
  # hosts: all:!ntpserver
  hosts: "{{ target_hosts }}"
  roles:
    - ntp_client
Use the filter mandatory. For example,
- hosts: "{{ target_hosts|mandatory('The variable target_hosts is mandatory') }}"
Example of a complete playbook
shell> cat pb.yml
- hosts: "{{ target_hosts|mandatory(err001) }}"
vars:
err001: >-
The variable 'target_hosts' is not set,
use e.g. '-e target_hosts=all:!ntpserver'
tasks:
- debug:
msg: hello
gives
shell> ansible-playbook pb.yml
ERROR! The variable 'target_hosts' is not set, use e.g. '-e target_hosts=all:!ntpserver'

Ansible: Get Variable with inventory_hostname

I have the following passwords file vault.yml:
---
server1: "pass1"
server2: "pass2"
server3: "pass3"
I am loading these values in a variable called passwords:
- name: Get Secrets
  set_fact:
    passwords: "{{ lookup('template', './vault.yml')|from_yaml }}"
  delegate_to: localhost

- name: debug it
  debug:
    var: passwords.{{ inventory_hostname }}
The result of the debugging task shows me the result I want to get: The password for the specific host.
But if I set the following in a variables file:
---
ansible_user: root
ansible_password: passwords.{{ inventory_hostname }}
This will not give me the desired result. The ansible_password takes "passwords" literally and not as a variable.
How can I achieve the same result I got when debugging the passwords.{{ inventory_hostname }}?
Regarding the part
... if I set the following in a variables file ...
I am not sure, since I am missing some information about your use case and data flow. However, in general the syntax ansible_password: "{{ PASSWORDS[inventory_hostname] }}" might work for you.
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    PASSWORDS:
      SERVER1: "pass1"
      SERVER2: "pass2"
      SERVER3: "pass3"
      localhost: "pass_local"
  tasks:
    - name: Debug var
      debug:
        var: PASSWORDS
    - name: Set Fact 'ansible_password'
      set_fact:
        ansible_password: "{{ PASSWORDS[inventory_hostname] }}"
    - name: Debug var
      debug:
        var: ansible_password
That way you can access an element by name.
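Applied to the vars file from the question, the same bracket notation should work as long as the whole expression is quoted, so it is evaluated as a Jinja2 template rather than taken literally. One option (a sketch on my part, not from the original answer) is to do the lookup inline so the vars file does not depend on a previously set fact:
---
ansible_user: root
# index the vaulted mapping by the current inventory host
ansible_password: "{{ (lookup('template', './vault.yml') | from_yaml)[inventory_hostname] }}"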

Ansible: Retrieve Data from inventory

I need to retrieve the IP of bob_server from the inventory file. I am not clear on what combination of filter, lookup, and when to use. Depending on the inventory file, the bob_server and alice_server names can change, but app_type won't change. My playbook logic is obviously wrong. Can someone guide me to the correct way to fetch the IP address when app_type = bob?
My current Inventory file:
---
all:
  hosts:
  children:
    bob_server:
      hosts: 10.192.2.6
      vars:
        app_type: bob
    alice_server:
      hosts: 10.192.2.53
      vars:
        app_type: alice
My Playbook
---
- hosts: localhost
  name: Retrive data
  tasks:
    - name: Set Ambari IP
      set_fact:
        ambariIP: "{{ lookup('hosts', children) }}"
      when: "hostvars[app_type] == 'bob'"
Given the inventory
shell> cat hosts-01
---
all:
  hosts:
  children:
    bob_server:
      hosts: 10.192.2.6
      vars:
        app_type: bob
    alice_server:
      hosts: 10.192.2.53
      vars:
        app_type: alice
The simple option is using ansible-inventory, e.g.
- hosts: localhost
  tasks:
    - command: ansible-inventory -i hosts-01 --list
      register: result
    - set_fact:
        my_inventory: "{{ result.stdout|from_yaml }}"
    - debug:
        var: my_inventory.bob_server.hosts
gives
my_inventory.bob_server.hosts:
- 10.192.2.6
If you want to parse the file on your own, read it into a dictionary and flatten the paths, e.g. (install the ansible.utils collection first: ansible-galaxy collection install ansible.utils)
- include_vars:
    file: hosts-01
    name: my_hosts
- set_fact:
    my_paths: "{{ lookup('ansible.utils.to_paths', my_hosts) }}"
- debug:
    var: my_paths
gives
my_paths:
  all.children.alice_server.hosts: 10.192.2.53
  all.children.alice_server.vars.app_type: alice
  all.children.bob_server.hosts: 10.192.2.6
  all.children.bob_server.vars.app_type: bob
  all.hosts: null
Now select the keys ending in bob_server.hosts
- set_fact:
    bob_server_hosts: "{{ my_paths|
                          dict2items|
                          selectattr('key', 'match', '^.*bob_server\\.hosts$')|
                          items2dict }}"
gives
bob_server_hosts:
  all.children.bob_server.hosts: 10.192.2.6
and select the IPs
- set_fact:
    bob_server_ips: "{{ bob_server_hosts.values()|list }}"
gives
bob_server_ips:
- 10.192.2.6
The inventory is missing the concept of groups. See Inventory basics: formats, hosts, and groups. Usually, the value of children is a group of hosts. In this inventory, the value of children is a single host. This is conceptually wrong but still valid, e.g.
- hosts: bob_server
  gather_facts: false
  tasks:
    - debug:
        var: inventory_hostname
gives
shell> ansible-playbook -i hosts-01 playbook.yml
...
TASK [debug] ****************************************************
ok: [10.192.2.6] =>
  inventory_hostname: 10.192.2.6
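For comparison, a conventionally grouped version of the same data might look like this (the hostnames bob01 and alice01 are made up for illustration). The IPs then live in ansible_host, and the play above would report the hostname instead of the IP:
---
all:
  children:
    bob_server:
      hosts:
        bob01:
          ansible_host: 10.192.2.6
      vars:
        app_type: bob
    alice_server:
      hosts:
        alice01:
          ansible_host: 10.192.2.53
      vars:
        app_type: alice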

How to set a fact which is visible on all hosts in an Ansible role

I'm setting a fact in a role:
- name: Check if manager already configured
  shell: >
    docker info | perl -ne 'print "$1" if /Swarm: (\w+)/'
  register: swarm_status

- name: Init cluster
  shell: >-
    docker swarm init
    --advertise-addr "{{ ansible_default_ipv4.address }}"
  when: "'active' not in swarm_status.stdout_lines"

- name: Get worker token
  shell: docker swarm join-token -q worker
  register: worker_token_result

- set_fact:
    worker_token: "{{ worker_token_result.stdout }}"
Then I want to access worker_token on other hosts. Here's my main playbook; the fact is defined in the swarm-master role:
- hosts: swarm_cluster
  become: yes
  roles:
    - docker

- hosts: swarm_cluster:&manager
  become: yes
  roles:
    - swarm-master

- hosts: swarm_cluster:&node
  become: yes
  tasks:
    - debug:
        msg: "{{ worker_token }}"
I'm getting an undefined variable error. How do I make it visible globally?
Of course, it works perfectly if I run debug on the same host.
If your goal is just to access worker_token from another host, you can use the hostvars variable and iterate through the group where you've defined your variable, like this:
- hosts: swarm_cluster:&node
  tasks:
    - debug:
        msg: "{{ hostvars[item]['worker_token'] }}"
      with_items: "{{ groups['manager'] }}"
If your goal is to define the variable globally, you can add a play that defines the variable on all hosts, like this:
- hosts: all
  tasks:
    - set_fact:
        worker_token_global: "{{ hostvars[item]['worker_token'] }}"
      with_items: "{{ groups['manager'] }}"

- hosts: swarm_cluster:&node
  tasks:
    - debug:
        var: worker_token_global
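A variation on the same idea (my own sketch, not from the original answer) is to take the token from the first manager host instead of looping, which avoids repeatedly overwriting the fact when there are several managers:
- hosts: all
  tasks:
    - set_fact:
        worker_token_global: "{{ hostvars[groups['manager'] | first]['worker_token'] }}"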

Return Variable from Included Ansible Playbook

I have seen how to register variables within tasks in an Ansible playbook and then use those variables elsewhere in the same playbook, but can you register a variable in an included playbook and then access that variable back in the original playbook?
Here is what I am trying to accomplish:
This is my main playbook:
- include: sub-playbook.yml job_url="http://some-jenkins-job"

- hosts: localhost
  roles:
    - some_role
sub-playbook.yml:
---
- hosts: localhost
  tasks:
    - name: Collect info from Jenkins Job
      script: whatever.py --url "{{ job_url }}"
      register: jenkins_artifacts
I'd like to be able to access the jenkins_artifacts results back in main_playbook if possible. I know you can access it from other hosts in the same playbook like this: "{{ hostvars['localhost']['jenkins_artifacts'].stdout_lines }}"
Is it the same idea for sharing across playbooks?
I'm confused about what this question is asking. Just use the variable name jenkins_artifacts:
- include: sub-playbook.yml job_url="http://some-jenkins-job"

- hosts: localhost
  tasks:
    - debug:
        var: jenkins_artifacts
This might seem complicated, but I love doing this in my playbooks:
rc defines the name of the variable which contains the return value
ar gives the arguments to the included tasks
master.yml:
- name: verify_os
  include_tasks: "verify_os/main.yml"
  vars:
    verify_os:
      rc: "isos_present"
      ar:
        image: "{{ os.ar.to_os }}"
verify_os/main.yml:
---
- name: check image on device
  ios_command:
    commands:
      - "sh bootflash: | inc {{ verify_os.ar.image }}"
  register: image_check

- name: check if available
  shell: "printf '{{ image_check.stdout_lines[0][0] }}\n' | grep {{ verify_os.ar.image }} | wc -l"
  register: image_available
  delegate_to: localhost

- set_fact: { "{{ verify_os.rc }}": "{{ true if image_available.stdout == '1' else false }}" }
...
I can now use the isos_present variable anywhere in the master.yml to access the returned value.
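For instance, a follow-up task in master.yml could branch on the returned value (the task below is only an illustration of mine, not part of the original role):
- name: report missing image
  debug:
    msg: "image {{ os.ar.to_os }} is not on the device yet"
  when: not isos_present | bool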
