I am working on a playbook (for use in AWX) to handle some backend processing for one of our web apps.
One of the extra variables passed to the playbook via a REST call to AWX is used to pass the hosts
to the playbook
hosts: {{target}}
target can be a single server or a list of servers.
Question: how can I use patterns to skip a host if it is a member of a given inventory group?
e.g. I want the playbook to skip a server if it's in the staging group in the inventory.
I have tried the following:
hosts: "{{target}}:!staging"
This works when a single server is passed as the target var, but it fails when called with a list.
This should work, provided you use : as the delimiter between your hosts rather than ,.
The syntax host1:host2:host3:!staging works; host1,host2,host3:!staging, on the other hand, generates a warning
[WARNING]: Could not match supplied host pattern, ignoring: host3:!staging
and this could well be the issue you are facing too.
The two syntaxes are documented in the Ansible patterns documentation (Patterns: targeting hosts and groups).
Given the inventory:
all:
  hosts:
    host1:
    host2:
    host3:
  children:
    staging:
      hosts:
        host2:
And the playbook:
- hosts: host1:host2:host3:!staging
  gather_facts: no

  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
This yields the recap:
PLAY [host1:host2:host3:!staging] *********************************************************************************
TASK [debug] ******************************************************************************************************
ok: [host1] => {
    "msg": "host1"
}
ok: [host3] => {
    "msg": "host3"
}
PLAY RECAP ********************************************************************************************************
host1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And it gives the exact same recap when using the playbook:
- hosts: "{{ target }}:!staging"
gather_facts: no
tasks:
- debug:
msg: "{{ inventory_hostname }}"
Run via:
ansible-playbook play.yml -i inventory.yml -e "target=host1:host2:host3"
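If AWX ever passes target as a comma-separated string (e.g. host1,host2,host3), a minimal sketch, assuming , and : are the only separators ever used, is to normalize the variable before applying the exclusion:
- hosts: "{{ target | replace(',', ':') }}:!staging"
  gather_facts: no

  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
With that, both delimiter styles end up as a colon-separated pattern to which the !staging exclusion is applied as a whole.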
I am trying to re-design a playbook in which I use idiomatic Ansible to create new configuration objects. I basically start the playbook with the following command:
ansible-playbook test-playbook.yaml -v -i roles/test-role/vars/inventory
The playbook calls various roles, which in turn use delegations and override the inventory_hostname variable in order to use the correct target hostname. For example, in this playbook snippet:
- name: "Upload certificate"
import_role:
name: certificate_manager
tasks_from: certificate_upload
delegate_to: localhost
vars:
inventory_hostname: "{{ target_hostname }}"
This of course uses the files contained within the roles/test-role/vars/inventory folder as the inventory; that folder contains the configuration.yml file, which specifies the details of the configuration object to be created.
If useful, the configuration.yml file is formatted like so:
virtual_servers:
  hosts:
    test-1.test.local:
      name: test-1.test.local
      type: 'standard'
      description: 'Test'
      ...
This, however, rules out using the actual inventory, which also contains other useful variables, especially since the playbook also uses tasks from other roles.
Since I think using the actual inventory (without the -i override) is the better approach, especially because it will be updated centrally when the need arises, is there a better (and maybe simpler) way to use the configuration file as the hosts definition without impacting the actual inventory?
Thanks in advance for any inputs you might have.
Q: "Override the inventory_hostname variable in order to use the correct target hostname."
A: The variable inventory_hostname is a special variable provided for your convenience. It does not make sense to override it because this variable doesn't control any connection plugin. Test it. For example,
- hosts: test_11:test_12:test_13
  tasks:
    - command: hostname
      register: hostname
      vars:
        inventory_hostname: foo.bar.baz
    - debug:
        var: hostname.stdout
gives
PLAY [test_11:test_12:test_13] ***************************************************************
TASK [command] *******************************************************************************
changed: [test_11]
changed: [test_12]
changed: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
  hostname.stdout: test_11
ok: [test_12] =>
  hostname.stdout: test_12
ok: [test_13] =>
  hostname.stdout: test_13
PLAY RECAP ***********************************************************************************
test_11: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_12: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
You can see that the overridden inventory_hostname did not influence which remote hosts were contacted.
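If the goal in the question above is to run the role's tasks against a host chosen at runtime, a minimal sketch (assuming target_hostname holds a name Ansible can connect to, e.g. a host defined in the inventory) would delegate to that host instead of overriding inventory_hostname:
- name: "Upload certificate"
  import_role:
    name: certificate_manager
    tasks_from: certificate_upload
  # delegate_to controls the connection; overriding inventory_hostname does not
  delegate_to: "{{ target_hostname }}"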
Is it possible during Ansible execution to add another host in the play, without starting a new play?
I am aware of the add_host module, but that requires the start of a new play to add the host, which is undesired.
No. By design, it's not possible to add hosts to an 'in-flight' play. Quoting from the Summary of Ansible bug #59401:
By design, the in-flight play will not start running tasks on newly-added hosts, but it will stop running tasks on hosts that have disappeared. Newly-created hosts from an inventory refresh are immediately visible in ansible_play_hosts, even though they're not executing.
Notes
The bug claims refresh_inventory and add_host should have the same effects.
One might expect that the option refresh_inventory of the module meta does the job. The scenario would be:
Start a play
Modify the source of the inventory
Run - meta: refresh_inventory
Unfortunately, the example with the INI inventory below shows that this doesn't work. The host host03 is added to the inventory and to the list ansible_play_hosts_all as well. But the following debug task doesn't run on this host, and the play recap doesn't include it either.
shell> cat hosts
[test]
host01
host02
The playbook below
shell> cat playbook.yml
- hosts: test
  gather_facts: false
  tasks:
    - debug:
        var: ansible_play_hosts_all
      run_once: true

    - community.general.ini_file:
        path: hosts
        section: test
        option: "{{ item.host }}"
        state: "{{ item.state }}"
        allow_no_value: true
      loop:
        - {host: host03, state: present}
      run_once: true
      delegate_to: localhost

    - meta: refresh_inventory

    - debug:
        var: ansible_play_hosts_all
      run_once: true

    - debug:
        var: inventory_hostname
gives
shell> ansible-playbook -i hosts playbook.yml
PLAY [test] **********************************************************************************
TASK [debug] *********************************************************************************
ok: [host01] =>
  ansible_play_hosts_all:
  - host01
  - host02
TASK [community.general.ini_file] ************************************************************
changed: [host01 -> localhost] => (item={'host': 'host03', 'state': 'present'})
TASK [meta] **********************************************************************************
TASK [debug] *********************************************************************************
ok: [host01] =>
  ansible_play_hosts_all:
  - host01
  - host02
  - host03
TASK [debug] *********************************************************************************
ok: [host01] =>
  inventory_hostname: host01
ok: [host02] =>
  inventory_hostname: host02
PLAY RECAP ***********************************************************************************
host01 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host02 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
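If the newly added host actually needs to run tasks, a workaround is to put those tasks into a second play in the same playbook: a new play re-evaluates its hosts pattern against the refreshed inventory, so host03 is picked up there. A minimal sketch, appended to the playbook above:
- hosts: test
  gather_facts: false
  tasks:
    # This play starts after the refresh, so host03 is now part of the 'test' group
    - debug:
        var: inventory_hostname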
When I run this playbook, it's not finding the ansible_uptime_seconds variable, but ansible hostname -m setup does show this variable. I am using Ansible 2.9.23.
- hosts: all
  become: yes
  become_method: sudo
  gather_facts: yes
  tasks:
    - name: Print all available facts
      ansible.builtin.debug:
        var: ansible_facts
I'm getting this message:
'ansible_uptime_seconds' is undefined
How do I get this value in the playbook?
The fact name is uptime_seconds when facts are collected without the setup module; however, it is ansible_uptime_seconds when collected with the setup module.
---
- name: Sample playbook
  connection: local
  # gather_facts: false
  hosts: localhost
  tasks:
    - name: print uptime sec
      debug:
        msg: "{{ ansible_facts.uptime_seconds }}"
Output of the above playbook is:
PLAY [Sample playbook] *********************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************
ok: [localhost]
TASK [print uptime sec] **********************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "172603"
}
PLAY RECAP *********************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I'm new to Ansible and am trying to fetch hosts from the config.yaml file below, instead of from the inventory, to run tasks on those hosts. How can I do this in the main playbook?
service:
  web-app:
    common:
      tomcat:
        port: 80
  hosts:
    all:
      - abc.com
      - pqr.com
Is there a way to access abc.com and pqr.com in my playbook, if I have to run certain tasks on those servers?
The base: loading the data
The base ansible functions needed for the following examples are:
The file lookup plugin to load the content of a file present on the controller.
The from_yaml filter to read the file content as yaml formatted data
For both examples below, I added your above yaml example (after fixing the indentation issues) to files/service_config.yml. Simply adjust the file name if yours is in a files subdir, or use the full path to the file if it is outside of your project.
Combining the above, you can get your list of hosts with the following jinja2 expression.
{{ (lookup('file', 'service_config.yml') | from_yaml).service.hosts.all }}
Note: if your custom yaml file is not present on your controller, you will first need to get the data locally by using the slurp or fetch modules.
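For example, a minimal sketch with slurp, assuming the file lives at /etc/myapp/service_config.yml on the remote host (a hypothetical path), could be:
- name: Read the custom yaml file from the remote host
  slurp:
    src: /etc/myapp/service_config.yml   # hypothetical remote path
  register: remote_config

- name: Extract the host list from the slurped content
  set_fact:
    my_custom_hosts: "{{ (remote_config.content | b64decode | from_yaml).service.hosts.all }}"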
Use in-memory inventory
In this example, I create a dynamic group custom_group by running an add_host task in a play targeted at localhost, and later target that custom group in the next play. This is probably the best option if you have a large set of tasks to run on those hosts.
---
- name: Prepare environment
  hosts: localhost
  gather_facts: false

  vars:
    # Replace with full path to actual file
    # if this one is not in your 'files' subdir
    my_config_file: service_config.yml
    my_custom_hosts: "{{ (lookup('file', my_config_file) | from_yaml).service.hosts.all }}"

  tasks:
    - name: Create dynamic group from custom yaml file
      add_host:
        name: "{{ item }}"
        group: custom_group
      loop: "{{ my_custom_hosts }}"

- name: Play on new custom group
  hosts: custom_group
  gather_facts: false

  tasks:
    - name: Show we can actually contact the group
      debug:
        var: inventory_hostname
Which gives:
PLAY [Prepare environment] **********************************************************************************************************************************************************************************************************************************************
TASK [Create dynamic group from custom yaml file] ***********************************************************************************************************************************************************************************************************************
changed: [localhost] => (item=abc.com)
changed: [localhost] => (item=pqr.com)
PLAY [Play on new custom group] *****************************************************************************************************************************************************************************************************************************************
TASK [Show we can actually contact the group] ***************************************************************************************************************************************************************************************************************************
ok: [abc.com] => {
    "inventory_hostname": "abc.com"
}
ok: [pqr.com] => {
    "inventory_hostname": "pqr.com"
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
abc.com : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
pqr.com : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Use delegation
In the following example, I use task delegation to change the target host inside a play targeted at other hosts.
This is better suited if you have only a few tasks to run on the custom hosts and/or you need facts from the current play hosts to run those tasks. See the load balancer example in the delegation documentation for a more in-depth explanation.
---
- name: Delegation example
  hosts: localhost
  gather_facts: false

  vars:
    # Replace with full path to actual file
    # if this one is not in your 'files' subdir
    my_config_file: service_config.yml
    my_custom_hosts: "{{ (lookup('file', my_config_file) | from_yaml).service.hosts.all }}"

  tasks:
    - name: Task played on our current target host list
      debug:
        var: inventory_hostname

    - name: Fake task delegated to our list of custom host
      # Note: we play it only once so it does not repeat
      # if the play `hosts` param is a group of several targets.
      # This is for example only and is not really delegating
      # anything in this case. Replace with your real life task.
      debug:
        msg: "I would run on {{ item }} with facts from {{ inventory_hostname }}"
      delegate_to: "{{ item }}"
      run_once: true
      loop: "{{ my_custom_hosts }}"
Which gives:
PLAY [Delegation example] ***********************************************************************************************************************************************************************************************************************************************
TASK [Task played on our current target host list] **********************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "inventory_hostname": "localhost"
}
TASK [Fake task delegated to our list of custom host] *******************************************************************************************************************************************************************************************************************
ok: [localhost -> abc.com] => (item=abc.com) => {
    "msg": "I would run on abc.com with facts from localhost"
}
ok: [localhost -> pqr.com] => (item=pqr.com) => {
    "msg": "I would run on pqr.com with facts from localhost"
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I have the following folder structure, inspired by the best-practices section in Ansible's documentation:
my-playbook.yml
my-role
|
|── tasks
    |
    |── my-task.yml
I have tagged the tasks within the my-task.yml file, which is part of a role. I execute the playbook using ansible-playbook my-playbook.yml --tags "mytag". Unfortunately, all tasks are skipped. Can I only filter tasks that are directly part of the playbook?
Within my playbook, I do something like
- hosts: ansible_server
  connection: local
  gather_facts: no
  roles:
    - validate_properties
Thanks in advance!
What you should do is call the role from a task by using the include_role module. On that task you can apply tags. Take this playbook, for example:
---
- name: Tag role test
  hosts: local
  connection: local
  gather_facts: no

  tasks:
    - include_role:
        name: debug
      tags:
        - dont_run

    - debug:
        msg: Solo shot first
      tags:
        - run
Where the debug role consists of just a task that prints Hello, world!.
If you call this playbook directly you get this output:
PLAY [Tag role test]
TASK [debug : debug]
ok: [localhost] =>
  msg: Hello, world!
TASK [debug]
ok: [localhost] =>
  msg: Solo shot first
PLAY RECAP
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
But, if you exclude the dont_run task like this:
ansible-playbook tag_roles.yml --skip-tags dont_run
This is the output:
PLAY [Diff test]
TASK [debug]
ok: [localhost] =>
  msg: Solo shot first
PLAY RECAP
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I hope it helps.
You also have to tag the subtasks with the tag you want to run:
Main Task:
- name: "test tags on sub task"
include_tasks: subtask.yml
with_items: "{{ myList }}"
loop_control:
label: item
tags: test
Sub task:
- debug: msg="Sub Task"
  tags: test
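You can then limit the run to those tasks, for example (playbook.yml here is just a placeholder for your playbook file):
ansible-playbook playbook.yml --tags test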