When you don't have any hosts in the inventory, running a playbook only produces a warning:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Is there a way to make that an error instead of a warning?
I found out that there is this parameter in ansible.cfg:
[inventory]
unparsed_is_failed = True
but it only returns an error when the inventory file you are trying to use does not exist; it does not look at the content.
One simple solution is:
Create the playbook "main.yml" like:
---
# Check first if the supplied host pattern {{ RUNNER.HOSTNAME }} matches with the inventory
# or forces otherwise the playbook to fail (for Jenkins)
- hosts: localhost
  vars_files:
    - "{{ jsonfilename }}"
  tasks:
    - name: "Hostname validation | If OK, it will skip"
      fail:
        msg: "{{ RUNNER.HOSTNAME }} not found in the inventory group or hosts file {{ ansible_inventory_sources }}"
      when: RUNNER.HOSTNAME not in hostvars

# The main playbook starts
- hosts: "{{ RUNNER.HOSTNAME }}"
  vars_files:
    - "{{ jsonfilename }}"
  tasks:
    - Your tasks
    ...
    ...
    ...
Put your host variables in a json file "var.json":
{
  "RUNNER": {
    "HOSTNAME": "hostname-to-check"
  },
  "VAR1": {
    "CIAO": "CIAO"
  }
}
Run the command:
ansible-playbook main.yml --extra-vars="jsonfilename=var.json"
You can also adapt this solution as you like and pass the hostname directly with the command
ansible-playbook -i hostname-to-check, my_playbook.yml
but in this last case remember to put in your playbook:
hosts: all
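For example, a minimal sketch of such a playbook (the file name my_playbook.yml and the debug task are only illustrative; the trailing comma after the hostname is what makes Ansible treat the -i argument as an inline host list):
# my_playbook.yml -- run with: ansible-playbook -i hostname-to-check, my_playbook.yml
- hosts: all
  gather_facts: false
  tasks:
    - name: Show which host this play targets
      debug:
        var: inventory_hostname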
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Q: "Is there a way to make that Error instead of Warning?"
A: Yes, there is. Test it in the playbook. For example,
- hosts: localhost
  tasks:
    - fail:
        msg: "[ERROR] Empty inventory. No host available."
      when: groups.all|length == 0

- hosts: all
  tasks:
    - debug:
        msg: Playbook started
gives with an empty inventory
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
Example of a project for testing
shell> tree .
.
├── ansible.cfg
├── hosts
└── pb.yml
0 directories, 3 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
shell> cat hosts
shell> cat pb.yml
- hosts: localhost
  tasks:
    - fail:
        msg: "[ERROR] Empty inventory. No host available."
      when: groups.all|length == 0

- hosts: all
  tasks:
    - debug:
        msg: Playbook started
gives
shell> ansible-playbook pb.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [localhost] *****************************************************************************
TASK [fail] **********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
PLAY RECAP ***********************************************************************************
localhost: ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Q: "Still I am getting a warning: [WARNING]: provided hosts list is empty, ..."
A: Feel free to turn the warning off. See LOCALHOST_WARNING.
shell> ANSIBLE_LOCALHOST_WARNING=false ansible-playbook pb.yml
PLAY [localhost] *****************************************************************************
TASK [fail] **********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
PLAY RECAP ***********************************************************************************
localhost: ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
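If you prefer to keep this in the project configuration instead of the environment, the same setting can go into ansible.cfg. A sketch, assuming an Ansible version that supports the LOCALHOST_WARNING option (check the configuration reference for yours):
[defaults]
# silences "provided hosts list is empty, only localhost is available"
localhost_warning = False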
I have the following playbook:
- name: Get-IP
  hosts: webservers
  tasks:
    - debug: var=ansible_all_ipv4_addresses
      register: foo
    - debug:
        var: foo
    - local_action: lineinfile line= {{ foo }} path=/root/file2.txt
I added the debug task to make sure the variable foo is carrying the IP address, and it works correctly, but when I try to save it to a file on the local disk, the file remains empty. If I delete the file, I get an error that the file does not exist.
What's wrong with my playbook? Thanks.
Write the lines in a loop. For example,
shell> cat pb.yml
- hosts: webservers
  tasks:
    - debug:
        var: ansible_all_ipv4_addresses
    - lineinfile:
        create: true
        dest: /tmp/ansible_all_ipv4_addresses.webservers
        line: "{{ item }} {{ hostvars[item].ansible_all_ipv4_addresses|join(',') }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true
      delegate_to: localhost
gives
shell> ansible-playbook pb.yml
PLAY [webservers] ****************************************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [test_11]
ok: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
ansible_all_ipv4_addresses:
- 10.1.0.61
ok: [test_13] =>
ansible_all_ipv4_addresses:
- 10.1.0.63
TASK [lineinfile] ****************************************************************************
changed: [test_11 -> localhost] => (item=test_11)
changed: [test_11 -> localhost] => (item=test_13)
PLAY RECAP ***********************************************************************************
test_11: ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shell> cat /tmp/ansible_all_ipv4_addresses.webservers
test_11 10.1.0.61
test_13 10.1.0.63
According to your description and example, I understand your use case to be that you would like to gather Ansible facts about the remote nodes and cache them on the control node.
If the fact cache plugin is enabled
~/test$ grep fact_ ansible.cfg
fact_caching = yaml # or json
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 129600
a minimal example playbook
---
- hosts: test
  become: false
  gather_facts: true
  gather_subset:
    - "!all"
    - "!min"
    - "network"
  tasks:
    - name: Show Facts network
      debug:
        msg: "{{ ansible_facts }}"
will result in local files like
~/test$ grep -A1 _ipv4 /tmp/ansible/facts_cache/test.example.com
ansible_all_ipv4_addresses:
- 192.0.2.1
--
ansible_default_ipv4:
  address: 192.0.2.1
--
...
By following this approach there is no need to re-implement existing functionality; you end up with less code that is easier to maintain and less error-prone, and so on.
Furthermore,
Fact caching can improve performance. If you manage thousands of hosts, you can configure fact caching to run nightly, then manage configuration on a smaller set of servers periodically throughout the day. With cached facts, you have access to variables and information about all hosts even when you are only managing a small number of servers.
Similar Q&A
How to gather IP addresses of Remote Nodes?
I have created a directory with several sub-task files, but I'm having trouble making Ansible run all the tasks from inside the specified directory.
The script looks like this:
---
- hosts: localhost
  connection: local
  tasks:
    # tasks file for desktop
    - name: "LOADING ALL TASKS FROM THE 'SUB_TASKS' DIRECTORY"
      include_vars:
        dir: sub_tasks
        extensions:
          - 'yml'
And this is the output:
plbchk main.yml --check
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [localhost] ****************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************
ok: [localhost]
TASK [LOADING ALL TASKS FROM THE 'SUB_TASKS' DIRECTORY] *************************************************************
fatal: [localhost]: FAILED! => {"ansible_facts": {}, "ansible_included_var_files": [], "changed": false, "message": "/home/user/Documents/ansible-roles/desktop/tasks/sub_tasks/gnome_tweaks.yml must be stored as a dictionary/hash"}
PLAY RECAP **********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I've tried all sorts of ways to make it run the sub-tasks but to no avail.
I'd like to do it this way instead of creating one big file containing all the tasks. Is this possible?
include_vars is not for including tasks; it is for including vars (as its name suggests). Also, if you check the error message, it says "must be stored as a dictionary/hash":
fatal: [localhost]: FAILED! => {"ansible_facts": {}, "ansible_included_var_files": [], "changed": false, "message": "/home/user/Documents/ansible-roles/desktop/tasks/sub_tasks/gnome_tweaks.yml must be stored as a dictionary/hash"}
Solution:
You need to use include_tasks for what you are trying to do. Check out the include_tasks documentation.
Here is a complete, minimal working example: we build a list of the yaml or yml files present in the provided directory and then run include_tasks in a loop over all those files.
---
- name: Sample playbook
  connection: local
  gather_facts: false
  hosts: localhost
  tasks:
    - name: Find all the yaml files in the directory
      find:
        paths: /home/user/Documents/ansible-roles/desktop/tasks
        patterns: '*.yaml,*.yml'
        recurse: yes
      register: file_list

    - name: show the yaml files present
      debug: msg="{{ item }}"
      loop: "{{ file_list.files | map(attribute='path') | list }}"

    - name: Include task list in play
      include_tasks: "{{ item }}"
      loop: "{{ file_list.files | map(attribute='path') | list }}"
The script, running on a Linux host, should call some Windows hosts holding Oracle databases. Each Oracle database is in DNS with the name "db-[ORACLE_SID]".
Let's say you have a database with ORACLE_SID TEST02; it can be resolved as db-TEST02.
The complete script does some more stuff, but this example is sufficient to explain the problem.
The db-[SID] hostnames must be added as dynamic hosts to be able to parallelize the processing.
The problem is that oracle_databases is not passed to the second play. It works if I change the hosts from windows to localhost, but I need to analyze something first and get some data from the Windows hosts, so this is not an option.
Here is the script:
---
# ansible-playbook parallel.yml -e "databases=TEST01,TEST02,TEST03"
- hosts: windows
  gather_facts: false
  vars:
    ansible_connection: winrm
    ansible_port: 5985
    ansible_winrm_transport: kerberos
    ansible_winrm_kerberos_delegation: true
  tasks:
    - set_fact:
        database: "{{ databases.split(',') }}"

    - name: Add databases as hosts, to parallelize the shutdown process
      add_host:
        name: "db-{{ item }}"
        groups: oracle_databases
      loop: "{{ database | list }}"

    ##### just to check, what is in oracle_databases
    - name: show the content of oracle_databases
      debug:
        msg: "{{ item }}"
      with_inventory_hostnames:
        - oracle_databases

- hosts: oracle_databases
  gather_facts: true
  tasks:
    - debug:
        msg:
          - "Hosts, on which the playbook is running: {{ ansible_play_hosts }}"
        verbosity: 1
My inventory file is still small, but there will be more Windows hosts in the future:
[adminsw1#obelix oracle_change_home]$ cat inventory
[local]
localhost
[windows]
windows68
And the output
[adminsw1#obelix oracle_change_home]$ ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02"
/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
/usr/lib/python2.7/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.23) or chardet (2.2.1) doesn't match a supported version!
RequestsDependencyWarning)
PLAY [windows] *****************************************************************************************************************************
TASK [set_fact] ****************************************************************************************************************************
ok: [windows68]
TASK [Add databases as hosts, to parallelize the shutdown process] *************************************************************************
changed: [windows68] => (item=TEST01)
changed: [windows68] => (item=TEST02)
TASK [show the content of oracle_databases] ************************************************************************************************
ok: [windows68] => (item=db-TEST01) => {
"msg": "db-TEST01"
}
ok: [windows68] => (item=db-TEST02) => {
"msg": "db-TEST02"
}
PLAY [oracle_databases] ********************************************************************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************************************************************************
windows68 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
It might be that Ansible is not parsing the updated in-memory inventory, or that the host names are being malformed as it updates the inventory.
In this scenario, you can use the -vv or -vvvv parameter in your Ansible command to get extra logging.
This will give you a complete picture of what Ansible is actually doing as it tries to parse the hosts.
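For example, re-using the command from the question, only with increased verbosity:
ansible-playbook -vvvv para.yml -l windows68 -e "databases=TEST01,TEST02"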
I found out what the problem was. The playbook is restricted to the host "windows68" and therefore can't be run on the hosts added by the dynamic inventory.
It will work this way:
[adminsw1#obelix oracle_change_home]$ ansible-playbook para.yml -l windows68,oracle_databases -e "databases=TEST01,TEST02"
I'm new to Ansible and trying to fetch hosts from the config.yaml file below, instead of from the inventory, to run tasks on those hosts. How can I do this in the main playbook?
service:
  web-app:
    common:
      tomcat:
        port: 80
  hosts:
    all:
      - abc.com
      - pqr.com
Is there a way to access abc.com and pqr.com in my playbook, if I have to run certain tasks on those servers?
The base: loading the data
The base ansible functions needed for the following examples are:
- the file lookup plugin, to load the content of a file present on the controller
- the from_yaml filter, to read the file content as yaml formatted data
For both examples below, I added your above yaml example (after fixing the indentation issues) to files/service_config.yml. Simply change the name of the file in the examples if yours is different (a bare name works when the file is in a files subdir), or use the full path to the file if it is outside of your project.
Combining the above, you can get your list of hosts with the following jinja2 expression.
{{ (lookup('file', 'service_config.yml') | from_yaml).service.hosts.all }}
Note: if your custom yaml file is not present on your controller, you will first need to get the data locally by using the slurp or fetch modules.
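For illustration, a minimal sketch of the slurp approach (the remote path /etc/myapp/service_config.yml is a made-up example; slurp returns the file content base64-encoded):
- name: Read the custom yaml file from the remote host
  slurp:
    src: /etc/myapp/service_config.yml   # hypothetical remote path
  register: remote_config

- name: Extract the host list from the slurped content
  set_fact:
    my_custom_hosts: "{{ (remote_config.content | b64decode | from_yaml).service.hosts.all }}"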
Use in memory inventory
In this example, I create a dynamic group custom_group by running an add_host task in a play targeted at localhost, and later target that custom group in the next play. This is probably the best option if you have a large set of tasks to run on those hosts.
---
- name: Prepare environment
  hosts: localhost
  gather_facts: false

  vars:
    # Replace with full path to actual file
    # if this one is not in your 'files' subdir
    my_config_file: service_config.yml
    my_custom_hosts: "{{ (lookup('file', my_config_file) | from_yaml).service.hosts.all }}"

  tasks:
    - name: Create dynamic group from custom yaml file
      add_host:
        name: "{{ item }}"
        group: custom_group
      loop: "{{ my_custom_hosts }}"

- name: Play on new custom group
  hosts: custom_group
  gather_facts: false

  tasks:
    - name: Show we can actually contact the group
      debug:
        var: inventory_hostname
Which gives:
PLAY [Prepare environment] **********************************************************************************************************************************************************************************************************************************************
TASK [Create dynamic group from custom yaml file] ***********************************************************************************************************************************************************************************************************************
changed: [localhost] => (item=abc.com)
changed: [localhost] => (item=pqr.com)
PLAY [Play on new custom group] *****************************************************************************************************************************************************************************************************************************************
TASK [Show we can actually contact the group] ***************************************************************************************************************************************************************************************************************************
ok: [abc.com] => {
"inventory_hostname": "abc.com"
}
ok: [pqr.com] => {
"inventory_hostname": "pqr.com"
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
abc.com : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
pqr.com : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Use delegation
In the following example, I use task delegation to change the target host inside a play targeted at other hosts.
This is more suited if you have few tasks to run on the custom hosts and/or you need facts from the current play hosts to run those tasks. See the load balancer example in the above doc for a more in-depth explanation.
---
- name: Delegation example
  hosts: localhost
  gather_facts: false

  vars:
    # Replace with full path to actual file
    # if this one is not in your 'files' subdir
    my_config_file: service_config.yml
    my_custom_hosts: "{{ (lookup('file', my_config_file) | from_yaml).service.hosts.all }}"

  tasks:
    - name: Task played on our current target host list
      debug:
        var: inventory_hostname

    - name: Fake task delegated to our list of custom host
      # Note: we play it only once so it does not repeat
      # if the play `hosts` param is a group of several targets
      # This is for example only and is not really delegating
      # anything in this case. Replace with your real life task
      debug:
        msg: "I would run on {{ item }} with facts from {{ inventory_hostname }}"
      delegate_to: "{{ item }}"
      run_once: true
      loop: "{{ my_custom_hosts }}"
Which gives:
PLAY [Delegation example] ***********************************************************************************************************************************************************************************************************************************************
TASK [Task played on our current target host list] **********************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"inventory_hostname": "localhost"
}
TASK [Fake task delegated to our list of custom host] *******************************************************************************************************************************************************************************************************************
ok: [localhost -> abc.com] => (item=abc.com) => {
"msg": "I would run on abc.com with facts from localhost"
}
ok: [localhost -> pqr.com] => (item=pqr.com) => {
"msg": "I would run on pqr.com with facts from localhost"
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Is there a way to force commands (ansible-playbook, ansible-variable, etc.) to be executed with a --limit option, and otherwise deny the run?
I discovered that on a cluster a playbook can easily run on all nodes if you mistakenly run it without a limit; I'd like to prevent Ansible users from doing that.
Use the ansible_limit variable (added in ansible 2.5). You can test like this:
tasks:
  - fail:
      msg: "you must use -l or --limit"
    when: ansible_limit is not defined
    run_once: true
It's the opposite of a task I solved recently: my goal was to detect that there was a --limit and to skip some plays.
https://medium.com/opsops/how-to-detect-if-ansible-is-run-with-limit-5ddb8d3bd145
In your case you can check this in the play and fail if it is a "full run":
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        full_run: '{{play_hosts == groups.all}}'
    - fail:
        msg: 'Use --limit, Luke!'
      when: full_run
You can use a different group instead of all, of course (change it in both hosts and set_fact lines).
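For example, a sketch of the same check restricted to a hypothetical webservers group:
- hosts: webservers
  gather_facts: no
  tasks:
    - set_fact:
        full_run: '{{ play_hosts == groups.webservers }}'
    - fail:
        msg: 'Use --limit, Luke!'
      when: full_run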
I did it that way in a task:
$ cat exit-if-no-limit.yml
---
- name: Verifying that a limit is set
  fail:
    msg: 'This playbook cannot be run with no limit'
  run_once: true
  when: ansible_limit is not defined

- debug:
    msg: Limit is {{ ansible_limit }}, let's continue
  run_once: true
  when: ansible_limit is defined
I include it in my playbooks when I need to prevent them from running on all the hosts:
- include_role:
    name: myrole
    tasks_from: "{{ item }}.yml"
  loop:
    - exit-if-no-limit
    - something
    - something_else
It is easy to reuse when needed. It works like this:
TASK [myrole: Verifying that a limit is set]
fatal: [ahost]: FAILED! => {"changed": false, "msg": "This playbook cannot be run with no limit"}
or
TASK [myrole: debug]
ok: [anotherhost] => {
"msg": "Limit is anotherhost, let's continue"
}
This can be done with the assert module.
I like to do this in a separate play, at the start of the playbook, with fact gathering disabled. That way, the playbook fails instantly if the limit is not specified.
- hosts: all
  gather_facts: no
  tasks:
    - name: assert limit
      run_once: yes
      assert:
        that:
          - 'ansible_limit is defined'
        fail_msg: Playbook must be run with a limit (normally staging or production)
        quiet: yes
When a limit is not set, you get:
$ ansible-playbook site.yaml
PLAY [all] *********************************************************************
TASK [assert limit] ************************************************************
fatal: [host1.example.net]: FAILED! => {"assertion": "ansible_limit is defined", "changed": false, "evaluated_to": false, "msg": "Playbook must be run with a limit (normally staging or production)"}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
host1.example.com : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
And when a limit is set, you get:
$ ansible-playbook -l staging site.yaml
PLAY [all] *********************************************************************
TASK [assert limit] ************************************************************
ok: [host1.example.com]
PLAY [all] *********************************************************************
[... etc ...]
Functionally this is very similar to using the fail module, guarded with when. The difference is that the assert task itself is responsible for checking the assertions, therefore if the assertions pass, the task succeeds. When using the fail module, if the when condition fails, the task is skipped.
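For comparison, a sketch of the fail-based equivalent of the same check, built from the fail examples in the earlier answers; with this form the task is reported as skipped rather than ok when a limit is given:
- hosts: all
  gather_facts: no
  tasks:
    - name: fail when no limit
      run_once: yes
      fail:
        msg: Playbook must be run with a limit (normally staging or production)
      when: ansible_limit is not defined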