Ansible variable precedence

I have a playbook with a variable, let's call it the_var. This variable should always have the value default, except for certain inventories where it should be not_default.
This is how I did it: under group_vars/all.yml I put the_var: default, and under inventories/my_special_inv I put the_var=not_default (under [all:vars]).
When I run ansible-playbook -i inventories/my_special_inv I expect the variable's value to be not_default (since I overrode the default behaviour with the inventory file), but it is set to default.
How do I implement this behaviour correctly?

Here is a working example that should help you understand it better:
.
|-- default
|   |-- group_vars
|   |   `-- server.yml
|   `-- inventory
|-- site.yml
|-- special
|   |-- group_vars
|   |   `-- server.yml
|   `-- inventory
In this example I have just tested against the localhost host, so inside both special/inventory and default/inventory I have this group (but you can put whatever fits your needs):
[server]
localhost
The important thing is the group name: it must match the file name under default/group_vars and special/group_vars (in my case it is server, but in your case it can be anything).
So in default/group_vars/server.yml I have placed:
---
the_var: default
and in special/group_vars/server.yml I have placed:
---
the_var: not_default
My test playbook (site.yml in this case) contains:
---
- hosts: all
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ the_var }}"
Now when I call the playbook against the default inventory, I get this value:
ansible-playbook -i default site.yml -c local
PLAY [all] *********************************************************************
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "default"
}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
When I call the playbook against the special inventory, I get this value:
ansible-playbook -i special site.yml -c local
PLAY [all] *********************************************************************
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "not_default"
}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
The -c local is for the localhost connection, which you won't need in your live environment; you are presumably working on remote hosts over SSH, which is the default. For the record, your original layout most likely failed because a group_vars/all.yml placed next to the playbook has higher precedence than [all:vars] set in an inventory file; keeping each inventory's variables in its own group_vars directory, as above, avoids that conflict. Hope it helps.
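Mapped back to the layout in the question, a minimal sketch would look like this (directory names are from the question; the hosts file names are assumptions):
.
|-- site.yml
`-- inventories
    |-- default
    |   |-- hosts
    |   `-- group_vars
    |       `-- all.yml          # the_var: default
    `-- my_special_inv
        |-- hosts
        `-- group_vars
            `-- all.yml          # the_var: not_default
Ansible loads the group_vars directory that sits next to whichever inventory source you pass, so ansible-playbook -i inventories/my_special_inv site.yml resolves the_var to not_default.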

Related

Ansible dynamic inventory could not resolve hostname

In Ansible (please see my repo), I have a dynamic inventory (hosts_aws_ec2.yml). It shows this:
ansible-inventory -i hosts_aws_ec2.yml --graph
@all:
  |--@aws_ec2:
  |  |--linuxweb01
  |  |--winweb01
  |--@iis:
  |  |--winweb01
  |--@linux:
  |  |--linuxweb01
  |--@nginx:
  |  |--linuxweb01
  |--@ungrouped:
  |--@webserver:
  |  |--linuxweb01
  |  |--winweb01
  |--@windows:
  |  |--winweb01
When I run any playbook, for example configure_iis_web_server.yml or ping_novars.yml in my repo, it says the host is unreachable:
ansible-playbook ping_novars.yml -i hosts_aws_ec2.yml --ask-vault-pass --limit linuxweb01
PLAY [linux] ******************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
fatal: [linuxweb01]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname linuxweb01: Name or service not known", "unreachable": true}
PLAY RECAP ********************************************************************************************************************
linuxweb01 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
ansible all -i hosts_aws_ec2.yml -m debug -a "var=ip" --ask-vault-pass shows that it finds the IP addresses from the files in the host_vars folder:
winweb01 | SUCCESS => {
    "ip": "3.92.5.126"
}
linuxweb01 | SUCCESS => {
    "ip": "52.55.134.86"
}
I used to have this working before I added this to hosts_aws_ec2.yml:
hostnames:
  - tag:Name
Back then the files in host_vars were named after the actual public IPv4 DNS addresses, for example ec2-3-92-5-126.compute-1.amazonaws.com.yml instead of winweb01.yml, and the inventory would list the public DNS names rather than the Name tags.
Is there any way to use the Name tag in the inventory but still provide the IP address?
I was able to make it work by adding compose to my dynamic inventory file:
hostnames:
  - tag:Name
compose:
  ansible_host: public_dns_name
I found the answer here: Displaying a custom name for a host
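For reference, a minimal sketch of what the complete hosts_aws_ec2.yml could look like; the hostnames and compose keys are from the answer above, while the plugin name and region are assumptions about the setup:
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1                    # assumed region
hostnames:
  - tag:Name                     # display hosts by their Name tag
compose:
  # keep the friendly display name, but connect via the public DNS name
  ansible_host: public_dns_name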

Ansible: I need to create a playbook to execute the shell script in ansible

I need to create an Ansible job to execute a shell script that takes an argument for different environments (one playbook applicable to all environments: test, QA, and prod, with the environment supplied as an argument in the command). For example, I need to execute the script ABC.sh, for which the normal command is
sh ABC.sh /105t (for test execution) or sh ABC.sh /105q (for QA execution).
Can someone please help me with the playbook for this? Thanks!!
I tried the below format in a YML file in GitLab:
-name: execute the script
tasks:
name: execute the ABC script
script: sh script_dir_path/ABC.sh /105t
The job ran successfully, but it did not trigger the script execution.
Use the script module. It runs a local script on a remote node after transferring it. For example, given the tree
shell> tree .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── script_dir_path
└── ABC.sh
Create a simple script that displays the first argument
shell> cat script_dir_path/ABC.sh
echo $1
The playbook below runs on all remote hosts. It will transfer the script to the remotes, run it with the argument arg, and display the result
shell> cat pb.yml
- hosts: all
  tasks:
    - script:
        cmd: "script_dir_path/ABC.sh {{ arg }}"
      register: out
    - debug:
        var: out.stdout
Given the inventory
shell> cat hosts
test_11
test_13
The playbook works as expected
shell> ansible-playbook pb.yml -e arg=/105t
PLAY [all] ***********************************************************************************
TASK [script] ********************************************************************************
changed: [test_11]
changed: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
  out.stdout: |-
    /105t
ok: [test_13] =>
  out.stdout: |-
    /105t
PLAY RECAP ***********************************************************************************
test_11: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
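To run the same playbook for QA, only the extra variable changes:
shell> ansible-playbook pb.yml -e arg=/105q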

Ansible pattern matching doesn't work as expected

I am unable to figure out why this simple pattern doesn't match anything. I have two Ansible hosts as targets. This is my inventory file:
[web_Xubuntu]
192.168.160.128
[database_Fedora]
192.168.160.132
And this is what my YAML playbook looks like:
# Hosts: where our play will run and options it will run with
hosts: *Fedora
become: True
#gather_facts: False
# Vars: variables that will apply to the play, on all target systems
vars:
motd: "Welcome to Fedora Linux - Ansible Rocks\n"
# Tasks: the list of tasks that will be executed within the playbook
tasks:
- name: Configure a MOTD (message of the day)
copy:
content: "{{ motd }}"
dest: /etc/motd
notify: MOTD changed
# Handlers: the list of handlers that are executed as a notify key from a task
handlers:
- name: MOTD changed
debug:
msg: The MOTD was changed
On processing this playbook, Ansible reports the following error:
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
found undefined alias
The offending line appears to be:
# Hosts: where our play will run and options it will run with
hosts: *Fedora
^ here
What is the right way to use a wildcard?
You can use the asterisk (*) wildcard with FQDNs or IPs only. For example,
192.0.*
*.example.com
*.com
See Patterns: targeting hosts and groups.
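As an illustration, a minimal sketch of a play that targets the hosts above with an IP wildcard (note it matches every host in 192.168.160.*, the Xubuntu box included, so it cannot express "all Fedora groups"):
- hosts: 192.168.160.*
  gather_facts: false
  tasks:
    - debug:
        var: inventory_hostname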
Use the constructed inventory plugin if you want to target all *Fedora groups. See
shell> ansible-doc -t inventory constructed
For example, given the tree
shell> tree .
.
├── ansible.cfg
├── inventory
│   ├── 01-hosts
│   └── 02-constructed.yml
└── pb.yml
1 directory, 4 files
the inventory
shell> cat inventory/01-hosts
[web_Xubuntu]
192.168.160.128
[database_Fedora]
192.168.160.132
[web_Fedora]
192.168.160.133
the constructed plugin
shell> cat inventory/02-constructed.yml
plugin: constructed
groups:
  Fedora: group_names|select('regex', '^.*Fedora$')
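A side note on the expression: constructed evaluates each groups value as a Jinja2 conditional per host. An explicitly boolean form of (to my understanding) the same test would be:
plugin: constructed
groups:
  Fedora: group_names | select('regex', '^.*Fedora$') | list | length > 0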
Test the inventory
shell> ansible-inventory -i inventory --list --yaml
all:
  children:
    Fedora:
      hosts:
        192.168.160.132: {}
        192.168.160.133: {}
    database_Fedora:
      hosts:
        192.168.160.132: {}
    ungrouped: {}
    web_Fedora:
      hosts:
        192.168.160.133: {}
    web_Xubuntu:
      hosts:
        192.168.160.128: {}
Then, test the playbook
shell> cat pb.yml
- hosts: Fedora
  gather_facts: false
  tasks:
    - debug:
        var: inventory_hostname
gives
shell> ansible-playbook -i inventory pb.yml
PLAY [Fedora] *********************************************************************************
TASK [debug] **********************************************************************************
ok: [192.168.160.132] =>
  inventory_hostname: 192.168.160.132
ok: [192.168.160.133] =>
  inventory_hostname: 192.168.160.133
PLAY RECAP ************************************************************************************
192.168.160.132: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
192.168.160.133: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Fail instead of Warning when no hosts are matched

When you don't have any hosts in the inventory, running a playbook produces only a warning:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Is there a way to make that Error instead of Warning?
I found out that there is this parameter in ansible.cfg:
[inventory]
unparsed_is_failed = True
but it only returns an error when the inventory file you are trying to use does not exist; it does not look at the content.
One simple solution is:
Create the playbook "main.yml" like:
---
# Check first whether the supplied host pattern {{ RUNNER.HOSTNAME }} matches the inventory,
# or otherwise force the playbook to fail (for Jenkins)
- hosts: localhost
  vars_files:
    - "{{ jsonfilename }}"
  tasks:
    - name: "Hostname validation | If OK, it will skip"
      fail:
        msg: "{{ RUNNER.HOSTNAME }} not found in the inventory group or hosts file {{ ansible_inventory_sources }}"
      when: RUNNER.HOSTNAME not in hostvars

# The main playbook starts
- hosts: "{{ RUNNER.HOSTNAME }}"
  vars_files:
    - "{{ jsonfilename }}"
  tasks:
    - Your tasks
      ...
      ...
      ...
Put your host variables in a json file "var.json":
{
    "RUNNER": {
        "HOSTNAME": "hostname-to-check"
    },
    "VAR1": {
        "CIAO": "CIAO"
    }
}
Run the command:
ansible-playbook main.yml --extra-vars="jsonfilename=var.json"
You can also adapt this solution as you like and pass the hostname directly on the command line:
ansible-playbook -i hostname-to-check, my_playbook.yml
but in this last case remember to put in your playbook:
hosts: all
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Q: "Is there a way to make that Error instead of Warning?"
A: Yes, there is. Test it in the playbook. For example,
- hosts: localhost
  tasks:
    - fail:
        msg: "[ERROR] Empty inventory. No host available."
      when: groups.all|length == 0

- hosts: all
  tasks:
    - debug:
        msg: Playbook started
gives with an empty inventory
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
Example of a project for testing
shell> tree .
.
├── ansible.cfg
├── hosts
└── pb.yml
0 directories, 3 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
shell> cat hosts
shell> cat pb.yml
- hosts: localhost
  tasks:
    - fail:
        msg: "[ERROR] Empty inventory. No host available."
      when: groups.all|length == 0

- hosts: all
  tasks:
    - debug:
        msg: Playbook started
gives
shell> ansible-playbook pb.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [localhost] *****************************************************************************
TASK [fail] **********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
PLAY RECAP ***********************************************************************************
localhost: ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Q: "Still I am getting a warning: [WARNING]: provided hosts list is empty, ..."
A: Feel free to turn the warning off. See LOCALHOST_WARNING.
shell> ANSIBLE_LOCALHOST_WARNING=false ansible-playbook pb.yml
PLAY [localhost] *****************************************************************************
TASK [fail] **********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
PLAY RECAP ***********************************************************************************
localhost: ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
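Equivalently, the warning can be switched off persistently in ansible.cfg via the localhost_warning option (available since Ansible 2.8):
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
localhost_warning = false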

How to share handlers?

The docs says:
Since handlers are tasks too, you can also include handler files from the ‘handlers:’ section.
What I do, playbook.yml:
- hosts: all
  handlers:
    - include: handlers.yml
    # - name: h1
    #   debug: msg=h1
  tasks:
    - debug: msg=test
      notify: h1
      changed_when: true
handlers.yml:
- name: h1
  debug: msg=h1
Then,
$ ansible-playbook playbook.yml -i localhost, -k -e ansible_python_interpreter=python2 -v
...
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "test"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
...
But when I uncomment the lines, I see
$ ansible-playbook playbook.yml -i localhost, -k -e ansible_python_interpreter=python2 -v
...
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "test"
}
RUNNING HANDLER [h1] ***********************************************************
ok: [localhost] => {
    "msg": "h1"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
...
I'm running ansible-2.1.0.0.
What am I doing wrong? That's the first thing I'd like to know. Workarounds come second.
UPD
Includes can also be used in the ‘handlers’ section, for instance, if you want to define how to restart apache, you only have to do that once for all of your playbooks. You might make a handlers.yml that looks like:
---
# this might be in a file like handlers/handlers.yml
- name: restart apache
  service: name=apache state=restarted
And in your main playbook file, just include it like so, at the bottom of a play:
handlers:
  - include: handlers/handlers.yml
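As a side note beyond the quoted docs: on Ansible 2.4 and later the bare include keyword is deprecated, and the same sharing would be written with import_tasks:
handlers:
  - import_tasks: handlers/handlers.yml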
Depending on the size of your plays, a better solution might be to use roles. Ansible has some discussion of why roles are a good idea.
Tasks go in roles/mystuff/tasks/main.yml and roles/somethingelse/tasks/main.yml. You can share handlers between the roles by creating a role containing only handlers (roles/myhandlers/handlers/main.yml) and making both roles depend on the myhandlers role:
roles/mystuff/meta/main.yml and roles/somethingelse/meta/main.yml:
---
dependencies:
  - myhandlers
More on dependencies in https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#using-role-dependencies
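For orientation, a sketch of the resulting layout (role names are taken from the answer; only myhandlers carries the shared handlers):
roles
├── myhandlers
│   └── handlers
│       └── main.yml        # shared handlers, e.g. restart apache
├── mystuff
│   ├── meta
│   │   └── main.yml        # dependencies: [myhandlers]
│   └── tasks
│       └── main.yml
└── somethingelse
    ├── meta
    │   └── main.yml        # dependencies: [myhandlers]
    └── tasks
        └── main.yml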
That's a bug in ansible-2.1. The credit goes to udondan, who found the issue.
