Is the path of the inventory file accessible to playbooks?

I have a hierarchy of inventories like this:
inventories/
├── foo/
│   ├── foo1/hosts
│   └── foo2/hosts
└── bar/
    ├── bar1/hosts
    └── bar2/hosts
Normally, I invoke ansible with explicit full path:
ansible -i inventories/bar/bar1 ....
However, some of the playbooks can run on the combined inventories:
ansible -i inventories/bar ....
This joins the multiple hosts files together, just as I want. However, I do not see a way for tasks and templates to discern which particular sub-inventory (or sub-inventories) a host belongs to.
Is there a way to know about this? Ideally, a host would belong to group(s) based on the inventory file(s) it is listed in...

Yes, it is accessible. Ansible 2.3+ has the following two new magic variables:
inventory_file
inventory_dir
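For example, a minimal sketch (untested; the inv_ prefix is purely illustrative) that uses these to group hosts by the sub-inventory directory they were defined in:

---
- hosts: all
  gather_facts: no
  tasks:
    - name: Group hosts by their sub-inventory directory
      group_by:
        key: "inv_{{ inventory_dir | basename }}"

After this play runs, a host defined in inventories/bar/bar1/hosts should land in the group inv_bar1, which subsequent plays can target.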

Related

How to use inventory data on a single host in ansible

I get my network inventory for ansible dynamically from the netbox plugin, and I want to use this data to edit the /etc/hosts file and run a script on a special host (librenms) to add all the network devices there.
Has anyone an idea how to run a playbook only on the librenms host with the data of the netbox inventory plugin?
Here is my inventory (the librenms host is not part of it):
plugin: netbox
api_endpoint: https://netbox.mydomain.com
token: abcdefghijklmopqrstuvwxyz
validate_certs: False
config_context: True
group_by:
- device_roles
query_filters:
- role: test-switch
- has_primary_ip: True
Many, many thanks in advance!
If the host you want to run the playbook on is not part of your dynamic inventory, and at the same time you want to use variables defined in the dynamic inventory in a play for that host, you have to construct a "hybrid" inventory containing both your dynamic inventory and other static entries.
This is covered in the documentation on inventories: Using multiple inventory sources.
Very basically, you have two solutions:
Use two different inventories on the command line:
ansible-playbook -i dynamic_inventory -i static_inventory your_playbook.yml
Create a compound inventory with different sources in a directory. See chapter "Aggregating inventory sources with a directory" in the above documentation.
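For example, a sketch of the directory approach (the file layout and the librenms hostname are illustrative; note that the netbox plugin requires its config file name to end in netbox.yml):

inventory/
├── netbox.yml    # the netbox plugin config from the question
└── static        # a static ini file, e.g.:

# inventory/static
[monitoring]
librenms.mydomain.com

Running ansible-playbook -i inventory/ your_playbook.yml then loads both sources, so a play with hosts: monitoring can still read the hostvars of the netbox-provided devices.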

ansible - define hosts/tag from dynamic inventory in the host file

I am using ec2.py and a specific tag on EC2 instances to get my hosts; the results are shown as a list of IP addresses, for example:
The results from ec2.py:
"tag_test_staging": [
"10_80_20_47"
],
I define the tag in my playbook (hosts: tag_Name_test) and it runs on all the instances with tag_Name_test.
Is there a way to define the hosts/tag in the hosts file under the inventory/ folder, so that the playbook takes the hosts from there instead of specifying the EC2 tag directly in the playbook as it does now?
Any suggestions would be appreciated.
You're already going in the right direction.
Suppose your dynamic inventory from ec2.py yields the group tag_test_staging. You can then build an inventory folder and files as below:
inventory/
└── staging/
    ├── hosts
    └── group_vars/
        ├── all.yml
        ├── tag_test_staging.yml
        └── tag_Name_test.yml
Add the variable definitions to each YAML file; the variables in tag_test_staging.yml will only be applied to the instances carrying that tag.
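For instance, a sketch of what inventory/staging/group_vars/tag_test_staging.yml might contain (the variable names are illustrative):

---
env_name: staging
redis_port: 6379

Hosts outside that group will not see these values.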
So now you can apply your playbook as:
ansible-playbook -i inventory/staging your_playbook.yml
There is a best practices document on how to use dynamic inventory with clouds; please take a look as well.

What's the best way to populate variable files with hosts from groups?

In my all.yml file, I have some global variables that are used throughout my project. One of these is a list of redis hosts. My inventory files look like this:
[dev_redis]
host.abc.com
[prod_redis]
host.xyz.com
So what I'd like to do in my all.yml file is something like the following:
---
global__:
  app_user: root
  app_group: root
  redis_hosts: "{% for host in groups[{{env}}_redis] %}{{host}}:5544,{% endfor %}"
This doesn't work though -- I get an error:
FAILED! => {"failed": true, "msg": "ERROR! template error while templating string: expected token ':', got '}'"}
My questions are:
(1) Why am I getting this error?
(2) Will I be able to do this if I'm not sourcing the inventory file which contains my redis nodes? For each run of the deploy scripts, I reference which inventory files to use for that service type (this all happens within a python wrapper) -- so if I'm deploying a service other than redis, can I still access the groups from the redis inventory file? Can I pass in multiple inventory files with -i?
You have placed the loop in the main YAML file, whereas loops like this are normally written in templates, at the point where the variables are accessed.
The hosts can be accessed as:
---
global__:
  app_user: root
  app_group: root
  redis_hosts: "{{ groups['dev_redis'][0] }}:5544"
dev can be replaced with your env variable; inside a Jinja2 expression, write groups[env + '_redis'] rather than nesting {{ }}.
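If you do want the full comma-separated list from the question, the original loop works once the nested moustaches are removed (a sketch):

redis_hosts: "{% for host in groups[env + '_redis'] %}{{ host }}:5544,{% endfor %}"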
Answers to your questions:
1. You are getting this error because of a syntax issue: Jinja2 expressions cannot be nested, so "{{ }}" must not appear inside another expression. Inside an expression a variable is a bare word, e.g. groups[env + '_redis'] instead of groups[{{env}}_redis].
2. No, that's not possible. You need to pass the inventory/variable file from which you want to read.
3. Yes, you can access the groups from the redis inventory file even if you are deploying another service, but that inventory has to be passed in.
4. Multiple inventory files can be passed with -i by pointing it at a folder (see also the note below), e.g.:
ansible-playbook -i /home/ubuntu/inventory/ test.yml
The inventory directory can then contain multiple inventory files, like /home/ubuntu/inventory/host1, /home/ubuntu/inventory/host2, /home/ubuntu/inventory/var1, etc.
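As a side note, newer Ansible versions (2.4+) also accept the -i option multiple times, so the folder is not strictly required (the inventory file names here are illustrative):

ansible-playbook -i redis_inventory -i app_inventory test.yml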

Filtering multiple tags in Ansible dynamic inventory

I think I've seen an answer to this somewhere, but I can't seem to find it now. I'm creating a dynamic development inventory file for my EC2 instances. I'd like to group all instances tagged with Stack=Development. Moreover, I'd like to specifically identify the development API servers. Those would not only have the Stack=Development tag, but also the API=Yes tag.
My basic setup uses inventory folders:
<root>/development
├── base
├── ec2.ini
└── ec2.py
In my base file, I'd like to have something like this:
[servers]
tag_Stack_Development
[apiservers]
tag_Stack_Development && tag_API_Yes
Then I'd be able to run this to ping all of my development api servers:
ansible -i development -u myuser apiservers -m ping
Can something like that be done? I know the syntax isn't right, but hopefully the intent is reasonably clear? I can't imagine I'm the only one who's ever needed to filter on multiple tags, but I haven't been able to find anything that gets me where I'm trying to go.
It's not the answer I had in my head, but sometimes what's in my head just gets in the way. Since each inventory directory has its own ec2.ini, I just filter the stack there and then group within that filter.
# <root>/development/ec2.ini
...
instance_filters = tag:Stack=Development
# <root>/development/base
[tag_Role_webserver]
[tag_API_Yes]
[webservers:children]
tag_Role_webserver
[apiservers:children]
tag_API_Yes
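With those groups in place, the command from the question should then behave as intended:

ansible -i development -u myuser apiservers -m ping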
The answer provided by xiong-chiamiov does actually work. I've just been using it in my ansible deployment.
So I have a playbook using the dynamic inventory script, with this piece of code:
---
- name: AWS Deploy
  hosts: tag_Environment_dev:&tag_Project_integration
  gather_facts: true
And the process does filter the hosts by both of those tags.
EDIT
Actually, expanding on this, you can also use variables to make the host group specification dynamic, like this:
---
- name: AWS Deploy
  hosts: "tag_Environment_{{env}}:&tag_Project_{{tag_project}}"
  sudo: true
  gather_facts: true
I use the {{env}} and {{tag_project}} vars from variable files and arguments given to ansible at runtime. It successfully changes the hosts the playbook runs against.
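For example, passing the values at runtime (the playbook name is illustrative):

ansible-playbook -i ec2.py deploy.yml -e env=dev -e tag_project=integration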
The Ansible documentation has a section on patterns. Rather than creating a new section, you can do a tag intersection when you specify the hosts:
[$] ansible -i development -u myuser 'tag_Stack_Development:&tag_API_Yes' -m ping
(The quotes keep the shell from interpreting the &.)
This also works within playbooks.

How to define private SSH keys for servers in dynamic inventories

I ran into a configuration problem when coding an Ansible playbook for SSH private key files. In static Ansible inventories, I can define combinations of host servers, IP addresses, and related SSH private keys - but I have no idea how to define those with dynamic inventories.
For example:
---
- hosts: tag_Name_server1
  gather_facts: no
  roles:
    - role1

- hosts: tag_Name_server2
  gather_facts: no
  roles:
    - roles2
I use the below command to call that playbook:
ansible-playbook test.yml -i ec2.py --private-key ~/.ssh/SSHKEY.pem
My questions are:
How can I define ~/.ssh/SSHKEY.pem in Ansible files rather than on the command line?
Is there a parameter in playbooks (like gather_facts) to define which private keys should be used for which hosts?
If there is no way to define private keys in files, what should be called on the command line when different keys are used for different hosts in the same inventory?
TL;DR: Specify the key file in a group variable file, since 'tag_Name_server1' is a group.
Note: I'm assuming you're using the EC2 external inventory script. If you're using some other dynamic inventory approach, you might need to tweak this solution.
This is an issue I've been struggling with, on and off, for months, and I've finally found a solution, thanks to Brian Coca's suggestion here. The trick is to use Ansible's group variable mechanisms to automatically pass along the correct SSH key file for the machine you're working with.
The EC2 inventory script automatically sets up various groups that you can use to refer to hosts. You're using this in your playbook: in the first play, you're telling Ansible to apply 'role1' to the entire 'tag_Name_server1' group. We want to direct Ansible to use a specific SSH key for any host in the 'tag_Name_server1' group, which is where group variable files come in.
Assuming that your playbook is located in the 'my-playbooks' directory, create files for each group under the 'group_vars' directory:
my-playbooks
|-- test.yml
+-- group_vars
|-- tag_Name_server1.yml
+-- tag_Name_server2.yml
Now, any time you refer to these groups in a playbook, Ansible will check the appropriate files, and load any variables you've defined there.
Within each group var file, we can specify the key file to use for connecting to hosts in the group:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: /path/to/ssh/key/server1.pem
Now, when you run your playbook, it should automatically pick up the right keys!
Using environment vars for portability
I often run playbooks on many different servers (local, remote build server, etc.), so I like to parameterize things. Rather than using a fixed path, I have an environment variable called SSH_KEYDIR that points to the directory where the SSH keys are stored.
In this case, my group vars files look like this, instead:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/server1.pem"
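The key directory is then chosen at invocation time, e.g. (the path is illustrative):

SSH_KEYDIR=/secure/ssh-keys ansible-playbook test.yml -i ec2.py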
Further Improvements
There's probably a bunch of neat ways this could be improved. For one thing, you still need to manually specify which key to use for each group. Since the EC2 inventory script includes details about the keypair used for each server, there's probably a way to get the key name directly from the script itself. In that case, you could supply the directory the keys are located in (as above), and have it choose the correct keys based on the inventory data.
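For instance, a speculative variant of the group var above, relying on the ec2_key_name variable the EC2 script already sets for each instance:

ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/{{ ec2_key_name }}.pem"

Placed in a shared group variable file (e.g. group_vars/all.yml), this would resolve per host, so each machine gets the key matching its own keypair name.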
The best solution I could find for this problem is to specify private key file in ansible.cfg (I usually keep it in the same folder as a playbook):
[defaults]
inventory=ec2.py
vault_password_file = ~/.vault_pass.txt
host_key_checking = False
private_key_file = /Users/eric/.ssh/secret_key_rsa
Though it still sets the private key globally for all hosts in the playbook.
Note: You have to specify the full path to the key file; ~user/.ssh/some_key_rsa is silently ignored.
You can simply define the key to use directly when running the command:
ansible-playbook \
\ # Super verbose output incl. SSH-Details:
-vvvv \
\ # The Server to target: (Keep the trailing comma!)
-i "000.000.0.000," \
\ # Define the key to use:
--private-key=~/.ssh/id_rsa_ansible \
\ # The `env` var is needed if `python` is not available:
-e 'ansible_python_interpreter=/usr/bin/python3' \
\ # Dry-run:
--check \
deploy.yml
Copy/paste:
ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key deploy.yml
I'm using the following configuration:
# site.yml:
- name: Example play
  hosts: all
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"
I had a similar issue and solved it with a patch to ec2.py plus some configuration parameters added to ec2.ini. The patch takes the value of ec2_key_name, prefixes it with ssh_key_path, appends ssh_key_suffix, and writes the result out as ansible_ssh_private_key_file.
The following variables have to be added to ec2.ini in a new 'ssh' section (this is optional if the defaults match your environment):
[ssh]
# Set the path and suffix for the ssh keys
ssh_key_path = ~/.ssh
ssh_key_suffix = .pem
Here is the patch for ec2.py:
204a205,206
> 'ssh_key_path': '~/.ssh',
> 'ssh_key_suffix': '.pem',
422a425,428
> # SSH key setup
> self.ssh_key_path = os.path.expanduser(config.get('ssh', 'ssh_key_path'))
> self.ssh_key_suffix = config.get('ssh', 'ssh_key_suffix')
>
1490a1497
> instance_vars["ansible_ssh_private_key_file"] = os.path.join(self.ssh_key_path, instance_vars["ec2_key_name"] + self.ssh_key_suffix)
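With that in place, an instance whose ec2_key_name is, say, my-keypair (illustrative) gets ansible_ssh_private_key_file set to ~/.ssh/my-keypair.pem (expanded to the full home path), matching the defaults from ec2.ini above.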
