I have a folder structure like this in Ansible, where global variables are in the root group_vars and environment-specific variables are in inventories/dev/group_vars/all etc.
.
├── ansible.cfg
├── group_vars
│   └── all
├── inventories
│   ├── dev
│   │   ├── group_vars
│   │   │   └── all
│   │   └── hosts
│   └── prod
│       ├── group_vars
│       │   └── all
│       └── hosts
└── playbook.yml
I want to be able to reuse the existing variables from both var files in Molecule, but I am unable to do so, as it cannot find the variables. Something like the snippet below works, but I need both group_vars/all and inventories/dev/group_vars/all.
An extract of my molecule.yml:
provisioner:
  name: ansible
  inventory:
    links:
      group_vars: ../../../group_vars
I tried a comma-separated list, but that doesn't work because, after all, it is just a symlink to a single directory.
Here is my directory structure,
├── README.md
├── internal-api.retry
├── internal-api.yaml
├── ec2.py
├── environments
│   ├── alpha
│   │   ├── group_vars
│   │   │   ├── alpha.yaml
│   │   │   └── internal-api.yaml
│   │   ├── host_vars
│   │   └── internal_ec2.ini
│   ├── prod
│   │   ├── group_vars
│   │   │   ├── prod.yaml
│   │   │   ├── internal-api.yaml
│   │   │   └── tag_Name_prod-internal-api-3.yml
│   │   ├── host_vars
│   │   └── internal_ec2.ini
│   └── stage
│       ├── group_vars
│       │   ├── internal-api.yaml
│       │   └── stage.yaml
│       └── host_vars
│           └── internal_ec2.ini
├── roles
│   └── internal-api
└── roles.yaml
I am using a separate config for an EC2 instance with tag Name = prod-internal-api-3, so I have defined a separate file, tag_Name_prod-internal-api-3.yaml, in the environments/prod/group_vars/ folder.
Here is my tag_Name_prod-internal-api-3.yaml,
---
internal_api_gunicorn_worker_type: gevent
Here is my main playbook, internal-api.yaml
- hosts: all
  any_errors_fatal: true
  vars_files:
    - "environments/{{env}}/group_vars/{{env}}.yaml" # ssh key and user config per environment
    - "environments/{{env}}/group_vars/internal-api.yaml"
  become: yes
  roles:
    - internal-api
For prod deployments I do export EC2_INI_PATH=environments/prod/internal_ec2.ini, and likewise for stage and alpha. In environments/prod/internal_ec2.ini I have added an instance filter: instance_filters = tag:Name=prod-internal-api-3
When I run my playbook, I get this error:
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'internal_api_gunicorn_worker_type' is undefined"}
It means that it is not picking up the variable from the file tag_Name_prod-internal-api-3.yaml. Why is that happening? Do I need to add it manually with include_vars (I don't think that should be necessary)?
Okay, so this is really weird, like really really weird. I don't know whether it has been documented or not (please provide a link if it has).
If your tag Name is like prod-my-api-1, then the file name tag_Name_prod-my-api-1 will not work.
Your filename has to be tag_Name_prod_my_api_1. Yeah, thanks Ansible for making me cry for 2 days.
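The reason is the inventory script's group-name sanitization: ec2.py converts tag values into group names and, as far as I can tell, replaces every character that is not a letter, digit, or underscore with an underscore. A minimal Python sketch of that behaviour (to_safe here is an illustration, not the actual ec2.py code):

```python
import re

def to_safe(word):
    # Replace characters that are invalid in Ansible group names
    # (anything other than letters, digits, and underscores, e.g. '-')
    # with underscores, the way the EC2 dynamic inventory does.
    return re.sub(r"[^A-Za-z0-9_]", "_", word)

print(to_safe("tag_Name_prod-my-api-1"))  # tag_Name_prod_my_api_1
```

So the group_vars file must use the sanitized name (tag_Name_prod_my_api_1.yaml) to be matched against that group.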
I am using a role (zaxos.lvm-ansible-role) to manage LVMs on a few hosts. Initially I had my vars for the LVMs under host_vars/server.yaml, which works.
Here is the working layout
├── filter_plugins
├── group_vars
├── host_vars
│   ├── server1.yaml
│   └── server2.yaml
├── inventories
│   ├── preprod
│   ├── preprod.yml
│   ├── production
│   │   ├── group_vars
│   │   └── host_vars
│   ├── production.yaml
│   ├── staging
│   │   ├── group_vars
│   │   └── host_vars
│   └── staging.yml
├── library
├── main.yaml
├── module_utils
└── roles
    └── zaxos.lvm-ansible-role
        ├── defaults
        │   └── main.yml
        ├── handlers
        │   └── main.yml
        ├── LICENSE
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   ├── create-lvm.yml
        │   ├── main.yml
        │   ├── mount-lvm.yml
        │   ├── remove-lvm.yml
        │   └── unmount-lvm.yml
        ├── tests
        │   ├── inventory
        │   └── test.yml
        └── vars
            └── main.yml
For my environment it would make more sense to have the host_vars under the inventories directory, which is also supported (the Alternative Directory Layout in the Ansible docs).
However, when I change to this layout the vars are not picked up and the LVMs on the host don't change.
├── filter_plugins
├── inventories
│   ├── preprod
│   │   ├── group_vars
│   │   └── host_vars
│   │       ├── server1.yaml
│   │       └── server2.yaml
│   ├── preprod.yml
│   ├── production
│   │   ├── group_vars
│   │   └── host_vars
│   ├── production.yaml
│   ├── staging
│   │   ├── group_vars
│   │   └── host_vars
│   └── staging.yml
├── library
├── main.yaml
├── module_utils
└── roles
    └── zaxos.lvm-ansible-role
        ├── defaults
        │   └── main.yml
        ├── handlers
        │   └── main.yml
        ├── LICENSE
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   ├── create-lvm.yml
        │   ├── main.yml
        │   ├── mount-lvm.yml
        │   ├── remove-lvm.yml
        │   └── unmount-lvm.yml
        ├── tests
        │   ├── inventory
        │   └── test.yml
        └── vars
            └── main.yml
Any idea why this approach is not working?
Your host_vars directory must reside in Ansible's discovered inventory_dir.
With the above file tree, I guess you are launching your playbook with ansible-playbook -i inventories/preprod.yml yourplaybook.yml. In this context, Ansible discovers inventory_dir as inventories.
The solution is to move your inventory files inside each environment's directory, e.g. for preprod: mv inventories/preprod.yml inventories/preprod/
You can then launch your playbook with ansible-playbook -i inventories/preprod/preprod.yml yourplaybook.yml and it should work as you expect.
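The resulting layout for preprod would then look like this (a sketch based on the tree above; the other environments follow the same pattern), with host_vars now sitting inside the discovered inventory_dir:

```
inventories
└── preprod
    ├── preprod.yml
    ├── group_vars
    └── host_vars
        ├── server1.yaml
        └── server2.yaml
```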
I am trying to use the Alternative Directory Layout and ansible-vault within it. But when I run my playbook, the vault-encrypted variables cannot be resolved with that directory structure. What am I doing wrong?
I execute via:
ansible-playbook -i inventories/inv/hosts playbooks/inv/invTest.yml --check --ask-vault-pass
Here is my structure:
.
├── inventories
│   ├── inv
│   │   ├── group_vars
│   │   │   ├── var.yml
│   │   │   └── vault.yml
│   │   └── hosts
│   └── staging
│       ├── group_vars
│       │   ├── var.yml
│       │   └── vault.yml
│       └── hosts
├── playbooks
│   ├── staging
│   │   └── stagingTest.yml
│   └── inv
│       ├── invTest.retry
│       └── invTest.yml
└── roles
    ├── basic-linux
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── test
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    └── webserver
        ├── defaults
        │   └── main.yml
        ├── files
        ├── handler
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
This is my hosts file (inventories/inv/hosts):
[inv]
testvm-01 ansible_ssh_port=22 ansible_ssh_host=172.16.0.101 ansible_ssh_user=root
testvm-02 ansible_ssh_port=22 ansible_ssh_host=172.16.0.102 ansible_ssh_user=root
playbook (playbooks/inv/invTest.yml):
---
- name: this is test
  hosts: inv
  roles:
    - { role: ../../roles/test }
...
role which uses the vault encrypted var (roles/test/tasks/main.yml):
---
- name: create test folder
  file:
    path: "/opt/test/{{ app_user }}/"
    state: directory
    owner: "{{ default_user }}"
    group: "{{ default_group }}"
    mode: "2755"
    recurse: yes
...
var which points to vault (inventories/inv/group_vars/var.yml):
---
app_user: '{{ vault_app_user }}'
app_pass: '{{ vault_app_pass }}'
...
vault file (ansible-vault edit inventories/inv/group_vars/vault.yml):
vault_app_user: itest
vault_app_pass: itest123
The error message I am getting is something like this:
FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: {{ app_user }}: 'app_user' is undefined\n\nThe error appears to have been in 'roles/test/tasks/main.yml': but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: create test folder\n ^ here\n"}
You define the variable app_user in a file called var.yml stored in the group_vars folder.
In your execution line you point to the inventories/inv/hosts as your inventory directory.
It doesn't matter what strings you used in this path -- from Ansible's point of view it sees only:
hosts
group_vars
├── var.yml
└── vault.yml
It will read var.yml for a host group called var and vault.yml for a host group called vault.
In your case -- never.
You likely wanted to organise your files this way:
inventories
└── production
    ├── group_vars
    │   └── inv
    │       ├── var.yml
    │       └── vault.yml
    └── hosts
This way, files in group_vars/inv will be read for hosts in group inv.
The Ansible best practices documentation recommends separating inventories:
inventories/
    production/
        hosts.ini        # inventory file for production servers
        group_vars/
            group1       # here we assign variables to particular groups
            group2       # ""
        host_vars/
            hostname1    # if systems need specific variables, put them here
            hostname2    # ""
    staging/
        hosts.ini        # inventory file for staging environment
        group_vars/
            group1       # here we assign variables to particular groups
            group2       # ""
        host_vars/
            stagehost1   # if systems need specific variables, put them here
            stagehost2   # ""
My staging and production environments are structured in the same way. I have the same groups in both environments, and it turns out that I also have the same group_vars for those groups. This is redundancy I would like to eliminate.
Is there a way to share some group_vars between different inventories?
As a work-around I started to put shared group_vars into the roles.
my_var:
  my_group:
    - { var1: 1, var2: 2 }
This makes it possible to iterate over some vars by intersecting the groups of a host with the defined var:
with_items: "{{group_names | intersect(my_var.keys())}}"
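In plain Python, that intersection amounts to the following (the values are hypothetical, mirroring the my_var example above; group_names is the list of groups the current host belongs to):

```python
# Hypothetical host data: the host is in two groups, but only
# my_group has an entry in the shared my_var dict from the role.
group_names = ["my_group", "webservers"]
my_var = {"my_group": [{"var1": 1, "var2": 2}]}

# Equivalent of the Jinja2 expression
#   group_names | intersect(my_var.keys())
matched_groups = [g for g in group_names if g in my_var]
print(matched_groups)  # ['my_group']
```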
But this is a bit complicated to understand, and I think roles should not know anything about groups.
I would like to separate most of the inventories but share some of the group_vars in an easy to understand way. Is it possible to merge global group_vars with inventory specific group_vars?
I scrapped the idea of following Ansible's recommendation. Now, one year later, I am convinced that it is not useful for my requirements. Instead, I think it is important to share as much as possible among the different stages.
Now I put all inventories in the same directory:
production.ini
reference.ini
And I make sure that each inventory defines a group, named after the stage, that includes all of its hosts.
The file production.ini has the group production:
[production:children]
all_production_hosts
And the file reference.ini has the group reference:
[reference:children]
all_reference_hosts
I have just one group_vars directory in which I define a file for every staging group:
group_vars/production.yml
group_vars/reference.yml
And each file defines a stage variable. The file production.yml defines this:
---
stage: production
And the file reference.yml defines that:
---
stage: reference
This makes it possible to share everything else between production and reference. But the hosts are completely different. By using the right inventory the playbook runs either on production or on reference hosts:
ansible-playbook -i production.ini site.yml
ansible-playbook -i reference.ini site.yml
If it is necessary for site.yml or the roles to behave slightly differently in the production and reference environments, they can use conditionals based on the stage variable. But I try to avoid even that, because it is better to move all differences into equivalent definitions in the staging files production.yml and reference.yml.
For example, if the group_vars/all.yml defines some users:
users:
- alice
- bob
- mallory
And I want to create the users in both environments, but exclude mallory from the production environment. For that I can define a new variable called effective_users. In reference.yml it is identical to the users list:
effective_users: >-
{{ users }}
But in the production.yml I can exclude mallory:
effective_users: >-
{{ users | difference(['mallory']) }}
The playbook and the roles do not need to distinguish between the two stages; they can simply use the variable effective_users, which automatically contains the right list of users for the selected inventory.
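The effect of the difference filter can be sketched in plain Python (values from the example above):

```python
users = ["alice", "bob", "mallory"]

# reference.yml: effective_users is simply the full users list
effective_users_reference = list(users)

# production.yml: effective_users = users | difference(['mallory'])
effective_users_production = [u for u in users if u != "mallory"]

print(effective_users_production)  # ['alice', 'bob']
```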
The simple option here (and what we do) is to symlink generic group vars files around.
For instance we might have a generic role for something like NGINX and then a few concrete use cases for that role. In this case we create a group vars file that uses the NGINX role for each concrete use case and then simply symlink those group vars files into the appropriate folders.
Our project folder structure then might look something like this (drastically simplified):
.
├── inventories
│   ├── bar-dev
│   │   ├── group_vars
│   │   │   ├── bar.yml -> ../../shared/bar.yml
│   │   │   └── dev.yml -> ../../shared/dev.yml
│   │   └── inventory
│   ├── bar-prod
│   │   ├── group_vars
│   │   │   ├── bar.yml -> ../../shared/bar.yml
│   │   │   └── prod.yml -> ../../shared/prod.yml
│   │   └── inventory
│   ├── bar-test
│   │   ├── group_vars
│   │   │   ├── bar.yml -> ../../shared/bar.yml
│   │   │   └── test.yml -> ../../shared/test.yml
│   │   └── inventory
│   ├── foo-dev
│   │   ├── group_vars
│   │   │   ├── dev.yml -> ../../shared/dev.yml
│   │   │   └── foo.yml -> ../../shared/foo.yml
│   │   └── inventory
│   ├── foo-prod
│   │   ├── group_vars
│   │   │   ├── foo.yml -> ../../shared/foo.yml
│   │   │   └── prod.yml -> ../../shared/prod.yml
│   │   └── inventory
│   ├── foo-test
│   │   ├── group_vars
│   │   │   ├── foo.yml -> ../../shared/foo.yml
│   │   │   └── test.yml -> ../../shared/test.yml
│   │   └── inventory
│   └── shared
│       ├── bar.yml
│       ├── dev.yml
│       ├── foo.yml
│       ├── prod.yml
│       └── test.yml
└── roles
    └── nginx
        ├── defaults
        │   └── main.yml
        ├── meta
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
            └── main.yml
Now our inventory files can have the hosts use these shared group vars simply by putting the hosts in the correct groups.
You can place group_vars in the playbook directory as well; Ansible will pick them up for all inventories.
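A sketch of that layout (hypothetical file names): a group_vars directory next to the playbook is loaded on top of whichever inventory you pass with -i:

```
.
├── playbook.yml
├── group_vars
│   └── all.yml          # loaded regardless of the chosen inventory
└── inventories
    ├── dev
    │   └── hosts
    └── prod
        └── hosts
```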
Ansible executes all of the dependency role's tasks, but my main.yml in the meta folder looks like this:
---
dependencies:
  - { role: common, caller_role: docker, tags: ['packages'] }
So I expected Ansible to execute only the part of the role common that contains the following:
---
- name: Install required packages
  package: name={{ item.name }} state=present
  with_items:
    - "{{ vars[caller_role]['SYSTEM']['PACKAGES'] }}"
  tags:
    - packages

- name: Modify /etc/hosts
  lineinfile:
    dest: /etc/hosts
    line: "{{ vars[caller_role]['REGISTRY']['ip'] }} {{ vars[caller_role]['REGISTRY']['hostname'] }}"
  tags:
    - write_etc_hosts
I execute Ansible 2.1.1.0 as follows: ansible-playbook --list-tags site.yml. Here is site.yml:
- hosts: localhost
  connection: local
  remote_user: root
  become: yes
  roles:
    - docker
And finally the tree:
├── common
│   ├── defaults
│   │   └── main.yml
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
├── docker
│   ├── defaults
│   │   └── main.yml
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
└── site.yml
I fail to understand what is happening.
If you specify tags for a role, Ansible applies them to every task in that role.
In your example, tag packages will be added to every task in the role common.
Please see the tag inheritance section in the documentation:
You can apply tags to more than tasks, but they ONLY affect the tasks themselves. Applying tags anywhere else is just a convenience so you don’t have to write it on every task
All of these [samples] apply the specified tags to EACH task inside the play, included file, or role, so that these tasks can be selectively run when the playbook is invoked with the corresponding tags.
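Applied to the meta/main.yml from the question, the inheritance works out like this (a sketch of the effective tags, not actual Ansible output): the role-level tag is copied onto every task of common, so even the task tagged only write_etc_hosts ends up carrying packages as well, and ansible-playbook --tags packages runs both tasks.

```yaml
# roles/docker/meta/main.yml (from the question)
dependencies:
  - { role: common, caller_role: docker, tags: ['packages'] }

# Effective tags after inheritance:
#   "Install required packages" -> tags: [packages]
#   "Modify /etc/hosts"         -> tags: [write_etc_hosts, packages]
# To tag only some tasks, put the tags on the tasks themselves
# (or on an include), not on the role dependency.
```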
OK, thank you Konstantin. For this purpose I think I will use:
- include: foo.yml
  tags: [web, foo]
Regards