I am trying to use the Alternative Directory Layout with Ansible Vault files inside it.
But when I run my playbook, the vault-encrypted variables cannot be resolved with that directory structure. What am I doing wrong?
I execute via:
ansible-playbook -i inventories/inv/hosts playbooks/inv/invTest.yml --check --ask-vault-pass
Here is my structure:
.
├── inventories
│   ├── inv
│   │   ├── group_vars
│   │   │   ├── var.yml
│   │   │   └── vault.yml
│   │   └── hosts
│   └── staging
│       ├── group_vars
│       │   ├── var.yml
│       │   └── vault.yml
│       └── hosts
├── playbooks
│   ├── staging
│   │   └── stagingTest.yml
│   └── inv
│       ├── invTest.retry
│       └── invTest.yml
└── roles
    ├── basic-linux
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── test
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    └── webserver
        ├── defaults
        │   └── main.yml
        ├── files
        ├── handler
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
This is my hosts file (inventories/inv/hosts):
[inv]
testvm-01 ansible_ssh_port=22 ansible_ssh_host=172.16.0.101 ansible_ssh_user=root
testvm-02 ansible_ssh_port=22 ansible_ssh_host=172.16.0.102 ansible_ssh_user=root
Playbook (playbooks/inv/invTest.yml):
---
- name: this is test
  hosts: inv
  roles:
    - { role: ../../roles/test }
...
Role which uses the vault-encrypted var (roles/test/tasks/main.yml):
---
- name: create test folder
  file:
    path: "/opt/test/{{ app_user }}/"
    state: directory
    owner: "{{ default_user }}"
    group: "{{ default_group }}"
    mode: 2755
    recurse: yes
...
Var file which points to the vault (inventories/inv/group_vars/var.yml):
---
app_user: '{{ vault_app_user }}'
app_pass: '{{ vault_app_pass }}'
...
Vault file (ansible-vault edit inventories/inv/group_vars/vault.yml):
vault_app_user: itest
vault_app_pass: itest123
The error message I am getting is something like this:
FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: {{ app_user }}: 'app_user' is undefined\n\nThe error appears to have been in 'roles/test/tasks/main.yml': but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: create test folder\n ^ here\n"}
You define the variable app_user in a file called var.yml stored in the group_vars folder.
In your execution line you point to inventories/inv/hosts as your inventory.
It doesn't matter what names you use in that path -- from Ansible's point of view it sees only:
hosts
group_vars
├── var.yml
└── vault.yml
It will read var.yml for a host group called var and vault.yml for a host group called vault.
In your case, that never happens.
You likely wanted to organise your files this way:
inventories
└── production
    ├── group_vars
    │   └── inv
    │       ├── var.yml
    │       └── vault.yml
    └── hosts
This way, files in group_vars/inv will be read for hosts in group inv.
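Applied to the layout in the question (a sketch that keeps the inv name, so the [inv] group in hosts picks up the files), the structure and invocation could look like this:

inventories
└── inv
    ├── group_vars
    │   └── inv
    │       ├── var.yml
    │       └── vault.yml
    └── hosts

ansible-playbook -i inventories/inv/hosts playbooks/inv/invTest.yml --check --ask-vault-pass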
I am trying to use an Ansible Collection, for example the NGINX one.
The directory tree structure looks like this:
├── ansible_collections
│   └── nginxinc
│       └── nginx_core
│           ├── CHANGELOG.md
.......
│           ├── README.md
│           └── roles
│               ├── nginx
│               │   ├── tasks
│               │   │   ├── amplify
│               │   │   ├── config
│               │   │   ├── keys
│               │   │   ├── main.yml
│               │   │   ├── modules
│               │   │   ├── opensource
│               │   │   │   └── install-debian.yml
│               │   │   └── unit
....
├── hosts
└── site.yaml
The site.yaml file I wrote is:
- name: Demo
  hosts: all
  connection: local
  gather_facts: no
  tasks:
    - name: test
      include_role:
        name: nginxinc.nginx_core.nginx
        tasks_from: install-debian
I am trying to run the task install-debian from the role nginx.
I run the playbook:
ansible-playbook -i hosts site.yaml
I get this error:
ERROR! the role 'nginxinc.nginx_core.nginx' was not found.....
I need help on how to fix the site.yaml file.
If I understand correctly, you should just install the Nginx collection with the following command, as explained here:
ansible-galaxy collection install nginxinc.nginx_core
It should install it in ~/.ansible/collections/ansible_collections/nginxinc/nginx_core/. Then create a playbook following these examples and the Ansible docs:
---
- hosts: all
  collections:
    - nginxinc.nginx_core
  roles:
    - role: nginx
Finally run your playbook:
ansible-playbook -i hosts my_nginx_playbook.yaml
It'll pick the Debian version for you if your host is Debian.
I regret to say that this is not working for me. It gives the impression that collections_paths is not used.
ansible --version
ansible 2.9.17
ansible-config view
[defaults]
inventory=/usr/local/ansible-admin/my_hosts
roles_path=/usr/local/galaxy-roles:./roles
collections_paths=/usr/local/ansible_collections
log_path=./ansible.log
The collections are installed in the /usr/local/ansible_collections folder:
tree -L 2 /usr/local/ansible_collections/nginxinc/
/usr/local/ansible_collections/nginxinc/
└── nginx_core
    ├── CHANGELOG.md
    ├── CODE_OF_CONDUCT.md
    ├── CONTRIBUTING.md
    ├── docs
    ├── FILES.json
    ├── LICENSE
    ├── MANIFEST.json
    ├── playbooks
    ├── plugins
    ├── README.md
    └── roles
Here is the very basic content of the playbook:
cat playbooks/nginx_core.yml
- name: Test collection ansible with nginxinc
  hosts: "{{ target }}"
  collections:
    - nginxinc.nginx_core
  tasks:
    - import_role:
        name: nginx
We get the following error message when it is launched:
ansible-playbook playbooks/nginx_core.yml --extra-vars target=myvm.mydomain.org
ERROR! the role 'nginx' was not found in nginxinc.nginx_core:ansible.legacy:/usr/local/ansible-admin/playbooks/roles:/usr/local/galaxy-roles:/usr/local/ansible-admin/roles:/usr/local/ansible-admin/playbooks
It doesn't find the role in the collection, and worse, it doesn't say that it looked in collections_paths...
Here is a workaround that works, but it is very, very ugly: add the collection's nginx role directory to roles_path!
roles_path=/usr/local/galaxy-roles:./roles:/usr/local/ansible_collections/nginxinc/nginx_core/roles
Warning: this is obviously a misuse of Ansible collections!
any help would be appreciated.
Ernest.
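One thing that might be worth trying (a sketch, untested against this exact setup): reference the role by its fully qualified collection name directly in import_role, so resolution goes through the collection loader rather than the role search path:

- name: Test collection ansible with nginxinc
  hosts: "{{ target }}"
  tasks:
    - import_role:
        name: nginxinc.nginx_core.nginx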
I have a folder structure like this in Ansible, where global variables live in the root group_vars and environment-specific variables live in inventories/dev/group_vars/all etc.
.
├── ansible.cfg
├── group_vars
│   └── all
├── inventories
│   ├── dev
│   │   ├── group_vars
│   │   │   └── all
│   │   └── hosts
│   └── prod
│       ├── group_vars
│       │   └── all
│       └── hosts
└── playbook.yml
I want to be able to reuse the existing variables from both var files in Molecule, but I am unable to do so because it cannot find the variable. Something similar to the below works, but I need both group_vars/all and inventories/dev/group_vars/all.
Extract of my molecule.yml:
provisioner:
  name: ansible
  inventory:
    links:
      group_vars: ../../../group_vars
I tried comma-separated paths and that doesn't work because, after all, it's just a symlink.
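For reference, each key under links maps to a single path, so something like the following works for one inventory (a sketch assuming the dev inventory is the one being reused); pulling in two group_vars trees would require merging them into one directory first, for example with symlinks inside it:

provisioner:
  name: ansible
  inventory:
    links:
      hosts: ../../../inventories/dev/hosts
      group_vars: ../../../inventories/dev/group_vars
      host_vars: ../../../inventories/dev/host_vars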
Here is my directory structure:
├── README.md
├── internal-api.retry
├── internal-api.yaml
├── ec2.py
├── environments
│   ├── alpha
│   │   ├── group_vars
│   │   │   ├── alpha.yaml
│   │   │   ├── internal-api.yaml
│   │   ├── host_vars
│   │   ├── internal_ec2.ini
│   ├── prod
│   │   ├── group_vars
│   │   │   ├── prod.yaml
│   │   │   ├── internal-api.yaml
│   │   │   ├── tag_Name_prod-internal-api-3.yml
│   │   ├── host_vars
│   │   ├── internal_ec2.ini
│   └── stage
│       ├── group_vars
│       │   ├── internal-api.yaml
│       │   ├── stage.yaml
│       ├── host_vars
│       │   ├── internal_ec2.ini
├── roles
│   ├── internal-api
├── roles.yaml
I am using a separate config for an EC2 instance with tag Name = prod-internal-api-3, so I have defined a separate file, tag_Name_prod-internal-api-3.yaml, in the environments/prod/group_vars/ folder.
Here is my tag_Name_prod-internal-api-3.yaml:
---
internal_api_gunicorn_worker_type: gevent
Here is my main playbook, internal-api.yaml:
- hosts: all
  any_errors_fatal: true
  vars_files:
    - "environments/{{ env }}/group_vars/{{ env }}.yaml" # this has the ssh key/users config according to environment
    - "environments/{{ env }}/group_vars/internal-api.yaml"
  become: yes
  roles:
    - internal-api
For prod deployments I do export EC2_INI_PATH=environment/prod/internal_ec2.ini, and likewise for stage and alpha. In environment/prod/internal_ec2.ini I have added an instance filter: instance_filters = tag:Name=prod-internal-api-3
When I run my playbook, I get this error:
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'internal_api_gunicorn_worker_type' is undefined"}
It means that it is not able to pick up the variable from the file tag_Name_prod-internal-api-3.yaml. Why is this happening? Do I need to add it manually with include_vars (I don't think that should be necessary)?
Okay, so it is really weird, like really, really weird. I don't know whether it has been documented or not (please provide a link if it has).
If your tag Name is something like prod-my-api-1, then the file name tag_Name_prod-my-api-1 will not work.
Your filename has to be tag_Name_prod_my_api_1, because the EC2 inventory script sanitizes group names and replaces characters such as dashes with underscores. Yeah, thanks Ansible for making me cry for two days.
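Applied to the layout above, only the per-instance file needs renaming (a sketch; the other files stay as they are):

environments/prod/group_vars/
├── prod.yaml
├── internal-api.yaml
└── tag_Name_prod_internal_api_3.yml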
I am new to Ansible and I am attempting to get user access under control. I found this role on Galaxy:
https://github.com/singleplatform-eng/ansible-users
I was also reading from this source to help manage different environments:
https://www.digitalocean.com/community/tutorials/how-to-manage-multistage-environments-with-ansible
So I have the following setup:
vagrant@ansible:~/ansible$ tree
├── ansible.cfg
├── debug.yml
├── dev_site.yml
├── filter_plugins
├── group_vars
│   └── all
│       └── 000_cross_env_vars -> ../../inventories/000_cross_env_vars
├── hosts
├── inventories
│   ├── 000_cross_env_vars
│   ├── development
│   │   ├── group_vars
│   │   │   └── all
│   │   │       ├── 000_cross_env_vars -> ../../../000_cross_env_vars
│   │   │       └── env_specific.yml
│   │   ├── hosts
│   │   └── host_vars
│   │       └── hostname1
│   ├── production
│   │   ├── group_vars
│   │   │   └── all
│   │   │       ├── 000_cross_env_vars -> ../../../000_cross_env_vars
│   │   │       └── env_specific
│   │   ├── hosts
│   │   └── host_vars
│   │       └── hostname1
│   └── staging
│       ├── group_vars
│       │   └── all
│       │       ├── 000_cross_env_vars -> ../../../000_cross_env_vars
│       │       └── env_specific.yml
│       ├── hosts
│       └── host_vars
│           └── hostname1
├── library
├── mgmt-ssh-add-key.yml
├── module_utils
├── prod_site.yml
├── README.md
├── roles
│   └── users <--- FROM LINK ABOVE
│       ├── defaults
│       │   └── main.yml
│       ├── handlers
│       │   └── main.yml
│       ├── meta
│       │   └── main.yml
│       ├── tasks
│       │   ├── main.yml
│       └── tests
│           └── test.yml
├── stage_site.yml
├── user_accounts.retry
└── user_accounts.yml
Playbook
vagrant@ansible:~/ansible$ cat user_accounts.yml
---
- hosts: all
  become: true
  remote_user: vagrant
  vars_files:
    - "{{ inventory_dir }}/group_vars/all/env_specific.yml"
  roles:
    - users
Shared Variables between environments
vagrant@ansible:~/ansible$ more inventories/000_cross_env_vars
---
# System Users
users:
  - username: sbody
    name: Some Body
    uid: 3001
    groups: "{{ users_groups.['username'].groups }}"
    home: /home/sbody
    profile: |
      alias ll='ls -lah'
    ssh_key:
      - "ssh-rsa ... "

# Users to delete
users_deleted:
  - username: bar
    uid: 9002
    remove: yes
    force: yes
Specific Environment Variables
vagrant@ansible:~/ansible$ cat inventories/development/group_vars/all/env_specific.yml
# here we assign variables to particular groups
env: dev
users_groups:
  - username: sbody
    groups: ['users','developers'] # feeds groups in user creation

# Groups to create
groups_to_create:
  - name: developers
    gid: 10000
I think there is a way to feed the group memberships from env_specific.yml for each user in 000_cross_env_vars, but I am not sure how to do it without env_specific.yml trumping 000_cross_env_vars. Any help would be most appreciated. Thank you in advance.
EDIT:
I made the following changes and it seems to be getting closer now:
vagrant@ansible:~/ansible$ cat inventories/development/group_vars/all/env_specific.yml
# here we assign variables to particular groups
stage: dev
group_membership:
  sbody_groups: ['users','developers']
And the users declaration:
vagrant@ansible:~/ansible$ more inventories/000_cross_env_vars
---
# System Users
users:
  - username: sbody
    name: Some Body
    uid: 3001
    groups: "{{ group_membership['sbody_groups'] }}"
    home: /home/sbody
    profile: |
      alias ll='ls -lah'
    ssh_key:
      - "ssh-rsa ... "
So now I need to figure out how to set a default in case the user_group isn't defined.
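One way to do that (a hedged sketch, assuming group_membership itself is defined in every environment; the fallback list here is just an example) is Jinja2's default filter:

users:
  - username: sbody
    groups: "{{ group_membership['sbody_groups'] | default(['users']) }}"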
From my experience: instead of using inventory vars, I prefer using a vars directory, like this:
- name: Read all variables
  block:
    - name: Get stats on all variable files
      stat:
        path: "{{ item }}"
      with_fileglob:
        - "vars/global/common.yml"
        - "vars/{{ env | default('dev') }}/default.yml"
        - "vars/{{ env | default('dev') }}/secrets.vault"
      register: _variables_stat

    - name: Include all variable files (only when found)
      include_vars: "{{ item.stat.path }}"
      when: item.stat.exists
      with_items: "{{ _variables_stat.results }}"
      no_log: true
  delegate_to: localhost
  become: false
You can set env either from the inventory or from the command line.
Your global vars are read first and then overridden by the environment-specific ones if they exist. If not, you always have the defaults.
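The vars directory layout this assumes (names taken from the fileglob patterns above; prod is just an example of a second environment) would look something like:

vars
├── global
│   └── common.yml
├── dev
│   ├── default.yml
│   └── secrets.vault
└── prod
    ├── default.yml
    └── secrets.vault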
From personal experience, I've used inventories to separate environments and all it does is create unnecessary overhead when trying to keep certain variables in sync across the different inventories.
What we opted for instead is to separate the environments by inventory groups. That way we can load variables based on group names, pass those to our roles and utilize Ansible's inventory auto-loading mechanisms.
- name: Manage Users
  hosts: some-host
  pre_tasks:  # pre_tasks so the facts are set before the role runs
    - name: Include Common Users & Groups
      set_fact:
        users: "{{ common_users }}"
        usergroups: "{{ common_usergroups }}"

    - name: Include Users Based on Groups
      set_fact:
        users: "{{ users + q('vars', item + '_users') }}"
        usergroups: "{{ usergroups + q('vars', item + '_usergroups') }}"
      loop: "{{ lookup('items', group_names) }}"
  roles:
    - role: users
However, the query filter and the vars lookup are newer features that shipped with Ansible 2.5.
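To illustrate how the item + '_users' lookups resolve (hypothetical group and variable names, following the convention above), a host in a developers inventory group would pick up these on top of the common ones:

# group_vars/all.yml
common_users: ['alice']
common_usergroups: ['users']

# group_vars/developers.yml
developers_users: ['bob']
developers_usergroups: ['docker']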
Ansible executes the entire dependency role, even though my main.yml in the meta folder looks like this:
---
dependencies:
  - { role: common, caller_role: docker, tags: ['packages'] }
So, I expect Ansible to execute only the part of role common that contains the following:
---
- name: Install required packages
  package: name={{ item.name }} state=present
  with_items:
    - "{{ vars[caller_role]['SYSTEM']['PACKAGES'] }}"
  tags:
    - packages

- name: Modify /etc/hosts
  lineinfile:
    dest: /etc/hosts
    line: "{{ vars[caller_role]['REGISTRY']['ip'] }} {{ vars[caller_role]['REGISTRY']['hostname'] }}"
  tags:
    - write_etc_hosts
I run Ansible 2.1.1.0 as follows: ansible-playbook --list-tags site.yml, and here is my site.yml:
- hosts: localhost
  connection: local
  remote_user: root
  become: yes
  roles:
    - docker
And finally the tree:
├── common
│   ├── defaults
│   │   └── main.yml
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
├── docker
│   ├── defaults
│   │   └── main.yml
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
└── site.yml
I fail to understand what is happening.
If you specify tags for a role, Ansible applies them to every task in that role.
In your example, the packages tag will be added to every task in the role common.
Please see the tag inheritance section in the documentation:
You can apply tags to more than tasks, but they ONLY affect the tasks themselves. Applying tags anywhere else is just a convenience so you don’t have to write it on every task
All of these [samples] apply the specified tags to EACH task inside the play, included file, or role, so that these tasks can be selectively run when the playbook is invoked with the corresponding tags.
OK, thank you Konstantin. For this purpose I think I will use:
- include: foo.yml
  tags: [web, foo]
Regards
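As an aside, the bare include shown above has since been deprecated; on newer Ansible versions an equivalent sketch would be:

- import_tasks: foo.yml
  tags: [web, foo]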