Ansible: best practice for host-specific playbooks

I'm trying to clean up my Ansible infrastructure and want to stick to best practices as much as possible.
My problem is that I have host-specific tasks that I want to fit into the layout as sensibly as possible.
If I want both "best practice" and host-specific tasks, I end up with the following simplified file structure:
├── dbservers.yml
├── generic.yml
├── group_templates/
│   └── webservers/
├── group_vars/
│   └── webservers.yml
├── host_files/
│   └── d-ws323/
├── host_tasks/
│   └── d-ws323.yml
├── host_vars/
│   └── d-ws323/
├── production.ini
├── roles/
├── secrets.txt
├── site.yml
└── webservers.yml
My approach looks like this:
site.yml is the master playbook:
---
# file: site.yml
- import_playbook: webservers.yml
- import_playbook: dbservers.yml
- import_playbook: generic.yml
And generic.yml:
---
- hosts: all
  become: yes
  roles:
    - generic
  tasks:
    - local_action: stat path=host_tasks/{{ inventory_hostname }}.yml
      register: host_file
      become: no
    - include: host_tasks/{{ inventory_hostname }}.yml
      when: host_file.stat.exists
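As an alternative sketch under the same assumptions (a host_tasks/ directory keyed by inventory_hostname), the stat/register pair can be collapsed into a single first_found lookup with skip: true, so hosts without a task file are skipped silently:

```yaml
# Sketch, not a drop-in: include host_tasks/<hostname>.yml only if it exists.
- include_tasks: "{{ item }}"
  with_first_found:
    - files:
        - "host_tasks/{{ inventory_hostname }}.yml"
      skip: true
```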
Does it make sense?

What we do is create roles containing the tasks and then assign these roles to hosts or groups of hosts. Based on that, we would have a directory layout similar to:
.
├── group_vars
├── hosts
├── host_vars
│   └── host-01.yml
├── roles
│   └── webservers
│       ├── files
│       ├── tasks
│       │   └── main.yml
│       └── templates
└── webservers.yml
The hosts file is the inventory file, where you put all your hosts and groups, and webservers.yml is the playbook with the following content:
---
- hosts: host-01
  become: true
  user: ubuntu
  roles:
    - webservers
If you have to make any distro-specific arrangement, you should do that inside the role. Playbooks outside roles should exist only for connecting groups or hosts to roles, or for very trivial tasks, if possible.
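The distro-specific dispatch inside a role can be sketched like this (the per-distro file names are hypothetical, e.g. a debian.yml or redhat.yml sitting next to main.yml in tasks/):

```yaml
# roles/webservers/tasks/main.yml (sketch): pick a task file per distro family.
- include_tasks: "{{ ansible_os_family | lower }}.yml"
```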
Best regards!

Re-use Ansible vault file in different groups

I want to extend my current Ansible project to also support Linux servers. For that I want to re-use the vault file I have created but I cannot seem to find a solution without duplicating the vault file.
Here's what my current Ansible structure looks like
├── ansible.cfg
├── ansible_pw.sh
├── group_vars
│   └── windows
│       ├── vault.yml
│       └── main.yml
├── inventory.yml
├── main.yml
└── roles
    ├── wait_for_host
    │   └── tasks
    │       └── main.yml
    └── install_software
        └── tasks
            └── main.yml
inventory.yml
---
all:
  children:
    windows:
      hosts:
        win-server.mycompany.com:
main.yml
---
- hosts: windows
  tasks:
    - block:
        - include_role: { name: wait_for_host }
        - include_role: { name: install_software }
Playbook is run like this:
ansible-playbook main.yml -i inventory.yml --vault-password-file ./ansible_pw.sh
My idea is to create a new group_vars/linux directory which contains all settings that only apply to Linux servers.
While writing this question I actually found a neat solution: all general settings (including the vault file) can be stored in the default all group (see https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#default-groups), and all Windows/Linux-specific settings (like ansible_connection) can be stored in separate directories:
group_vars
├── all
│   ├── main.yml
│   └── vault.yml
├── linux
│   └── main.yml
└── windows
    └── main.yml
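To make the split concrete, the per-OS files would typically hold only the connection-level differences, while the shared vault stays under group_vars/all. A minimal sketch (the values are illustrative assumptions, not taken from the question):

```yaml
# group_vars/windows/main.yml (hypothetical values)
ansible_connection: winrm
ansible_port: 5986

# group_vars/linux/main.yml (hypothetical values)
ansible_connection: ssh
ansible_port: 22
```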

Ansible collection usage

I am trying to use an Ansible collection, for example the nginx one.
The directory tree structure looks like this:
├── ansible_collections
│   └── nginxinc
│       └── nginx_core
│           ├── CHANGELOG.md
.......
│           ├── README.md
│           └── roles
│               ├── nginx
│               │   ├── tasks
│               │   │   ├── amplify
│               │   │   ├── config
│               │   │   ├── keys
│               │   │   ├── main.yml
│               │   │   ├── modules
│               │   │   ├── opensource
│               │   │   │   └── install-debian.yml
│               │   │   └── unit
....
├── hosts
└── site.yaml
The site.yaml file I wrote is:
- name: Demo
  hosts: all
  connection: local
  gather_facts: no
  tasks:
    - name: test
      include_role:
        name: nginxinc.nginx_core.nginx
        tasks_from: install-debian
I am trying to run the task install-debian from the role nginx.
I run the playbook:
ansible-playbook -i hosts site.yaml
I get this error:
ERROR! the role 'nginxinc.nginx_core.nginx' was not found.....
I need help fixing the site.yaml file.
If I understand correctly, you should just install the Nginx collection with the following command, as explained here:
ansible-galaxy collection install nginxinc.nginx_core
It should install it in ~/.ansible/collections/ansible_collections/nginxinc/nginx_core/. Then create a playbook following these examples and the Ansible docs:
---
- hosts: all
  collections:
    - nginxinc.nginx_core
  roles:
    - role: nginx
Finally run your playbook:
ansible-playbook -i hosts my_nginx_playbook.yaml
It'll pick the Debian version for you if your host is Debian.
I regret to say that this is not working for me. This gives the impression that collections_paths is not used.
ansible --version
ansible 2.9.17
ansible-config view
[defaults]
inventory=/usr/local/ansible-admin/my_hosts
roles_path=/usr/local/galaxy-roles:./roles
collections_paths=/usr/local/ansible_collections
log_path=./ansible.log
The collections are installed in the /usr/local/ansible_collections folder:
tree -L 2 /usr/local/ansible_collections/nginxinc/
/usr/local/ansible_collections/nginxinc/
└── nginx_core
    ├── CHANGELOG.md
    ├── CODE_OF_CONDUCT.md
    ├── CONTRIBUTING.md
    ├── docs
    ├── FILES.json
    ├── LICENSE
    ├── MANIFEST.json
    ├── playbooks
    ├── plugins
    ├── README.md
    └── roles
Here is the very basic content of the playbook:
cat playbooks/nginx_core.yml
- name: Test collection ansible with nginxinc
  hosts: "{{ target }}"
  collections:
    - nginxinc.nginx_core
  tasks:
    - import_role:
        name: nginx
We get the following error message when it is launched:
ansible-playbook playbooks/nginx_core.yml --extra-vars target=myvm.mydomain.org
ERROR! the role 'nginx' was not found in nginxinc.nginx_core:ansible.legacy:/usr/local/ansible-admin/playbooks/roles:/usr/local/galaxy-roles:/usr/local/ansible-admin/roles:/usr/local/ansible-admin/playbooks
It doesn't find the role in the collection and, worse, it doesn't say that it looked in collections_paths...
Here is a solution that works, but it is very ugly: add the nginx role of the collection to roles_path!
roles_path=/usr/local/galaxy-roles:./roles:/usr/local/ansible_collections/nginxinc/nginx_core/roles
Warning: this is obviously a misuse of Ansible collections!
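One thing worth checking before settling for that workaround (an assumption, not a confirmed fix): Ansible also reads the ANSIBLE_COLLECTIONS_PATHS environment variable, which overrides the ini setting and helps rule out ansible.cfg not being picked up at all:

```shell
# Point Ansible at the collection tree directly; if this makes the role
# resolve, the problem is the config file lookup, not the collection layout.
export ANSIBLE_COLLECTIONS_PATHS=/usr/local/ansible_collections
# ansible-playbook playbooks/nginx_core.yml --extra-vars target=myvm.mydomain.org
```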
Any help would be appreciated.
Ernest.

Referencing Ansible Variables

I am new to Ansible and I am attempting to get user access under control. I found this role on Galaxy:
https://github.com/singleplatform-eng/ansible-users
I was also reading from this source to help manage different environments:
https://www.digitalocean.com/community/tutorials/how-to-manage-multistage-environments-with-ansible
So I have the following setup:
vagrant@ansible:~/ansible$ tree
├── ansible.cfg
├── debug.yml
├── dev_site.yml
├── filter_plugins
├── group_vars
│   └── all
│       └── 000_cross_env_vars -> ../../inventories/000_cross_env_vars
├── hosts
├── inventories
│   ├── 000_cross_env_vars
│   ├── development
│   │   ├── group_vars
│   │   │   └── all
│   │   │       ├── 000_cross_env_vars -> ../../../000_cross_env_vars
│   │   │       └── env_specific.yml
│   │   ├── hosts
│   │   └── host_vars
│   │       └── hostname1
│   ├── production
│   │   ├── group_vars
│   │   │   └── all
│   │   │       ├── 000_cross_env_vars -> ../../../000_cross_env_vars
│   │   │       └── env_specific
│   │   ├── hosts
│   │   └── host_vars
│   │       └── hostname1
│   └── staging
│       ├── group_vars
│       │   └── all
│       │       ├── 000_cross_env_vars -> ../../../000_cross_env_vars
│       │       └── env_specific.yml
│       ├── hosts
│       └── host_vars
│           └── hostname1
├── library
├── mgmt-ssh-add-key.yml
├── module_utils
├── prod_site.yml
├── README.md
├── roles
│   └── users <--- FROM LINK ABOVE
│       ├── defaults
│       │   └── main.yml
│       ├── handlers
│       │   └── main.yml
│       ├── meta
│       │   └── main.yml
│       ├── tasks
│       │   └── main.yml
│       └── tests
│           └── test.yml
├── stage_site.yml
├── user_accounts.retry
└── user_accounts.yml
Playbook
vagrant@ansible:~/ansible$ cat user_accounts.yml
---
- hosts: all
  become: true
  remote_user: vagrant
  vars_files:
    - "{{ inventory_dir }}/group_vars/all/env_specific.yml"
  roles:
    - users
Shared Variables between environments
vagrant@ansible:~/ansible$ more inventories/000_cross_env_vars
---
# System Users
users:
  - username: sbody
    name: Some Body
    uid: 3001
    groups: "{{ users_groups.['username'].groups }}"
    home: /home/sbody
    profile: |
      alias ll='ls -lah'
    ssh_key:
      - "ssh-rsa ... "

# Users to delete
users_deleted:
  - username: bar
    uid: 9002
    remove: yes
    force: yes
Specific Environment Variables
vagrant@ansible:~/ansible$ cat inventories/development/group_vars/all/env_specific.yml
# here we assign variables to particular groups
env: dev
users_groups:
  - username: sbody
    groups: ['users','developers'] # feeds groups in user creation

# Groups to create
groups_to_create:
  - name: developers
    gid: 10000
I think there is a way to feed the group memberships from env_specific.yml for each user in 000_cross_env_vars, but I am not sure how to do it without env_specific.yml trumping 000_cross_env_vars. Any help would be most appreciated. Thank you in advance.
EDIT:
I made the following changes and it seems to be getting closer now:
vagrant@ansible:~/ansible$ cat inventories/development/group_vars/all/env_specific.yml
# here we assign variables to particular groups
stage: dev
group_membership:
  sbody_groups: ['users','developers']
And the users declaration:
vagrant@ansible:~/ansible$ more inventories/000_cross_env_vars
---
# System Users
users:
  - username: sbody
    name: Some Body
    uid: 3001
    groups: "{{ group_membership['sbody_groups'] }}"
    home: /home/sbody
    profile: |
      alias ll='ls -lah'
    ssh_key:
      - "ssh-rsa ... "
So now I need to figure out how to set a default in case the user_group isn't defined.
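The missing default can be sketched with Jinja's default filter (the variable names come from the snippets above; the fallback list is an illustrative assumption):

```yaml
# Falls back to ['users'] when group_membership has no entry for this user.
groups: "{{ group_membership['sbody_groups'] | default(['users']) }}"
```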
From my experience: instead of using inventory vars, I prefer using a vars directory, like:
- name: Read all variables
  block:
    - name: Get stats on all variable files
      stat:
        path: "{{ item }}"
      with_fileglob:
        - "vars/global/common.yml"
        - "vars/{{ env | default('dev') }}/default.yml"
        - "vars/{{ env | default('dev') }}/secrets.vault"
      register: _variables_stat

    - name: Include all variable files (only when found)
      include_vars: "{{ item.stat.path }}"
      when: item.stat.exists
      with_items: "{{ _variables_stat.results }}"
      no_log: true
  delegate_to: localhost
  become: false
You can select the env either from the inventory or from the command line.
Your global vars are read first and overridden by your environment's files if they exist; if not, you always fall back to the defaults.
From personal experience, I've used inventories to separate environments and all it does is create unnecessary overhead when trying to keep certain variables in sync across the different inventories.
What we opted for instead is to separate the environments by inventory groups. That way we can load variables based on group names, pass those to our roles and utilize Ansible's inventory auto-loading mechanisms.
- name: Manage Users
  hosts: some-host
  pre_tasks:
    - name: Include Common Users & Groups
      set_fact:
        users: "{{ common_users }}"
        usergroups: "{{ common_usergroups }}"

    - name: Include Users Based on Groups
      set_fact:
        users: "{{ users + q('vars', item + '_users') }}"
        usergroups: "{{ usergroups + q('vars', item + '_usergroups') }}"
      loop: "{{ lookup('items', group_names) }}"
  roles:
    - role: users
However, the query filter and the vars lookup are new features and were shipped with Ansible 2.5.

select group_var variable file in top level playbook

I have defined all my variables in group_vars/all/vars_file.yml and my playbook is as below:
---
# Top level play site.yml
- hosts: webclient
  roles:
    - common
    - nginx
    - nvm
    - deploy_web_client

- hosts: appserver
  roles:
    - common
    - gradle
    - tomcat
    - deploy_sb_war
Now I have 3 environments: dev / staging / production. Depending on the environment, I change vars_file.yml under group_vars and then run ansible-playbook.
Is there any way I can keep 3 files like "group_vars/dev", "group_vars/staging", "group_vars/production" and specify the right one in my main site.yml?
I have 3 inventory files as below, and depending on the environment I specify the inventory file name when running ansible-playbook:
[webclient]
10.20.30.40
[appserver]
10.20.30.41
Instead of using inventory files saved in a single directory, use inventory files in separate directories and put group_vars inside each of them.
.
├── dev
│   ├── group_vars
│   │   └── all
│   │       └── vars_file.yml
│   └── inventory
├── production
│   ├── group_vars
│   │   └── all
│   │       └── vars_file.yml
│   └── inventory
└── staging
    ├── group_vars
    │   └── all
    │       └── vars_file.yml
    └── inventory
Then point to the directory in the ansible-playbook call:
ansible-playbook -i dev <the_rest>

ansible - include the static script file in roles

Below is my directory structure for an Ansible role called webserver:
localhost roles # tree
.
├── readme.md
├── site.yml
└── webserver
    ├── files
    │   ├── createswap.sh
    │   └── nginxkeyadd.sh
    ├── handlers
    │   └── main.yml
    ├── tasks
    │   └── main.yml
    ├── templates
    │   ├── helloworld.conf.j2
    │   └── index.html.j2
    └── vars
        └── main.yml
My tasks/main.yml looks like:
- name: Create swap file 50MB
  script: /etc/ansible/roles/webserver/files/createswap.sh

- name: add GPG key for nginx
  script: /etc/ansible/roles/webserver/files/nginxkeyadd.sh

- name: Install nginx on target
  apt: name={{ item }} state=latest
  with_items:
    - rsync
    - git
    - nginx
In tasks/main.yml I'm specifying absolute paths to the local script files, i.e. script: /etc/ansible/roles/webserver/files/nginxkeyadd.sh and script: /etc/ansible/roles/webserver/files/createswap.sh. The scripts don't contain any Ansible variables.
Is this good practice in Ansible?
Is this good practice in Ansible?
No. Excerpt from the docs:
Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely
Also, using shell scripts instead of native Ansible modules is an anti-pattern.
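A sketch of what the role-relative version could look like: the script names come from the question, while the apt_key task is an assumption standing in for whatever nginxkeyadd.sh does (the nginx signing-key URL is illustrative):

```yaml
# roles/webserver/tasks/main.yml (sketch)
- name: Create swap file 50MB
  script: createswap.sh   # resolved against roles/webserver/files/

- name: add GPG key for nginx
  apt_key:
    url: https://nginx.org/keys/nginx_signing.key
    state: present
```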
