I'm trying to build an Ansible configuration with loosely coupled roles, but I'm struggling to find the best setup.
First of all, I created many roles, as elementary as possible, and here is what my ansible folder looks like:
.
├── group_vars
│   └── all
├── inventory.yml
├── playbook.yml
└── roles
    ├── app_1
    │   └── defaults
    ├── app_2
    │   └── defaults
    ├── postgres
    │   └── defaults
    ├── rabbitmq
    │   └── defaults
    └── redis
        └── defaults
inventory.yml
all:
  children:
    db:
      hosts:
        db.domain.com:
    app1:
      hosts:
        app1.domain.com:
    app2:
      hosts:
        app2.domain.com:
    cache:
      hosts:
        cache.domain.com:
playbook.yml
- name: "play db"
  hosts: db
  roles:
    - postgres

- name: "play cache"
  hosts: cache
  roles:
    - redis

- name: "play app1"
  hosts: app1
  roles:
    - app_1
    - rabbitmq

- name: "play app2"
  hosts: app2
  roles:
    - app_2
    - rabbitmq
The problem here is that I have no idea how different roles can share variables, because they run on different hosts; app_1 and app_2 need variables defined in redis and postgres, for example.
I see two solutions:
Define all variables in group_vars/all => the problem is that there are a lot of variables, so the file will get too big, besides the duplication of variables (locally in the role + globally).
In each role I could say: if you need a variable from postgres, then use hostvars from the group "db". But here I think the role is not supposed to know anything about the host configuration.
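For what it's worth, option 2 would look something like the sketch below. Everything here is illustrative (the task, the template names, and the postgres_port variable are all hypothetical), and note the caveat in the comments: hostvars only sees variables actually set for the db hosts, not another role's defaults.

```yaml
# Hypothetical sketch: a task in app_1 reading a variable from the first "db" host.
# postgres_port must actually be defined for the db hosts (inventory, group_vars,
# or set_fact); role defaults of roles applied to other hosts are NOT visible
# through hostvars.
- name: Point the app at postgres
  ansible.builtin.template:
    src: app.conf.j2
    dest: /etc/app/app.conf
  vars:
    db_host: "{{ groups['db'][0] }}"
    db_port: "{{ hostvars[groups['db'][0]]['postgres_port'] }}"
```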
I really have no idea how to solve this problem and end up with a clean config.
Thank you!
For testing purposes, each role needs to have its own variables, so you can test them individually.
Variables also have a scope and precedence; see: variable precedence.
So when you declare a variable at role scope, it will not be available to other roles. If you need a variable to be global, add it to the group_vars scope, host_vars scope, play scope, or extra_vars scope (CLI). Either way, you will need to include them.
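For example, a minimal sketch (the variable names here are illustrative, not from the question) of promoting shared values to group_vars scope:

```yaml
# group_vars/all.yml (hypothetical): loaded for every host,
# so every play and every role can read these values
postgres_port: 5432
redis_host: cache.domain.com
```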
One way to reuse the variables from other roles or group_vars is to use vars_files to load them for the play you want.
For example, if your app1 hosts require variables defined in redis/defaults/main.yml:
- name: "play app1"
  hosts: app1
  vars_files:
    - roles/redis/defaults/main.yml
  roles:
    - app_1
    - rabbitmq
Or, a better option in my opinion, would be to have the variables segregated into group_vars and load them the same way for other hosts.
- name: "play app2"
  hosts: app2
  vars_files:
    - group_vars/db.yml
  roles:
    - app_2
    - rabbitmq
Related
I want to setup a server using Ansible. This is my file structure:
group_vars/
  all.yml
  development.yml
  production.yml
  vault/
    all.yml
    development.yml
    production.yml
playbooks/
  development.yml
  production.yml
roles/
  common/
    tasks/
      main.yml
    vars/
      main.yml
ansible.cfg
hosts
This is my ansible.cfg:
[defaults]
vault_password_file = ./vault_pass.txt
host_key_checking = False
inventory = ./hosts
The development.yml playbook:
- hosts: all
  name: Development Playbook
  become: true
  roles:
    - ../roles/common
  vars_files:
    - ../group_vars/development.yml
    - ../group_vars/all.yml
    - ../group_vars/vault/development.yml
    - ../group_vars/vault/all.yml
And the tasks/main.yml file of the common role:
# Set hostname
- name: Set hostname
  become: true
  ansible.builtin.hostname:
    name: "{{ server.hostname }}"

# Set timezone
- name: Set timezone
  become: true
  community.general.timezone:
    name: "{{ server.timezone }}"

# Update all packages
- name: Update all packages
  become: true
  ansible.builtin.apt:
    upgrade: dist
    update_cache: true
The group_vars/all.yml file looks like this:
server:
  hostname: "myhostname"
  timezone: "Europe/Berlin"
When running the playbook using ansible-playbook playbooks/development.yml, I get this error:
fatal: [default]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'hostname'. 'dict object' has no attribute 'hostname'\n\nThe error appears to be in '/ansible/roles/common/tasks/main.yml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Set hostame\n- name: Set hostname\n ^ here\n"}
Can someone explain to me why the vars_files does not work and how to fix this?
Ansible automatically imports files and directories in group_vars that match the name of an active group. That is, if you are targeting the production group, and you have a file group_vars/production.yaml, the variables defined in this file will be imported automatically.
If instead of a file you have the directory group_vars/production/, then all files in that directory will be imported for hosts in the production group.
So your files in group_vars/vault/ will only be imported automatically for hosts in the vault hostgroup, which isn't the behavior you want.
Without knowing all the details about your deployment, I would suggest:
Create directories group_vars/{all,development,production}.
Rename the inventory file group_vars/all.yml to group_vars/all/common.yml, and similarly for development.yml and production.yml (the name common.yml isn't special; you can use whatever name you want).
Rename group_vars/vault/all.yml to group_vars/all/vault.yml, and similarly for the other files.
This will give you the following layout:
group_vars/
├── all
│   ├── common.yml
│   └── vault.yml
├── development
│   ├── common.yml
│   └── vault.yml
└── production
    ├── common.yml
    └── vault.yml
My playbook directory structure:
/ansible_repo/
└── playbooks/
    ├── playbooks1.yml
    ├── playbooks2.yml
    ├── somerole.yml  --> main playbook with roles
    └── roles/
        └── somerole
            ├── default
            │   └── main.yml
            ├── handler
            │   └── main.yml
            ├── tasks
            │   └── main.yml
            └── vars
                └── main.yml
playbooks1.yml:
---
- hosts: all
  tasks:
    - pause:
        minutes: 3
    - name: ping host
      win_ping:
somerole.yml:
---
- hosts: ci_host
  roles:
    - somerole
somerole\tasks\main.yml:
---
- include: playbooks/playbooks1.yml
When I run the role on some host:
ansible-playbook role-test.yml -vv --limit somehost
I get this error:
fatal: [somehost]: FAILED! =>
  reason: |-
    conflicting action statements: hosts, tasks
If I change the included file to contain only tasks, like this, it passes:
- pause:
    minutes: 3
- name: ping host
  win_ping:
I'm trying to understand how to set hosts and tasks in both the role's tasks/main.yml and the playbook, and include the playbook in the role's tasks.
If they conflict, can I configure a host hierarchy?
The error indicates that you are including a playbook inside a role; hosts and tasks are play-level keywords and are not allowed in a role's task list.
As somerole.yml is your main playbook, you can invoke other playbooks and roles as necessary.
Example:
- name: run playbook playbook1
import_playbook: playbooks/playbooks1.yml
- hosts: ci_host
roles:
- somerole
- name: run playbook playbook2
import_playbook: playbooks/playbooks2.yml
ERROR! vars file vars not found on the Ansible Controller. If you are using a module and expect the file to exist on the remote, see the remote_src option.
---
# tasks file for user-management
- hosts: linux
  become: yes
  vars_files:
    - vars/active-users
    - vars/remove-users
  roles:
    - { role: user-management }
  tasks:
    - import_tasks: /tasks/user-accounts.yml
    - import_tasks: /tasks/authorized-key.yml
Trying to run the main.yml on a server to execute against remote hosts (linux). The vars directory contains two vars files (active-users, remove-users).
Your vars folder should be at the same level as your playbook.
If it is, is it possible that you missed the .yml extension on your active-users and remove-users files?
Check the relative path this way:
vars_files:
  - ../vars/active-users.yml
  - ../vars/remove-users.yml
I was stuck at the same error; the issue I found was that the vars_files location defined in my playbook was incorrect.
Steps I followed:
1. Run find / -name "<vars_files>.yaml" — this searches for your <vars_files>.yaml starting from the root directory, so you can check the actual location of the vars files.
2. Make sure the location of the vars files in the playbook matches what you found in step 1.
For instance:
- hosts: localhost
  connection: local
  gather_facts: no
  become: yes
  vars_files:
    - /root/secrets.yaml
@MoYaMoS, thanks! This was the resolution for me. It wasn't obvious at first, as I have a top-level play calling two plays in the plays/ directory. The domain-join play is the one that loads group_vars in this case.
├── group_vars
│   └── linux
├── plays
│   ├── create-vm.yml
│   └── domain-join.yml
└── provision.yml
I just specified the relative path in the domain-join.yml as you mentioned. Works perfectly.
vars_files:
  - ../group_vars/linux
I know that you can switch between different inventory files using the -i flag, which lets you target different sets of hosts.
In my case, the hosts to be acted upon change between deployments, so I take the hosts in as --extra-vars and use delegate_to to deploy to that host (see below for more details).
I was hoping for a way to switch between files containing environment variables in a similar fashion. For example, lets say I have the following simplified directory structure:
/etc/ansible/
├── ansible.cfg
├── hosts
└── project/
    └── environments/
        ├── dev/
        │   └── vars.yml
        └── prd/
            └── vars.yml
The structure of vars.yml in both environments would be exactly the same, just with the variables having different values due to the differences between environments.
I've found a few places that talk about doing something similar such as these:
https://rock-it.pl/managing-multiple-environments-with-ansible-best-practices/
http://rosstuck.com/multistage-environments-with-ansible
http://www.geedew.com/setting-up-ansible-for-multiple-environment-deployments/
In those guides, they act against statically declared hosts. One thing that seems helpful is the group_vars directory: the file whose name matches an inventory group is picked up automatically, and its variables are presumably used when the hosts: directive of a play contains the host(s) of that group.
However, since I dynamically read in the servers we're acting against via the CLI flag --extra-vars, I can't take that approach, because I will always have something like this in my plays:
...
hosts: localhost
tasks:
  ...
  - name: do something
    ...
    delegate_to: "{{ item }}"
    with_items: "{{ given_hosts }}"
Or I run a task first that takes the servers and adds them to a new host like this:
- name: Extract Hosts
  hosts: localhost
  tasks:
    - name: Adding given hosts to new group...
      add_host:
        name: "{{ item }}"
        groups: some_group
      with_items:
        - "{{ list_of_hosts | default([]) }}"
and then uses the dynamically created group:
- name: Restart Tomcat for Changes to Take Effect
  hosts: some_group
  tasks:
    - name: Restarting Tomcat...
      service:
        name: tomcat
        state: restarted
So I need to find a way to specify which vars.yml to use. Because I use Jenkins to kick off the Ansible playbook via CLI over SSH, I was hoping for something like the following:
ansible-playbook /path/to/some/playbook.yml --include-vars /etc/ansible/project/dev/vars.yml
At the least, how would I explicitly include a vars.yml file in a playbook to use the variables defined within?
You can use:
extra vars with @: --extra-vars @/etc/ansible/project/dev/vars.yml
or
include_vars:
- include_vars: "/etc/ansible/project/{{ some_env }}/vars.yml"
to load different variables depending in your environment.
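Putting it together, a minimal sketch of the include_vars variant (the environments/ path segment follows the question's layout; some_env is assumed to be passed on the CLI):

```yaml
# Invoked as: ansible-playbook playbook.yml --extra-vars "some_env=dev"
- hosts: localhost
  tasks:
    - name: Load environment-specific variables
      include_vars: "/etc/ansible/project/environments/{{ some_env | default('dev') }}/vars.yml"
```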
Currently my playbook structure is like this:
~/test_ansible_roles ❯❯❯ tree .
.
├── checkout_sources
│   └── tasks
│       └── main.yml
├── install_dependencies
│   └── tasks
│       └── main.yml
├── make_dirs
│   └── tasks
│       └── main.yml
└── setup_machine.yml
One of the roles installs dependencies on my box, and for this I need sudo. Because of that, in all of my other tasks I need to include the stanza:
become: yes
become_user: my_username
Is there a better way to do this?
You can set the become options per:
play
role
task
Per play:
- hosts: whatever
  become: true
  become_user: my_username
  roles:
    - checkout_sources
    - install_dependencies
    - make_dirs
Per role:
- hosts: whatever
  roles:
    - checkout_sources
    - role: install_dependencies
      become: true
      become_user: my_username
    - make_dirs
Per task:
- shell: do something
  become: true
  become_user: my_username
You can combine these however you like: the play can run as user A, a role as user B, and finally a task inside the role as user C.
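For instance, a sketch of such a combination (the usernames are illustrative):

```yaml
# Hypothetical: the play runs as user_a, one role as user_b
- hosts: whatever
  become: true
  become_user: user_a
  roles:
    - checkout_sources
    - role: install_dependencies
      become_user: user_b
    - make_dirs
```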
Defining become per play or role is rarely needed. If a single task inside a role requires sudo, it should be defined only for that specific task, not the whole role.
If multiple tasks inside a role require become, blocks come in handy to avoid recurrence:
- block:
    - shell: do something
    - shell: do something
    - shell: do something
  become: true
  become_user: my_username