The Ansible docs say:
If Ansible were to load ansible.cfg from a world-writable current working directory, it would create a serious security risk.
That makes sense, but it causes a problem in the CI pipeline for my project:
.
├── group_vars
├── host_vars
├── playbooks
├── resources
├── roles
│   ├── bootstrap
│   └── networking
├── ansible.cfg
├── inventory.yml
├── requirements.yml
├── site.yml
└── vault.yml
I have two "local" roles which are checked into source control under ./roles of the Ansible project, but the roles are not found when I run ansible-playbook --syntax-check site.yml:
$ ansible-playbook --syntax-check site.yml
[WARNING] Ansible is being run in a world writable directory (/builds/papanito/infrastructure), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
[WARNING]: provided hosts list is empty, only localhost is available. Note
that the implicit localhost does not match 'all'
ERROR! the role 'networking' was not found in /builds/papanito/infrastructure/playbooks/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/builds/papanito/infrastructure/playbooks
The error appears to have been in '/builds/papanito/infrastructure/playbooks/networking.yml': line 14, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- { role: networking, become: true }
^ here
ERROR: Job failed: exit code 1
--------------------------------------------------------
Obviously, that's because roles are searched for in
A roles/ directory, relative to the playbook file.
That's why my ansible.cfg tells Ansible to also look in ./roles:
# additional paths to search for roles in, colon separated
roles_path = ./roles
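For context, a sketch of how that excerpt sits in ansible.cfg (roles_path only takes effect under the [defaults] section):

```ini
[defaults]
# additional paths to search for roles in, colon separated
roles_path = ./roles
```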
So, based on the Ansible docs, I can use the environment variable ANSIBLE_CONFIG, which I do as follows in my gitlab-ci.yml:
variables:
  SITE: "site.yml"
  PLAYBOOKS: "playbooks/**/*.yml"
  ANSIBLE_CONIG: "./ansible.cfg"
stages:
  - verify
before_script:
.....
ansible-verify:
  stage: verify
  script:
    - ansible-lint -v $SITE
    - ansible-lint -v $PLAYBOOKS
    - ansible-playbook --syntax-check $SITE
    - ansible-playbook --syntax-check $PLAYBOOKS
But I still get the error above. What am I missing?
site.yml
- import_playbook: playbooks/networking.yml
- import_playbook: playbooks/monitoring.yml
playbooks/networking.yml
- name: Setup default networking
  hosts: all
  roles:
    - { role: networking, become: true }
    - { role: oefenweb.fail2ban, become: true }
I know the topic is old, but you have a typo in your gitlab-ci.yml: ANSIBLE_CONIG is missing the F in CONFIG. Write this instead:
variables:
  SITE: "site.yml"
  PLAYBOOKS: "playbooks/**/*.yml"
  ANSIBLE_CONFIG: "./ansible.cfg"
BTW, this helped me solve my own problem.
This looks like a directory layout issue: there are no tasks associated with the roles bootstrap and networking; instead, the playbooks sit in a separate folder called playbooks.
Refer to the recommended directory layout: https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html
I want to setup a server using Ansible. This is my file structure:
group_vars/
    all.yml
    development.yml
    production.yml
    vault/
        all.yml
        development.yml
        production.yml
playbooks/
    development.yml
    production.yml
roles/
    common/
        tasks/
            main.yml
        vars/
            main.yml
ansible.cfg
hosts
This is my ansible.cfg:
[defaults]
vault_password_file = ./vault_pass.txt
host_key_checking = False
inventory = ./hosts
The development.yml playbook:
- hosts: all
  name: Development Playbook
  become: true
  roles:
    - ../roles/common
  vars_files:
    - ../group_vars/development.yml
    - ../group_vars/all.yml
    - ../group_vars/vault/development.yml
    - ../group_vars/vault/all.yml
And the tasks/main.yml file of the common role:
# Set hostame
- name: Set hostname
  become: true
  ansible.builtin.hostname:
    name: "{{ server.hostname }}"

# Set timezone
- name: Set timezone
  become: true
  community.general.timezone:
    name: "{{ server.timezone }}"

# Update all packages
- name: Update all packages
  become: true
  ansible.builtin.apt:
    upgrade: dist
    update_cache: true
The group_vars/all.yml file looks like this:
server:
  hostname: "myhostname"
  timezone: "Europe/Berlin"
When running the playbook using ansible-playbook playbooks/development.yml, I get this error:
fatal: [default]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'hostname'. 'dict object' has no attribute 'hostname'\n\nThe error appears to be in '/ansible/roles/common/tasks/main.yml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Set hostame\n- name: Set hostname\n ^ here\n"}
Can someone explain to me why the vars_files does not work and how to fix this?
Ansible automatically imports files and directories in group_vars that match the name of an active group. That is, if you are targeting the production group, and you have a file group_vars/production.yaml, the variables defined in this file will be imported automatically.
If instead of a file you have the directory group_vars/production/, then all files in that directory will be imported for hosts in the production group.
So your files in group_vars/vault/ will only be imported automatically for hosts in the vault hostgroup, which isn't the behavior you want.
Without knowing all the details about your deployment, I would suggest:
Create the directories group_vars/{all,development,production}.
Rename the inventory file group_vars/all.yml to group_vars/all/common.yml, and similarly for development.yml and production.yml (the name common.yml isn't special; you can use whatever name you want).
Rename group_vars/vault/all.yml to group_vars/all/vault.yml, and similarly for the other files.
This will give you the following layout:
group_vars/
├── all
│   ├── common.yml
│   └── vault.yml
├── development
│   ├── common.yml
│   └── vault.yml
└── production
    ├── common.yml
    └── vault.yml
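The renames above can be sketched as a shell session (a minimal sketch; the scaffolding lines only recreate the question's layout for the demo, skip them in your real repo):

```shell
# Demo scaffolding: recreate the original layout (skip in your real repo).
mkdir -p group_vars/vault
touch group_vars/all.yml group_vars/development.yml group_vars/production.yml
touch group_vars/vault/all.yml group_vars/vault/development.yml group_vars/vault/production.yml

# The actual renames (use `git mv` instead of `mv` in a git repo):
mkdir -p group_vars/all group_vars/development group_vars/production
mv group_vars/all.yml group_vars/all/common.yml
mv group_vars/development.yml group_vars/development/common.yml
mv group_vars/production.yml group_vars/production/common.yml
mv group_vars/vault/all.yml group_vars/all/vault.yml
mv group_vars/vault/development.yml group_vars/development/vault.yml
mv group_vars/vault/production.yml group_vars/production/vault.yml
rmdir group_vars/vault
```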
I have the following directory structure:
├── ansible.cfg
├── hosts.yml
├── playbook.yml
├── group_vars
│   ├── all.yml
│   └── vm_dns.yml
└── roles
    └── pihole
        ├── handlers
        │   └── main.yml
        └── tasks
            └── main.yml
In ansible.cfg I simply have:
[defaults]
inventory = ./hosts.yml
In group_vars/all.yml I have some generic settings:
---
aptcachetime: 3600
locale: "en_GB.UTF-8"
timezone: "Europe/Paris"
And in hosts.yml I setup my PiHole VMs:
---
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
vm_dns:
  vars:
    dns_server: true
  hosts:
    vmb-dns:
      pihole:
        dns:
          - "185.228.168.10"
          - "185.228.169.11"
        network:
          ipv4: "192.168.2.4/24"
          interface: eth0
    vmk-dns:
      pihole:
        dns:
          - "185.228.168.10"
          - "185.228.169.11"
        network:
          ipv4: "192.168.3.4/24"
          interface: eth0
At this point, I've not attempted to move any vars to group_vars, and everything works.
Now, I felt I could make the hosts file more readable by breaking out the settings that are the same for all vm_dns hosts into a group_vars file. So I removed all the dns and interface lines from hosts.yml and put them in a group_vars/vm_dns.yml file, like this:
---
pihole:
  dns:
    - "185.228.168.10"
    - "185.228.169.11"
  network:
    interface: eth0
At this point, the hosts.yml thus contains:
---
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
vm_dns:
  vars:
    dns_server: true
  hosts:
    vmb-dns:
      pihole:
        network:
          ipv4: "192.168.2.4/24"
    vmk-dns:
      pihole:
        network:
          ipv4: "192.168.3.4/24"
But when I now run the playbook, once it tries to execute a task that uses one of the vars that were moved from hosts.yml to group_vars/vm_dns.yml, Ansible fails with AnsibleUndefinedVariable: dict object has no attribute ....
I'm not really sure if I'm simply misunderstanding the "Ansible way", or if what I'm trying to do (essentially splitting parts of the same structure across host and group vars, I suppose) is just not doable. I thought the "flattening" Ansible does was supposed to handle this, but it seems Ansible is not incorporating the vars defined in group_vars/vm_dns.yml at all.
I've read the docs on the subject, and found some almost-related posts, but found none demonstrating YAML-formatted lists used across hosts and group_vars in this manner.
Edit: other SO or Github issues that are actually related to this question
In Ansible, how to combine variables from separate files into one array?
https://github.com/ansible/ansible/issues/58120
https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-hash-behaviour
Since you keep a definition of the pihole var in your inventory at host level, that one wins by default and replaces the previous definition at group level; see the variable precedence documentation. So when you later try to access e.g. pihole.dns or pihole.network.interface, those mappings no longer exist and Ansible fires the above error.
This is the default behavior in Ansible: a previous variable is replaced by the latest one in order of precedence. But you can change this behavior for dicts by setting hash_behaviour=merge in ansible.cfg.
My personal experimentation with this setting was not really satisfactory: it behaved correctly with my own playbooks/roles that were made specifically for it, but started to trigger hard-to-trace bugs when including third-party contributions (playbook snippets, roles, custom modules...). So I definitely don't recommend it. Moreover, this configuration has been deprecated in Ansible 2.10 and will therefore be removed in Ansible 2.14. If you still want to use it, you should limit the scope of the setting as narrowly as possible, and certainly not set it at a global level (i.e. surely not in /etc/ansible/ansible.cfg).
What I generally use nowadays to solve this kind of problem:
Define a variable for each host/group/whatever containing only the specific information. In your case, for your host:
---
pihole_host:
  network:
    ipv4: "192.168.2.4/24"
Define the defaults for those settings somewhere. In your case, for your group:
---
pihole_defaults:
  dns:
    - "185.228.168.10"
    - "185.228.169.11"
  network:
    interface: eth0
(Note that you can define those defaults at different levels, taking advantage of the above order of precedence for vars.)
At a global level (I generally put this in group_vars/all.yml), define the var that combines the defaults and the specifics, making sure each part defaults to empty:
---
# Calculate pihole from group defaults and host specifics
pihole: >-
  {{
    (pihole_defaults | default({}))
    | combine((pihole_host | default({})), recursive=true)
  }}
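With the sample values above, for host vmb-dns the combined pihole variable would then evaluate to roughly:

```yaml
pihole:
  dns:
    - "185.228.168.10"
    - "185.228.169.11"
  network:
    interface: eth0
    ipv4: "192.168.2.4/24"
```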
I'm having the following issue and am not sure if it's a bug or my setup is wrong. I've created a role ssh with the following structure:
.
└── roles
    └── ssh
        ├── files
        │   └── sshd_config
        └── tasks
            └── main.yml
The main.yml file looks like this:
---
- hosts: all
  tasks:
    - name: "Set sshd configuration"
      copy:
        src: sshd_config
        dest: /etc/ssh/sshd_config
Because sshd_config is stored in the recommended files directory, I expected the copy command to automatically fetch that file when referencing it from the task.
Instead, Ansible looks for sshd_config in the following directories:
ansible.errors.AnsibleFileNotFound: Could not find or access 'sshd_config'
Searched in:
<redacted>/roles/ssh/tasks/files/sshd_config
<redacted>/roles/ssh/tasks/sshd_config
<redacted>/roles/ssh/tasks/files/sshd_config
<redacted>/roles/ssh/tasks/sshd_config on the Ansible Controller
Notice that it does look in a files directory, but inside the tasks folder!
Main goal is to send a local file (on my host machine) to the remote server.
I run the playbook with following command:
ansible-playbook -i hosts ./roles/ssh/tasks/main.yml -vvv
Questions:
Is my assumption right Ansible should look for the file in the files directory adjacent to tasks directory?
Did I mess up my setup?
I think you confused roles with playbooks: you created a playbook in the place where the role should be. Instead, create a role, and then create a playbook (outside the roles/ dir) that uses it.
Here's example /roles/ssh/tasks/main.yml:
- name: "Set sshd configuration"
  copy:
    src: sshd_config
    dest: /etc/ssh/sshd_config
and a playbook that uses the ssh role:
---
- hosts: all
  tasks:
    - import_role:
        name: ssh
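With that split, the layout would look like this (playbook.yml is an assumed name for the new playbook):

```
.
├── playbook.yml
└── roles
    └── ssh
        ├── files
        │   └── sshd_config
        └── tasks
            └── main.yml
```

You then run the playbook itself, e.g. ansible-playbook -i hosts playbook.yml, and copy resolves src: sshd_config against roles/ssh/files/ as expected.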
ERROR! vars file vars not found on the Ansible Controller. If you are using a module and expect the file to exist on the remote, see the remote_src option.
---
# tasks file for user-management
- hosts: linux
  become: yes
  vars_files:
    - vars/active-users
    - vars/remove-users
  roles:
    - { role: user-management }
  tasks:
    - import_tasks: /tasks/user-accounts.yml
    - import_tasks: /tasks/authorized-key.yml
I'm trying to run main.yml on a server to execute it against remote hosts (linux). The vars directory contains two vars files (active-users, remove-users).
Your vars folder should be at the same level as your playbook.
If it is, is it possible that you missed the .yml extension of your active-users and remove-users files?
Check the relative path this way:
vars_files:
  - ../vars/active-users.yml
  - ../vars/remove-users.yml
I was stuck at the same error; the issue I found was that the vars_files location defined in my playbook was incorrect.
Steps I followed:
1. Run find / -name <vars_files>.yaml — this searches for your <vars_files>.yaml starting from the root directory, so you can check the location of the vars files.
2. Make sure the location of the vars files in the playbook matches step 1.
For instance:
- hosts: localhost
  connection: local
  gather_facts: no
  become: yes
  vars_files:
    - /root/secrets.yaml
@MoYaMoS, thanks! This was the resolution for me. It wasn't obvious at first, as I have a top-level play calling two plays in the plays/ directory. The domain-join play is the one that uses group_vars in this case.
├── group_vars
│   └── linux
├── plays
│   ├── create-vm.yml
│   └── domain-join.yml
└── provision.yml
I just specified the relative path in the domain-join.yml as you mentioned. Works perfectly.
vars_files:
  - ../group_vars/linux
I know that you can switch between different inventory files using the -i flag, which lets you act on different hosts.
In my case, the hosts to be acted upon change between deployments, so I take the hosts in as --extra-vars and use delegate_to to deploy to them (see below for more details).
I was hoping for a way to switch between files containing environment variables in a similar fashion. For example, let's say I have the following simplified directory structure:
/etc/ansible/
├── ansible.cfg
├── hosts
└── project/
    └── environments/
        ├── dev/
        │   └── vars.yml
        └── prd/
            └── vars.yml
The structure of vars.yml in both environments would be exactly the same, just with the variables having different values due to the differences between environments.
I've found a few places that talk about doing something similar such as these:
https://rock-it.pl/managing-multiple-environments-with-ansible-best-practices/
http://rosstuck.com/multistage-environments-with-ansible
http://www.geedew.com/setting-up-ansible-for-multiple-environment-deployments/
In those guides, they act against statically declared hosts. One thing that looks helpful is the group_vars directories: each file in group_vars appears to be matched to the inventory group of the same name, and those variables are presumably used when the hosts: directive of a play covers the host(s) in that group.
However, since I dynamically read in the servers we're acting against via the CLI flag --extra-vars, I can't take that approach, because my plays will always look something like this:
...
- hosts: localhost
  tasks:
    ...
    - name: do something
      ...
      delegate_to: "{{ item }}"
      with_items: "{{ given_hosts }}"
Or I run a task first that takes the servers and adds them to a new host like this:
- name: Extract Hosts
  hosts: localhost
  tasks:
    - name: Adding given hosts to new group...
      add_host:
        name: "{{ item }}"
        groups: some_group
      with_items:
        - "{{ list_of_hosts | default([]) }}"
and then uses the dynamically created group:
- name: Restart Tomcat for Changes to Take Effect
  hosts: some_group
  tasks:
    - name: Restarting Tomcat...
      service:
        name: tomcat
        state: restarted
So I need to find a way to specify which vars.yml to use. Because I use Jenkins to kick off the Ansible playbook via CLI over SSH, I was hoping for something like the following:
ansible-playbook /path/to/some/playbook.yml --include-vars /etc/ansible/project/dev/vars.yml
At the least, how would I explicitly include a vars.yml file in a playbook to use the variables defined within?
You can use:
extra vars with @ (the @ prefix loads variables from a file): --extra-vars @/etc/ansible/project/dev/vars.yml
or
include_vars:
- include_vars: "/etc/ansible/project/{{ some_env }}/vars.yml"
to load different variables depending on your environment.
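For example, a minimal wrapper play for the include_vars option (the variable name some_env is an assumption; it would be passed in from Jenkins on the command line):

```yaml
# playbook.yml (sketch)
- hosts: localhost
  tasks:
    # Load the environment-specific vars file before any other work
    - include_vars: "/etc/ansible/project/{{ some_env }}/vars.yml"
```

invoked as e.g. ansible-playbook playbook.yml --extra-vars some_env=dev.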