Ansible role_path undefined error

I have a task file that looks like this:
- name: Drop schemas
  mysql_db: state=import name=mysql target={{ role_path }}/files/schemas/drop-imdb-perf.sql login_user={{ MYSQL_ROOT_USER }} login_password={{ MYSQL_ROOT_PWD }} login_host={{ inventory_hostname }}
I'm invoking it from a playbook that looks like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  tasks:
    - include: ../roles/mysql/tasks/drop-perf.yml
I'm using Ansible version 1.9.4 so I'm thinking role_path should be a valid variable.
But when I run the playbook, I get this output:
TASK: [Drop schemas] **************************************************
fatal: [imdb] => One or more undefined variables: 'role_path' is undefined
I can't figure out why role_path is undefined. According to the ansible docs, it seems like for versions 1.8 and above it should be populated with the directory of the role in question, but I'm clearly mistaken about something or other.

I don't see you using any roles. Without looking into the Ansible code, it seems obvious that role_path is defined within a role. Including a file of a role does not make it run in the context of a role, though.
If your include is intentional, role_path won't be defined. You could try to set it yourself together with the include, like so:
tasks:
  - include: ../roles/mysql/tasks/drop-perf.yml
    role_path: ../roles/mysql
That may or may not work, since role_path is still a magic variable and therefore might not be manually changeable.
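Alternatively, you could avoid the magic variable entirely and pass a variable with a name of your own; the name mysql_role_dir below is made up for illustration, and the task file would have to reference it instead of role_path:

tasks:
  - include: ../roles/mysql/tasks/drop-perf.yml mysql_role_dir=../roles/mysql

The task's target would then be {{ mysql_role_dir }}/files/schemas/drop-imdb-perf.sql.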
If you actually meant to include the role, then you need to define your playbook like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  roles:
    - role: ../roles/mysql
But my guess is you're trying to run only a single tasks file of that role, not the whole role. What you're trying to do there seems to go against best practice, though. My recommendation would be to move the tag mysql-data-drop onto the tasks in the file drop-perf.yml, because that's what tags are for: to trigger a limited set of tasks of roles or playbooks. A sketch follows below.
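A minimal sketch of that recommendation, assuming the file stays at roles/mysql/tasks/drop-perf.yml and is wired into the role's main task list (file names taken from the question):

# roles/mysql/tasks/main.yml
- include: drop-perf.yml

# roles/mysql/tasks/drop-perf.yml
- name: Drop schemas
  mysql_db: state=import name=mysql target={{ role_path }}/files/schemas/drop-imdb-perf.sql login_user={{ MYSQL_ROOT_USER }} login_password={{ MYSQL_ROOT_PWD }} login_host={{ inventory_hostname }}
  tags:
    - mysql-data-drop

Running the playbook with --tags mysql-data-drop would then execute just those tasks, and role_path would be defined because the file runs in role context.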

Related

ansible: customize which password manager to use

New to Ansible here. I've been adapting an Ansible playbook to automate the setup of a Raspberry Pi to use Pi-hole as described on this blog and as per this repo.
One change I've made is to use Bitwarden instead of 1Password as the secrets manager (see details below).
But after having done it, and thinking of offering a PR to the repo in question, I realized that my use of Bitwarden is just as hard-coded as the use of 1Password in the original (hardly an improvement).
Is there a way to make the choice of secret manager configurable in an Ansible playbook or setup? Or "call a different function" based on the value of a variable?
A desirable setup would be something along the lines of:
vars:
  secret_manager: bitwarden
  # one of {'1password', 'bitwarden', 'lastpass', 'ansible-vault',
  #         'aws-secrets', 'azure-key-vault', ...}
tasks:
  - name: ...
    ...
    value: "{{ lookup(secret_manager, item_name, field=field_name) }}"
Details
Here is an example of using bitwarden instead of 1password in a playbook.yml:
New:
roles:
  - role: ansible-modules-bitwarden
tasks:
  - name: "test: get 'username' from the secrets manager"
    debug:
      msg: "{{ lookup('bitwarden', 'pi-hole', field='username') }}"
(and note: if the field is a custom one, one must add custom_field=true, whereas with community.general.onepassword this is not needed).
Instead of:
tasks:
  - name: "test: get 'username' from the secrets manager"
    debug:
      msg: "{{ lookup('community.general.onepassword', 'pi-hole', field='username') }}"
Side note and for sake of completeness, in order to make this work I needed to install Bitwarden CLI as well as (one of) the ansible-modules-bitwarden, and then of course log in and get a session token:
export BW_SESSION="$(bw unlock --raw)"
This all works fine, but I am not quite happy: now the use of Bitwarden is hard-coded instead of "1Password". I would like to make this more generic and customizable. What if someone would like to use LastPass or AWS Secrets Manager instead?
Things I've tried
using if-else:
- hosts: all
  vars:
    use_bitwarden: true
    secret_item: pi-hole
  roles:
    - role: ansible-modules-bitwarden
      when: use_bitwarden
  tasks:
    - name: get username from the secrets manager
      debug:
        msg: "{{ lookup('bitwarden', secret_item, field='username') if use_bitwarden \
             else lookup('community.general.onepassword', secret_item, field='username') }}"
but this bloats the playbook, makes it less readable and less DRY. Also, it isn't very extensible to the many other secrets managers out there.
define which manager to use in a variable:
- hosts: all
  vars:
    secret_manager: bitwarden
    # secret_manager: community.general.onepassword
    secret_item: pi-hole
  roles:
    - role: ansible-modules-bitwarden
      when: secret_manager == 'bitwarden'
  tasks:
    - name: get username from the secrets manager
      debug:
        msg: "{{ lookup(secret_manager, secret_item, field='username') }}"
but this doesn't work, as custom fields must be queried with custom_field=true with the Bitwarden plugin, but not with 1Password.
Other ideas (not tried)
define a new generic plugin that uses other existing ones, based on a config variable.
use a task to get all the needed variables from whatever password manager at once and insert them, encrypted, into Ansible's vault. That way the task that does that can be isolated and have conditional execution with when: secret_manager == 'that'. But it makes maintenance of the playbook more difficult.
Maybe there is a much simpler and concise way of accomplishing this?
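One more idea, also untried: drive the lookup from a small map from manager name to lookup plugin, which flattens the if/else chain without writing a new plugin. A sketch - the secret_lookup_plugin map is invented here, and Bitwarden's custom_field=true quirk for custom fields would still need special handling:

- hosts: all
  vars:
    secret_manager: bitwarden    # or '1password', 'lastpass', ...
    secret_item: pi-hole
    secret_lookup_plugin:
      bitwarden: bitwarden
      1password: community.general.onepassword
      lastpass: community.general.lastpass
  tasks:
    - name: get username from the configured secrets manager
      debug:
        msg: "{{ lookup(secret_lookup_plugin[secret_manager], secret_item, field='username') }}"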

Ansible role dependencies and facts with delegate_to

The scenario: I have several services running on several hosts. There is one special service - the reverse proxy/load balancer. Every service needs to configure that special service on the host that runs the rp/lb service. During installation/update/deletion of a random service with an Ansible role, I need to call the reverse proxy role on that specific host to configure the corresponding vhost.
At the moment I call a specific task file in the reverse proxy role with include_role to add or remove a vhost for the service, and set some vars (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
include_role:
name: reverseproxy
tasks_from: vhost_add
apply:
delegate_to: "{{ groups['reverseproxy'][0] }}"
vars:
reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
reverse_proxy_domain: "sub.domain.tld"
I have three problems.
I know it's not a good idea to build such dependencies between roles and different hosts. I don't know a better way, though, especially when you think about the situation where you need to do some extra stuff after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). With two separate playbooks for the "backend" service and the "reverseproxy" service, I'd need a third playbook to configure "hanging" services. Also, I'm not sure if I can retrieve the correct backend URL in the reverse proxy role (just think about the HTTP scheme or paths). That doesn't sound easy, does it?
Earlier I had separate roles for adding/removing vhosts to a reverse proxy. These roles didn't have dependencies, but I needed to duplicate several defaults, templates, vars, etc., which isn't nice either. Then I changed that to a single role. Of course, in my opinion this isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb). That would be more like a playbook - but is calling a parameterized playbook via a role a good idea, or even possible? The main problem is that the state of the reverse proxy is the sum of all services in the inventory.
In the case of that single included role, including it also starts all dependent roles (configure custom, firewall, etc.). Nevertheless, in that case I found out that the delegation did not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the service group. role-a has a task like:
- block:
    - setup:
    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
      delegate_to: "{{ groups['reverseproxy'][0] }}"
      delegate_facts: true  # or false or omit - it has no effect on Ansible 2.9 and 2.10
And a task in role-b that simply outputs my_var from the inventory prints:
TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
  my_var: a
This tells me that role-b, which should run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which would cause several other issues - think about firewall dependencies, etc. I can avoid that by using tags, but then I need to run the playbook not just with the tag of the service but with all the tags I want to configure, and I cannot use include_tasks with args/apply/tags anymore inside a role that also includes other roles (the tags will be applied to all subtasks...). I miss something like include_role restricted to specific tags, or one that ignores dependencies. This isn't a bug, but it has possible side effects in the case of delegate_to.
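A workaround might be to address the delegated host's variables explicitly via hostvars instead of relying on the task context; a minimal, untested sketch:

- name: Gather facts for the reverse proxy host
  setup:
  delegate_to: "{{ groups['reverseproxy'][0] }}"
  delegate_facts: true

- name: Read the other host's variables explicitly
  debug:
    msg: "{{ hostvars[groups['reverseproxy'][0]].my_var }}"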
I'm not really sure what the question is. Perhaps: what is a good way to handle dependencies between hosts and roles in Ansible, especially when they are not on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. This was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost
    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost
    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is. It was v1.8 on AWS hosts, and things may have changed.
But the point is to set user-defined facts on each host for that host's desired state (enabled, disabled, stopped), reload the facts, and then render the Jinja template that uses those facts.
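The template itself was not shown; a rough sketch of the idea, where the fact name ec2_tag_State and the worker naming are assumptions, not taken from the original setup:

# uriworkermap.properties.j2 (sketch)
{% for host in groups['jboss'] %}
{% if hostvars[host].ec2_tag_State | default('enabled') != 'disabled' %}
/app/*=worker_{{ host }}
{% endif %}
{% endfor %}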

Ansible pre-check before run playbook

Is it possible to add a condition that checks, before running a playbook, whether a title, a description, the environment and the versions are present in the playbook?
For example my test.yml playbook:
---
#Apache servers
#Linux
#Ubuntu
#version 2.1.1
#Testing for secure the webserver
tasks:
  xxxxxx
  xxxxxx
And I would like to check that all the comments above are present before running these tasks!
I tried this solution:
- name: run Apachetest playbook
  include: test.yml
  when: "{{ lookup('file', 'test.yml').split('\n')[1] == '#Apache servers' }}"
But it's still not working...
Comments are, well, comments. They do not impact the execution and are just ignored. So there is no way, and actually no real reason, to check if comments are present or not. You would need to implement that yourself.
To check playbooks, roles, etc., there is ansible-lint, which verifies the syntax and some best practices (e.g. whether you use command or shell for something where a module exists), but it does not verify comments (again, checking for comments does not make sense from an execution perspective, as they are ignored).
You want some information to be present in your playbook - that is what I understand. If I were you, I would either create a git hook that verifies the information is present before letting you push the code to your repository, or establish a proper review process, where the reviewer only accepts a merge/pull request if the information is present.
Otherwise, here is the code that will do what you are trying to do:
---
#Apache server
- hosts: all
  tasks:
    - name: set fact
      set_fact:
        pb: "{{ lookup('file', 'test.yml').split('\n')[1] }}"
    - name: check if we found it
      debug:
        msg: 'found'
      when: "'#Apache server' in pb"
You could use an apache role to install Apache like this:
---
- hosts: apache
  sudo: yes
  tasks:
    - name: install apache2
      apt: name=apache2 update_cache=yes state=latest
Have a look here: how-to-install-apache-on-ansible

Ansible - vars are not correctly propagated to handlers when role run in loop

I am asking for help with the problem of deploying multiple versions (different variables) of an app with the same role, run from one playbook.
We have an app with multiple product families, which are different code versions. Each version has a separate uWSGI vassal config and Nginx virtual host config (/api/v2, /api/v3, ...).
The desired state would be to run the playbook and configure the server with all of the specified versions.
Sadly, Ansible's import_role/import_tasks can't be used with with_items, so include_role/include_tasks must be used (a pity, because they do not honor role tags).
The include_role method would not be the biggest problem, but we use handlers to notify a uWSGI touch to reload on a code change, link change, virtualenv change, app config change, and so on.
But when using a loop (with_items), the variables passed from the loop do not propagate correctly to handlers.
I tried these scenarios:
playbook.yml - with_items loop inside the playbook
PROBLEM: Handler is run only for the first iteration of the loop.
#!/usr/bin/env ansible-playbook
# Handler is run only once, from the first notifier
- hosts: localhost
  gather_facts: no
  vars:
    app_root: "/tmp/test_ansible"
    app_versions:
      - app_product_family: 1
        app_release: "v1.0.2"
      - app_product_family: 3
        app_release: "v4.0.7"
  tasks:
    - name: Deploy multiple versions of app
      include_role:
        name: app
      with_items: "{{ app_versions }}"
      loop_control:
        loop_var: app_version
      vars:
        app_product_family: "{{ app_version.app_product_family }}"
        app_release: "{{ app_version.app_release }}"
      tags:
        - app
        - app_debug
playbook_v2.yml - with_items loop inside role task
PROBLEM: Handler is run with the default value from "Defaults"
#!/usr/bin/env ansible-playbook
- hosts: localhost
  gather_facts: no
  roles:
    - app_v2
  vars:
    app_v2_root: "/tmp/test_ansible_v2"
    app_v2_versions:
      - app_v2_product_family: 1
        app_v2_release: "v1.0.2"
      - app_v2_product_family: 3
        app_v2_release: "v4.0.7"
Tasks file roles/app_v2/tasks/main.yml:
---
# Workaround because import_tasks can't be run with_items
- include_tasks: deploy.yml
  when: app_v2_versions
  with_items: "{{ app_v2_versions }}"
  loop_control:
    loop_var: app_v2
  vars:
    app_v2_product_family: "{{ app_v2.app_v2_product_family }}"
    app_v2_release: "{{ app_v2.app_v2_release }}"
  tags:
    - app_v2
    - app_v2_deploy
...
One idea was to write a separate role for each product family, but they share nginx and uWSGI, so it would mean lots of copy-pasting and shared tasks (so tags would not work properly).
For now, I have solved it with a shell script wrapper, but this is not an ideal solution and does not work from Ansible Tower.
Sample repo with tasks to reproduce the problem (tested with Ansible 2.4, 2.5 and 2.6).
Any ideas & recommendations are very welcome.
The order of variable overrides is broken for includes in Ansible. For example, even a set_fact in the included role will be shadowed by role defaults.
See this bug: https://github.com/ansible/ansible/issues/22025
It's closed but not fixed. My advice: use includes and variables really carefully.
In practice I never use role includes with a loop. If you need a loop, include a task list in the loop, and that task list, in turn, may import_role - as sketched below.
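A sketch of that pattern, using the role and variable names from the question (the file name deploy_one_version.yml is made up):

# in the playbook's tasks: loop over versions via a thin task list
- include_tasks: deploy_one_version.yml
  with_items: "{{ app_versions }}"
  loop_control:
    loop_var: app_version

# deploy_one_version.yml: statically import the role with plain vars
- import_role:
    name: app
  vars:
    app_product_family: "{{ app_version.app_product_family }}"
    app_release: "{{ app_version.app_release }}"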
OK, it is a bug, as George Shuklin posted.
I will use my shell wrapper, which reads a group_vars YAML file and then runs the playbook multiple times according to the length of the variable list.
Sadly I have hit multiple annoying bugs in Ansible over the last few weeks, and I'm kind of losing my trust in it ):
And probably everybody is using microservices and Kubernetes anyway, so we need to speed up our migration (:

Is there a better way to conditionally add a user to a group with Ansible?

I have a playbook that provisions a host for use with Rails/rvm/Passenger. I'd like to use the same playbook to set up both test and production.
In testing, the user to add to the rvm group is jenkins; in production it is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block in the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }} shell=/bin/bash groups=rvm append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"
    - name: add passenger user to rvm group when on rails production
      user: name={{ item }} shell=/bin/bash groups=rvm append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
alice.mydomain

[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
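For example, the production counterparts could look like this (the production host name is assumed):

# inventories/production
[web]
bob.mydomain

[production:children]
web

# group_vars/production
rvm_user: passenger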
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars and read variables from there. It will ignore variables in a file or folder not named after your environment, with the exception of a file called all, which is intended for common variables across playbooks.
Good luck - Ansible is an amazing tool :)
