include_role combined with_items for role name - ansible

I want to selectively execute roles based on a list of roles passed to the playbook. However, this fails:
- name: create container definitions for selected services
  include_role:
    name: "{{ item }}"
  with_items: "{{ selected_service_list }}"
with
ERROR! 'item' is undefined
I gather it is impossible to use a list of role names to control which roles are executed. Let me know if you know how to do this.

For current Ansible (as of 2020-07) you can loop over include_role directly, following the example from the Ansible documentation.
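A minimal sketch of that looping form (assuming selected_service_list is defined elsewhere as a list of role names):

- name: create container definitions for selected services
  include_role:
    name: "{{ item }}"
  loop: "{{ selected_service_list }}"

include_role is a dynamic task, so in recent Ansible versions it can be looped like any other task, and item is defined for each role name in the list.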
For Ansible 2.3 you should upgrade, or you can use this approach instead:
- { role: myservice, when: "'myservice' in selected_service_list" }

Related

Ansible import role run conditionally

I am writing a parent Ansible role that runs another role through import_role. The idea is that this sibling role (staticdev.pyenv) only runs when an argument pyenv_python_versions is passed; otherwise it is skipped.
According to the official documentation, I tried the following approach:
parent/tasks/main.yml

---
- name: Install pyenv
  import_role:
    name: staticdev.pyenv
  vars:
    pyenv_owner: "{{ ansible_env.USER }}"
    pyenv_path: "{{ ansible_env.HOME }}/pyenv"
    pyenv_global: "{{ pyenv_global }}"
    pyenv_python_versions: "{{ pyenv_python_versions }}"
    pyenv_virtualenvs: []
  when: pyenv_python_versions
I am currently using Ansible 4.1.0 (core 2.11.1), and when I test it on Debian 11 (image: cisagov/docker-debian11-ansible:latest) it executes the role anyway, even without any value for pyenv_python_versions. The when condition is not being considered, and I also tried with include_role. Complete logs can be found here.
Any idea?
UPDATE: changed the when condition to pyenv_python_versions as suggested by #lonetwin.
The problem was that the role import was replicating variables of the imported role (pyenv_global, pyenv_python_versions and pyenv_virtualenvs); in this case you solve it simply by omitting those imported role params (they will be overwritten if you create new defaults for them).
Solution:
---
- name: Install pyenv
  import_role:
    name: staticdev.pyenv
  vars:
    pyenv_owner: "{{ ansible_env.USER }}"
    pyenv_path: "{{ ansible_env.HOME }}/pyenv"
  when: pyenv_python_versions
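If pyenv_python_versions can be entirely undefined when the parent role runs, a more defensive condition avoids an undefined-variable error. This is only a sketch of an alternative, not part of the original answer:

---
- name: Install pyenv
  import_role:
    name: staticdev.pyenv
  vars:
    pyenv_owner: "{{ ansible_env.USER }}"
    pyenv_path: "{{ ansible_env.HOME }}/pyenv"
  when:
    - pyenv_python_versions is defined
    - pyenv_python_versions | length > 0

Listing several conditions under when combines them with a logical AND.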

Ansible role dependencies and facts with delegate_to

The scenario is: I have several services running on several hosts. There is one special service - the reverse proxy/load balancer. Every service needs to configure that special service on the host that runs the rp/lb service. During installation/update/deletion of any service with an Ansible role, I need to call the ReverseProxy role on that specific host to configure the corresponding vhost.
At the moment I call a specific task file in the reverse proxy role with include_role to add or remove a vhost for the service, and set some vars (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
include_role:
name: reverseproxy
tasks_from: vhost_add
apply:
delegate_to: "{{ groups['reverseproxy'][0] }}"
vars:
reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
reverse_proxy_domain: "sub.domain.tld"
I have three problems.
I know it's not a good idea to build such dependencies between roles and different hosts. I don't know a better way, though, especially if you think about the situation where you need to do some extra work after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). With two separate playbooks for the "backend" service and the "reverseproxy" service, I would then need a third playbook for configuring "hanging" services. Also, I'm not sure whether I could retrieve the correct backend URL inside the reverse proxy role (just think about the HTTP scheme or paths). That doesn't sound easy, does it?
Earlier I had separate roles for adding/removing vhosts on a reverseproxy. Those roles didn't have dependencies, but I needed to duplicate several defaults, templates, vars, etc., which isn't nice either. Then I changed that to a single role. Of course, in my opinion this isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb). That would be something like a playbook - but is calling a parameterized playbook via a role a good idea, or even possible? The main problem is that the state of the reverseproxy is the sum of all services in the inventory.
In the case of that single included role, including it also starts all dependent roles (configure custom, firewall, etc.). Nevertheless, in that case I found out that the delegation did not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the webserver group. role-a has a task like:
- block:
    - setup:

    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
      delegate_to: "{{ groups['reverseproxy'][0] }}"
      delegate_facts: true # or false or omit - it has no effect on Ansible 2.9 and 2.10
And in role-b, simply outputting my_var from the inventory gives:
TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
  my_var: a
This tells me that role-b, which should be run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which would bring several other issues - think of firewall dependencies etc. I can avoid that by using tags - but then I need to run the playbook not just with the tag of the service, but with all the tags I want to configure, and I can no longer use include_tasks with apply tags inside a role that also includes other roles (the tags will be applied to all subtasks...). I miss something like include_role restricted to specific tags, or one that ignores dependencies. This isn't a bug, but it has possible side effects in combination with delegate_to.
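One workaround, not from the original question, is to read the delegated host's inventory data explicitly through hostvars inside role-b; a minimal sketch:

- name: Show my_var of the reverseproxy host
  debug:
    msg: "{{ hostvars[groups['reverseproxy'][0]]['my_var'] }}"

This prints b regardless of which host the delegation started from, because hostvars is indexed by the target host's inventory name.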
I'm not really sure what the question is. The question is: what is a good way to handle dependencies between hosts and roles in Ansible - especially when they are not on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. So this was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost

    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost

    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is. It was v1.8 on AWS hosts, and things may have changed.
But the point is to set user-defined facts on each host for that host's desired state (enabled, disabled, stopped), reload the facts, and then render the Jinja template that uses those facts.
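Translated to the reverse proxy scenario above, the same pattern could look roughly like this; the play layout and the vhosts.conf.j2 template name are assumptions, not from the original answer:

# play 1: each backend host records the vhost it wants on the reverse proxy
- hosts: service
  tasks:
    - name: Record desired vhost state as a host fact
      set_fact:
        reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
        reverse_proxy_state: present

# play 2: the reverse proxy renders one config from the facts of all backends
- hosts: reverseproxy
  tasks:
    - name: Render vhost config from all service hosts
      template:
        src: vhosts.conf.j2   # hypothetical template iterating over groups['service'] via hostvars
        dest: /etc/reverseproxy/vhosts.conf

Facts set with set_fact survive across plays in the same run, so the second play can read them from hostvars without any delegation.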

ansible share variables between hosts during run time

I am using Ansible 2.5.4 and I need to share variables between hosts.
I tried many examples that I saw online (sharing with set_fact or using a dummy host) and none of them work.
Maybe I am doing something differently.
This is my playbook:
---
- hosts: master[0]
  tasks:
    - name: generate kubernetes BootrapToken
      command: kubeadm token generate
      register: generate_token_result

    - set_fact: token="{{ generate_token_result }}"
- hosts: new # requires creating new group in inventory.cfg named new
  tasks:
    - name: include docker-host role
      include_role:
        name: docker-host
      when: not skip_nodes_setup

    - name: include kubernetes-host role
      include_role:
        name: kubernetes-host
      when: not skip_nodes_setup

    - name: include kubernetes-operator role
      include_role:
        name: kubernetes-operator
      when: not skip_nodes_setup

    - name: join node to kubernetes cluster
      command: "kubeadm join --token {{ hostvars['master[0]']['token']['stdout'] }} --discovery-token-unsafe-skip-ca-verification {{ hostvars['kubernetes_machines']['kube_apiserver'] }}"
I am getting the following error:
The task includes an option with an undefined variable. The error was: "hostvars['master[0]']" is undefined
The first task is able to run on master[0], but the second task does not recognize that host.
Please help.
Thanks.
Adding the inventory.cfg:
[kubernetes_machines:vars]
kube_apiserver=10.82.72.54:6443
[kubernetes_machines:children]
masters
nodes
new
[masters]
srv12
[nodes]
srv13
[new]
prd4
If you ask for hostvars['master[0]'], you've got the entire master[0] inside quotes, so you're referring to a host with the literal name master[0]. If you mean the first member of the masters host group, you need a variable reference, not a string, and you'll need to use the groups variable (and remember your host group is named masters, not master):
hostvars[groups.masters.0]
You can find relevant documentation here.
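Applied to the join task from the question, that reference would look roughly like this; it is a sketch that assumes the first play actually targets the first member of masters (so the token fact exists there) and that kube_apiserver is readable directly as a group variable of kubernetes_machines:

- name: join node to kubernetes cluster
  command: >-
    kubeadm join
    --token {{ hostvars[groups['masters'][0]]['token']['stdout'] }}
    --discovery-token-unsafe-skip-ca-verification
    {{ kube_apiserver }}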
Quoting from Playbook Basics
The hosts line is a list of one or more groups or host patterns
The pattern master[0] doesn't match a hostname master[0]. If the hostname were master0, then the hostvars reference should be
hostvars['master0']
It's not clear why hosts: master[0] works; it should not, according to the documentation. hosts: master.0, which should be equivalent, doesn't work.
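If the intent was the first member of the masters group, the first play could use the masters[0] pattern instead; a sketch based on the playbook in the question:

- hosts: masters[0]
  tasks:
    - name: generate kubernetes bootstrap token
      command: kubeadm token generate
      register: generate_token_result

    - set_fact: token="{{ generate_token_result }}"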

Does Ansible duplicate role variables for role dependencies that enable "allow_duplicates"?

Does Ansible duplicate role variables for role dependencies that have enabled allow_duplicates?
For example, given a playbook that includes the role application-environment (which allows duplicates) more than once, will Ansible create multiple copies of its variables?
meta/main.yml:

---
allow_duplicates: yes
dependencies:
  - src: git+http://javasource/git/ansible/roles/organization
    version: 1.1.0

vars/main.yml:

---
application_directory: "{{ organization.directory }}/{{ application_name }}"
application_component_directory: "{{ application_directory }}/{{ application_component_name }}"
If Ansible does not create multiple copies of these variables, how could I rework the role so that it can support multiple variables?
You may find some helpful information here:
About vars:
Anything in the vars directory of the role overrides previous versions of that variable in namespace.
About defaults:
Tasks in each role will see their own role’s defaults. Tasks defined outside of a role will see the last role’s defaults.
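Given that vars/ entries share a single namespace, one way to rework the role is to compute the paths from parameters passed at each inclusion, so every duplicate gets its own values. A sketch, with hypothetical application names, not from the original answer:

- hosts: appservers
  roles:
    - role: application-environment
      application_name: billing
      application_component_name: web
    - role: application-environment
      application_name: billing
      application_component_name: worker

Since the expressions in vars/main.yml are templated lazily, each inclusion should resolve application_directory and application_component_directory against its own parameters.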

Ansible role_path undefined error

I have a task file that looks like this:
- name: Drop schemas
  mysql_db: state=import name=mysql target={{ role_path }}/files/schemas/drop-imdb-perf.sql login_user={{ MYSQL_ROOT_USER }} login_password={{ MYSQL_ROOT_PWD }} login_host={{ inventory_hostname }}
I'm invoking it from a playbook that looks like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  tasks:
    - include: ../roles/mysql/tasks/drop-perf.yml
I'm using Ansible version 1.9.4 so I'm thinking role_path should be a valid variable.
But when I run the playbook, I get this output:
TASK: [Drop schemas] **************************************************
fatal: [imdb] => One or more undefined variables: 'role_path' is undefined
I can't figure out why role_path is undefined. According to the ansible docs, it seems like for versions 1.8 and above it should be populated with the directory of the role in question, but I'm clearly mistaken about something or other.
I don't see you using any roles. Without looking into the Ansible code, it seems obvious that role_path is only defined within a role. Including a file of a role does not make it run in the context of a role, though.
If your include is intentional, role_path won't be defined. You could try to set it yourself together with the include, like so:
tasks:
  - include: ../roles/mysql/tasks/drop-perf.yml
    role_path: ../roles/mysql
That might work or might not, since role_path is still a magic variable and therefore might not be manually changeable.
If you actually meant to include the role, then you need to define your playbook like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  roles:
    - role: ../roles/mysql
But my guess is that you're trying to run only a single task file of that role and not the whole role. What you're trying to do there seems to go against best practice. My recommendation would be to move the tag mysql-data-drop into the tasks of the file drop-perf.yml, because that's what tags are for: to trigger a limited set of tasks of roles or playbooks.
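Following that recommendation, drop-perf.yml could carry the tag itself, roughly like this (a sketch built from the task in the question):

# roles/mysql/tasks/drop-perf.yml
- name: Drop schemas
  mysql_db: state=import name=mysql target={{ role_path }}/files/schemas/drop-imdb-perf.sql login_user={{ MYSQL_ROOT_USER }} login_password={{ MYSQL_ROOT_PWD }} login_host={{ inventory_hostname }}
  tags:
    - mysql-data-drop

With the role applied through the roles: section, role_path is defined, and running the playbook with --tags mysql-data-drop executes only this task.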
