Unable to set environment variables for use in ansible roles - ansible

I have a playbook that runs fine when the environment variables and tasks are defined in one single playbook, without roles.
But when I structure the project into roles, the tasks no longer find the environment variables that are set in the original playbook.
Any hint on how to set environment variables so they are available to all roles inside a playbook?
Do I need to specify the environment variables in the tasks/main.yaml file? If yes, how exactly should I do this?
cat playbook.yaml
-
  name: Deploy Team Services Playbook
  hosts: all
  environment:
    PATH: "{{ ansible_env.PATH }}:/usr/local/bin"
    KUBECONFIG: "{{ ansible_env.HOME }}/.kube/config/{{ ansible_env.USER }}.kubeconfig"
  roles:
    - prereq1_setup
    - prereq2_k8s
roles/prereq1_setup/tasks/main.yaml
- name: "Validate kubeconfig set?"
shell: echo {{ ansible_env.KUBECONFIG }}
failed_when: "'KUBECONFIG' not in ansible_env"
The above works if I don't use roles and add the tasks directly to the playbook. Currently, I am getting this error:
output:
TASK [prereq1_setup : Validate kubeconfig set?] *****************************************************
fatal: [target1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'KUBECONFIG'\n\nThe error appears to be in '/Users/testu/ansible/ansible-team/team_deploy/roles/prereq1_setup/tasks/main.yaml': line 57, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: \"Validate kubeconfig set?\"\n ^ here\n"}

Any hint how to set env variables so they are available for all roles inside a playbook?
The mechanism you are using is correct, and that environment variable is being correctly set, but it is set in the environment, not in the Ansible facts. Those facts are gathered before the playbook starts, so your environment: takes effect after fact gathering, which explains why ansible_env does not contain it.
You have a few paths forward, depending on what you prefer:
Explicitly re-gather facts inside the playbook (or even change your playbook to gather_facts: no and invoke setup: manually)
Stop looking for the environment variable in ansible_env, trusting that it is actually there, and just use the commands that need it
Explicitly declare a separate fact to make that variable available to both the environment: and to the ansible tasks
If you want the first one, it would look like:
-
  name: Deploy Team Services Playbook
  hosts: all
  gather_facts: no
  environment:
    whatever: goes here
  pre_tasks:
    - setup:
  roles:
    - and so forth
You can confirm the second via:
- name: ensure $KUBECONFIG is set
  shell: echo $KUBECONFIG
And the third would look like:
- hosts: all
  environment:
    alpha: beta
  vars:
    alpha: beta
  roles:
    - # now {{ alpha }} is available to ansible and as $alpha in command/shell tasks
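Applied to the question's playbook, a minimal sketch of that third approach might look like the following (the variable name kubeconfig_path is an assumption, not from the original); the role task can then reference {{ kubeconfig_path }} directly instead of ansible_env.KUBECONFIG:

-
  name: Deploy Team Services Playbook
  hosts: all
  vars:
    # kubeconfig_path is a regular Ansible variable, so it is visible to all roles in this play
    kubeconfig_path: "{{ ansible_env.HOME }}/.kube/config/{{ ansible_env.USER }}.kubeconfig"
  environment:
    # it is also exported as $KUBECONFIG for shell/command tasks
    KUBECONFIG: "{{ kubeconfig_path }}"
  roles:
    - prereq1_setup
    - prereq2_k8s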

Related

ansible share variables between hosts during run time

I am using Ansible 2.5.4 and I need to share variables between hosts.
I tried many examples that I saw online (sharing with set_fact or using a dummy host) and none of them work.
Maybe I am doing something different.
This is my playbook:
---
- hosts: master[0]
  tasks:
    - name: generate kubernetes BootstrapToken
      command: kubeadm token generate
      register: generate_token_result
    - set_fact: token="{{ generate_token_result }}"
- hosts: new # requires creating new group in inventory.cfg named new
  tasks:
    - name: include docker-host role
      include_role:
        name: docker-host
      when: not skip_nodes_setup
    - name: include kubernetes-host role
      include_role:
        name: kubernetes-host
      when: not skip_nodes_setup
    - name: include kubernetes-operator role
      include_role:
        name: kubernetes-operator
      when: not skip_nodes_setup
    - name: join node to kubernetes cluster
      command: "kubeadm join --token {{ hostvars['master[0]']['token']['stdout'] }} --discovery-token-unsafe-skip-ca-verification {{ hostvars['kubernetes_machines']['kube_apiserver'] }}"
I am getting the following error:
The task includes an option with an undefined variable. The error was: "hostvars['master[0]']" is undefined
The first task is able to run on master[0], but the second task does not recognize that host.
Please help, thanks.
Adding the inventory.cfg:
[kubernetes_machines:vars]
kube_apiserver=10.82.72.54:6443

[kubernetes_machines:children]
masters
nodes
new

[masters]
srv12

[nodes]
srv13

[new]
prd4
If you ask for "hostvars['master[0]']", you've got the entire master[0] inside quotes, so you're referring to a host with the literal name master[0]. If you mean the first member of the masters hostgroup, you need a variable reference, not a string, and you'll need to use the groups variable (and remember that your hostgroup is named masters, not master):
hostvars[groups.masters.0]
You can find relevant documentation here.
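Applied to the join task from the question, the corrected reference might look like this sketch (it also reads kube_apiserver directly, since group vars from kubernetes_machines apply to the target host itself rather than living under a hostvars key named after the group):

- name: join node to kubernetes cluster
  command: "kubeadm join --token {{ hostvars[groups.masters.0].token.stdout }} --discovery-token-unsafe-skip-ca-verification {{ kube_apiserver }}"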
Quoting from Playbook Basics
The hosts line is a list of one or more groups or host patterns
Pattern master[0] doesn't match hostname master[0]. If the hostname is master0 then the hostvars reference should be
hostvars['master0']
It's not clear why hosts: master[0] works; according to the documentation it should not. hosts: master.0, which should be the same, doesn't work.
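For reference, subscripting a group in a pattern is valid when the group name matches the inventory, so a corrected first play (using the masters group from the posted inventory) might look like:

- hosts: masters[0]
  tasks:
    - name: generate kubernetes BootstrapToken
      command: kubeadm token generate
      register: generate_token_result
    - set_fact:
        token: "{{ generate_token_result }}"

The second play can then reference the fact as hostvars[groups.masters.0].token.stdout.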

Dynamically set HTTP_PROXY in an ansible playbook

I'm running a playbook either on a bunch of servers that have no need of http_proxy or on others that need it (on different runs).
I've read https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html but it doesn't really answer this...
Here's an example:
- hosts: all
  tasks:
    - name: install vi
      become: true
      apt:
        name: vi
        state: present
I would like to launch it for a group of servers (let's say server01-atlanta) without a proxy, and in another run for a group of servers (let's say server01-berlin) with a proxy, without changing the code between runs (so managing it with inventory variables).
You can solve this with group_vars / host_vars in combination with environment variables. Here is a simple example based on the code from the Ansible docs.
---
- hosts: all
  vars:
    proxy: # default empty
  tasks:
    - apt: name=cobbler state=installed
      environment:
        http_proxy: "{{ proxy }}"
This is how you define an environment variable per task. You can also use normal Ansible variables for this. There is also an example with proxy settings and variables in the docs. See: https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
In your inventory you can define the proxy variable per host or group:
atlanta:
  hosts:
    host1:
    host2:
  vars:
    proxy: proxy.atlanta.example.com
See inventory docs for more details: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#host-variables
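For the two groups from the question, a sketch of the corresponding group_vars files could look like this (the file names are hypothetical and assume inventory groups for the Atlanta and Berlin servers):

# group_vars/berlin.yml (hypothetical)
proxy: proxy.berlin.example.com

# group_vars/atlanta.yml (hypothetical)
proxy: ""    # empty, matching the play's default, so no proxy is effectively applied

The same playbook then runs unchanged against either group; the inventory alone decides whether a proxy is used.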
For anyone wondering, you can also set this via the command line with the following:
ansible-playbook --extra-vars "http_proxy=$http_proxy" ...
As found here

Ansible - vars are not correctly propagated to handlers when role run in loop

I am asking for help with the problem of deploying multiple versions (different variables) of an app with the same role, run from one playbook.
We have an app with multiple product families, which are different code versions. Each version has a separate uWSGI vassal config and Nginx virtual host config (/api/v2, /api/v3, ...).
The desired state would be to run the playbook and configure the server with all the specified versions.
Sadly, Ansible's import_role/import_tasks can't be used with with_items, so include_role/include_tasks must be used (a pity, because they do not honor role tags).
The include_role method would not be the biggest problem, but we use handlers to notify a uWSGI touch to reload (on a code change, link change, virtualenv change, app_config change, ...).
But when using a loop (with_items), the variables passed from the loop do not correctly propagate to handlers.
I tried these scenarios:
playbook.yml - with_items loop inside the playbook
PROBLEM: Handler is run only for the first iteration of the loop.
#!/usr/bin/env ansible-playbook
# Handler is run only once, from the first notifier
- hosts: localhost
  gather_facts: no
  vars:
    app_root: "/tmp/test_ansible"
    app_versions:
      - app_product_family: 1
        app_release: "v1.0.2"
      - app_product_family: 3
        app_release: "v4.0.7"
  tasks:
    - name: Deploy multiple versions of app
      include_role:
        name: app
      with_items: "{{ app_versions }}"
      loop_control:
        loop_var: app_version
      vars:
        app_product_family: "{{ app_version.app_product_family }}"
        app_release: "{{ app_version.app_release }}"
      tags:
        - app
        - app_debug
playbook_v2.yml - with_items loop inside a role task
PROBLEM: Handler is run with the default value from the role defaults
#!/usr/bin/env ansible-playbook
- hosts: localhost
  gather_facts: no
  roles:
    - app_v2
  vars:
    app_v2_root: "/tmp/test_ansible_v2"
    app_v2_versions:
      - app_v2_product_family: 1
        app_v2_release: "v1.0.2"
      - app_v2_product_family: 3
        app_v2_release: "v4.0.7"
Tasks roles/app_v2/tasks/main.yml
---
# Workaround because import_tasks can't be run with with_items
- include_tasks: deploy.yml
  when: app_v2_versions
  with_items: "{{ app_v2_versions }}"
  loop_control:
    loop_var: app_v2
  vars:
    app_v2_product_family: "{{ app_v2.app_v2_product_family }}"
    app_v2_release: "{{ app_v2.app_v2_release }}"
  tags:
    - app_v2
    - app_v2_deploy
...
One idea was to write a separate role for each product family, but they share nginx and uWSGI, so that would mean lots of copy-pasting and shared tasks (so tags would not work properly).
For now, I solved it with a shell script wrapper, but this is not an ideal solution and does not work from Ansible Tower.
Sample repo with tasks to reproduce the problem (tested with Ansible 2.4, 2.5, 2.6).
Any ideas & recommendations are very welcome.
The order of overrides for variables is broken for includes in Ansible. For example, even set_fact in the included role will be shadowed by role defaults.
See this bug: https://github.com/ansible/ansible/issues/22025
It's closed but not fixed. My advice: use includes and variables really carefully.
In practice I never use role includes with a loop. If you need a loop, include a task list in the loop (and that task list, in turn, may use import_role), as sketched below.
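A minimal sketch of that pattern, assuming a wrapper file named deploy_one_version.yml and reusing the variable names from the question:

# playbook.yml
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Deploy each app version via an included task list
      include_tasks: deploy_one_version.yml
      with_items: "{{ app_versions }}"
      loop_control:
        loop_var: app_version

# deploy_one_version.yml (hypothetical wrapper task list)
- name: Run the app role for one version
  import_role:
    name: app
  vars:
    app_product_family: "{{ app_version.app_product_family }}"
    app_release: "{{ app_version.app_release }}"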
OK, it is a bug, as George Shuklin posted.
I will use my shell wrapper, which reads the group_vars YAML and then runs the playbook multiple times according to the length of the variable list.
Sadly I hit multiple annoying bugs in Ansible in the last few weeks, kinda losing my trust in it ):
And probably everybody is using microservices and kubernetes, so we need to speed up our migration (:

Is there a better way to conditionally add a user to a group with Ansible?

I have a playbook that provisions a host for use with Rails/rvm/passenger. I'd like to use the same playbook to set up both testing and production.
In testing, the user to add to the rvm group is jenkins. The one in production is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block in the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"

    - name: add passenger user to rvm group when on rails production
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
alice.mydomain

[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
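For example, the production counterpart could look like this sketch (host and file names are assumptions, mirroring the testing layout):

# inventories/production (hypothetical)
[web]
bob.mydomain

[production:children]
web

# group_vars/production (hypothetical)
rvm_user: passenger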
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars, reading variables from there. It will ignore group_vars files or folders not named after a group in your current inventory, with the exception of a file called all, which is intended for common variables across playbooks.
Good luck - Ansible is an amazing tool :)

Ansible role_path undefined error

I have a task file that looks like this:
- name: Drop schemas
  mysql_db: state=import name=mysql target={{ role_path }}/files/schemas/drop-imdb-perf.sql login_user={{ MYSQL_ROOT_USER }} login_password={{ MYSQL_ROOT_PWD }} login_host={{ inventory_hostname }}
I'm invoking it from a playbook that looks like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  tasks:
    - include: ../roles/mysql/tasks/drop-perf.yml
I'm using Ansible version 1.9.4 so I'm thinking role_path should be a valid variable.
But when I run the playbook, I get this output:
TASK: [Drop schemas] **************************************************
fatal: [imdb] => One or more undefined variables: 'role_path' is undefined
I can't figure out why role_path is undefined. According to the ansible docs, it seems like for versions 1.8 and above it should be populated with the directory of the role in question, but I'm clearly mistaken about something or other.
I don't see you using any roles. Without looking into the Ansible code, it seems obvious that role_path is only defined within a role. Including a file of a role does not make it run in the context of a role, though.
If your include is intentional, role_path won't be defined. You could try to set it yourself together with the include, like so:
tasks:
  - include: ../roles/mysql/tasks/drop-perf.yml
    role_path: ../roles/mysql
That might or might not work, since role_path is a magic variable and therefore might not be manually changeable.
If you actually meant to include the role, then you need to define your playbook like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  roles:
    - role: ../roles/mysql
But my guess is that you're trying to run only a single tasks file of that role and not the whole role. What you're trying to do there seems to be against best practice. My recommendation would be to move the tag mysql-data-drop onto the tasks in drop-perf.yml, because that's what tags are for: to trigger a limited set of tasks of roles or playbooks.
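A sketch of that recommendation, assuming drop-perf.yml is wired into the role's main task list (using the 1.9-era include syntax from the question):

# roles/mysql/tasks/main.yml
- include: drop-perf.yml
  tags:
    - mysql-data-drop

# playbook
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  roles:
    - mysql

Running ansible-playbook site.yml --tags mysql-data-drop would then execute only the tagged tasks, and role_path resolves because the tasks run in role context.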
