I am using Ansible 2.5.4 and I need to share variables between hosts.
I tried many examples that I saw online (sharing with set_fact or using a dummy host) and none of them work.
Maybe I am doing something differently.
This is my playbook:
---
- hosts: master[0]
  tasks:
    - name: generate kubernetes BootstrapToken
      command: kubeadm token generate
      register: generate_token_result

    - set_fact: token="{{ generate_token_result }}"

- hosts: new  # requires creating a new group in inventory.cfg named new
  tasks:
    - name: include docker-host role
      include_role:
        name: docker-host
      when: not skip_nodes_setup

    - name: include kubernetes-host role
      include_role:
        name: kubernetes-host
      when: not skip_nodes_setup

    - name: include kubernetes-operator role
      include_role:
        name: kubernetes-operator
      when: not skip_nodes_setup

    - name: join node to kubernetes cluster
      command: "kubeadm join --token {{ hostvars['master[0]']['token']['stdout'] }} --discovery-token-unsafe-skip-ca-verification {{ hostvars['kubernetes_machines']['kube_apiserver'] }}"
I am getting the following error:
The task includes an option with an undefined variable. The error was: "hostvars['master[0]']" is undefined
The first play is able to run on master[0], but the second play does not recognize that host.
Please help, thanks.
Adding the inventory.cfg:
[kubernetes_machines:vars]
kube_apiserver=10.82.72.54:6443

[kubernetes_machines:children]
masters
nodes
new

[masters]
srv12

[nodes]
srv13

[new]
prd4
If you ask for "hostvars['master[0]']", you've put the entire master[0] inside quotes, so you're referring to a host with the literal name master[0]. If you mean the first member of the masters host group, you need a variable reference, not a string, and you'll need to use the groups variable (and remember that your host group is named masters, not master):
hostvars[groups.masters.0]
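Applied to the playbook above, the join task would look something like this (a sketch, untested against the question's setup; note that kube_apiserver is a group variable on kubernetes_machines, so the node can reference it directly instead of going through hostvars):

- name: join node to kubernetes cluster
  command: "kubeadm join --token {{ hostvars[groups['masters'][0]]['token']['stdout'] }} --discovery-token-unsafe-skip-ca-verification {{ kube_apiserver }}"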
You can find relevant documentation here.
Quoting from Playbook Basics:
The hosts line is a list of one or more groups or host patterns
The pattern master[0] doesn't match a hostname master[0]. If the hostname were master0, then the hostvars reference should be
hostvars['master0']
It's not clear why hosts: master[0] works; according to the documentation it should not. hosts: master.0, which should be equivalent, doesn't work.
I have generic tasks, like updating DNS records based on my machine -> server mapping, which I use in my playbooks with include_tasks like so:

- name: (include) Update DNS
  include_tasks: task_update_dns.yml

I have many of these "generic tasks" simply acting as "shared tasks" across many playbooks.
But in some cases I just want to run one of these generic tasks on a server, and running the following gives the error below:

ansible-playbook playbooks/task_update_dns.yml --limit $myserver

# ERROR
ERROR! 'set_fact' is not a valid attribute for a Play

simply because it's not a "playbook"; it's just a list of "tasks".
File: playbooks/task_update_dns.yml

---
- name: dig mydomain.com
  set_fact:
    my_hosts: '{{ lookup("dig", "mydomain.com").split(",") }}'
  tags: always

- name: Set entries
  blockinfile:
    ....
I know I can write an empty playbook that only includes the task file, but I don't want to create a shallow playbook for each task.
Is there a way to structure the task file so that I can run it both via include_tasks and as a standalone play from the command line?
Have you tried using custom tags for those tasks?
Create a playbook with all the tasks:

---
- name: Update web servers
  hosts: webservers
  remote_user: root

  tasks:
    - name: dig mydomain.com
      set_fact:
        my_hosts: '{{ lookup("dig", "mydomain.com").split(",") }}'
      tags: always

    - name: Set entries
      blockinfile:
      tags:
        - set_entries

    - name: (include) Update DNS
      include_tasks: task_update_dns_2.yml
      tags:
        - dns
...
When you need to run only that specific task, you just need to add the --tags parameter to the command line with the tag name:

ansible-playbook playbooks/task_update_dns.yml --limit $myserver --tags set_entries
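For completeness, the shallow wrapper playbook the question mentions would only be a few lines anyway - a sketch, with a hypothetical file name:

# playbooks/update_dns_standalone.yml - thin wrapper so task_update_dns.yml stays include-able
---
- hosts: all
  tasks:
    - name: (include) Update DNS
      include_tasks: task_update_dns.yml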
The scenario is: I have several services running on several hosts. There is one special service - the reverse proxy/load balancer. Every service needs to configure that special service on the host that runs the rp/lb service. During installation/update/deletion of a random service with an Ansible role, I need to call the reverse proxy role on that specific host to configure the corresponding vhost.
At the moment I call a specific task file in the reverse proxy role to add or remove a vhost, using include_role and setting some vars (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
include_role:
name: reverseproxy
tasks_from: vhost_add
apply:
delegate_to: "{{ groups['reverseproxy'][0] }}"
vars:
reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
reverse_proxy_domain: "sub.domain.tld"
I have three problems.
I know it's not a good idea to build such dependencies between roles and different hosts. I don't know a better way, though, especially when you consider situations where you need to do extra work after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). With two separate playbooks - one for the "backend" service and one for the "reverseproxy" service - I would need a third playbook for configuring "hanging" services. I'm also not sure I could retrieve the correct backend URL in the reverse proxy role (just think about the HTTP scheme or paths). That doesn't sound easy, does it?
Earlier I had separate roles for adding/removing vhosts to a reverse proxy. Those roles had no dependencies, but I needed to duplicate several defaults, templates, vars, etc., which isn't nice either. Then I changed that to a single role. Of course, in my opinion this isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb). That would be more like a playbook - but is calling a parameterized playbook via a role a good idea, or even possible? The main problem is that the state of the reverse proxy is the sum of all services in the inventory.
With that single included role, including it also starts all dependent roles (configure custom, firewall, etc.). Nevertheless, in that case I found out that the delegation did not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the webserver group. role-a has a task like:
- block:
    - setup:

    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
  delegate_to: "{{ groups['reverseproxy'][0] }}"
  delegate_facts: true  # or false or omit - it has no effect on Ansible 2.9 and 2.10
And role-b, which does nothing but output my_var from the inventory, prints:

TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
  my_var: a
This tells me that role-b, which should run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which would have several other issues - think about firewall dependencies, etc. I can avoid that by using tags, but then I need to run the playbook not just with the tag of the service but with all the tags I want to configure, and I cannot use include_tasks with apply/tags anymore inside a role that also includes other roles (the tags would be applied to all subtasks). I'm missing something like include_role that runs only specific tags, or ignores dependencies. This isn't a bug, but it has possible side effects in the case of delegate_to.
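For what it's worth, I can still reach the delegated host's inventory vars explicitly through hostvars - a sketch of what role-b could do instead:

- name: "My_Var of the delegated host, read explicitly"
  debug:
    msg: "{{ hostvars[groups['reverseproxy'][0]]['my_var'] }}"

But that only covers inventory vars; gathered facts for server2 are only in hostvars if something actually ran setup against it.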
I'm not really sure what the question is. Perhaps: what is a good way to handle dependencies between hosts and roles in Ansible, especially when they are not on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. This was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost

    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost

    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is; it was Ansible 1.8 on AWS hosts, and things may have changed.
But the point is to set user-defined facts on each host for that host's desired state (enabled, disabled, stopped), reload the facts, and then render a Jinja template that uses those facts.
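The template itself isn't shown above, but it would iterate over the hosts' facts along these lines - purely illustrative, with a hypothetical ec2_tag_State fact and worker naming:

{# ./J2/uriworkermap.properties.j2 - illustrative sketch #}
{% for host in groups['jboss'] %}
{% if hostvars[host].ec2_tag_State | default('enabled') != 'disabled' %}
/app/*=worker_{{ host }}
{% endif %}
{% endfor %}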
I am writing a playbook where I need to select hosts that are part of a group whose name starts with "hadoop". The host will be supplied as an extra variable (the parent group). The task is to upgrade Java on all machines from the repo, but there are certain servers that don't have the repo configured or are in a DMZ and can only use their local repo. I need to set local_rpm: true so that, when the playbook executes, the servers that belong to the hadoop group have this fact enabled.
I tried the following:
- hosts: '{{ target }}'
  gather_facts: no
  become: true
  tasks:
    - name: enable local rpm
      set_fact:
        local_rpm: true
      when: "'hadoop' in group_names"
      tags: always
and then I import my role based on the tag.
It's probably better to use group_vars in this case.
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#group-variables
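A minimal sketch, assuming the inventory group is literally named hadoop:

# group_vars/hadoop.yml
local_rpm: true

Every host in the hadoop group then picks up local_rpm: true automatically, with no set_fact task needed; other hosts can fall back to a default defined in group_vars/all.yml.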
I have a playbook that provisions a host for use with Rails/rvm/passenger. I'd like to use the same playbook to set up both test and production.
In testing, the user to add to the rvm group is jenkins. The one in production is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block in the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"

    - name: add passenger user to rvm group when on rails production
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
bob.mydomain
[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
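For production, the counterpart files would follow the same pattern (host and user names taken from the question):

# inventories/production
[web]
alice.mydomain

[production:children]
web

# group_vars/production
rvm_user: passenger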
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a testing file in group_vars, reading variables from there. It will ignore variables in files or folders not named after your environment, with the exception of a file called all, which is intended for common variables across playbooks.
Good luck - Ansible is an amazing tool :)
I am trying to set up a very hacky VM deployment environment utilizing Ansible 1.9.4 for my home office.
I am pretty far along, but the last step just won't work. I have written a 10-LOC plugin which generates temporary DNS names for the staging VLAN, and I want to pass that value as a host to the next playbook role.
TASK: [localhost-eval | debug msg="{{ hostvars['127.0.0.1'].dns_record.stdout_lines }}"] ***
ok: [127.0.0.1] => {
    "msg": "['vm103.local', 'vm-tmp103.local']"
}
It's accessible in the global playbook scope via the hostvars:
{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}
and should be passed on to:
- name: configure vm template
  hosts: "{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  gather_facts: no
  roles:
    - template-vm-configure
which results in:
PLAY [configure vm template] **************************************************
skipping: no hosts matched
My inventory looks like this and seems to work; hardcoding 'vm-tmp103.local' gets the role running.
[vm]
vm[001:200].local

[vm-tmp]
vm-tmp[001:200].local
Thank you in advance; hopefully someone can point me in the right direction. Plan B would consist of passing the DNS records to a bash script for finalizing the configuration, since I just want to configure the network interface before adding the VM to the monitoring.
Edit:
I modified the play to use add_host and add the host to a temporary group:

- add_host: name={{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }} groups=just_vm

but it still doesn't match. That's not intended behaviour, I think.
#9733 led me to a solution.
Adding this task lets me use staging as a group:

tasks:
  - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  - add_host: group=staging name='{{ hostname_next }}'
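A later play can then target that temporary group - for example:

- name: configure vm template
  hosts: staging
  gather_facts: no
  roles:
    - template-vm-configure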