Variable is defined but still getting undefined error - ansible

I am trying to write a playbook that completes some of its tasks on the machine that the playbook is running on. I know I can use local_action for this, but for now I am just testing whether the playbook works; eventually I will need to use delegate_to. I have defined the variable and I am using delegate_to with that variable, but I am getting this error: fatal: [target node]: FAILED! => {"msg": "'variablename' is undefined"}. Below is my playbook:
- name: Create certs
  gather_facts: true
  vars:
    master: "{{ nameofhost }}"
  tasks:
    - name: Run command
      shell: Command to run
      delegate_to: "{{ master }}"

You need to target your play at a group of hosts from your inventory:
- name: Create certs
  gather_facts: true
  hosts: target_hosts
  vars:
    master: "{{ nameofhost }}"
  tasks:
    - name: Run command
      shell: Command to run
      delegate_to: "{{ master }}"
Your inventory file may look like this:
[target_hosts]
master ansible_host=your_master_dns_or_ip
Ansible then targets that inventory group, and the delegation narrows the task scope to the master host. Or you can simply use localhost as the delegation target.
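If you only need the task to run on the control node, a minimal sketch of the localhost variant (same play as above, only the delegation target changes) would be:
- name: Create certs
  hosts: target_hosts
  gather_facts: true
  tasks:
    - name: Run command
      shell: Command to run
      # Runs this task on the machine executing the playbook
      delegate_to: localhost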

Related

Ansible: run certain yaml tasks file for all hosts

I am trying to run a certain YAML tasks file for all hosts, as follows (main.yml):
- name: prepare nodes
  include_tasks: node.yml node="{{ item }}"
  loop: "{{ groups['all'] }}"
node.yml:
- block:
    - name: Task 1...
    ...
    - name: Task 100...
  delegate_to: "{{ node }}"
However, I get this error: Invalid options for include_tasks: node. I think it used to work this way. I also tried moving the loop from main.yml into node.yml (right after delegate_to), and skipping the node="{{ item }}" part, but I always get errors.
What is the proper way to apply a task file to several hosts within a role?
It should work if you pass your node variable under vars and then loop:
- name: include tasks
  include_tasks: node.yml
  vars:
    node: '{{ item }}'
  loop: "{{ groups['all'] }}"
The above code works.
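For reference, a hypothetical node.yml that this include would loop over might look like the sketch below; the debug task is only a placeholder standing in for the question's Task 1 … Task 100.
- block:
    - name: Task 1 (placeholder)
      debug:
        msg: "running against {{ node }}"
  # delegate_to on the block is inherited by every task inside it
  delegate_to: "{{ node }}"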
A play runs on the hosts you specify. You can run certain tasks on a subset of nodes using when (see the sketch after the example playbook below).
But you can also have multiple plays in a playbook, so you can write one play with hosts: all that runs the tasks you want to run everywhere, and another play that runs the rest of the tasks.
Your playbook could look like this:
---
# This is a play
- name: run on all
  hosts: all
  vars:
    somevar: 'test'
  tasks:
    - name: prepare nodes
      include_tasks: node.yml

# This is another play
- name: run on group
  hosts: hostgroup
  vars:
    somevar: 'example'
  tasks:
    - debug:
        msg: 'This runs on all hosts in hostgroup'

# Both plays are in the same playbook
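For completeness, the when-based approach mentioned above could look like this sketch, assuming hostgroup is a group defined in your inventory:
# Single play targeting all hosts; when restricts one task to a subset
- name: run on all with a condition
  hosts: all
  tasks:
    - name: prepare nodes
      include_tasks: node.yml
    - name: tasks only for hostgroup members
      debug:
        msg: 'This runs only on hosts in hostgroup'
      when: inventory_hostname in groups['hostgroup']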

issue while including another playbook in ansible?

I have written a playbook named master.yaml, as defined below:
- hosts: master
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: Get env
      command: id -g -n {{ lookup('env', '$USER') }}
      register: group_user
      vars:
        is_done: "false"
    - include: slave.yaml
      vars:
        sethostname: "{{ group_user }}"
        worker: worker
      when: is_done == "true"
      where: inventory_hostname in groups['worker']
I am trying to run another playbook, named slave.yaml and defined below, after a certain condition is met.
- hosts: worker
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: Write to a file for daemon setup
      copy:
        content: "{{ sethostname }}"
        dest: "/home/ubuntu/test.text"
Now I have two questions:
1. I am not able to set the value of the var is_done; slave.yaml should only run when is_done is true.
2. How does slave.yaml access the value worker? I have defined the group worker in inventory.yaml.
I do not know if this is the right way to reach your objective, but I tried to make the playbook work while keeping as much of your logic as possible. Hope it helps.
The point is that you cannot use import_playbook inside a play; it can only be used at the playbook level. Check the module documentation for more information.
So I propose sharing the code through a role instead of a playbook. You will then be able to reuse the slave role from the master playbook as well as from other playbooks, a slave playbook for example.
The ansible folder structure is the following.
├── hosts
├── master.yml
└── roles
└── slave
└── tasks
└── main.yml
master.yml
---
- name: 'Master Playbook'
  # Using the serial keyword to run the playbook for each host one by one
  hosts: master
  serial: 1
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: 'Get env'
      command: id -g -n {{ lookup('env', '$USER') }}
      register: group_user
    - name: 'Calling the slave role'
      import_role:
        name: 'slave'
      # The return value of the command is stored in stdout
      vars:
        sethostname: "{{ group_user.stdout }}"
      # Only run when the Get env task has been done (state changed)
      when: group_user.changed
      # Delegate the call to the worker host(s) -> don't know if it's the expected behavior
      delegate_to: 'worker'
Slave main.yml
---
- name: 'Write to a file for daemon setup'
  copy:
    content: "{{ sethostname }}"
    dest: "/tmp/test.text"
In the end, /tmp/test.text contains the effective user group name.
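A possible invocation, assuming the inventory is the hosts file shown in the folder structure above:
ansible-playbook -i hosts master.yml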

Run ansible-playbook from localhost, and using vars from hosts file

Let's say that I want to run something locally, but using the vars from the hosts file; basically, for each inventory line I want to do something locally.
In this example, I want to use the ec2_tag module from Ansible.
Hosts file for the ansible-playbook run:
[any]
123.123.123.123 region=eu-region ec2_instance_id=x-xxxxxxxxxxxxxxxxx
123.123.123.124 region=eu-region ec2_instance_id=x-xxxxxxxxxxxxxxxxx
The playbook:
- name: something
  hosts: any
  tasks:
    - name: test
      ec2_tag:
        region: "{{ region }}"
        resource: "{{ ec2_instance_id }}"
        state: list
      register: ec2_tags
    - debug: msg={{ ec2_tags }}
How can I loop locally over the [any] hosts' vars, for example to get region?
It now runs with local_action and takes the vars from the hosts file:
- name: something
  hosts: any
  tasks:
    - name: test
      local_action: ec2_tag region={{ region }} resource={{ ec2_instance_id }} state=list
      register: ec2_tags
    - debug: msg={{ ec2_tags }}
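An equivalent sketch using delegate_to: localhost instead of local_action; the behavior should be the same, and the per-host inventory vars region and ec2_instance_id are still resolved for each host:
- name: something
  hosts: any
  tasks:
    - name: test
      ec2_tag:
        region: "{{ region }}"
        resource: "{{ ec2_instance_id }}"
        state: list
      # Run the module on the control node while keeping each host's inventory vars
      delegate_to: localhost
      register: ec2_tags
    - debug: msg={{ ec2_tags }}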

ansible multiple per-host output to file

I want to run an Ansible playbook on multiple hosts and register the output to a variable. Then, using this variable, I want to copy the output to a single file. The issue is that in the end the file contains the output of only one host. How can I append the output of all the hosts to the file, one after the other? I don't want to use serial: 1 as it slows down execution considerably when there are many hosts.
- hosts: all
  remote_user: cisco
  connection: local
  gather_facts: no
  vars_files:
    - group_vars/passwords.yml
  tasks:
    - name: Show command collection
      ntc_show_command:
        connection: ssh
        template_dir: /ntc-ansible/ntc-templates/templates
        platform: cisco_ios
        host: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
        command: "{{ commands }}"
      register: result
    - local_action:
        copy content="{{ result.response }}" dest='/home/user/show_cmd_ouput.txt'
The result variable is registered as a fact on each host the ntc_show_command task ran on, so you should access the values through the hostvars dictionary:
- local_action:
    module: copy
    content: "{{ groups['all'] | map('extract', hostvars, 'result') | map(attribute='response') | list }}"
    dest: /home/user/show_cmd_ouput.txt
  run_once: true
You also need run_once, because otherwise the action would still run once for every host in the group.

lineinfile module of ansible with delegate_to localhost doesn't write all data to localhost, it writes only 1 random entry on localhost

I have 3 remote VMs and 1 ansible node.
I am getting the hostnames of the VMs by running the hostname command on those remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to combine each VM's IP (collected with gather_facts: True as {{ ansible_default_ipv4.address }}) with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
The issue is that on the console the lineinfile module reports the line as added for each node when delegated to localhost, but when I check the file on localhost, only 1 entry is present instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as suggested in "Ansible writing output from multiple task to a single file", but that also gave the same result, i.e. only 1 entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with a redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am running this playbook, get_hostname.yml, with the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using Ansible 2.1.0.0 with the default ansible.cfg, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrong?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably a file-locking issue that I don't see happening in Ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts if all you want is the hostname by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution for that is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to a single file, each delegated task writes to its own file (named here after the inventory_hostname value), so there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously, you still have to create the tpl.j2 file; a minimal version is sketched below.
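A minimal tpl.j2 matching the IP-plus-hostname line format used throughout the examples above would be:
{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}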

Resources