Ansible - 'when' is not a valid attribute for a Play - ansible

I'm trying to figure out how to get rid of the warning [WARNING]: Could not match supplied host pattern, ignoring: ps_nodes by fixing the root cause. The root cause, in my case, is that when we create a Linux machine the ps_nodes group is empty. So I was trying to add block: plus when: (os_type|capitalize) == "Windows" to ensure that the play only executes when os_type is a Windows creation.
How can I achieve that? What I'm trying is to use the when condition, but it looks like that's not possible, and I'm not sure what to search for anymore.
Code example:
- name: "Start handling of vm specific delete scripts for Windows machines"
block:
hosts: ps_nodes
any_errors_fatal: false
gather_facts: false
vars:
private_ip_1: "{{ hostvars['localhost']['_private_ip_1']|default('') }}"
scripts: "{{ hostvars['localhost']['scripts'] }}"
sh_script_dir: "{{ hostvars['localhost']['sh_script_dir'] }}"
cred_base_hst: "{{ hostvars['localhost']['cred_base_hst'] }}"
cred_base_gst: "{{ hostvars['localhost']['cred_base_gst'] }}"
newline: "\n"
tasks:
- import_tasks: roles/script/tasks/callWindowsScripts.yml
when: action == 'delete'
when: (os_type|capitalize) == "Windows"
Error using 'when' for a Play:
ERROR! 'when' is not a valid attribute for a Play

The error appears to be in '/opt/projectX/playbooks/create_vm.yml': line 265, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

##############################################################################
- name: "Start handling of vm specific delete scripts for Windows machines"
  ^ here

I think the problem is the structure: when is a valid keyword for tasks and blocks, but not for a play. Drop the stray block: line and move the condition down to the task, combining both conditions with 'and':
- name: "Start handling of vm specific delete scripts for Windows machines"
  hosts: ps_nodes
  any_errors_fatal: false
  gather_facts: false
  vars:
    private_ip_1: "{{ hostvars['localhost']['_private_ip_1']|default('') }}"
    scripts: "{{ hostvars['localhost']['scripts'] }}"
    sh_script_dir: "{{ hostvars['localhost']['sh_script_dir'] }}"
    cred_base_hst: "{{ hostvars['localhost']['cred_base_hst'] }}"
    cred_base_gst: "{{ hostvars['localhost']['cred_base_gst'] }}"
    newline: "\n"
  tasks:
    - import_tasks: roles/script/tasks/callWindowsScripts.yml
      when: action == 'delete' and (os_type|capitalize) == "Windows"
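As for the warning itself: it appears because the ps_nodes group does not exist at all during Linux-only runs. If the inventory declares the group, even with no members, Ansible simply skips the play ("skipping: no hosts matched") without the warning. A minimal sketch in YAML inventory form; only the group name is taken from the question:

all:
  children:
    # declared but empty: avoids "Could not match supplied host pattern"
    ps_nodes: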
Got it. What if you use a host that always exists, like localhost, check the number of hosts in ps_nodes, and delegate to them? Something like this:
- hosts: localhost
  tasks:
    - import_tasks: roles/script/tasks/callWindowsScripts.yml
      delegate_to: ps_nodes
      when: groups['ps_nodes'] | default([]) | length > 0
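Two details matter for this idea to actually run: a when condition is a raw Jinja expression (it must not be wrapped in {{ }}, and the group is reachable as groups['ps_nodes'], not as a bare variable), delegate_to takes a single host rather than a group, and import_tasks is static, so it cannot loop. A hedged sketch using include_tasks with apply (available since Ansible 2.7); when the group is empty the loop runs zero times, so nothing executes and no warning appears:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Run the Windows delete scripts against each ps_node
      include_tasks:
        file: roles/script/tasks/callWindowsScripts.yml
        apply:
          delegate_to: "{{ item }}"
      with_items: "{{ groups['ps_nodes'] | default([]) }}"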

Same issue here, fixed by indentation: when is valid on an entry in the roles: list, as long as it is nested under the role:
- hosts: test
  roles:
    - role: test
      vars:
        k: 1
      when: "'dbg' in ansible_run_tags"

Ansible: Get Variable with inventory_hostname

I have the following passwords file vault.yml:
---
server1: "pass1"
server2: "pass2"
server3: "pass3"
I am loading these values in a variable called passwords:
- name: Get Secrets
  set_fact:
    passwords: "{{ lookup('template', './vault.yml')|from_yaml }}"
  delegate_to: localhost

- name: debug it
  debug:
    var: passwords.{{ inventory_hostname }}
The result of the debug task shows me what I want to get: the password for the specific host.
But if I set the following in a variables file:
---
ansible_user: root
ansible_password: passwords.{{ inventory_hostname }}
This will not give me the desired result. The ansible_password takes "passwords" literally and not as a variable.
How can I achieve the same result I got when debugging the passwords.{{ inventory_hostname }}?
Regarding the part
... if I set the following in a variables file ...
I am not sure, since I am missing some information about your use case and data flow. However, in general the syntax ansible_password: "{{ PASSWORDS[inventory_hostname] }}" might work for you.
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    PASSWORDS:
      SERVER1: "pass1"
      SERVER2: "pass2"
      SERVER3: "pass3"
      localhost: "pass_local"
  tasks:
    - name: Debug var
      debug:
        var: PASSWORDS

    - name: Set Fact 'ansible_password'
      set_fact:
        ansible_password: "{{ PASSWORDS[inventory_hostname] }}"

    - name: Debug var
      debug:
        var: ansible_password
That way you can access an element by name.
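Applied back to the vars file from the question, the bracket lookup would look like this (a sketch; it assumes the passwords fact from the Get Secrets task is set before the first task that actually opens a connection, since ansible_password is only templated when the connection is made):

---
ansible_user: root
ansible_password: "{{ passwords[inventory_hostname] }}"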

saving variables from playbook run to ansible host local file

I'm sort of trying to build an inventory file from an Ansible playbook run.
I'm trying to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if successful, virsh list --all, and to store the values in a file on the Ansible control host.
I've tried a few different playbook structures, but none of them succeeded in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not (libvirtd_status.rc == 0)
      ignore_errors: true

    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true

    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
Was able to cobble together something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]

- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost

    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml

    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out false positives (machines where libvirtd isn't running, or that report an empty list of VMs), because the above doesn't really handle that.
But at least there is output from all the KVM hosts!
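An alternative that avoids the repeated delegated lineinfile calls entirely is to render the whole file once on the control node from hostvars. A sketch, assuming each host registered all_vms with the virt module as above (that module's list_vms command returns the guest names under a list_vms key); hosts where the task was skipped are filtered out because their registered result carries no list_vms attribute:

- name: Save the cumulative result on the control node
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Write one line per host that reported KVM guests
      copy:
        dest: /tmp/libvirtd_hosts
        content: |
          {% for host in groups['all'] if hostvars[host].all_vms.list_vms is defined %}
          {{ host }} has {{ hostvars[host].all_vms.list_vms | join(', ') }}
          {% endfor %}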

set path when file exists in Ansible yml code

I'm trying to set a var only when a file exists; here is one of my attempts:
---
- hosts: all
  tasks:
    - stat:
        path: '{{ srch_path_new }}/bin/run'
      register: result

    - vars: srch_path="{{ srch_path_new }}"
      when: result.stat.exists
This also didn't work
- vars: srch_path:"{{ srch_path_new }}"
The task you are looking for is called set_fact: and it is the mechanism Ansible uses to declare arbitrary "host variables", sometimes called "hostvars" or (also confusingly) "facts".
The syntax would be:
- set_fact:
    srch_path: "{{ srch_path_new }}"
  when: result.stat.exists
Also, while vars: is a legal keyword on a task, its syntax is the same as that of set_fact: (or vars: on the playbook): a YAML dictionary, not a key:value pair as you had. For example:
- debug:
    msg: hello, {{ friend }}
  vars:
    friend: Jane Doe
Also be aware that vars: on a task exists only for that task.
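Putting the answer back into the question's play, a minimal sketch (srch_path simply keeps any previously defined value when the file is absent):

---
- hosts: all
  tasks:
    - name: Check whether the run binary exists
      stat:
        path: '{{ srch_path_new }}/bin/run'
      register: result

    - name: Use the new path only when the file exists
      set_fact:
        srch_path: '{{ srch_path_new }}'
      when: result.stat.exists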

Ansible rollback: run a group of tasks over list of hosts even when one of hosts failed

I have a playbook with multiple roles, hosts and groups. I am trying to develop rollback functionality that would run over all hosts. My current obstacle is that I see no way to delegate a role, block, or set of tasks to a group of hosts.
I tried looking up delegation to a group without loops, so that it would work on a block:
import_role doesn't accept loops
include_role doesn't accept delegate_to
same with import_tasks/include_tasks
Here is what I have now as a playbook file (shortened version):
- hosts: all
  any_errors_fatal: true
  vars_prompt:
    - name: "remote_user_p"
      prompt: "Remote user running the playbook"
      default: "root"
      private: no
    - name: "service_user_p"
      prompt: "Specify user to run non-root tasks"
      default: "user"
      private: no
  tasks:
    - set_fact:
        playbook_type: "upgrade"

    - import_role:
        name: 0_pre_check
      run_once: true
      remote_user: "{{ remote_user_p }}"
      become_user: "{{ service_user_p }}"
      become_method: su
      become: yes

    - block:
        - import_role:
            name: 1_os
        - import_role:
            name: 2_mysql
          when: inventory_hostname in groups['mysql'] | default("")
        - import_role:
            name: 3_web
          when: inventory_hostname in groups['web'] | default("")
        ...
      rescue:
        - block:
            - name: run rollback
              import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
          remote_user: "{{ remote_user }}"
          become_user: "{{ service_user }}"
          become_method: su
          become: yes
This is some example code from rollback.yml:
- block:
    - name: rollback symlinks to config dir
      file:
        src: "{{ current_config_path }}"
        dest: "{{ install_dir }}/static/cfg"
        owner: "{{ service_user }}"
        group: "{{ service_user_primary_group }}"
        state: link
      when: current_new_configs | default("N") == "Y"
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"

- block:
    - name: return config files
      shell: test -f '{{ item.1.current_ver_file_path }}' && cp -p {{ item.1.current_ver_file_path }} {{ item.1.old_config_location }}
      args:
        warn: false
      register: return_config_files
      failed_when: return_config_files.rc >= 2
      when:
        - roolback_moved_cfg | default('N') == "Y"
        - inventory_hostname in groups[item.0.group]
        - item.1.old_config_location != ""
        - item.1.current_ver_file_path != ""
      with_subelements:
        - "{{ config_files }}"
        - files
      become_user: root
      become_method: sudo
      become: yes

    - name: systemctl daemon-reload
      shell: systemctl daemon-reload
      failed_when: false
      when: root_rights == "Y"
      args:
        warn: false
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"
  when: root_rights == "Y"
  become_user: root
  become_method: sudo
  become: yes

- fail:
    msg: "Upgrade failed. Symbolic links were set to the previous version. Fix the issues and try again. If you wish to cancel the upgrade, restore the database backup manually."
As you can see, right now I use a lame workaround, introducing

delegate_to: "{{ item }}"
with_items:
  - "{{ ansible_play_hosts }}"

after every task.
There are two problems here:
1. I can't use the same approach after the return config files task, because it already uses a loop.
2. This is generally lame duplication of code, and I hate it.
Why I need it at all: if playbook execution fails somewhere in the mysql role, for example, the rescue block is executed only over the hosts of the mysql group (and, by the way, tasks from the next role continue to execute while the rescue block runs, despite all my efforts), while I would like it to run over all hosts instead.
I finally was able to solve this with an ugly, ugly hack: using plays instead of just roles, so now there are more than 10 plays. Don't judge me, I spent lots of effort trying to make it nice :(
An example play, followed by the rollback check; the same pattern applies to every other play:
- hosts: mysql
  any_errors_fatal: true
  tasks:
    - block:
        - import_role:
            name: 2_mysql
          when: not rollback | default(false)
      rescue:
        - block:
            - name: set fact for rollback
              set_fact:
                rollback: "yes"
              delegate_to: "{{ item }}"
              delegate_facts: true
              with_items: "{{ groups['all'] }}"

- hosts: all
  any_errors_fatal: true
  tasks:
    - name: run rollback
      import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
      when: rollback | default(false)
include_role doesn't accept delegate_to
Actually, it does.
With Ansible 2.8:
- name: "call my/role with host '{{ansible_hostname}}' for hosts in '{{ansible_play_hosts}}'"
include_role:
name: my/role
apply:
delegate_to: "{{current_host}}"
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With Ansible 2.5 to 2.7, see "2.5: delegate_to, include_role with loops" from George Shuklin, mentioned in ansible/ansible issue 35398:
- name: "call my/role with host '{{ansible_hostname}}' for items in '{{ansible_play_hosts}}'"
include_tasks: loop.yml
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With loop.yml being another task file of its own:
- name: "Import my/role for '{{current_host}}'"
import_role: name=my/role
delegate_to: "{{current_host}}"
So with two files (Ansible 2.5 to 2.7) or one file (2.8+), you can make a whole role and its tasks run on a delegated server.
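Applied to the rollback case from the question, a sketch (assuming Ansible 2.8+ and that 5_rollback is addressable as a role; ansible_play_hosts_all is used because, unlike ansible_play_hosts, it still contains the hosts that have already failed, and run_once keeps the loop from being repeated for every failed host):

- block:
    - import_role:
        name: 2_mysql
  rescue:
    - name: Run rollback once, delegated to every host of the play
      include_role:
        name: 5_rollback
        apply:
          delegate_to: "{{ current_host }}"
      run_once: true
      with_items: "{{ ansible_play_hosts_all }}"
      loop_control:
        loop_var: current_host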

iteration using with_items and register

Looking for help with a problem I've been struggling with for a few hours. I want to iterate over a list, run a command, register the output of each command, and then iterate with debug over each unique register's {{ someregister }}.stdout.
For example, the following code will spit out "msg": "1" and "msg": "2":
---
- hosts: localhost
  gather_facts: false
  vars:
    numbers:
      - name: "first"
        int: "1"
      - name: "second"
        int: "2"
  tasks:
    - name: Register output
      command: "/bin/echo {{ item.int }}"
      register: result
      with_items: "{{ numbers }}"

    - debug: msg={{ item.stdout }}
      with_items: "{{ result.results }}"
If, however, I try to capture the output of a command in a register variable named from the loop variable, I have trouble accessing the list or the elements within it. For example, altering the code slightly to:
---
- hosts: localhost
  gather_facts: false
  vars:
    numbers:
      - name: "first"
        int: "1"
      - name: "second"
        int: "2"
  tasks:
    - name: Register output
      command: "/bin/echo {{ item.int }}"
      register: "{{ item.name }}"
      with_items: "{{ numbers }}"

    - debug: var={{ item.name.stdout }}
      with_items: "{{ numbers }}"
Gives me:
TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "'unicode object' has no attribute 'stdout'"}
Is it not possible to dynamically name the register for the output of a command, so it can be referenced later in the play? I would like each iteration of the command and its registered output to be accessible uniquely; given the last example, I would expect variables named "first" and "second" to exist, but they don't.
Taking away the with_items from the debug stanza and explicitly referencing the var or msg as first.stdout returns "undefined".
Ansible version is 2.0.2.0 on CentOS 7.2.
Thanks in advance.
OK, so I found a post on Stack Overflow that helped me better understand what is going on here and how to access the elements in result.results.
The resultant code I ended up with was:
---
- hosts: localhost
  gather_facts: false
  vars:
    numbers:
      - name: "first"
        int: "1"
      - name: "second"
        int: "2"
  tasks:
    - name: Register output
      command: "/bin/echo {{ item.int }}"
      register: echo_out
      with_items: "{{ numbers }}"

    - debug: msg="item.item={{item.item.name}}, item.stdout={{item.stdout}}"
      with_items: "{{ echo_out.results }}"
Which gave me the desired result:
"msg": "item.item=first, item.stdout=1"
"msg": "item.item=second, item.stdout=2"
I am not sure whether I understand the question correctly, but maybe this can help:

- debug: msg="{{ item.stdout }}"
  with_items: "{{ echo_out.results }}"

Please note that Ansible will print both each item and the msg, so you need to look carefully for the line that looks like "msg": "2".
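If you really want something addressable by name, like first and second, you can fold the results list into a dictionary instead of trying to name registers dynamically. A sketch reusing the echo_out register from the accepted code (the combine filter exists since Ansible 2.0, so it works on 2.0.2.0):

- name: Map each name to its command output
  set_fact:
    echo_map: "{{ echo_map | default({}) | combine({item.item.name: item.stdout}) }}"
  with_items: "{{ echo_out.results }}"

- debug:
    var: echo_map.first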
