Ansible error while evaluating conditional: 'action' is undefined - ansible

This is my first Ansible role, so apologies for my ignorance...
I want to start/stop/status the services on my server. I have a role for each task. In my playbook I have a conditional check to decide which role should be invoked.
---
- name: Manage Kore Cluster
  hosts: all
  gather_facts: true
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - name: kor_start
      include_role:
        name: kor_start
      when: action == kor_start
    - name: kor_status
      include_role:
        name: kor_status
      when: action == kor_status
    - name: kor_stop
      include_role:
        name: kor_stop
      when: action == kor_stop
The task names and action values match the role names.
The conditional check 'action == kor_start' failed. The error was: error while evaluating conditional (action == kor_start): 'action' is undefined
Could you advise what I'm missing? Thanks!

The error says 'action' is undefined. Fix the conditions, for example by defaulting to an empty string. Also, kor_start, kor_status, and kor_stop are meant to be strings, not variables, so quote them:
---
- name: Manage Kore Cluster
  hosts: all
  gather_facts: true
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - name: kor_start
      include_role:
        name: kor_start
      when: action|d('') == 'kor_start'
    - name: kor_status
      include_role:
        name: kor_status
      when: action|d('') == 'kor_status'
    - name: kor_stop
      include_role:
        name: kor_stop
      when: action|d('') == 'kor_stop'
You can test the playbook, for example
shell> ansible-playbook playbook.yml -e action=kor_status
You can simplify the playbook
---
- name: Manage Kore Cluster
  hosts: all
  gather_facts: true
  become: yes
  become_method: sudo
  become_user: root
  vars:
    valid_roles: [kor_start, kor_stop, kor_status]
  tasks:
    - name: "{{ action|d('undefined') }}"
      include_role:
        name: "{{ action }}"
      when: action|d('undefined') in valid_roles
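If you would rather fail loudly than silently skip everything when the variable is missing or misspelled, you could validate it first. A minimal sketch using the assert module, reusing the valid_roles list above and placed before the include_role task (the task name and message are my own addition):
- name: Validate the requested action
  assert:
    that: action | d('undefined') in valid_roles
    fail_msg: "Pass a valid action, e.g. -e action=kor_status (one of {{ valid_roles }})"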

Related

How to check the OS version of a host which is dynamically added to the inventory

I'm trying to get the server name as user input, and if the server OS is RHEL 7, it should proceed with further tasks. I'm trying with hostvars but it is not helping; please help me find the OS version with a when condition:
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - debug:
        msg: "{{ hostvars['server1'].ansible_distribution_major_version }}"
When I execute the playbook, I get the error below:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['server1']\" is undefined\n\nThe error appears to be in '/var/lib/awx/projects/pacemaker_RHEL_7_ST/main_2.yml': line 33, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
You need to gather facts on the newly added hosts before you consume the variable. As an example, the following does it with automatic fact gathering.
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"

- name: Gather facts for newly added targets
  hosts: cluster_nodes
  # gather_facts: true <= this is the default

- name: Do <whatever> targeting localhost again
  hosts: localhost
  gather_facts: false # already gathered in play1
  tasks:
    # Warning!! bad practice. Looping on a group usually
    # shows you should have a play targeting that specific group
    - debug:
        msg: "OS version for {{ item }} is 7"
      when: hostvars[item].ansible_distribution_major_version | int == 7
      loop: "{{ groups['cluster_nodes'] }}"
If you don't want to rely on automatic gathering, you can run the setup module manually, e.g. for the second play:
- name: Gather facts for newly added targets
  hosts: cluster_nodes
  gather_facts: false
  tasks:
    - name: get facts from targets
      setup:
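As the "bad practice" comment above hints, the cleaner alternative is to target the group directly instead of looping over it from localhost. A minimal sketch of that variant, assuming the same cluster_nodes group and default fact gathering:
- name: Check OS version on the cluster nodes themselves
  hosts: cluster_nodes
  gather_facts: true
  tasks:
    - debug:
        msg: "OS version for {{ inventory_hostname }} is 7"
      when: ansible_distribution_major_version | int == 7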

Cannot use ansible_user with local connection

I have a playbook for running on localhost:
- hosts: localhost
  connection: local
  gather_facts: true
  roles:
    - role: foo
    - role: bar
    - role: baz
Then in various tasks I want to use ansible_user, but I get the error:
fatal: [localhost]: FAILED! => {"msg": "'ansible_user' is undefined"}
Okay, I figured it out.
The problem shows up in any task that uses become: true, e.g.:
- name: foo
  become: true
  file:
    path: /tmp/foo
    state: directory
    owner: "{{ ansible_env.USER }}" # cannot use {{ ansible_user }}
    group: "{{ ansible_env.USER }}" # cannot use {{ ansible_user }}
    mode: "0755"
So instead of ansible_user, we must use ansible_env.USER or ansible_env.USERNAME.
I think this is because Ansible doesn't know which user to use: if it's a local connection and we elevate permissions, is the "ansible user" actually root, or is it the user running the playbook? Ansible gets confused, so the variable is empty... I think.
(For this to work we must have gather_facts: true at the beginning of the play.)
Q: "Cannot use ansible_user with local connection."
A: There must be some kind of misconfiguration. The playbook below works as expected and there is no reason it wouldn't work with these tasks inside a role.
shell> cat playbook.yml
- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "ansible_user [{{ ansible_user }}]"
      become: false
    - file:
        path: /tmp/foo
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: false
    - debug:
        msg: "ansible_user [{{ ansible_user }}]"
      become: true
    - file:
        path: /tmp/bar
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: true

Deactivate the Current Ansible User with Ansible

In setting up a new Raspberry Pi with Ansible, I would like to perform the following actions:
Using the default pi user, create a new user named my_new_admin
Using the new my_new_admin user, deactivate the default pi user
Continue executing the playbook as my_new_admin
I am finding this difficult to achieve in a single playbook. Is it even possible to switch the active user like this in Ansible?
# inventory.yaml
---
all:
  children:
    rpis:
      hosts:
        myraspberrypi.example.com:
          ansible_user: my_new_admin # or should `pi` go here?
...
# initialize.yaml
- hosts: rpis
  remote_user: 'pi'
  become: true
  tasks:
    - name: 'create new user'
      user:
        name: 'my_new_admin'
        append: true
        groups:
          - 'sudo'
    - name: 'add SSH key to my_new_admin'
      *snip*
    - name: 'lock default user'
      remote_user: 'my_new_admin'
      user:
        name: 'pi'
        expires: '{{ ("1970-01-02 00:00:00" | to_datetime).timestamp() | float }}'
        password_lock: true
...
If you want to switch users, the easiest solution is to start another play. For example, the following playbook will run the first play as user pi and the second play as user root:
- hosts: pi
  gather_facts: false
  remote_user: pi
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"

- hosts: pi
  gather_facts: false
  remote_user: root
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"
In this playbook I'm being explicit about remote_user in both plays, but you could also set a user in your inventory and only override it when necessary. E.g., if I have:
pi ansible_host=raspberrypi.local ansible_user=root
Then I could rewrite the above playbook like this:
- hosts: pi
  gather_facts: false
  vars:
    ansible_user: pi
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"

- hosts: pi
  gather_facts: false
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"
Note that I'm setting the ansible_user variable here rather than using remote_user, because it looks as if ansible_user has precedence.
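Applied back to the original question, the same two-play pattern could create the new account as pi and then lock the default account as my_new_admin. A rough sketch, assuming the host group and user names from the question (SSH key setup omitted):
- hosts: rpis
  remote_user: pi
  become: true
  tasks:
    - name: create new user
      user:
        name: my_new_admin
        groups:
          - sudo
        append: true

- hosts: rpis
  remote_user: my_new_admin
  become: true
  tasks:
    - name: lock default user
      user:
        name: pi
        password_lock: true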

Ansible rollback: run a group of tasks over a list of hosts even when one of the hosts failed

I have a playbook with multiple roles, hosts and groups. I am trying to develop rollback functionality that would run over all hosts. My current obstacle is that I see no way to delegate a role, block, or set of tasks to a group of hosts.
I tried looking up delegation to a group without loops, so that it would work on a block:
import_role doesn't accept loops
include_role doesn't accept delegate_to
same with import_tasks/include_tasks
Here is what I have now as a playbook file (shortened version):
- hosts: all
  any_errors_fatal: true
  vars_prompt:
    - name: "remote_user_p"
      prompt: "Remote user running the playbook"
      default: "root"
      private: no
    - name: "service_user_p"
      prompt: "Specify user to run non-root tasks"
      default: "user"
      private: no
  tasks:
    - set_fact:
        playbook_type: "upgrade"
    - import_role:
        name: 0_pre_check
      run_once: true
      remote_user: "{{ remote_user_p }}"
      become_user: "{{ service_user_p }}"
      become_method: su
      become: yes
    - block:
        - import_role:
            name: 1_os
        - import_role:
            name: 2_mysql
          when: inventory_hostname in groups['mysql'] | default("")
        - import_role:
            name: 3_web
          when: inventory_hostname in groups['web'] | default("")
        ...
      rescue:
        - block:
            - name: run rollback
              import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
              remote_user: "{{ remote_user }}"
              become_user: "{{ service_user }}"
              become_method: su
              become: yes
This is some example code from rollback.yml:
- block:
    - name: rollback symlinks to config dir
      file:
        src: "{{ current_config_path }}"
        dest: "{{ install_dir }}/static/cfg"
        owner: "{{ service_user }}"
        group: "{{ service_user_primary_group }}"
        state: link
      when: current_new_configs | default("N") == "Y"
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"

- block:
    - name: return config files
      shell: test -f '{{ item.1.current_ver_file_path }}' && cp -p {{ item.1.current_ver_file_path }} {{ item.1.old_config_location }}
      args:
        warn: false
      register: return_config_files
      failed_when: return_config_files.rc >= 2
      when:
        - roolback_moved_cfg | default('N') == "Y"
        - inventory_hostname in groups[item.0.group]
        - item.1.old_config_location != ""
        - item.1.current_ver_file_path != ""
      with_subelements:
        - "{{ config_files }}"
        - files
      become_user: root
      become_method: sudo
      become: yes

    - name: systemctl daemon-reload
      shell: systemctl daemon-reload
      failed_when: false
      when: root_rights == "Y"
      args:
        warn: false
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"
  when: root_rights == "Y"
  become_user: root
  become_method: sudo
  become: yes

- fail:
    msg: "Upgrade failed. Symbolic links were set to the previous version. Fix the issues and try again. If you wish to cancel the upgrade, restore the database backup manually."
As you can see, I currently use a lame workaround by adding
delegate_to: "{{ item }}"
with_items:
  - "{{ ansible_play_hosts }}"
after every task.
There are two problems here:
1. I can't use the same approach on the "return config files" task, because it already uses a loop.
2. This is generally lame duplication of code and I hate it.
Why I need it at all: if playbook execution fails somewhere in the mysql role, for example, the rescue block will be executed only over the hosts in that mysql role (and, by the way, execution of tasks from the next role continues while the rescue block runs, the same number of tasks, despite all efforts), while I would like it to run over all hosts instead.
I was finally able to solve this with an ugly hack: I used plays instead of just roles, so now there are more than 10 plays. Don't judge me, I spent a lot of effort trying to make it nice.
Below is an example play followed by a check, the same as for every other play.
- hosts: mysql
  any_errors_fatal: true
  tasks:
    - block:
        - import_role:
            name: 2_mysql
          when: not rollback | default(false)
      rescue:
        - block:
            - name: set fact for rollback
              set_fact:
                rollback: "yes"
              delegate_to: "{{ item }}"
              delegate_facts: true
              with_items: "{{ groups['all'] }}"

- hosts: all
  any_errors_fatal: true
  tasks:
    - name: run rollback
      import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
      when: rollback | default(false)
include_role doesn't accept delegate_to
Actually, it does.
With ansible-core 2.8:
- name: "call my/role with host '{{ansible_hostname}}' for hosts in '{{ansible_play_hosts}}'"
include_role:
name: my/role
apply:
delegate_to: "{{current_host}}"
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With ansible-core 2.5 to 2.7, see "2.5: delegate_to, include_role with loops" from George Shuklin, mentioned in ansible/ansible issue 35398
- name: "call my/role with host '{{ansible_hostname}}' for items in '{{ansible_play_hosts}}'"
include_tasks: loop.yml
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With loop.yml being another set of tasks in its own file:
- name: "Import my/role for '{{current_host}}'"
import_role: name=my/role
delegate_to: "{{current_host}}"
So, in two files (with ansible-core 2.7) or one file (2.8), you can make a whole role and its tasks run on a delegated server.

Ansible: execute a last mandatory role in the playbook even if any of the previous roles in the execution flow fails

I have an Ansible playbook something like this:
---
- hosts: localhost
  connection: local
  tasks:
    - set_fact:
        build_date_time: "{{ ansible_date_time }}"

- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    role: Appvariables
    base_image_tag: Base
  roles:
    - role: begin-building-ami

- hosts: just_created
  remote_user: "{{ default_user }}"
  vars:
    role: AppName
  roles:
    - { role: role1, become: yes }
    - role: role2
    - { role: role3, become: yes }
    - { role: role4, become: yes }
    - { role: role5, become: yes }

- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    role: Appname
    ansible_date_time: "{{ build_date_time }}"
  roles:
    - finish-building-ami
In this case we need the finish-building-ami role, where we terminate the instance after baking the AMI, to always run. If for any reason one of the previous roles role1-role5 fails, the playbook stops and we are left with the failed instance, which should be terminated automatically; right now we terminate it manually when that happens.
So finish-building-ami (the mandatory role where we stop the instance, take the AMI, and terminate the instance at the end) needs to run even if any of role1-role5 fails in the above playbook.
You can rewrite your existing play to use import_role or include_role tasks instead of the roles section. This allows you to use blocks:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - import_role:
            name: role1
        - import_role:
            name: role2
          become: true
        - import_role:
            name: role3
      rescue:
        - set_fact:
            role_failed: true

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: This task runs after our roles.
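Applying the same idea to the question's layout, the roles that run on the new instance can be wrapped in a block with a rescue, so a failure no longer aborts the playbook, and the final localhost play with finish-building-ami then runs whether or not they succeeded. A rough sketch using the host and role names from the question:
- hosts: just_created
  remote_user: "{{ default_user }}"
  tasks:
    - block:
        - import_role:
            name: role1
          become: true
        - import_role:
            name: role2
        # ... role3 to role5 in the same way ...
      rescue:
        - set_fact:
            role_failed: true

- hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - finish-building-ami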
