Cannot use ansible_user with local connection - ansible

I have a playbook for running on localhost:
- hosts: localhost
  connection: local
  gather_facts: true
  roles:
    - role: foo
    - role: bar
    - role: baz
Then in various tasks I want to use ansible_user, but I get the error:
fatal: [localhost]: FAILED! => {"msg": "'ansible_user' is undefined"}

Okay figured it out.
In any task that uses become: true, e.g.:
- name: foo
  become: true
  file:
    path: /tmp/foo
    state: directory
    owner: "{{ ansible_env.USER }}" # cannot use {{ ansible_user }}
    group: "{{ ansible_env.USER }}" # cannot use {{ ansible_user }}
    mode: "0755"
So instead of ansible_user, we must use ansible_env.USER or ansible_env.USERNAME.
I think this is because Ansible doesn't know which user to report: with a local connection and privilege escalation, is the "ansible user" actually root, or the user running the playbook? Ansible can't tell, so the variable is left undefined... I think.
(For this to work we must have gather_facts: true at the beginning of the play.)
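Another option (a sketch, not part of the original workaround) is to fall back to the controller's environment when ansible_user is undefined; the env lookup runs on the control node, which with connection: local is the same machine:
- name: foo
  become: true
  file:
    path: /tmp/foo
    state: directory
    # fall back to the local $USER when ansible_user is not defined
    owner: "{{ ansible_user | default(lookup('env', 'USER')) }}"
    group: "{{ ansible_user | default(lookup('env', 'USER')) }}"
    mode: "0755"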

Q: "Cannot use ansible_user with local connection."
A: There must be some kind of misconfiguration. The playbook below works as expected and there is no reason it wouldn't work with these tasks inside a role.
shell> cat playbook.yml
- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "ansible_user [{{ ansible_user }}]"
      become: false
    - file:
        path: /tmp/foo
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: false
    - debug:
        msg: "ansible_user [{{ ansible_user }}]"
      become: true
    - file:
        path: /tmp/bar
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: true
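If ansible_user really does come up undefined in your environment, one way to pin it down explicitly (a sketch with a placeholder user name) is to set it in the inventory entry for localhost:
shell> cat hosts
localhost ansible_connection=local ansible_user=myuser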

Related

Ansible: Get Variable with inventory_hostname

I have the following passwords file vault.yml:
---
server1: "pass1"
server2: "pass2"
server3: "pass3"
I am loading these values into a variable called passwords:
- name: Get Secrets
  set_fact:
    passwords: "{{ lookup('template', './vault.yml') | from_yaml }}"
  delegate_to: localhost

- name: debug it
  debug:
    var: passwords.{{ inventory_hostname }}
The debug task shows me exactly what I want: the password for the specific host.
But if I set the following in a variables file:
---
ansible_user: root
ansible_password: passwords.{{ inventory_hostname }}
This will not give me the desired result. The ansible_password takes "passwords" literally and not as a variable.
How can I achieve the same result I got when debugging the passwords.{{ inventory_hostname }}?
Regarding the part
... if I set the following in a variables file ...
I am not sure, since some information about your use case and data flow is missing. However, in general the syntax ansible_password: "{{ PASSWORDS[inventory_hostname] }}" might work for you.
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    PASSWORDS:
      SERVER1: "pass1"
      SERVER2: "pass2"
      SERVER3: "pass3"
      localhost: "pass_local"
  tasks:
    - name: Debug var
      debug:
        var: PASSWORDS
    - name: Set Fact 'ansible_password'
      set_fact:
        ansible_password: "{{ PASSWORDS[inventory_hostname] }}"
    - name: Debug var
      debug:
        var: ansible_password
That way you can access an element by name.
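If you want to keep this in a variables file rather than a set_fact task, something along these lines might also work, because Ansible evaluates variables lazily; the group_vars path and the location of vault.yml are assumptions, not part of the original answer:
# group_vars/all.yml (assumed location; adjust to your layout)
passwords: "{{ lookup('template', './vault.yml') | from_yaml }}"
ansible_user: root
ansible_password: "{{ passwords[inventory_hostname] }}"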

How to check the OS version of host which in dynamically added to inventory

I'm trying to get the server name as user input, and if the server OS is RHEL 7, proceed with further tasks. I'm trying with hostvars but it is not helping; how can I check the OS version in a when condition?
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - debug:
        msg: "{{ hostvars['server1'].ansible_distribution_major_version }}"
When I execute the playbook, I get the error below:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['server1']\" is undefined\n\nThe error appears to be in '/var/lib/awx/projects/pacemaker_RHEL_7_ST/main_2.yml': line 33, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
You need to gather facts on the newly added hosts before you consume the variable. As an example, this does it with automatic fact gathering.
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"

- name: Gather facts for newly added targets
  hosts: cluster_nodes
  # gather_facts: true <= this is the default

- name: Do <whatever> targeting localhost again
  hosts: localhost
  gather_facts: false # already gathered in play1
  tasks:
    # Warning!! bad practice. Looping on a group usually
    # shows you should have a play targeting that specific group
    - debug:
        msg: "OS version for {{ item }} is 7"
      when: hostvars[item].ansible_distribution_major_version | int == 7
      loop: "{{ groups['cluster_nodes'] }}"
If you don't want to rely on automatic gathering, you can run the setup module manually, e.g. for the second play:
- name: Gather facts for newly added targets
  hosts: cluster_nodes
  gather_facts: false
  tasks:
    - name: get facts from targets
      setup:
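A minimal sketch of the play-per-group approach that the warning comment above hints at; the task body is only illustrative:
- name: Proceed only on RHEL 7 nodes
  hosts: cluster_nodes
  gather_facts: false # facts were already gathered in the previous play
  tasks:
    - name: run further tasks on RHEL 7 hosts only
      debug:
        msg: "{{ inventory_hostname }} runs RHEL {{ ansible_distribution_major_version }}"
      when: ansible_distribution_major_version | int == 7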

Deactivate the Current Ansible User with Ansible

In setting up a new Raspberry Pi with Ansible, I would like to perform the following actions:
1. Using the default pi user, create a new user named my_new_admin
2. Using the new my_new_admin user, deactivate the default pi user
3. Continue executing the playbook as my_new_admin
I am finding this difficult to achieve in a single playbook. Is it even possible to switch the active user like this in Ansible?
# inventory.yaml
---
all:
  children:
    rpis:
      hosts:
        myraspberrypi.example.com:
          ansible_user: my_new_admin # or should `pi` go here?
...
# initialize.yaml
- hosts: rpis
  remote_user: 'pi'
  become: true
  tasks:
    - name: 'create new user'
      user:
        name: 'my_new_admin'
        append: true
        groups:
          - 'sudo'
    - name: 'add SSH key to my_new_admin'
      *snip*
    - name: 'lock default user'
      remote_user: 'my_new_admin'
      user:
        name: 'pi'
        expires: '{{ ("1970-01-02 00:00:00" | to_datetime).timestamp() | float }}'
        password_lock: true
...
If you want to switch users, the easiest solution is to start another play. For example, the following playbook will run the first play as user pi and the second play as user root:
- hosts: pi
  gather_facts: false
  remote_user: pi
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"

- hosts: pi
  gather_facts: false
  remote_user: root
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"
In this playbook I'm being explicit about remote_user in both plays, but you could also set a user in your inventory and only override it when necessary. E.g., if I have:
pi ansible_host=raspberrypi.local ansible_user=root
Then I could rewrite the above playbook like this:
- hosts: pi
  gather_facts: false
  vars:
    ansible_user: pi
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"

- hosts: pi
  gather_facts: false
  tasks:
    - command: whoami
      register: whoami
    - debug:
        msg: "{{ whoami.stdout }}"
Note that I'm setting the ansible_user variable here rather than using remote_user, because ansible_user appears to take precedence over remote_user.
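Mapped onto the original question, a rough sketch of the two-play approach (user names taken from the question; the SSH key task and sudo setup for my_new_admin are omitted and assumed to happen before the second play):
- hosts: rpis
  remote_user: pi
  become: true
  tasks:
    - name: create new admin user
      user:
        name: my_new_admin
        groups: sudo
        append: true
    # add the SSH key for my_new_admin here, as in the question

- hosts: rpis
  remote_user: my_new_admin
  become: true
  tasks:
    - name: lock default pi user
      user:
        name: pi
        password_lock: true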

Point Ansible to .pem file for a dynamic set of EC2 nodes

I'm pretty new to Ansible.
I get:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).", "unreachable": true}
at the last step when I try to run this Ansible playbook:
---
- name: find EC2 instances
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ansible_python_interpreter: "/usr/bin/python3"
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    aws_region: "us-west-2"
    vpc_subnet_id: "subnet-xxx"
    ec2_filter:
      "tag:Name": "airflow-test"
      "tag:Team": 'data-science'
      "tag:Environment": 'staging'
      "instance-state-name": ["stopped", "running"]
  vars_files:
    - settings/vars.yml
  tasks:
    - name: Find EC2 Facts
      ec2_instance_facts:
        region: "{{ aws_region }}"
        filters: "{{ ec2_filter }}"
      register: ec2
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_dns_name }}"
        groupname: launched
      loop: "{{ ec2.instances }}"
    - name: Wait for the instances to boot by checking the ssh port
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        sleep: 10
        timeout: 120
        state: started
      loop: "{{ ec2.instances }}"

- name: install required packages on instances
  hosts: launched
  become: True
  gather_facts: True
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
  tasks:
    - name: ls
      command: ls
I know I need to point Ansible to the .pem file. I tried adding ansible_ssh_private_key_file to the inventory file, but since the nodes are dynamic, I'm not sure how to do it.
Adding ansible_ssh_user solved the problem:
- name: install required packages on instances
  hosts: launched
  become: True
  gather_facts: True
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_ssh_user: "ec2-user"
  tasks:
    - name: ls
      command: ls
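If the key itself still needs to be supplied explicitly, the same vars block can also carry ansible_ssh_private_key_file; the path below is a placeholder, not part of the original answer:
- name: install required packages on instances
  hosts: launched
  become: True
  gather_facts: True
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    ansible_ssh_user: "ec2-user"
    # placeholder path; point it at the key pair the instances were launched with
    ansible_ssh_private_key_file: "~/.ssh/my-ec2-key.pem"
  tasks:
    - name: ls
      command: ls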

Ansible rollback: run a group of tasks over list of hosts even when one of hosts failed

I have a playbook with multiple roles, hosts and groups. I am trying to develop rollback functionality that would run over all hosts. My current obstacle is that I see no way to delegate a role, block or set of tasks to a group of hosts.
I tried looking up delegation to a group without loops, so it would work on a block:
import_role doesn't accept loops
include_role doesn't accept delegate_to
same with import_tasks/include_tasks
Here is what I have now as a playbook file (shortened version):
- hosts: all
  any_errors_fatal: true
  vars_prompt:
    - name: "remote_user_p"
      prompt: "Remote user running the playbook"
      default: "root"
      private: no
    - name: "service_user_p"
      prompt: "Specify user to run non-root tasks"
      default: "user"
      private: no
  tasks:
    - set_fact:
        playbook_type: "upgrade"
    - import_role:
        name: 0_pre_check
      run_once: true
      remote_user: "{{ remote_user_p }}"
      become_user: "{{ service_user_p }}"
      become_method: su
      become: yes
    - block:
        - import_role:
            name: 1_os
        - import_role:
            name: 2_mysql
          when: inventory_hostname in groups['mysql'] | default("")
        - import_role:
            name: 3_web
          when: inventory_hostname in groups['web'] | default("")
        ...
      rescue:
        - block:
            - name: run rollback
              import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
          remote_user: "{{ remote_user }}"
          become_user: "{{ service_user }}"
          become_method: su
          become: yes
This is some example code from rollback.yml:
- block:
    - name: rollback symlinks to config dir
      file:
        src: "{{ current_config_path }}"
        dest: "{{ install_dir }}/static/cfg"
        owner: "{{ service_user }}"
        group: "{{ service_user_primary_group }}"
        state: link
      when: current_new_configs | default("N") == "Y"
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"

- block:
    - name: return config files
      shell: test -f '{{ item.1.current_ver_file_path }}' && cp -p {{ item.1.current_ver_file_path }} {{ item.1.old_config_location }}
      args:
        warn: false
      register: return_config_files
      failed_when: return_config_files.rc >= 2
      when:
        - roolback_moved_cfg | default('N') == "Y"
        - inventory_hostname in groups[item.0.group]
        - item.1.old_config_location != ""
        - item.1.current_ver_file_path != ""
      with_subelements:
        - "{{ config_files }}"
        - files
      become_user: root
      become_method: sudo
      become: yes

    - name: systemctl daemon-reload
      shell: systemctl daemon-reload
      failed_when: false
      when: root_rights == "Y"
      args:
        warn: false
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"
  when: root_rights == "Y"
  become_user: root
  become_method: sudo
  become: yes

- fail:
    msg: "Upgrade failed. Symbolic links were set to the previous version. Fix the issues and try again. If you wish to cancel the upgrade, restore the database backup manually."
As you can see, for now I use a lame workaround by adding
delegate_to: "{{ item }}"
with_items:
  - "{{ ansible_play_hosts }}"
after every task.
There are two problems here:
1. I can't use the same approach after the "return config files" task, because it already uses a loop.
2. This is generally lame duplication of code and I hate it.
Why I need it at all: if playbook execution fails somewhere in the mysql role, for example, the rescue block will be executed only over the hosts of that mysql role (and, by the way, the same number of tasks from the next role continues to execute while the rescue block runs, despite all my efforts), while I would like it to run over all hosts instead.
I finally was able to solve this with an ugly, ugly hack: I used plays instead of just roles, so now there are more than 10 plays. Don't judge me, I spent lots of effort trying to make it nice ):
Here is an example play followed by the rollback check; the same pattern is used for every other play.
- hosts: mysql
  any_errors_fatal: true
  tasks:
    - block:
        - import_role:
            name: 2_mysql
          when: not rollback | default(false)
      rescue:
        - block:
            - name: set fact for rollback
              set_fact:
                rollback: "yes"
              delegate_to: "{{ item }}"
              delegate_facts: true
              with_items: "{{ groups['all'] }}"

- hosts: all
  any_errors_fatal: true
  tasks:
    - name: run rollback
      import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
      when: rollback | default(false)
include_role doesn't accept delegate_to
Actually, it does.
With Ansible 2.8:
- name: "call my/role with host '{{ ansible_hostname }}' for hosts in '{{ ansible_play_hosts }}'"
  include_role:
    name: my/role
    apply:
      delegate_to: "{{ current_host }}"
  with_items: "{{ ansible_play_hosts }}"
  loop_control:
    loop_var: current_host
With Ansible 2.5 to 2.7, see "2.5: delegate_to, include_role with loops" by George Shuklin, mentioned in ansible/ansible issue 35398:
- name: "call my/role with host '{{ansible_hostname}}' for items in '{{ansible_play_hosts}}'"
include_tasks: loop.yml
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With loop.yml another tasks in its own file:
- name: "Import my/role for '{{current_host}}'"
import_role: name=my/role
delegate_to: "{{current_host}}"
So in two files (with Ansible 2.7) or one file (2.8), you can make a whole role and its tasks run on a delegated server.
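As a sketch of how that 2.8-style pattern could be wired into the original rescue block (assuming the rollback tasks are packaged as a role named 5_rollback that is resolvable on the roles path; run_once keeps the loop from being repeated by every failing host):
  rescue:
    - name: run rollback on every host in the play
      include_role:
        name: 5_rollback
        apply:
          delegate_to: "{{ current_host }}"
      loop: "{{ ansible_play_hosts }}"
      loop_control:
        loop_var: current_host
      run_once: true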
