Unable to switch to a different user using become_user - ansible

I am writing an Ansible playbook to set up key-based SSH access on several hosts for a particular user.
I have the following servers:
automation_host
Master
Slave1
Slave2
From the automation host I will trigger Ansible to run the playbook, which should first log in to master as user1, then switch to user2, create SSH keys as user2, and copy id_rsa.pub to the slave nodes.
Inventory file contents:
[master]
172.xxx.xxx.xxx
[slaves]
172.xxx.xxx.xxx
172.xxx.xxx.xxx
[all:vars]
ansible_connection=ssh
ansible_ssh_user=user1
playbook.yml file:
- hosts: master
  become_user: user2
  become: yes
  roles:
    - name: passwordless-ssh
user2 exists on all hosts (except automation_host) and is added to sudoers as well.
In the passwordless-ssh role, I have added the lines included below to check which user is currently executing the tasks.
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host
- debug: var=username_on_the_host
The debug message shows user1 (I am expecting it to be user2).
Ansible version: 2.5.2
I am very new to Ansible.

local_action always runs on the control node (automation_host in your case); change it to command so the task runs on the target host:
- hosts: master
  become_user: user2
  become: yes
  tasks:
    - name: get the username running the deploy
      command: whoami
      register: username_on_the_host
    - debug: var=username_on_the_host['stdout']
    - name: do something
      command: echo 'hello'
      when: username_on_the_host['stdout'] == 'user2'
    - name: do something else
      command: echo 'goodby'
      when: username_on_the_host['stdout'] == 'user1'
Output
TASK [debug] *********************************************
ok: [master] => {
"username_on_the_host['stdout']": "user2"
}
TASK [do something] *********************************************
changed: [master]
TASK [do something else] *********************************************
The do something else task does not run, because whoami returned user2 and its when condition is not met.
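For reference, here is a minimal sketch of what the passwordless-ssh role tasks could look like once become works as intended. This is not from the original answer; it assumes user2's home directory is /home/user2, reuses the slaves group from the inventory above, and the modules shown (user, slurp, authorized_key) are only one possible approach:

# roles/passwordless-ssh/tasks/main.yml (illustrative sketch)
- name: generate an SSH key pair for user2 on the master
  become: yes
  user:
    name: user2
    generate_ssh_key: yes
    ssh_key_file: .ssh/id_rsa

- name: read the public key that was just generated
  become: yes
  slurp:
    src: /home/user2/.ssh/id_rsa.pub
  register: user2_pubkey

- name: authorize the key for user2 on each slave node
  become: yes
  authorized_key:
    user: user2
    key: "{{ user2_pubkey.content | b64decode }}"
  delegate_to: "{{ item }}"
  with_items: "{{ groups['slaves'] }}"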

Related

Run ansible playbook in role with specific playbook

I need to run one playbook inside the main playbook with a different hosts file, e.g.:
- hosts: starter
  become: yes
  tasks:
    - name: start playbook2
      shell: |
        ansible-playbook -b -i host-cluster.txt install-cluster.yaml
      args:
        executable: /bin/bash
It works and runs my playbook, but I know this structure is not correct! Also, when playbook2 starts I need to see its results in the terminal, but I only see
TASK [install-cluster : Run task1 ] *******************************
I want to see the result of task1 in the terminal.
Update:
I need to run one role with a specific file (install-cluster.yaml) and a specific inventory hosts file (host-cluster.txt).
Something like this:
- name: start kuber cluster
  include_role:
    name: kuber
    tasks_from: cluster.yml
  hosts: kuber-hosts.txt
You can load several inventories at once and then target different group(s)/host(s) in different plays. Just as an idea, you could have the following pseudo your_playbook.yml:
- name: Play1 does some stuff on starter hosts
  hosts: starter
  become: true
  tasks:
    - name: do a stuff on starter servers
      debug:
        msg: "I did a thing"

- name: Play2 starts cluster on kuber hosts
  hosts: kuber
  tasks:
    - name: start kuber cluster
      include_role:
        name: kuber
        tasks_from: cluster.yml

- name: Play3 does more stuff on starter hosts
  hosts: starter
  tasks:
    - name: do more stuff on starter servers
      debug:
        msg: "I did more things"
You can then run this playbook with two different inventories at once, if that is how you have structured them:
ansible-playbook -i inventories/starter -i inventories/kuber your_playbook.yml
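As a rough illustration (the hostnames below are placeholders, not from the question), the two inventory sources referenced in that command could look like this:

# inventories/starter
[starter]
starter-01.example.com
starter-02.example.com

# inventories/kuber
[kuber]
kuber-01.example.com
kuber-02.example.com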

Change remote_user within main.yml in role

I am new to Ansible.
I am trying to create a role where I start the playbook as root and then, in the next play, switch to a different user and continue. The following files are within the role itself.
---
# tasks file for /etc/ansible/roles/dashmn
#
- name: create users logged in as root
  remote_user: root
  import_tasks: whoami.yml
  import_tasks: create_users.yml
  import_tasks: set_sudoer.yml

- name: log in as dashadmin
  remote_user: dashadmin
  become: true
  import_tasks: whoami.yml
  import_tasks: disable_rootlogin.yml
  import_tasks: update_install_reqs.yml
  import_tasks: configure_firewall.yml
  import_tasks: add_swap.yml
I added a sudoer task that adds users to /etc/sudoers.d:
---
- name: set passwordless sudo
  lineinfile:
    path: /etc/sudoers
    state: present
    regexp: '^%sudo'
    line: '%sudo ALL=(ALL) NOPASSWD: ALL'
    validate: 'visudo -cf %s'
I created a deploy.yml that uses the role I created, as follows:
---
- hosts: test-mn
  roles:
    - dashmn
When I syntax-check deploy.yml I get:
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names
by default, this will change, but still be user configurable on deprecation. This feature will be removed in
version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: While constructing a mapping from /etc/ansible/roles/dashmn/tasks/main.yml, line 4, column 3, found
a duplicate dict key (import_tasks). Using last defined value only.
[WARNING]: While constructing a mapping from /etc/ansible/roles/dashmn/tasks/main.yml, line 10, column 3, found
a duplicate dict key (import_tasks). Using last defined value only.
Any help on how to organize this to make it better would be appreciated.
Now, my problem is that if I remove the play-style entries from the tasks file and just leave the import_tasks, everything works, but it is not using the user dashadmin; it is using root.
I would like to create the users and then only ever log in as dashadmin and work as dashadmin.
I also get an error:
FAILED! => {"msg": "Missing sudo password"}
Something is clearly wrong; I'm just not sure where I've gone wrong.
Here is the /etc/sudoers file:
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL) NOPASSWD: ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
First of all, the way you defined import_tasks means only the last import_tasks in each mapping is executed, as the warning says.
Secondly, remote_user is used for logging in to the defined host(s), but if you want to log in as one user and then execute tasks as a different user, you need to define become_user. By default, become_user is set to root.
So below is probably how you can change the role's import_tasks:
/etc/ansible/roles/dashmn/tasks/main.yml
- name: create users logged in as root
  block:
    - import_tasks: whoami.yml
    - import_tasks: create_users.yml
    - import_tasks: set_sudoer.yml
  remote_user: root

- name: log in as dashadmin
  block:
    - import_tasks: whoami.yml
    - import_tasks: disable_rootlogin.yml
    - import_tasks: update_install_reqs.yml
    - import_tasks: configure_firewall.yml
    - import_tasks: add_swap.yml
  remote_user: dashadmin
  become: yes
Refer to privilege escalation for more details.
Q: "Change remote_user within main.yml in a role"
Short answer: See the example in "Play 3" on how to change remote_user for each task.
Details: The keyword remote_user can be used in all of a playbook's objects: play, role, block, task. See Playbook Keywords.
The best practice is to connect to the remote host as an unprivileged user and escalate privileges. For example,
- name: Play 1
  hosts: test_01
  remote_user: user1
  become: true
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
gives
ok: [test_01] =>
  result.stdout: root
Without privilege escalation, the tasks will be executed by the remote_user on the remote host. For example,
- name: Play 2
  hosts: test_01
  remote_user: user1
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
gives
ok: [test_01] =>
  result.stdout: user1
It's possible to declare the remote_user for each task. For example
- name: Play 3
  hosts: test_01
  remote_user: user1
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
    - command: whoami
      remote_user: user2
      register: result
    - debug:
        var: result.stdout
gives
ok: [test_01] =>
  result.stdout: user1
ok: [test_01] =>
  result.stdout: user2
All plays can be put into one playbook.
Example of a sudoers file:
root.test_01# cat /usr/local/etc/sudoers
...
#includedir /usr/local/etc/sudoers.d
admin ALL=(ALL) NOPASSWD: ALL
user1 ALL=(ALL) NOPASSWD: ALL
user2 ALL=(ALL) NOPASSWD: ALL

`remote_user` is ignored in playbooks and roles

I have defined the following in my ansible.cfg
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
remote_user = ansible
However, I have a playbook bootstrap.yaml where I connect as root rather than ansible:
---
- hosts: "{{ target }}"
  become: no
  gather_facts: false
  remote_user: root
  vars:
    os_family: "{{ osfamily }}}"
  roles:
    - role: papanito.bootstrap
However, it seems that remote_user: root is ignored, as I always get a connection error because it uses the user ansible instead of root for the SSH connection:
fatal: [node001]: UNREACHABLE! => {"changed": false,
    "msg": "Failed to connect to the host via ssh:
            ansible@node001: Permission denied (publickey,password).",
    "unreachable": true}
The only workaround I could find is calling the playbook with -e ansible_user=root. But this is not convenient, as I want to call multiple playbooks from site.yaml, where the first playbook has to run with ansible_user root, whereas the others have to run with ansible:
- import_playbook: playbooks/bootstrap.yml
- import_playbook: playbooks/networking.yml
- import_playbook: playbooks/monitoring.yml
Any suggestions on what I am missing or how to fix this?
Q: "remote_user: root is ignored"
A: The playbook works as expected
- hosts: test_01
  gather_facts: false
  become: no
  remote_user: root
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
gives
"result.stdout": "root"
But the variable can be overridden in the inventory. For example, with the inventory
$ cat hosts
all:
  hosts:
    test_01:
  vars:
    ansible_connection: ssh
    ansible_user: admin
the result is
"result.stdout": "admin"
Double-check the inventory with the command
$ ansible-inventory --list
Notes
It might also be necessary to double-check the role papanito.bootstrap.
See Controlling how Ansible behaves: precedence rules
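One possible fix, sketched below under the assumption that the inventory (or ansible.cfg) is what sets the user to ansible: define ansible_user as a play variable in bootstrap.yml, since play vars take precedence over inventory vars and over remote_user for the connection:

- hosts: "{{ target }}"
  become: no
  gather_facts: false
  vars:
    ansible_user: root
  roles:
    - role: papanito.bootstrap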
I faced a similar issue, where an EC2 instance required a different username for SSH. You could try the example below:
- import_playbook: playbooks/bootstrap.yml
  vars:
    ansible_ssh_user: root
Try this:
Instead of remote_user: root, use remote_user: ansible and additionally set become: yes, become_user: root, and become_method: sudo (or su).
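A sketch of that suggestion in playbook form (assuming the ansible user has passwordless sudo on the targets; not part of the original answer):

- hosts: "{{ target }}"
  remote_user: ansible
  become: yes
  become_user: root
  become_method: sudo
  roles:
    - role: papanito.bootstrap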

ansible rolling restart playbook

Folks,
I'd like to have a service restarted individually on each host, and wait for user input before continuing on to the next host in the inventory.
Currently, if you have the following:
- name: Restart something
  command: service foo restart
  tags:
    - foo

- name: wait
  pause: prompt="Make sure org.foo.FooOverload exception is not present"
  tags:
    - foo
It will only prompt once, and not really have the desired effect.
What is the proper Ansible syntax to wait for user input before running the restart task on each host?
Use a combination of the serial keyword and the --step option of ansible-playbook.
playbook.yml
- name: Do it
  hosts: myhosts
  serial: 1
  tasks:
    - shell: hostname
Call the playbook with the --step option:
ansible-playbook playbook.yml --step
You will be prompted for every host.
Perform task: shell hostname (y/n/c): y
Perform task: shell hostname (y/n/c): ****************************************
changed: [y.y.y.y]
Perform task: shell hostname (y/n/c): y
Perform task: shell hostname (y/n/c): ****************************************
changed: [z.z.z.z]
For more information: Start and Step
I went ahead with this:
- name: Restart Datastax Agent
  tags:
    - agent
  hosts: cassandra
  sudo: yes
  serial: 1
  gather_facts: yes
  tasks:
    - name: Pause
      pause: prompt="Hit RETURN to restart datastax agent on {{ inventory_hostname }}"
    - name: Restarting Datastax Agent on {{ inventory_hostname }}
      service: name=datastax-agent state=restarted

How to get the home directory of a sudoed user from ansible_env?

When I sudo as a user, ansible_env does not have the correct HOME variable set ("/root"). However, if I echo the HOME env variable it is correct ("/var/lib/pgsql"). Is there no other way to get the home directory of a sudo'ed user?
Also, I have already set "sudo_flags = -H" in ansible.cfg and I cannot log in as the postgres user.
- name: ansible_env->HOME
  sudo: yes
  sudo_user: postgres
  debug: msg="{{ ansible_env.HOME }}"

- name: echo $HOME
  sudo: yes
  sudo_user: postgres
  shell: "echo $HOME"
  register: postgres_homedir

- name: postgres_homedir.stdout
  sudo: yes
  sudo_user: postgres
  debug: msg="{{ postgres_homedir.stdout }}"
Result:
TASK: [PostgreSQL | ansible_env->HOME] ****************************************
ok: [postgres] => {
"msg": "/root"
}
TASK: [PostgreSQL | echo $HOME] ***********************************************
changed: [postgres]
TASK: [PostgreSQL | postgres_homedir.stdout] **********************************
ok: [postgres] => {
"msg": "/var/lib/pgsql"
}
I can replicate the output above by running the playbook as the root user (either locally with - hosts: localhost, or by SSHing as root). The facts gathered by Ansible are those of the root user.
If this is what you are doing, then your workaround seems to be the best way of getting the postgres user's $HOME variable.
Even though you add sudo_user: postgres to the ansible_env->HOME task, the fact will not change since it is gathered at the start of the play.
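As an alternative sketch (not from the original answers, and assuming Ansible 2.0+ where the getent module is available), you can look the home directory up from the passwd database instead of relying on facts gathered as root; field 4 of the registered getent_passwd entry is the home directory:

- name: look up the postgres passwd entry
  getent:
    database: passwd
    key: postgres

- name: show the postgres home directory
  debug:
    msg: "{{ getent_passwd['postgres'][4] }}"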
"~" expands as the non-root user if you use "become: no". In the example below I combine that with an explicit "sudo" to do something as root in the home directory:
- name: Setup | install
  command: sudo make install chdir=~/git/emacs
  become: no

Resources