Ansible and "sudo su - " - ansible

I'm trying to create an Ansible playbook that will work in my current work environment. I log in to servers as user "myuser" using SSH keys; I was never given a password, so I don't know it. Most of the commands I run are executed as a different non-root user, e.g. "appadmin". I become these users via "sudo su - appadmin", since I don't have their passwords either.
The different variations I've tried either complain "sudo: a password is required" or time out after 12 seconds. I'll show the second case.
The playbook is very simple:
---
- hosts: sudo-test
  gather_facts: False
  remote_user: myuser
  become: yes
  become_user: appadmin

  tasks:
    - name: who
      shell: whoami > qwert.txt
My host entry is as follows:
[sudo-test]
appserver.example.com ansible_become_method=su ansible_become_exe="sudo su"
This is the error I get:
pablo@host=> ansible-playbook test_sudo.yml
PLAY [sudo-test] ****************************************************************************************************
TASK [who] **********************************************************************************************************
fatal: [appserver.example.com]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}
to retry, use: --limit @/home/pablo/ansible_dir/test_sudo.retry
PLAY RECAP **********************************************************************************************************
appserver.example.com : ok=0 changed=0 unreachable=0 failed=1
At this point I'm fairly confident that the playbook and inventory are configured correctly. I believe the issue is that /etc/sudoers doesn't permit switching to the "appadmin" user in a way that allows me to leverage Ansible's ability to become another user. This thread describes a similar scenario, and limitation.
The relevant section of /etc/sudoers looks like this:
User myuser may run the following commands on this host:
(root) NOPASSWD: /bin/su - appadmin
It seems I would have to have the sysadmin change this to:
User myuser may run the following commands on this host:
(root) NOPASSWD: /bin/su - appadmin *
Does this sound right?
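For reference, this is roughly what my reasoning is based on: a sketch of the command the su become method would build with these settings, and the sudoers rule that would have to match it. The ansible_become_flags="-" variable here is an assumption (to get the login-shell form), and the exact quoting of the wrapped command varies by Ansible version:
[sudo-test]
appserver.example.com ansible_become_method=su ansible_become_exe="sudo su" ansible_become_flags="-"
Roughly what would run on the target for each task:
# sudo su - appadmin -c '/bin/sh -c "... module command ..."'
#
# sudoers matches a command together with its arguments, so a rule listing
# only "/bin/su - appadmin" allows the bare interactive form but not the
# trailing "-c ..." that Ansible appends. The wildcard form would be:
myuser ALL = (root) NOPASSWD: /bin/su - appadmin *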

I don't find any issue with the YAML; in fact, I tested it in my Ansible 2.8 environment.
---
- hosts: node1
  gather_facts: False
  remote_user: ansible
  become: yes
  become_user: testuser

  tasks:
    - name: who
      shell: whoami
      register: output
    - debug: var=output
and inventory:
[node1]
node1.example.com ansible_become_method=su ansible_become_exe="sudo su"
output:
TASK [debug] ****************************************************************************************************************************
ok: [node1.example.com] =>
I would suggest increasing the SSH timeout in the ansible.cfg file (uncomment the timeout line and set it to 60, or whatever number of seconds you wish) and observing this scenario again.
# SSH timeout
#timeout = 300
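For example, after the change the relevant lines in ansible.cfg might look like this (60 seconds is just an illustrative value):
[defaults]
# SSH timeout
timeout = 60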

Try this one:
- hosts: application
  become: yes
  become_exe: "sudo su - appadmin"
  become_method: su

  tasks:
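Equivalently, the same become settings can be supplied as host variables in the inventory instead of play keywords; a minimal sketch mirroring the play above, with the host name assumed:
[application]
appserver.example.com ansible_become=yes ansible_become_method=su ansible_become_exe="sudo su - appadmin"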

Related

This command has to be run under the root user. But I am root only

I have a simple playbook that tries to install packages.
My task is failing (see output).
I can ping the host, and manually I can run the command as the super user (tco).
my ansible.cfg
[defaults]
inventory = /Users/<myuser>/<automation>/ansible/inventory
remote_user = tco
vars/packages.yml:
packages:
  - yum-utils
  - sshpass
playbook
---
- hosts: all
  vars_files:
    - vars/packages.yml

  tasks:
    - name: testing connection
      ping:
      remote_user: tco

    - name: Installing packages
      yum:
        name: "{{ packages }}"
        state: present
Running playbook:
ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
Output:
ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
BECOME password:
PLAY [all] ******************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [testing connection] ***************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [Installing packages] **************************************************************************************************************************************************
fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []}
PLAY RECAP ******************************************************************************************************************************************************************
xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
inventory:
ansible-inventory --list | jq '.master'
{
  "hosts": [
    "xx.xxx.13.105"
  ]
}
I have already copied my id_rsa.pub to the host, so I can now log in to the host without a password.
I can log in and do sudo su or run any other command that needs root privilege.
[tco@control-plane-0 ~]$ whoami
tco
[tco@control-plane-0 ~]$ hostname -I
xx.xxx.13.105 192.168.122.1
[tco@control-plane-0 ~]$ sudo su
[sudo] password for tco:
[root@control-plane-0 tco]#
I explicitly override the user and become method on the ansible-playbook command line; I have no idea what I am doing wrong here.
Thanks in advance.
Fixed it.
But, I need to understand the Ansible concept better.
I changed ansible.cfg to this (changed become_user to root):
[defaults]
inventory = <my-inventory-path>
remote_user = tco
[privilege_escalation]
become=True
become_method=sudo
become_ask_pass=False
become_user=root
become_pass=<password>
And, running it like this:
ansible-playbook my-playbook.yml --limit master
this gives me an error:
FAILED! => {"msg": "Missing sudo password"}
So, I run like this:
ansible-playbook my-playbook.yml --limit master --ask-become-pass
and when the password is prompted I provide the tco password (I'm not sure what the root user's password is).
And this works.
I'm not sure why the password in the cfg file is not working, even though I provide the same password when prompted.
As per my understanding, when I set become_user and become_pass, those are what Ansible uses to run privileged commands. But here I am setting remote_user: tco and become_user: root.
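A note on the password part: as far as I can tell, become_pass is not a key the [privilege_escalation] section of ansible.cfg actually reads, which would explain why it is silently ignored there. The become password is normally supplied either with --ask-become-pass or as a variable such as ansible_become_password in the inventory (ideally vaulted). Also, sudo asks for the password of the connecting user, which is why entering tco's password at the BECOME prompt works even though become_user is root. A minimal inventory sketch along those lines (placeholder password shown for illustration only):
[master]
xx.xxx.13.105 ansible_user=tco ansible_become=yes ansible_become_method=sudo ansible_become_user=root ansible_become_password=<tco-sudo-password>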
This error is because the playbook is unable to run on the remote nodes as a root user.
We can modify the ansible.cfg file in order to resolve this issue.
Steps:
Open ansible.cfg; by default: sudo vim /etc/ansible/ansible.cfg
Search for the section: /privilege_escalation (in vim, / searches for the keyword that follows)
Press i (insert mode) and uncomment the following lines:
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
Note: If the .cfg lines are not modified, the corresponding values can be passed on the command line during playbook execution (as specified in the question itself).
Run the playbook again with: ansible-playbook playbookname.yml
Note: if there is no SSH key authentication pre-established between the Ansible controller and the nodes, appending -kK is required, i.e. ansible-playbook playbookname.yml -kK
This will prompt you for the SSH connection password as well as the BECOME password (the password for the privileged user on the node).

`remote_user` is ignored in playbooks and roles

I have defined the following in my ansible.cfg
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
remote_user = ansible
However I have a playbook bootstrap.yaml where I connect with root rather than ansible
---
- hosts: "{{ target }}"
  become: no
  gather_facts: false
  remote_user: root
  vars:
    os_family: "{{ osfamily }}}"
  roles:
    - role: papanito.bootstrap
However it seems that remote_user: root is ignored as I always get a connection error, because it uses the user ansible instead of root for the ssh connection
fatal: [node001]: UNREACHABLE! => {"changed": false,
"msg": "Failed to connect to the host via ssh:
ansible@node001: Permission denied (publickey,password).",
"unreachable": true}
The only workaround for this I could find is calling the playbook with -e ansible_user=root. But this is not convenient as I want to call multiple playbooks with the site.yaml, where the first playbook has to run with ansible_user root, whereas the others have to run with ansible
- import_playbook: playbooks/bootstrap.yml
- import_playbook: playbooks/networking.yml
- import_playbook: playbooks/monitoring.yml
Any suggestions what I am missing or how to fix it?
Q: "remote_user: root is ignored"
A: The playbook works as expected
- hosts: test_01
  gather_facts: false
  become: no
  remote_user: root
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
gives
"result.stdout": "root"
But, the variable can be overridden in the inventory. For example with the inventory
$ cat hosts
all:
  hosts:
    test_01:
  vars:
    ansible_connection: ssh
    ansible_user: admin
the result is
"result.stdout": "admin"
Double-check the inventory with the command
$ ansible-inventory --list
Notes
It might be also necessary to double-check the role - role: papanito.bootstrap
See Controlling how Ansible behaves: precedence rules
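If the goal is to keep the bootstrap play connecting as root while the other playbooks connect as ansible, one option is to scope the override to an inventory group instead of passing -e on every run. A sketch in the YAML inventory format, with the group name bootstrap_targets being an assumption:
all:
  vars:
    ansible_user: ansible
  children:
    bootstrap_targets:
      hosts:
        node001:
      vars:
        ansible_user: root
Since a child group's vars take precedence over vars set on all, hosts in bootstrap_targets connect as root and everything else connects as ansible.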
I faced a similar issue, where an EC2 instance required a different username to SSH with. You could try the example below:
- import_playbook: playbooks/bootstrap.yml
  vars:
    ansible_ssh_user: root
Try this:
Instead of remote_user: root, use remote_user: ansible and additionally become: yes, become_user: root, and become_method: sudo (or su).
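A minimal sketch of that suggestion as a play, reusing the role from the question (it assumes the ansible user already exists on the target and has sudo rights there):
---
- hosts: "{{ target }}"
  gather_facts: false
  remote_user: ansible
  become: yes
  become_user: root
  become_method: sudo
  roles:
    - role: papanito.bootstrap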

Simple ansible example that connects to new server as root with password

I want to provision a new VPS. The way this is typically done: 1) try to log in manually as a non-root user, and 2) if that fails, perform the provisioning.
But I can't connect. I can't even log in as root. (I can ssh from the shell, so the password is correct.)
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no

  tasks:
    - name: set root password
      set_fact: ansible_password={{ ROOT_PASSWORD }}

    - name: try login with password
      local_action: "command ssh -q -o BatchMode=yes -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'"
      ignore_errors: true
      changed_when: false

    # more stuff here...
I tried the following, but none of them connect:
I stored the password in a variable like above
I prompted for the password using ansible-playbook -k playbook.yml
I moved the password to the inventory file
[server]
42.42.42.42 ansible_user=root ansible_password=foo
I added the ssh flag -o PreferredAuthentications=password to force password auth
But none of the above connects. I always get the error
root@42.42.42.42: Permission denied (publickey,password).
If I remove -o BatchMode=yes then it prompts me for a password and does connect. But that prevents automation; the idea is to do this without user intervention.
What am I doing wrong?
This is a new vps, nothing is set up yet - so I'm looking for the simplest possible example of a playbook that connects using root and a password.
You're close. The variable is ansible_ssh_password, not ansible_ssh_pass. The variables with _ssh in the name are legacy names, so you can just use ansible_user and ansible_password instead.
If I have an inventory like this:
[server]
example ansible_host=192.168.122.148 ansible_user=root ansible_password=secret
Then I can run this command successfully:
$ ansible all -i hosts -m ping
example | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
If the above ad-hoc command works correctly, then a playbook should work correctly as well. E.g., still assuming the above inventory, I can use the following playbook:
---
- hosts: all
  gather_facts: false
  tasks:
    - ping:
And I can call it like this:
$ ansible-playbook playbook.yml -i hosts
PLAY [all] ***************************************************************************
TASK [ping] **************************************************************************
ok: [example]
PLAY RECAP ***************************************************************************
example : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
...and it all works just fine.
Try using --ask-become-pass
ansible-playbook -k playbook.yml --ask-become-pass
That way it's not hardcoded.
Also, inside the playbook you can invoke:
---
- hosts: all
  become: true
  gather_facts: no
All SO answers and blog articles I've seen so far recommend doing it the way I've shown.
But after spending much time on this, I don't believe it can work that way, so I don't understand why it is always recommended. I noticed that Ansible has changed its API many times, and maybe that approach is simply outdated.
So I came up with an alternative, using sshpass:
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no

  tasks:
    - name: try login with password (using out-of-band ssh connection)
      local_action: command sshpass -p {{ ROOT_PASSWORD }} ssh -q -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'
      ignore_errors: true
      register: exists_user

    - name: ping if above succeeded (using in-band ssh connection)
      remote_user: root
      block:
        - name: set root ssh password
          set_fact:
            ansible_password: "{{ ROOT_PASSWORD }}"
        - name: ping
          ping:
            data: pong
      when: exists_user is success
This is just a tiny proof of concept.
The actual use case is to try connect with a non-root user, and if that fails, to provision the server. The above is the starting point for such a playbook.
Unlike @larsks' excellent alternative, this does not assume Python is installed on the remote host, and it performs the SSH connection test out of band, assisted by sshpass.
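To sketch the actual use case on top of this (provision as root only when the non-root login fails), something along these lines could work. The non-root check reuses the key-based BatchMode probe from the question, and the root part reuses the in-band password trick from above; the deploy user name and the single provisioning task are assumptions, not tested code:
---
- hosts: all
  gather_facts: no
  vars:
    ROOT_PASSWORD: foo

  tasks:
    - name: check whether the non-root user can already log in with a key
      local_action: command ssh -q -o BatchMode=yes -o ConnectTimeout=3 deploy@{{ inventory_hostname }} 'echo ok'
      ignore_errors: true
      changed_when: false
      register: deploy_login

    - name: provision as root only if that login failed
      remote_user: root
      when: deploy_login is failed
      block:
        - name: set root ssh password
          set_fact:
            ansible_password: "{{ ROOT_PASSWORD }}"
        - name: create the deploy user
          user:
            name: deploy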

Ansible systemctl --user for another user

I'm logging into another computer as the unprivileged user ansible and I'd like to have ansible enable a systemd user service for the user bob.
Without ansible, the solution would be ssh bob@machine followed by systemctl --user enable service.
However, with ansible, there are two problems:
Newer versions of ansible will refuse to become the unprivileged user bob if already logged in as another unprivileged user (ansible).
Even if this worked, dbus would not be started and systemctl would not be able to talk to systemd (if I understood this correctly).
A horribly ugly workaround would be to execute the shell command, have the remote host ssh into itself as bob and run the systemctl raw command there.
Is there a nicer way to get this done?
Both options are feasible.
remote_user: bob
- hosts: test_01
  become: no
  remote_user: admin
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
    - command: whoami
      remote_user: bob
      register: result
    - debug:
        var: result.stdout
gives (abridged)
result.stdout: admin
result.stdout: bob
pipelining = true. Quoting from Becoming an Unprivileged User:
Use pipelining. When pipelining is enabled, Ansible doesn’t save the module to a temporary file on the client. Instead, it pipes the module to the remote python interpreter’s stdin. Pipelining does not work for python modules involving file transfer (for example: copy, fetch, template), or for non-python modules.
- hosts: test_01
  become: no
  remote_user: admin
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
    - command: whoami
      become_user: bob
      become_method: sudo
      become: yes
      register: result
    - debug:
        var: result.stdout
gives (abridged) the same result
result.stdout: admin
result.stdout: bob
with
shell> grep pipe ansible.cfg
pipelining = true
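Putting this together for the original question is only a sketch on my part, not tested code: it assumes pipelining is enabled as above, that admin can sudo to root and to bob, and the unit name myservice is hypothetical. Enabling lingering keeps bob's user manager (and its dbus socket) running without an interactive login, and XDG_RUNTIME_DIR lets systemctl --user find it:
- hosts: test_01
  become: no
  remote_user: admin
  tasks:
    - name: enable lingering so bob's user manager runs without a login (needs root once)
      command: loginctl enable-linger bob
      args:
        creates: /var/lib/systemd/linger/bob
      become: yes
      become_user: root

    - name: look up bob's uid for the runtime directory
      command: id -u bob
      register: bob_uid
      changed_when: false

    - name: enable bob's user service
      systemd:
        name: myservice        # hypothetical unit name
        scope: user
        enabled: yes
      become: yes
      become_user: bob
      become_method: sudo
      environment:
        XDG_RUNTIME_DIR: "/run/user/{{ bob_uid.stdout }}"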

Using --become for ansible_connection=local

With a personal user account (userx) I run the ansible playbook on all my specified hosts. In ansible.cfg the remote user (which can become root) to be used is:
remote_user = ansible
For the remote hosts this all works fine. It connects as the user ansible and executes all tasks as desired, including changing configuration (like /etc/ssh/sshd_config) that requires root rights.
But now I also want to execute the playbook on the Ansible host itself. I put the following in my inventory file:
localhost ansible_connection=local
which now indeed executes on localhost, but as userx, and this results in "Access denied" for some tasks it needs to do.
This is of course somewhat expected, since remote_user says something about the remote user, not the local one. But still, I expected that the playbook would --become locally too, to execute the tasks as root (e.g. via sudo). It seems not to do that.
Running the playbook with --become -vvv tells me
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: userx
and it seems not to try to execute the tasks with sudo. And without using sudo, the task fails.
How can I tell ansible to to use sudo / become on the local connection too?
Nothing special is required. Proof:
The playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout
The execution line:
ansible-playbook playbook.yml --become
The result:
PLAY [localhost] ***************************************************************************************************
TASK [command] *****************************************************************************************************
changed: [localhost]
TASK [debug] *******************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "whoami.stdout": "root"
}
PLAY RECAP *********************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
The ESTABLISH LOCAL CONNECTION FOR USER: message will always show the current user, as that is the account used "to connect".
Later the command(s) called from the module get(s) executed with elevated permissions.
Of course, you can add become: yes on either play level or for individual tasks.
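For completeness, the same proof with escalation declared in the play instead of on the command line; a minimal sketch, and the connecting user still needs sudo rights locally:
---
- hosts: localhost
  gather_facts: no
  connection: local
  become: yes
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout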
