I'm having an issue where the Ansible service module is failing due to a sudo password issue:
fatal: [192.168.1.10]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 192.168.1.10 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "rc": 1}
to retry, use: --limit @/Volumes/HD/Users/user/Ansible/playbooks/stop-homeassistant.retry
My playbook has just one task, to stop the service. It looks like:
---
- hosts: 192.168.1.10
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Or, in the case of systemd:
systemd: state=stopped name=home-assistant@homeassistant enabled=yes
I'm running the playbook like so:
ansible-playbook -u homeassistant playbooks/stop-homeassistant.yml
However, passwordless sudo is set up for that user on that box (in /etc/sudoers.d):
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl restart home-assistant@homeassistant
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl stop home-assistant@homeassistant
If I ssh into that box as homeassistant, and I run:
sudo systemctl stop home-assistant@homeassistant
The home-assistant@homeassistant service will stop cleanly without asking for a sudo password.
Any idea why the systemctl command would run perfectly as the user on the box, but then fail in the service/systemd module?
Try configuring passwordless sudo on your target machines:
homeassistant ALL=NOPASSWD: ALL
Whitelisting specific commands with a NOPASSWD flag in /etc/sudoers does not work with Ansible: Ansible does not run /bin/systemctl directly, it copies a Python module wrapper to the target and executes that via sudo, so a per-command rule never matches.
Details here: https://github.com/ansible/ansible/issues/5712
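Alternatively, if you want to keep the narrow sudoers rules, one workaround (a sketch based on the host and service name from the question, not the service/systemd module approach) is to invoke the exact whitelisted command with the command module, so the NOPASSWD rule matches:
- hosts: 192.168.1.10
  tasks:
    - name: Stop Homeassistant via the whitelisted command
      # runs the literal command allowed by the sudoers rule, so become is not needed
      command: sudo /bin/systemctl stop home-assistant@homeassistant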
Ok, please modify your playbook as below:
---
- hosts: 192.168.1.10
  remote_user: homeassistant
  become: true
  become_method: sudo
  become_user: root
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Now,
run it as ansible-playbook <playbook-name>.
If the above command still fails because a sudo password is required, run it as
ansible-playbook playbook.yml --user=<username> --extra-vars "ansible_sudo_pass=<yourPassword>"
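For what it's worth, on newer Ansible releases ansible_sudo_pass has been superseded by ansible_become_pass, and you can also simply have Ansible prompt for the password instead of passing it on the command line, e.g.:
ansible-playbook playbooks/stop-homeassistant.yml -u homeassistant --ask-become-pass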
Related
I am new to Ansible and I'm trying to write my first Ansible playbook to enable root login via ssh on two remote Ubuntu servers.
By default, ssh to the two remote Ubuntu servers as root is disabled. In order to enable root login via ssh, I normally do this:
#ssh to server01 as an admin user
ssh admin@server01
#set PermitRootLogin yes
sudo vim /etc/ssh/sshd_config
# Restart the SSH server
service sshd restart
Now I'd like to do this via Ansible playbook.
This is my playbook
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Enable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: "PermitRootLogin yes"
        state: present
        backup: yes
      notify:
        - restart ssh
  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted
I run the playbook as the admin user which was created on these two remote servers:
ansible-playbook enable-root-login.yml -u admin --ask-pass
Unfortunately, the playbook fails with permission denied:
fatal: [server01]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "Could not make backup of /etc/ssh/sshd_config to /etc/ssh/sshd_config.2569989.2021-07-16@06:33:33~: [Errno 13] Permission denied: '/etc/ssh/sshd_config.2569989.2021-07-16@06:33:33~'"}
Can anyone please advise what is wrong with my playbook?
Thanks
When you edit the sshd_config file by hand you use sudo, so you need to tell the task that it must be executed as another user. Set the keyword become: yes; by default become_user will be root and become_method will be sudo, and you can also specify the become password.
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Enable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: "PermitRootLogin yes"
        state: present
        backup: yes
      become: yes
      notify:
        - restart ssh
  handlers:
    - name: restart ssh
      # the handler needs privilege escalation too in order to restart sshd
      become: yes
      service:
        name: sshd
        state: restarted
Documentation:
https://docs.ansible.com/ansible/latest/user_guide/become.html#using-become
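Note that with become: yes Ansible still needs the admin user's sudo password unless passwordless sudo is configured; assuming the same invocation as in the question, you can have Ansible prompt for it:
ansible-playbook enable-root-login.yml -u admin --ask-pass --ask-become-pass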
I am new to Ansible and I'm trying to push playbooks to my nodes. I would like to push via SSH keys. Here is my playbook:
- name: nginx install and start services
  hosts: <ip>
  vars:
    ansible_ssh_private_key_file: "/path/to/.ssh/id_ed25519"
  become: true
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: latest
    - name: start service nginx
      service:
        name: nginx
        state: started
Here is my inventory:
<ip> ansible_ssh_private_key_file=/path/to/.ssh/id_ed25519
Before I push, I check if it works: ansible-playbook -i /home/myuser/.ansible/hosts nginx.yaml --check
It gives me:
fatal: [ip]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: user@ip: Permission denied (publickey,password).", "unreachable": true}
On that server I don't have root privileges and I can't use sudo; that's why I use my own inventory in my home directory. I can open an SSH connection to the target node where I want to push the nginx playbook and log in. The public key is on the remote server in /home/user/.ssh/id_ed25519.pub.
What am I missing?
Copy /etc/ansible/ansible.cfg into the directory from which you are running the nginx.yaml playbook, or somewhere else per the documentation: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings-locations
Then edit that file to change this line:
#private_key_file = /path/to/file
to read:
private_key_file = /path/to/.ssh/id_ed25519
Also check the remote_user entry.
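For example, the relevant part of that local ansible.cfg might end up looking like this (a sketch, assuming the key path from the question and that the remote account is called user, as the error message and home directory suggest):
[defaults]
remote_user = user
private_key_file = /path/to/.ssh/id_ed25519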
I want to perform administrative tasks with Ansible in a secure environment.
On the server:
root is not activated
we connect through ssh to a non-sudoer account (public/private key; I usually use ssh-agent so I don't have to type the passphrase every time)
we change to a user which belongs to the sudo group
then we perform the administrative tasks
Here is the command I execute:
ansible-playbook install_update.yaml -K
The playbook:
---
- hosts: server
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest
    - name: update
      become: yes
      become_user: admin_account
      become_method: su
      apt:
        name: "*"
        state: latest
The hosts file:
[server]
192.168.1.50 ansible_user=ssh_account
But this doesn't allow me to do the tasks; for this particular playbook, it raises this error:
fatal: [192.168.1.50]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-get upgrade --with-new-pkgs ' failed: E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?\n", "rc": 100, "stdout": "", "stdout_lines": []}
which suggests there is a privilege issue...
I would be really glad if someone had an idea!
Best regards
PS: I have added the NOPASSWD option for this admin account to the sudoers file, and if I run this playbook it works:
---
- hosts: pi
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest
    - name: update
      become: yes
      become_method: su
      become_user: rasp_admin
      shell: bash -c "sudo apt update"
I guess that, having changed user from ssh_account via su, I would like to specify that with the admin_account my commands have to be run with sudo, but I have failed to find the right way to do it... any ideas?
PS: a workaround is to download a shell file and execute it with Ansible, but I find that unsatisfying... any other idea?
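For example, a sketch of the failing upgrade step written the same way as my working workaround above, reusing the su hop and the NOPASSWD sudo rule for the admin account (account names as in the playbooks above, untested):
- name: update
  become: yes
  become_method: su
  become_user: admin_account
  # same pattern as the "sudo apt update" workaround, applied to the upgrade
  shell: bash -c "sudo apt-get upgrade --with-new-pkgs -y"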
I create a VM in the Azure cloud with the following Ansible playbook:
---
- name: azure playbook
  hosts: localhost
  vars_files: ['vars.yaml']
  tasks:
    - name: Create VM with defaults
      azure_rm_virtualmachine:
        resource_group: "{{account_prefix}}_rg"
        vm_size: Standard_D1
        name: "{{account_prefix}}-vm1"
        storage_account_name: "{{account_prefix}}store1"
        network_interface_names: "{{account_prefix}}vm1eth0"
        ssh_password_enabled: false
        admin_username: owen
        ssh_public_keys:
          - { path: /home/owen/.ssh/authorized_keys,
              key_data: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDH0q4pmdkJcc/JPVJui5uWMV12GsJAsDCosfUSSFZfTIx92bb9FC3hx1zU7tD1+Zw3aQW13m6ZS2T ... YnvieSbdD3v }
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.2'
          version: latest
but when running a further script to add another user:
---
- name: create user
  hosts: my-vm1.westeurope.cloudapp.azure.com
  # vars_files: ['vars.yaml']
  remote_user: owen
  tasks:
    - name: Create User
      user:
        name: andrea
        password: $6$rounds=656000$1AspdTb0lfOSc5yM$bAkPgHkuHwap/j6f0P88WxOdjxq3MCRO7/qgufYB.s/4t4k99wwtu/.../
        group: users
        shell: /bin/bash
      become: true
I get "sudo: a password is required" error:
PLAY [create user] *************************************************************
TASK [setup] *******************************************************************
fatal: [my-vm1.westeurope.cloudapp.azure.com]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @8-add-admin-user-to-vm-with-userpswd-already.retry
My inventory looks like this:
my-vm1.westeurope.cloudapp.azure.com ansible_ssh_private_key_file=/home/myuser/.ssh/id_rsa ansible_user=owen ansible_become=true
So how can I give the user sudo privileges so that Ansible 'become' and the like work?
Note that the same result happens when ansible_user and ansible_become are omitted from the inventory file.
EDIT: If I ssh on to the vm as owen (from the box with the ssh private key, that created the vm) then I am able to run sudo visudo -f /etc/sudoers and access that file. So does owen have sudo privileges or not? I'm getting confused now!! Am I misunderstanding the error from the ansible add user script?
EDIT2: I think this question is invalid - as the user does have sudo privileges added manually through the portal. I'm still not sure what's going on but I don't think this question is coherent - or really represents the actual problem I'm trying to solve.
You can either change the sudo config for the user owen with this command:
sudo visudo -f /etc/sudoers
and change the line with user owen to this:
owen ALL=(ALL) NOPASSWD:ALL
then sudo won't require Ansible to enter the password. Or you could instruct Ansible to ask you for the password with the parameter --ask-become-pass like this:
ansible-playbook site.yml --ask-become-pass
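A third option, if you do not want to type the password on every run, is to supply it via the ansible_become_pass variable (ideally stored with Ansible Vault); a sketch, extending the inventory line from the question with a hypothetical placeholder password:
my-vm1.westeurope.cloudapp.azure.com ansible_ssh_private_key_file=/home/myuser/.ssh/id_rsa ansible_user=owen ansible_become=true ansible_become_pass='<owen-sudo-password>'
Keep in mind that because the VM was created with ssh_password_enabled: false and no admin_password, the owen account may not have a password to enter at all, in which case the NOPASSWD route above is the practical one.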
I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
"failed": true,
"msg": ""
}
Shell command in box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  sudo: yes
  command: service ssh restart
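For reference, once on Ansible 2.x the original service-based handler should work again; a rough sketch in the newer become syntax (same play as above, untested here):
- name: restart ssh
  become: yes
  service: name=ssh state=restarted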