Ansible Service Restart Failed

I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
"failed": true,
"msg": ""
}
Shell command inside the box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
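Playbook
(The original doesn't quote the task; this is a sketch reconstructed from the cmd field in the output below, with the sudo: yes line carried over from the earlier tasks as an assumption.)
- name: Restart SSH server
  sudo: yes
  command: service ssh restart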
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.

As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart
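For reference, once on Ansible 2.x the original service-module handler should work again; a minimal sketch using become, which superseded the old sudo keyword:
- name: restart ssh
  become: yes
  service: name=ssh state=restarted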

Related

How can I use Ansible to push playbooks with SSH key authentication?

I am new to Ansible and am trying to push playbooks to my nodes. I would like to push via SSH keys. Here is my playbook:
- name: nginx install and start services
  hosts: <ip>
  vars:
    ansible_ssh_private_key_file: "/path/to/.ssh/id_ed25519"
  become: true
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: latest
    - name: start service nginx
      service:
        name: nginx
        state: started
Here is my inventory:
<ip> ansible_ssh_private_key_file=/path/to/.ssh/id_ed25519
Before I push, I check whether it works: ansible-playbook -i /home/myuser/.ansible/hosts nginx.yaml --check
It gives me:
fatal: [ip]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: user@ip: Permission denied (publickey,password).", "unreachable": true}
On that server I don't have root privileges and I can't use sudo; that's why I use my own inventory in my home directory. I can open an SSH connection to the target node where I want to push the nginx playbook and log in. The public key is on the remote server in /home/user/.ssh/id_ed25519.pub
What am I missing?
Copy /etc/ansible/ansible.cfg into the directory from which you are running the nginx.yaml playbook, or somewhere else per the documentation: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings-locations
Then edit that file to change this line:
#private_key_file = /path/to/file
to read:
private_key_file = /path/to/.ssh/id_ed25519
Also check the remote_user entry.
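If the failure comes from connecting as the wrong remote account rather than with the wrong key, you can also pin the user per host in the inventory. A minimal sketch, where ansible_user=user stands in for the actual remote account name:
<ip> ansible_user=user ansible_ssh_private_key_file=/path/to/.ssh/id_ed25519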

Usage of async in an Ansible task raises privilege errors

I have been puzzled by the following issue many times.
I am trying to launch a process (here just a silly java -version) using the async feature.
I run ansible-playbook as my own user, which has a remote account with sudo rights on the Docker host. The account under which I'd like to start the command is toto.
So I wrote this:
- name: test escalation
  shell: id ; echo "shell says toto"
  become: true
  become_user: "toto"
  tags:
    - escalation
  vars:
    ansible_ssh_pipelining: true

- name: java escalation
  shell:
    cmd: "/data/tools/java/jdk8u232-b09/bin/java -version &"
  async: 10
  # Don't wait
  poll: 0
  become: true
  become_user: "toto"
  tags:
    - escalation
  vars:
    ansible_ssh_pipelining: true
If I run this, I get:
TASK [java escalation] ************************************************************************************************************
fatal: [main]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of '/var/tmp/ansible-tmp-1587484730.23-27264-164045960304097/': Operation not permitted\nchown: changing ownership of '/var/tmp/ansible-tmp-1587484730.23-27264-164045960304097/AnsiballZ_command.py': Operation not permitted\nchown: changing ownership of '/var/tmp/ansible-tmp-1587484730.23-27264-164045960304097/async_wrapper.py': Operation not permitted\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
Has anybody had the same issue?
ansible --version
ansible 2.9.7
If I do not use the async feature (poll can then take any value):
- name: java escalation
  shell:
    cmd: "/data/tools/java/jdk8u232-b09/bin/java -version &"
  # async: 10
  # Don't wait
  poll: 0
  become: true
  become_user: "toto"
  tags:
    - escalation
  vars:
    ansible_ssh_pipelining: true
It works fine
TASK [ java escalation] ************************************************************************************************************
changed: [main] => {"changed": true, "cmd": "/data/tools/java/jdk8u232-b09/bin/java -version &", "delta": "0:00:00.034427", "end": "2020-04-21 15:59:46.402081", "rc": 0, "start": "2020-04-21 15:59:46.367654", "stderr": "openjdk version \"1.8.0_232\"\nOpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09)\nOpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode)", "stderr_lines": ["openjdk version \"1.8.0_232\"", "OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09)", "OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode)"], "stdout": "", "stdout_lines": []}
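One workaround described in the documentation linked from the error message is to install the acl package on the target, so Ansible can use setfacl to hand its temporary files over to the unprivileged become_user. A minimal sketch, assuming a package manager the generic package module supports:
- name: install acl so Ansible can hand temp files to an unprivileged become_user
  become: true  # escalate to root here, not toto; the connecting user is a sudoer
  package:
    name: acl
    state: present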

Ansible error executing pm2 startup command

When executing the Ansible playbook with the command ansible-playbook 2_installJsReport.yml
CentOS 7.6
Ansible 2.7.10
I get an error saying:
TASK [make jsreport start at system restart] *****************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["pm2", "startup"], "delta": "0:00:00.601130", "end": "2019-04-24 12:59:33.091819", "msg": "non-zero return code", "rc": 1, "start": "2019-04-24 12:59:32.490689", "stderr": "", "stderr_lines": [], "stdout": "[PM2] Init System found: systemd\n[PM2] To setup the Startup Script, copy/paste the following command:\nsudo env PATH=$PATH:/home/username/.nvm/versions/node/v8.11.3/bin /home/username/.nvm/versions/node/v8.11.3/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username", "stdout_lines": ["[PM2] Init System found: systemd", "[PM2] To setup the Startup Script, copy/paste the following command:", "sudo env PATH=$PATH:/home/username/.nvm/versions/node/v8.11.3/bin /home/username/.nvm/versions/node/v8.11.3/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username"]}
Ansible script
---
- hosts: localhost
  tasks:
    - name: make jsreport start at system restart
      command: pm2 startup
The "error" message contains instructions you are supposed to follow to configure the startup:
[PM2] Init System found: systemd
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/home/username/.nvm/versions/node/v8.11.3/bin /home/username/.nvm/versions/node/v8.11.3/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username
If you follow those instructions, it suggests that you should replace your task with something like:
---
- hosts: localhost
  tasks:
    - name: make jsreport start at system restart
      become: true
      command: pm2 startup systemd -u username --hp /home/username
      environment:
        PATH: "{{ ansible_env.PATH }}"

Service and systemd modules ask for sudo password

I'm having an issue where the Ansible service module is failing due to a sudo password issue:
fatal: [192.168.1.10]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 192.168.1.10 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "rc": 1}
to retry, use: --limit #/Volumes/HD/Users/user/Ansible/playbooks/stop-homeassistant.retry
My playbook has just one task, to stop the service. It looks like:
---
- hosts: 192.168.1.10
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Or, in the case of systemd:
systemd: state=stopped name=home-assistant@homeassistant enabled=yes
I'm running the playbook like so:
ansible-playbook -u homeassistant playbooks/stop-homeassistant.yml
However, passwordless sudo is set up for that user on that box (in /etc/sudoers.d):
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl restart home-assistant@homeassistant
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl stop home-assistant@homeassistant
If I SSH into that box as homeassistant and run:
sudo systemctl stop home-assistant@homeassistant
the home-assistant@homeassistant service stops cleanly without asking for a sudo password.
Any idea why the systemctl command would run perfectly as the user on the box, but then fail in the service/systemd module?
Try configuring passwordless sudo for all commands on your target machines:
homeassistant ALL=NOPASSWD: ALL
Configuring specific commands with a NOPASSWD flag in /etc/sudoers does not work with Ansible, because Ansible does not invoke /bin/systemctl directly: it copies a generated Python module to the target and runs that through sudo, so the command line never matches the per-command sudoers entry.
Details here: https://github.com/ansible/ansible/issues/5712
Ok, please modify your playbook as below:
- hosts: 192.168.1.10
  remote_user: home-assistant
  become: true
  become_method: sudo
  become_user: root
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Now, run it as ansible-playbook <playbook-name>.
If the above command fails due to a password, run:
ansible-playbook playbook.yml --user=<username> --extra-vars "ansible_sudo_pass=<yourPassword>"
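If you would rather not leave the sudo password on the command line and in your shell history, a variant is to let Ansible prompt for it with --ask-become-pass (-K):
ansible-playbook playbook.yml --user=<username> --ask-become-pass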

Unable to restart iptables from Ansible (Interactive authentication required)

How do I restart the iptables service from Ansible (in order to reload the config file /etc/sysconfig/iptables)?
I have a handler restart iptables defined as:
service: name=iptables enabled=yes state=restarted
But it produces the following error message:
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to stop iptables.service: Interactive authentication required.\nFailed to start iptables.service: Interactive authentication required.\n"}
I am working with CentOS Linux release 7.2.1511 (Core)
I was not running my handler command as root. If the handler contains become: yes, it works fine:
- name: restart iptables
  become: yes
  service: name=iptables enabled=yes state=restarted
Another way of refreshing the iptables configuration, without restarting the service, is:
- name: reload iptables
  become: yes
  shell: iptables-restore < /etc/sysconfig/iptables
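For context, a handler only fires when notified, so a task along these lines would trigger it; the source file name here is illustrative:
- name: deploy iptables rules
  become: yes
  copy: src=iptables.rules dest=/etc/sysconfig/iptables
  notify: restart iptables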
