Unable to restart iptables from Ansible (Interactive authentication required)

How do I restart the iptables service from Ansible (in order to reload the config file /etc/sysconfig/iptables)?
I have a handler restart iptables defined as:
service: name=iptables enabled=yes state=restarted
But it produces the following error message:
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "failed": true,
"msg": "Failed to stop iptables.service: Interactive authentication
required.\n Failed to start iptables.service: Interactive
authentication required.\n"}
I am working with CentOS Linux release 7.2.1511 (Core).

I was not running my handler command as root. If the handler contains become: yes, then it works fine:
- name: restart iptables
  become: yes
  service: name=iptables enabled=yes state=restarted
Another way of refreshing the iptables configuration, without restarting the service, is:
- name: reload iptables
  become: yes
  shell: iptables-restore < /etc/sysconfig/iptables
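For completeness, here is a minimal sketch of how such a handler might be wired into a play. The template task and the iptables.j2 file name are assumptions for illustration, not from the original question; the point is that become: yes set at the play level is inherited by both tasks and handlers, so the handler no longer needs its own become:
---
- hosts: all
  become: yes            # inherited by all tasks and handlers in this play
  tasks:
    - name: deploy iptables rules        # hypothetical task that changes the config
      template:
        src: iptables.j2                 # assumed template name
        dest: /etc/sysconfig/iptables
      notify: restart iptables

  handlers:
    - name: restart iptables
      service: name=iptables enabled=yes state=restarted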

Related

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}

This is my Ansible playbook, and I'm only running into an issue on the final task, which starts and enables Grafana.
---
- name: Install Grafana
  hosts: hosts
  become: yes
  tasks:
    - name: download apt key
      ansible.builtin.apt_key:
        url: https://packages.grafana.com/gpg.key
        state: present
    - name: Add Grafana repo to sources.list
      ansible.builtin.apt_repository:
        repo: deb https://packages.grafana.com/oss/deb stable main
        filename: grafana
        state: present
    - name: Update apt cache and install Grafana
      ansible.builtin.apt:
        name: grafana
        update_cache: yes
    - name: Ensure Grafana is started and enabled
      ansible.builtin.systemd:
        name: grafana-server
        state: started
        enabled: yes
This is the error I received:
TASK [Ensure Grafana is started and enabled]
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}
This is also the configuration of my hosts file just in case:
[hosts]
localhost
[hosts:vars]
ansible_connection=local
ansible_python_interpreter=/usr/bin/python3
I'm pretty much just trying to have it run these two commands I have in a bash script:
sudo systemctl start grafana-server
sudo systemctl enable grafana-server.service
Got it sorted out: it turns out my system wasn't booted using systemd as its init system, so I changed the Ansible module from ansible.builtin.systemd to ansible.builtin.sysvinit.
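For reference, a minimal sketch of what that final task might look like with the sysvinit module (assuming the same grafana-server service name; this is not from the original post):
    - name: Ensure Grafana is started and enabled
      ansible.builtin.sysvinit:
        name: grafana-server
        state: started
        enabled: yes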

Use Ansible playbook to enable and disable root login

I am new to Ansible and I'm trying to write my first Ansible playbook to enable root login via SSH on two remote Ubuntu servers.
By default, SSH access to the two remote Ubuntu servers as root is disabled. In order to enable root login via SSH, I normally do this:
#ssh to server01 as an admin user
ssh admin@server01
#set PermitRootLogin yes
sudo vim /etc/ssh/sshd_config
# Restart the SSH server
service sshd restart
Now I'd like to do this via Ansible playbook.
This is my playbook
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Enable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: "PermitRootLogin yes"
        state: present
        backup: yes
      notify:
        - restart ssh
  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted
I run the playbook as the admin user that was created on these two remote servers:
ansible-playbook enable-root-login.yml -u admin --ask-pass
Unfortunately, the playbook fails with a permission-denied error:
fatal: [server01]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "Could not make backup of /etc/ssh/sshd_config to /etc/ssh/sshd_config.2569989.2021-07-16@06:33:33~: [Errno 13] Permission denied: '/etc/ssh/sshd_config.2569989.2021-07-16@06:33:33~'"}
Can anyone please advise what is wrong with my playbook?
Thanks
When you edit the sshd_config file by hand you use sudo, so you need to tell the task that it must be executed as another user. Set the keyword become: yes; by default become_user will be root and become_method will be sudo, and you can also specify the become password.
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Enable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: "PermitRootLogin yes"
        state: present
        backup: yes
      become: yes
      notify:
        - restart ssh
  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted
Documentation:
https://docs.ansible.com/ansible/latest/user_guide/become.html#using-become
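One thing to watch out for (an addition on top of the answer above, not part of it): the restart ssh handler also needs privilege escalation, since restarting sshd requires root. A minimal sketch is to set become: yes at the play level so both the task and the handler inherit it:
---
- hosts: all
  gather_facts: no
  become: yes            # both the lineinfile task and the restart ssh handler run via sudo
  tasks:
    - name: Enable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: "PermitRootLogin yes"
        state: present
        backup: yes
      notify:
        - restart ssh
  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted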

Service and systemd module asks for sudo password

I'm having an issue where the Ansible service module is failing due to a sudo password issue:
fatal: [192.168.1.10]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 192.168.1.10 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "rc": 1}
to retry, use: --limit @/Volumes/HD/Users/user/Ansible/playbooks/stop-homeassistant.retry
My playbook has just one task, to stop the service. It looks like:
---
- hosts: 192.168.1.10
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Or, in the case of systemd:
systemd: state=stopped name=home-assistant@homeassistant enabled=yes
I'm running the playbook like so:
ansible-playbook -u homeassistant playbooks/stop-homeassistant.yml
However, passwordless sudo is set up for that user on that box (in /etc/sudoers.d):
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl restart home-assistant@homeassistant
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl stop home-assistant@homeassistant
If I ssh into that box as homeassistant and run:
sudo systemctl stop home-assistant@homeassistant
the home-assistant@homeassistant service stops cleanly without asking for a sudo password.
Any idea why the systemctl command would run perfectly as the user on the box, but then fail in the service/systemd module?
Try configuring passwordless sudo on your target machines:
homeassistant ALL=NOPASSWD: ALL
Configuring specific commands with a NOPASSWD flag in /etc/sudoers does not work with Ansible: Ansible does not invoke /bin/systemctl directly, it runs its own generated module wrapper under sudo, so the command line never matches the whitelisted systemctl entries.
Details here: https://github.com/ansible/ansible/issues/5712
Ok, please modify your playbook as below:
- hosts: 192.168.1.10
  remote_user: homeassistant
  become: true
  become_method: sudo
  become_user: root
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Now run it as ansible-playbook <playbook-name>.
If the above command fails because of the sudo password, run it as:
ansible-playbook playbook.yml --user=<username> --extra-vars "ansible_sudo_pass=<yourPassword>"
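As a sketch of a slightly safer alternative (not part of the original answer), you can let Ansible prompt for the privilege escalation password with --ask-become-pass (-K) instead of putting it on the command line:
ansible-playbook playbook.yml --user=<username> --ask-become-pass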

Why does Ansible keep giving me the error "Could not find the requested service httpd: cannot check nor set state"?

I am doing a dry run of installing the Apache web server on a CentOS 7 box.
This is the webserver.yml file:
--- # Outline to Playbook Translation
- hosts: apacheWeb
  user: aleatoire
  sudo: yes
  gather_facts: no
  tasks:
    - name: date/time stamp for when the playbook starts
      raw: /bin/date > /home/aleatoire/playbook_start.log
    - name: install the apache web server
      yum: pkg=httpd state=latest
    - name: start the web service
      service: name=httpd state=started
    - name: install client software - telnet
      yum: pkg=telnet state=latest
    - name: install client software - lynx
      yum: pkg=lynx state=latest
    - name: log all the packages installed on the system
      raw: yum list installed > /home/aleatoire/installed.log
    - name: date/time stamp for when the playbook ends
      raw: /bin/date > /home/aleatoire/playbook_end.log
When I do a dry run with:
ansible-playbook webserver.yml --check
I keep getting this error:
fatal: [<ip_address>]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service httpd: cannot check nor set state"}
to retry, use: --limit @/home/aleatoire/Outline/webserver.retry
I tried adding ignore_errors: true and that did not work either.
--check is not going to actually install the httpd package if it's not there yet, so the service: call will then fail because there is no httpd unit file installed yet.
You can use the --syntax-check option instead.
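If you still want a full --check run to get past the service task, one option (a sketch, not from the original answer; the ansible_check_mode variable requires Ansible 2.1 or later) is to ignore that task's error only when running in check mode:
    - name: start the web service
      service: name=httpd state=started
      ignore_errors: "{{ ansible_check_mode }}"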

Ansible Service Restart Failed

I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
    "failed": true,
    "msg": ""
}
Shell command in box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart
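If the restart still has to run as root, as it does in the original handler, the workaround presumably needs privilege escalation as well; a minimal sketch in the same 1.9-era syntax:
- name: restart sshd
  sudo: yes
  command: service ssh restart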
