This task (name) has extra params, which is only allowed - ansible

I'm having an issue with ansible.builtin.shell and ansible.builtin.command. I'm probably not using them right, but my usage matches the docs examples.
Ansible version: 2.10.3
In roles/rabbitmq/tasks/main.yml:
---
# tasks file for rabbitmq
# If not shut down cleanly, the following will fix:
# systemctl stop rabbitmq-server
- name: Stop RabbitMQ service
  ansible.builtin.service:
    name: rabbitmq-server
    state: stopped
  become: yes

# rabbitmqctl force_boot
# https://www.rabbitmq.com/rabbitmqctl.8.html
# force_boot Ensures that the node will start next time, even if it was not the last to shut down.
- name: Force RabbitMQ to boot anyway
  ansible.builtin.shell: /usr/sbin/rabbitmqctl force_boot

# systemctl start rabbitmq-server
- name: Start RabbitMQ service
  ansible.builtin.service:
    name: rabbitmq-server
    state: started
  become: yes
Resulting in the following error:
ERROR! this task 'ansible.builtin.shell' has extra params, which is only allowed in the following modules: shell, command, ansible.windows.win_shell, ...
The error appears to be in '.../roles/rabbitmq/tasks/main.yml': line 15, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
# force_boot Ensures that the node will start next time, even if it was not the last to shut down.
- name: Force RabbitMQ to boot anyway
^ here
I've tried ansible.builtin.command, both with and without the cmd: parameter.
What don't I understand about the usage?

Try this:
- name: Force RabbitMQ to boot anyway
  command: "/usr/sbin/rabbitmqctl force_boot"
  register: result
  ignore_errors: True
I basically took out the ansible.builtin. prefix. It works for me.
register captures the output into a variable named result.
ignore_errors is useful so that if an error occurs Ansible will not stop.
You can output that variable with:
- debug: var=result
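Putting it together, a minimal sketch (assuming the same rabbitmqctl path as in the question) that runs the command, registers the result, and prints the captured return code and output:
- name: Force RabbitMQ to boot anyway
  command: "/usr/sbin/rabbitmqctl force_boot"
  register: result
  ignore_errors: true

- name: Show what force_boot returned
  debug:
    msg: "rc={{ result.rc }} stdout={{ result.stdout }} stderr={{ result.stderr }}"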

Update your Ansible to the latest version; it's a bug: https://github.com/ansible/ansible/pull/71824

Try putting "" around your command, i.e.
ansible.builtin.shell: "/usr/sbin/rabbitmqctl force_boot"

Related

handlers main.yml showing error handlers/main.yml file must contain list of tasks

I am building a role; in handlers/main.yml I specified some handler jobs, but I was unable to execute them. This is the error message:
ERROR! The handlers/main.yml file for role 'sample-mysql' must contain
a list of tasks
The error appears to be in '/home/automation/plays/roles/sample-mysql/handlers/main.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
handlers:
^ here
I have made some changes but it is still not working. I also want this file to load handlers from another file. Is it possible to do so, e.g. with an - include: directive? Here is my handlers/main.yml:
---
handlers:
  - name: "Start mysql"
    service:
      enabled: true
      name: mysqld
      state: started
  - name: "Start firewalld"
    service:
      enabled: true
      name: firewalld
      state: started
In your role, there is a file: handlers/main.yml.
Edit that file and add this task:
---
- name: a task
  shell: echo
Save it.
The handler file location is /repo_name/roles/sample-mysql/handlers/main.yaml
Inside that main.yaml add the below contents:
---
- name: Start mysql
  service:
    name: mysqld
    state: started
    enabled: true
To call this handler, include the below line in your tasks file, /repo_name/roles/sample-mysql/tasks/main.yaml:
notify: "Start mysql"
You cannot simply execute a handler. Inside the sample-mysql folder, there should be folders called tasks and handlers.
If you use a separate file like handlers/main.yml, you must omit the keyword "handlers:". You only need that keyword if you put everything in a single playbook file.
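To make that concrete, here is a minimal sketch of the two files; the yum install task is only an illustrative placeholder, while the paths follow the role layout from the question:
# roles/sample-mysql/tasks/main.yml
---
- name: Install mysql            # illustrative placeholder task
  yum:
    name: mysql-server
    state: present
  notify: "Start mysql"

# roles/sample-mysql/handlers/main.yml  (note: no "handlers:" keyword)
---
- name: "Start mysql"
  service:
    name: mysqld
    state: started
    enabled: true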

Issue with handler restarting service

I have an issue where I was handed a gaming service from a dev group in Bulgaria that was coded and built last fall. The only documentation I have for the platform is a video showing one of the devs running the playbooks to build the env; the video doesn't show which version of Ansible is being used. I am trying to run the playbooks on Ubuntu 16.04 with Ansible 2.0.0.2. I am trying to execute the following:
- hosts: nler-example-nxx
  user: ubuntu
  tasks:
    - include: ../../templates/nler-nxx.yml
    - name: change nodes ip in nxx config
      replace:
        path: /www/nler-nxx/conf/nxx.properties
        regexp: '(\s+)nxx.defaultPeers=(.*)$'
        replace: '\1nxt.defaultPeers=52.202.223.114; 35.173.242.207; 34.238.136.137; 54.164.46.62; 54.86.17.225;'
      notify:
        - restart nxx service
      handlers:
        - name: restart nxx service
          become: yes
          systemd: name=nxx state=restarted
And get this error:
ERROR! no action detected in task
The error appears to have been in '/etc/ansible/playbook/niler/example/nxx.yml': line 15, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
handlers:
- name: restart nxx service
^ here
From doing some research I am thinking this is a version conflict issue, but I have no way to confirm what version they were using.
Indentation is wrong. Should be:
tasks:
  - include: ../../templates/nler-nxx.yml
    ...

handlers:
  - name: restart nxx service
    ...
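Putting the whole play back together with that indentation (a sketch only; the include path and module arguments are copied unchanged from the question):
- hosts: nler-example-nxx
  user: ubuntu
  tasks:
    - include: ../../templates/nler-nxx.yml
    - name: change nodes ip in nxx config
      replace:
        path: /www/nler-nxx/conf/nxx.properties
        regexp: '(\s+)nxx.defaultPeers=(.*)$'
        replace: '\1nxt.defaultPeers=52.202.223.114; 35.173.242.207; 34.238.136.137; 54.164.46.62; 54.86.17.225;'
      notify:
        - restart nxx service
  handlers:
    - name: restart nxx service
      become: yes
      systemd: name=nxx state=restarted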

ansible: how to restart auditd service on centos 7 get error about dependency

In my playbook, I have a task to update audit.rules and then notify a handler which should restart the auditd service.
task:
- name: 6.6.7 - audit rules configuration
  template: src=X/ansible/templates/auditd_rules.j2
            dest=/etc/audit/rules.d/audit.rules
            backup=yes
            owner=root group=root mode=0640
  notify:
    - restart auditd
handlers:
  - name: restart auditd
    service: name=auditd state=restarted
When the playbook runs, the audit rules are updated and a request is made to restart auditd but this fails as below.
RUNNING HANDLER [restart auditd] ***********************************************
fatal: [ipX-southeast-2.compute.internal]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to restart service auditd: Failed to restart auditd.service: Operation refused, unit auditd.service may be requested by dependency only.\n"}
When I look at the unit definition for auditd, I can see RefuseManualStop=yes. Is this why I can't restart the service? How does one overcome this to pick up the new audit rules?
systemctl cat auditd.service
# /usr/lib/systemd/system/auditd.service
[Unit]
Description=Security Auditing Service
DefaultDependencies=no
After=local-fs.target systemd-tmpfiles-setup.service
Conflicts=shutdown.target
Before=sysinit.target shutdown.target
RefuseManualStop=yes
ConditionKernelCommandLine=!audit=0
Documentation=man:auditd(8) https://people.redhat.com/sgrubb/audit/
[Service]
ExecStart=/sbin/auditd -n
## To not use augenrules, copy this file to /etc/systemd/system/auditd.service
## and comment/delete the next line and uncomment the auditctl line.
## NOTE: augenrules expect any rules to be added to /etc/audit/rules.d/
ExecStartPost=-/sbin/augenrules --load
#ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules
ExecReload=/bin/kill -HUP $MAINPID
# By default we don't clear the rules on exit. To enable this, uncomment
# the next line after copying the file to /etc/systemd/system/auditd.service
#ExecStopPost=/sbin/auditctl -R /etc/audit/audit-stop.rules
[Install]
WantedBy=multi-user.target
This has been explored, discussed, and resolved (mostly) in the Red Hat Bugzilla #1026648 and Ansible Issue #22171 (GitHub) reports.
Resolution
Use the ansible service module parameter use=service to force execution of the /sbin/service utility instead of the gathered-fact value of systemd (which invokes /sbin/systemctl) like this:
- service: name=auditd state=restarted use=service
Example playbook (pastebin.com)
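A minimal sketch of how that can be wired into the play from the question; only the use=service parameter is new, everything else mirrors the original task and handler:
tasks:
  - name: 6.6.7 - audit rules configuration
    template: src=X/ansible/templates/auditd_rules.j2
              dest=/etc/audit/rules.d/audit.rules
              backup=yes
              owner=root group=root mode=0640
    notify:
      - restart auditd

handlers:
  - name: restart auditd
    service: name=auditd state=restarted use=service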
Workaround:
Use the ansible command module to explicitly run the service executable like this:
- command: /sbin/service auditd restart
Analysis - root cause:
This is an issue created by upstream packaging of auditd.service unit. It will not start/stop/restart when acted upon by systemctl, apparently by design.
It is further compounded by the Ansible service control function, which uses the preferred method identified when system facts are gathered and "ansible_service_mgr" returns "systemd". This is regardless of the actual module used to manage the service.unit.
RHEL dev team may fix if considered a problem in upcoming updates (ERRATA)
Ansible dev team has offered a workaround and (as of 2.2) updated the service module with the use parameter.
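To confirm which service manager Ansible has detected on a host (and therefore which backend the service module will use by default), a quick sketch using the gathered fact:
- name: Show detected service manager
  debug:
    var: ansible_service_mgr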
Maybe it is quite late for an answer, but if anyone else is facing the same problem, you can import the new rules for auditd with this command:
auditctl -R /path/to_your_rules_file
So there is no need to restart auditd.service to import new rules.
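Expressed as an Ansible handler, that could look roughly like this; the auditctl path and the rules file location are assumptions based on the question:
handlers:
  - name: reload audit rules
    command: /sbin/auditctl -R /etc/audit/rules.d/audit.rules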
I would validate that the auditd service reloads correctly, as even using the command module with the command you specified will not work or behave in the manner you would expect.
Confirm via
service auditd status
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-starting_the_audit_service.html
Try instead
service auditd condrestart
:)
You should not change the parameter RefuseManualStop; it's there to keep your system secure. What you could do instead is, after creating new rules, reboot the host, wait for it, and then continue with your playbook.
Playbook example:
- name: Create new rules file
  copy:
    src: 01-personalized.rules
    dest: /etc/audit/rules.d/01-personalized.rules
    owner: root
    group: root
    mode: 0600
  register: result

- name: Reboot server
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0
  when: result is changed

- name: Wait for server to become available
  wait_for_connection:
    delay: 60
    sleep: 5
    timeout: 300
  when: result is changed
Change RefuseManualStop to no and try
sudo service auditd restart
If this works, then the following will also work:
systemctl start auditd
and
systemctl enable auditd
This is for CentOS 7. See the linked documentation for further help:
Auditd in CentOS7

Indentation error while running ansible playbook

I'm a newbie to Ansible. While running this playbook, I am getting the error below. Although I searched for a solution over the web, I was unable to find any help specific to my problem. The problem seems to be with the "notify:" syntax, but I'm not sure what exactly. Can anyone please help me find the mistake?
Ansible playbook -
 ---
- hosts: droplets
  remote_user: root
  tasks:
    - name: Check if service httpd running
      service: name=httpd state=running
      notify:
- start apache service
  handlers:
    - name: start apache
      service: name=httpd state=started
...
Output:
root@zarvis:/home/luckee/ansible# ansible-playbook servicechk.yml -f 2
ERROR! Syntax Error while loading YAML.
The error appears to have been in '/home/luckee/ansible/servicechk.yml': line 10, column 1, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- start apache service
^ here
Any help would be great!
1.) First line. There should be no whitespace characters before the three dashes ---, which are also called a Document Boundary Marker:
A line beginning with “---” may be used to explicitly denote the beginning of a new YAML document.
2.)
notify:
  - start apache service
replace with
notify:
  - start apache
since you are declaring the start apache handler.
Correct syntax would be:
- hosts: host1
  remote_user: vagrant
  tasks:
    - name: Check if service httpd running
      service: name=httpd state=started
      notify:
        - start apache
  handlers:
    - name: start apache
      service: name=httpd state=started
...

Ansible handler runs only if changed: true

Installing ntp with Ansible, I notify a handler in order to start the ntpd service:
Task:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
Handler:
---
# roles/common/handlers/main.yml
- name: start ntp
  service: name=ntpd state=started
If the service has not been installed, Ansible installs and starts it.
If the service has already been installed but is not running, it does not notify the handler: the status of the task is changed: false.
That means I cannot start it if it is already present in the OS.
Is there any good practice that helps ensure that the service has been installed and is in a running state?
PS: I may do so:
---
# roles/common/tasks/ntp.yml
- name: ntp | installing
  yum: name=ntp state=latest
  notify: start ntp
  changed: true
but I am not sure that it is good practice.
From the Intro to Playbooks guide:
As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.
Handlers only run on change by design. If you change a configuration you often need to restart a service, but don't want to if nothing has changed.
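For example, the usual change-then-restart pattern looks roughly like this (a sketch; the template name is a placeholder):
- name: deploy ntp configuration
  template:
    src: ntp.conf.j2            # placeholder template name
    dest: /etc/ntp.conf
  notify: restart ntp           # fires only when the file actually changes

handlers:
  - name: restart ntp
    service: name=ntpd state=restarted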
What you want is to start a service if it is not already running. To do this you should use a regular task, as described by @udondan:
- name: ntp | installing
  yum:
    name: ntp
    state: latest

- name: ntp | starting
  service:
    name: ntpd
    state: started
    enabled: yes
Ansible modules are idempotent by design, so this second task will only make a change if ntpd is not already running. The enabled line will set the service to start on boot. Remove this line if that is not desired behavior.
Why don't you just add a service task then? A handler usually is for restarting a service after configuration has changed. To ensure a service is running no matter what, just add a task like this:
- name: Ensure ntp is running
  service:
    name: ntpd
    state: started
