Ansible: stop playbook if a file is present

Is it possible to stop a playbook during its execution if a given file is present on my node, and also to output a message explaining why the playbook stopped?
The goal is to prevent an accidental re-execution of my playbook on a node where my application is already installed: I generate a password during the install and I don't want to reinitialise it.

You can use the fail module to force a failure with a custom failure message.
Couple this with a check for a file, using the stat module, and this should work easily enough for you.
A quick example of a one-run playbook might look something like this:
- name: check for foo.conf
  stat: path=/etc/foo.conf
  register: foo

- name: fail if already run on host
  fail: msg="This host has already had this playbook run against it"
  when: foo.stat.exists

- name: create foo.conf
  file: path=/etc/foo.conf state=touch


How can I run an ansible role locally?

I want to build a docker image locally and deploy it so it can then be pulled on the remote server I'm deploying to. To do this I first need to check out code from git to be built.
I have an existing role which installs git, sets up keys for reading from our repo etc. I want to run this role locally to check out the code I care about.
I looked at local_action, delegate_to, etc. but haven't figured out an easy way to do this. The best approach I could find was:
- name: check out project from git
  delegate_to: localhost
  include_role:
    name: configure_git
However, this doesn't work: I get a complaint that there is a syntax error on the name line. If I remove the delegate_to line it works (but runs on the wrong server). If I replace include_role with debug, it runs locally. It's almost as if Ansible explicitly refuses to run an included role locally, though I can't find that documented anywhere.
Is there a clean way to run this, or other roles, locally?
Extract from the include_role module documentation
Task-level keywords, loops, and conditionals apply only to the include_role statement itself.
To apply keywords to the tasks within the role, pass them using the apply option or use ansible.builtin.import_role instead.
Ignores some keywords, like until and retries.
I actually don't know if the error you get is linked to delegate_to being ignored (I seriously doubt it is...). Regardless, that's not the correct way to use it here:
- name: check out project from git
  include_role:
    name: configure_git
    apply:
      delegate_to: localhost
Moreover, this is most probably a bad idea. Let's imagine your play targets 100 servers: the role will run one hundred times (unless you also apply run_once: true). I would run my role "normally" on localhost in a dedicated play, then do the rest of the job on my targets in the next one(s).
- name: Prepare env on localhost
  hosts: localhost
  roles:
    - role: configure_git

- name: Do the rest on other hosts
  hosts: my_group
  tasks:
    - name: dummy
      debug:
        msg: "Dummy"

Error while using vars_prompt in ansible playbook [duplicate]

Ansible shows an error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
What is wrong?
The exact transcript is:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in 'playbook.yml': line 10, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: My task name
  ^ here
Reason #1
You are using an older version of Ansible which does not have the module you are trying to run.
How to check it?
Open the module documentation index and find the documentation page for your module.
Read the header at the top of the page - it usually shows the Ansible version in which the module was introduced. For example:
New in version 2.2.
Ensure you are running the specified version of Ansible or later. Run:
ansible-playbook --version
And check the output. It should show something like:
ansible-playbook 2.4.1.0
Reason #2
You tried to write a role and put a playbook in my_role/tasks/main.yml.
The tasks/main.yml file should contain only a list of tasks. If you specified:
---
- name: Configure servers
  hosts: my_hosts
  tasks:
    - name: My first task
      my_module:
        parameter1: value1
Ansible tries to find an action module named hosts and an action module named tasks. It doesn't, so it throws an error.
Solution: specify only a list of tasks in the tasks/main.yml file:
---
- name: My first task
  my_module:
    parameter1: value1
Reason #3
The action module name is misspelled.
This is pretty obvious, but easily overlooked. If you use an incorrect module name, for example users instead of user, Ansible will report "no action detected in task".
Ansible was designed as a highly extensible system. It does not have a fixed set of modules, so it cannot check the spelling of every action module "in advance".
In fact you can write and then use your own module named qLQn1BHxzirz and Ansible has to respect that. Since playbooks are interpreted, the error is only "discovered" when Ansible tries to execute the task.
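To illustrate, here is a task with a hypothetical typo; "users" is not a real module (the module is called user), so Ansible only reports the error when it reaches this task:

- name: Create a deploy user
  users:          # typo: the module is "user", so Ansible fails with "no action detected in task"
    name: deploy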
Reason #4
You are trying to execute a module not distributed with Ansible.
The action module name is correct, but it is not a standard module distributed with Ansible.
If you are using a module provided by a third party (a vendor of software/hardware, or another module shared publicly), you must first download the module and place it in an appropriate directory.
You can place it either in the library subdirectory next to the playbook or in a common path.
Ansible looks in the paths given by ANSIBLE_LIBRARY or the --module-path command line argument.
To check what paths are valid, run:
ansible-playbook --version
and check the value of:
configured module search path =
Ansible version 2.4 and later should provide a list of paths.
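For example, a playbook-adjacent layout could look like this (foo_facts.py stands in for whatever third-party module you downloaded; the name is illustrative):

site.yml
library/
    foo_facts.py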
Reason #5
You really don't have any action inside the task.
The task must have some action module defined. The following example is not valid:
- name: My task
  become: true
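Adding any action module makes the task valid again, for example:

- name: My task
  become: true
  command: whoami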
I can't really improve upon techraf's answer https://stackoverflow.com/a/47159200/619760.
I wanted to add reason #6, my special case.
Reason #6
Incorrectly using roles: to import/include roles as a subtask.
This does not work; you cannot include roles this way, as subtasks in a play.
---
- hosts: somehosts
  tasks:
    - name: include somerole
      roles:
        - somerole
Use include_role instead.
According to the documentation, you can now use roles inline with any other tasks using import_role or include_role:
- hosts: webservers
  tasks:
    - debug:
        msg: "before we run our role"
    - import_role:
        name: example
    - include_role:
        name: example
    - debug:
        msg: "after we ran our role"
Alternatively, put the roles in the right place, at the play level next to hosts, and include the roles at the top:
---
- hosts: somehosts
  roles:
    - somerole
  tasks:
    - name: some static task
      import_role:
        name: somerole
    - include_role:
        name: example
You need to understand the difference between import (static, resolved when the playbook is parsed) and include (dynamic, resolved at runtime).
I got this error when I referenced the debug task as ansible.builtin.debug.
This caused a syntax failure in CI (but worked locally):
- name: "Echo the jenkins job template"
ansible.builtin.debug:
var: template_xml
verbosity: 1
Works locally and in CI:
- name: "Echo the jenkins job template"
debug:
var: template_xml
verbosity: 1
I believe - but have not confirmed - that the difference between local and CI was the Ansible version (fully qualified collection names such as ansible.builtin.debug are only understood by newer Ansible releases):
Local: 2.10
CI: 2.7
Explanation of the error:
"no action detected in task" means Ansible cannot perform the action described in your playbook.
Root cause:
the installed version of Ansible doesn't support the module or syntax you used
How to check:
ansible --version
Solution:
upgrade Ansible to a version which supports the feature you are trying to use
How to upgrade Ansible:
https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#selecting-an-ansible-version-to-install
Quick instructions for Ubuntu:
sudo apt update
sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
P.S.: I followed this path and upgraded from version 2.0.2 to 2.9.
After the upgrade, the same playbook worked like a charm.
For me the problem occurred with the "systemd" module. It turned out that my Ansible version was 2.0.0.2 and the module was first introduced in version 2.2. Updating Ansible to the latest version fixed the problem.
playbook.yaml
- name: "Enable and start docker service and ensure it's not masked"
systemd:
name: docker
state: started
enabled: yes
masked: no
Error
ERROR! no action detected in task
etc..
etc..
etc..
- name: "Enable and start docker service and ensure it's not masked"
^ here
In my case this was the fix:
ansible-galaxy collection install ansible.posix

How to implement conditional logic in ansible regarding modified files?

In ansible, I want to restart a service only if the configuration was changed.
Here is an example:
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
    - shell: service autofs reload
As you can see, this code will always restart autofs, even when the configuration file is not updated.
How can I improve this so it will restart only when the configuration file is changed?
Note: this is a generic question that is not specific to autofs; it could apply to any service where I want to execute something IF a configuration file was changed, probably via the lineinfile or ini_file core modules.
For a start, you should be using the service module for controlling running services rather than shelling out. In general, if there's a proper module for something and it does what you need, use it and only resort to shelling out for edge cases.
Also, when an Ansible task runs it returns a series of facts that you can register to use later. This nearly always includes a changed attribute, a boolean saying whether Ansible thinks it changed something (it can't always know: if a shell task returns something in stdout, it assumes something changed unless you directly override it with changed_when).
So you could go with something like this:
- hosts: workers
  tasks:
    - name: Set autofs options
      lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      register: result

    - name: reload autofs if autofs options are changed
      service: name=autofs state=reloaded
      when: result.changed
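As an aside, if you really must shell out, changed_when lets you override Ansible's guess about the changed state, e.g.:

- name: reload autofs by hand (only when no module fits)
  shell: service autofs reload
  changed_when: false  # the reload by itself changes no configuration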
If you create a role instead of using loose tasks right in your playbook, you can work with handlers. Also see Best Practices: Task And Handler Organization For A Role.
Tasks file of your role:
- name: Change autofs config
  lineinfile: dest=/etc/default/autofs
              regexp=^OPTIONS=
              line="OPTIONS=\"-O soft\""
              backup=yes
  notify:
    - Restart autofs
Then in your handlers file of the same role:
- name: Restart autofs
  service: name=autofs
           state=restarted
The handler gets notified whenever the config task reports a changed state.
PS: I used the service module for managing the service. You should only use the shell module if no specific module for your task is available.
Regarding roles:
One thing you will definitely want to do though, is use the “roles” organization feature, which is documented as part of the main playbooks page. See Playbook Roles and Include Statements. You absolutely should be using roles. Roles are great. Use roles. Roles! Did we say that enough? Roles are great.
I know this is old, but I landed on this and felt there was more to add to what others have already answered. You should probably use a handler, but you do not need a discrete role to use one. Here's an example of handlers used directly in the playbook.
---
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      notify: reload autofs
  handlers:
    - name: reload autofs
      service: name=autofs state=reloaded
One thing to note: handlers fire at the end of the play. So if you have multiple tasks and you expect the handler to fire right after the task that notified it, you might be in for a bad time. You can use a meta task to flush_handlers if you need the handler to fire in sequential order with your tasks (see the sketch below), or you can go the register/conditional way like ydaetskcoR provided. In your play, with a single task, it doesn't really matter.
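A minimal sketch of flushing handlers mid-play (meta: flush_handlers is a built-in action):

tasks:
  - name: change config
    lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
    notify: reload autofs

  - name: run any notified handlers now, not at the end of the play
    meta: flush_handlers

  - name: this task can rely on autofs having been reloaded
    debug: msg="autofs was reloaded if the config changed"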

ERROR: tasks is not a legal parameter - Ansible playbook

In my playbook I have a conditional include statement to include a task:
tasks:
  # Install Java if not present
  - name: Execute Java
    shell: java -version
    register: result
    ignore_errors: True

  - include: tasks/java.yml
    when: result | failed
...
When I execute playbook it give out an error:
user1@localhost:~$ ansible-playbook tomcat.yml
ERROR: tasks is not a legal parameter in an Ansible task or handler
However, when I replace this include statement with shell or something else, the playbook runs as expected.
The Ansible docs say that tasks can be conditionally included, so why am I getting this error?
Solution: You should leave out the "tasks:" part in the included file.
Why it fails:
When you include a file, you are already in the tasks section, so to Ansible the result looks like:
tasks:
  - tasks:
      - name: ...
This happens when you define "tasks" within "tasks".
Task definitions within task definitions can happen if you try to include another playbook that has a tasks section in it.
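For reference, a valid included file contains only a bare task list; a sketch of what tasks/java.yml might hold (the package name and module choice are illustrative):

---
- name: Install Java
  yum: name=java-1.8.0-openjdk state=present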

Ansible notify handlers in another role

Can I notify the handler in another role? What should I do to make ansible find it?
The use case is, e.g., that I want to configure some service and then restart it if the config changed. Different OSes probably have different files to edit, and even the file formats can differ, so I would like to put them into different roles (because the file format can be different, it can't be handled just by setting group_vars). But the way to restart the service is the same, using the service module, so I'd like to put the handler into a common role.
Is there any way to achieve this? Thanks.
You can also call handlers of a dependency role. This may be cleaner than including files or explicitly listing roles in a playbook just for the purpose of a role-to-role relationship. E.g.:
roles/my-handlers/handlers/main.yml
---
- name: nginx restart
  service: >
    name=nginx
    state=restarted
roles/my-other/meta/main.yml
---
dependencies:
  - role: my-handlers
roles/my-other/tasks/main.yml
---
- copy: >
    src=nginx.conf
    dest=/etc/nginx/
  notify: nginx restart
You should be able to do that if you include the handler file.
Example:
handlers:
  - include: someOtherRole/handlers/main.yml
But I don't think it's elegant.
A more elegant way is to have a play that manages both roles, something like this:
- hosts: all
  roles:
    - role1
    - role2
This way both roles will be able to call each other's handlers.
But again, I would suggest putting it all in one role with separate files and a conditional include: http://docs.ansible.com/playbooks_conditionals.html#conditional-imports
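For example, a minimal sketch of such an OS-dependent include inside a single role (the file naming scheme is hypothetical):

- include_tasks: "configure-{{ ansible_os_family }}.yml"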
Hope that helps
You may import additional handlers into YourRole/handlers/main.yml by using import_tasks.
So, if MyRole needs to call handlers in some OtherRole, roles/MyRole/handlers/main.yml will look like this:
- import_tasks: roles/OtherRole/handlers/main.yml
Of course roles/MyRole/handlers/main.yml may include additional handlers as well.
This way, if I want to run MyRole without running tasks from OtherRole, Ansible will still correctly import and run the handlers from OtherRole.
I had a similar issue, but needed to take many actions in the other dependent roles.
So rather than invoking the handler, we set a fact like so:
- name: install mylib to virtualenv
  pip: requirements=/opt/mylib/requirements.txt virtualenv={{ mylib_virtualenv_path }}
  sudo_user: mylib
  register: mylib_wheel_upgraded

- name: set variable if source code was upgraded
  set_fact:
    mylib_source_upgraded: true
  when: mylib_wheel_upgraded.changed
Then elsewhere in another role:
- name: restart services if source code was upgraded
  command: /bin/true
  notify: restart mylib server
  when: mylib_source_upgraded
Currently I'm using Ansible v2.10.3 and it supports calling handlers defined in other roles. This is because handlers are visible at the play level, as the Ansible docs mention in the bottom-most point:
handlers are play scoped and as such can be used outside of the role they are defined in.
FYI, I tested the solution, i.e. calling another role's handlers, and it works! No need to import anything; just make sure the roles are in the same playbook execution.
To illustrate:
roles/vm/handlers/main.yaml
---
- name: rebootvm
  ansible.builtin.reboot:
    reboot_timeout: 600
    test_command: whoami
roles/config-files/tasks/main.yaml
---
- name: Copy files from local to remote
  ansible.builtin.copy:
    dest: /home/ubuntu/config.conf
    src: config.conf
    backup: yes
    force: yes
  notify:
    - rebootvm
So when the config file (i.e. config.conf) changes, Ansible copies it to the remote location and notifies the rebootvm handler, and the VM is rebooted.
P.S. I don't know exactly which Ansible version first supported this.
