How can I run an ansible role locally? - ansible

I want to build a docker image locally and deploy it so it can then be pulled on the remote server I'm deploying to. To do this I first need to check out code from git to be built.
I have an existing role which installs git, sets up keys for reading from our repo etc. I want to run this role locally to check out the code I care about.
I looked at local_action, delegate_to, etc., but haven't figured out an easy way to do this. The best approach I could find was:
- name: check out project from git
  delegate_to: localhost
  include_role:
    name: configure_git
However, this doesn't work: I get a complaint that there is a syntax error on the name line. If I remove the delegate_to line it works (but runs on the wrong server). If I replace include_role with debug, it runs locally. It's almost as if Ansible explicitly refuses to run an included role locally, though I can't find that anywhere in the documentation.
Is there a clean way to run this, or other roles, locally?

Extract from the include_role module documentation
Task-level keywords, loops, and conditionals apply only to the include_role statement itself.
To apply keywords to the tasks within the role, pass them using the apply option or use ansible.builtin.import_role instead.
Ignores some keywords, like until and retries.
I actually don't know whether the error you get is linked to delegate_to being ignored (I seriously doubt it is...). Meanwhile, that's not the correct way to use it here; the keyword has to go through apply:
- name: check out project from git
  include_role:
    name: configure_git
    apply:
      delegate_to: localhost
Moreover, this is most probably a bad idea. Let's imagine your play targets 100 servers: the role will run one hundred times (unless you also apply run_once: true). I would run my role "normally" on localhost in a dedicated play, then do the rest of the job on my targets in the next one(s).
- name: Prepare env on localhost
  hosts: localhost
  roles:
    - role: configure_git

- name: Do the rest on other hosts
  hosts: my_group
  tasks:
    - name: dummy
      debug:
        msg: "Dummy"

Related

How to have multiple tasks under one role in Ansible?

I am setting up Ansible as a newbie. I would like to group a few tasks under an Nginx role.
See my folder structure
The only way I could get this to work was to use include statements in my playbooks... but that means I needed to start changing everything to relative paths, since Ansible stopped being able to find things.
My playbook currently:
- name: Install Nginx, Node, and Verdeccia
  hosts: all
  remote_user: root
  tasks:
    - include: ../roles/nginx/tasks/install.yml
    - include: ../roles/nginx/tasks/create_node_config.yml
      vars:
        hostname: daz.com
How can I reference the sub task in the playbook to be something like
tasks:
  - nginx.install
and still be within best practices.
Your use of roles so far is not in line with the norm or the idea behind Ansible. From a purely technical point of view, it is quite possible to have multiple task bundles in yml files that you include in the playbook.
If you want to include a specific task file from a role, it is better to do this via the include_role module with the tasks_from parameter, as sketched below.
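For example, a minimal sketch of that approach, assuming the task file lives at roles/nginx/tasks/install.yml as in your layout:
- name: Run only the install tasks of the nginx role
  include_role:
    name: nginx
    tasks_from: install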
However, the workflow with roles usually looks different.
The tasks folder should always contain a file main.yml; it is called automatically whenever the role is included, no matter by which means.
In this main.yml you can then add further control logic to include your yml files as required.
As far as I can tell, you always need to install, but depending on the use case you want different configurations.
Create a file defaults/main.yml in your nginx role:
---
config_flavor: none
This initializes the config_flavor variable with the string none.
Create a file tasks/main.yml in your nginx role:
---
- name: include installation process
  include_tasks: install.yml

- name: configure node
  include_tasks: create_node_config.yml
  when: config_flavor == "node"

- name: configure symfony
  include_tasks: create_symfony_config.yml
  when: config_flavor == "symfony"

- name: configure wordpress
  include_tasks: create_wordpress_config.yml
  when: config_flavor == "wordpress"
The main.yml will be included by default when the role is applied.
This is where the installation is done first, then the proper configuration is done.
Which configuration should be done is defined by the variable config_flavor. In the listed example the values are node, symfony and wordpress. In all other cases the installation will be done, but no configuration (as in the default case, none).
Include your role in the playbook as follows:
---
- name: Install Nginx, Node, and Verdeccia
  hosts: all
  remote_user: root
  vars:
    hostname: daz.com
  roles:
    - role: nginx
      config_flavor: node
At this point you can set the value for config_flavor. If you set wordpress instead, the task file create_wordpress_config.yml will be included automatically, using the logic from tasks/main.yml.
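For instance, selecting the wordpress flavor only changes the role parameter in the same play:
  roles:
    - role: nginx
      config_flavor: wordpress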
You can find more about roles in the Ansible Docs.
There are many more possibilities and approaches; you will get to know them in the course of your learning. Basically, I would recommend reading the Ansible docs extensively and using them as a reference.
Please refer to the document below. You can use include_role with the tasks_from attribute. You don't need to provide the full path of the task files, just the file names.
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_role_module.html
Example:
- name: Run tasks/other.yaml instead of 'main'
  ansible.builtin.include_role:
    name: myrole
    tasks_from: other

Allow certain roles to be run multiple times in an Ansible playbook

I made a role whose only job is to download and unpack binary packages from Artifactory. Just about everything I want to install requires me to use this role.
I have a deployment with three major components, and each component will be pulled from Artifactory using this same reusable role. The role takes parameters, for example the name of the package being installed, the Artifactory URL where the binary can be downloaded from.
The reusable role is called from /meta/dependencies.
The problem is that the reusable role only runs once. The second time it's run, Ansible skips it (even though the parameters are different).
Is there a way I can tell Ansible that this role must always be run, even if it has previously run with different parameters?
Just include the role multiple times in the playbook using different parameter values.
This should do the trick:
---
- hosts: server
  tasks:
    - include_role:
        name: artifactory
      vars:
        artifact: 'artifact_1'

    - include_role:
        name: artifactory
      vars:
        artifact: 'artifact_2'

Run Ansible role without running dependencies defined in the role's meta

I have a role wp-vhost that has one role it depends on:
# roles/wp-vhost/meta/main.yml
---
dependencies:
  - { role: nginx }
Each time I run wp-vhost, the nginx role will also run. I understand that this is fine and it's a desired behavior.
However, during my local development, time is unnecessarily lost on running the nginx role when I want to run only the tasks defined in wp-vhost, since I know that nginx ran before and set up the necessary environment for wp-vhost.
Is there a way to execute a playbook with roles, without executing roles' dependencies?
The way I would do this is to use Ansible tags and apply them to your "wp-vhost" specific code.
Assuming your wp-vhost role's main playbook is in main.yml, a good pattern is to spin out the actual tasks into a sub-playbook called something like wp-vhost.yml, included from main.yml, so the non-nginx code gets a tag that doesn't get applied to the nginx role. In this case:
- include: wp-vhost.yml
  tags: wp-vhost
In order to use a tag for every chunk of Ansible code (whether an included tasks file or a role), try writing every role mentioned in dependencies like this:
- role: nginx
  tags: nginx
When in testing mode, you can run just the wp-vhost specific parts like this:
$ ansible-playbook --tags wp-vhost main.yml
Or you can run the whole playbook, including any dependencies, like this (the default is to run everything, ignoring tags):
$ ansible-playbook main.yml
This makes it easy to quickly run just parts of a complex set of cascading roles and include files when testing, and also use the wp-vhost role normally in other roles' dependencies.
Impact on role structure
Careful use of tags doesn't affect role structure or use at all, and you would typically use tags only for testing.
For more complex roles, it's common to structure the tasks into separate files in any case, keeping the main.yml simple, like this:
- name: Set up base OS
  include: base_os.yml
  tags: base_os

- name: Ensure logs are rotated
  include: logrotate.yml
  tags: logrotate

- name: Create users and groups
  include: users_groups.yml
  tags: users_groups
Solution without include files
If you don't want to change the wp-vhost role's use of include files, you would need to use blocks in the playbook (Ansible 2.0+):
- hosts: all
  roles:
    - role: nginx
      tags: nginx
  tasks:
    - block:
        - debug: msg=hello
        - someaction: ...
      tags: wp-vhosts
Note that the final tags: is aligned with block:, so it applies to all tasks in that block. This is cleaner than splitting the playbook into multiple plays.
Non-tag alternative
You can use a when: condition on the role invocation in the wp-vhost role dependencies, and define a variable such as debug_mode to control this. However, such debug/test logic will clutter your codebase compared to defining a tag per role invocation or task file.
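A minimal sketch of that alternative, assuming a debug_mode variable you define yourself:
# roles/wp-vhost/meta/main.yml
---
dependencies:
  - role: nginx
    when: not (debug_mode | default(false) | bool)
Running the play with -e debug_mode=true would then skip the nginx dependency during local testing.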

How to implement conditional logic in ansible regarding modified files?

In ansible, I want to restart a service only if the configuration was changed.
Here is an example:
- hosts: workers
tasks:
- lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
- shell: service autofs reload
As you can see, this code will always restart autofs, even when the configuration file is not updated.
How can I improve this so it will restart only when the configuration file is changed?
Note: that's a generic question that is not specific to autofs, it could apply to any service where I do want to execute something IF a configuration file was changed, probably via lineinfile or ini_file core modules.
For a start, you should be using the service module for controlling running services rather than shelling out. In general, if there's a proper module for something and it does what you need, then you should use it and only resort to shelling out for edge cases.
Also, when an Ansible task runs it returns a series of facts that you can register to use directly. This nearly always includes a changed attribute, which is a boolean saying whether Ansible thinks it changed something (it can't always know: if a shell task returns something in stdout, it assumes something changed unless you directly override it with changed_when).
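For example, a read-only shell check can be told never to report a change (a sketch; the task and variable names here are just illustrative):
- name: Read current autofs options (read-only check)
  shell: grep '^OPTIONS=' /etc/default/autofs
  register: autofs_options
  changed_when: false
  failed_when: autofs_options.rc not in [0, 1]  # grep exits 1 when there is no match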
So you could go with something like this:
- hosts: workers
  tasks:
    - name: Set autofs options
      lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      register: result

    - name: Reload autofs if autofs options are changed
      service: name=autofs state=reloaded
      when: result.changed
If you create a role instead of using loose tasks right in your playbook, you can work with handlers. Also see Best Practices: Task And Handler Organization For A Role.
Tasks file of your role:
- name: Change autofs config
  lineinfile:
    dest: /etc/default/autofs
    regexp: '^OPTIONS='
    line: 'OPTIONS="-O soft"'
    backup: yes
  notify:
    - Restart autofs
Then in your handlers file of the same role:
- name: Restart autofs
  service:
    name: autofs
    state: restarted
The handler gets notified whenever the config task has a changed state.
PS: I used the service module for managing the service. You should only use the shell module if no specific module for your task is available.
Regarding roles:
One thing you will definitely want to do though, is use the “roles” organization feature, which is documented as part of the main playbooks page. See Playbook Roles and Include Statements. You absolutely should be using roles. Roles are great. Use roles. Roles! Did we say that enough? Roles are great.
I know this is old, but I landed on this and felt there was more to add to what others have already answered. You should probably use a handler, but you do not need a discrete role to use a handler. Here's an example of handlers used directly in the playbook.
---
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      notify: reload autofs
  handlers:
    - name: reload autofs
      service: name=autofs state=reloaded
One thing to note: handlers fire off at the end of the play. So if you have multiple tasks, and you're expecting the handler to fire right after the task that notified it, you might be in for a bad time. You can use a 'meta' task to flush_handlers (sketched below) if you need that handler to fire in sequential order with your tasks, or you can go the register/conditional way like ydaetskcoR provided. In your play, with a single task, it doesn't really matter.
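A minimal sketch of flushing handlers mid-play:
- lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
  notify: reload autofs

- meta: flush_handlers  # run any notified handlers right here, before the tasks below

- name: continue with tasks that need the reloaded service
  debug: msg="autofs has been reloaded if the config changed"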

Ansible notify handlers in another role

Can I notify the handler in another role? What should I do to make ansible find it?
The use case is, e.g., I want to configure some service and then restart it if changed. Different OSes probably have different files to edit, and even the file format can be different. So I would like to put them into different roles (because the file format can be different, it can't be done just by setting group_vars). But the way to restart the service is the same, using the service module, so I'd like to put the handler in a common role.
Is there any way to achieve this? Thanks.
You can also call handlers of a dependency role. This may be cleaner than including files or explicitly listing roles in a playbook just for the purpose of a role-to-role relationship. E.g.:
roles/my-handlers/handlers/main.yml
---
- name: nginx restart
  service: >
    name=nginx
    state=restarted
roles/my-other/meta/main.yml
---
dependencies:
  - role: my-handlers
roles/my-other/tasks/main.yml
---
- copy: >
    src=nginx.conf
    dest=/etc/nginx/
  notify: nginx restart
You should be able to do that if you include the handler file.
Example:
handlers:
  - include: someOtherRole/handlers/main.yml
But I don't think it's elegant.
A more elegant way is to have a play that manages both roles, something like this:
- hosts: all
  roles:
    - role1
    - role2
This will make both roles able to call each other's handlers.
But again, I would suggest making it all one role with separate files and using a conditional include, as sketched below: http://docs.ansible.com/playbooks_conditionals.html#conditional-imports
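A minimal sketch of that single-role layout, assuming per-OS task files (the role and file names here are just examples):
# roles/myservice/tasks/main.yml
- include: configure_debian.yml
  when: ansible_os_family == "Debian"

- include: configure_redhat.yml
  when: ansible_os_family == "RedHat"
The shared restart handler then lives once in roles/myservice/handlers/main.yml.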
Hope that helps
You may import additional handlers in your role's handlers/main.yml file by using import_tasks.
So, if MyRole needs to call handlers in some OtherRole, roles/MyRole/handlers/main.yml will look like this:
- import_tasks: roles/OtherRole/handlers/main.yml
Of course roles/MyRole/handlers/main.yml may include additional handlers as well.
This way, if I want to run MyRole without running tasks from OtherRole, Ansible will still correctly import and run the handlers from OtherRole.
I had a similar issue, but needed to take many actions in the other dependent roles.
So rather than invoking the handler, we set a fact like so:
- name: install mylib to virtualenv
  pip: requirements=/opt/mylib/requirements.txt virtualenv={{ mylib_virtualenv_path }}
  sudo_user: mylib
  register: mylib_wheel_upgraded

- name: set variable if source code was upgraded
  set_fact:
    mylib_source_upgraded: true
  when: mylib_wheel_upgraded.changed
Then elsewhere in another role:
- name: restart services if source code was upgraded
  command: /bin/true
  notify: restart mylib server
  when: mylib_source_upgraded
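This assumes a handler named restart mylib server is visible to that play; a minimal sketch of what it could look like (the service name is hypothetical):
- name: restart mylib server
  service:
    name: mylib  # hypothetical service name
    state: restarted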
Currently I'm using Ansible v2.10.3 and it supports calling handlers in different roles. This is because handlers are visible at the play level, as the Ansible docs say. You can see the docs mention that in the bottom-most point.
handlers are play scoped and as such can be used outside of the role they are defined in.
FYI, I tested the solution, i.e. calling another role's handlers, and it works! No need to import anything; just make sure that the roles are in the same playbook execution.
To illustrate:
roles/vm/handlers/main.yaml
---
- name: rebootvm
  ansible.builtin.reboot:
    reboot_timeout: 600
    test_command: whoami
roles/config-files/tasks/main.yaml
---
- name: Copy files from local to remote
  ansible.builtin.copy:
    dest: /home/ubuntu/config.conf
    src: config.conf
    backup: yes
    force: yes
  notify:
    - rebootvm
So when the config file (i.e. config.conf) changes, Ansible will send it to the remote location and notify the handler rebootvm, and the VM will be rebooted.
P.S. I don't know exactly which Ansible version first supported this.
