I have a large playbook that uses multiple roles to set up new servers. I'd like to re-use the playbook for the decommission stage, but instead of calling into role_name/tasks/main.yml and adding a lot of when: statements, I'd like to tell Ansible to call the role but start in role_name/tasks/decommission.yml.
As a first test I set up my main.yml file like this:
- name: "Provisioning new server"
block:
- name: "Include the provisioning steps."
include_tasks: provision.yml
when:
- not decom
- name: "DECOM - Unregister from Satellite server"
block:
- name: "DECOM - Include the deprovision steps."
include_tasks: decommission.yml
when:
- decom
But that's getting really ugly to maintain. Is this possible, or am I overlooking an alternative way to set up the playbook?
Q: "Tell Ansible to call the role but start in role_name/tasks/decommission.yml"
A: Use include_role
- include_role:
    name: role_name
    tasks_from: decommission.yml
or import_role:
- import_role:
    name: role_name
    tasks_from: decommission.yml
See Re-using files and roles for the difference between including and importing a role.
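Applied to the scenario in the question, a minimal sketch (assuming a boolean decom variable and that the provisioning tasks live in role_name/tasks/provision.yml) could look like this:
- hosts: all
  tasks:
    - name: Provision or decommission using the same role
      include_role:
        name: role_name
        tasks_from: "{{ 'decommission.yml' if decom | default(false) else 'provision.yml' }}"
Because include_role is dynamic, the tasks_from value can be templated per run.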
Related
I have 3 tasks that get the version from the database, and I have 3 roles; inside these roles, a few tasks depend on that version.
I can't use a variable here because the value is dynamic, and I can't set facts because I should be able to run individual roles, and only a few tasks from each role, with tags.
My example is below.
Role1:
Tasks I need to run multiple times (example):
- name: get version
  block:
    - command: "mongo get_version.js"
      register: op
    - name: filter version number from output
      shell: "echo {{ op.stdout }} | grep 'version'"
      register: present_version
  tags: role1_db_tag
- name: do something
- name: do something when a version is something
  when: present_version.stdout == "something"
  tags: role1_db_tag
Role2:
- name: get version
  block:
    - command: "mongo get_version.js"
      register: op
    - name: filter version number from output
      shell: "echo {{ op.stdout }} | grep 'version'"
      register: present_version
  tags: role2_db_tag
- name: do some other things
- name: do something else when a version is something
  when: present_version.stdout == "something"
  tags: role2_db_tag
Right now I am running the get version tasks 3 times, once in each role.
Is there any way, similar to handlers, to give these repetitive tasks a name and call them only when required?
Expecting:
Role2:
- name: fetch version
  function: get_version
  register: present_version
  tags: role1_db_tag
- name: do something else when a version is something
  when: present_version == "something"
  tags: role2_db_tag
PS: Is there a better way than include_* or import_*?
As you have already surmised, the way you would typically handle this is by moving the get version task into a separate role, and then including that in the dependent roles using e.g. import_role:
- import_role:
    name: get_version
If you have a number of small shared tasks, you can create a "library" role with multiple task files:
- import_role:
    name: library
    tasks_from: get_version
See "Using Roles" for a discussion of include_role vs import_role.
You can get similar behavior by adding a dependency to your role
instead. E.g., in Role1, create a file meta/main.yml with the
following content:
dependencies:
  - get_version
This will automatically run the get_version role before role1 any
time you run role1.
See "Using Role Dependencies" for more information.
I have written lots of small YAML task files for an Ansible project.
Only a few of these task files are being reused (say 30%).
I wonder how to manage this big list of tasks: should I convert all of them to roles and call the playbook with roles:?
A playbook like the one below (please ignore the syntax) would be clear, but I do not like having a role for each simple task.
- name: playbook with roles for each task
  hosts: all
  roles:
    - small_task_1
    - small_task_2
    - small_task_3
    - small_task_4
    ....
    ....
    - small_task_20
I liked the idea of putting them into 2-3 roles and calling each task with include_role (or import_role), but the problem is that for each call of a task I have to add "include_role:, name:, tasks_from:" EVEN THOUGH it's from the same role!
$ ansible-galaxy role list
# /home/user1/.ansible/roles
- roles_a, (unknown version)
- roles_b, (unknown version)
In roles_a and roles_b, I may have around 10 YAML task files each.
- name: playbook with 2 roles
  hosts: all
  tasks:
    - name: small task 1
      include_role:
        name: role_a
        tasks_from: small_task_1
    - name: small task 2
      include_role:
        name: role_a
        tasks_from: small_task_2
    ......
    ......
    - name: small task 9
      include_role:
        name: role_a
        tasks_from: small_task_9
    - name: small task 10
      include_role:
        name: role_a
        tasks_from: small_task_10
The above is not very handy...
I would like to group the tasks in a "role" (or any other Ansible construct) and call the tasks from that group, something like:
- name: playbook with 2 roles
  hosts: all
  tasks:
    - name: small tasks
      include_role:
        name: role_a
        tasks_from: small_task_1
        tasks_from: small_task_2
        ......
        ......
        tasks_from: small_task_9
        tasks_from: small_task_10
I've tried with block, but it does not shorten the playbook.
Can anyone guide me, please?
For your final example, you can just use a loop directive on your include_role task, like this:
- hosts: localhost
  gather_facts: false
  tasks:
    - include_role:
        name: role_a
        tasks_from: "{{ item }}"
      loop:
        - small_task_1
        - small_task_2
        - small_task_3
        - small_task_4
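If the list of task files grows, it could also be kept in a variable so the play itself stays short (a small sketch; the variable name small_tasks is an assumption):
- hosts: localhost
  gather_facts: false
  vars:
    small_tasks:
      - small_task_1
      - small_task_2
      - small_task_3
  tasks:
    - include_role:
        name: role_a
        tasks_from: "{{ item }}"
      loop: "{{ small_tasks }}"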
I would like to skip some of the plays based on a conditional value.
I could have a task at the beginning that would skip some of the other tasks when a condition is met.
I know I could always use set_fact and use this as a condition to run the other plays (roles and tasks), or evaluate the said condition directly in each play I want to skip.
However, since the plays already have a tagging system in place, is there something like set_tags in Ansible that I could leverage to avoid having to add conditions all over my plays and bloat my already heavy playbook?
Something such as:
- name: skip terraform tasks if no file is provided
  hosts: localhost
  tasks:
    - set_tags:
        - name: terraform
          state: skip
      when: terraform_files == ""
The two plays I'd like to skip:
- name: bootstrap a testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: present
      tags:
        - create_servers
        - terraform

[...]

- name: flush the testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: absent
      tags:
        - destroy_servers
        - terraform
If I understood correctly, the following should do the job:
- name: bootstrap a testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: present
      when: terraform_files != ""
      tags:
        - create_servers
        - terraform

[...]

- name: flush the testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: absent
      when: terraform_files != ""
      tags:
        - destroy_servers
        - terraform
This is basically the same as setting the when clause on every single task in the role, so each task will still be inspected but skipped if the condition is false.
I would like to run a handler only once in an entire playbook.
I attempted using an include statement in the playbook file, as shown below, but this resulted in the handler being run multiple times, once for each play:
- name: Configure common config
  hosts: all
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: common }
  handlers:
    - include: handlers/main.yml

- name: Configure metadata config
  hosts: metadata
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: metadata }
  handlers:
    - include: handlers/main.yml
Here is the content of handlers/main.yml:
- name: restart autofs
  service:
    name: autofs.service
    state: restarted
Here is an example of one of the tasks that notifies the handler:
- name: Configure automount - /opt/local/xxx in /etc/auto.direct
  lineinfile:
    dest: /etc/auto.direct
    regexp: "^/opt/local/xxx"
    line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
  notify: restart autofs
How can I get the playbook to only execute the handler once for the entire playbook?
The answer
The literal answer to the question in the title is: no.
A playbook is a list of plays. A playbook has no namespace, no variables, and no state. All the configuration, logic, and tasks are defined in plays.
A handler is a task with a different calling schedule (not sequential, but conditional: once at the end of a play, or when triggered by a meta: flush_handlers task).
A handler belongs to a play, not a playbook, and there is no way to trigger it outside of the play (i.e. at the end of the playbook).
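For reference, the only way to make already-notified handlers run earlier than the end of their play is a meta: flush_handlers task, e.g.:
- name: run any handlers notified so far
  meta: flush_handlers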
Solution
The solution to the problem is possible without referring to handlers.
You can use the group_by module to create an ad-hoc group based on the result of the tasks at the bottom of each play.
Then you can define a separate play at the end of the playbook restarting the service on targets belonging to the above ad-hoc group.
Refer to the below stub for the idea:
- hosts: all
  roles:
    # roles declaration
  tasks:
    - # an example task modifying Nginx configuration
      register: nginx_configuration

    # ... other tasks ...

    - name: the last task in the play
      group_by:
        key: hosts_to_restart_{{ 'nginx' if nginx_configuration is changed else '' }}

# ... other plays ...

- hosts: hosts_to_restart_nginx
  gather_facts: no
  tasks:
    - service:
        name: nginx
        state: restarted
Possible solution
Use handlers to add hosts to the in-memory inventory, then add a play that runs the restart only for those hosts.
See this example:
If a task is changed, it notifies the mark to restart handler, which sets a fact indicating that the host needs a service restart.
The second handler, add host, is quite special, because an add_host task only runs once for the whole play, even in a handler (see also the documentation). But if notified, it will run after the marking is done, thanks to the handler order.
That handler loops over the hosts on which tasks were run and checks whether a host needs a service restart; if yes, the host is added to a special hosts_to_restart group.
Because facts are persistent across plays, a third handler, clear mark, is notified to clear the mark for the affected hosts.
You can hide a lot of these lines by moving the handlers to a separate file and including them.
inventory file
10.1.1.[1:10]
[primary]
10.1.1.1
10.1.1.5
test.yml
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Random change to notify trigger
      debug: msg="test"
      changed_when: "1|random == 1"
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: primary
  gather_facts: no
  tasks:
    - name: Change to notify trigger
      debug: msg="test"
      changed_when: true
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: hosts_to_restart
  gather_facts: no
  tasks:
    - name: Restart service
      debug: msg="Service restarted"
      changed_when: true
A handler triggered in post_tasks will run after everything else. And the handler can be set to run_once: true.
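A minimal sketch of that idea (the notifying task here is just a placeholder; in practice it would be the lineinfile task from the question):
- hosts: all
  become: true
  post_tasks:
    - name: placeholder change that notifies the handler
      debug:
        msg: "pretend something changed"
      changed_when: true
      notify: restart autofs
  handlers:
    - name: restart autofs
      run_once: true
      service:
        name: autofs.service
        state: restarted
Handlers notified in post_tasks are flushed after post_tasks, i.e. after everything else in the play, and run_once limits the restart to a single host.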
It's not clear to me what your handler should do. Anyway, as per the official documentation, handlers
are triggered at the end of each block of tasks in a play,
and will only be triggered once even if notified by multiple different
tasks [...] As of Ansible 2.2, handlers can also “listen” to generic topics, and tasks can notify those topics as follows:
So handlers are notified / executed once for each block of tasks.
Maybe you can achieve your goal by keeping the handlers only in the play that targets all hosts, but that doesn't seem like a clean use of handlers.
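For completeness, a minimal sketch of the "listen" mechanism mentioned in the quote (the topic name restart nfs stack is arbitrary): several tasks can notify the same topic, and any handler listening to it fires once at the end of the play:
- hosts: all
  become: true
  tasks:
    - name: change automount config
      lineinfile:
        dest: /etc/auto.direct
        regexp: "^/opt/local/xxx"
        line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
      notify: restart nfs stack
  handlers:
    - name: restart autofs
      listen: restart nfs stack
      service:
        name: autofs.service
        state: restarted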
I want to use the mongodb_replication module defined in the greendayonfire.mongodb role.
I know I can use the module in my tasks after applying the role in the same play. But I don't want to apply the role (and execute all its tasks). Is there any way to "include" the role without executing its tasks?
I want to have it like this:
---
- hosts: mongodb-nodes
  become: true
  roles:
    - base
    - greendayonfire.mongodb
  vars:
    mongodb_package: mongodb-org
    mongodb_version: "3.2"
    mongodb_force_wait_for_port: true
    mongodb_net_bindip: 0.0.0.0
    mongodb_net_http_enabled: true
    mongodb_replication_replset: "rs1"
    mongodb_storage_prealloc: false

- hosts: mongodb-0
  tasks:
    - mongodb_replication: replica_set=rs1 host_name={{ item }} state=present
      with_items:
        - mongodb-0
        - mongodb-1
        - mongodb-2
where the second play is the one that runs the mongodb_replication module (only on the node mongodb-0). Right now it can't find the module.
I guess I could copy the module out of the role into my playbook, but it would be cleaner if I could just import the module from the role (which I don't want to edit).
I found that it's possible to load the role without executing its tasks by using the when: false clause when referring to the role. This loads the vars, defaults, modules, etc.
- hosts: mongodb-0
  roles:
    - role: greendayonfire.mongodb
      when: false
  tasks:
    - mongodb_replication: replica_set=rs1 host_name={{ item }} state=present
      with_items:
        - mongodb-0
        - mongodb-1
        - mongodb-2