I have a playbook with this structure:
---
- hosts: foo-servers
  roles:
    - foo_setup
  become: yes
  tags: tweaks

- hosts: bar-servers
  roles:
    - bar_setup
  become: yes
  tags: tweaks
[a few more server groups with a similar pattern]
I have a somewhat similar feature to deploy on all servers, but each server group has its own small differences, so I need to keep separate roles for each group.
I also want to run just a select group of tasks from each role, the ones tagged 'tweaks', in all hosts.
And all hosts should be run with raised privileges, but that is not true for all playbooks, so I want that setting to apply just to this playbook (no global vars).
I would like to move the repeated parameters (become: yes and tags: tweaks) outside of the hosts: plays, to a place where they would apply to all plays below. Something to the effect of:
---
- all_hosts_this_playbook:
    become: yes
    tags: tweaks

- hosts: foo-servers
  roles:
    - foo_setup

- hosts: bar-servers
  roles:
    - bar_setup
I suppose this is possible on the command line, like ansible-playbook setup_tweaks.yml --tags "tweaks" --become. But is there a playbook equivalent? I'd rather have these in the file than on the command line, where I often forget to add stuff.
And looping doesn't work...
ERROR! 'loop' is not a valid attribute for a Play
- name: Make tweaks in many servers
  become: yes
  tags: tweaks
  hosts: "{{ item.host }}"
  roles:
    - "{{ item.role }}"
  loop:
    - { host: 'foo-servers', role: 'foo_setup' }
    - { host: 'bar-servers', role: 'bar_setup' }
I also want to add post_tasks: to be run on all servers (maybe they also need to be tagged?):
post_tasks_all_hosts:
  - name: Upgrade system
    apt:
      autoremove: yes
      autoclean: yes
      update_cache: yes
      upgrade: dist
    tags: tweaks
  - name: Reboot
    shell: sleep 2 && reboot
    async: 3
    poll: 0
    tags: tweaks
Is it possible to define playbook-wide pre_tasks or post_tasks?
In Ansible: How to declare global variable within playbook? it is indicated that one 'cannot define a variable accessible on a playbook level' - but in my case it's not variables, it's task parameters and post_tasks:.
Maybe the parameters and the pre/post tasks are different problems with different solutions, but I decided to ask them in the same place because they both fall into the same category: settings that I'd like to apply to the whole playbook, outside of hosts: plays.
Q: "I suppose this is possible on the command line, like ansible-playbook setup_tweaks.yml --tags "tweaks" --become. But is there a playbook equivalent?"
A: No. There is no such playbook equivalent.
Isn't this a misunderstanding of the command-line option --tags
only run plays and tasks tagged with these values
versus
tag inheritance ?
Adding tags: to a play, or to statically imported tasks and roles, adds those tags to all of the contained tasks...When you apply tags: attributes to structures other than tasks, Ansible processes the tag attribute to apply ONLY to the tasks they contain. Applying tags anywhere other than tasks is just a convenience so you don’t have to tag tasks individually.
Details
In the play below, the tag "tweaks" is added to all of the contained tasks:
- hosts: foo-servers
  roles:
    - foo_setup
  tags: tweaks
The command below selects only the tasks tagged "tweaks":
ansible-playbook setup_tweaks.yml --tags "tweaks"
Q: "Is it possible to define playbook-wide pre_tasks or post_tasks?"
A: No. It's not possible. The scope of pre/post_tasks is the play.
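Although there is no playbook-wide post_tasks, a common workaround (a sketch, not part of the original answer) is to keep the shared tasks in a separate tasks file and import it at the end of each play. The file name shared_post.yml is an assumption; it would hold the Upgrade/Reboot tasks from the question:

```yaml
- hosts: foo-servers
  become: yes
  tags: tweaks
  roles:
    - foo_setup
  post_tasks:
    # shared_post.yml is a hypothetical tasks file holding the common tasks
    - import_tasks: shared_post.yml

- hosts: bar-servers
  become: yes
  tags: tweaks
  roles:
    - bar_setup
  post_tasks:
    - import_tasks: shared_post.yml
```

Because import_tasks is static, tags applied to the play are inherited by the imported tasks as well.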
Related
We have inherited quite a lot of ansible playbooks and roles/tasks underneath
One of the ansible playbooks is triggered by a shell script after setting a few variables:
ansible-playbook -i inventory.yml install_multiple_nodes.yml
install_multiple_nodes.yml contains the following:
- hosts: all_nodes
  run_once: true
  roles:
    - set_keys
    - download_rpms
  become: true

- hosts: node1
  roles:
    - install_rpms_node1
    - custom_actions_node1
  become_user: node1_user

- hosts: node2
  roles:
    - install_rpms_node2
    - custom_actions_node2
  become_user: node2_user
... This continues for multiple nodes ...
The entire playbook takes about 1.5 hours to run.
But many of the 'plays' within the playbook could run in parallel. For instance, in the above snippet, "node1" and "node2" could run in parallel, but later down the chain some have to wait for others, etc.
Is there a way we can
put in a flag to say that node1 and node2 can run in parallel?
What's the best practice for this type of playbook? i.e. parallelise where possible, declare a dependency after a set of 'plays', start the next set, etc.
Following my comment, one thing you can try to work around the poor design:
- hosts: all_nodes
  run_once: true
  roles:
    - set_keys
    - download_rpms
  become: true
  tasks:
    - name: Run machine specific roles
      ansible.builtin.include_role:
        name: "{{ item }}_{{ inventory_hostname }}"
      loop:
        - install_rpms
        - custom_actions
Notes:
Untested; verify in your own environment.
This will break as soon as there is no specific role for a given node. Add conditions/tests if that is the case.
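One way to guard the include against missing roles (a sketch, untested; it assumes the roles live under roles/ next to the playbook, and uses the fact that the include decision is made on the controller):

```yaml
- name: Run machine specific roles only where they exist
  ansible.builtin.include_role:
    name: "{{ item }}_{{ inventory_hostname }}"
  loop:
    - install_rpms
    - custom_actions
  # skip the include when no matching role directory exists on the controller
  when: (playbook_dir ~ '/roles/' ~ item ~ '_' ~ inventory_hostname) is directory
```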
I am using ansible 2.9.4. My goal is to prevent running some playbook on all nodes by accident or without tags. This is my app.yaml:
- hosts: all
  remote_user: root
  vars:
    server_domain: mydomain.com
    project_name: project
  tasks:
    - name: checking limit arg
      fail:
        msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
      when: ansible_limit is not defined
      run_once: true
    - name: "suppress message if tag given"
      set_fact: suppress_message=yes
      tags: dev,test,prod
    - name: "message"
      fail:
        msg: "You didn't choose environment 'dev,test,prod'"
      when: suppress_message is not defined
  roles:
    - testrole
The problem is that when I don't use the --limit option, the role testrole runs successfully and only then does the error message appear - too late, since I have already run it on all nodes.
Even when I specify tags with --tags "mytag", it will not check whether a limit was specified.
In a similar way I would like to force the use of tags, so that every time you run the playbook you have to specify an environment tag (dev, test, prod) - e.g. ssh keys for different environments, configuration files, etc.
What I would expect: if I don't specify the tag dev, test or prod, suppress_message would not be set, so the next task, named message, would fail with "You didn't choose environment".
The fact is, if I don't specify any tag:
- suppress message has state OK
- message is skipped
If I specify a valid tag, --tags "dev":
- suppress message has state OK
- message is not even mentioned (I would expect it to be skipped)
If I specify an "invalid tag", --tags "dev123":
- suppress message is not mentioned
- message is not mentioned
A solution for the limit could be to replace - hosts: all with - hosts: randomtext, so that when no limit is specified there will be no match - but what about tags/environments? I am quite lost about how ansible works; the logic deciding what will run seems quite chaotic in this example.
Below is an example playbook that should achieve what you need to do.
Hosts are defined in a myhosts variable on the command line, the first task will abort the play if this variable is not set
Through use of the two “special tags”, always and never, we can ensure that:
the above check always runs, and
the inclusion of your testrole never runs — unless dev, test, or prod tags are explicitly specified
There’s a helpful message before the inclusion of the testrole, so the user is not left confused if the play exits silently because of unset tags
- hosts: '{{ myhosts | default("localhost") }}'
  tasks:
    - name: Fail if hosts are not defined
      run_once: true
      fail:
        msg: >
          You must define hosts in the myhosts variable,
          e.g. `-e myhosts=foo.example.com` on the command line
      when: myhosts is undefined
      tags:
        - always
    - name: Helpful message
      run_once: true
      debug:
        msg: >
          This playbook does nothing unless the environment is specified with
          the `--tags` option on the command line (dev, test, or prod).
      tags:
        - always
    - name: Include role only when tags are specified
      include_role:
        name: testrole
      tags:
        - never
        - dev
        - test
        - prod
This would be then executed like so:
$ ansible-playbook app.yaml --extra-vars myhosts=foo.example.com --tags dev
Change tasks to pre_tasks.
The order is pre_tasks, roles, tasks, post_tasks.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
The order in which they appear in the playbook is irrelevant.
Another option is add a task to include the role. So change
roles:
  - testrole
to
- include_role:
    name: testrole
Try using tags like this:
tags:
  - dev,test,prod
In my playbook I have:
roles:
  - role: role_1
  - role: role_2
In my roles directory structure I have defined vars with different values under role_1 and role_2
When invoking the playbook, I am trying to ask it to only consider the role I am interested in at the moment, by using --limit:
ansible-playbook -i ./hosts.yml myplaybook.yml --limit role_1
What I expect:
It will disregard the variable values under role_2, because I asked it to limit to role_1
What I get:
role_2's variable values override role_1's, because role_2 is lexicographically later. Ugh!
What else I tried:
I tried to use tags:
roles:
  - role: role_1
    tags: [ role_1_tag ]
  - role: role_2
    tags: [ role_2_tag ]
The result is still the undesired one.
Edit: After following the first answer
The syntax for include_role as given here (and in the documentation) does not work without modifications. In the end I made this:
tasks:
  - name: include role_1
    include_role:
      name: role_1
      apply:
        tags:
          - role_1_tag
    tags: always
  - name: include role_2
    include_role:
      name: role_2
      apply:
        tags:
          - role_2_tag
    tags: always
But even then, When I run it, nothing is executed. I run it like this:
ansible-playbook -i ./hosts.yml % --limit role_1 -t role_1_tag
Trying another way:
tags: always
tasks:
  - name: include role_1
    include_role:
      name: role_1
      apply:
        tags:
          - role_1_tag
  - name: include role_2
    include_role:
      name: role_2
      apply:
        tags:
          - role_2_tag
Ansible tries to proceed with the other tasks, but without accessing any of the vars under the role, resulting in a variable-not-found error.
Since I put the tags: outside of tasks, I guessed the tags needed to be mentioned for every task. Even doing that, I got the same result, which is:
fatal: [host_group]: FAILED! => {"msg": "'my_vars' is undefined"}
Edit2: Using public: yes seems to get Ansible to read the vars; however, again, it reads them all, and --limit has no effect.
Q: "Only consider the role I am interested in at the moment."
A: Use include_role. For example
- include_role:
    name: role_1
- include_role:
    name: role_2
By default, the parameter public is no
This option dictates whether the role's vars and defaults are exposed to the playbook. If set to yes the variables will be available to tasks following the include_role task. This functionality differs from standard variable exposure for roles listed under the roles header or import_role as they are exposed at playbook parsing time, and available to earlier roles and tasks as well.
Notes
--limit further limits the selected hosts to an additional pattern; it selects hosts, not roles.
apply tags is not properly described in the documentation. See
include_role with apply tags does not work #52063
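Building on the include_role approach, the role to run could also be chosen at run time with a variable rather than --limit. A sketch, where target_role is an assumed extra-var name, not part of the original question:

```yaml
- hosts: all
  tasks:
    - name: Include only the requested role
      include_role:
        name: "{{ target_role }}"   # hypothetical variable, passed with -e
        public: yes                 # expose the role's vars/defaults to later tasks
```

It would then be run with something like `ansible-playbook -i ./hosts.yml myplaybook.yml -e target_role=role_1`, so only that role's vars are ever loaded.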
I have the following set up for Ansible, and I would like to parameterize a filter that will loop, and filter out specific hosts.
- name: run on hosts
  hosts: "{{ item }}"
  roles:
    - directory/role-name
  with_items:
    - us-east-1a
    - us-east-1b
    - us-east-1c
The result would be that the role called role-name would be first run on us-east-1a hosts, then us-east-1b... etc.
The above simply errors out with:
ERROR! 'with_items' is not a valid attribute for a Play
Is there a way to accomplish what I am trying to do, which is chunking my host list into groups, and running the same role against them, one at a time?
The following achieves the result I am looking for, but is clunky, and not dynamic in length.
- name: run on us-east-1a
  hosts: "us-east-1a"
  roles:
    - my-role

- name: run on us-east-1b
  hosts: "us-east-1b"
  roles:
    - my-role

- name: run on us-east-1c
  hosts: "us-east-1c"
  roles:
    - my-role
I think the only way to (1) have common code and (2) serialise play execution per group of hosts (with targets inside a group running in parallel) is to split your playbook in two:
playbook-main.yml
---
- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1a

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1b

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1c
playbook-sub.yml
- hosts: "{{ host_group_to_run }}"
  roles:
    - my-role
  # other common code
If you wanted to serialise per host, there is a serial declaration that might be used in conjunction with this suggestion. But despite your comments and edit, it's unclear what you need, because you sometimes refer to us-east-1a as a "host" in the singular, and other times as a "group of hosts" or an "availability zone".
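For per-host serialisation, serial in the sub-playbook might look like this (a minimal sketch, reusing the names from the suggestion above):

```yaml
# playbook-sub.yml
- hosts: "{{ host_group_to_run }}"
  serial: 1   # process one host at a time within the current group
  roles:
    - my-role
```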
Will host patterns do the job?
- name: run on us-east-1a
  hosts: us-east-1a,us-east-1b,us-east-1c
  roles:
    - my-role
Update: #techraf has opened my eyes with his comment: a host pattern alone will not do the job.
It will just concatenate all hosts from all groups.
But it does so in a predictable way, which in some cases can be used to iterate over the hosts in every group separately.
See this answer for details.
We have a "periodic" tag in our roles that is intended to be run at regular intervals by our Ansible box for file assurance, etc. Would it be possible to have a playbook for periodic runs that calls the other playbooks with the appropriate host groups and tags?
The only way to execute an Ansible playbook "with the appropriate host groups and tags" is to run ansible-playbook executable. This is the only case in which all the data structures starting from the inventory would be created in isolation from the currently running playbook.
You can simply call the executable using command module on the control machine:
- hosts: localhost
  tasks:
    - command: ansible-playbook {{ playbook }} --tags {{ tags }}
You can also use local_action or delegate_to.
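For instance, delegating the command back to the control machine from within any play might look like this (a sketch; the playbook name and tag are placeholders):

```yaml
- hosts: some_group
  tasks:
    - name: Kick off another playbook from the controller
      command: ansible-playbook periodic.yml --tags periodic
      delegate_to: localhost
      run_once: true
```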
It might be that you want to include plays, or use roles, however given the problem description in the question, it's impossible to tell.
Here is what we ended up with. It turns out that tags and variables passed on the command line are inherited all the way down the line. This allowed us to pass this on the command line:
ansible-playbook -t periodic periodic.yml
Which calls a playbook like this:
---
- name: This playbook must be called with the "periodic" tag.
  hosts: 127.0.0.1
  any_errors_fatal: True
  tasks:
    - fail:
      when: "'periodic' not in ansible_run_tags"

- name: Begin periodic runs for type 1 servers
  include: type1-server.yml
  vars:
    servers:
      - host_group1
      - host_group2
      - ...

- name: Begin periodic runs for type 2 servers
  ...
Our 'real' playbooks have - hosts: "{{ servers }}" so that they can inherit it from the parent. The tasks in our roles are tagged with "periodic" for things that need to be run on a schedule. We then use systemd to schedule the runs. You can use cron, but systemd is better IMHO. Examples can be provided upon request.
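As an illustration of the systemd scheduling mentioned above, a timer/service pair might look like this (unit names, paths, and the schedule are assumptions, not from the original post):

```ini
# /etc/systemd/system/ansible-periodic.service
[Unit]
Description=Periodic Ansible run

[Service]
Type=oneshot
ExecStart=/usr/bin/ansible-playbook -t periodic /etc/ansible/periodic.yml

# /etc/systemd/system/ansible-periodic.timer
[Unit]
Description=Schedule the periodic Ansible run

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

After installing both units, `systemctl enable --now ansible-periodic.timer` would activate the schedule.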