In our deployment strategy, our playbooks take on the following structure:
workflow.yml
- hosts: host1
  tasks:
    - name: setup | create virtual machines on host1
      include_tasks: setup.yml

- name: run | import a playbook that will target new virtual machines
  include: "virtual_machine_playbook/main.yml"

- hosts: host1
  tasks:
    - name: cleanup | destroy virtual machines on host1
      include_tasks: destroy.yml
virtual_machine_playbook/main.yml
- hosts: newCreatedVMs
  roles:
    - install
    - run
This works great most of the time. However, if virtual_machine_playbook/main.yml errors out for some reason, the last hosts block never runs and we have to destroy our VMs manually. I wanted to know if there is a way to mandate that each hosts block runs, regardless of what happens before it.
Other Notes:
The reason we structure our playbooks this way is that we would like everything to be as contained as possible. Variables created in each hosts block are rather important to the ones that follow; splitting them out into separate files and invocations is something we have not had much success with.
We have tried the standard Ansible approach for error handling as found here, but most of the options only apply at the task level (blocks, ignore_errors, etc.).
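For reference, the task-level pattern we tried looks roughly like this (a minimal sketch; setup.yml and destroy.yml are our own task files):
- hosts: host1
  tasks:
    - name: setup with guaranteed cleanup, but only within this single play
      block:
        - name: setup | create virtual machines on host1
          include_tasks: setup.yml
      always:
        # the always section runs even if the block above fails
        - name: cleanup | destroy virtual machines on host1
          include_tasks: destroy.yml
The limitation is that a playbook import is play-level, so it cannot sit inside the block; a failure in the imported playbook therefore never reaches the always section.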
The situation
I have a playbook whose execution I sometimes want to limit to specific hosts: web servers and application servers, e.g., --limit mywebserver,appserver1,appserver2. I have two groups:
webservers contains the web servers, including mywebserver
appservers contains the application servers, including appserver1 and appserver2
The playbook contains a play that I run for each of my webservers:
- name: Install and prepare Webservers
  hosts: webservers
  roles:
    - role: webserver_setup
In that play, I execute tasks, e.g., setting up a website for each application server that is stored in a per-webserver group:
- name: Configure website for all application servers
  notify: Reload webserver
  loop: "{{ groups['webserver_' + inventory_hostname] }}"
  ansible.builtin.template:
    src: ./templates/website.y2
    dest: "/etc/nginx/sites-available/{{ hostvars[item]['app_server'] }}.conf"
    mode: '0640'
    owner: webserver
    group: webserver
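For reference, the inventory structure this assumes looks roughly like this (hypothetical names, derived from the groups above and the webserver_<hostname> group used in the loop); each application server additionally defines an app_server host variable that is used for the config file name:
[webservers]
mywebserver

[appservers]
appserver1
appserver2

[webserver_mywebserver]
appserver1
appserver2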
That works fine if I run the playbook without --limit. However, if I specify a host with --limit, the loop in the play is still executed for all application servers, because it does not take the limit into account. (No surprise here.)
My question
My problem is that I have a lot of application servers, and I would like to limit the play's loop to only the hosts specified by --limit. How would I do that?
Thoughts on solutions
There are special variables provided by Ansible, ansible_play_hosts and ansible_play_batch, but they only provide the hosts limited by --limit for the current play. In my case, these variables would contain only mywebserver, because the play is run for hosts: webservers.
I've thought about refactoring the play so that it is run for each application server instead, and using delegate_to to execute the task above on the associated webserver. However, I think this would have some drawbacks (SSH login overhead for each app server, webserver config reload overhead for each app server, possible conflicts due to parallel task execution on the web server).
ansible_limit provides the limit that was passed on the command line, and can be expanded into a list of hostnames using the inventory_hostnames lookup. You could just use this in a condition; it's not entirely clear from the question what your actual structure is, but one of the following is probably right:
when: item in query('inventory_hostnames', ansible_limit | default('all'))
# OR
when: hostvars[item]['app_server'] in query('inventory_hostnames', ansible_limit | default('all'))
You might also be able to eliminate items from the loop entirely, though that makes the code a bit more complex and it's helpful to use intermediate variables to make it more readable:
- name: Configure website for all application servers
  ansible.builtin.template:
    src: website.y2  # Relative paths in template actions use `templates/` automatically, so you shouldn't specify it.
    dest: "/etc/nginx/sites-available/{{ hostvars[item]['app_server'] }}.conf"
    mode: "0640"
    owner: webserver
    group: webserver
  notify: Reload webserver
  loop: "{{ webserver_group | intersect(limit_hosts) }}"
  vars:
    webserver_group: "{{ groups['webserver_' ~ inventory_hostname] }}"
    limit_hosts: "{{ query('inventory_hostnames', ansible_limit | default('all')) }}"
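Either way, the loop then honours whatever limit is passed on the command line, for example (the playbook name is just a placeholder):
ansible-playbook site.yml --limit 'mywebserver,appserver1,appserver2'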
I'm trying to skip a task in a playbook based on the group in my inventory file.
Inventory:
[test]
testserver1
[qa]
qaserver1
[prod]
prodserver1
prodserver2
I'm pretty sure I need to add a when clause to my task, but I'm not sure how to reference the inventory file.
- name: Add AD group to local admin
  ansible.windows.win_group_membership:
    name: Administrators
    members:
      - "{{ local_admin_ad_group }}"
    state: present
  when: inventory is not prod
Basically, I want to run the task above on all the servers except the ones in the group called prod.
You can have all your hosts in one inventory, but I suggest using separate inventory files for dev, staging, qa, and prod. If you separate environments by condition instead, it can happen quite quickly that you mess up a condition or forget to add it to a new task altogether, and accidentally run a task on a prod host where it should not run.
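A minimal sketch of that layout (file and playbook names are only placeholders):
inventories/
  test/hosts    # [test] group with testserver1
  qa/hosts      # [qa] group with qaserver1
  prod/hosts    # [prod] group with prodserver1, prodserver2
You then pick the environment explicitly at invocation time:
ansible-playbook -i inventories/qa/hosts playbook.yml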
If you still want to have all your hosts in the same inventory, you can separate them into different plays (you can have multiple plays in the same playbook) and use hosts to specify where each should run.
For example:
- name: play one
  hosts:
    - qa
    - test
  tasks:
    <all your tasks for qa and test>

- name: play two
  hosts: prod
  tasks:
    <all your tasks for prod>
If you want to do it on a per-task level, you can use the group_names variable.
For example:
- name: Add AD group to local admin
  ansible.windows.win_group_membership:
    name: Administrators
    members:
      - "{{ local_admin_ad_group }}"
    state: present
  when: '"prod" not in group_names'
In that case you need to be really careful when you change things, so that your conditions still do what they are supposed to do.
I want to build a Docker image locally and deploy it so it can then be pulled onto the remote server I'm deploying to. To do this, I first need to check out the code to be built from git.
I have an existing role which installs git, sets up keys for reading from our repo etc. I want to run this role locally to check out the code I care about.
I looked at local_action, delegate_to, etc., but haven't figured out an easy way to do this. The best approach I could find was:
- name: check out project from git
  delegate_to: localhost
  include_role:
    name: configure_git
However, this doesn't work: I get a complaint that there is a syntax error on the name line. If I remove the delegate_to line it works (but runs on the wrong server). If I replace include_role with debug, it runs locally. It's almost as if Ansible explicitly refuses to run an included role locally, though I can't find that anywhere in the documentation.
Is there a clean way to run this, or other roles, locally?
Extract from the include_role module documentation
Task-level keywords, loops, and conditionals apply only to the include_role statement itself.
To apply keywords to the tasks within the role, pass them using the apply option or use ansible.builtin.import_role instead.
Ignores some keywords, like until and retries.
I actually don't know if the error you get is linked to delegate_to being ignored (I seriously doubt it is the case...). In any case, this is not the correct way to use it here:
- name: check out project from git
  include_role:
    name: configure_git
    apply:
      delegate_to: localhost
Moreover, this is most probably a bad idea. Imagine your play targets 100 servers: the role will run one hundred times (unless you also apply run_once: true). I would run the role "normally" on localhost in a dedicated play, then do the rest of the job on my targets in the next one(s):
- name: Prepare env on localhost
  hosts: localhost
  roles:
    - role: configure_git

- name: Do the rest on other hosts
  hosts: my_group
  tasks:
    - name: dummy.
      debug:
        msg: "Dummy"
I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes Ansible think these are different targets. The only way I've been able to work around this is to create two different inventories.
It's a bit hard to guess what your playbook looks like, anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
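For illustration (a hypothetical inventory): if the same host is in two groups that both define testvar, only one value survives for that host:
[group1]
server1

[group2]
server1

[group1:vars]
testvar=aaa

[group2:vars]
testvar=zzz
Here server1 ends up with a single testvar (zzz by default, since for groups at the same depth the alphabetically last group wins); there is no per-group copy of the variable.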
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...

    - name: gather facts
      setup:

    - name: install things
      apk:
In these examples, testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually, to prevent Ansible from connecting to possibly unprovisioned servers.
I am using Ansible to script a deployment for an API. I would like this to work sequentially through each host in my inventory file so that I can fully deploy to one machine at a time.
With the out-of-the-box behaviour, each task in my playbook is executed for every host in the inventory file before moving on to the next task.
How can I change this behaviour to execute all tasks for a host before starting on the next host? Ideally I would like to only have one playbook.
Thanks
Have a closer look at Rolling Updates:
What you are searching for is
- hosts: webservers
  serial: 1
  tasks:
    - name: ...
Alternatively, use --forks=1 to specify the number of parallel processes to use (the default is 5).
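For example (the playbook name is just a placeholder):
ansible-playbook deploy.yml --forks=1
Note that forks is a global cap on parallelism for the whole run, rather than a per-play setting like serial.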
Strategies control how Ansible parallelizes tasks on a per-host basis. See https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html
The main strategies are linear (the default) and free (the quickest); serial is a play-level keyword rather than a strategy.
- hosts: all
  strategy: free
  tasks:
    ...