I have a list of hosts and roles to apply to each. The machines are hosted on a hypervisor (XenServer / ESX).
I'm trying to revert each machine back to a snapshot before executing the Ansible plays. My flow of actions looks like this:
1. Take the VM name from a file (machines.txt)
2. Connect to the hypervisor and revert the machine's snapshot through the CLI
3. Once SSH is up on the machine, connect and execute the rest of the plays
4. Process one machine at a time.
The best I have come up with so far is executing the revert task on the hypervisor in one play, and then applying the roles in a different play. This is what my playbook looks like:
---
- name: Reverting
  hosts: xenhost
  tasks:
    - include_tasks: tasks/vms/revert_snap.yml
      with_lines: cat "machines.txt"

- name: Ensure NTP is installed
  hosts: temp
  roles:
    - ntp
The solution I've found so far is:
1. Execute a task that reverts the snapshot from the CLI, running it on the hypervisor
2. Execute my whole maintenance play on the machines
   2.1 Wait for SSH on each
   2.2 Apply the roles
3. Clean up and take new snapshots on the hypervisor
The drawback of this approach is that it first reverts ALL of the machines, then applies all the roles, and then snapshots all of them in bulk at the end.
I can't combine the tasks into a single play because they are executed on different hosts.
Any better solutions? - which will revert one machine, apply all roles, take snapshot, then continue to the next machine.
You could group all of those tasks into a role and execute it in a play that runs with serial: 1:
- hosts: hostgroup
  serial: 1
  roles:
    - { role: rolename }
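Expanded a bit, the same idea (whether the tasks live in a role or directly in the play) could look like the sketch below. The host names, the timeout, and the xe commands/variables are placeholders based on the question, so adjust them to your environment:

- name: Revert, configure and re-snapshot one VM at a time
  hosts: temp                  # the guest VMs
  serial: 1
  gather_facts: false
  pre_tasks:
    - name: Revert the snapshot on the hypervisor
      command: xe snapshot-revert snapshot-uuid={{ snapshot_uuid }}   # placeholder variable
      delegate_to: xenhost

    - name: Wait until the VM accepts SSH connections again
      wait_for_connection:
        timeout: 600

    - name: Gather facts once the VM is back up
      setup:

  roles:
    - ntp

  post_tasks:
    - name: Take a fresh snapshot on the hypervisor
      command: xe vm-snapshot vm={{ inventory_hostname }} new-name-label=post-maintenance   # placeholder
      delegate_to: xenhost

With serial: 1 the whole play, including the delegated hypervisor steps, finishes for one VM before the next one starts.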
I want to build a docker image locally and deploy it so it can then be pulled on the remote server I'm deploying to. To do this I first need to check out code from git to be built.
I have an existing role which installs git, sets up keys for reading from our repo etc. I want to run this role locally to check out the code I care about.
I looked at local_action, delegate_to, etc., but haven't figured out an easy way to do this. The best approach I could find was:
- name: check out project from git
  delegate_to: localhost
  include_role:
    name: configure_git
However, this doesn't work: I get a complaint that there is a syntax error on the name line. If I remove the delegate_to line it works (but runs on the wrong server). If I replace include_role with debug it runs locally. It's almost as if Ansible explicitly refuses to run an included role locally, though I can't find that stated anywhere in the documentation.
Is there a clean way to run this, or other roles, locally?
Extract from the include_role module documentation
Task-level keywords, loops, and conditionals apply only to the include_role statement itself.
To apply keywords to the tasks within the role, pass them using the apply option or use ansible.builtin.import_role instead.
Ignores some keywords, like until and retries.
I actually don't know if the error you get is linked to delegate_to being ignored (I seriously doubt it is...). In any case, that is not the correct way to use it here; the keyword has to be passed through the apply option:
- name: check out project from git
  include_role:
    name: configure_git
    apply:
      delegate_to: localhost
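For completeness, the import_role alternative mentioned in the documentation extract would look roughly like this sketch (with a static import, task keywords such as delegate_to are inherited by the imported tasks):

- name: check out project from git
  delegate_to: localhost
  ansible.builtin.import_role:
    name: configure_git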
Moreover, this is most probably a bad idea. Imagine your play targets 100 servers: the role will run one hundred times (unless you also apply run_once: true). I would run the role "normally" on localhost in a dedicated play, then do the rest of the job on my targets in the next one(s):
- name: Prepare env on localhost
  hosts: localhost
  roles:
    - role: configure_git

- name: Do the rest on other hosts
  hosts: my_group
  tasks:
    - name: dummy
      debug:
        msg: "Dummy"
In our deployment strategy, our playbooks take on the following structure:
workflow.yml
- hosts: host1
  tasks:
    - name: setup | create virtual machines on host1
      include_tasks: setup.yml

    - name: run | import a playbook that will target new virtual machines
      include: "virtual_machine_playbook/main.yml"

- hosts: host1
  tasks:
    - name: cleanup | destroy virtual machines on host1
      include_tasks: destroy.yml
virtual_machine_playbook/main.yml
- hosts: newCreatedVMs
  roles:
    - install
    - run
This works great most of the time. However, if for some reason virtual_machine_playbook/main.yml errors out, the last hosts block does not run and we have to destroy our VMs manually. I wanted to know if there is a way to mandate that each hosts block runs, regardless of what happens before it.
Other Notes:
The reason that we structure our playbooks this way is because we would like everything to be as contained as possible. Variables are created in each hosts block that are rather important to the ones that follow. Splitting them out into separate files and invocations is something we have not had much success with
We have tried the standard Ansible approach to error handling (as found here), but most of the options only apply at the task level (blocks, ignore_errors, etc.).
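For reference, the task-level handling mentioned above would look roughly like the sketch below. It is scoped to the tasks of a single play, which is exactly the limitation described: the imported playbook that targets the new VMs cannot live inside the block.

- hosts: host1
  tasks:
    - block:
        - name: setup | create virtual machines on host1
          include_tasks: setup.yml
        # the playbook that targets newCreatedVMs cannot be included here
      always:
        - name: cleanup | destroy virtual machines on host1
          include_tasks: destroy.yml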
I have about 10 playbooks which do modular things, but all of them require restarting the machine to take effect.
Is there a simple way to say "Run these 10 playbooks, but skip the restart phase in all of them, only restarting at the end"?
Single playbook example
tasks:
  - name: task1
    ...
  - name: task2
    ...
  - name: task3
    ...
  - name: Reboot machine
    become: yes
    reboot:
Attempt at combining them
- import_playbook: pb1.yml
- import_playbook: pb2.yml
- import_playbook: pb3.yml
...
Problem with attempted solution
There are 10 reboots of the machine when only one is necessary, which makes the job take much longer than it should: an hour instead of 10 minutes on some of the slower machines.
A simple fix is to remove the reboots from all of the individual playbooks, but that introduces a new problem: none of the individual playbooks work on their own anymore, since each one does need its reboot to take effect.
I could copy-paste everything into new playbooks and manually remove the reboots, but I don't like moving away from a single source, especially since these playbooks are constantly being tweaked and updated as code changes and new system behaviors need to be addressed.
In each playbook, add the same tag (e.g. my_reboot) to the reboot task, then run the combined playbook with --skip-tags my_reboot and do the single reboot at the end, either via a handler in your top-level playbook or in post_tasks.
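A sketch of that idea (the playbook names and the all host pattern are placeholders): each individual playbook keeps its tagged reboot so it still works standalone, while the combined playbook is run with --skip-tags my_reboot and reboots once at the end.

# pb1.yml (and the other individual playbooks): tag the existing reboot task
- hosts: all
  tasks:
    - name: Reboot machine
      become: yes
      reboot:
      tags:
        - my_reboot

# site.yml: run with  ansible-playbook site.yml --skip-tags my_reboot
- import_playbook: pb1.yml
- import_playbook: pb2.yml
- import_playbook: pb3.yml

- name: Single reboot at the end
  hosts: all
  become: yes
  tasks:
    - name: Reboot machine
      reboot: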
I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes Ansible think these are different targets. The only way I've been able to work around this is to create two different inventories.
It's a bit hard to guess what your playbook looks like, but anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...

    - name: gather facts
      setup:

    - name: install things
      apk:
In these examples, testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.
I am using Ansible to script a deployment for an API. I would like this to work sequentially through each host in my inventory file so that I can fully deploy to one machine at a time.
With the out-of-the-box behaviour, each task in my playbook is executed for every host in the inventory file before moving on to the next task.
How can I change this behaviour to execute all tasks for one host before starting on the next host? Ideally I would like to have only one playbook.
Thanks
Have a closer look at Rolling Updates:
What you are searching for is
- hosts: webservers
  serial: 1
  tasks:
    - name: ...
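The Rolling Updates documentation also allows serial to take a list of batch sizes, including percentages, if strict one-at-a-time is too slow; a sketch:

- hosts: webservers
  serial:
    - 1
    - 5
    - "100%"
  tasks:
    - name: ...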
You can also pass --forks=1 on the command line ("specify number of parallel processes to use (default=5)").
Strategies let you control how tasks are parallelized on a per-host basis. See https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html
The most commonly used strategies are linear (the default) and free (the quickest).
- hosts: all
  strategy: free
  tasks:
    ...