Master playbook: create VMs before loading inventory for bootstrapping - Ansible

Is it possible to have one YAML file to create VMs and bootstrap the new servers?
I have a master playbook:
---
# Master playbook for
# - creating server
# - bootstrap server
#
- import_playbook: create_vm.yml
- import_playbook: bootstrap_vm.yml
create_vm.yml runs against hosts: localhost, and bootstrap_vm.yml runs against hosts: all.
I'm using a dynamic inventory, but bootstrap_vm.yml does not know about the newly created servers. Is it possible to somehow update the inventory after the VMs are created and before the bootstrapping starts?

meta: refresh_inventory was added in Ansible 2.0 specifically for this type of requirement.
meta: refresh_inventory (added in Ansible 2.0) forces a reload of the inventory, which in the case of dynamic inventory scripts means they are re-executed. If the dynamic inventory script uses a cache, Ansible cannot know this and has no way of refreshing it (you can disable the cache or, if one is available for your specific inventory data source (e.g. AWS), you can use an inventory plugin instead of an inventory script). This is mainly useful when additional hosts are created and users wish to target them without resorting to the add_host module.
Add this as a task at the end of your create_vm.yml playbook, or in a specific play in between the two playbooks.
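For example, a minimal sketch of the in-between variant in the master playbook:

- import_playbook: create_vm.yml

- hosts: localhost
  gather_facts: no
  tasks:
    - name: re-read the dynamic inventory so the new VMs are visible
      meta: refresh_inventory

- import_playbook: bootstrap_vm.yml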
Ref: https://docs.ansible.com/ansible/latest/modules/meta_module.html

Related

Loop over other hosts within ansible playbook using serial

I have 80+ hosts that run my application, and I'm updating a long-existing Ansible playbook to change our load balancer. In our current load balancer setup, hosts can be added/removed from the load balancer in one Ansible play by shelling out to the AWS CLI. However, we're switching to a load balancer configured on a handful of our own hosts, and we will take hosts in and out by manipulating text files on those hosts using Ansible. I need an inner loop over different hosts within a playbook while using serial.
I'm having trouble structuring the playbook such that I can fan out blockinfile commands to hosts in group tag_Type_edge while deploying to the 80 tag_Type_app hosts with serial: 25%.
Here's what I want to be able to do:
---
- hosts: tag_Type_app
  serial: "25%"
  pre_tasks:
    - name: Gathering ec2 facts
      action: ec2_metadata_facts
    - name: Remove from load balancers
      debug:
        msg: "This is where I'd fan out to multiple different hosts from group tag_Type_edge to manipulate text files to remove the 25% of hosts from tag_Type_app from the load balancer"
  tasks:
    - name: Do a bunch of work to upgrade the app on the tag_Type_app machines while out of the load balancer
      debug:
        msg: "deploy new code, restart service"
  post_tasks:
    - name: Put back in load balancer
      debug:
        msg: "This is where I'd fan out to multiple different hosts from group tag_Type_edge to manipulate text files to *add* the 25% of hosts from tag_Type_app back into the load balancer"
How can I structure this to allow for the inner loop over tag_Type_edge while using serial: 25% on all the tag_Type_app boxes?
If I may say so, yuk.
On the ACA project, each host had a file called /etc/ansible/facts.d/load_balancer_state.fact. That was just an INI file that set lb_state to enabled, disabled, or stopped.
We then ran the setup module (or gather_facts: yes) to get the state of each host, and ran the template module to create the load balancer config file. Very clean. Very simple.
To change a state, change the file and re-run the template module.
This was dynamic inventory.
If you have static inventory, it's even easier. Just set lb_state on each host, either in an INI file like host1 lb_state=enabled, or in files in the host_vars directory. Change the inventory, re-run the template module, and (if necessary) tell the load balancer to reload the config file.
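A minimal sketch of that pattern, assuming an INI section named [state]; the template name and paths are illustrative:

# /etc/ansible/facts.d/load_balancer_state.fact, on each app host
[state]
lb_state=enabled

# Gather the local facts, then render the LB config on the edge hosts
- hosts: tag_Type_app
  gather_facts: yes            # loads ansible_local.load_balancer_state

- hosts: tag_Type_edge
  tasks:
    - name: rebuild LB config from each app host's lb_state
      template:
        src: lb.conf.j2        # the template loops over groups['tag_Type_app'] and reads
        dest: /etc/lb/lb.conf  # hostvars[h].ansible_local.load_balancer_state.state.lb_state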

Ansible group variable evaluation with local actions

I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there anyway to do this? In essence, I'm looking for a pseudo-host for localhost that makes it think these are different targets. The only way I've been able to workaround this is to create two different inventories.
It's a bit hard to guess what your playbook looks like, anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...
    - name: gather facts
      setup:
    - name: install things
      apk: ...
In these examples testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps to these servers.
One of these apps already has its own Ansible playbook which I wanted to use. Now my problem arises from this playbook, as it's limited to hosts: [prod] and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but that is a solution which I don't like.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?
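For reference, the add_host approach mentioned in the question would look roughly like this (a sketch; 'webservers' stands in for the upper-level group):

- hosts: webservers
  gather_facts: no
  tasks:
    - name: expose the current hosts under the 'prod' group the included playbook expects
      add_host:
        name: "{{ item }}"
        groups: prod
      loop: "{{ ansible_play_hosts }}"

- import_playbook: foo.yml   # its hosts: [prod] now matches (- include: foo.yml on older Ansible)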

How to create dynamic variables in ansible

The real scenario: I want to get the resource ID of an SQS queue in AWS, which is returned after the execution of a playbook, and then use this variable in files to configure the application.
Persisting variables from one playbook to another
Checking the documentation, modules like set_fact and register have scope only for that specific host, yet there are many reasons to use variables from one host on another.
Alternatives I can think of:
using the command module and echoing the variables to a file, later loading that file via the vars section or an include;
setting environment variables and then accessing them, but this will be difficult.
So what is the solution?
If you're gathering facts, you can access hostvars via the normal jinja2 + variable lookup:
e.g.
- hosts: serverA.example.org
  gather_facts: True
  ...
  tasks:
    - set_fact:
        taco_tuesday: False
and then, if this has run, on another host:
- hosts: serverB.example.org
  ...
  tasks:
    - debug: var=hostvars['serverA.example.org']['ansible_memtotal_mb']
    - debug: var=hostvars['serverA.example.org']['taco_tuesday']
Keep in mind that if you have multiple Ansible control machines (where you call ansible and ansible-playbook from), you should take advantage of the fact that Ansible can store its facts/variables in a cache (currently Redis and JSON); that way the control machines are less likely to have differing hostvars. With this, you could point your control machines at a file in a shared folder (which has its risks: what if two control machines are running at the same time?), or set/get facts from a Redis server.
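For example, a minimal ansible.cfg sketch enabling a JSON-file fact cache (the shared path is illustrative):

[defaults]
gathering = smart
fact_caching = jsonfile
# shared folder visible to all control machines (illustrative path)
fact_caching_connection = /mnt/shared/ansible_fact_cache
# cache entries expire after a day (seconds)
fact_caching_timeout = 86400

Use fact_caching = redis to store facts in a Redis server instead.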
For my uses of Amazon data, I prefer to just fetch the resource each time using a tag/metadata lookup. I wrote an Ansible plugin that allows me to do this a little more easily as I prefer this to thinking about hostvars and run ordering (but your mileage may vary).
You can pass variables On The Command Line: http://docs.ansible.com/ansible/playbooks_variables.html#passing-variables-on-the-command-line
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
You can use a local connection to run one playbook, capture a variable, and pass it to another playbook:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - shell: ansible-playbook -i ...
      register: sqs_id
    - shell: ansible-playbook -i ... -e "sqs_id={{ sqs_id.stdout }}"
Also delegation might be useful in this scenario:
http://docs.ansible.com/ansible/playbooks_delegation.html#delegation
Also, you can store the output in a local file and use it (see http://docs.ansible.com/ansible/playbooks_delegation.html#delegation):
- name: take a sqs id
  local_action: command cat ~/sqs_id
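For that to work, something has to write the file first; a sketch completing the pattern (the path and variable names are illustrative):

- name: save the sqs id on the control machine
  local_action: copy content="{{ sqs_id }}" dest="~/sqs_id"

- name: take a sqs id
  local_action: command cat ~/sqs_id
  register: sqs_id_file

- debug: msg="sqs id is {{ sqs_id_file.stdout }}"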
PS:
I don't understand why you can't write a complex playbook that includes many roles sharing variables.
You can write "common" variables to host_vars or group_vars; this way all the servers have access to them.
Another way may be to create a custom ansible module/lookup plugin to hide all the boilerplate code and get an easy and flexible access to the variables you need.
I had a similar issue with Azure DevOps pipelines.
I created VMs with Terraform; the SSH keys and Windows username/password were generated by Terraform and stored in a Key Vault.
So I then needed to query the Key Vault before running Ansible on all created VMs. I ended up using the Azure Python SDK to get all secrets. I also generate an inventory file and a host_vars folder with a file for each VM.
The actual playbook is now very basic and does the job perfectly. All variables for Terraform and Ansible are in a JSON file, and the Python script is less than 30 lines.

How to put condition for hosts in ansible playbook using ec2 dynamic inventory

In my Ansible playbook I want to target hosts conditionally, for example: hosts that have both the tag_Name_WebServer and the tag_Environment_development tags.
So it should return the list of servers which belong to the development environment and have the Name tag WebServer. Please help me fix it.
I have tried with:
---
- name: Development WebServer Configuration
  hosts: tag_Name_WebServer:tag_Name_Development
But the result I am getting is the list of all instances from WebServer plus all from Environment Development (the union). How do I write the proper condition?
Got the solution. The condition should be like below:
---
- name: Development WebServer Configuration
  hosts: tag_Name_WebServer:&tag_Name_Development
You will get the intersection of the two groups in this case.
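More generally, Ansible host patterns support union, intersection, and exclusion, so the same syntax extends to other combinations (tag_State_Retired below is a hypothetical group for illustration):

hosts: tag_Name_WebServer:tag_Name_Development     # union: in either group
hosts: tag_Name_WebServer:&tag_Name_Development    # intersection: in both groups
hosts: tag_Name_WebServer:!tag_State_Retired       # exclusion: in the first group but not the second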
