Detect Ansible changes in Terraform and execute them

I want to combine Ansible with Terraform so that Terraform creates the machines and Ansible provisions them. Using terraform-provisioner-ansible it's possible to bring them together seamlessly. But I noticed a lack of change detection, a problem that doesn't occur when Ansible runs standalone.
TL;DR: How can I apply changes made on the Ansible side through the Terraform Ansible plugin? Or at least execute the Ansible plugin on every apply, so that Ansible can handle this itself?
Example use case
Consider this playbook, which installs some packages:
- name: Ansible install package test
  hosts: all
  tasks:
    - name: Install cli tools
      become: yes
      apt:
        name: "{{ tools }}"
        update_cache: yes
  vars:
    tools:
      - nnn
      - htop
It is integrated into Terraform using the plugin:
resource "libvirt_domain" "ubuntu18" {
# ...
connection {
type = "ssh"
host = "192.168.2.2"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
}
provisioner "ansible" {
plays {
enabled = true
become_method = "sudo"
playbook = {
file_path = "ansible-test.yml"
}
}
}
}
This works fine on the first run. But later I notice that a package is missing:
- name: Ansible install package test
  hosts: all
  tasks:
    - name: Install cli tools
      become: yes
      apt:
        name: "{{ tools }}"
        update_cache: yes
  vars:
    tools:
      - nnn
      - htop
      - vim # This is a new package
When running terraform plan I get No changes. Infrastructure is up-to-date. My new package vim never gets installed! Ansible didn't run at all; if it had, it would have installed the new package.
The problem seems to be the provisioner itself:
Creation-time provisioners are only run during creation, not during updating or any other lifecycle. They are meant as a means to perform bootstrapping of a system.
But what is the correct way of applying updates? I tried a null_resource with a depends_on link to my VM resource, but Terraform doesn't detect changes on the Ansible side either. The Terraform plugin seems to lack change detection.
In the docs I only found destroy-time provisioners, but none for updates. I could destroy and re-create the machine, but that would slow things down a lot. I like Ansible's approach of checking what is present and only applying the changes that aren't already there; that seems a good way of provisioning.
Isn't it possible to do something similar with Terraform?
With my current experience (more Ansible than Terraform), I don't see any other way than dropping the nice plugin and executing Ansible on my own. But that would also drop the nice integration, so I'd need to generate inventory files myself or even maintain them by hand (which defeats the automation approach in my view).
source_code_hash may be an option, but it's inflexible: with multiple plays/roles, I'd need to do this by hand for every single file, which quickly becomes error-prone.
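To illustrate, a hash-based trigger on a null_resource would look roughly like this (a sketch in Terraform 0.11 syntax; it only covers the single playbook file from the example, and every additional play/role file would need its own hash entry, which is exactly the inflexibility described above):

resource "null_resource" "ansible-provisioner" {
  triggers {
    # Re-run the provisioner only when the playbook content changes.
    playbook_hash = "${sha1(file("ansible-test.yml"))}"
  }

  # connection and provisioner "ansible" blocks as in the example above
}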

Use a null_resource with a pseudo trigger
The idea from tedsmitt uses a timestamp as the trigger, which seems the only way to force a provisioner. However, running ansible-playbook plainly from the CLI would create the overhead of maintaining the inventory by hand. You also can't call the Python dynamic inventory script from here, since terraform apply needs to complete first.
In my opinion, a better approach is to run the Ansible provisioner here:
resource "null_resource" "ansible-provisioner" {
triggers {
build_number = "${timestamp()}"
}
depends_on = ["libvirt_domain.ubuntu18"]
connection {
type = "ssh"
host = "192.168.2.2"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
}
provisioner "ansible" {
plays {
enabled = true
become_method = "sudo"
playbook = {
file_path = "ansible-test.yml"
}
}
}
}
The only drawback here is that Terraform will detect a pseudo change every time:
Terraform will perform the following actions:

-/+ null_resource.ansible-provisioner (new resource required)
      id:                    "3365240528326363062" => <computed> (forces new resource)
      triggers.%:            "1" => "1"
      triggers.build_number: "2019-06-04T09:32:27Z" => "2019-06-04T09:34:17Z" (forces new resource)

Plan: 1 to add, 0 to change, 1 to destroy.
This seems the best compromise to me, compared to the other workarounds available.
Run Ansible manually with dynamic inventory
Another way I found is the dynamic inventory plugin; a detailed description can be found in this blog entry. It integrates into Terraform and lets you specify resources as inventory hosts, for example:
resource "ansible_host" "k8s" {
inventory_hostname = "192.168.2.2"
groups = ["test"]
vars = {
ansible_user = "ubuntu"
ansible_ssh_private_key_file = "~/.ssh/id_rsa"
}
}
The Python script uses this information to generate a dynamic inventory, which can be used like this:
ansible-playbook -i /etc/ansible/terraform.py ansible-test.yml
A big benefit is that it keeps your configuration DRY. Terraform holds the leading configuration; there is no need to maintain separate Ansible inventory files. It also allows variable usage (e.g. the inventory hostname shouldn't be hardcoded for production usage as in my example).
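For instance, instead of the hardcoded IP the hostname could be taken from the Terraform resource itself. A sketch, assuming the libvirt domain from the first example exposes its address via a network_interface attribute (the exact attribute path depends on the provider version):

resource "ansible_host" "k8s" {
  # Assumed attribute path; check the outputs of your libvirt provider.
  inventory_hostname = "${libvirt_domain.ubuntu18.network_interface.0.addresses.0}"
  groups             = ["test"]

  vars = {
    ansible_user                 = "ubuntu"
    ansible_ssh_private_key_file = "~/.ssh/id_rsa"
  }
}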
In my use case (provisioning a Rancher test cluster), the null_resource approach seems better, since EVERYTHING is built with a single Terraform command; there is no need to run Ansible separately. But depending on the requirements, it can be better to keep Ansible as a separate step, so I posted this as an alternative.
Installing the plugin
When trying this solution, remember that you need to install the corresponding Terraform plugin from here:
version=0.0.4
wget https://github.com/nbering/terraform-provider-ansible/releases/download/v${version}/terraform-provider-ansible-linux_amd64.zip -O terraform-provisioner-ansible.zip
unzip terraform-provisioner-ansible.zip
chmod +x linux_amd64/*
mv linux_amd64 ~/.terraform.d/plugins
Also note that the automated provisioner from the solution above needs to be removed first, since it has the same name and may conflict.

As you mentioned in your question, there is no change detection in the plugin. You could implement a trigger on a null_resource so that it runs on every apply.
resource "null_resource" "ansible-provisioner" {
triggers {
build_number = "${timestamp()}"
}
provisioner "local-exec" {
command = "ansible-playbook ansible-test.yml"
}
}

You can try this; it works for me.
resource "null_resource" "ansible-swarm-setup" {
local_file.ansible_inventory ]
#nhu
triggers= {
instance_ids = join(",",openstack_compute_instance_v2.swarm-cluster-hosts[*].id)
}
connection {
type = "ssh"
user = var.ansible_user
timeout = "3m"
private_key = var.private_ssh_key
host = local.cluster_ips[0]
}
}
When it detects changes in the instance IDs, it will trigger the Ansible playbook again.

Related

Can I add roles in a private git repo as a meta/dependency in Ansible?

I have a bunch of Ansible roles that I'd like to reuse. They are each kept in a repo in a private BitBucket.
I want to add projects that are hosted in Git as meta/dependencies for the roles I'm working on, but I can't quite figure out what the syntax is.
In this non-working example, a role requires another role to be deployed first with parameters prior to running.
FYI, The remote role "acm_layout" is intended to create a standard directory layout for the server, so that my role can run knowing that all of the standard directories already exist.
---
dependencies:
  - { role: project_keys } # Works fine, just reuses a local role

  - name: acm_layout # Doesn't work, but this is what I want to fix
    src: ssh://git@bigcompany.com/acm/acm_layout.git
    scm: git
    version: feature/initialize
    application_storage_dir: "{{base_storage_dir}}"
    application_data_dir: "{{app_data_dir}}"
When I run this I get the following error:
ERROR! the role 'acm_layout' was not found in [lots of paths deleted]
The error appears to have been in '/home/zs5fgzg/_tmp/horizon_deployment_scf/ansible/roles/horizon_layout/meta/main.yaml': line 4, column 6, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- { role: horizon_keys }
- src: ssh://git@bigcompany.com:7999/acm/acm_layout.git
^ here
So what's the correct way to do this?
Yes, you can use ansible-galaxy install with a requirements.yml file to fetch roles remotely. Create requirements.yml as follows:
https://github.com/avinash6784/elk-stack/blob/master/requirements.yml
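The linked file follows the standard requirements.yml format, roughly like this (a sketch only; the src, version and name below are illustrative, reusing the repository from the question above, not the linked file's exact contents):

- src: ssh://git@bigcompany.com/acm/acm_layout.git
  scm: git
  version: feature/initialize
  name: acm_layout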
And run the following command:
$ ansible-galaxy install -r requirements.yml -p roles/
For more info on fetching roles with ansible-galaxy, please visit http://docs.ansible.com/ansible/latest/galaxy.html

Resolve Local Files by Playbook Directory?

I have the following Ansible role which simply does the following:
- Create a temporary directory.
- Download Goss, a server testing tool, into that temporary directory.
- Upload a main Goss YAML file for the tests.
- Upload additional directories for additional included tests.
Here are a couple places where I'm using it:
- naftulikay.python-dev
- naftulikay.ruby-dev
Specifically, these playbooks upload a local file named goss.yml adjacent to the playbook, and a directory goss.d, again adjacent to the playbook.
Unfortunately, it seems that Ansible logic has changed recently, causing my tests to not work as expected. My role ships with a default goss.yml, and it appears that when I set goss_file: goss.yml within my playbook, it uploads degoss/files/goss.yml instead of the Goss file adjacent to my playbook.
If I'm passing the name of a file to a role, is there a way to specify that Ansible should look up the file in the context of the playbook or the current working directory?
The actual role logic that is no longer working is this:
# deploy test files including the main and additional test files
- name: deploy test files
  copy: src={{ item }} dest={{ degoss_test_root }} mode=0644 directory_mode=0755 setype=user_tmp_t
  with_items: "{{ [goss_file] + goss_addtl_files + goss_addtl_dirs }}"
  changed_when: degoss_changed_when
I am on Ansible 2.3.2.0 and I can reproduce this across distributions (namely CentOS 7, Ubuntu 14.04, and Ubuntu 16.04).
Ansible searches for relative paths in the role's scope first, then in the playbook's scope.
For example, if you want to copy the file test.txt in role r1, the search order is:
/path/to/playbook/roles/r1/files/test.txt
/path/to/playbook/roles/r1/test.txt
/path/to/playbook/roles/r1/tasks/files/test.txt
/path/to/playbook/roles/r1/tasks/test.txt
/path/to/playbook/files/test.txt
/path/to/playbook/test.txt
You can inspect your search_path order by calling ansible with ANSIBLE_DEBUG=1.
To answer your question, you have two options:
1. Use a filename that doesn't exist within the role's scope, like:
goss_file: local_goss.yml
2. Supply an absolute path. For example, you can use:
goss_file: '{{ playbook_dir }}/goss.yml'
Ansible doesn't apply search logic if the path is absolute.

How to include a task from another role in Ansible?

I want to include a task from a different role.
I would not want to hardcode it like
- name: Set topology based on Jenkins job name
  include: ../../pre-req/tasks/set-topo.yml
  tags: core
Is there a way to do this with a dependency? I tried creating a meta directory with files and tasks, but somehow it's not getting triggered.
something like this
vim roles/pre-req/meta/main.yml
---
allow_duplicates: yes
dependencies:
  - { role: topo, tags: ['core'] }
I would not want to hardcode it like
Why not? You want to include a task, and that's how you include a task.
If what you want to do is include the entire other role, Ansible 2.2 (released yesterday) added include_role.
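For completeness, pulling in just that one task file via include_role would look something like this (a sketch; it assumes the role is named pre-req and uses the tasks_from parameter to select set-topo.yml):

- name: Set topology based on Jenkins job name
  include_role:
    name: pre-req
    tasks_from: set-topo
  tags: core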

How to override a role's file in Ansible?

I am using the zzet.rbenv role on my playbook. It has a files/default-gems file that it copies to the provisioned system.
I need my playbook to check for a myplaybook/files/default-gems and use it if it exists, falling back to zzet.rbenv/files/default-gems otherwise.
How can I do that?
After some research and trial/error, I found out that Ansible is not capable of checking whether files exist across roles. This is due to the way role dependencies (which are roles themselves) get expanded into the role requiring them, making them part of the playbook. There are no tasks that will let you differentiate my_role/files/my_file.txt from required_role/files/my_file.txt.
One approach to the problem (the one I found the easiest and cleanest) was to:
- Add a variable to my_role with the path to the file I want to use (overriding the default one)
- Add a task (identical to the one that uses the default file) that checks whether the above variable is defined, and runs using it
Example
required_role
# Existing task
- name: some task
  copy: src=roles_file.txt dest=/some/directory/file.txt
  when: my_file_path is not defined

# My custom task
- name: my custom task (an alteration of the above task)
  copy: src={{ my_file_path }} dest=/some/directory/file.txt
  when: my_file_path is defined
my_role
#... existing code
my_file_path: "path/to/my/file"
As mentioned by Ramon de la Fuente: this solution was accepted into the zzet.rbenv repo :)

Sharing Ansible roles

I have packages of Ansible roles that I would like to import into a project.
If I organise them by subdirectory, I run into problems relating to dependency relative paths,
i.e. the shared role needs to know the relative location where it will be installed if it uses meta dependencies.
I would like to just reference everything relative to the directory the playbook is run from, though this doesn't work:
roles/roleA/meta
---
dependencies:
  - { role: "{{ playbook_dir }}/roles/shared_roles/roleB" }
roles/shared_roles/roleB
...
I've tried multiple options and am running out of ideas.
I looked into roles-path http://docs.ansible.com/intro_configuration.html#roles-path
Though I don't really want to have to uniquely name all roles, as they ought to be namespaced / grouped.
Thanks
You could basically use ansible-galaxy to install the roles into the ~/.ansible/roles/ path.
On my macOS machine the path is ~/.ansible/roles/.
All you need to do is just create a requirements.yml file which looks something like this:
- src: git+https://<githubURL-Master-role>.git
  scm: git
  name: Master-role

- src: git+https://<githubURL-dependent1-role>.git
  scm: git
  name: dependent1-role

- src: git+https://<githubURL-dependent2-role>.git
  scm: git
  name: dependent2-role
Run this command to install your roles in ~/.ansible/roles/ path:
sudo ansible-galaxy install -vvv -r requirements.yml
This will install your roles into the ~/.ansible/roles/ path. To understand it better, please see this documentation: https://docs.ansible.com/ansible/latest/reference_appendices/galaxy.html.
Then, as you have already mentioned, you should add all the dependent roles to your ##path/master-role/meta/main.yml file even before executing the ansible-galaxy command, in order to execute the dependent roles before the master role when you run your master-role playbook.
The ##path/master-role/meta/main.yml file looks something like this:
galaxy_info:
  author: Jithendra
  description: blah...blah...blah
  min_ansible_version: 1.2
  galaxy_tags: []

dependencies:
  - { role: 'Master-role', when: blah == "true" }   # when condition is optional
  - { role: 'dependent1-role', when: blah == "false" }
  - { role: 'dependent2-role', when: blah == "false" }
Solved it by enforcing that all shared modules reference the name of the shared directory in all dependencies.
I wanted to leave this up to the calling project and hoped that dependencies would be relative to the role directory.
The solution works though and provides namespacing so I can include multiple roles with the same name as long as they are in separate directories.
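As a sketch, the enforced convention boils down to dependencies always naming the shared directory explicitly (the role names here are taken from the example above; the exact roles_path setup is assumed):

# roles/roleA/meta/main.yml
dependencies:
  - { role: shared_roles/roleB }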
I now have a project that includes 3 separate packages of roles
PLAY RECAP ********************************************************************
staging : ok=214 changed=5 unreachable=0 failed=0
It's all built using Maven, each shared group being a Maven dependency. If anybody reads this and it helps, I'll share the structure / POMs etc.
