Puppet conditional check between vagrant & ec2

I've looked at the documentation for puppet variables and can't seem to get my head around how to apply this to the following situation:
if vagrant (local machine)
  phpfpm::nginx::vhost { 'vhost_name':
    server_name => 'dev.demo.com',
    root        => '/vagrant/public',
  }
else if aws ec2 (remote machine)
  phpfpm::nginx::vhost { 'vhost_name':
    server_name => 'demo.com',
    root        => '/home/ubuntu/demo.com/public',
  }
Thanks

Try running facter on both your Vagrant host and your EC2 instance, and look for differences. I suspect that 'facter virtual' may be different between the two hosts, or that the EC2 instance may return a bunch of ec2_ facts that won't be present on the Vagrant host.
Then you can use this fact as a top-level variable, as below. I switched to a case statement as well, since that's a little easier to maintain IMHO, plus you can use the default block for error checking.
case $::virtual {
  'whatever vagrant returns': {
    <vagrant specific provisioning>
  }
  'whatever the EC2 instance returns': {
    <EC2 specific provisioning>
  }
  default: {
    fail("Unexpected virtual value of ${::virtual}")
  }
}

NOTE: In the three years since this response was posted, Vagrant has introduced the facter hash option. See thomas' answer below for more details. I believe that this is the right way to go and it makes my proposed kernel command line trick pretty much obsolete. The rationale for using a fact hasn't changed, though, only strengthened (e.g. Vagrant now supports an AWS provider).
ORIGINAL REPLY: Be careful - you assume that you only use VirtualBox for Vagrant and vice versa, but Vagrant is working on support for other virtualization technologies (e.g. KVM), and you might use VirtualBox without Vagrant one day (e.g. for production).
Instead, the trick I use is to pass the kernel a "vagrant=yes" parameter when I build the basebox, which is then accessible via /proc/cmdline. Then you can create a new fact based on that (e.g. write an /etc/vagrant file and check for it in subsequent facter runs).
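For illustration, a custom fact built on that kernel parameter could look roughly like the sketch below. The file location and fact name are my own placeholders, not from the original reply:
# e.g. modules/<your_module>/lib/facter/vagrant.rb (hypothetical location)
Facter.add(:vagrant) do
  confine :kernel => 'Linux'
  setcode do
    # Report 'yes' when the basebox was booted with the vagrant=yes kernel parameter
    cmdline = File.exist?('/proc/cmdline') ? File.read('/proc/cmdline') : ''
    cmdline.include?('vagrant=yes') ? 'yes' : 'no'
  end
end
Manifests can then branch on $::vagrant in the same way as on $::virtual above.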

Vagrant has a great utility for providing Puppet facts:
facter (hash) - A hash of data to set as available facter variables within the Puppet run.
For example, here's a snippet from my Vagrantfile with the Puppet setup:
config.vm.provision "puppet", :options => ["--fileserverconfig=/vagrant/fileserver.conf"] do |puppet|
  puppet.manifests_path = "./"
  puppet.module_path = "~/projects/puppet/modules"
  puppet.manifest_file = "./vplan-host.pp"
  puppet.facter = {
    "vagrant_puppet_run" => "true"
  }
end
And then we make use of that fact for example like this:
$unbound_conf = $::vagrant_puppet_run ? {
  'true'  => 'puppet:///modules/unbound_dns/etc/unbound/unbound.conf.vagrant',
  default => 'puppet:///modules/unbound_dns/etc/unbound/unbound.conf',
}

file { '/etc/unbound/unbound.conf':
  owner  => root,
  group  => root,
  notify => Service['unbound'],
  source => $unbound_conf,
}
Note that the fact is only available during puppet provision time.


Detect Ansible changes in Terraform and execute them

I want to combine Ansible with Terraform so that Terraform creates the machines and Ansible provisions them. Using terraform-provisioner-ansible it's possible to bring them seamlessly together. But there seems to be no change detection, a problem that doesn't exist when Ansible runs standalone.
TL;DR: How can I apply changes made in Ansible to the Terraform Ansible plugin? Or at least execute the ansible plugin on every update so that Ansible can handle this itself?
Example use case
Consider this playbook which installs some packages
- name: Ansible install package test
  hosts: all
  tasks:
    - name: Install cli tools
      become: yes
      apt:
        name: "{{ tools }}"
        update_cache: yes
      vars:
        tools:
          - nnn
          - htop
which is integrated into Terraform using the plugin
resource "libvirt_domain" "ubuntu18" {
# ...
connection {
type = "ssh"
host = "192.168.2.2"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
}
provisioner "ansible" {
plays {
enabled = true
become_method = "sudo"
playbook = {
file_path = "ansible-test.yml"
}
}
}
}
This works fine on the first run. But later I notice that some package is missing:
- name: Ansible install package test
  hosts: all
  tasks:
    - name: Install cli tools
      become: yes
      apt:
        name: "{{ tools }}"
        update_cache: yes
      vars:
        tools:
          - nnn
          - htop
          - vim # This is a new package
When running terraform plan I get No changes. Infrastructure is up-to-date. My new package vim will never get installed! Ansible didn't run, because if it had, it would have installed the new package.
The problem seems to be the provisioner itself:
Creation-time provisioners are only run during creation, not during updating or any other lifecycle. They are meant as a means to perform bootstrapping of a system.
But what is the correct way of applying updates? I tried a null_resource with a depends_on link to my VM resource, but Terraform doesn't detect changes on the Ansible side either. It seems to be a lack of change detection in the Terraform plugin.
In the docs I only found destroy-time provisioners, but none for updates. I could destroy and re-create the machine, but this would slow things down a lot. I like the Ansible approach of checking what is present and only applying the changes which aren't already there; this seems a good way of provisioning.
Isn't it possible to do something similar with Terraform?
With my current experience (more Ansible than Terraform), I don't see any other way than dropping the nice plugin and executing Ansible on my own. But this would also drop the nice integration, so I'd need to generate inventory files myself or even by hand (which defeats the automation approach, in my view).
source_code_hash may be an option but is inflexible: with multiple plays/roles, I'd need to do this by hand for every single file, which easily becomes error-prone.
Use a null_resource with a pseudo trigger
The idea from tedsmitt uses a timestamp as the trigger, which seems to be the only way to force a provisioner. However, running ansible-playbook plain from the CLI would create the overhead of maintaining the inventory by hand. You can't call the Python dynamic inventory script from here, since terraform apply needs to complete first.
In my view, a better approach is to run the ansible provisioner here:
resource "null_resource" "ansible-provisioner" {
triggers {
build_number = "${timestamp()}"
}
depends_on = ["libvirt_domain.ubuntu18"]
connection {
type = "ssh"
host = "192.168.2.2"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
}
provisioner "ansible" {
plays {
enabled = true
become_method = "sudo"
playbook = {
file_path = "ansible-test.yml"
}
}
}
}
The only drawback here is that Terraform will recognize a pseudo change every time:
Terraform will perform the following actions:

-/+ null_resource.ansible-provisioner (new resource required)
      id:                    "3365240528326363062" => <computed> (forces new resource)
      triggers.%:            "1" => "1"
      triggers.build_number: "2019-06-04T09:32:27Z" => "2019-06-04T09:34:17Z" (forces new resource)

Plan: 1 to add, 0 to change, 1 to destroy.
This seems like the best compromise to me, given the other workarounds available.
Run Ansible manually with dynamic inventory
Another way I found is the dynamic inventory plugin; a detailed description can be found in this blog entry. It integrates into Terraform and lets you specify resources as inventory hosts, for example:
resource "ansible_host" "k8s" {
inventory_hostname = "192.168.2.2"
groups = ["test"]
vars = {
ansible_user = "ubuntu"
ansible_ssh_private_key_file = "~/.ssh/id_rsa"
}
}
The Python script uses this information to generate a dynamic inventory, which can be used like this:
ansible-playbook -i /etc/ansible/terraform.py ansible-test.yml
A big benefit: it keeps your configuration DRY. Terraform holds the leading configuration, so there is no need to also maintain separate Ansible files. It also allows variable usage (e.g. the inventory hostname shouldn't be hardcoded for production use, as it is in my example).
In my use case (provisioning a Rancher test cluster) the null_resource approach seems better, since everything is built with a single Terraform command and there is no need to additionally execute Ansible. But depending on the requirements it can be better to keep Ansible as a separate step, so I posted this as an alternative.
Installing the plugin
When trying this solution, remember that you need to install the corresponding Terraform plugin from here:
version=0.0.4
wget https://github.com/nbering/terraform-provider-ansible/releases/download/v${version}/terraform-provider-ansible-linux_amd64.zip -O terraform-provisioner-ansible.zip
unzip terraform-provisioner-ansible.zip
chmod +x linux_amd64/*
mv linux_amd64 ~/.terraform.d/plugins
Also note that the ansible provisioner from the solution above needs to be removed first, since it has the same name and may conflict.
As you mentioned in your question, there is no change detection in the plugin. You could implement a trigger on a null_resource so that it runs on every apply.
resource "null_resource" "ansible-provisioner" {
triggers {
build_number = "${timestamp()}"
}
provisioner "local-exec" {
command = "ansible-playbook ansible-test.yml"
}
}
You can try this; it works for me.
resource "null_resource" "ansible-swarm-setup" {
local_file.ansible_inventory ]
#nhu
triggers= {
instance_ids = join(",",openstack_compute_instance_v2.swarm-cluster-hosts[*].id)
}
connection {
type = "ssh"
user = var.ansible_user
timeout = "3m"
private_key = var.private_ssh_key
host = local.cluster_ips[0]
}
}
When it detects changes in the instance indexes/ids, it will trigger the Ansible playbook.

Is it possible to load a Vagrantfile config from within another Vagrantfile?

Vagrant Multi-Machine functionality seems pretty cool, however, one thing that bothers me (or isn't immediately apparent) is that there doesn't seem to be a good way to share configuration options between a "parent" Vagrantfile and "children" Vagrantfiles. Is there a way to effectively and maintainably share configuration options between the parent and its children? An example might make this clearer.
Let's assume I've got a platform which is comprised of 3 apps/services: API, Web, and Worker.
Let's presume a directory structure of the following:
/some_platform
  /api
    # app code...
    Vagrantfile
  /web
    # app code...
    Vagrantfile
  /worker
    # app code...
    Vagrantfile
  Vagrantfile
Let's say /some_platform/api/Vagrantfile looks like:
Vagrant.configure("2") do |config|
config.vm.box = "debian/jessie64"
end
Presumably the web and worker Vagrantfiles look similar.
Now, using the wonders of Multi-Machine, I rely on Vagrant to coordinate these VMs, and /some_platform/Vagrantfile looks like:
Vagrant.configure("2") do |config|
config.vm.define "web" do |api|
api.vm.box = "debian/jessie64"
end
config.vm.define "web" do |web|
web.vm.box = "debian/jessie64"
end
config.vm.define "web" do |worker|
worker.vm.box = "debian/jessie64"
end
end
I realize this example is contrived, but it's easy to see how once you get more and more complex config declarations, it's annoying and hazardous to have that config duplicated in two places.
You might be wondering "Why does each project have its own Vagrantfile?" Doing so provides a single source of truth for how the server that app runs on should be set up. I realize there are provisioners you can use (and I will use them), but you still have to declare a few other things outside of that, and I want to keep that DRY so that I can either bring up a cluster of apps via Multi-Machine, or work on a single app and change its VM/server setup.
What I'd really love is a way to merge other Vagrantfiles into a "parent" file.
Is that possible? Or am I crazy for trying? Any clever ideas on how to achieve this? I've mucked about with some yaml files and POROs to skate around this issue, but none of the hacks feel very satisfying.
Good news!
You can in fact apply DRY principles in a Vagrantfile.
First: Create a file /some_platform/DRY_vagrant/Vagrantfile.sensible to hold some sensible defaults:
Vagrant.configure("2") do |config|
  # With the setting below, any vagrantfile VM without a 'config.vm.box' will
  # automatically inherit "debian/jessie64"
  config.vm.box = "debian/jessie64"
end
Second: Create a file /some_platform/DRY_vagrant/Vagrantfile.worker for the 'worker' virtual machine:
Vagrant.configure("2") do |config|
  config.vm.define "worker" do |worker|
    # This 'worker' VM will not inherit "debian/jessie64".
    # Instead, this VM will explicitly use "debian/stretch64"
    worker.vm.box = "debian/stretch64"
  end
end
Finally: Create a file /some_platform/Vagrantfile to tie it all together:
# Load Sensible Defaults
sensible_defaults_vagrantfile = '/some_platform/DRY_vagrant/Vagrantfile.sensible'
load sensible_defaults_vagrantfile if File.exists?(sensible_defaults_vagrantfile)

# Define the 'api' VM within the main Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.define "api" do |api|
    # This 'api' VM will automatically inherit the "debian/jessie64" which we
    # configured in Vagrantfile.sensible

    # Make customizations to the 'api' VM
    api.vm.hostname = "vm-debian-jessie64-api"
  end
end

# Load the 'worker' VM
worker_vm_vagrantfile = '/some_platform/DRY_vagrant/Vagrantfile.worker'
load worker_vm_vagrantfile if File.exists?(worker_vm_vagrantfile)
This approach can be used for almost any other Vagrantfile config option; it is not limited to just the "config.vm.box" setting.
Hope this helped!
There are two things you can look at (probably more, but these two come to mind):
Look at How to template Vagrantfile using Ruby? — it's an example of how you can read the content of another file. A Vagrantfile is just a Ruby script, so you can use all the power of Ruby (see the sketch after this list).
Vagrant has a concept of loading and merging (see the docs), so if you want something to happen every time you run a vagrant command, you can create a Vagrantfile under your ~/.vagrant.d/ folder and it will always be run.
One drawback (or at least something to pay attention to): the Vagrantfile is a Ruby script that is evaluated each and every time a vagrant command is executed (up, status, halt, ...).
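As a minimal sketch of the first option, assuming you keep shared settings in a settings.yml file next to the Vagrantfile (the file name and keys below are made up for illustration):
# Vagrantfile (sketch): pull shared settings out of a YAML file
require 'yaml'

settings = YAML.load_file(File.join(File.dirname(__FILE__), 'settings.yml'))
# settings.yml might contain, for example:
#   box: debian/jessie64
#   memory: 1024

Vagrant.configure("2") do |config|
  config.vm.box = settings['box']
  config.vm.provider "virtualbox" do |vb|
    vb.memory = settings['memory']
  end
end
Each child project could load the same YAML file, which keeps the shared values in one place.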

How to config timezone with Vagrant, Puppet and Hiera?

I'm using PuPHPet for my testing environments, which is based on Vagrant/Puppet+Hiera.
In the config.yml (Hiera config file) I would like to add a section for my timezone
and set it up properly with the vagrant provision command.
Is that possible?
You can install the Time Zone plugin for Vagrant (vagrant plugin install vagrant-timezone) and configure your Vagrantfile in the following way:
Vagrant.configure("2") do |config|
if Vagrant.has_plugin?("vagrant-timezone")
config.timezone.value = "UTC"
end
# ... other stuff
end
Instead of UTC you can also use :host to synchronize timezone with the host.
Just add your timezone to whatever key you want in your Hiera file; let's call it timezone. The value and the Puppet code you'd need to set that timezone depend on the system you're firing up, but I'll assume a RedHat flavor of Unix.
I recommend setting that to any valid value you'd see under /usr/share/zoneinfo. As an example your key may look like:
timezone: 'US/Pacific'
Then you'd use the file puppet type to symlink /etc/localtime to the full path of the timezone:
$tz = hiera('timezone')

file { '/etc/localtime':
  ensure => link,
  target => "/usr/share/zoneinfo/${tz}",
}

How to run several boxes with Vagrant?

I need to run several boxes with Vagrant.
Is there a way to do that?
These don't relate to one another in any way; they can be thought of as different environments used for testing, so it seems that a multi-machine setup has nothing to do with this.
The best way is to use an array of hashes. You can define the array like:
servers = [
  {
    :hostname => "web",
    :ip       => "192.168.100.10",
    :box      => "saucy",
    :ram      => 1024,
    :cpu      => 2
  },
  {
    :hostname => "db",
    :ip       => "192.168.100.11",
    :box      => "saucy",
    :ram      => 2048,
    :cpu      => 4
  }
]
Then you just iterate over each item in the servers array and define the configs:
Vagrant.configure(2) do |config|
  servers.each do |machine|
    config.vm.define machine[:hostname] do |node|
      node.vm.box = machine[:box]
      node.vm.hostname = machine[:hostname]
      node.vm.network "private_network", ip: machine[:ip]
      node.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--memory", machine[:ram]]
      end
    end
  end
end
You can definitely run multiple Vagrant boxes concurrently, as long as their configurations do not clash with one another in some breaking way, e.g. mapping the same network ports on the host, or using the same box names/IDs inside the same provider. There's no difference from having multiple boxes running on a provider manually, say multiple boxes on VirtualBox, or having them registered and started up by Vagrant. The result is the same, Vagrant just streamlines the process.
You can either use the so-called multi-machine environment to manage these boxes together in one project/Vagrantfile. They don't necessarily have to be connected in any way; ease of management may be the reason alone, e.g. if you need to start them up at the same time.
Or you can use separate projects/Vagrantfiles and manage the machines from their respective directories, completely separated.
In case of running multiple instances of the same project, you need multiple copies of the project directory, as Vagrant stores the box state in the .vagrant directory under the project.
You just need to copy the directory holding the Vagrantfile to the new place and run vagrant up from it.
Make sure you copy the dir prior to starting up the box for the first time or Vagrant will think that these two locations refer to the same box. Or, if you already did vagrant up before copying the directory, then delete copied_directory/.vagrant after you make the copy.
You can even use different Vagrantfiles in the same directory for different machine configurations or boxes:
VAGRANT_VAGRANTFILE=Vagrantfile.ubntu_1404_64 VAGRANT_DOTFILE_PATH=.vagrant_ub140464 vagrant up
OR
VAGRANT_VAGRANTFILE=Vagrantfile.ubntu_1404_32 VAGRANT_DOTFILE_PATH=.vagrant_ub140432 vagrant up
Both Vagrantfiles can reside in the same directory.
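For illustration, one of those alternate Vagrantfiles (say Vagrantfile.ubntu_1404_64) could be nothing more than an ordinary Vagrant config; the box name and hostname below are assumptions, not taken from the answer:
# Vagrantfile.ubntu_1404_64 (hypothetical sketch)
Vagrant.configure("2") do |config|
  config.vm.box      = "ubuntu/trusty64"
  config.vm.hostname = "ub1404-64"
end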
I was able to have a single vagrantfile:
Vagrant.configure("2") do |winconfig|
# stuff
end
Vagrant.configure("2") do |nixconfig|
# stuff
end
Vagrant.configure("2") do |macconfig|
# stuff
end
and then I can bring them up with vagrant up --parallel. As others have mentioned, different vagrant files may be better for maintainability.
https://www.vagrantup.com/docs/vagrantfile/tips.html#loop-over-vm-definitions
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.provision "shell",
        inline: "echo hello from node #{i}"
    end
  end
end
Copying the directory holding the Vagrantfile to a new place and spinning up the new machine there is the most straightforward way, if the machines do not need to co-operate.
However, you may not want to copy/paste provisioning scripts, for VCS tracking/backtracking purposes. Keep all your scripts in a folder, e.g. dev, and put your Vagrantfile under numbered folders under dev, e.g. dev/10. When you create a newer version, e.g. dev/11, and no longer need the older one, you can just delete it. Refer to the common provisioning scripts using a relative path:
config.vm.provision "shell", path: "../provisioner.sh"

Vagrant not able to find home/vagrant/hiera.yaml

I have inherited a python app that uses Puppet, Vagrant and VirtualBox to provision a local Centos test machine.
This app was written on Mac and I'm developing on Windows.
When I run vagrant up I see a large list of command line errors, the most relevant of which is:
Running Puppet with site.pp..
Warning: Config file /home/vagrant/hiera.yaml not found, using Hiera defaults
WARN: Fri Apr 25 16:32:24 +0100 2014: Not using Hiera::Puppet_logger. It does not report itself to be suitable.
I know what Hiera is, and why it's important, but I'm not sure how to fix this.
The file hiera.yaml is present in the repo, but it's not found at /home/vagrant/hiera.yaml; instead it's at ./puppet/manifests/hiera.yaml. Similarly, if I ssh into the box, there is absolutely nothing inside /home/vagrant, but I can find this file when I look in /tmp.
So I'm confused: is Vagrant looking inside the VirtualBox VM and expecting this file to be found at /home/vagrant/hiera.yaml? Or have I inherited an app that did not work properly in the first place? I'm really stuck here and I can't get in touch with the original dev.
Here are some details from my Vagrantfile:
Vagrant.configure("2") do |config|
# Base box configuration ommitted
# Forwarded ports ommitted
# Statically set hostname and internal network to match puppet env ommitted
# Enable provisioning with Puppet standalone
config.vm.provision :puppet do |puppet|
# Tell Puppet where to find the hiera config
puppet.options = "--hiera_config hiera.yaml --manifestdir /tmp/vagrant-puppet/manifests"
# Boilerplate Vagrant/Puppet configuration
puppet.module_path = "puppet/modules"
puppet.manifests_path = "puppet/manifests"
puppet.manifest_file = "site.pp"
# Custom facts provided to Puppet
puppet.facter = {
# Tells Puppet that we're running in Vagrant
"is_vagrant" => true,
}
end
# Make the client accessible
config.vm.synced_folder "marflar_client/", "/opt/marflar_client"
end
It's really strange.
There's this puppet.options line that tells Puppet where to look for hiera.yaml and currently it simply specifies the file name. Now, when you run vagrant up, it mounts the current directory (the one that has the Vagrantfile in it) into /vagrant on the guest machine. Since you're saying hiera.yaml is actually found in ./puppet/manifests/hiera.yaml, I believe you want to change the puppet.options line as follows:
puppet.options = "--hiera_config /vagrant/puppet/manifests/hiera.yaml --manifestdir /tmp/vagrant-puppet/manifests"
To solve this warning, you may first try to check where this file is referenced from, e.g.:
$ grep -Rn hiera.yaml /etc/puppet
/etc/puppet/modules/stdlib/spec/spec_helper_acceptance.rb:13: on hosts, '/bin/touch /etc/puppet/hiera.yaml'
/etc/puppet/modules/mysql/spec/spec_helper_acceptance.rb:34: shell("/bin/touch #{default['puppetpath']}/hiera.yaml")
In this case it's your Vagrantfile.
You may also try to find the right path to it via:
sudo updatedb && locate hiera.yaml
Then either follow EvgenyChernyavskiy's suggestion and create a symbolic link if you found the file:
sudo ln -vs /etc/hiera.yaml /etc/puppet/hiera.yaml
or create the empty file (touch /etc/puppet/hiera.yaml).
In your case you should use the absolute path to your hiera.yaml file, instead of relative.
The warning should be gone.
