How to multi-provision in kitchen.yml? - vagrant

I have a Vagrantfile in which I provision different VMs by looping through a JSON file, e.g.:
cluster_config.each do |cluster|
  cluster_name = cluster[0] # name of the cluster
  nodes_config = JSON.parse(File.read("test_data_bags/myapp/_default.json"))['clusters'][cluster_name]['nodes']
  nodes_config.each do |node_name, node_values|
    config.vm.define node_name do |nodeconfig|
      processes = node_values['processes']
      processes.each do |process|
        nodeconfig.vm.provision :chef_solo do |chef|
          chef.data_bags_path = 'test_data_bags'
          chef.run_list = run_list # run_list is assembled elsewhere in the Vagrantfile
          chef.roles_path = "roles"
          chef.json = {
            "myapp" => {
              "cluster_name" => cluster_name,
              "role" => node_role # node_role is derived elsewhere in the Vagrantfile
            }
          }
        end
      end
    end
  end
end
I would like to do the same within Kitchen, i.e. take an array of attributes and, for each array item, run recipe xyz, so that I can write some tests using test-kitchen. Is this possible?
Thanks

There are a few different workarounds to accomplish this, but they are all definitely workarounds. There is an issue that was opened to discuss support for multiple boxes in test-kitchen, and you can go there to read more about why this probably won't be supported any time soon. TL;DR: it's not really a goal of the project.
Workarounds include:
Chef-provisioning can bootstrap more servers from a single provisioned server/test-suite
Kitchen-nodes provisioner can share data about each server to the other test-suites in your setup
A custom Vagrantfile template for test-kitchen
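If the goal is mainly to run a recipe once per attribute set (rather than to stand up several interacting boxes), plain suites in .kitchen.yml may already be enough: one suite per node definition. A minimal sketch, where the platform, suite names, and the myapp attribute keys are illustrative:

driver:
  name: vagrant

provisioner:
  name: chef_solo
  data_bags_path: test_data_bags
  roles_path: roles

platforms:
  - name: ubuntu-18.04

suites:
  - name: cluster-a-master
    run_list:
      - recipe[myapp::default]
    attributes:
      myapp:
        cluster_name: cluster-a
        role: master
  - name: cluster-a-worker
    run_list:
      - recipe[myapp::default]
    attributes:
      myapp:
        cluster_name: cluster-a
        role: worker

Each suite converges and tests in its own VM, so this covers per-node attribute variation, just not nodes that need to talk to each other.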

Ruby Vagrant network configuration duplication trouble, some object references issue

I will say right away: I'm experienced in Python, for example, but Ruby is totally new to me, so this question may not be related to Vagrant at all. I don't know, sorry.
I want to create two VMs on my host, so I created this Vagrantfile:
Vagrant.configure("2") do |config|
  # Image config
  config.vm.box = "ubuntu/bionic64"
  config.disksize.size = '40GB'

  # Nodes specific configs
  config.vm.define "node_1_1" do |node|
    node.vm.network "public_network", ip: "192.168.3.11", bridge: "enp4s0", netmask: "255.255.248.0"
    node.vm.hostname = "vm-ci-node-1-1"
  end
  config.vm.define "node_1_2" do |node|
    node.vm.network "public_network", ip: "192.168.3.12", bridge: "enp4s0", netmask: "255.255.248.0"
    node.vm.hostname = "vm-ci-node-1-2"
  end

  # Nodes generic configs
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end
end
It works fine.
Then I decided to remove the hardcoded parameters and generalize it for the case of more than two VMs, and for correct work on other machines with different bridge interface names. So I replaced the Nodes specific configs section with:
target_interface = nil
for if_addr in Socket.getifaddrs
  if if_addr.addr.ipv4? and if_addr.addr.ip_address.include? '192.168'
    target_interface = if_addr.name
  end
end

hostindex = 8
guestindices = [1, 2]

# Nodes specific configs
for guestindex in guestindices
  vm_code = 'node_' + hostindex.to_s() + '_' + guestindex.to_s()
  ip = '192.168.3.' + hostindex.to_s() + guestindex.to_s()
  hostname = 'vm-ci-node-' + hostindex.to_s() + '-' + guestindex.to_s()
  config.vm.define vm_code.dup do |node|
    node.vm.network "public_network", ip: ip.dup, bridge: target_interface, netmask: "255.255.248.0"
    node.vm.hostname = hostname.dup
  end
end
Then I ran vagrant up with no errors, but when I SSH into the two machines I get strange behaviour:
node_8_1 IP address is 192.168.3.82, hostname - vm-ci-node-8-2
node_8_2 IP address is 192.168.3.82, hostname - vm-ci-node-8-2
As you can see, they are the same. There is also another interface with the same IP, 10.0.2.15, which is a problem too, but that existed in the previous version of the config as well.
I suspected some Ruby reference trouble, so I used dup (I repeat, I'm totally new to Ruby, sorry), but it does not seem to work.
The VM codes are different (node_8_1 and node_8_2), but the IPs and hostnames are the same.
Could anybody please point out where I'm wrong?
I think the for guestindex in guestindices part is the cause of your trouble.
Try using guestindices.each do |i| or shorter (1..2).each do |i| instead.
Vagrant mentions this in its documentation tips for looping over VM definitions; see https://www.vagrantup.com/docs/vagrantfile/tips.html#loop-over-vm-definitions:
The for i in ... construct in Ruby actually modifies the value of i for each iteration, rather than making a copy. Therefore, when you run this, every node will actually provision with the same [value].
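Applied to the Vagrantfile above, the corrected section might look like this (a sketch: with .each, every iteration gets fresh block-local variables, so each define block captures its own vm_code, ip, and hostname):

# Nodes specific configs
guestindices.each do |guestindex|
  vm_code = "node_#{hostindex}_#{guestindex}"
  ip = "192.168.3.#{hostindex}#{guestindex}"
  hostname = "vm-ci-node-#{hostindex}-#{guestindex}"
  config.vm.define vm_code do |node|
    node.vm.network "public_network", ip: ip, bridge: target_interface, netmask: "255.255.248.0"
    node.vm.hostname = hostname
  end
end

The dup calls are no longer needed, since nothing is shared between iterations.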

Could not find class at /tmp/vagrant-puppet

I can't provision; I searched the other threads but none of them were useful!
I'm using Vagrant with Puppet, both at the latest version. My project structure is:
prova
|
|__Vagrantfile
|
|__puppet
   |__manifests
   |  |__ubonda.pp
   |
   |__modules
      |__apache
         |__manifests
            |__apache.pp
My Vagrantfile is:
Vagrant.configure("2") do |config|
  config.vm.define "ubonda" do |vm0|
    vm0.vm.hostname = "ubonda"
    vm0.vm.box = "hashicorp/precise64"
    vm0.vm.provision "puppet" do |puppet|
      puppet.manifests_path = 'puppet/manifests'
      puppet.module_path = 'puppet/modules'
      puppet.manifest_file = "ubonda.pp"
    end
  end
end
My ubonda.pp file is:
# default path
Exec {
  path => ["/usr/bin", "/bin", "/usr/sbin", "/sbin", "/usr/local/bin", "/usr/local/sbin"]
}

include apache
My apache.pp file is:
class apache {
  # install apache
  package { "apache2":
    ensure  => present,
    require => Exec["apt-get update"]
  }

  # ensure that mod_rewrite is loaded and modify the default configuration file
  file { "/etc/apache2/mods-enabled/rewrite.load":
    ensure  => link,
    target  => "/etc/apache2/mods-available/rewrite.load",
    require => Package["apache2"]
  }

  # create directory
  file { "/etc/apache2/sites-enabled":
    ensure  => directory,
    recurse => true,
    purge   => true,
    force   => true,
    before  => File["/etc/apache2/sites-enabled/vagrant_webroot"],
    require => Package["apache2"],...
  }
}
If I launch vagrant provision I obtain:
==> ubonda: Running provisioner: puppet...
==> ubonda: Running Puppet with ubonda.pp...
==> ubonda: warning: Could not retrieve fact fqdn
==> ubonda: Could not find class apache for ubonda at /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3/ubonda.pp:6 on node ubonda
I searched in the /tmp/vagrant-puppet manifests folder of the ubonda VM: inside there is only ubonda.pp, while in the modules folder there is apache, but the two are not connected. So I tried copying it inside the manifests folder, but nothing changed. How can I fix this?
The solution is very simple but I leave the question for posterity.
The file containing the Puppet class must be named init.pp for the module to be correctly recognized.
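With that rename, the layout from the question becomes:

prova
|__puppet
   |__manifests
   |  |__ubonda.pp
   |
   |__modules
      |__apache
         |__manifests
            |__init.pp

Puppet autoloads class apache from modules/apache/manifests/init.pp, which is why include apache could not resolve it under the old file name.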

How do I reuse variables in quoted line in a Vagrantfile?

I'm putting together a Vagrantfile that can be used to spin up multiple VMs. It mostly works, apart from the bit where I need to tell Ansible which playbook to use. This is being imposed on top of an existing structure, so there's little scope to change file locations and the like.
Here's an extract of the relevant bits from my Vagrantfile:
hosts = [
  { name: 'myhost01', hostname: 'vg-myhost01', ip: '172.172.99.99', memory: '512', cpu: 1, box: 'centos', port_forward: [] },
]

config.vm.provision :ansible do |ansible|
  ansible.playbook = ['install/mydir/install_', :name, '.yml']
end
Basically I'm trying to figure out how to get it to end up with a setting like
ansible.playbook = 'install/mydir/install_myhost01.yml'
but I can't seem to get the syntax right for it to recognise :name as a variable in that context. It either tries to run install_.yml or install_name.yml, or most commonly gives the error:
`initialize': no implicit conversion of Symbol into String (TypeError)
Any suggestions?
It's more of a question about Ruby syntax. You can use:
ansible.playbook = 'install/mydir/install_' + name + '.yml'
or (thanks to Frédéric Henri)
ansible.playbook = "install/mydir/install_#{name}.yml"
To reference the value from the hash (per OP's own suggestion):
ansible.playbook = "install/mydir/install_#{host[:name]}.yml"
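For completeness, wiring that into the hosts array from the question might look like the following sketch (the per-host define and the machine block variable are illustrative):

hosts.each do |host|
  config.vm.define host[:name] do |machine|
    machine.vm.provision :ansible do |ansible|
      # interpolation only happens inside double-quoted strings
      ansible.playbook = "install/mydir/install_#{host[:name]}.yml"
    end
  end
end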

Vagrant - How to use nfs for all synced folders?

I want to instruct Vagrant (through the Vagrantfile) to use NFS for all declared synced folders.
I guess it will be something like this:
vm.synced_folder.each do ...
  ... use nfs
end
But I don't know Ruby's syntax.
Now I'm experimenting with:
# Use nfs for better performance
config.vm.synced_folder.each do |id, options|
  if ! options[:type]
    options[:type] = "nfs"
  end
end
One thing you can try is to declare all the synced folders you want to create and then loop through them, something like:
sharedfolderlist = {
  "/folder_vm_1" => "folder_from_host/",
  "/folder_vm_2" => "/can_be_full_path_folder_from_host/",
}

sharedfolderlist.each do |vm, host|
  config.vm.synced_folder host, vm, nfs: true
end
It's not brilliant, but it could do the job.
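As a side note, newer Vagrant versions express the mount type with the type option rather than the older nfs flag, so the same loop could also be written as:

sharedfolderlist.each do |vm, host|
  config.vm.synced_folder host, vm, type: "nfs"
end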

Chef template loop: can't convert Chef::Node::ImmutableMash into String

I've got a Vagrant setup in which I'm trying to use Chef Solo to generate a conf file that loops through defined variables to pass to the application. Everything works except the loop, and I'm not familiar enough with Ruby/Chef to spot the error.
I'm going to lay out the whole chain of events in case something along the way is the problem, but the first portions of this process seem to work fine.
A config file is written in YAML and includes env variable definitions to be passed:
...
variables:
  - DEBUG: 2
...
The config file is read in by the Vagrantfile into a Ruby hash and used to build the Chef JSON node attributes:
...
settings = YAML::load(File.read("config.yaml"))

# Provision The Virtual Machine Using Chef
config.vm.provision "chef_solo" do |chef|
  chef.json = {
    "mysql" => {"server_root_password" => "secret"},
    "postgresql" => {"password" => {"postgres" => "secret"}},
    "nginx" => {"pid" => "/run/nginx.pid"},
    "php-fpm" => {"pid" => "/run/php5-fpm.pid"},
    "databases" => settings["databases"] || [],
    "sites" => settings["sites"] || [],
    "variables" => settings["variables"] || []
  }
...
A bunch of Chef cookbooks are run (apt, php, nginx, mysql, etc.) and finally my custom cookbook, which is what's giving me grief. The portion of the cookbook responsible for creating the conf file is shown here:
# Configure All Of The Server Environment Variables
template "#{node['php-fpm']['pool_conf_dir']}/vars.conf" do
  source "vars.erb"
  owner "root"
  group "root"
  mode 0644
  variables(
    :vars => node['variables']
  )
  notifies :restart, "service[php-fpm]"
end
And vars.erb is just a one-liner:
<%= @vars.each {|key, value| puts "env[" + key + "] = " + value } %>
So, when I run all this, Chef spits out an error about not being able to convert a hash to a string:
can't convert Chef::Node::ImmutableMash into String
So for some reason this comes across as an ImmutableMash: the value of key ends up being the hash {"DEBUG"=>2} and value ends up nil, but I'm not sure why or how to correct it.
The hash ends up as the value of key in your example because the YAML file declares DEBUG: 2 as a list member of variables. This translates to variables being an array with a single hash member.
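To see the difference concretely, here is what the two YAML shapes parse to (a quick irb check; the values mirror the question's config):

require 'yaml'

YAML.load("variables:\n- DEBUG: 2")["variables"]   # => [{"DEBUG"=>2}]  list form: array with one hash
YAML.load("variables:\n  DEBUG: 2")["variables"]   # => {"DEBUG"=>2}    plain mapping: just a hash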
Try changing the template code to iterate over that first array element, written here as a regular ERB loop so the env lines actually land in the rendered file:
<% @vars[0].each do |key, value| -%>
env[<%= key %>] = <%= value %>
<% end -%>
Or try changing the YAML to this and not changing the template code:
variables:
  DEBUG: 2
Either change will get your template loop iterating over the hash you are expecting, rendering a line like env[DEBUG] = 2.
