How to template a Vagrantfile using Ruby?

I have several Vagrantfiles, one for each provider, since Vagrant has a limitation that doesn't allow running two or more provisions at the same time from the same Vagrantfile.
So I split the configuration into two or more Vagrantfiles, but my "body", the provisioning scripts, is the same for both; the only thing that changes is the provider block.
For example:
local_nagios/Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'

set = YAML.load_file '../../../settings.yaml'

Vagrant.configure(2) do |nagios|
  nagios.vm.provider :virtualbox do |provider, override|
    override.vm.box = 'ubuntu/trusty64'
    override.vm.hostname = 'nagios.company.com'
    override.vm.synced_folder '.', '/vagrant', disabled: true
    override.vm.network 'public_network', bridge: set['network_interface'], ip: set['dev_nagios_ip']

    provider.memory = 4096
    provider.cpus = 2
  end

  install = set['devops_home'] + '/vagrant/lib/install'
  nagios.vm.provision 'shell', path: install + '/basic'
  nagios.vm.provision 'shell', path: install + '/devops'

  step = set['devops_home'] + '/vagrant/server/nagios/steps'
  nagios.vm.provision 'shell', path: step + '/install', args: [set['nagios_password']]
end
digital_nagios/Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'

set = YAML.load_file '../../../settings.yaml'

Vagrant.configure(2) do |nagios|
  nagios.vm.provider :digital_ocean do |provider, override|
    override.vm.box = 'digital_ocean'
    override.vm.hostname = 'nagios.company.com'
    override.vm.synced_folder '.', '/vagrant', disabled: true
    override.ssh.private_key_path = '~/.ssh/id_rsa'

    provider.token = 'my-token'
    provider.image = 'ubuntu-14-04-x64'
    provider.region = 'fra1'
    provider.size = '4gb'
  end

  install = set['devops_home'] + '/vagrant/lib/install'
  nagios.vm.provision 'shell', path: install + '/basic'
  nagios.vm.provision 'shell', path: install + '/devops'

  step = set['devops_home'] + '/vagrant/server/nagios/steps'
  nagios.vm.provision 'shell', path: step + '/install', args: [set['nagios_password']]
end
I wonder if it is possible to make a template out of this, or to import my common section like this:
Vagrant.configure(2) do |nagios|
  nagios.vm.provider :digital_ocean do |provider, override|
    ...
  end

  import '../provision.rb'
end
I'm not familiar with Ruby, so any suggestion would be much appreciated!

You can do something like:
Vagrant.configure(2) do |nagios|
  nagios.vm.provider :digital_ocean do |provider, override|
    ...
  end

  eval File.read("../Vagrantfile-common")
end
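As an illustration (the shared file isn't shown in the answer, so treat this as a sketch pieced together from the question), ../Vagrantfile-common would hold the provisioning section that both Vagrantfiles have in common. Since eval File.read(...) evaluates the file in the caller's scope, nagios and set are already visible inside it:

# ../Vagrantfile-common (sketch: the shared provisioning section from above)
install = set['devops_home'] + '/vagrant/lib/install'
nagios.vm.provision 'shell', path: install + '/basic'
nagios.vm.provision 'shell', path: install + '/devops'

step = set['devops_home'] + '/vagrant/server/nagios/steps'
nagios.vm.provision 'shell', path: step + '/install', args: [set['nagios_password']]

Each per-provider Vagrantfile then keeps only its own provider block plus the eval line.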

Related

Is a loop in a Packer config.json file possible?

Is it possible to do loops in Packer?
I have a Vagrantfile with a loop and want to convert it to Packer. I have already searched Google and Stack Overflow but didn't find the right solution.
I'm having problems converting this into JSON format (the full Vagrantfile follows):
(0..$NODE_COUNT).each do |i|
# -*- mode: ruby -*-
# vi: set ft=ruby :
$BOX_IMAGE = "VAGRANT CLOUD IMAGE NAME"
$TBSP_VERSION = "2.0"

# number of additional tendermint validation nodes to be provisioned
$NODE_COUNT = 1

Vagrant.configure("2") do |config|
  (0..$NODE_COUNT).each do |i|
    config.vm.define "node#{i}" do |subconfig|
      subconfig.vm.box = $BOX_IMAGE
      subconfig.vm.hostname = "node#{i}"
      subconfig.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      subconfig.disksize.size = '10GB'

      subconfig.vm.provider "virtualbox" do |v|
        v.memory = 4096
        v.cpus = 4
        v.customize ["modifyvm", :id, "--name", "v2.0-abci-tm-d-multi-node#{i}-server"]
      end

      subconfig.trigger.after :up do |trigger|
        if i == $NODE_COUNT
          trigger.info = "last Node-#{$NODE_COUNT} is up"
          trigger.run = { path: "provision_tendermint.sh", :args => "'#{$NODE_COUNT}'" }
        end
      end

      subconfig.vm.provision "shell", path: "provision.sh", :args => "'#{i}' '#{$NODE_COUNT}'"
    end
  end
end
No, Packer JSON doesn't currently support doing any kind of looping. You will have to approach this in a different manner. If you are just using Vagrant to provision VMs and not to create an image, then you should consider using Terraform instead.
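If you do want to stay with Packer's JSON templates, one possible workaround (not part of the original answer, just a sketch) is to generate the repeated sections with a small Ruby script before calling packer build. All names below (builder type, output file, node count) are illustrative placeholders:

#!/usr/bin/env ruby
# Sketch: emit a Packer JSON template with one builder per node, since the
# JSON template language itself cannot loop. All values are placeholders.
require 'json'

node_count = 1

template = {
  "builders" => (0..node_count).map { |i|
    {
      "type"    => "virtualbox-iso",   # assumed builder type
      "vm_name" => "node#{i}"
      # ... remaining per-node builder settings go here ...
    }
  },
  "provisioners" => [
    { "type" => "shell", "script" => "provision.sh" }
  ]
}

File.write("packer-generated.json", JSON.pretty_generate(template))

You would then run packer build against the generated packer-generated.json file.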

Unable to add Raspbian image to Vagrant-libvirt Virtual machine

I am trying to create a virtual machine for Raspbian using the vagrant-libvirt plugin, but I haven't been able to figure out how to add the '2017-09-07-raspbian-stretch-lite.img' image.
I am able to boot the image using qemu:
qemu-system-arm -kernel kernel-qemu-4.4.34-jessie -cpu arm1176 -m 256 -machine versatilepb -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw vga=normal console=ttyAMA0" -drive "file=./2017-09-07-raspbian-stretch-lite.img,index=0,media=disk,format=raw" -no-reboot -serial stdio -curses
Here is my incomplete Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'

Vagrant.configure("2") do |config|
  config.vm.box_url = File.join(Dir.pwd, "2017-09-07-raspbian-stretch-lite.img")
  config.vm.box = "Raspbian"

  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "qemu"
    #config.vm.box = "arm"
    # vagrant issues #1673..fixes hang with configure_networks
    config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
    libvirt.uri = 'qemu+unix:///system'
    libvirt.host = 'virtualized'
    libvirt.kernel = File.join(Dir.pwd, "OS/kernel-qemu-4.4.34-jessie")
    #libvirt.kernel = File.join(Dir.pwd, "Ovmlinuz")
    #libvirt.initrd = File.join(Dir.pwd, "initrd")
    #libvirt.storage :file, :size => '20G', :path => "./OS/2017-09-07-raspbian-stretch-lite.img", :allow_existing => true, :shareable => true, :type => 'raw'
    libvirt.emulator_path = '/usr/bin/qemu-system-arm'
    libvirt.cmd_line = 'root=/dev/mmcblk0p2 devtmpfs.mount=0 rw'
    libvirt.memory = 256
    libvirt.cpu_model = 'arm1176'
    libvirt.cpu_fallback = 'allow'
    libvirt.graphics_type = 'none'
  end
end
I've committed support to vagrant-libvirt for precisely this. This is a working config:
config.vm.define "rpi" do |rpi|
  # Create rpi.vm
  rpi.vm.hostname = "rpi"
  rpi.vm.box = "raspbian-jessie-lite-2016-02-26"

  rpi.vm.provider :libvirt do |v|
    v.driver = 'qemu'
    v.random_hostname = true
    v.connect_via_ssh = false
    v.memory = 1024
    v.cpus = 1
    v.volume_cache = 'none'
    v.storage_pool_name = "vagrant"
    v.kernel = File.join(Dir.pwd, "kernel")
    v.initrd = File.join(Dir.pwd, "initrd")
    v.machine_type = 'virt'
    v.machine_arch = 'armv7l'
    v.cpu_mode = 'custom'
    v.cpu_model = 'cortex-a15'
    v.cpu_fallback = 'allow'
    v.cmd_line = 'rw earlyprintk loglevel=8 console=ttyAMA0,115200n8 rootwait root=/dev/vda2'
    v.graphics_type = 'none'
    v.disk_bus = 'virtio'
    v.nic_model_type = 'virtio'
    v.features = ["apic", "gic version='2'"]
  end
end
However, you will need to find or build a kernel and accompanying initrd that supports the virt machine type. You can take a look here for some background on that.

Vagrant virtualbox provider - provision and customize error with storagectl attach on subsequent "reload"

All - I am setting up a multi-machine environment with Vagrant and the VirtualBox provider on a MacOSX 10.9.5.
PROBLEM: I need to attach a SATA storage controller to a CentOS 7.1 guest VM to add additional vHDDs. I do this with the following customize snippet:
config.vm.define vmName = hostname do |node|
  ...snip...
  node.vm.provider "virtualbox" do |v|
    v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
    ...
This works perfectly fine on a vagrant up: the SATA controller is attached, and I can subsequently createhd and storageattach the disks. However, when I do a vagrant reload, the customize commands are run again, causing this failure:
Stderr: VBoxManage: error: Storage controller named 'SATA Controller' already exists
VBoxManage: error: Details: code VBOX_E_OBJECT_IN_USE (0x80bb000c), component SessionMachine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "AddStorageController(Bstr(pszCtl).raw(), StorageBus_SATA, ctl.asOutParam())" at line 1044 of file VBoxManageStorageController.cpp
Versions:
Vagrant version: 1.7.2
VirtualBox version: 4.3.28r100309
If I comment out the storagectl command before running vagrant reload, the reload succeeds and there are no issues, since an additional controller isn't attached. However, this isn't a sustainable solution.
Is there some option for the Vagrantfile that I haven't been able to dig up yet to force it NOT to run the customize step? Or is this possibly a bug in the VirtualBox Vagrant provider, re-running the customize steps when it shouldn't?
~~shane
For complete reference - here is my full Vagrantfile (See the "WARNING" comments below for the customize issue):
# vi: set ft=ruby :

###
# set 'num_minions' to the number of minions you'd like built
# set 'num_disks' to the number of disks to build on the minion
# set 'disk_size' to the MB size of the disks to build for the minions
###
num_minions = 5        # number of minions to build
num_disks   = 12       # number of vHDDs to attach to minions
disk_size   = 256      # size (MB) of each disk to attach to minions
name_prefix = "minion" # minion name prefix to use

cur_dir   = Dir.pwd
disks_dir = "#{cur_dir}/disks" # set the storage location for minion disks

int_net1 = "172.16.1" # internal net 1: first 3 octets of IP space

# NOTE "configs/minion" needs to be updated to
#      match "master" IP set below - otherwise salt
#      will not work

Vagrant.configure("2") do |config|
  config.vm.box = "centos71"

  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
  end

  config.vm.define "master-01", primary: true do |node|
    node.vm.hostname = "master-01"
    node.vm.network :private_network, ip: "#{int_net1}.10"
    node.vm.provision "shell", inline: "echo master-01 IP set to #{int_net1}.10"
    node.vm.synced_folder "~/cpelab/ceph-salt/salt/", "/srv/salt"
    node.vm.synced_folder "~/cpelab/ceph-salt/pillar/", "/srv/pillar"

    node.vm.provision :salt do |salt|
      salt.install_master = true
      salt.master_config = "configs/master"
      salt.run_highstate = false
      salt.master_key = 'keys/master-01.pem'
      salt.master_pub = 'keys/master-01.pub'
      salt.minion_config = "configs/minion"
      salt.minion_key = 'keys/master-01.pem'
      salt.minion_pub = 'keys/master-01.pub'
      salt.seed_master = { 'dash-01' => 'keys/dash-01.pub', 'master' => 'keys/master-01.pub' }
      (1..num_minions).each do |v|
        vn = "#{name_prefix}-%03d" % v
        salt.seed_master = { "#{vn}" => "keys/#{vn}.pub" }
      end
    end
  end

  # MINIONS with additional disk devices
  (1..num_minions).each do |i|
    hostname = "#{name_prefix}-%03d" % i
    minion_ip = (i.to_i + 100)

    config.vm.define vmName = hostname do |node|
      node.vm.hostname = vmName
      node.vm.network :private_network, ip: "#{int_net1}.#{minion_ip}"
      node.vm.provision "shell", inline: "echo #{vmName} IP set to #{int_net1}.#{minion_ip}"

      node.vm.provider "virtualbox" do |v|
        ############# WARNING
        ############# WARNING: can't "vagrant reload" - as the 'customize' snippet is
        ############# WARNING: re-run on reload, and you can't inject a new controller
        ############# WARNING: of the same type - seems to be a bug in the specific
        ############# WARNING: provider for vagrant or my usage???
        ############# WARNING
        v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
        (1..num_disks).each do |disk|
          diskname = File.expand_path("#{disks_dir}/#{vmName}-#{disk}.vdi")
          v.customize ['createhd', '--filename', diskname, '--size', "#{disk_size}"]
          v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', disk, '--device', 0, '--type', 'hdd', '--medium', diskname]
        end
      end

      node.vm.provision :salt do |salt|
        salt.minion_config = "configs/minion"
        salt.minion_key = "keys/#{vmName}.pem"
        salt.minion_pub = "keys/#{vmName}.pub"
      end
    end
  end

  # setup Dashing web dashboard
  config.vm.define "dash-01" do |node|
    node.vm.hostname = "dash-01"
    node.vm.network :private_network, ip: "#{int_net1}.11"
    node.vm.provision "shell", inline: "echo dash-01 IP set to #{int_net1}.11"
    node.vm.network "forwarded_port", guest: 80, host: 8080
    node.vm.provision "shell", inline: "echo dash-01 port 80 forwarding: http://127.0.0.1:8080"
    node.vm.synced_folder "~/cpelab/ceph-salt/salt/", "/srv/salt"
    node.vm.synced_folder "~/cpelab/ceph-salt/pillar/", "/srv/pillar"

    node.vm.provision :salt do |salt|
      salt.minion_config = "configs/minion"
      salt.minion_key = 'keys/dash-01.pem'
      salt.minion_pub = 'keys/dash-01.pub'
    end
  end
end
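Referring back to the question above, one guard that may be worth trying (not from the original post, only a sketch that assumes the .vdi files persist between runs): issue the storagectl/createhd customize calls only when the first disk for that minion does not exist yet, so a later vagrant reload skips re-adding the controller.

# Sketch only: this would replace the storagectl/createhd section inside the
# "virtualbox" provider block above; disks_dir, vmName, num_disks, disk_size
# and v all come from the surrounding Vagrantfile.
first_disk = File.expand_path("#{disks_dir}/#{vmName}-1.vdi")
unless File.exist?(first_disk)
  v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
  (1..num_disks).each do |disk|
    diskname = File.expand_path("#{disks_dir}/#{vmName}-#{disk}.vdi")
    v.customize ['createhd', '--filename', diskname, '--size', "#{disk_size}"]
    v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', disk, '--device', 0, '--type', 'hdd', '--medium', diskname]
  end
end

The File.exist? check runs when the Vagrantfile is evaluated, so after the first vagrant up creates the disks, subsequent reloads skip the whole block.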

vagrant sets up duplicate IP addresses; Ansible picks them up?

I've got the following Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

NUM_HOSTS = 3

def hostname(id)
  "node#{id}"
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"
  config.ssh.forward_agent = true
  config.ssh.insert_key = false

  config.vm.provider :virtualbox do |vb|
    vb.gui = false
    vb.memory = 512
    vb.cpus = 1
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "vagrant_playbook.yml"
    ansible.groups = {
      "vagrant" => (1..NUM_HOSTS).collect { |id| hostname(id) }
    }
  end

  NUM_HOSTS.times do |n|
    id = n + 1
    config.vm.define hostname(id), primary: id == 1 do |host|
      host.vm.network :private_network, ip: "10.0.33.1#{id}"
    end
  end
end
This assigns the private addresses on enp0s8. However, it assigns duplicate IP addresses on enp0s3: 10.0.2.15. Unfortunately, Ansible seems to be picking the duplicate address up in ansible_default_ipv4 instead of the unique address, so services running on these boxes don't work as intended. So, is there a way to:
stop Vagrant from assigning duplicate IPs? (I'm using the virtualbox provider, if that helps)
change which interface ansible_default_ipv4 uses?
some other solution which I haven't thought to search for?
This is a workaround you can use in the meantime:
{{ ansible_all_ipv4_addresses | last }}
This will work assuming the IPv4 addresses for the target box are guaranteed to always be sorted in a predictable order.

Provisioning Vagrant with Chef role error

I'm relatively new to Chef, and I'm trying to set up a Vagrant box using Digital Ocean as the provider and Chef as the provisioner. The issue seems to be with the roles, but as far as I can tell, they match up fine. Thanks.
Here is my Vagrantfile:
Vagrant.configure('2') do |config|
  config.omnibus.chef_version = :latest

  config.vm.provider :digital_ocean do |provider, override|
    config.vm.hostname = 'majestic-chaos-ubuntu14.04x64'
    override.ssh.private_key_path = '~/.ssh/id_rsa'
    override.vm.box = 'digital_ocean'

    provider.token = 'XXXXXXXXXXXX'
    provider.image = 'Ubuntu 14.04 x64'
    provider.region = 'nyc2'
    provider.size = '512mb'
  end

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ['../../cookbooks']
    chef.roles_path = ['../../roles']
    chef.add_role("majestic-chaos-ubuntu14.04x64")
  end
end
and my roles file:
name "majestic-chaos-ubuntu-14.04x64"
ssl_verify_mode :verify_peer
run_list(
"recipe[apt]",
"recipe[open-ssl]",
"recipe[build-essential]",
"recipe[chef-ruby_build]",
"recipe[nodejs-cookbook]",
"recipe[rbenv::user]",
"recipe[rbenv::vagrant]",
"recipe[zsh]",
"recipe[vim]",
"recipe[imagemagick]",
)
override_attributes(
rbenv: {
user_installs: [{
user: 'vagrant',
rubies: ["2.1.2"],
global: "2.1.2",
gems: {
"2.1.2" => [
{ name: "bundler" }
]
}
}]
}
)
And this is the error I'm getting:
[2014-09-18T16:05:48-04:00] INFO: *** Chef 11.16.2 ***
[2014-09-18T16:05:48-04:00] INFO: Chef-client pid: 2934
[2014-09-18T16:05:51-04:00] INFO: Setting the run_list to ["role[majestic-chaos-ubuntu14.04x64]"] from CLI options
default: [2014-09-18T16:05:51-04:00] ERROR: Role majestic-chaos-ubuntu14.04x64 (included by 'top level') is in the runlist but does not exist. Skipping expand.
[2014-09-18T16:05:51-04:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2014-09-18T16:05:51-04:00] ERROR: The expanded run list includes nonexistent roles: majestic-chaos-ubuntu14.04x64
[2014-09-18T16:05:51-04:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
I had a similar problem when launching a Vagrant image where the roles file was not being picked up. The solution I found was that, for the kitchen.yml setup, the roles file must be in JSON format. (The roles file listed above does not look like JSON.)
As documented here: https://docs.chef.io/config_yml_kitchen.html
roles_path The relative path to the directory in which role data is located. This data must be defined as JSON.
Here are a few Ruby commands to convert your .rb role to JSON:
require 'chef'
require 'json' # pretty_generate comes from the json stdlib

role = Chef::Role.new
role.from_file("./roles/useful_api_role.rb")
puts JSON.pretty_generate(role)
As a side effect, the code above loads the .rb config, so you can validate that it produces the role attributes you expect.
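Building on that, here is a sketch (the output path is an assumption) that writes the converted role next to the original so chef-solo can find it under roles_path:

require 'chef'
require 'json'

# Convert the .rb role used above into the JSON format chef-solo expects.
role = Chef::Role.new
role.from_file("./roles/useful_api_role.rb")
File.write("./roles/useful_api_role.json", JSON.pretty_generate(role))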
Try using absolute paths for the roles and cookbooks:
chef.cookbooks_path = File.expand_path('../../cookbooks', __FILE__)
chef.roles_path = File.expand_path('../../roles', __FILE__)
