We've got a diverse dev team: one developer on Windows, another on Ubuntu and another on OS X. Being the Windows guy, I set up the first version of the Vagrant setup script, which works fabulously ;)
However, when running it on the Ubuntu host, the first time it gets to a provision step that calls a bash script, it fails due to permissions.
On Windows this doesn't matter, as the Samba share automatically has sufficient permissions to run the bash script (which resides within the project hierarchy, so it is present in the /vagrant share on the VM), but on Ubuntu I need to set the permissions on this file in the provisioning script before I call it.
This isn't the problem, and to be honest I suspect that even with the extra "chmod" step it would still work fine under Windows. But is there a way in the Vagrantfile to flag certain provisioning steps as 'Windows only', 'Linux only' or 'Mac only'?
i.e. in pseudocode, something like:
...

if (host == windows) then
    config.vm.provision :shell, :inline => "/vagrant/provisioning/only_run_this_on_windows.sh"
else if (host == linux) then
    config.vm.provision :shell, :inline => "/vagrant/provisioning/only_run_this_on_linux.sh"
else if (host == osx) then
    config.vm.provision :shell, :inline => "/vagrant/provisioning/only_run_this_on_osx.sh"
end if

...
Thanks in advance.
Note that Vagrant itself, in the Vagrant::Util::Platform class, already implements a more advanced version of the platform-checking logic shown in the answer by BernardoSilva.
So in a Vagrantfile, you can simply use the following:
if Vagrant::Util::Platform.windows? then
  myHomeDir = ENV["USERPROFILE"]
else
  myHomeDir = "~"
end
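Applied back to the original question, the same check can decide which provisioning step runs. A minimal sketch, assuming the script paths from the question sit next to the Vagrantfile (darwin? is the Mac check from the same class):

Vagrant.configure("2") do |config|
  # Pick the host-OS-specific script. With the path option the shell provisioner
  # uploads the host-side file and runs it on the guest, so it does not rely on
  # the permissions of the /vagrant share.
  if Vagrant::Util::Platform.windows?
    config.vm.provision :shell, path: "provisioning/only_run_this_on_windows.sh"
  elsif Vagrant::Util::Platform.darwin?
    config.vm.provision :shell, path: "provisioning/only_run_this_on_osx.sh"
  else
    config.vm.provision :shell, path: "provisioning/only_run_this_on_linux.sh"
  end
end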
Find out the current OS inside the Vagrantfile.
Add this into your Vagrantfile:
module OS
  def OS.windows?
    (/cygwin|mswin|mingw|bccwin|wince|emx/ =~ RUBY_PLATFORM) != nil
  end

  def OS.mac?
    (/darwin/ =~ RUBY_PLATFORM) != nil
  end

  def OS.unix?
    !OS.windows?
  end

  def OS.linux?
    OS.unix? and not OS.mac?
  end
end
Then you can use it as you like.
if OS.windows?  # "then" is optional
  # code...
end
Edit: was missing the ? on if condition.
Example used to test:
is_windows_host = "#{OS.windows?}"
puts "is_windows_host: #{is_windows_host}"
if OS.windows?
  puts "Vagrant launched from windows."
elsif OS.mac?
  puts "Vagrant launched from mac."
elsif OS.linux?
  # check linux? before the broader unix?, otherwise this branch is never reached
  puts "Vagrant launched from linux."
elsif OS.unix?
  puts "Vagrant launched from unix."
else
  puts "Vagrant launched from unknown platform."
end
Execute:
# Ran provision to call Vagrantfile.
$ vagrant provision
is_windows_host: false
Vagrant launched from mac.
Here is a version using the Vagrant utils that checks for Mac and Windows:
if Vagrant::Util::Platform.windows?
  # is windows
elsif Vagrant::Util::Platform.darwin?
  # is mac
else
  # is linux or some other OS
end
As I read the original question, it is not about finding out which OS Vagrant itself runs on, but which OS the virtual machines to be provisioned have. That is why you would want to run a different provisioning script depending on the different OSes of the new VMs, e.g. "/vagrant/provisioning/only_run_this_on_${OS_OF_NEW_VM}.sh".
Unfortunately Vagrant does not have this capability (yet), so this is my solution: I define my VMs at the top of my Vagrantfile:
cluster = {
  "control.ansible.RHEL76" => { :ip => "192.168.1.31", :type => 0, :cpus => 1, :mem => 1024, :box_image => "centos/7" },
  "app01.ansible.RHEL76"   => { :ip => "192.168.1.32", :type => 1, :cpus => 1, :mem => 1024, :box_image => "centos/7" },
  "app02.ansible.RHEL76"   => { :ip => "192.168.1.33", :type => 1, :cpus => 1, :mem => 1024, :box_image => "centos/7" },
  "winserver"              => { :ip => "192.168.1.34", :type => 2, :cpus => 1, :mem => 1024, :box_image => "mwrock/Windows2016" },
}
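The conditions below use an info hash for each VM; a sketch of the loop it presumably comes from, inside the Vagrant.configure block (the provider settings here are my assumption, not the original code):

cluster.each do |hostname, info|
  config.vm.define hostname do |cfg|
    cfg.vm.box = info[:box_image]
    cfg.vm.hostname = hostname
    cfg.vm.network "private_network", ip: info[:ip]
    cfg.vm.provider "virtualbox" do |vb|
      vb.cpus = info[:cpus]
      vb.memory = info[:mem]
    end
  end
end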
Then these conditions in my code provision differently depending on the OS of each VM:
if "#{info[:box_image]}" == "mwrock/Windows2016" then
puts "is_windows_host: #{info[:box_image]}"
config.vm.provision : shell, inline => "/vagrant/provisioning/only_run_this_on_windows.psl"
end
if "#{info[:box_image]}" == "centos/7" then
puts "is_linux_host: #{info[:box_image]}"
config.vm.provision : shell, inline => "/vagrant/provisioning/only_run_this_on_linux.sh"
end
By the way, only the Ansible controller should have Ansible installed, because in real life (yes, the office) I do not use Vagrant but an Ansible controller, which I also want in my lab (VirtualBox on both my Windows 10 desktop and my Ubuntu laptop). I use a condition to test for "type == 0" (which I use for the Ansible "controller"). Only the Ansible controller runs ansible_local to provision the cluster of VMs with Ansible.
if info[:type] == 0 then
  cfg.vm.provision "shell", inline: "if [ `which ansible` ] ; then echo \"ansible available\"; else sudo yum -y update; sudo yum -y install epel-release; sudo yum -y install ansible; fi"
  cfg.vm.provision "ansible_local" do |ansible|
    ansible.extra_vars = { ansible_ssh_user: 'vagrant' }
    ansible.inventory_path = "./production"
    ansible.playbook = "rhelhosts.yml"
    ansible.limit = "local"
  end # ansible_local
end # type == 0 (ansible controller)
This is displayed during a vagrant provision:
PS D:\Documents\vagrant\ansible> vagrant provision control.ansible.RHEL76
is_linux_host: centos/7
is_linux_host: centos/7
is_linux_host: centos/7
is_windows_host: mwrock/Windows2016
--- many more lines, not relevant ---
Have fun experimenting with and deploying your multi-machine / multi-OS labs!
I can't provision; I searched the other threads but none of them were useful!
I'm using Vagrant with Puppet, both at the latest version. My project structure is:
prova
|
|__Vagrantfile
|
|__puppet
|__manifests
| |__ubonda.pp
|
|__modules
|
|__apache
|
|__manifests
|
|__apache.pp
My Vagrantfile is:
Vagrant.configure("2") do |config|
config.vm.define "ubonda" do |vm0|
vm0.vm.hostname = "ubonda"
vm0.vm.box = "hashicorp/precise64"
vm0.vm.provision "puppet" do |puppet|
puppet.manifests_path = 'puppet/manifests'
puppet.module_path = 'puppet/modules'
puppet.manifest_file = "ubonda.pp"
end
end
end
My ubonda.pp file is:
# default path
Exec {
  path => ["/usr/bin", "/bin", "/usr/sbin", "/sbin", "/usr/local/bin", "/usr/local/sbin"]
}
include apache
My apache.pp file is:
class apache {
  # install apache
  package { "apache2":
    ensure => present,
    require => Exec["apt-get update"]
  }

  # ensure that mod_rewrite is loaded and modify the default configuration file
  file { "/etc/apache2/mods-enabled/rewrite.load":
    ensure => link,
    target => "/etc/apache2/mods-available/rewrite.load",
    require => Package["apache2"]
  }

  # create directory
  file { "/etc/apache2/sites-enabled":
    ensure => directory,
    recurse => true,
    purge => true,
    force => true,
    before => File["/etc/apache2/sites-enabled/vagrant_webroot"],
    require => Package["apache2"],...
  }
}
If I launch vagrant provision I obtain:
==> ubonda: Running provisioner: puppet...
==> ubonda: Running Puppet with ubonda.pp...
==> ubonda: warning: Could not retrieve fact fqdn
==> ubonda: Could not find class apache for ubonda at /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3/ubonda.pp:6 on node ubonda
I searched the manifests folder under /tmp on the ubonda VM, and inside there is only ubonda.pp, while the modules folder contains apache, but the two are not connected. I tried copying it into the manifests folder, but nothing changed. How can I fix this?
The solution is very simple but I leave the question for posterity.
The file containing a module's Puppet class must be named init.pp for it to be correctly recognized by the autoloader.
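For this project that means moving the class into the module's manifests/init.pp; the class apache { ... } content stays the same, only the file name changes:

puppet
|
|__modules
   |
   |__apache
      |
      |__manifests
         |
         |__init.pp   (previously apache.pp)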
Is it possible to do loops in Packer?
I have a Vagrantfile with a loop and want to convert it to Packer. I already searched on Google and Stack Overflow but didn't find the right solution.
I have problems converting this loop into JSON format:

(0..$NODE_COUNT).each do |i|

The full Vagrantfile is:
# -*- mode: ruby -*-
# vi: set ft=ruby :

$BOX_IMAGE = "VAGRANT CLOUD IMAGE NAME"
$TBSP_VERSION = "2.0"

# number of additional tendermint validation nodes to be provisioned
$NODE_COUNT = 1

Vagrant.configure("2") do |config|
  (0..$NODE_COUNT).each do |i|
    config.vm.define "node#{i}" do |subconfig|
      subconfig.vm.box = $BOX_IMAGE
      subconfig.vm.hostname = "node#{i}"
      subconfig.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      subconfig.disksize.size = '10GB'
      subconfig.vm.provider "virtualbox" do |v|
        v.memory = 4096
        v.cpus = 4
        v.customize ["modifyvm", :id, "--name", "v2.0-abci-tm-d-multi-node#{i}-server"]
      end
      subconfig.trigger.after :up do |trigger|
        if(i == $NODE_COUNT)
          trigger.info = "last Node-#{$NODE_COUNT} is up"
          trigger.run = {path: "provision_tendermint.sh", :args => "'#{$NODE_COUNT}'"}
        end
      end
      subconfig.vm.provision "shell", path: "provision.sh", :args => "'#{i}' '#{$NODE_COUNT}'"
    end
  end
end
No, Packer JSON doesn't currently support any kind of looping. You will have to approach this in a different manner. I think if you are just using Vagrant to provision VMs and not create an image, then you should consider using Terraform instead.
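One possible workaround, if you do need to stay with Packer, is to generate the repetitive parts of the JSON template with a small script rather than looping inside it. A minimal sketch in Ruby; the file names and the builder stub are placeholders:

require 'json'

node_count = 1

# One shell provisioner entry per node, mirroring the Vagrant loop.
provisioners = (0..node_count).map do |i|
  {
    "type"             => "shell",
    "script"           => "provision.sh",
    "environment_vars" => ["NODE_INDEX=#{i}", "NODE_COUNT=#{node_count}"]
  }
end

template = {
  "builders"     => [{ "type" => "virtualbox-iso" }], # placeholder, fill in your own builder
  "provisioners" => provisioners
}

File.write("generated.json", JSON.pretty_generate(template))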
I want to instruct Vagrant (through the Vagrantfile) to use NFS for all declared synced folders.
I guess it will be something like this:
vm.synced_folder.each do ...
... use nfs
end
But I don't know Ruby's syntax.
Now experimenting with:
# Use nfs for better performance
config.vm.synced_folder.each do |id, options|
  if ! options[:type]
    options[:type] = "nfs"
  end
end
One thing you can try is to declare all the synced folders you want to create and then loop through that - something like:
sharedfolderlist = {
  "/folder_vm_1" => "folder_from_host/",
  "/folder_vm_2" => "/can_be_full_path_folder_from_host/",
}

sharedfolderlist.each do |vm, host|
  config.vm.synced_folder host, vm, nfs: true
end
It's not brilliant, but it could do the job.
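A variant of the same idea using the synced folder type option (the newer style of choosing NFS); the folder paths here are just placeholders:

shared_folders = {
  "/var/www"  => "./www",
  "/opt/data" => "./data",
}

shared_folders.each do |guest_path, host_path|
  config.vm.synced_folder host_path, guest_path, type: "nfs"
end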
I have a Vagrantfile in which I am provisioning different VMs by looping through a JSON file, e.g.:
cluster_config.each do |cluster|
  cluster_name = cluster[0] # name of node
  nodes_config = (JSON.parse(File.read("test_data_bags/myapp/_default.json")))['clusters'][cluster_name]['nodes']
  nodes_config.each do |node|
    # node_name, node_values, node_role and run_list come from code omitted in this excerpt
    config.vm.define node_name do |nodeconfig|
      processes = node_values['processes']
      processes.each do |process|
        nodeconfig.vm.provision :chef_solo do |chef|
          chef.data_bags_path = 'test_data_bags'
          chef.run_list = run_list
          chef.roles_path = "roles"
          # assumption: this attribute hash was presumably assigned via chef.json;
          # the opening of the assignment was missing from the excerpt
          chef.json = {
            "myapp" => {
              "cluster_name" => cluster_name,
              "role" => node_role
            },
          }
        end
      end
    end
  end
end
I would like to do the same within Kitchen, i.e. take an array of attributes and, for each array item, run recipe xyz, so that I can write some tests using test-kitchen. Is this possible?
Thanks
There are a few different workarounds to accomplish this, but they are all definitely workarounds. There is an issue that was opened to discuss support for multiple boxes in test-kitchen, and you can go there to read more about why this probably won't be supported any time soon. TL;DR: it's not really a goal of the project.
Workarounds include:
Chef-provisioning can bootstrap more servers from a single provisioned server/test-suite
Kitchen-nodes provisioner can share data about each server to the other test-suites in your setup
A custom Vagrantfile template for test-kitchen
I'm trying to design a relatively complex system, using Vagrant and Salt-Stack to handle control and provisioning. The basic idea is to provision a machine, called master, which runs the Salt-Stack master that all my other machines will connect to.
In a previous attempt to do this, I just had Vagrant set up a Salt minion which was instructed to install the salt master and a dns server package. But I'd like to simplify key transports by using Vagrant's facilities. So what I'd like to do is have Vagrant install a Salt master and a minion, so that the minion can install the dns server, and so that Vagrant can move my keys around for me.
Here is what master's configuration looks like, in the Vagrantfile:
config.vm.define :master do |master|
  master.vm.provider "virtualbox" do |vbox|
    vbox.cpus = 1
    vbox.memory = 384
  end
  master.vm.network "private_network", ip: "10.47.94.2"
  master.vm.network :forwarded_port, guest: 53, host: 53
  master.vm.hostname = "master"
  master.vm.provision :salt do |salt|
    salt.verbose = true
    salt.minion_config = "salt/master"
    salt.run_highstate = true
    salt.install_master = true
    salt.master_config = "salt/master"
    salt.master_key = "salt/keys/master.pem"
    salt.master_pub = "salt/keys/master.pub"
    salt.minion_key = "salt/keys/master.pem"
    salt.minion_pub = "salt/keys/master.pub"
    salt.seed_master = {master: "salt/keys/master.pub"}
    salt.run_overstate = true
  end
end
But I am getting the message:
Executing job with jid 20140403131604825601
-------------------------------------------
Execution is still running on master
Execution is still running on master
Execution is still running on master
Execution is still running on master
master:
Minion did not return
and when I look at master:/var/log/salt/minion, it's empty.
Is there an obvious error in my Vagrantfile configuration? Any hints?
I see that this has not been answered for a long time, so I'll post my personal minimal Vagrant Salt master file here. This problem occurred to me once when I forgot to add the master: localhost entry to the salt-minion configuration on the master (by default it looks for a host called salt).
Please note that the minion on the master has its own key.
This is running Vagrant 1.7.2 and will install a Salt master 2015.5.0.
# Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"

  # Deployment instance salt master
  config.vm.define :master do |master|
    master.vm.network :private_network, ip: "192.168.22.12"
    master.vm.hostname = 'master'
    master.vm.synced_folder "salt/roots/", "/srv/"

    master.vm.provision :salt do |config|
      config.install_master = true
      config.minion_config = "salt/minion"
      config.master_config = "salt/master"
      config.minion_key = "salt/keys/minion.pem"
      config.minion_pub = "salt/keys/minion.pub"
      config.master_key = "salt/keys/master.pem"
      config.master_pub = "salt/keys/master.pub"
      config.seed_master =
        {
          master: "salt/keys/minion.pub"
        }
      config.run_highstate = true
    end
  end
end
Master's configuration file:
# salt/master
# empty, use only defaults
Minion's configuration file on master:
# salt/minion
master: localhost
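To verify the fix, after vagrant up master you can run sudo salt-key -L inside the master VM to check that the minion's key has been accepted, and sudo salt '*' test.ping to confirm the minion now answers instead of reporting "Minion did not return".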