Vagrant not allowing NatNetwork on adapter 1 - vagrant

Vagrant is forcing network type nat on adapter 1 (eth0) even when I have the network set as natnetwork in VirtualBox.
I can set up everything through Virtualbox manually and all my VMs can communicate with each other via eth0 as desired and port forwarding works as well. I would like to get Vagrant to work the same way for easy distribution among coworkers.
It looks like others have this problem and it's not being addressed by Vagrant:
https://github.com/mitchellh/vagrant/issues/2779
Anyone know of a workaround?
Details:
$ VBoxManage list natnetworks
NetworkName: ff_mgmt
IP: 10.0.2.1
Network: 10.0.2.0/24
IPv6 Enabled: No
IPv6 Prefix: fd17:625c:f037:2::/64
DHCP Enabled: No
Enabled: Yes
Port-forwarding (ipv4)
https1:tcp:[]:4441:[10.0.2.11]:443
https2:tcp:[]:4442:[10.0.2.12]:443
https3:tcp:[]:4443:[10.0.2.13]:443
ssh1:tcp:[]:2221:[10.0.2.11]:22
ssh2:tcp:[]:2222:[10.0.2.12]:22
ssh3:tcp:[]:2223:[10.0.2.13]:22
loopback mappings (ipv4)
127.0.0.1=2
Relevant part of Vagrantfile:
boxes = [
{
:name => "ff1",
:ip => "10.0.2.11",
:ssh_port => "2221",
:https_port => "4441",
:mac => "0800270fa302",
:memory => "8192",
:cpus => "4"
},
{
:name => "ff2",
:ip => "10.0.2.12",
:ssh_port => "2222",
:https_port => "4442",
:mac => "0800270fb302",
:memory => "8192",
:cpus => "4"
},
{
:name => "ff3",
:ip => "10.0.2.13",
:ssh_port => "2223",
:https_port => "4443",
:mac => "0800270fc302",
:intnet2 => "seg5a",
:memory => "8192",
:cpus => "4"
}
]
Vagrant.configure(2) do |config|
boxes.each do |opts|
config.vm.define opts[:name] do |config|
config.vm.box = "ff"
#config.vm.box_version = 402
config.vm.hostname = opts[:name]
config.ssh.username = 'niska'
config.ssh.private_key_path = '/home/niska/.ssh/id_rsa'
config.vm.network :private_network, ip: opts[:ip]
config.vm.network :forwarded_port, guest: 22, guest_ip: opts[:ip], host: opts[:ssh_port], id: 'ssh'
config.vm.network :forwarded_port, guest: 443, host: opts[:https_port]
config.vm.provider "virtualbox" do |vb|
vb.gui = false
vb.memory = opts[:memory]
vb.cpus = opts[:cpus]
vb.customize ["modifyvm", :id, "--nic1", "natnetwork"]
vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
vb.customize ["modifyvm", :id, "--macaddress1", opts[:mac]]
#vb.customize ["modifyvm", :id, "--intnet1", "ff_mgmt"]
end
end
end
end
Output of vagrant reload. Notice the override back to nat on adapter 1. Also, port forwarding picks different ports and therefore fails to connect.
$ vagrant reload ff1
==> ff1: Attempting graceful shutdown of VM...
ff1: Guest communication could not be established! This is usually because
ff1: SSH is not running, the authentication information was changed,
ff1: or some other networking issue. Vagrant will force halt, if
ff1: capable.
==> ff1: Forcing shutdown of VM...
==> ff1: Clearing any previously set network interfaces...
==> ff1: Preparing network interfaces based on configuration...
->>> ff1: Adapter 1: nat <<<--- (overrode w/ 'nat')
ff1: Adapter 2: hostonly
==> ff1: Forwarding ports...
ff1: 22 (guest) => 2221 (host) (adapter 1)
ff1: 443 (guest) => 4441 (host) (adapter 1)
==> ff1: Running 'pre-boot' VM customizations...
==> ff1: Booting VM...
==> ff1: Waiting for machine to boot. This may take a few minutes...
ff1: SSH address: 127.0.0.1:22
ff1: SSH username: niska
ff1: SSH auth method: private key
ff1: Warning: Authentication failure. Retrying...
...
Vagrant never connects

Don't set nic1 (eth0) to anything other than NAT; Vagrant requires the first interface to be NAT so it can reach the guest over SSH.
If you change the NIC's network type to natnetwork, Vagrant can no longer figure out how to SSH to the machine, which is why you see:
ff1: Warning: Authentication failure. Retrying...
If you only need Vagrant for the initial provisioning, change the network settings after provisioning finishes; however, after that point you won't be able to manage the machine with Vagrant.
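One hedged workaround (a sketch, not from the answer above): leave adapter 1 as Vagrant's default NAT so vagrant ssh keeps working, and attach the NAT Network to a second adapter instead. The ff_mgmt name comes from the question; --nic2 and --nat-network2 are standard VBoxManage modifyvm flags, but how well Vagrant's own port forwards coexist with the NAT Network's forwards is untested here.
config.vm.provider "virtualbox" do |vb|
  # Adapter 1 stays as Vagrant's default NAT (used for vagrant ssh).
  # Adapter 2 is attached to the pre-existing NAT Network "ff_mgmt".
  vb.customize ["modifyvm", :id, "--nic2", "natnetwork"]
  vb.customize ["modifyvm", :id, "--nat-network2", "ff_mgmt"]
  vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
end
With this layout the guest sees the NAT Network on eth1 rather than eth0, so the private_network line (which creates a host-only adapter) would need to be dropped or adjusted accordingly.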

Related

Vagrant virtualbox provider - provision and customize error with storagectl attach on subsequent "reload"

All - I am setting up a multi-machine environment with Vagrant and the VirtualBox provider on a MacOSX 10.9.5.
PROBLEM: I need to attach a SATA storage controller to a CentOS 7.1 guest VM to add additional vHDDs. I do this with the following customize snippet:
config.vm.define vmName = hostname do |node|
...snip...
node.vm.provider "virtualbox" do |v|
v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
...
This works perfectly fine on a vagrant up - the SATA controller is attached, and I can subsequently createhd and storageattach them. However - when I do a vagrant reload, the customize is being run - causing the failure error condition:
Stderr: VBoxManage: error: Storage controller named 'SATA Controller' already exists
VBoxManage: error: Details: code VBOX_E_OBJECT_IN_USE (0x80bb000c), component SessionMachine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "AddStorageController(Bstr(pszCtl).raw(), StorageBus_SATA, ctl.asOutParam())" at line 1044 of file VBoxManageStorageController.cpp
Versions:
Vagrant version: 1.7.2
VirtualBox version: 4.3.28r100309
If I comment out the storagectl command prior to running the vagrant reload, then the reload succeeds correctly and there are no issues as an additional controller isn't attached. However - this isn't a sustainable solution.
Is there some option to the Vagrantfile that I haven't been able to dig up yet - to force it to NOT do a customize step? Or is this possibly a bug in the VirtualBox Vagrant provider re-running the customize steps when it shouldn't be?
~~shane
For complete reference - here is my full Vagrantfile (See the "WARNING" comments below for the customize issue):
# vi: set ft=ruby :
###
# set 'num_minions' to the number of minions you'd like built
# set 'num_disks' to the number of disks to build on the minion
# set 'disk_size' to the MB size of the disks to build for the minions
###
num_minions = 5 # number of minions to build
num_disks = 12 # number of vHDDs to attach to minions
disk_size = 256 # size (MB) of each disk to attach to minions
name_prefix = "minion" # minion name prefix to use
cur_dir = Dir.pwd
disks_dir = "#{cur_dir}/disks" # set the storage location for minion disks
int_net1 = "172.16.1" # internal net 1: first 3 octets of IP space
# NOTE "configs/minion" needs to be updated to
# match "master" IP set below - otherwise salt
# will not work
Vagrant.configure("2") do |config|
config.vm.box = "centos71"
config.vm.provider "virtualbox" do |v|
v.memory = 1024
end
config.vm.define "master-01", primary: true do |node|
node.vm.hostname = "master-01"
node.vm.network :private_network, ip: "#{int_net1}.10"
node.vm.provision "shell", inline: "echo master-01 IP set to #{int_net1}.10"
node.vm.synced_folder "~/cpelab/ceph-salt/salt/", "/srv/salt"
node.vm.synced_folder "~/cpelab/ceph-salt/pillar/", "/srv/pillar"
node.vm.provision :salt do |salt|
salt.install_master = true
salt.master_config = "configs/master"
salt.run_highstate = false
salt.master_key = 'keys/master-01.pem'
salt.master_pub = 'keys/master-01.pub'
salt.minion_config = "configs/minion"
salt.minion_key = 'keys/master-01.pem'
salt.minion_pub = 'keys/master-01.pub'
salt.seed_master = { 'dash-01' => 'keys/dash-01.pub', 'master' => 'keys/master-01.pub' }
(1..num_minions).each do |v|
vn = "#{name_prefix}-%03d" % v
salt.seed_master = { "#{vn}" => "keys/#{vn}.pub" }
end
end
end
# MINIONS with additional disk devices
(1..num_minions).each do |i|
hostname = "#{name_prefix}-%03d" % i
minion_ip = (i.to_i + 100)
config.vm.define vmName = hostname do |node|
node.vm.hostname = vmName
node.vm.network :private_network, ip: "#{int_net1}.#{minion_ip}"
node.vm.provision "shell", inline: "echo #{vmName} IP set to #{int_net1}.#{minion_ip}"
node.vm.provider "virtualbox" do |v|
############# WARNING
############# WARNING: can't "vagrant reload" - the 'customize' snippet is
############# WARNING: re-run on reload, and you can't add a second controller
############# WARNING: with the same name. Seems to be a bug in the VirtualBox
############# WARNING: provider for Vagrant, or my usage???
############# WARNING
v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
(1..num_disks).each do |disk|
diskname = File.expand_path("#{disks_dir}/#{vmName}-#{disk}.vdi")
v.customize ['createhd', '--filename', diskname, '--size', "#{disk_size}"]
v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', disk, '--device', 0, '--type', 'hdd', '--medium', diskname]
end
end
node.vm.provision :salt do |salt|
salt.minion_config = "configs/minion"
salt.minion_key = "keys/#{vmName}.pem"
salt.minion_pub = "keys/#{vmName}.pub"
end
end
end
# setup Dashing web dashboard
config.vm.define "dash-01" do |node|
node.vm.hostname = "dash-01"
node.vm.network :private_network, ip: "#{int_net1}.11"
node.vm.provision "shell", inline: "echo dash-01 IP set to #{int_net1}.11"
node.vm.network "forwarded_port", guest: 80, host: 8080
node.vm.provision "shell", inline: "echo dash-01 port 80 forwarding: http://127.0.0.1:8080"
node.vm.synced_folder "~/cpelab/ceph-salt/salt/", "/srv/salt"
node.vm.synced_folder "~/cpelab/ceph-salt/pillar/", "/srv/pillar"
node.vm.provision :salt do |salt|
salt.minion_config = "configs/minion"
salt.minion_key = 'keys/dash-01.pem'
salt.minion_pub = 'keys/dash-01.pub'
end
end
end
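For what it's worth, one possible workaround is to guard the storage customizations so they only run when the disk files don't exist yet; this is a sketch built from the variables in the Vagrantfile above and assumes the .vdi files are only present after the first successful vagrant up:
node.vm.provider "virtualbox" do |v|
  # Only create the controller and disks the first time; on "vagrant reload"
  # the first .vdi already exists, so the whole block is skipped.
  unless File.exist?(File.expand_path("#{disks_dir}/#{vmName}-1.vdi"))
    v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
    (1..num_disks).each do |disk|
      diskname = File.expand_path("#{disks_dir}/#{vmName}-#{disk}.vdi")
      v.customize ['createhd', '--filename', diskname, '--size', "#{disk_size}"]
      v.customize ['storageattach', :id, '--storagectl', 'SATA Controller',
                   '--port', disk, '--device', 0, '--type', 'hdd', '--medium', diskname]
    end
  end
end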

Chef run list is empty. Chef-Zero Vagrant Provisioner

I am trying to set up my Chef-Zero provisioner to execute the run list from a node JSON file. This is what my Vagrantfile looks like:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "ubuntu-14.04_Base_image"
config.vm.hostname = "app_node"
config.omnibus.chef_version = '12.0.3'
config.vm.provision "chef_zero" do |chef|
chef.cookbooks_path = ["../kitchen/cookbooks", "../kitchen/site-cookbooks"]
chef.roles_path = "../kitchen/roles"
chef.data_bags_path = "../kitchen/data_bags"
chef.nodes_path = "../kitchen/nodes"
chef.node_name = "app_node"
end
end
When I run vagrant up, I get the following output from the chef-zero provisioner.
==> default: Running provisioner: chef_zero...
==> default: Detected Chef (latest) is already installed
Generating chef JSON and uploading...
==> default: Warning: Chef run list is empty. This may not be what you want.
==> default: Running chef-zero...
==> default: stdin: is not a tty
==> default: [2015-01-08T01:19:51+00:00] INFO: Started chef-zero at http://localhost:8889 with repository at /tmp/vagrant-chef/bec99a1a4e96279669bc5bb3140c0f2e, /tmp/vagrant-chef/2cbca02c0b5c49646d765ae1c9c0738f
==> default: One version per cookbook
==> default: data_bags at /tmp/vagrant-chef/72ac2a17a7c339d91d27a954fc49f8c3/data_bags
==> default: nodes at /tmp/vagrant-chef/83a9002a19a985bce4d26b8c9d050540/nodes
==> default: roles at /tmp/vagrant-chef/9cdb44a6dfefce6070b32ff28cb96535/roles
==> default: [2015-01-08T01:19:51+00:00] INFO: Forking chef instance to converge...
==> default: [2015-01-08T01:19:51+00:00] INFO: *** Chef 12.0.3 ***
==> default: [2015-01-08T01:19:51+00:00] INFO: Chef-client pid: 1731
==> default: [2015-01-08T01:19:57+00:00] INFO: Run List is []
==> default: [2015-01-08T01:19:57+00:00] INFO: Run List expands to []
==> default: [2015-01-08T01:19:57+00:00] INFO: Starting Chef Run for jira
==> default: [2015-01-08T01:19:57+00:00] INFO: Running start handlers
==> default: [2015-01-08T01:19:57+00:00] INFO: Start handlers complete.
==> default: [2015-01-08T01:19:59+00:00] INFO: Chef Run complete in 1.831177529 seconds
==> default: [2015-01-08T01:19:59+00:00] INFO: Skipping removal of unused files from the cache
==> default: [2015-01-08T01:19:59+00:00] INFO: Running report handlers
==> default: [2015-01-08T01:19:59+00:00] INFO: Report handlers complete
My question is: why was the run list empty? I specified the nodes directory and the node name in the chef_zero provisioner. The JSON file that specifies the run list for "app_node" exists in the nodes directory, and I can see that Chef is copying all the cookbook/node/role files up to the server correctly.
I feel like I am missing something here. Any help would be much appreciated. If anything is unclear let me know.
Vagrant's chef_zero provisioner actually runs chef-solo -c solo.rb -j dna.json; your configuration in nodes is not used other than to mount the directories.
You can work around this with something like the following (it assumes that nodes/hostname.json uses the same name you put in config.vm.hostname):
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "chef/centos-6.6"
config.vm.hostname = "foobarhost"
VAGRANT_JSON = JSON.parse(Pathname(__FILE__).dirname.join('../../nodes', "#{config.vm.hostname}.json").read)
config.vm.provision :chef_solo do |chef|
chef.cookbooks_path = ["../../cookbooks", "../../site-cookbooks"]
chef.roles_path = "../../roles"
chef.data_bags_path = "../../data_bags"
chef.verbose_logging = true
chef.run_list = VAGRANT_JSON.delete('run_list') if VAGRANT_JSON['run_list']
# Add JSON Attributes
chef.json = VAGRANT_JSON
end
end
Note that I just use chef_solo; what's the point of running chef_zero when it really calls chef_solo?
You aren't specifying a run list. The nodes directory you're pointing at is where node data gets stored, not where run lists are read from, so Chef has no idea what you are trying to do right now.
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "ubuntu-14.04_Base_image"
config.vm.hostname = "app_node"
config.omnibus.chef_version = '12.0.3'
config.vm.provision "chef_zero" do |chef|
chef.cookbooks_path = ["../kitchen/cookbooks", "../kitchen/site-cookbooks"]
chef.roles_path = "../kitchen/roles"
chef.data_bags_path = "../kitchen/data_bags"
chef.nodes_path = "../kitchen/nodes"
chef.node_name = "app_node"
chef.run_list = []
chef.json = {}
end
end
This should allow you to set your run list and include any JSON attributes you wanted to include with it.
http://docs.vagrantup.com/v2/provisioning/chef_common.html
Has more options you might want to consider if they meet your needs. :-)
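For example (the recipe, role, and attribute names here are hypothetical, just to show the shape):
chef.run_list = ["recipe[apache2]", "role[webserver]"]
chef.json = { "apache" => { "listen_ports" => ["80"] } }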
Vagrant doesn't consult Chef about existing nodes. In order to use the Chef node file that you wrote, you'll need to tell vagrant about it. In your Vagrantfile, add this line beneath the line chef.node_name = "app_node":
chef.json = JSON.parse(File.read("../kitchen/nodes/app_node.json"))
Once you do that, Chef and Vagrant will both look for the node configuration data in the same place.
I am not sure how to specify a local directory (still researching), but when you specify a run list in the Vagrantfile via chef.run_list, Vagrant writes it to /tmp/vagrant-chef/dna.json on the guest.
It also creates a solo.rb, which will include settings such as node_name if chef.node_name is set.
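Roughly speaking (an illustration, not captured from an actual run; the recipe name is hypothetical), that generated dna.json is just the chef.json attribute hash with the run list merged in:
{
  "run_list": ["recipe[apache2]"]
}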

Unable to assign static ip using Vagrant

I am using Vagrant and the vagrant-vsphere plugin to deploy a VM on vCenter. Vagrant deploys the VM, but it fails to assign the IP address for the VM. Could anybody let me know whether I need to make specific changes in order for the IP address to be assigned?
Here are the contents of my Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = 'dummy'
config.vm.box_url = './example_box/dummy.box'
config.vm.network :public_network, ip: "xxx.xxx.xxx.xxx"
config.vm.provider :vsphere do | vsphere |
vsphere.host = '<vSphere_Host>'
vsphere.data_center_name = '<Data_Center_Name>'
vsphere.data_store_name = '<Data_Store_Name>'
vsphere.template_name = '<Template_Name>'
vsphere.name = '<New_Name_Of_The_VM>'
vsphere.user = '<vSphere_User_Name>'
vsphere.password = '<The_Password>'
vsphere.insecure = true
vsphere.compute_resource_name = '<Compute_Resource_IP>'
end
end
Note: the template I am uploading is a custom template [a Linux box with some applications running on top of it].
You can set a static IP with a private network. Try this network configuration:
config.vm.network "private_network", ip: "192.168.50.4"
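Applied to the Vagrantfile from the question, that would look roughly like this (provider placeholders kept as-is; whether the guest actually picks up the address still depends on the template and guest tooling, which this sketch assumes are in place):
Vagrant.configure("2") do |config|
  config.vm.box = 'dummy'
  config.vm.box_url = './example_box/dummy.box'
  # static address on a private network instead of public_network
  config.vm.network :private_network, ip: "192.168.50.4"
  config.vm.provider :vsphere do |vsphere|
    # ... provider settings unchanged ...
  end
end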

Vagrant to deploy Salt master

I'm trying to design a relatively complex system, using Vagrant and Salt-Stack to handle control and provisioning. The basic idea is to provision a machine, called master, which runs the Salt-Stack master that all my other machines will connect to.
In a previous attempt to do this, I just had Vagrant set up a Salt minion which was instructed to install the salt master and a dns server package. But I'd like to simplify key transports by using Vagrant's facilities. So what I'd like to do is have Vagrant install a Salt master and a minion, so that the minion can install the dns server, and so that Vagrant can move my keys around for me.
Here is what master's configuration looks like, in the Vagrantfile:
config.vm.define :master do |master|
master.vm.provider "virtualbox" do |vbox|
vbox.cpus = 1
vbox.memory = 384
end
master.vm.network "private_network", ip: "10.47.94.2"
master.vm.network :forwarded_port, guest: 53, host: 53
master.vm.hostname = "master"
master.vm.provision :salt do |salt|
salt.verbose = true
salt.minion_config = "salt/master"
salt.run_highstate = true
salt.install_master = true
salt.master_config = "salt/master"
salt.master_key = "salt/keys/master.pem"
salt.master_pub = "salt/keys/master.pub"
salt.minion_key = "salt/keys/master.pem"
salt.minion_pub = "salt/keys/master.pub"
salt.seed_master = {master: "salt/keys/master.pub"}
salt.run_overstate = true
end
end
But I am getting the message:
Executing job with jid 20140403131604825601
-------------------------------------------
Execution is still running on master
Execution is still running on master
Execution is still running on master
Execution is still running on master
master:
Minion did not return
and when I look at master:/var/log/salt/minion, it's empty.
Is there an obvious error in my Vagrantfile configuration? Any hints?
I see that this has not been answered for a long time, so I'm posting my personal minimal Vagrant Salt master setup here. This problem occurred to me once when I forgot to add the master: localhost entry to the salt-minion configuration on the master (by default the minion looks for a host called salt).
Please note that the minion on the master has its own key.
This is running Vagrant 1.7.2 and will install Salt master 2015.5.0.
# Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "chef/centos-7.0"
# Deployment instance salt master
config.vm.define :master do |master|
master.vm.network :private_network, ip: "192.168.22.12"
master.vm.hostname = 'master'
master.vm.synced_folder "salt/roots/", "/srv/"
master.vm.provision :salt do |config|
config.install_master = true
config.minion_config = "salt/minion"
config.master_config = "salt/master"
config.minion_key = "salt/keys/minion.pem"
config.minion_pub = "salt/keys/minion.pub"
config.master_key = "salt/keys/master.pem"
config.master_pub = "salt/keys/master.pub"
config.seed_master =
{
master: "salt/keys/minion.pub"
}
config.run_highstate = true
end
end
end
Master's configuration file:
# salt/master
# empty, use only defaults
Minion's configuration file on master:
# salt/minion
master: localhost
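Once the box is up, a quick sanity check that the master's local minion registered (standard Salt commands, run on the guest):
$ vagrant up master
$ vagrant ssh master
$ sudo salt-key -L          # the key for "master" should show up as accepted
$ sudo salt '*' test.ping   # the minion named "master" should answer: True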

How do I add another virtual host?

I created a Debian VM, and via PuPHPet I created one virtual host and pointed the local IP it created at it inside my host machine's hosts file. But now when I try creating another one it won't work. I've read other threads such as:
creating virtual hosts on a vagrant box, which suggests using Chef. But what I don't understand is why it won't work the normal way. By the normal way I mean creating a new vhost file in /etc/apache2/sites-available, enabling it, and just adding the hostname with the IP to my local machine's hosts file. What is Chef doing that I can't do manually?
Here is my vagrant file:
Vagrant.configure("2") do |config|
config.vm.box = "debian-wheezy72-x64-vbox43"
config.vm.box_url = "https://puphpet.s3.amazonaws.com/debian-wheezy72-x64-vbox43.box"
config.vm.network "private_network", ip: "192.168.56.101"
config.vm.synced_folder "/Volumes/www", "/var/www", id: "vagrant-root", :nfs => false
config.vm.usable_port_range = (2200..2250)
config.vm.provider :virtualbox do |virtualbox|
virtualbox.customize ["modifyvm", :id, "--name", "wheezyDEB"]
virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
virtualbox.customize ["modifyvm", :id, "--memory", "512"]
virtualbox.customize ["setextradata", :id, "--VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
end
config.vm.provision :shell, :path => "shell/initial-setup.sh"
config.vm.provision :shell, :path => "shell/update-puppet.sh"
config.vm.provision :shell, :path => "shell/librarian-puppet-vagrant.sh"
config.vm.provision :puppet do |puppet|
puppet.facter = {
"ssh_username" => "vagrant"
}
puppet.manifests_path = "puppet/manifests"
puppet.options = ["--verbose", "--hiera_config /vagrant/hiera.yaml", "--parser future"]
end
config.ssh.username = "vagrant"
config.ssh.shell = "bash -l"
config.ssh.keep_alive = true
config.ssh.forward_agent = false
config.ssh.forward_x11 = false
config.vagrant.host = :detect
end
I figured it out. I'm not sure if this is the correct way of doing it, but it works and that's all that matters. If anyone has a better or faster way, let me know.
This is what I did:
1) I had originally used PuPHPet from puphpet.com
2) I went into the directory PuPHPet generated for me, mypath/puppet/hieradata, and opened common.yaml
3) Under vhosts: are the vhost settings, for example:
vhosts:
VO6aT11EHJmL:
servername: localhost
docroot: /var/www/public
port: '80'
setenv:
- 'APP_ENV dev'
override:
- All
4yNJr1LpLJYA:
servername: laravel.dev
docroot: /var/www/vhosts/laravel-dev
port: '80'
setenv:
- 'APP_ENV dev'
override:
- all
Each vhost is given a unique identification name, and it can be anything. The servername and docroot options are straightforward, so modify those as needed.
Next, in order for the virtual machine to recognize your updates, make sure the VM is halted while you make these changes to common.yaml; once you're done and have saved, type vagrant up in the terminal. When it's up, go to the directory where your sites-available directory lives. In my case I'm using Apache under Debian; the same goes for Ubuntu.
/etc/apache2/sites-available
The file names you see here match the unique names from the common.yaml file, except they are prefixed with a number, for example:
#-unique_name.conf
So if you created or modified settings under 4yNJr1LpLJYA in common.yaml, you would look for #-4yNJr1LpLJYA.conf in this directory and check that its settings match your common.yaml. Once you are done, restart your web server; in my case that's
sudo service apache2 reload
Also, don't forget to add the virtual host to your local machine's hosts file, matched with the same IP that PuPHPet gave you, for example:
192.168.56.101 my-virtual-domain
Do this for each vhost that you want to add.
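To verify a new vhost from the host machine before (or instead of) editing the hosts file, you can send the Host header by hand; the hostname here is just the example from common.yaml above:
curl -H "Host: laravel.dev" http://192.168.56.101/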
