Vagrant: ping not working while ssh works

I have created a Vagrant box using these settings:
Vagrant.configure("2") do |config|
config.ssh.forward_x11 = true
config.vm.define 'test2' do |machine|
machine.vm.box = "ubuntu/xenial64"
machine.vm.network :public_network, ip: "192.168.33.23"
machine.disksize.size = "15GB"
machine.vm.synced_folder "./data", "/root/data"
machine.vm.provider "virtualbox" do |v|
v.name = 'test2'
v.customize ["modifyvm", :id, "--memory", 3072]
v.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/data", "1"]
end
end
end
I can connect to this box via ssh, but when I try to ping it like this:
ping 192.168.33.23
it just times out. Why is this happening?

ssh and ping are different protocols: ssh runs over TCP on a specific port, while ping uses ICMP, which has no ports at all. Most likely the VM or a firewall in between is dropping ICMP echo requests even though TCP traffic is allowed. Try telnet (or nc) against a specific port instead to see whether that port is open and accessible.
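A quick way to check each protocol separately, assuming the guest really is at 192.168.33.23 as configured above:
# from the host: does the SSH port answer over TCP?
nc -vz 192.168.33.23 22
# inside the guest: is a firewall dropping ICMP echo requests?
sudo iptables -L INPUT -v -n | grep -i icmp
sudo ufw status verbose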

Related

How do I provision a Vagrant box before it's been instantiated?

I would like to create a Vagrant box from the output of another Vagrant box, and I'd like to do it from within Vagrant: run vagrant init or vagrant up and have it perform the following steps:
Run this command to convert the raw .bin image to a .vdi: vboxmanage convertfromraw --format vdi ../build/qMp_3.2.1-Clearance_VirtualBox_x86_factory_20180325-0702.bin qmp-nycmesh-3.2.1.vdi
Create an instance based on that .vdi
I have this Vagrantfile, but 1) it is not executing the shell script before trying to create the box, and 2) it is trying to download a box instead of using the VDI given:
Vagrant.configure("2") do |config|
config.vm.provision "shell", inline: "echo Hello"
#config.vm.box = "nycmesh-qmp-openwrt"
config.vm.network "public_network"
config.vm.network "private_network", type: "dhcp"
config.vm.provider "virtualbox" do |vb|
vb.memory = "64"
vb.customize ['storageattach', :id, '--storagectl', 'IDE Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', "../build/qMp_3.2.1-Clearance_VirtualBox_x86_factory_20170406-2203.vdi"]
#vb.customize ["modifyvm", :id, "--ostype", "Linux26"]
end
end
With config.vm.box set, it gives:
$ vagrant up
==> default: Box 'nycmesh-qmp-openwrt' could not be found. Attempting to find and install...
default: Downloading: nycmesh-qmp-openwrt
An error occurred while downloading the remote file. ...
Couldn't open file .../nycmesh-qmp-openwrt
And with config.vm.box commented out, it gives:
$ vagrant up
vm:
* A box must be specified.
Vagrant 2.0.1
I found I can specify a placeholder box and just swap its disk out before it boots, but Vagrant still has to copy the gigabyte-sized placeholder image, and vagrant destroy doesn't delete it afterwards. This is less than ideal.
I realized the Vagrantfile is just Ruby, so I was able to script the VDI creation. I wasn't able to delete the placeholder image, though, because there's no hook between the instance being created and the disk image being swapped in.
Vagrant.configure("2") do |config|
latest_bin = `ls -t ../build/*.bin | head -1`.strip
#latest_bin = Dir.glob('../build/*.bin').sort{ |a,b| File.new(a).stat <=> File.new(b).stat }.last
vdi_file = 'nycmesh-qmp-openwrt.vdi'
system "vboxmanage convertfromraw --format vdi #{latest_bin} #{vdi_file}" unless File.exist?(vdi_file)
config.vm.box = "centos/7"
config.vm.network "public_network"
config.vm.network "private_network", type: "dhcp"
config.vm.provider "virtualbox" do |vb|
vb.memory = "64"
# add the newly created build disk firmware
#vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 0, '--device', 0, '--type', 'hdd', '--medium', "none"]
vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 0, '--device', 0, '--type', 'hdd', '--medium', "nycmesh-qmp-openwrt.vdi"]
vb.customize ["modifyvm", :id, "--ostype", "Linux26"]
end
config.vm.provision "shell", inline: "" do
# delete the placeholder dummy hdd image. there should be a better way.
# NO WAY TO GET THE DUMMY HDD FILE NAME AFTER THE INSTANCE IS CREATED AND BEFORE THE NEW VDI IS INSTALLED!
# id = File.read(".vagrant/machines/default/virtualbox/id")
# hdd_file = `vboxmanage showvminfo #{id}`.split(/\n/).grep(/IDE \(0, 0\): /).first.split(/ /)[3]
# puts hdd_file
# File.delete(hdd_file) if hdd_file.index 'centos-7-1-1.x86_64.vmdk'
end
end
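For completeness (this is not from the original post): Vagrant 2.1.0 and later ship built-in triggers, which may offer the hook that was missing in the Vagrant 2.0.1 setup above. A minimal sketch, with the actual cleanup left as a comment because the placeholder disk's name still has to be looked up (for example via vboxmanage showvminfo, as in the commented code above):
config.trigger.after :up do |trigger|
  trigger.info = "cleaning up placeholder disk"
  trigger.ruby do |env, machine|
    # Hypothetical cleanup: locate the detached placeholder .vmdk for this
    # machine and delete it here; nothing in this block is from the original post.
  end
end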

Permission issue with Vagrant: 501 dialout

When trying to use NFS with Vagrant to sync folders, inside the VM I see the synced files owned by user 501 and group dialout.
I worked around the issue by disabling NFS and specifying a user and group manually in the Vagrantfile, but I would like to use NFS.
Here is my Vagrantfile:
$BOX = "ubuntu/xenial64"
$IP = "10.0.0.10"
$MEMORY = ENV.has_key?('VM_MEMORY') ? ENV['VM_MEMORY'] : "1024"
$CPUS = ENV.has_key?('VM_CPUS') ? ENV['VM_CPUS'] : "1"
Vagrant.configure("2") do |config|
config.vm.hostname = "name.dev"
config.vm.box = $BOX
config.vm.network :private_network, ip: $IP
config.ssh.forward_agent = true
config.vm.synced_folder ".", "/var/www/name/current", type: "nfs"
config.vm.provider "virtualbox" do |v|
v.name = "name
v.customize ["modifyvm", :id, "--cpuexecutioncap", "100"]
v.customize ["modifyvm", :id, "--memory", $MEMORY]
v.customize ["modifyvm", :id, "--cpus", $CPUS]
end
end
I've tried nfsbind, but it seems to require absolute paths, which I would rather avoid.
Any tips?
Thanks
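(Not from the original thread, added for reference.) The user 501 / group dialout ownership is most likely the macOS host user's numeric UID/GID coming straight through NFSv3. A common workaround is the vagrant-bindfs plugin, which re-mounts the NFS export inside the guest with forced ownership. A minimal sketch, where the intermediate /var/www/name/nfs path is purely illustrative and the option names should be checked against the plugin's documentation:
# requires: vagrant plugin install vagrant-bindfs
config.vm.synced_folder ".", "/var/www/name/nfs", type: "nfs"
config.bindfs.bind_folder "/var/www/name/nfs", "/var/www/name/current",
  force_user:  "vagrant",   # assumed guest user
  force_group: "vagrant"    # assumed guest group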

Access Kubernetes service (in Vagrant) on localhost

I have created a service on some pods in Kubernetes. I'm able to curl the service-ip:port and I see some HTML.
I want to access the service in my browser on my own PC.
What do I have to edit in my Vagrantfile to map the Vagrant VM's internal IP to localhost on my own PC?
This is the part of my Vagrantfile where (I think) I have to make edits:
# Finally, fall back to VirtualBox
config.vm.provider :virtualbox do |v, override|
setvmboxandurl(override, :virtualbox)
v.memory = vm_mem # v.customize ["modifyvm", :id, "--memory", vm_mem]
v.cpus = $vm_cpus # v.customize ["modifyvm", :id, "--cpus", $vm_cpus]
# Use faster paravirtualized networking
v.customize ["modifyvm", :id, "--nictype1", "virtio"]
v.customize ["modifyvm", :id, "--nictype2", "virtio"]
end
end
# Kubernetes master
config.vm.define "master" do |c|
customize_vm c, $vm_master_mem
if ENV['KUBE_TEMP'] then
script = "#{ENV['KUBE_TEMP']}/master-start.sh"
c.vm.provision "shell", run: "always", path: script
end
c.vm.network "private_network", ip: "#{$master_ip}"
end
# Kubernetes node
$num_node.times do |n|
node_vm_name = "node-#{n+1}"
node_prefix = ENV['INSTANCE_PREFIX'] || 'kubernetes' # must mirror default in cluster/vagrant/config-default.sh
node_hostname = "#{node_prefix}-#{node_vm_name}"
config.vm.define node_vm_name do |node|
customize_vm node, $vm_node_mem
node_ip = $node_ips[n]
if ENV['KUBE_TEMP'] then
script = "#{ENV['KUBE_TEMP']}/node-start-#{n}.sh"
node.vm.provision "shell", run: "always", path: script
end
node.vm.network "private_network", ip: "#{node_ip}"
end
end
I tried to change node.vm.network "private_network", ip: "#{node_ip}" to node.vm.network "private_network", ip: "192.xx.xx.xx", but then I'm not able to reload my Vagrant machine:
/opt/vagrant/embedded/lib/ruby/2.2.0/ipaddr.rb:559:in `in6_addr': invalid address (IPAddr::InvalidAddressError)
Your service needs to be of type NodePort to be reachable on the IP of the Vagrant VM. For example, if your pods are started by a replication controller named foobar, create the service with:
kubectl expose rc foobar --type=NodePort --port=80
This assigns a random port from the NodePort range. Point your browser at your Vagrant node VM's IP on that port and you will reach your service.
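If you also want the service on localhost of the host PC, as the question asks, one option is a forwarded port entry in the Vagrantfile. A minimal sketch, where 30080 is just a placeholder for whatever NodePort kubectl actually assigned:
# inside the existing `config.vm.define node_vm_name do |node|` block:
# forward the service's NodePort to the same port on the host, so that
# http://localhost:30080 on the PC reaches the service
node.vm.network "forwarded_port", guest: 30080, host: 30080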

Configure multiple monitors with vagrant

How do I configure multiple monitors for the vmware-workstation provider in a Vagrantfile?
For the virtualbox provider this can be done as follows:
config.vm.provider :virtualbox do |vb|
vb.gui = true
vb.customize ["modifyvm", :id, "--monitorcount", "2"]
end
As a result, the VM is displayed on 2 monitors. How can I do this for vmware-workstation?
I have no VMware Workstation at hand, but according to this reference, the parameter you're searching for is svga.numDisplays.
So the resulting snippet for the Vagrantfile should be
config.vm.provider "vmware_workstation" do |v|
v.vmx["svga.numDisplays"] = 2
v.vmx["svga.autodetect"] = "FALSE"
end

Error With Vagrant Provisioning

I am trying to create and then provision a multi-machine Vagrant environment using Ansible.
My Vagrantfile looks like:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
config.vm.box_url = "https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
(1..3).each do |i|
config.vm.define "db#{i}" do |node|
node.vm.box = "trusty64"
node.vm.network "public_network", ip: "192.168.253.20#{i}", bridge: 'en0: Wi-Fi (AirPort)'
node.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 512]
v.name = "db#{i}_box"
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
end
end
(1..1).each do |i|
config.vm.define "es#{i}" do |node|
node.vm.box = "trusty64"
node.vm.network "public_network", ip: "192.168.253.21#{i}", bridge: 'en0: Wi-Fi (AirPort)'
node.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 1024]
v.name = "es#{i}_box"
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
end
end
config.vm.define "emailchecker" do |node|
node.vm.box = "trusty64"
node.vm.network "public_network", ip: "192.168.253.205", bridge: 'en0: Wi-Fi (AirPort)'
node.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 1024]
v.name = "emailchecker_box"
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
end
config.vm.define "smartrouter" do |node|
node.vm.box = "trusty64"
node.vm.network "public_network", ip: "192.168.253.206", bridge: 'en0: Wi-Fi (AirPort)'
node.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 1024]
v.name = "smartrouter_box"
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
end
config.vm.define "web" do |node|
node.vm.box = "trusty64"
node.vm.network "public_network", ip: "192.168.253.204", bridge: 'en0: Wi-Fi (AirPort)'
node.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 1024]
v.name = "web_box"
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
####
#
# Note Provisioning is included here to ensure it only gets run once on all the boxes.
#
####
node.vm.provision "ansible" do |ansible|
ansible.playbook = "ansible_playbook/playbook.yml"
ansible.verbose = "vvvv"
ansible.limit = 'all'
ansible.inventory_path = "ansible_playbook/vagrant_inventory"
end
end
end
I can create the machines by doing a vagrant up --no-provision and I can log into the machines by doing vagrant ssh web (for example).
However, when I try to provision the machines, I get the following error message:
fatal: [web] => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/smcgurk/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/Users/smcgurk/.ansible/cp/ansible-ssh-192.168.253.204-22-vagrant" does not exist
Does anyone have any idea what might be going on here, or any suggestions on how I can debug or remedy this?
Thanks,
Seán
I have seen this error before, and it's usually down to spaces!
It seems that when the path to the SSH key contains spaces, these are not correctly escaped, effectively breaking Ansible.
I may be totally off the beaten track here, but I have rechecked Vagrantfiles in the past and found this to be the case.
Another thought: check whether you have any references to localhost instead of 127.0.0.1.
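A quick way to see exactly which key path and SSH options Vagrant is using (and to spot spaces in that path), and then to test Ansible's connectivity directly, using the machine and inventory names from the post above:
vagrant ssh-config web
# then check that Ansible itself can reach the host:
ansible web -i ansible_playbook/vagrant_inventory -m ping -vvvv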
