The /vagrant directory in the guest is empty. It should contain the host workspace where my Vagrantfile is located. I can cd into /vagrant, but it is empty.
Vagrantfile

VAGRANTFILE_API_VERSION = '2'
VAGRANT_BOX_NAME = 'nomades.local'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = 'bento/centos-6.7'
  config.vm.box_check_update = false
  config.vm.hostname = VAGRANT_BOX_NAME

  config.vm.define VAGRANT_BOX_NAME do |dev|
    dev.vm.provider 'virtualbox' do |v|
      v.name = VAGRANT_BOX_NAME
      v.memory = 512
      v.cpus = 2
      v.gui = true
      v.customize ['modifyvm', :id, '--ioapic', 'on', '--vram', '16']
    end

    # Network (remember to configure your /etc/hosts to point to this IP)
    dev.vm.network 'private_network', ip: '192.168.12.2'

    # In order to use a VPN
    # dev.vm.network 'public_network', ip: '172.32.0.101'

    # Provisioning
    dev.vm.provision "ansible" do |ansible|
      ansible.groups = {
        'vagrant' => [VAGRANT_BOX_NAME],
        'servers' => [VAGRANT_BOX_NAME],
      }
      ansible.playbook = 'provision/provision.yml'
      ansible.sudo = true
    end
  end
end
This happens when Vagrant can't modify /etc/fstab in the guest and the attempt to mount the host's working directory fails. Make sure it has permission to mount the directory.
Run this on the Vagrant guest VM, then log out.
$ sudo /etc/init.d/vboxadd setup
Run this on the host.
$ vagrant reload
$ vagrant ssh
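If the Guest Additions keep breaking (for example after a kernel update in the guest), the vagrant-vbguest plugin can rebuild them automatically at boot. This is optional, not part of the fix above, but it avoids repeating the step:

$ vagrant plugin install vagrant-vbguest
$ vagrant reload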
This issue happened to me after manually rebooting the VM (i.e. sudo reboot).
Restarting the machine from the host solved the problem for me:
$ vagrant halt
$ vagrant up
You need to add the line below to your Vagrantfile; "." means the current directory, i.e. the host directory containing the Vagrantfile:
config.vm.synced_folder ".", "/vagrant"
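In context, a minimal sketch (everything apart from the synced_folder line is assumed):

Vagrant.configure("2") do |config|
  # "." is the host directory containing the Vagrantfile;
  # it is mounted at /vagrant inside the guest
  config.vm.synced_folder ".", "/vagrant"
end

Then run vagrant reload so the folder is (re)mounted.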
Related
I am having trouble connecting to my Vagrant VM from a database client tool (DataGrip).
If you know anything about this, I would appreciate any pointers.
Connection information
→ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User bargee
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/〇〇/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
SSH Connection with Terminal
$ ssh bargee@127.0.0.1 -p 2222 -i /Users/〇〇/.vagrant/machines/default/virtualbox/private_key
Welcome to Barge 2.15.0, Docker version 20.10.8, build 75249d8
[bargee@barge app]$
I was able to connect that way.
SSH connection from DataGrip
↓
Unable to connect.
Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ailispaw/barge"
  config.vm.box_version = "2.15.0"
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  config.vm.synced_folder "./", "/app"

  config.vm.provider :virtualbox do |vb|
    vb.gui = false
    vb.cpus = 4
    vb.memory = 4096
    vb.customize ['modifyvm', :id, '--natdnsproxy1', 'off']
    vb.customize ['modifyvm', :id, '--natdnshostresolver1', 'off']
  end

  config.vm.provision "shell", inline: <<-SHELL
    /etc/init.d/docker restart v20.10.8
    wget "https://github.com/docker/compose/releases/download/v2.6.0/docker-compose-$(uname -s)-$(uname -m)" -O /opt/bin/docker-compose
    chmod +x /opt/bin/docker-compose
    if ! grep -q "cd /app" /home/bargee/.bashrc ; then
      echo "cd /app" >> /home/bargee/.bashrc
    fi
  SHELL
end
My Vagrantfile looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :

vagrant_home = "/home/vagrant/"
local_share = "#{ENV['HOME']}"

unless Vagrant.has_plugin?("vagrant-vbguest")
  puts "Vagrant plugin 'vagrant-vbguest' is not installed!"
  puts "Execute: vagrant plugin install vagrant-vbguest"
end

unless Vagrant.has_plugin?("vagrant-sshfs")
  puts "Vagrant plugin 'vagrant-sshfs' is not installed!"
  puts "Execute: vagrant plugin install vagrant-sshfs"
end

Vagrant.configure("2") do |stage|
  stage.vm.box = "centos/7"
  stage.vm.hostname = "HSS-IAAS-VB"
  stage.vm.box_check_update = true
  stage.vm.network "private_network", :type => 'dhcp'

  stage.vm.provider "virtualbox" do |vb|
    vb.name = "centos7-dev"
    vb.gui = false
    vb.memory = "1024"

    stage.ssh.keys_only = false
    stage.ssh.username = "#{ENV['USER']}"
    stage.ssh.forward_agent = true
    stage.ssh.insert_key = true
    stage.ssh.private_key_path = ["#{ENV['HOME']}/.ssh/id_rsa", "/home/#{ENV['USER']}/.ssh/id_rsa"]

    stage.vm.provision :shell, privileged: false do |s|
      ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
      s.inline = <<-SHELL
        echo #{ssh_pub_key} >> #{ENV['HOME']}/.ssh/authorized_keys
        sudo bash -c "echo #{ssh_pub_key} >> #{ENV['HOME']}/.ssh/authorized_keys"
      SHELL
    end
  end
end
My issue is that when I run this Vagrantfile, I receive an error that states the following: default: Warning: Authentication failure. Retrying... If I run in debug mode, I just see a bunch of timeouts.
All I am trying to do, rather than use the default "vagrant" user, is create a user that is the same as the user on the host machine by using #{ENV['USER']}, and have that user immediately be able to run vagrant ssh: if the host user is test.user, then the guest user will be test.user.
vagrant ssh-config was:
Host default
  HostName 127.0.0.1
  User aaron.west
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/aaron.west/.ssh/id_rsa
  IdentityFile /Users/aaron.west/.ssh/id_rsa
  LogLevel FATAL
All help is appreciated :)
I believe you'll have to create a new user on your Vagrant machine. As per the docs for the ssh.username setting, it doesn't sound like that setting actually creates a user; it only tells Vagrant which user to connect as, if the box was built with a username other than vagrant.
You probably need to shell out to useradd during provisioning.
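A rough sketch of that approach (illustrative only, not the asker's exact setup: it assumes a Linux guest where useradd exists, and reuses the key paths from the question):

host_user = ENV['USER']
host_pub_key = File.read("#{Dir.home}/.ssh/id_rsa.pub").strip

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # The first boot connects as the stock "vagrant" user; this root
  # provisioner then creates the host's user in the guest and
  # authorizes the host's public key for it.
  config.vm.provision "shell", inline: <<-SHELL
    id '#{host_user}' >/dev/null 2>&1 || useradd -m -s /bin/bash '#{host_user}'
    mkdir -p '/home/#{host_user}/.ssh'
    echo '#{host_pub_key}' >> '/home/#{host_user}/.ssh/authorized_keys'
    chmod 700 '/home/#{host_user}/.ssh'
    chmod 600 '/home/#{host_user}/.ssh/authorized_keys'
    chown -R '#{host_user}:#{host_user}' '/home/#{host_user}/.ssh'
  SHELL
end

Only after this provisioner has run once can config.ssh.username be pointed at the new user; asking Vagrant to connect as a user that doesn't exist yet is consistent with the "Authentication failure. Retrying..." loop above.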
I am building Docker containers for different modules through a Vagrant machine, as below.
DOCKER_HOST_NAME = "docker-host"

# Require 'yaml' module
require 'yaml'

# Read details of containers to be created from YAML file
# Be sure to edit 'containers.yaml' to provide container details
containers = YAML.load_file('environment/containers.yaml')

# Create and configure the Docker container(s)
Vagrant.configure("2") do |config|
  config.vm.define "#{DOCKER_HOST_NAME}", autostart: false do |config|
    # Always use Vagrant's default insecure key
    config.ssh.insert_key = false
    config.vm.box = "ubuntu/trusty64"
    config.vm.network "forwarded_port", guest: 8080, host: 1234
    config.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", auto_correct: true
    config.vm.network "forwarded_port", guest: 15672, host: 15672
    config.vm.network "private_network", ip: "192.168.50.50"
    config.vm.network "forwarded_port", guest: 5672, host: 5672

    config.vm.provider :virtualbox do |vb|
      vb.name = "docker-host"
    end

    # Provision the Docker environment
    config.vm.provision "docker"

    # The following line terminates all ssh connections, so Vagrant
    # is forced to reconnect. It's a workaround to get the docker
    # command into the PATH.
    config.vm.provision "shell", inline:
      "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"

    # Create the required jars if not present.
    config.vm.provision "shell", inline: <<-SHELL
      if [ "$SS_BUILD" == "NO" ] &&
         [ -f "/vagrant/src/sonarCharts/notificationEmulator/target/notificationEmulator-1.0-SNAPSHOT.jar" ] &&
         [ -f "/vagrant/src/sonarCharts/dataInputHandler/target/dataInputHandler-1.0-SNAPSHOT.jar" ] &&
         [ -f "/vagrant/src/sonarCharts/controller/target/controller-1.0-SNAPSHOT.jar" ] &&
         [ -f "/vagrant/src/sonarCharts/modelingWorker/target/modelingWorker-1.0-SNAPSHOT.jar" ] &&
         [ -f "/vagrant/src/sonarCharts/contouringWorker/target/contouringWorker-1.0-SNAPSHOT.jar" ]
      then
        echo "ALL JAR FILES ARE PRESENT"
      else
        echo "Building the jar files..."
        sudo apt-get -y install maven
        sudo apt-get -y install default-jdk
        mvn -f /vagrant/src/sonarCharts/pom.xml clean install
      fi
    SHELL

    config.vm.synced_folder ENV['SS_INPUT'], "/vagrant/data/input"
    config.vm.synced_folder ENV['SS_OUTPUT'], "/vagrant/data/output"

    config.vm.provision "docker" do |docker|
      docker.build_image "/vagrant/environment/base", args: "-t local/base"
      docker.build_image "/vagrant/environment/rabbitmq", args: "-t local/rabbitmq"
      docker.build_image "/vagrant/environment/notificationEmulator", args: "-t local/notificationEmulator"
      docker.build_image "/vagrant/environment/dataInputHandler", args: "-t local/dataInputHandler"
      docker.build_image "/vagrant/environment/controller", args: "-t local/controller"
      docker.build_image "/vagrant/environment/modelingWorker", args: "-t local/modelingWorker"
      docker.build_image "/vagrant/environment/contouringWorker", args: "-t local/contouringWorker"
    end

    config.vm.provider "virtualbox" do |virtualbox|
      virtualbox.memory = 2048
    end
  end

  # Perform one-time configuration of the Docker provider to specify the
  # location of the Vagrantfile for the host VM; comment out this section
  # to use the default boot2docker box
  config.vm.provider "docker" do |docker|
    docker.force_host_vm = true
    docker.vagrant_machine = "#{DOCKER_HOST_NAME}"
    docker.vagrant_vagrantfile = __FILE__
  end

  # Iterate through the entries in the YAML file
  containers.each do |container|
    config.vm.define container["name"] do |cntnr|
      # Disable synced folders for the Docker container
      # (prevents an NFS error on "vagrant up")
      # cntnr.vm.synced_folder ".", "/vagrant", disabled: true

      # Configure the Docker provider for Vagrant
      cntnr.vm.provider "docker" do |docker|
        # Specify the Docker image to use, pulled from the YAML file
        docker.image = container["image"]
        docker.build_dir = container["build_dir"]
        docker.build_args = container["build_args"] || []
        docker.create_args = container["create_args"] || []
        # docker.has_ssh = true

        # Specify port mappings, pulled from the YAML file
        # If omitted, no ports are mapped!
        docker.ports = container["ports"] || []

        # Mount volumes that are available on the Docker host
        docker.volumes = container["volumes"] || []
        # Default to keeping the container running; fetch preserves an
        # explicit false, unlike "|| true"
        docker.remains_running = container.fetch("remains_running", true)

        # Specify a friendly name for the Docker container, pulled from the YAML file
        docker.name = container["name"]
      end
    end
  end

  # Port for logging into the host VM
  config.ssh.port = 2222
end
I don't want to use Vagrant any more and want to build the Docker containers on my machine itself. How can I do that?
You can create a Dockerfile and use docker build to create an image:
http://docs.docker.com/engine/userguide/dockerimages/
Once you have an image, you can use docker run to create the container:
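For example, reusing one of the image names from the Vagrantfile above (the directory layout and container options are placeholders):

# Build an image from the Dockerfile in environment/base
$ docker build -t local/base environment/base

# Create and start a container from that image
$ docker run -d --name base local/base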
We started using Vagrant to set up our development environment.
Now we would like to use the same Vagrantfile in production/staging as well.
When I use the same Vagrantfile, which contains both a virtualbox provider and gce, I get the error:
An active machine was found with a different provider. Vagrant
currently allows each machine to be brought up with only a single
provider at a time. A future version will remove this limitation.
Until then, please destroy the existing machine to up with a new
provider.
Machine name: default
Active provider: virtualbox
Requested provider: google
Is there a way I can vagrant up both VirtualBox and GCE?
Vagrant.has_plugin?("nugrant")
Vagrant.require_version ">= 1.6.3"

Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox"
  config.vm.box = "yungsang/boot2docker"

  config.vm.provider :google do |google, override|
    override.vm.box = "gce"
    google.google_project_id = $GOOGLE_PROJECT_ID
    google.google_client_email = $GOOGLE_CLIENT_EMAIL
    google.google_key_location = $GOOGLE_KEY_LOCATION

    # Override provider defaults
    google.name = "name"
    google.image = "ubuntu-1404-trusty-v20141212"
    google.machine_type = "n1-standard-1"
    google.zone = "europe-west1-c"
    google.metadata = {'custom' => 'metadata', 'production' => 'app'}
    google.tags = ['vagrantbox', 'prod']

    override.ssh.username = $LOCAL_USER
    override.ssh.private_key_path = $LOCAL_SSH_KEY

    config.vm.define :prod do |prod|
      prod.vm.provision :shell, inline: 'echo I am executed in prod only!!'
    end
  end

  config.vm.synced_folder ".", "/vagrant"

  # Fix busybox/udhcpc issue
  config.vm.provision :shell do |s|
    s.inline = <<-EOT
      if ! grep -qs ^nameserver /etc/resolv.conf; then
        sudo /sbin/udhcpc
      fi
      cat /etc/resolv.conf
    EOT
  end

  # Adjust datetime after suspend and resume
  config.vm.provision :shell do |s|
    s.inline = <<-EOT
      sudo /usr/local/bin/ntpclient -s -h pool.ntp.org
      date
    EOT
  end

  # Log in to Docker Hub
  config.vm.provision :shell do |s|
    s.inline = '/usr/bin/docker "$@"'
    s.args = ["login", "-u", config.user.docker.username, "-p", config.user.docker.password, "-e", config.user.docker.email]
  end

  config.vm.provision :docker do |d|
    d.pull_images "nginx"
    d.pull_images "mongodb"
    d.pull_images "java"
  end

  ACTIVE_SPRING_PROFILE = "te"

  config.vm.provision :docker do |d|
    # provision docker stuff here
  end

  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.network "forwarded_port", guest: 443, host: 8443
end
Update:
I tried to solve the problem with a multi-VM setup, but now I am facing the problem that Vagrant provisions outside-in, so my GCE-specific scripts, like gce.vm.provision :shell, inline: 'curl -sSL https://get.docker.com/ubuntu/ | sudo sh', are executed at the end, not at the beginning.
Vagrant.configure("2") do |config|
  config.vm.define "dev", primary: true do |dev|
    dev.vm.provider "virtualbox"
    dev.vm.box = "yungsang/boot2docker"
    dev.vm.synced_folder ".", "/vagrant"
    dev.vm.provision :shell, inline: 'echo "Provision DEV"'

    # Fix busybox/udhcpc issue
    dev.vm.provision :shell do |s|
      s.inline = <<-EOT
        if ! grep -qs ^nameserver /etc/resolv.conf; then
          sudo /sbin/udhcpc
        fi
        cat /etc/resolv.conf
      EOT
    end

    # Adjust datetime after suspend and resume
    dev.vm.provision :shell do |s|
      s.inline = <<-EOT
        sudo /usr/local/bin/ntpclient -s -h pool.ntp.org
        date
      EOT
    end

    dev.vm.network "forwarded_port", guest: 80, host: 8080
    dev.vm.network "forwarded_port", guest: 443, host: 8443
  end

  config.vm.define "gce", autostart: false do |gce|
    gce.vm.provider :google do |google, override|
      override.vm.box = "gce"
      google.google_project_id = $GOOGLE_PROJECT_ID
      google.google_client_email = $GOOGLE_CLIENT_EMAIL
      google.google_key_location = $GOOGLE_KEY_LOCATION

      # Override provider defaults
      google.name = "z-rechnung-#{TARGET_ENV}"
      google.image = "ubuntu-1404-trusty-v20141212"
      google.machine_type = "n1-standard-1"
      google.zone = "europe-west1-c"
      google.metadata = {'environment' => "#{TARGET_ENV}"}
      google.tags = ['vagrantbox', "#{TARGET_ENV}"]

      override.ssh.username = $LOCAL_USER
      override.ssh.private_key_path = $LOCAL_SSH_KEY

      gce.vm.provision :shell, inline: 'sudo apt-get update -y'
      gce.vm.provision :shell, inline: 'sudo apt-get upgrade -y'
      gce.vm.provision :shell, inline: 'curl -sSL https://get.docker.com/ubuntu/ | sudo sh'
    end
  end

  # Log in to Docker Hub
  config.vm.provision :shell do |s|
    s.inline = '/usr/bin/docker "$@"'
    s.args = ["login", "-u", config.user.docker.username, "-p", config.user.docker.password, "-e", config.user.docker.email]
  end

  config.vm.provision :docker do |d|
    d.pull_images ....
  end

  config.vm.provision :docker do |d|
    d.run "image" ....
  end
end
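With this multi-machine setup, each machine is brought up separately, and the non-default provider has to be requested explicitly, along the lines of:

$ vagrant up dev
$ vagrant up gce --provider=google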
The right answer is to remember that your Vagrantfile is just a Ruby program, first and foremost, and that executing this program produces a data structure that the CLI sub-commands traverse.
So, create functions that add provisioners to a configuration, then call them on the "inside". For example:
def provisioner_one(config)
  config.vm.provision :shell, inline: 'echo hello'
end

Vagrant.configure('2') do |config|
  # stuff here
  config.vm.define 'dev' do |dev, override|
    # whatever else here
    provisioner_one(dev)
    # other stuff here
  end
  # more other stuff here
end
This will DWIM.
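Applied to the question's GCE machine, the pattern might look like this sketch (the installer line comes from the question; the function name and the docker provisioner contents are illustrative):

def install_docker(machine)
  # Added in definition order, so these run before provisioners added later
  machine.vm.provision :shell, inline: 'sudo apt-get update -y'
  machine.vm.provision :shell, inline: 'curl -sSL https://get.docker.com/ubuntu/ | sudo sh'
end

Vagrant.configure('2') do |config|
  config.vm.define 'gce', autostart: false do |gce|
    install_docker(gce)
    gce.vm.provision :docker do |d|
      d.pull_images 'nginx'
    end
  end
end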
I'm trying to use the Vagrant VMware Fusion plugin; however, whatever I do to set a static IP address on my private VMware network, the VM only ever gets a DHCP address.
I've added this to my Vagrant config file:
server1.vm.network "private_network", ip: "192.168.13.120"
However, it just gets ignored and a dynamic DHCP address is issued. I'm using the hashicorp/precise64 base image.
Here's a complete listing of the Vagrantfile I'm using to test.
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise64_vmware.box"

  # Turn off shared folders
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

  # Begin server1
  config.vm.define "server1" do |server1|
    server1.vm.hostname = "server1"

    server1.vm.provider "vmware_fusion" do |v|
      v.vmx["numvcpus"] = "1"
      v.vmx["memsize"] = "512"
    end

    server1.vm.provider "virtualbox" do |v|
      v.customize [ "modifyvm", :id, "--cpus", "1" ]
      v.customize [ "modifyvm", :id, "--memory", "512" ]
    end

    server1.vm.network "private_network", ip: "192.168.13.120"
  end
  # End server1

  ....................................
end
And this is how my VMware private interface is configured:
vmnet8: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 00:50:56:c0:00:08
        inet 192.168.13.1 netmask 0xffffff00 broadcast 192.168.13.255
Edit /Library/Preferences/VMware\ Fusion/networking and disable DHCP for the adapter that the IP belongs to.
For example:
...
answer VNET_2_DHCP no
answer VNET_2_HOSTONLY_NETMASK 255.255.255.0
answer VNET_2_HOSTONLY_SUBNET 172.17.8.0
answer VNET_2_VIRTUAL_ADAPTER yes
...
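In the asker's case the 192.168.13.0/24 network is on vmnet8, so the corresponding change would presumably be:

answer VNET_8_DHCP no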
And then restart VMware Fusion networking:
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --configure
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --status
My use case:
This helped in my situation, which is pretty much the same as yours.
server.vm.network :private_network, ip: "172.17.8.100"
With DHCP on I had something like:
inet 172.17.8.131/24 brd 172.17.8.255 scope global dynamic enp0s18
After turning DHCP off, the VM is assigned the defined private IP address:
inet 172.17.8.100/24 brd 172.17.8.255 scope global enp0s18
Let me know how it goes.