I'm trying to create aliases that are available in the Vagrant VM every time I run it. I've found several sources on the web about this, but I can't get it working. I tried putting a .bash_profile in my synced folder, but that didn't work. I noticed that if I run alias name="command" inside the VM it works, but only for the current session. Does anyone know how to do this? I'm using macOS. Thanks for your help!
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
unless Vagrant.has_plugin?("vagrant-vbguest")
  warn "\nWARNING: The vagrant-vbguest plugin should be installed or your shared folders might not mount properly!"
  warn "You can do this by running the command 'vagrant plugin install vagrant-vbguest'.\n\n"
end
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "pype_vm"
  config.vm.box_url = "https://.../pype_vm.json"
  config.vm.network "private_network", ip: ""
  config.vm.boot_timeout = 600
  config.vm.provider "virtualbox" do |v|
    # This forces VirtualBox to use the host's DNS resolver instead of
    # VirtualBox's
    v.customize ["modifyvm", :id, "", "on"]
    # This enables the PAE/NX option, which resolved at least one user's
    # issues with the VM hanging on boot
    v.customize ["modifyvm", :id, "--pae", "on"]
    # The RHEL VM was created with 2GB of memory to facilitate provisioning,
    # but this is causing issues with certain workstations. This reduces
    # the amount of memory allocated to the VM but should not impact development
    # performance. The number is in MB and can be increased if desired.
    v.memory = 1024
  end
  # Share an additional folder to the guest VM.
  config.vm.synced_folder File.dirname(__FILE__), "/pype"
end
The details depend on the specifics of the guest being run, but here are some notes:
Assuming the default user account is active for vagrant ssh, ensure that any dotfiles you wish to override are copied to /home/vagrant.
If overriding .bashrc, ensure that the remote shell is started with the interactive flag (if this is true, echo $- will include i).
If overriding .bash_profile, ensure that the remote shell is started as a login shell (in bash you can verify this with shopt -q login_shell && echo login).
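For the alias use case specifically, one low-friction option is to have a shell provisioner append the alias to /home/vagrant/.bashrc so it is picked up by every vagrant ssh session. A minimal sketch, assuming the /pype synced-folder path from the Vagrantfile above and an alias name chosen purely for illustration; it goes inside the Vagrant.configure block:
config.vm.provision "shell", privileged: false, inline: <<-SHELL
  # Append the alias once; .bashrc is read by the interactive shells that vagrant ssh starts
  grep -qxF 'alias pype="cd /pype"' ~/.bashrc || echo 'alias pype="cd /pype"' >> ~/.bashrc
SHELL
Because provisioning runs on the first vagrant up (and again on vagrant provision or vagrant up --provision), the alias is set up once per VM rather than once per session.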
Related
I have a small problem with setting up my working environment after upgrading to Ubuntu 18.04. My Vagrant version is 2.0.0.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "debian/jessie64"
# config.vm.box = "ubuntu/xenial64"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
config.vm.network "forwarded_port", guest: 80, host: 8888
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
# config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
config.vm.synced_folder ".", "/vagrant-nfs", type: :nfs, mount_options: ['rw', 'vers=3', 'tcp', 'fsc' ,'actimeo=2']
config.bindfs.bind_folder "/vagrant-nfs", "/srv/bspotted.net/app", :owner => "vagrant", :group => "vagrant"
config.vm.network :private_network, ip: "192.168.33.15"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--name", "bspotted", "--memory", "2048"]
end
# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
# config.vm.provision "shell", inline: <<-SHELL
# apt-get update
# apt-get install -y apache2
# SHELL
# Ansible provisioner.
config.vm.provision "ansible_local" do |ansible|
ansible.version = "2.3.2.0"
ansible.install_mode = "pip"
ansible.provisioning_path = "/srv/bspotted.net/app"
ansible.playbook = "orchestration/vagrant.yml"
ansible.verbose = "vvv"
end
end
For the record, I have an older Ubuntu 16.10 on my other laptop and everything works properly there; the output of the vagrant up --debug &> vagrant.log command is here.
Everything falls apart when the setup reaches
==> default: Preparing to edit /etc/exports. Administrator privileges will be required...
==> default: Mounting NFS shared folders...
After approximately 5 minutes, it shows
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp,rw,vers=3,tcp,fsc,actimeo=2 192.168.33.1:/home/copser/Documents/Bspotted /vagrant-nfs
Stdout from the command:
Stderr from the command:
mount.nfs: Connection timed out
The problem was solved after I changed the NFS version from
`config.vm.synced_folder ".", "/vagrant-nfs", type: :nfs, mount_options: ['rw', 'vers=3', 'tcp', 'fsc' ,'actimeo=2']`
to
config.vm.synced_folder ".", "/vagrant-nfs", type: :nfs, mount_options: ['rw', 'vers=4', 'tcp', 'fsc' ,'actimeo=2']
Just to make it clear, I changed vers=3 to vers=4, and now everything is working fine. You can check my correspondence with the Vagrant contributors here.
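For anyone hitting this elsewhere, it can be worth checking which NFS protocol versions the host's NFS server actually offers before guessing at mount options. On a Linux host running the usual nfs-kernel-server, a quick diagnostic (not part of the fix above) looks like this:
rpcinfo -p localhost | grep nfs    # NFS versions registered with the portmapper
cat /proc/fs/nfsd/versions         # versions the kernel NFS server serves; a leading "-" means disabled
If version 3 shows up as disabled there, that would explain why only the vers=4 mount succeeds.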
I want to mount a local directory relative to my Vagrantfile inside the VM. However, when I do that,
vagrant ssh
no longer works: the private key authentication fails. I am not sure why SSH suddenly fails.
Any help? (If it matters: I am trying to mount the directory my Java artifacts compile to.)
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # Use Ubuntu 14.04 Trusty Tahr 64-bit as our operating system
  config.vm.box = "ubuntu/trusty64"
  # Configure the virtual machine to use 1 GB of RAM and a single CPU
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "1024"]
    vb.cpus = 1
  end
  config.vm.network :forwarded_port, guest: 8080, host: 8080
  config.vm.synced_folder "target/", "/home/vagrant"
end
Since you are syncing directly onto /home/vagrant in the VM, you need to make sure your target directory contains a .ssh folder with the authorized_keys file.
When you vagrant up the VM, Vagrant SSHes in and creates this directory for you, but when it gets to the synced_folder step it replaces all the content of /home/vagrant with your host's target/ directory, losing what it created before.
If you really, really want to sync onto /home/vagrant, what you could do is run once without the sync, copy all the files that were created (.ssh/, .bash* ...) into your target directory, and then rerun with the sync on /home/vagrant. (Note: I did not try that, and honestly would not recommend syncing onto /home/vagrant directly; if you install other software through provisioning or otherwise, you might run into issues later.)
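If the goal is just to get the compiled artifacts into the VM, a less intrusive option is to sync into a subdirectory of the home directory rather than the home directory itself, so .ssh/ and the other generated dotfiles are never touched. A sketch (the guest path /home/vagrant/target is only a suggestion):
config.vm.synced_folder "target/", "/home/vagrant/target"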
I have a VM that I'm pulling down with Vagrant, using a VERY basic Vagrantfile:
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.
  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "vagrant-rhel-devel"
  config.vm.network :private_network, ip: "192.168.33.101"
  config.vm.synced_folder ".", "/vagrant"
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    # vb.gui = true
    # Enable 3d Rendering
    vb.customize ["modifyvm", :id, "--accelerate3d", "on"]
    # Sets 32megs video ram, higher number here = more POWERS
    vb.customize ["modifyvm", :id, "--vram", "32"]
    # # Customize the amount of memory on the VM:
    # vb.memory = "1024"
    vb.name = "RedHat 3D"
  end
end
My issue is that the vagrant user gets a home directory of /localhome/vagrant and I'd like him to have /home/vagrant as the home directory.
Is this something I'm able to change with provisioning or is it something that is set in the VM itself? I'm rather unskilled at the provisioning step at the moment so an example would be great.
The HOME directory was set when the user was created, so this was done before the VM was packaged as a Vagrant box.
From there, you should be able to change that with
usermod -m -d /path/to/new/home/dir userNameHere
so in your case it would be
usermod -m -d /home/vagrant vagrant
Check whether there are existing files under the /localhome directory that need to be copied into the new location (the bash preference files, for example).
If you do not plan to destroy and create a bunch of new VMs from this box, there is no need to add this as provisioning; if you do plan to use this box to create lots of new VMs, then it could make sense, and you would just need to add an inline shell provisioner (see the sketch below).
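If you do go the provisioning route, an inline shell provisioner along these lines is the idea. This is only a sketch; as a caveat, usermod refuses to move the home directory of a user that still has running processes, which the vagrant user usually does during provisioning, so running the command once by hand may end up simpler:
config.vm.provision "shell", inline: <<-SHELL
  # Only act if the home directory is still the packaged /localhome location
  if [ "$(getent passwd vagrant | cut -d: -f6)" = "/localhome/vagrant" ]; then
    usermod -m -d /home/vagrant vagrant
  fi
SHELL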
I have updated my Vagrantfile to this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.define :ceph do |ceph|
    ceph.vm.box = "big-ceph"
    ceph.vm.network :private_network, ip: "192.168.251.100"
    ceph.vm.hostname = "ceph"
  end
  config.vm.define :client do |client|
    client.vm.box = "hashicorp/precise64"
    client.vm.hostname = "ceph-client"
    client.vm.provision :shell, path: "setup/ceph.sh"
    client.vm.network :private_network, ip: "192.168.251.101"
  end
end
but I am still getting this warning message whenever I vagrant up my virtual machines.
calvin % vagrant reload ceph && vagrant reload client
There were warnings and/or errors while loading your Vagrantfile
for the machine 'ceph'.
Your Vagrantfile was written for an earlier version of Vagrant,
and while Vagrant does the best it can to remain backwards
compatible, there are some cases where things have changed
significantly enough to warrant a message. These messages are
shown below.
Warnings:
* `config.vm.customize` calls are VirtualBox-specific. If you're
using any other provider, you'll have to use config.vm.provider in a
v2 configuration block.
Any idea why?
OK, I figured it out. The 3rd-party Ceph box I am using ships with its own Vagrantfile, which Vagrant loads and merges with mine, and that included box Vagrantfile (located in ~/.vagrant.d/boxes/big-ceph) still contains
config.vm.customize ["modifyvm", :id, "--nictype1", "virtio"]
Once I commented that out, I no longer saw the annoying warning.
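If the virtio NIC setting is actually wanted, rather than commenting it out you could carry it over into your own Vagrantfile in the v2 style the warning asks for, i.e. wrapped in a provider block:
config.vm.provider "virtualbox" do |vb|
  vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
end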
How are people handling simple automation (with puppet) for dev / prod environments with vagrant (ideally from the same vagrantfile)?
Use case I'm trying to solve
I would love to spin up the production machine with Vagrant if it isn't created.
I would love to reload nginx or apache confs on production with vagrant if they were tweaked in the puppet files for my dev environment.
The Problem
When you call vagrant up with a provider like AWS or Digital Ocean, it becomes the active provider and you can't switch. You get this error:
An active machine was found with a different provider. Vagrant
currently allows each machine to be brought up with only a single
provider at a time. A future version will remove this limitation.
Until then, please destroy the existing machine to up with a new
provider.
It seems the answer is to destroy, but I just need to switch. I don't want to destroy.
I would love to be able to say
vagrant up prod
or
vagrant reload prod
and then a simple vagrant up would fall back to the default machine.
This syntax is similar to how multiple machines work, but I don't want to spin up a dev and production environment when I just call vagrant up (which is the default behavior).
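For reference, the multi-machine shape I mean is roughly this (the box names are just placeholders). I know config.vm.define accepts autostart: false to keep a machine out of a plain vagrant up, but that still doesn't let me switch providers for the same machine:
Vagrant.configure("2") do |config|
  config.vm.define "dev", primary: true do |dev|
    dev.vm.box = "ubuntu/trusty64"
  end
  # autostart: false keeps prod from booting on a plain vagrant up
  config.vm.define "prod", autostart: false do |prod|
    prod.vm.box = "ubuntu/trusty64"
  end
end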
Should I be looking at Packer as part of the workflow? I watched the whole of Mitchell's PuppetConf 2013 talk on multi-provider Vagrant: http://puppetlabs.com/presentations/multi-provider-vagrant-aws-vmware-and-more
I'm still not seeing a solution for my problem.
UPDATE 9/27/13
In case anybody else is fighting this idea, this article cleared up a lot of questions I had.
http://pretengineer.com/post/packer-vagrant-infra
As a workaround, you should use config.vm.define (as suggested here) in order to support multiple providers.
Please find the following configuration, posted by @kzap, as an example:
Vagrant.configure("2") do |config|
  # Store the current version of Vagrant for use in conditionals when dealing
  # with possible backward compatible issues.
  vagrant_version = Vagrant::VERSION.sub(/^v/, '')

  # Configuration options for the VirtualBox provider.
  def configure_vbox_provider(config, name, ip, memory = 2048, cpus = 1)
    config.vm.provider :virtualbox do |v, override|
      # override box url
      override.vm.box = "ubuntu/trusty64"
      # configure host-only network
      override.vm.hostname = "#{name}.dev"
      override.vm.network :private_network, id: "vvv_primary", ip: ip
      v.customize ["modifyvm", :id,
        "--memory", memory,
        "--cpus", cpus,
        "--name", name,
        "--natdnshostresolver1", "on",
        "--natdnsproxy1", "on"
      ]
    end
  end

  default_provider = "virtualbox"
  supported_providers = %w(virtualbox rackspace aws managed)
  active_provider = ENV['VAGRANT_ACTIVE_PROVIDER'] # it'd be better to get this from the CLI --provider option

  supported_providers.each do |provider|
    next unless (active_provider.nil? && provider == default_provider) || active_provider == provider
    #
    # VM per provider
    #
    config.vm.define :"sample-#{provider}" do |sample_web_config|
      case provider
      when "virtualbox"
        configure_vbox_provider(sample_web_config, "examine-web", "192.168.50.1")
      when "aws"
        configure_aws_provider(sample_web_config)
      when "managed"
        configure_managed_provider(sample_web_config, "1.2.3.4")
      when "rackspace"
        configure_rackspace_provider(sample_web_config)
      end
    end
  end
end
Or the following example posted as a gist by @maxlinc:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "dummy"

  config.vm.provider :rackspace do |rs|
    rs.username = ENV['RAX_USERNAME']
    rs.api_key = ENV['RAX_API_KEY']
    rs.rackspace_region = :ord
  end

  supported_providers = %w(virtualbox rackspace)
  active_provider = ENV['VAGRANT_ACTIVE_PROVIDER'] # it'd be better to get this from the CLI --provider option

  supported_providers.each do |provider|
    next unless active_provider.nil? || active_provider == provider

    config.vm.define "exact_name_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = '1 GB Performance'
        rs.image = 'Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)'
      end
    end

    config.vm.define "regex_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = /1\s+GB\s+Performance/
        rs.image = /Ubuntu.*Trusty Tahr.*(PVHVM)/
      end
    end

    config.vm.define "id_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = 'performance1-1'
        rs.image = 'bb02b1a3-bc77-4d17-ab5b-421d89850fca'
      end
    end

    config.vm.define "unlisted_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = 'performance1-1'
        rs.image = '547a46bd-d913-4bf7-ac35-2f24f25f1b7a'
      end
    end
  end
end
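With either of the two examples above, the provider is selected through the VAGRANT_ACTIVE_PROVIDER environment variable, so usage would look roughly like this (the machine names follow the patterns defined in the respective Vagrantfiles):
VAGRANT_ACTIVE_PROVIDER=virtualbox vagrant up sample-virtualbox --provider=virtualbox
VAGRANT_ACTIVE_PROVIDER=rackspace vagrant up exact_name_rackspace --provider=rackspace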
Not an ideal solution, but what about using git branches? My thinking is that it could be conceptually similar to using heroku, where you might have a master, staging, and production versions (since they're usually different remotes).
In this case you start off the prod branch with a small edit to the Vagrantfile to name the VM a little differently. Then you should be able to merge changes from dev into the prod branch as they occur. So your workflow would look like:
$ git checkout prod
$ vagrant up
$ git checkout master
... make changes to puppet ...
$ git checkout prod
$ git merge master
$ vagrant reload
$ git checkout master
You could script and alias these so you end up with
$ start_production
$ reload_production
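For example, a couple of shell functions in your ~/.bashrc (the names are only suggestions) could wrap those steps:
# Bring up the prod-branch VM, then switch back to master
start_production() {
  git checkout prod && vagrant up && git checkout master
}
# Merge the latest dev changes into prod and reload its VM
reload_production() {
  git checkout prod && git merge master && vagrant reload && git checkout master
}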
Here is a simple way of dynamically changing the 'default' machine name depending on the specified --provider from the command line, so they won't conflict between the different providers:
require 'getoptlong'

opts = GetoptLong.new(
  [ '--provider', GetoptLong::OPTIONAL_ARGUMENT ],
  [ '--vm-name', GetoptLong::OPTIONAL_ARGUMENT ]
)

provider = ENV['PROVIDER'] || 'virtualbox'
vm_name = ENV['VM_NAME'] || 'default'

opts.each do |opt, arg|
  case opt
  when '--provider'
    provider = arg
  when '--vm-name'
    vm_name = arg
  end
end

Vagrant.configure(2) do |config|
  # HERE you are dynamically changing the machine name to prevent conflict.
  config.vm.define "mt-#{provider}-#{vm_name}"

  # Below sections are just examples, not relevant.
  config.vm.provider "virtualbox" do |vm|
    vm.name = "test.local"
    config.vm.network "private_network", ip: "192.168.22.22"
    vm.customize ['modifyvm', :id, '--natdnshostresolver1', 'on']
    config.vm.box = "ubuntu/wily64"
  end
  config.vm.provider :aws do |aws, override|
    aws.aws_profile = "testing"
    aws.instance_type = "m3.medium"
    aws.ami = "ami-7747d01e"
    config.vm.box = "testing"
  end
end
Example usage:
VM_NAME=dev PROVIDER=virtualbox vagrant up --provider=virtualbox
VM_NAME=uat PROVIDER=aws vagrant up --provider=aws
VM_NAME=test PROVIDER=aws vagrant up --provider=aws
VM_NAME=prod PROVIDER=aws vagrant up --provider=aws
VM_NAME=uat PROVIDER=aws vagrant destroy -f
VM_NAME=test PROVIDER=aws vagrant status
See also: Multiple provisioners in a single vagrant file?
What I came up with to handle this scenario is to manage two distinct .vagrant folders.
Note: most of the other answers deal with setting up multi-provider configurations on the assumption that you will run dev and prod on different providers. In most cases that might be true, but you can definitely have dev and prod on the same provider: say you're using AWS and you want both dev and prod to be EC2 instances, then it's the same provider.
Say you want to manage dev and prod instances, potentially using different providers (but they could just as well be on the same one). You would do the following:
Set up the dev instance with a normal vagrant up --provider <dev_provider>.
This will create a dev VM that you can manage.
Back up the .vagrant folder created in your project directory and rename it to something like .vagrant.dev.
Set up the prod instance with your provider of choice: vagrant up --provider <prod_provider>. This now creates your prod VM.
Back up the newly created .vagrant folder in your project directory and rename it to something like .vagrant.prod.
Now, depending on whether you want to work on dev or prod, rename the .vagrant.dev or .vagrant.prod directory back to .vagrant, and Vagrant will operate on the right VM.
I did not come up with a script, since most of the time I work with dev and only rarely need to switch to the other provider, but I don't think it would be too hard to read the parameter from the CLI and make the renaming more dynamic; a sketch of that follows below.
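A rough sketch of what such a switcher could look like, assuming the .vagrant.dev / .vagrant.prod naming from the steps above and that no vagrant command is running while you switch:
#!/bin/sh
# Usage: ./switch-env.sh dev|prod
set -e
target="$1"
if [ ! -d ".vagrant.$target" ]; then
  echo "No .vagrant.$target directory here; nothing to switch to." >&2
  exit 1
fi
# Park the currently active environment, if any, under the name recorded when it was activated
if [ -d .vagrant ]; then
  mv .vagrant ".vagrant.$(cat .vagrant-current 2>/dev/null || echo unknown)"
fi
mv ".vagrant.$target" .vagrant
echo "$target" > .vagrant-current
echo "Active Vagrant environment: $target"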