I’m trying to set up a multi-machine Vagrant project. According to the docs (https://www.vagrantup.com/docs/multi-machine/), provisioning is “outside in”, meaning any top-level provisioning scripts are executed before provisioning scripts in individual machine blocks.
The project contains a Laravel project and a Symfony project. My Vagrantfile looks like this:
require "json"
require "yaml"
confDir = $confDir ||= File.expand_path("vendor/laravel/homestead", File.dirname(__FILE__))
homesteadYamlPath = "web/Homestead.yaml"
homesteadJsonPath = "web/Homestead.json"
afterScriptPath = "web/after.sh"
aliasesPath = "web/aliases"
require File.expand_path(confDir + "/scripts/homestead.rb")
Vagrant.configure(2) do |config|
  config.vm.provision "shell", path: "init.sh"

  config.vm.define "web" do |web|
    web.ssh.forward_x11 = true

    if File.exists? aliasesPath then
      web.vm.provision "file", source: aliasesPath, destination: "~/.bash_aliases"
    end

    if File.exists? homesteadYamlPath then
      Homestead.configure(web, YAML::load(File.read(homesteadYamlPath)))
    elsif File.exists? homesteadJsonPath then
      Homestead.configure(web, JSON.parse(File.read(homesteadJsonPath)))
    end

    if File.exists? afterScriptPath then
      web.vm.provision "shell", path: afterScriptPath
    end
  end

  config.vm.define "api" do |api|
    api.vm.box = "ubuntu/trusty64"

    api.vm.provider :virtualbox do |vb|
      vb.customize ["modifyvm", :id, "--memory", "2048"]
    end

    api.vm.network "private_network", ip: "10.1.1.34"
    api.vm.network "forwarded_port", guest: 80, host: 8001
    api.vm.network "forwarded_port", guest: 3306, host: 33061
    api.vm.network "forwarded_port", guest: 9200, host: 9201

    api.vm.synced_folder "api", "/var/www/api"
    api.vm.provision "shell", path: "api/provision.sh"
  end
end
I have a block (web) for the Laravel project, where I’ve copied the contents of the Homestead-based Vagrantfile, and an api block that uses the “standard” Vagrant configuration.
To bootstrap the projects, I created a shell script (init.sh) that clones the Git repositories into git-ignored directories. Given that the documentation says configuration works outside-in, I’d expect that script to run first, followed by the machine-specific blocks, but this doesn’t seem to be happening. Instead, on vagrant up, I receive the following error:
There are errors in the configuration of this machine. Please fix the following errors and try again:
vm:
* A box must be specified.
It seems it’s still trying to provision the individual machines before running the shell script. I know the shell script isn’t being called because I added an echo statement to it and never see its output. Instead, the terminal just outputs the following:
Bringing machine 'web' up with 'virtualbox' provider...
Bringing machine 'api' up with 'virtualbox' provider...
So how can I get Vagrant to run my shell script first? I think it’s failing because the web block checks whether my web/Homestead.yaml file exists and, if so, uses the values in there for configuration (including the box name); but since my shell script hasn’t been run and hasn’t cloned the repository, that file doesn’t exist, so there is no box specified, which Vagrant complains about.
The issue is that you do not define a box for the web machine. You need to either define the box at the outer scope, like
config.vm.box = "ubuntu/trusty64"
if you plan to use the same box/OS for both machines, or define it in the web scope:
web.vm.box = "another box"
EDIT
Using the provision property will run the script in the VM, which is not what you want here, as you want the script to run on your host (and because it runs in the VM, it needs the VM to be booted first).
The Vagrantfile is just a plain Ruby script, so you could call your script directly from it with a Ruby call. A potential issue I could see is that you cannot guarantee when it executes, and especially that your init script will have finished before Vagrant does its things on the VM.
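For illustration, a sketch of calling the script directly from the Vagrantfile (the guard on Homestead.yaml is an assumption based on the question, to avoid re-cloning on every run):

# Runs on the host each time the Vagrantfile is evaluated, so guard it.
unless File.exist?("web/Homestead.yaml")
  system("bash", "init.sh") || abort("init.sh failed")
end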
A possibility is to use the vagrant-triggers plugin and execute your shell script before the up event:
config.trigger.before :up do
  info "Running init.sh before bringing the machine up..."
  run "init.sh"
end
Running it this way, Vagrant will wait for the script to finish before it runs its part of the up command.
You would need some check in your script to make sure it runs only when needed; otherwise it will run every time you start the machine (every vagrant up). For example, you could check for the presence of the YAML file, as in the sketch below.
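A guarded version of the trigger above (the Homestead.yaml path is taken from the question):

config.trigger.before :up do
  # Only bootstrap on the first run: skip once the cloned repository
  # (and with it Homestead.yaml) is already in place.
  unless File.exist?("web/Homestead.yaml")
    info "Cloning project repositories via init.sh..."
    run "init.sh"
  end
end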
Related
I'm stuck again. I need to provision a multi-machine environment: one VM for a Sinatra app and a second for its PostgreSQL DB.
So far, I've managed to get the Sinatra app up and running in the ubuntu/xenial64 box, but the provisioning "breaks" when it hits the configuration for the DB:
Vagrant.configure("2") do |config|
config.vm.define "app" do |app|
# Use ubuntu/xenial64 as the virtual machine
app.vm.box = "ubuntu/xenial64"
# Use a private network to connect the VM to the local machine via an IP with an alias
app.vm.network "private_network", ip: "192.168.10.100"
app.hostsupdater.aliases = ["development.local"]
# sync the 'app' directory in the local directory to '/app' on the VM
app.vm.synced_folder "app", "/app"
# Use the provisioning script in envirnonment to provision the VM for a Sinatra environment
app.vm.provision "shell", path: "environment/app/provision.sh"
app.vm.provision "shell", inline: set_env({ DATABASE_URL: "postgresql://myapp:dbpass#localhost:15432/myapp" })
end
config.vm.define "db" do |db|
db.vm.box = "ubuntu/trusty64"
db.vm.host_name = "postgresql"
db.vm.network "private_network", ip: "10.0.2.15"
# db.vm.forward_port 8000, 8000
db.hostsupdater.aliases = ["database.local"]
# db.vm.share_folder "home", "/home/vagrant", ".", :create => true
db.vm.provision "shell", path: "environment/db/provision.sh", privileged: false
end
end
As you've probably guessed, I'm running an external provisioning script for the PG setup. The odd thing is that I'm using the script recommended on Postgres' own site here.
In a separate location, I've git cloned that repo and followed the instructions and it works absolutely fine, creating a properly provisioned VM with PG installed.
However, I want to run a single vagrant up command that provisions both the app and db correctly and has them speak to each other.
I'm (quite clearly) new to provisioning and DevOps as a whole, so would really appreciate some help.
I've uploaded my hilariously broken code here for you kind souls to look over if you feel so inclined.
The Vagrant documentation on multi-machine setups is quite thin, and Google isn't being much help.
Thanks!
I'm trying to create aliases that I can use in Vagrant any time I run the VM. I've found several sources on the web about it, but can't get it working. I tried making a .bash_profile in my synced folder, but that didn't work. I noticed that if I run alias name="command" it works, but only for the current session. Anyone know how to do this? I'm using macOS. Thanks for your help!
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
unless Vagrant.has_plugin?("vagrant-vbguest")
  warn "\nWARNING: The vagrant-vbguest plugin should be installed or your shared folders might not mount properly!"
  warn "You can do this by running the command 'vagrant plugin install vagrant-vbguest'.\n\n"
end
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "pype_vm"
  config.vm.box_url = "https://.../pype_vm.json"
  config.vm.network "private_network", ip: ""
  config.vm.boot_timeout = 600

  config.vm.provider "virtualbox" do |v|
    # This forces VirtualBox to use the host's DNS resolver instead of
    # VirtualBox's
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    # This enables the PAE/NX option, which resolved at least one user's
    # issues with the VM hanging on boot
    v.customize ["modifyvm", :id, "--pae", "on"]
    # The RHEL VM was created with 2GB of memory to facilitate provisioning,
    # but this is causing issues with certain workstations. This reduces
    # the amount of memory allocated to the VM but should not impact development
    # performance. The number is in MB and can be increased if desired.
    v.memory = 1024
  end

  # Share an additional folder to the guest VM.
  config.vm.synced_folder File.dirname(__FILE__), "/pype"
end
The details depend on the specifics of the guest being run, but some notes:
Assuming the default user account is active for vagrant ssh, ensure that any dotfiles you wish to override are copied to /home/vagrant (see the sketch after these notes).
If overriding .bashrc, ensure that the remote shell is started with the interactive flag (if this is true, echo $- will include i).
If overriding .bash_profile, ensure that the remote shell is started as a login shell (if this is true, echo $- will include l).
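A minimal sketch of the first note, assuming your alias definitions live in a file named aliases next to the Vagrantfile (the filename is an assumption):

# Copy host-side alias definitions into the default user's home. On boxes
# whose stock ~/.bashrc sources ~/.bash_aliases (Ubuntu does), this is
# enough; on other guests, add a line sourcing it to ~/.bashrc yourself.
config.vm.provision "file", source: "aliases", destination: "/home/vagrant/.bash_aliases"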
I want to use Vagrant to set up developer machines. Since the machines will talk to our in-house servers, I thought it a good idea to set them up with the same usernames the developers have on their host machines. I'm having trouble figuring out how to handle this in the provisioning step.
My simple Vagrantfile looks like this:
VAGRANT_COMMAND = ARGV[0]
Vagrant.configure(2) do |config|
  user = ENV['USER']
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, :path => "bootstrap.sh", :args => user
  config.ssh.username = user
  config.ssh.password = "heimskringla"
  config.vm.synced_folder "~/src/", "/home/" + user + "/src"
  config.vm.provision "file", source: "~/.gitconfig", destination: "/home/" + user + "/.gitconfig"
  config.vm.provision "file", source: "~/.ssh", destination: "/home/" + user + "/.ssh"
end
bootstrap.sh takes $USER from the host machine as an input. If the user does not exist in the Vagrant machine, it is created and added to /etc/sudoers.d.
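A simplified sketch of that logic, written as an inline provisioner for illustration (the real bootstrap.sh also installs packages):

# $1 is the username passed in via :args above.
create_user = <<-'SCRIPT'
  if ! id "$1" >/dev/null 2>&1; then
    useradd --create-home --shell /bin/bash "$1"
    echo "$1 ALL=(ALL) NOPASSWD:ALL" > "/etc/sudoers.d/$1"
    chmod 0440 "/etc/sudoers.d/$1"
  fi
SCRIPT
config.vm.provision :shell, inline: create_user, args: [user]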
If I start with a clean slate and run vagrant up on this, it starts using $USER at once, and since that user does not exist yet, the setup fails.
As a test I've tried doing this:
if VAGRANT_COMMAND != "up"
config.ssh.username = user
config.ssh.password = "changeme"
end
Then the provisioning in bootstrap.sh works. The user is created, and my packages are installed. When it gets to the file and synced folder provisioning, however, it fails because of permission issues.
Failed to upload a file to the guest VM via SCP due to a permissions
error. This is normally because the SSH user doesn't have permission
to write to the destination location. Alternately, the user running
Vagrant on the host machine may not have permission to read the file.
I've tried doing "su $USER" in the bottom of bootstrap.sh, but that is apparently not the way it works.
Anyone know how I can fulfill my needs?
EDIT: possible solution
I decided to stop fighting Vagrant and just use the vagrant user. Now I have the following Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, :path => "vagrant/bootstrap.sh"
  config.vm.synced_folder "~/src/", "/home/vagrant/src"
  config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig"
  config.vm.provision "file", source: "~/.ssh", destination: ".ssh-from-host-machine"
  config.vm.provision "file", source: "vagrant/.bash_aliases", destination: ".bash_aliases"
  config.vm.provision :shell, privileged: false, :path => "vagrant/bootstrap_late.sh"
end
bootstrap.sh installs the required packages, and bootstrap_late.sh does the necessary setup for the vagrant user. This includes adding the ssh config that makes it use $USER when talking to the servers.
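The ssh part of bootstrap_late.sh does roughly this, shown here as an inline provisioner sketch (the host pattern is a placeholder):

# Append an ssh config stanza so connections to in-house servers use the
# host-side username; ENV['USER'] is interpolated on the host at parse time.
ssh_setup = <<-SCRIPT
  mkdir -p ~/.ssh
  printf 'Host *.internal\\n    User %s\\n' '#{ENV['USER']}' >> ~/.ssh/config
  chmod 600 ~/.ssh/config
SCRIPT
config.vm.provision :shell, privileged: false, inline: ssh_setup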
Host: Windows 7
Guest: Windows 8
I have a simple Vagrantfile that runs a PowerShell script to provision the guest. When I packaged the box, I saw that the file was added, but when I run vagrant up I get the following shell provisioner error:
`path` for shell provisioner does not exist on the host system: D:/VirtualMachines/test/provision.ps1
I verified that provision.ps1 exists in the vagrant box location under the include directory.
So why isn't provision.ps1 getting copied to the location it needs to be in when I run vagrant up?
Vagrantfile:
VAGRANTFILE_API_VERSION = "2"
modified_name = ENV["COMPUTERNAME"][0..12]
comp_name = modified_name + "TA"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "base"
  config.vm.hostname = comp_name
  config.vm.communicator = "winrm"
  config.vm.network "forwarded_port", host: 3389, guest: 3389, auto_correct: true
  config.vm.provision "shell", path: "provision.ps1"
end
The answer at How to package files with a Vagrant box? helped me.
Here is how I got it to work:
config.vm.provision "shell" do |s|
p = File.expand_path("../", __FILE__)
s.path = p + "\\provision.ps1"
end
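An equivalent sketch that resolves the script relative to the Vagrantfile's own directory (forward slashes also work on Windows hosts):

config.vm.provision "shell",
  path: File.expand_path("provision.ps1", File.dirname(__FILE__))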
I had the same problem with Vagrant 1.8.1 on a Windows 8 host. After reading https://github.com/fideloper/Vaprobash/issues/30 I just renamed bootstrap.sh to bootstrp.sh, tried again, and it worked. After renaming bootstrp.sh back to bootstrap.sh it still worked.
I suppose in my case there was some strange invisible character in the filename.
How are people handling simple automation (with puppet) for dev / prod environments with vagrant (ideally from the same vagrantfile)?
Use case I'm trying to solve
I would love to spin up the production machine with vagrant if it isn't created.
I would love to reload nginx or apache confs on production with vagrant if they were tweaked in the puppet files for my dev environment.
The Problem
When you call vagrant up with a provider like AWS or Digital Ocean, it becomes the active provider and you can't switch. You get this error:
An active machine was found with a different provider. Vagrant
currently allows each machine to be brought up with only a single
provider at a time. A future version will remove this limitation.
Until then, please destroy the existing machine to up with a new
provider.
It seems the answer is to destroy, but I just need to switch. I don't want to destroy.
I would love to be able to say
vagrant up prod
or
vagrant reload prod
and then a simple vagrant up would fall back to the default machine.
This syntax is similar to how multiple machines work, but I don't want to spin up a dev and production environment when I just call vagrant up (which is the default behavior).
Should I be looking at packer as part of the workflow? I watched Mitchell's multi-provider talk from PuppetConf 2013 (http://puppetlabs.com/presentations/multi-provider-vagrant-aws-vmware-and-more), but I'm still not seeing a solution for my problem.
UPDATE 9/27/13
In case anybody else is fighting this idea, this article cleared up a lot of questions I had.
http://pretengineer.com/post/packer-vagrant-infra
As a workaround, you should use config.vm.define (as suggested here) in order to support multiple providers.
Please find the following configuration, posted by @kzap, as an example:
Vagrant.configure("2") do |config|
# Store the current version of Vagrant for use in conditionals when dealing
# with possible backward compatible issues.
vagrant_version = Vagrant::VERSION.sub(/^v/, '')
# Configuration options for the VirtualBox provider.
def configure_vbox_provider(config, name, ip, memory = 2048, cpus = 1)
config.vm.provider :virtualbox do |v, override|
# override box url
override.vm.box = "ubuntu/trusty64"
# configure host-only network
override.vm.hostname = "#{name}.dev"
override.vm.network :private_network, id: "vvv_primary", ip: ip
v.customize ["modifyvm", :id,
"--memory", memory,
"--cpus", cpus,
"--name", name,
"--natdnshostresolver1", "on",
"--natdnsproxy1", "on"
]
end
end
default_provider = "virtualbox"
supported_providers = %w(virtualbox rackspace aws managed)
active_provider = ENV['VAGRANT_ACTIVE_PROVIDER'] # it'd be better to get this from the CLI --provider option
supported_providers.each do |provider|
next unless (active_provider.nil? && provider == default_provider) || active_provider == provider
#
# VM per provider
#
config.vm.define :"sample-#{provider}" do | sample_web_config |
case provider
when "virtualbox"
configure_vbox_provider(sample_web_config, "examine-web", "192.168.50.1")
when "aws"
configure_aws_provider(sample_web_config)
when "managed"
configure_managed_provider(sample_web_config, "1.2.3.4")
when "rackspace"
configure_rackspace_provider(sample_web_config)
end
end
end
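With this layout, selecting a provider at invocation time would look roughly like this (assuming the remaining configure_*_provider helpers are defined elsewhere in the file):

vagrant up sample-virtualbox
VAGRANT_ACTIVE_PROVIDER=aws vagrant up sample-aws --provider=aws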
Or the following example, posted as a gist by @maxlinc:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "dummy"

  config.vm.provider :rackspace do |rs|
    rs.username = ENV['RAX_USERNAME']
    rs.api_key = ENV['RAX_API_KEY']
    rs.rackspace_region = :ord
  end

  supported_providers = %w(virtualbox rackspace)
  active_provider = ENV['VAGRANT_ACTIVE_PROVIDER'] # it'd be better to get this from the CLI --provider option

  supported_providers.each do |provider|
    next unless active_provider.nil? || active_provider == provider

    config.vm.define "exact_name_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = '1 GB Performance'
        rs.image = 'Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)'
      end
    end

    config.vm.define "regex_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = /1\s+GB\s+Performance/
        rs.image = /Ubuntu.*Trusty Tahr.*(PVHVM)/
      end
    end

    config.vm.define "id_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = 'performance1-1'
        rs.image = 'bb02b1a3-bc77-4d17-ab5b-421d89850fca'
      end
    end

    config.vm.define "unlisted_#{provider}" do |box|
      box.vm.provider :rackspace do |rs|
        rs.flavor = 'performance1-1'
        rs.image = '547a46bd-d913-4bf7-ac35-2f24f25f1b7a'
      end
    end
  end
end
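Usage follows the same pattern, e.g. (with RAX_USERNAME and RAX_API_KEY exported, as the provider block above expects):

VAGRANT_ACTIVE_PROVIDER=rackspace vagrant up regex_rackspace --provider=rackspace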
Not an ideal solution, but what about using git branches? My thinking is that it could be conceptually similar to using heroku, where you might have a master, staging, and production versions (since they're usually different remotes).
In this case you start off the prod branch with a small edit to the Vagrantfile to name the VM a little differently. Then you should be able to merge all changes from dev into the prod branch as they occur. So your workflow would look like:
$ git checkout prod
$ vagrant up
$ git checkout master
... make changes to puppet ...
$ git checkout prod
$ git merge master
$ vagrant reload
$ git checkout master
You could script and alias these so you end up with
$ start_production
$ reload_production
Here is a simple way of dynamically changing the 'default' machine name depending on the specified --provider from the command line, so they won't conflict between the different providers:
require 'getoptlong'

opts = GetoptLong.new(
  [ '--provider', GetoptLong::OPTIONAL_ARGUMENT ],
  [ '--vm-name', GetoptLong::OPTIONAL_ARGUMENT ]
)

provider = ENV['PROVIDER'] || 'virtualbox'
vm_name  = ENV['VM_NAME'] || 'default'

opts.each do |opt, arg|
  case opt
  when '--provider'
    provider = arg
  when '--vm-name'
    vm_name = arg
  end
end
Vagrant.configure(2) do |config|
  # HERE you are dynamically changing the machine name to prevent conflict.
  config.vm.define "mt-#{provider}-#{vm_name}"

  # Below sections are just examples, not relevant.
  config.vm.provider "virtualbox" do |vm|
    vm.name = "test.local"
    vm.network "private_network", ip: "192.168.22.22"
    vm.customize ['modifyvm', :id, '--natdnshostresolver1', 'on']
    config.vm.box = "ubuntu/wily64"
  end

  config.vm.provider :aws do |aws, override|
    aws.aws_profile = "testing"
    aws.instance_type = "m3.medium"
    aws.ami = "ami-7747d01e"
    config.vm.box = "testing"
  end
end
Example usage:
VM_NAME=dev PROVIDER=virtualbox vagrant up --provider=virtualbox
VM_NAME=uat PROVIDER=aws vagrant up --provider=aws
VM_NAME=test PROVIDER=aws vagrant up --provider=aws
VM_NAME=prod PROVIDER=aws vagrant up --provider=aws
VM_NAME=uat PROVIDER=aws vagrant destroy -f
VM_NAME=test PROVIDER=aws vagrant status
See also: Multiple provisioners in a single vagrant file?
What I came up with for this scenario is to manage two distinct .vagrant folders.
Note: most of the other answers deal with setting up multi-provider on the assumption that you will run dev and prod on different providers. In most cases that might be true, but you can definitely have cases where dev and prod use the same provider: say you're using AWS and want both dev and prod as EC2 instances, then it is the same provider.
Say you want to manage dev and prod instances, potentially using different providers (but they could also very well be on the same provider). You'll do the following:
Set up the dev instance with a normal vagrant up --provider <dev_provider>.
This will create a dev VM that you can manage.
Back up the .vagrant folder created in your project directory and rename it to something like .vagrant.dev.
Set up the prod instance with your provider of choice: vagrant up --provider <prod_provider>. This now creates your prod VM.
Back up the newly created .vagrant folder in your project directory and rename it to something like .vagrant.prod.
Now, depending on whether you want to work on dev or prod, rename the .vagrant.dev or .vagrant.prod directory to .vagrant, and vagrant will operate on the right VM.
I did not come up with a script for this, since most of the time I work with dev and only rarely need to switch to the other provider, but I don't think it would be too hard to read a parameter from the CLI and make the renaming more dynamic.
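For what it's worth, a small host-side helper could automate the switch via a symlink instead of renaming (a sketch; it assumes you have already moved the state directories to .vagrant.dev and .vagrant.prod):

#!/usr/bin/env ruby
# switch_env.rb -- point .vagrant at a per-environment state directory.
# Usage: ruby switch_env.rb dev   (or: ruby switch_env.rb prod)
env = ARGV[0] or abort "usage: switch_env.rb <dev|prod>"
store = ".vagrant.#{env}"
abort "#{store} does not exist" unless File.directory?(store)

if File.symlink?(".vagrant")
  File.unlink(".vagrant")   # drop the previous environment's link
elsif File.directory?(".vagrant")
  abort ".vagrant is a real directory; move it aside first (e.g. to .vagrant.dev)"
end

File.symlink(store, ".vagrant")
puts "Vagrant state now points at #{store}"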