Changing user during vagrant provisioning - ruby

I want to use Vagrant to set up developer machines. Since the machines will talk to our in-house servers, I thought it a good idea to set them up with the same usernames the developers have on their host machines. I'm having trouble figuring out how to handle this in the provisioning step.
My simple Vagrantfile looks like this:
VAGRANT_COMMAND = ARGV[0]

Vagrant.configure(2) do |config|
  user = ENV['USER']

  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, :path => "bootstrap.sh", :args => user
  config.ssh.username = user
  config.ssh.password = "heimskringla"
  config.vm.synced_folder "~/src/", "/home/" + user + "/src"
  config.vm.provision "file", source: "~/.gitconfig", destination: "/home/" + user + "/.gitconfig"
  config.vm.provision "file", source: "~/.ssh", destination: "/home/" + user + "/.ssh"
end
bootstrap.sh takes $USER from the host machine as an argument. If that user does not exist in the Vagrant machine, it is created and added to /etc/sudoers.d.
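For reference, a minimal sketch of what such a bootstrap.sh could look like (the real script isn't shown in the question, so the details below are assumptions):

#!/usr/bin/env bash
set -e

NEWUSER="$1"

# Create the user if it doesn't exist yet
if ! id "$NEWUSER" >/dev/null 2>&1; then
    useradd --create-home --shell /bin/bash "$NEWUSER"
fi

# Set the password Vagrant will use for SSH (matches config.ssh.password)
echo "$NEWUSER:heimskringla" | chpasswd

# Give the user passwordless sudo via a drop-in file in /etc/sudoers.d
echo "$NEWUSER ALL=(ALL) NOPASSWD:ALL" > "/etc/sudoers.d/$NEWUSER"
chmod 0440 "/etc/sudoers.d/$NEWUSER"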
If I start with a clean slate and run vagrant up on this, Vagrant tries to connect as $USER right away, and since that user does not exist yet, the setup fails.
As a test I've tried doing this:
if VAGRANT_COMMAND != "up"
  config.ssh.username = user
  config.ssh.password = "changeme"
end
Then the provisioning in bootstrap.sh works. The user is created, and my packages are installed. When it gets to the file and synced folder provisioning, however, it fails because of permission issues.
Failed to upload a file to the guest VM via SCP due to a permissions
error. This is normally because the SSH user doesn't have permission
to write to the destination location. Alternately, the user running
Vagrant on the host machine may not have permission to read the file.
I've tried adding su $USER at the bottom of bootstrap.sh, but that is apparently not how it works.
Does anyone know how I can achieve what I need?
EDIT: possible solution
I decided to stop working so hard against Vagrant and to use the default vagrant user instead. Now I have the following Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, :path => "vagrant/bootstrap.sh"
  config.vm.synced_folder "~/src/", "/home/vagrant/src"
  config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig"
  config.vm.provision "file", source: "~/.ssh", destination: ".ssh-from-host-machine"
  config.vm.provision "file", source: "vagrant/.bash_aliases", destination: ".bash_aliases"
  config.vm.provision :shell, privileged: false, :path => "vagrant/bootstrap_late.sh"
end
bootstrap.sh installs the required packages, and bootstrap_late.sh does the necessary setup for the vagrant user. This includes adding the SSH config that makes it use $USER when talking to the in-house servers.
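For example, bootstrap_late.sh could end with something along these lines; the server name, the HOST_USER placeholder, and how the host username reaches the script are assumptions, since the script itself isn't shown:

#!/usr/bin/env bash
set -e

# Reuse the SSH keys copied over from the host machine
cp -rn ~/.ssh-from-host-machine/. ~/.ssh/
chmod 700 ~/.ssh

# Make SSH use the host-machine username for the in-house servers.
# HOST_USER and the Host pattern are placeholders.
HOST_USER="replace-with-host-username"
cat >> ~/.ssh/config <<EOF
Host *.example.internal
    User $HOST_USER
EOF
chmod 600 ~/.ssh/config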

Related

Multi-machine Vagrant with Ubuntu - Sinatra & PostgreSQL

I'm stuck again. I need to provision a multi-machine environment: one VM for a Sinatra app and a second for its PostgreSQL DB.
So far, I've managed to get the Sinatra app up and running in the ubuntu/xenial64 box, but the provisioning "breaks" when it hits the configuration for the DB:
Vagrant.configure("2") do |config|
config.vm.define "app" do |app|
# Use ubuntu/xenial64 as the virtual machine
app.vm.box = "ubuntu/xenial64"
# Use a private network to connect the VM to the local machine via an IP with an alias
app.vm.network "private_network", ip: "192.168.10.100"
app.hostsupdater.aliases = ["development.local"]
# sync the 'app' directory in the local directory to '/app' on the VM
app.vm.synced_folder "app", "/app"
# Use the provisioning script in envirnonment to provision the VM for a Sinatra environment
app.vm.provision "shell", path: "environment/app/provision.sh"
app.vm.provision "shell", inline: set_env({ DATABASE_URL: "postgresql://myapp:dbpass#localhost:15432/myapp" })
end
config.vm.define "db" do |db|
db.vm.box = "ubuntu/trusty64"
db.vm.host_name = "postgresql"
db.vm.network "private_network", ip: "10.0.2.15"
# db.vm.forward_port 8000, 8000
db.hostsupdater.aliases = ["database.local"]
# db.vm.share_folder "home", "/home/vagrant", ".", :create => true
db.vm.provision "shell", path: "environment/db/provision.sh", privileged: false
end
end
As you've probably guessed, I'm running an external provisioning script for the PG setup. The odd thing is that I'm using the script recommended on Postgres' own site here.
In a separate location, I've git cloned that repo and followed the instructions and it works absolutely fine, creating a properly provisioned VM with PG installed.
However, I want to run a single vagrant up command that provisions both the app and the db correctly and has them speak to each other.
I'm (quite clearly) new to provisioning and DevOps as a whole, so would really appreciate some help.
I've uploaded my hilariously broken code here for you kind souls to look over if you feel so inclined.
The Vagrant documentation on multi-machine setups is quite thin, and Google isn't being much help.
Thanks!

Multi-machine Vagrant project not provisioning as per docs

I’m trying to set up a multi-machine Vagrant project. According to the docs (https://www.vagrantup.com/docs/multi-machine/), provisioning is “outside in”, meaning any top-level provisioning scripts are executed before the provisioning scripts in individual machine blocks.
The project contains a Laravel project, and a Symfony project. My Vagrantfile looks like this:
require "json"
require "yaml"
confDir = $confDir ||= File.expand_path("vendor/laravel/homestead", File.dirname(__FILE__))
homesteadYamlPath = "web/Homestead.yaml"
homesteadJsonPath = "web/Homestead.json"
afterScriptPath = "web/after.sh"
aliasesPath = "web/aliases"
require File.expand_path(confDir + "/scripts/homestead.rb")
Vagrant.configure(2) do |config|
config.vm.provision "shell", path: "init.sh"
config.vm.define "web" do |web|
web.ssh.forward_x11 = true
if File.exists? aliasesPath then
web.vm.provision "file", source: aliasesPath, destination: "~/.bash_aliases"
end
if File.exists? homesteadYamlPath then
Homestead.configure(web, YAML::load(File.read(homesteadYamlPath)))
elsif File.exists? homesteadJsonPath then
Homestead.configure(web, JSON.parse(File.read(homesteadJsonPath)))
end
if File.exists? afterScriptPath then
web.vm.provision "shell", path: afterScriptPath
end
end
config.vm.define "api" do |api|
api.vm.box = "ubuntu/trusty64"
api.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", "2048"]
end
api.vm.network "private_network", ip: "10.1.1.34"
api.vm.network "forwarded_port", guest: 80, host: 8001
api.vm.network "forwarded_port", guest: 3306, host: 33061
api.vm.network "forwarded_port", guest: 9200, host: 9201
api.vm.synced_folder "api", "/var/www/api"
api.vm.provision "shell", path: "api/provision.sh"
end
end
I have a block (web) for the Laravel project, where I’ve copied the contents of the Homestead-based Vagrantfile, and an api block that uses the “standard” Vagrant configuration.
To bootstrap the projects, I created a simple shell script (init.sh) that clones the Git repositories into git-ignored directories. Given that the documentation says configuration works outside-in, I’d expect that script to run first, and then the machine-specific blocks, but this doesn’t seem to be happening. Instead, on vagrant up, I receive the following error:
There are errors in the configuration of this machine. Please fix the following errors and try again:
vm:
* A box must be specified.
It seems it’s still trying to set up the individual machines before running the shell script. I know the shell script isn’t getting called because I added an echo statement to it. Instead, the terminal just outputs the following:
Bringing machine 'web' up with 'virtualbox' provider...
Bringing machine 'api' up with 'virtualbox' provider...
So how can I get Vagrant to run my shell script first? I think it’s failing because the web block checks whether my web/Homestead.yaml file exists and, if so, uses the values in there for configuration (including the box name); but since my shell script hasn’t been run and hasn’t cloned the repository, that file does not exist, so there is no box specified, which is what Vagrant complains about.
The issue is that you do not define a box for the web machine. You need to either define the box at the outer level, like
config.vm.box = "ubuntu/trusty64"
if you plan to use the same box/OS for both machines, or define it in the web scope:
web.vm.box = "another box"
EDIT
Using the provision property will run the script in the VM, which is not what you want here, as you want the script to run on your host (and because it runs in the VM, it needs the VM to be booted first).
The Vagrantfile is just a Ruby script, so you could call your script directly from it (via a Ruby call). A potential issue I could see is that you cannot guarantee the execution order, and especially that your init script will have completed before Vagrant does its things on the VM.
A possibility is to use the vagrant-triggers plugin and execute your shell script before the up event:
config.trigger.before :up do
  info "Running init.sh before bringing the machines up..."
  run "init.sh"
end
Running it this way, vagrant will wait for the script to be executed before it runs its part of the up command.
You would need to add a check to your script so it runs only when needed; otherwise it will run every time you start the machine (every vagrant up). For example, you could check for the presence of the YAML file, as in the sketch below.
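A minimal sketch of such a guard in init.sh, assuming its job is to clone the two repositories (the repository URLs and paths are placeholders):

#!/usr/bin/env bash
set -e

# Only clone when the Homestead config isn't there yet, so the trigger
# is effectively a no-op on subsequent runs of "vagrant up".
if [ ! -f web/Homestead.yaml ]; then
    git clone https://example.com/your-org/web.git web
fi

if [ ! -d api ]; then
    git clone https://example.com/your-org/api.git api
fi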

Vagrant 'permission denied' on Windows

I am having trouble accessing files through Vagrant on Windows. I have been using it on OS X for quite some time and have my Vagrantfile set up correctly; it works every time.
I have sent my colleague the same Vagrantfile; he is on Windows and receives 'Permission Denied' when trying to access files through the browser.
Just to be clear, the permission errors are returned by the server when accessing 'dev.local' in the browser, not by Vagrant itself, so it will be a configuration error on Windows or within the VM.
The VM is CentOS 6.5
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.5"
  config.vm.network "private_network", ip: "192.168.33.21"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.provision :shell, :path => "install.sh"
  config.vm.hostname = "dev.local"
  config.vm.synced_folder ".", "/home", id: "vagrant", :nfs => false, :mount_options => ["dmode=777", "fmode=777"]
  config.ssh.insert_key = false
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
end
Can any Windows Vagrant users shed any light on this?
It was the VirtualBox Guest Additions being out of date. The permission error was caused by the synced folder failing: the VM couldn't mount my local folder (which contained an index.php), so it was serving its own empty /home folder, and since directory listing is disabled it returned permission errors.
I had Guest Additions 4.X.X installed, while VirtualBox is on 5.X.X. Here is the fix:
1. Run vagrant plugin install vagrant-vbguest.
2. Then run vagrant up, which may still throw an error as the plugin fails to copy a file.
3. vagrant ssh into the box and run the following command, replacing 5.X.X with your VirtualBox version:
sudo ln -s /opt/VBoxGuestAdditions-5.X.X/lib/VBoxGuestAdditions /usr/lib/VBoxGuestAdditions
4. Log out and run vagrant reload.
5. Do a happy dance.
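If you want to double-check the versions afterwards, the vagrant-vbguest plugin installed in step 1 can report them; as far as I know it provides a status subcommand:

vagrant vbguest --status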

Using SaltStack grains file with Vagrant

I would like to use minion.d/*.conf to provision a vagrant machine.
Here is my Vagrant configuration:
Vagrant.configure("2") do |config|
## Choose your base box
config.vm.box = "precise64"
## For masterless, mount your salt file root
config.vm.synced_folder "salt/roots/", "/srv/salt/"
## Use all the defaults:
config.vm.provision :salt do |salt|
salt.minion_config = "salt/minion"
salt.run_highstate = true
salt.grains_config = "salt/minion.d/vagrant.conf"
end
end
After provisioning the Vagrant machine, I get errors rendering SLS files, since the minion.d/*.conf files are not being copied to the guest machine under:
/etc/salt/minion.d/
Should I add a synced folder to the Vagrant configuration to copy them?
Have you just tried mounting a synced folder to /etc/salt/grains?
## For masterless, mount your salt file root
config.vm.synced_folder "salt/roots/", "/srv/salt/"
config.vm.synced_folder "salt/grains.d/", "/etc/salt/grains.d/"
@Utah_Dave's solution will work just fine, or you can do the following (which is how I run it).
Filesystem:
/dev
    Vagrantfile
    salt-minion.conf
    salt/
        top.sls
        my-awesome-state/init.sls
    pillar/
        top.sls
        my-awesome-pillar.sls
Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "mafro/jessie64-au-salt"
# salt config directory & shared dir in /tmp
config.vm.synced_folder ".", "/srv/salt"
# setup the salt-minion
config.vm.provision :salt do |salt|
salt.minion_config = "salt-minion.conf"
end
end
salt-minion.conf:
file_client: local
id: awesome

file_roots:
  base:
    - /srv/salt/salt

pillar_roots:
  base:
    - /srv/salt/pillar
Vagrant's implementation of salt.grains_config doesn't copy the file to the /etc/salt/minion.d folder as you might expect. Instead it copies the file to /etc/salt/grains.
To get the salt-minion to read this new grains file, you just need to add the following to your minion configuration:
/etc/salt/minion
include:
  - /etc/salt/grains
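To sanity-check the result after provisioning, you can inspect the grains from inside the box (a quick verification, not part of the original answer):

vagrant ssh -c "sudo salt-call --local grains.items"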

When I do Vagrant provisioning, can I ignore some provision steps from the Vagrantfile on vagrant reload?

In my Vagrantfile, I have two shell provisioners: one installs the system dependencies for my project, and the other starts up the nginx server.
So what I want is: when I run vagrant reload --provision, can I skip the provisioner that installs the system dependencies and just start up the nginx server instead?
Sample code:
VAGRANTFILE_API_VERSION = '2'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  ...
  # Ignore this line on VM reload
  config.vm.provision 'shell', path: 'provision/install.sh'
  # Execute this one only on VM reload
  config.vm.provision 'shell', path: 'provision/start_nginx.sh'
  ...
end
One simple, though slightly hacky, solution is to pass an environment variable when running the vagrant reload command, like this:
RELOAD=true vagrant reload --provision
and then, in the Vagrantfile:
VAGRANTFILE_API_VERSION = '2'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  ...
  # Skipped on VM reload (ENV values are strings, so compare against 'true')
  if ENV['RELOAD'] != 'true'
    config.vm.provision 'shell', path: 'provision/install.sh'
  end
  # Runs on every provisioning pass, including reloads
  config.vm.provision 'shell', path: 'provision/start_nginx.sh'
  ...
end
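Usage then looks like this (RELOAD is just the variable name chosen above):

# First boot: installs dependencies and starts nginx
vagrant up

# Later: skip install.sh and only rerun the nginx provisioner
RELOAD=true vagrant reload --provision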
