Vagrant argv input on terminal complains about machine name - ruby

I am trying to pass in arguments (via known ruby methods) to my vagrant up command line, but am getting machine not found errors. What is the correct way to do this in Vagrant?
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Parse options
options = {}
options[:keyfile] = ARGV[1] || false        # Your Github authentication keyfile
options[:boxtype] = ARGV[2] || 'virtualbox' # Type of virtual appliance to load
options[:version] = ARGV[3] || 'latest'     # Box version to load (not used currently)

ARGV.delete_at(1)
ARGV.delete_at(1)
ARGV.delete_at(1)

Vagrant.configure("2") do |config|
  puts "==> [info] Looking for #{options[:keyfile]}"
  if File.file?(options[:keyfile])
    config.vm.provision :shell, :inline => "echo -e '#{File.read(options[:keyfile])}' > '/home/vagrant/.ssh/GitKey'"
  else
    puts "==> [error] The required RSA key #{options[:keyfile]} does not exist, exiting."
    abort
  end
end
Error
$ vagrant up ~/.ssh/github_rsa
The machine with the name '/Users/ehime/.ssh/github_rsa' was not found configured for
this Vagrant environment.
EDIT
Trying this a different way gives me some slightly more promising results
require 'optparse'
require 'ostruct'
....
options = OpenStruct.new
OptionParser.new do |opt|
  opt.on('-v', '--version VERSION', 'Box version to load (not used currently)') { |o| options.version = o }
  opt.on('-k', '--keyfile KEYFILE', 'Your Github authentication keyfile')       { |o| options.keyfile = o }
  opt.on('-b', '--boxfile BOXTYPE', 'Type of virtual appliance to load')        { |o| options.boxtype = o }
end.parse!

Vagrant.configure("2") do |config|
  puts "==> [info] Looking for #{options.keyfile}"
  if File.file?(options.keyfile)
    config.vm.provision :shell, :inline => "echo -e '#{File.read(options.keyfile)}' > '/home/vagrant/.ssh/GitKey'"
  else
    puts "==> [error] The required RSA key #{options.keyfile} does not exist, exiting."
    abort
  end
....
This gets me pretty close as well, but the flags need to be removed somehow so they don't conflict with Vagrant's own option parsing. At least the help flag works:
$ vagrant up -k /Users/ehime/.ssh/github_rsa
==> [info] Looking for /Users/ehime/.ssh/github_rsa
An invalid option was specified. The help for this command
is available below.
Usage: vagrant up [options] [name]
Options:
--[no-]provision Enable or disable provisioning
--provision-with x,y,z Enable only certain provisioners, by type.
--[no-]destroy-on-error Destroy machine if any fatal error happens (default to true)
--[no-]parallel Enable or disable parallelism if provider supports it
--provider PROVIDER Back the machine with a specific provider
-h, --help Print this help
Help
$ vagrant up -h
Usage: vagrant up [options] [name]
Options:
--[no-]provision Enable or disable provisioning
--provision-with x,y,z Enable only certain provisioners, by type.
--[no-]destroy-on-error Destroy machine if any fatal error happens (default to true)
--[no-]parallel Enable or disable parallelism if provider supports it
--provider PROVIDER Back the machine with a specific provider
-h, --help Print this help
Usage: vagrant [options]
-v, --version VERSION Box version to load (not used currently)
-k, --keyfile KEYFILE Your Github authentication keyfile
-b, --boxfile BOXTYPE Type of virtual appliance to load

The Vagrantfile is not executed directly, so you cannot just pass in arguments as you would with a normal script. vagrant looks for the file inside cwd() and brings it in itself, which means the command-line arguments belong to vagrant, not to your Ruby code.
I would go the route of environment variables, or a template file that you generate before running vagrant.
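For example, a minimal sketch of the environment-variable route (the GITHUB_KEYFILE and BOX_TYPE names are my own, not from the answer, and I've downgraded the hard abort to a warning so other vagrant commands keep working):
# -*- mode: ruby -*-
# vi: set ft=ruby :

options = {}
options[:keyfile] = ENV['GITHUB_KEYFILE'] || false   # Your Github authentication keyfile
options[:boxtype] = ENV['BOX_TYPE'] || 'virtualbox'  # Type of virtual appliance to load

Vagrant.configure("2") do |config|
  if options[:keyfile] && File.file?(options[:keyfile])
    config.vm.provision :shell,
      :inline => "echo -e '#{File.read(options[:keyfile])}' > '/home/vagrant/.ssh/GitKey'"
  else
    # no keyfile supplied -- warn instead of aborting, so `vagrant status` etc. still work
    puts "==> [warn] No keyfile given (set GITHUB_KEYFILE), skipping key provisioning"
  end
end
It would then be invoked as:
GITHUB_KEYFILE=~/.ssh/github_rsa vagrant up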

Related

sed script with backslashes works in console or standalone script but not a Vagrantfile

I have this line:
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
A really simple line that sets AllowNoPassword to true in my phpMyAdmin config.inc.php (this is a dev environment, of course).
It works perfectly in the console, but in a script file run by Vagrant it simply does not.
I believe it has to do with the ' characters, but I cannot see what the difference is.
What is going on here and how do I solve it?
Edit
Here is a complete example, minus a few bits of logic to simplify it and remove some private details etc:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "ubuntu/xenial64"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
config.vm.network "forwarded_port", guest: 443, host: 4343, host_ip: "127.0.0.1"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision "shell", inline: <<-SHELL
add-apt-repository ppa:ondrej/php
apt-get update > /dev/null
yes | apt-get install zip
yes | apt-get install php7.1-fpm \
php7.1-curl \
php7.1-gd \
php7.1-mysql \
php7.1-mbstring \
php7.1-xml \
php7.1-mcrypt \
php7.1-soap \
php7.1-dev
# Set the right user for PHP7 ("ubuntu" user)
sed -i -e "s/www-data/ubuntu/g" /etc/php/7.1/fpm/pool.d/www.conf
# Fix error reporting so it is consistent with live server
sed -i 's/^error_reporting = .*/error_reporting = E_ALL/' /etc/php/7.1/fpm/php.ini
sed -i 's/^error_reporting = .*/error_reporting = E_ALL/' /etc/php/7.1/cli/php.ini
yes | apt-get install php-pear
pecl install xdebug
# Add xdebug to PHP runtime
echo 'zend_extension=xdebug.so' >> /etc/php/7.1/fpm/php.ini
echo 'zend_extension=xdebug.so' >> /etc/php/7.1/cli/php.ini
service php7.1-fpm restart
debconf-set-selections <<< 'mysql-server mysql-server/root_password password root'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password root'
yes | apt-get -y install mysql-server
mysqladmin -u root -p'root' password ''
SHELL
$script = <<-SCRIPT
cp config.sample.inc.php config.inc.php
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
SCRIPT
config.vm.provision "shell", inline: $script, privileged: false
#config.vm.provision :shell, path: "bootstrap.sh"
end
Use an escaped (non-interpolating) heredoc, so Ruby does not process the backslashes:
$script = <<-'SCRIPT'
cp config.sample.inc.php config.inc.php
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
SCRIPT
The quotes around SCRIPT indicate to the Ruby interpreter that all contents should be literal -- taken precisely as given rather than prone to expansions. (Such quotes have the same meaning in shell heredocs as well).
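To see why the backslashes were disappearing, here is a standalone Ruby sketch (my own illustration, not part of the original answer) comparing the two heredoc forms:
# heredoc_demo.rb - compare Ruby's two heredoc forms
plain = <<-EOS
sed -i 's/\$cfg/true/' config.inc.php
EOS

literal = <<-'EOS'
sed -i 's/\$cfg/true/' config.inc.php
EOS

puts plain    # sed -i 's/$cfg/true/' config.inc.php   <- Ruby dropped the backslash
puts literal  # sed -i 's/\$cfg/true/' config.inc.php  <- delivered to the shell verbatim
In the plain heredoc Ruby applies double-quoted-string escape rules, so every \$, \[ and \' loses its backslash before the shell ever sees the command; the quoted heredoc passes the text through untouched.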

How to avoid storing the FTP password in the Vagrantfile (Vagrant push FTP strategy)

I am developing a small site. I use Vagrant for the development environment and want to use it to deploy to production. The Vagrant docs say that there is a Vagrant push FTP strategy.
Config:
config.push.define "ftp" do |push|
push.host = "ftp.company.com"
push.username = "username"
push.password = "password"
end
Usage:
vagrant push
It is quite enough for me, but the thing that is stopping me is storing the FTP host, username and password in a Vagrantfile that goes into my version control system, which is bad practice.
Can you give any workaround for this case?
Generate hashed password and store it
openssl passwd -1 "Your_password"
I found a solution using a config file. Inspired by this question, I moved my sensitive data to a separate file. I called it ftp.yml and added it to .gitignore.
ftp.yml
ftp_host: "host"
ftp_user: "username"
ftp_pass: "password"
.gitignore
ftp.yml
Vagrantfile
# loading FTP config
require 'yaml'
settings = YAML.load_file 'ftp.yml'

Vagrant.configure("2") do |config|
  # vm config omitted

  config.push.define "ftp" do |push|
    push.host     = settings['ftp_host']
    push.username = settings['ftp_user']
    push.password = settings['ftp_pass']
  end
end
It worked fine for me.
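If you want the Vagrantfile to keep working for people who have not created ftp.yml yet, a small guard helps (a sketch of my own, not part of the original answer; the "skip the push target" fallback is an assumption):
require 'yaml'

# fall back to an empty hash when the credentials file is missing
settings = File.exist?('ftp.yml') ? YAML.load_file('ftp.yml') : {}

Vagrant.configure("2") do |config|
  # only define the push target when the credentials are actually present
  if settings['ftp_host']
    config.push.define "ftp" do |push|
      push.host     = settings['ftp_host']
      push.username = settings['ftp_user']
      push.password = settings['ftp_pass']
    end
  end
end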
A simple solution is to use an environment variable. It has the important benefit of not storing the password in clear text in the Vagrantfile.
Vagrantfile:
config.push.define "ftp" do |push|
push.host = "ftp.company.com"
push.username = "username"
push.password = ENV["FTP_PASSWORD"]
end
And export the password in an environment variable before calling vagrant:
export FTP_PASSWORD='super_secret'
vagrant push
Personally, I use the 1Password command-line tool to retrieve the password from my vault:
export FTP_PASSWORD=$(op get item ftp.company.com | jq '.details.fields[] | \
select(.designation=="password").value' -r)
vagrant push

Vagrant - capability 'change_host_name' not supported

The problem is that the capability 'change_host_name' isn't supported by the guest when I try to execute the following command:
vagrant up
It gives me the following error:
Vagrant attempted to execute the capability 'change_host_name'
on the detect guest OS 'linux', but the guest doesn't
support that capability. This capability is required for your
configuration of Vagrant. Please either reconfigure Vagrant to
avoid this capability or fix the issue by creating the capability.
Note that my OS is:
OS X Yosemite 10.10.5
Guest Additions Version: 4.2.0 and VirtualBox Version: 5.0
I've tried many solutions of others who face this issue, but I couldn't fix it.
This is https://github.com/mitchellh/vagrant/issues/7625. It will be fixed in the next release; until then, if it's blocking you, you can patch Vagrant yourself.
If you want to patch it yourself:
Method 1:
Search for the plugins/guests/ubuntu/guest.rb file in your Vagrant installation,
e.g. /opt/vagrant/embedded/gems/gems/vagrant-1.8.5/plugins/guests/ubuntu/guest.rb on a default Mac/Linux install,
or /opt/vagrant/embedded/gems/vagrant-1.8.5/plugins/guests/ubuntu/guest.rb
Windows: C:\HashiCorp\Vagrant\embedded\gems\gems\vagrant-1.8.5\plugins\guests\ubuntu\guest.rb
Replace it with
https://raw.githubusercontent.com/carlosefr/vagrant/1c631c18d1a654405f6954459a42ac19a1a2f096/plugins/guests/ubuntu/guest.rb (make sure you have the correct rights; if Vagrant was installed as admin, you must be an admin user to save the file).
Alternatively, edit the file and replace all of its contents with:
module VagrantPlugins
  module GuestUbuntu
    class Guest < Vagrant.plugin("2", :guest)
      def detect?(machine)
        # This command detects if we are running on Ubuntu. /etc/os-release is
        # available on modern Ubuntu versions, but does not exist on 14.04 and
        # previous versions, so we fall back to lsb_release.
        #
        # GH-7524
        # GH-7625
        #
        machine.communicate.test <<-EOH.gsub(/^ {10}/, "")
          if test -r /etc/os-release; then
            source /etc/os-release && test xubuntu = x$ID
          elif test -x /usr/bin/lsb_release; then
            /usr/bin/lsb_release -i 2>/dev/null | grep -q Ubuntu
          else
            exit 1
          fi
        EOH
      end
    end
  end
end
Method 2: An alternative is to patch the file using the patch command.
Save the following file as vagrant-guest.patch:
commit 00fa49191dba2bb7c6322fa8df9327ca505c0b41
Author: Seth Vargo <sethvargo@gmail.com>
Date:   Sat Jul 23 11:40:36 2016 -0400

    guests/ubuntu: Revert detection

    - Semi-reverts GH-7524
    - Fixes GH-7625

diff --git a/plugins/guests/ubuntu/guest.rb b/plugins/guests/ubuntu/guest.rb
index 9aeb7aa..f60108e 100644
--- a/plugins/guests/ubuntu/guest.rb
+++ b/plugins/guests/ubuntu/guest.rb
@@ -2,7 +2,22 @@ module VagrantPlugins
   module GuestUbuntu
     class Guest < Vagrant.plugin("2", :guest)
       def detect?(machine)
-        machine.communicate.test("test -r /etc/os-release && . /etc/os-release && test xubuntu = x$ID")
+        # This command detects if we are running on Ubuntu. /etc/os-release is
+        # available on modern Ubuntu versions, but does not exist on 14.04 and
+        # previous versions, so we fall back to lsb_release.
+        #
+        # GH-7524
+        # GH-7625
+        #
+        machine.communicate.test <<-EOH.gsub(/^ {10}/, "")
+          if test -r /etc/os-release; then
+            source /etc/os-release && test xubuntu = x$ID
+          elif test -x /usr/bin/lsb_release; then
+            /usr/bin/lsb_release -i 2>/dev/null | grep -q Ubuntu
+          else
+            exit 1
+          fi
+        EOH
       end
     end
   end
and run the following command to apply the patch
sudo patch -p1 --directory /opt/vagrant/embedded/gems/gems/vagrant-1.8.5/ < vagrant-guest.patch
Just replace /opt/vagrant/embedded/gems/gems/vagrant-1.8.5 (or /opt/vagrant/embedded/gems/vagrant-1.8.5/plugins/guests/ubuntu/guest.rb) with your Vagrant installation folder.

Adding VM /etc/host entries that point to host machine with Vagrant and Puphpet

I know how to use vagrant-hostsupdater to add entries into the host's /etc/hosts file that point to the VM, but I'm actually trying to find a dynamic way to go the OTHER direction. On my machine, I have MySQL installed with a large db. I don't want to put this inside the VM, I need the VM to be able to access it.
I can easily set it up manually. After vagrant up, I can SSH into the VM, edit /etc/hosts there, and add an entry like hostmachine.local pointing to my host's current IP address. However, as I move from home to work, my host machine's IP changes, so I constantly have to update that entry.
Is there a way, within an .erb file or otherwise, to make vagrant up pick up the host machine's IP and add such an entry to the VM's hosts file?
Here's one way to do it. Since the Vagrantfile is a Ruby script, we can use a bit of logic to find the local hostname and IP address. Then we use those in a simple provisioning script which adds them to the guest /etc/hosts file.
Example Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# test setting the host IP address into the guest /etc/hosts

# determine host IP address - may need some other magic here
# (ref: http://stackoverflow.com/questions/5029427/ruby-get-local-ip-nix)
require 'socket'

def my_first_private_ipv4
  Socket.ip_address_list.detect { |intf| intf.ipv4_private? }
end

ip = my_first_private_ipv4.ip_address()

# determine host name - may need some other magic here
# (strip the trailing newline the backtick command returns)
hostname = `hostname`.strip

script = <<SCRIPT
echo "#{ip} #{hostname}" | tee -a /etc/hosts
SCRIPT

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.hostname = "iptest"

  config.vm.provision :shell, :inline => script

  config.vm.provider "virtualbox" do |vb|
    # vb.gui = true
    vb.name = "iptest"
    vb.customize ["modifyvm", :id, "--memory", "1000"]
  end
end
Note: the echo | tee -a command adding to /etc/hosts will keep appending if you provision multiple times (without destroying the VM). You might need a better solution there if you run into that.
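If repeated provisioning is a concern, one small tweak (my own sketch, reusing the ip and hostname variables from the Vagrantfile above) is to append the entry only when it is not already there:
script = <<SCRIPT
# append only if the hostname is not yet present in /etc/hosts
grep -q "#{hostname}" /etc/hosts || echo "#{ip} #{hostname}" | tee -a /etc/hosts
SCRIPT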
Another possible solution is to use the vagrant-hosts plugin. Host IP can be found the same way BrianC showed in his answer.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'socket'

def my_first_private_ipv4
  Socket.ip_address_list.detect { |intf| intf.ipv4_private? }
end

host_ip = my_first_private_ipv4.ip_address()

Vagrant.configure(2) do |config|
  config.vm.define "web", primary: true do |a|
    a.vm.box = "ubuntu/trusty64"
    a.vm.hostname = "web.local"

    a.vm.provider "virtualbox" do |vb|
      vb.memory = 2048
      vb.cpus = 1
    end

    a.vm.provision :hosts do |provisioner|
      provisioner.add_host host_ip, ['host.machine']
    end
  end
end
The provisioner will add a row to the VM's /etc/hosts file, mapping the host machine's IP address to host.machine. Running the provisioner multiple times will not result in duplicate lines in /etc/hosts.
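Note that the vagrant-hosts plugin has to be installed on the host machine before the :hosts provisioner is recognised:
vagrant plugin install vagrant-hosts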
I actually found a simpler solution for my situation (running the VM on a Mac, with my local.db.yml file not part of the source code). Instead of using a name/IP pair, I was able to go to the Mac's System Preferences and find my computer's local network name, i.e. Kris-White-Mac.local.
This resolves both inside and outside of the VM, so by putting that name instead of localhost or 127.0.0.1, it works even when my IP changes.

Is it possible to run a script on a virtual machine after Vagrant finishes provisioning all of them?

I am using Vagrant v1.5.1 to create a cluster of virtual machines (VMs). After all the VMs are provisioned, is it possible to run a single script on one of the machines? The script I want to run will setup passwordless SSH from one VM to all the other VMs.
For example, my nodes provisioned in Vagrant (CentOS 6.5) are as follows.
node1
node2
node3
node4
My Vagrantfile looks like the following.
(1..4).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.box = "centos65"
    ...omitted..
  end
end
After all this is done, I then need to run a script on node1 to enable passwordless SSH to node2, node3, and node4.
I know you can run scripts as each VM is being provisioned, but in this case, I want to run a script after all VMs are provisioned, since I need all VMs to be up and running to run this last script.
Is this possible in Vagrant?
I realized that I can also iterate backwards too.
r = 4..1
(r.first).downto(r.last).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.box = "centos65"
    ...omitted..
    if i == 1
      node.vm.provision "shell" do |s|
        s.path = "/path/to/script.sh"
      end
    end
  end
end
This will work great, but in reality I also need to set up passwordless SSH from node2 to node1, node3, and node4. In the approach above, this could only ever work for node1, not for node2 (since node1 will not yet be provisioned at that point).
If there's a Vagrant plugin that allows passwordless SSH between all nodes in my cluster, that would be even better.
The question is a year old; anyway, I found it because I had the same problem, so here is the workaround I used to solve it. Somebody might find it useful.
We need vagrant triggers for this to work. The thing with vagrant triggers is that they fire for every machine you are creating, but we want to determine the moment ALL machines are up. We can do that by checking, on each up event, whether that event corresponds to the last machine being created:
Vagrant.configure("2") do |config|
(1..$machine_count).each do |i|
config.vm.define vm_name = "w%d" % i do |worker|
worker.vm.hostname = vm_name
workerIP = IP
worker.vm.network :private_network, ip: workerIP
worker.trigger.after :up do
if(i == $machine_count) then
info "last machine is up"
run_remote "bash /vagrant/YOUR_SCRIPT.sh"
end
end
end
end
end
This works for providers that do not support parallel execution in Vagrant (VirtualBox, VMware).
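For reference, on Vagrant versions before the trigger feature was built in, this approach relies on the vagrant-triggers plugin (the plugin name comes from the later answer below), which has to be installed first:
vagrant plugin install vagrant-triggers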
There is no hook in Vagrant for "run after all VMs are provisioned", so you would need to implement it yourself. A couple options I can think of:
1: Run the SSH setup script after all VMs are running.
For example if the script was named ssh_setup.sh and present in the shared folder:
$ for i in {1..4}; do vagrant ssh node$i -c 'sudo /vagrant/ssh_setup.sh'; done
2: Use the same SSH keys for all hosts and set them up during provisioning
If all nodes share the same passphrase-less SSH key, you could copy the needed files (authorized_keys, id_rsa, etc.) into ~/.ssh during provisioning, as sketched below.
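A sketch of that idea (my own; it assumes a passphrase-less key pair has been generated next to the Vagrantfile with ssh-keygen -f ssh/id_rsa -N '' and that the default /vagrant synced folder is available):
Vagrant.configure("2") do |config|
  (1..4).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.box = "centos65"
      # copy the shared key pair from the project's ./ssh folder into the guest
      node.vm.provision "shell", privileged: false, inline: <<-SHELL
        cp /vagrant/ssh/id_rsa /vagrant/ssh/id_rsa.pub ~/.ssh/
        cat /vagrant/ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/id_rsa ~/.ssh/authorized_keys
      SHELL
    end
  end
end
Since every node holds the same private key and trusts the matching public key, any node can SSH to any other without a password once they are all up.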
Adding an updated answer.
The vagrant-triggers plugin was merged into Vagrant 2.1.0 in May 2018.
We can simply use the only_on option from the trigger class.
Let's say we have the following configuration:
servers = [
  { :hostname => "net1", :ip => "192.168.11.11" },
  { :hostname => "net2", :ip => "192.168.22.11" },
  { :hostname => "net3", :ip => "192.168.33.11" }
]
We can now easily execute the trigger after the last machine is up:
# Take the hostname of the last machine in the array
last_vm = servers[servers.length - 1][:hostname]

Vagrant.configure(2) do |config|
  servers.each do |machine|
    config.vm.define machine[:hostname] do |node|
      # ----- Common configuration ----- #
      node.vm.box = "debian/jessie64"
      node.vm.hostname = machine[:hostname]
      node.vm.network "private_network", ip: machine[:ip]

      # ----- Adding trigger - only after last VM is UP ------ #
      node.trigger.after :up do |trigger|
        trigger.only_on = last_vm # <---- Just use it here!
        trigger.info = "Running only after last machine is up!"
      end
    end
  end
end
And we can check the output and see that the trigger really fires only after "net3" is up:
==> net3: Setting hostname...
==> net3: Configuring and enabling network interfaces...
==> net3: Installing rsync to the VM...
==> net3: Rsyncing folder: /home/rotem/workspaces/playground/vagrant/learning-network-modes/testing/ => /vagrant
==> net3: Running action triggers after up ...
==> net3: Running trigger...
==> net3: Running only after last machine is up!
This worked pretty well for me: I used per-VM provision scripts, and in the last script I called the post-provision script via SSH on the first VM.
In Vagrantfile:
require 'fileutils'

Vagrant.require_version ">= 1.6.0"

$max_nodes = 2
$vm_name   = "vm_prefix"
#...<skipped some lines that are not relevant to the case >...

Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true
  config.ssh.insert_key = false

  # ubuntu 16.04
  config.vm.box = "ubuntu/xenial64"

  (1..$max_nodes).each do |i|
    config.vm.define vm_name = "%s-%02d" % [$vm_name, i] do |config|
      config.vm.hostname = vm_name
      config.vm.network "private_network", ip: "10.10.0.%02d" % [i+20], :name => 'vboxnet2'
      config.vm.network :forwarded_port, guest: 22, host: "1%02d22" % [i+20], id: "ssh"
      config.vm.synced_folder "./shared", "/host-shared"

      config.vm.provider :virtualbox do |vb|
        vb.name = vm_name
        vb.gui = false
        vb.memory = 4096
        vb.cpus = 2
        vb.customize ["modifyvm", :id, "--cpuexecutioncap", "100"]
        vb.linked_clone = true
      end

      # Important part:
      config.vm.provision "shell", path: "common_provision.sh"
      config.vm.provision "shell", path: "per_vm_provision#{i}.sh"
    end
  end
end
On disk:
(ensure that post_provision.sh has at least owner execute permissions: rwxr..r..)
vm$ ls /vagrant/
...<skipped some lines that are not relevant to the case >...
config.sh
common_provision.sh
per_vm_provision1.sh
per_vm_provision2.sh
per_vm_provision3.sh
...
per_vm_provisionN.sh
post_provision.sh
Vagrantfile
...<skipped some lines that are not relevant to the case >...
In config.sh:
num_vm="2" # should equal the $max_nodes in Vagrantfile
name_vm="vm_prefix" # should equal the $vm_name in Vagrantfile
username="user1"
userpass="abc123"
...<skipped some lines that are not relevant to the case >...
In common_provision.sh:
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
sed -r -i 's/\%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/' /etc/sudoers
sed -r -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
service ssh reload
# add user ${username}
useradd --create-home --home-dir /home/${username} --shell /bin/bash ${username}
usermod -aG admin ${username}
usermod -aG sudo ${username}
/bin/bash -c "echo -e \"${userpass}\n${userpass}\" | passwd ${username}"
# provision additional ssh keys
# copy ssh keys from disk
cp /vagrant/ssh/* /home/vagrant/.ssh
cat /vagrant/ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
mkdir /home/${username}/.ssh
cp /vagrant/ssh/* /home/${username}/.ssh
cat /vagrant/ssh/id_rsa.pub >> /home/${username}/.ssh/authorized_keys
# not required, just for convenience
cat >> /etc/hosts <<EOF
10.10.0.21 ${name_vm}-01
10.10.0.22 ${name_vm}-02
10.10.0.23 ${name_vm}-03
...
10.10.0.2N ${name_vm}-0N
EOF
...<skipped some lines that are not relevant to the case >...
In per_vm_provision2.sh:
#!/bin/bash
# import variables from config
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
# check if this is the last provisioned vm
if [ "x${num_vm}" = "x2" ] ; then
ssh vagrant@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
fi
In per_vm_provisionN.sh:
#!/bin/bash
# import variables from config
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
# check if this is the last provisioned vm. N represents the highest number
if [ "x${num_vm}" = "xN" ] ; then
ssh vagrant@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
fi
I hope I didn't skip anything important, but I think the idea is clear in general.
Note: SSH keys for inter-VM access are provisioned by Vagrant by default. You can add your own SSH keys if needed using common_provision.sh.
