Where to store Ansible host file on Mac OS X - macos

I am trying to get started with Ansible to provision my Vagrant box, but I can't figure out how to deal with hosts files.
According to the documentation it should be stored in /etc/ansible/hosts, but I can't find this on my system (Mac OS X). I have also seen examples where a hosts.ini file is situated in the project root adjacent to the Vagrantfile.
So my question is: where would you store your hosts file for setting up a single Vagrant box?

While Ansible will try /etc/ansible/hosts by default, there are several ways to tell Ansible where to look for an alternate inventory file:
use the -i command line switch and pass your inventory file path
add inventory = path_to_hostfile in the [defaults] section of your ~/.ansible.cfg configuration file
use export ANSIBLE_HOSTS=path_to_hostfile as suggested by DomaNitro in his answer
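For reference, a quick sketch of all three (the paths are placeholders):
# 1. command line switch
ansible-playbook -i /path/to/hosts provision.yml
# 2. in ~/.ansible.cfg
[defaults]
inventory = /path/to/hosts
# 3. environment variable
export ANSIBLE_HOSTS=/path/to/hosts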
Now you don't mention if you want to use the Ansible provisioner available in Vagrant, or if you want to provision your Vagrant host manually.
Let's go for the Vagrant Ansible provisioner first:
Create a directory (e.g. test), and create a Vagrantfile inside:
Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "precise64-v1.2"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.define :webapp do |webapp|
webapp.vm.hostname = "webapp.local"
webapp.vm.network :private_network, ip: "192.168.123.2"
webapp.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 200, "--name", "vagrant-docs", "--natdnshostresolver1", "on"]
end
end
#
# Provisionning
#
config.vm.provision :ansible do |ansible|
ansible.playbook = "provision.yml"
ansible.inventory_path = "hosts"
ansible.sudo = true
#
# Use anible.tags if you want to restrict what `vagrant provision`
# Here is a list of possible tags
# ansible.tags = "foo bar"
#
# Use ansible.verbose to see detailled output for ansible runs
# ansible.verbose = 'vvv'
#
# Customize your stuff here
ansible.extra_vars = {
some_var: 42,
foo: "bar",
}
end
end
Now when you run vagrant up (or vagrant provision), Vagrant's Ansible provisioner will look for a file named hosts in the same directory as the Vagrantfile, and will try to apply the provision.yml playbook.
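For completeness, a minimal hosts inventory matching this Vagrantfile could look like this (the group name is arbitrary; the IP is the private network address defined above):
[webapp]
webapp.local ansible_ssh_host=192.168.123.2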
You can also run it manually, without resorting to Vagrant's Ansible provisioner:
ansible-playbook -i hosts provision.yml --ask-pass --sudo
Note that the Vagrant + VirtualBox + Ansible trio does not always get along well; some version combinations are problematic. Try to upgrade to the latest versions if you experience issues (especially regarding networking).
{shameless_plug} You can find a more extensive example mixing Vagrant and Ansible here {/shameless_plug}
Good luck!

If you used Brew to install Ansible, you'll most likely find the default hosts file at /usr/local/etc/ansible/hosts. But, as others pointed out, you may just want to change the default.

I like to use bash environment variables, as my base project is shared with other users.
You can simply export ANSIBLE_HOSTS=/pathTo/inventory/; this can be a hosts file or a directory with multiple files.
You can also write it in your ~/.bash_profile so it's persistent.
A bunch of other variables can be set that way instead of maintaining a conf file; for more info, check the source in ansible/lib/ansible/constants.py.
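For example, the relevant part of a ~/.bash_profile might look like this (paths are placeholders; ANSIBLE_CONFIG is another variable honoured the same way):
export ANSIBLE_HOSTS=$HOME/projects/myproject/inventory
export ANSIBLE_CONFIG=$HOME/projects/myproject/ansible.cfg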

Here is a description of what to do after installing Ansible on a Mac; it worked for me: ansible-tips-and-tricks.readthedocs.io
I downloaded the ansible.cfg file to
/Users/"yourUser"/.ansible
and afterwards you can edit the ansible.cfg file by uncommenting the
inventory = /Users/"yourUser"/.ansible
line and specifying the path to the ansible folder as shown above. You can then create the hosts file in this folder as well. To try it out locally, you can put
localhost ansible_connection=local
in the hosts file and try it out with
ansible -m ping all
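Putting it together, the relevant part of the edited ansible.cfg would look something like this (the path is an example, matching the folder above):
[defaults]
inventory = /Users/yourUser/.ansible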

If you use Vagrant's ansible provisioner, Vagrant will automatically generate an Ansible hosts file (called vagrant_ansible_inventory_default) and configure ansible-playbook to use that file. It looks like this:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
It calls the Vagrant host "default", so your plays should either refer to "default" or "all".
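If you want to run Ansible against that generated inventory yourself, something along these lines should work (older Vagrant versions write the file next to the Vagrantfile; newer ones keep it under .vagrant/provisioners/ansible/):
ansible all -i vagrant_ansible_inventory_default -m ping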

On Mac I used sudo find / -type d -name "ansible" 2> /dev/null to find it,
but I didn't have an .../ansible/hosts folder out of the box, maybe because I installed using brew as I read above, so I created one at the path /etc/ansible/hosts.
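If you end up in the same situation, creating the default location by hand only takes a couple of commands (assuming you want the /etc/ansible path):
sudo mkdir -p /etc/ansible
sudo touch /etc/ansible/hosts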

Related

sed script with backslashes works in console or standalone script but not a Vagrantfile

I have this line:
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
A really simple line for setting AllowNoPassword to true in my phpMyAdmin config.inc.php (this is a dev environment, of course).
It works perfectly in the console, but in a script file run by Vagrant it simply does not.
I believe it has to do with the ' quoting, but I cannot work out what the difference is.
What is going on here, and how do I solve it?
Edit
Here is a complete example, minus a few bits of logic to simplify it and remove some private details etc:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "ubuntu/xenial64"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
  config.vm.network "forwarded_port", guest: 443, host: 4343, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  config.vm.provision "shell", inline: <<-SHELL
    add-apt-repository ppa:ondrej/php
    apt-get update > /dev/null
    yes | apt-get install zip
    yes | apt-get install php7.1-fpm \
      php7.1-curl \
      php7.1-gd \
      php7.1-mysql \
      php7.1-mbstring \
      php7.1-xml \
      php7.1-mcrypt \
      php7.1-soap \
      php7.1-dev

    # Set the right user for PHP7 ("ubuntu" user)
    sed -i -e "s/www-data/ubuntu/g" /etc/php/7.1/fpm/pool.d/www.conf

    # Fix error reporting so it is consistent with live server
    sed -i 's/^error_reporting = .*/error_reporting = E_ALL/' /etc/php/7.1/fpm/php.ini
    sed -i 's/^error_reporting = .*/error_reporting = E_ALL/' /etc/php/7.1/cli/php.ini

    yes | apt-get install php-pear
    pecl install xdebug

    # Add xdebug to PHP runtime
    echo 'zend_extension=xdebug.so' >> /etc/php/7.1/fpm/php.ini
    echo 'zend_extension=xdebug.so' >> /etc/php/7.1/cli/php.ini

    service php7.1-fpm restart

    debconf-set-selections <<< 'mysql-server mysql-server/root_password password root'
    debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password root'
    yes | apt-get -y install mysql-server
    mysqladmin -u root -p'root' password ''
  SHELL

  $script = <<-SCRIPT
    cp config.sample.inc.php config.inc.php
    sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
  SCRIPT
  config.vm.provision "shell", inline: $script, privileged: false

  # config.vm.provision :shell, path: "bootstrap.sh"
end
Use an escaped (single-quoted) heredoc:
$script = <<-'SCRIPT'
cp config.sample.inc.php config.inc.php
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
SCRIPT
The quotes around SCRIPT indicate to the Ruby interpreter that all contents should be literal: taken precisely as given rather than subject to interpolation and escape expansion. (Such quotes have the same meaning in shell heredocs as well.)
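A quick way to see the difference for yourself (a standalone Ruby sketch, not part of the Vagrantfile):
plain = <<-SCRIPT
echo \$HOME
SCRIPT
quoted = <<-'SCRIPT'
echo \$HOME
SCRIPT
puts plain   # => echo $HOME   (Ruby consumed the backslash)
puts quoted  # => echo \$HOME  (contents kept literally)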

Why does Vagrant truncate hostname

I'm setting the hostname in my Vagrantfile like so:
config.vm.hostname = "demo.puppet"
However, this ends up with a hostname of just demo:
vagrant@demo:~$ hostname
demo
It seems that Vagrant truncates at the first `.`. Is this expected behaviour? Plenty of examples on the web seem to have hostnames with . in them.
It doesn't have anything to do with Vagrant. A hostname label cannot contain anything but a-z, 0-9, and the dash (-). When you set that hostname, you are setting your hostname to "demo" in the domain "puppet". You should use "demo-puppet" instead.
As a side note, I've started making a habit of including the hostname of the Vagrant host in my VM hostnames. It can come in handy later; for instance, when deploying a build, you can include the hostname from which it was deployed. A line like config.vm.hostname = "myapp-vagrant-#{hostname[0..-2]}" in your Vagrantfile will do the trick, setting the hostname (in my case) to "myapp-vagrant-nhoover-osx".
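That snippet assumes hostname was captured earlier in the Vagrantfile; a minimal sketch of the whole idea might be:
hostname = `hostname`  # shells out to the host; the result has a trailing newline
Vagrant.configure("2") do |config|
  config.vm.hostname = "myapp-vagrant-#{hostname[0..-2]}"  # [0..-2] drops the newline
end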

Adding VM /etc/host entries that point to host machine with Vagrant and Puphpet

I know how to use vagrant-hostsupdater to add entries to the host's /etc/hosts file that point to the VM, but I'm actually trying to find a dynamic way to go the OTHER direction. On my machine I have MySQL installed with a large db. I don't want to put this inside the VM; I need the VM to be able to access it.
I can easily set it up manually. After vagrant up, I can ssh into the VM, edit the /etc/hosts there, and make an entry like hostmachine.local pointing to my IP address at the time. However, as I move from home to work my host machine's IP will change, so I constantly have to update that entry.
Is there a way within an .erb file or somehow to make a vagrant up take the IP of the host machine and make such an entry in a VM hosts file?
Here's one way to do it. Since the Vagrantfile is a Ruby script, we can use a bit of logic to find the local hostname and IP address. Then we use those in a simple provisioning script which adds them to the guest /etc/hosts file.
Example Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# test setting the host IP address into the guest /etc/hosts

# determine host IP address - may need some other magic here
# (ref: http://stackoverflow.com/questions/5029427/ruby-get-local-ip-nix)
require 'socket'

def my_first_private_ipv4
  Socket.ip_address_list.detect { |intf| intf.ipv4_private? }
end

ip = my_first_private_ipv4.ip_address()

# determine host name - may need some other magic here
hostname = `hostname`.strip # strip the trailing newline from the shell output

script = <<SCRIPT
echo "#{ip} #{hostname}" | tee -a /etc/hosts
SCRIPT

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.hostname = "iptest"
  config.vm.provision :shell, :inline => script

  config.vm.provider "virtualbox" do |vb|
    # vb.gui = true
    vb.name = "iptest"
    vb.customize ["modifyvm", :id, "--memory", "1000"]
  end
end
Note: the echo | tee -a command adding to /etc/hosts will keep appending if you provision multiple times (without destroying the VM). You might need a better solution there if you run into that.
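One simple way around that is to guard the append so it only happens once (a hedged tweak to the script above):
script = <<SCRIPT
grep -q "#{ip} #{hostname}" /etc/hosts || echo "#{ip} #{hostname}" | tee -a /etc/hosts
SCRIPT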
Another possible solution is to use the vagrant-hosts plugin. Host IP can be found the same way BrianC showed in his answer.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'socket'

def my_first_private_ipv4
  Socket.ip_address_list.detect { |intf| intf.ipv4_private? }
end

host_ip = my_first_private_ipv4.ip_address()

Vagrant.configure(2) do |config|
  config.vm.define "web", primary: true do |a|
    a.vm.box = "ubuntu/trusty64"
    a.vm.hostname = "web.local"

    a.vm.provider "virtualbox" do |vb|
      vb.memory = 2048
      vb.cpus = 1
    end

    a.vm.provision :hosts do |provisioner|
      provisioner.add_host host_ip, ['host.machine']
    end
  end
end
The provisioner will add a row to the VM's /etc/hosts file, mapping the host machine's IP address to host.machine. Running the provisioner multiple times will not result in duplicate lines in /etc/hosts.
I actually found a simpler solution given my situation: running the VM on a Mac, with a local.db.yml file that is not part of the source code. Instead of using a name/IP pair, I was actually just able to go to the Mac's System Preferences and find my computer's local network name, i.e. Kris-White-Mac.local.
This name resolves both inside and outside of the VM, so by putting it instead of localhost or 127.0.0.1, it works even when my IP changes.

How can I upload more than one file with vagrant file provisioning?

In a Vagrant setup, I have to upload a couple of files from the host to the guest during the provisioning phase.
In https://docs.vagrantup.com/v2/provisioning/file.html I can see how to upload a single file, from a source to a destination, but how can I upload multiple files or a folder structure?
NOTE: I know I can do that using the shell provisioner, but in this particular case a set of file uploads would be more appropriate.
You would need a separate config.vm.provision section for each file. You can add multiple of those sections to your Vagrantfile, like this:
config.vm.provision :file do |file|
  file.source = "/etc/file1.txt"
  file.destination = "/tmp/1.txt"
end

config.vm.provision :file do |file|
  file.source = "/etc/file2.txt"
  file.destination = "/tmp/2.txt"
end
Output:
[default] Running provisioner: file...
[default] Running provisioner: file...
You see it is executing both provisioning actions. You can verify that the files are present inside the virtual machine:
vagrant ssh -- ls -al /tmp/{1,2}.txt
-rw-r--r-- 1 vagrant vagrant 4 Aug 27 08:22 /tmp/1.txt
-rw-rw-r-- 1 vagrant vagrant 4 Aug 27 08:22 /tmp/2.txt
Folder content uploads seem to work as well (I've only tried it with files, not nested folders):
config.vm.provision :file, source: '../folder', destination: "/tmp/folder"
In case anybody else just needs a quick and dirty hack to copy a directory structure from the host to the guest, this is how I finally did it:
require 'pathname' # at the beginning of the Vagrantfile

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # more configuration here

  r = Pathname.new 'root'
  Dir.glob(File.join("root", File.join("**", "*"))) do |f|
    if !File.directory?(f) then
      s = Pathname.new f
      t = s.relative_path_from r
      config.vm.provision "file" do |fp|
        fp.source = s.to_s
        fp.destination = "/tmp/root/" + t.to_s
      end
    end
  end

  # more configuration here
end
Thanks to @hek2mgl for getting me on the right track.
This works for me: cd to your Vagrant directory, then run
vagrant up
vagrant ssh
vagrant@yourvagrant: sudo chown -R testuser:testuser /var/www/ (or whatever folder you want to upload your files to)
and in your Vagrantfile, add
# add a file provisioner to upload a folder to the Vagrant guest from a local path
config.vm.provision :file, source: '/wamp/www/ian_loans', destination: "/var/www/solidbond"
Just replace the directories to suit your needs. Exit the VM,
and don't forget to run, in your Vagrant directory,
vagrant provision
Finally, go to the destination directory in the guest and check that the files you wanted to upload are there.
vagrant@yourvagrant: cd /var/your file directory
You can use the file provisioner, as others have mentioned, but it's also worth noting that it's possible to use the synced folder functionality with the type set to rsync: http://docs.vagrantup.com/v2/synced-folders/rsync.html:
Vagrant can use rsync as a mechanism to sync a folder to the guest
machine. This synced folder type is useful primarily in situations
where other synced folder mechanisms are not available, such as when
NFS or VirtualBox shared folders aren't available in the guest
machine.
The rsync synced folder does a one-time one-way sync from the machine
running to the machine being started by Vagrant.
On the first vagrant up, syncing happens before provisioners are executed. A sample configuration in the Vagrantfile would look like this:
config.vm.synced_folder "upload/", "/home/vagrant", type: "rsync"
This is a little old, but I'll throw another answer in here for entire directory structures, for Windows users using PowerShell.
First, create a PowerShell script that copies over an entire folder structure from a shared folder.
# Check if the directory you're copying already exists on the VM
$foldercheck = "path\to\folder\on\vm"
if (test-path $foldercheck)
{
    Write-Host "Folder 'path\to\folder\on\vm' already exists..."
}
else
{
    # Create the new directory
    New-Item -ItemType Directory -Force -Path "path\to\folder\on\vm"
    # Copy over contents to the directory
    Copy-Item "\\VBOXSRV\<share_name>\path\to\folder\to\copy\*" -Destination "path\to\folder\on\vm" -recurse
}
Then, configure your Vagrantfile to automount a shared folder, copy over the PowerShell scripts, and then run them in the order specified.
Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |vb|
    # Other settings
    # ...

    # Add shared folders
    vb.customize ["sharedfolder", "add", :id, "--name", "<share_name>", "--hostpath", "path/to/host/folder/location", "--automount"]
  end

  # =================================================================
  # PROVISIONING
  # =================================================================

  # Load up the files to the Guest environment for use by PowerShell
  config.vm.provision "file", source: "path/to/provisioning/files/powershellfile.ps1", destination: "path/to/temporary/location/powershellfile.ps1"
  # Rinse/repeat for additional provisioning files

  # Now provision files in the order they are specified
  config.vm.provision "shell", inline: "path/to/temporary/location/powershellfile.ps1"
  # Rinse/repeat to run PowerShell scripts
end

Is it possible to run a script on a virtual machine after Vagrant finishes provisioning all of them?

I am using Vagrant v1.5.1 to create a cluster of virtual machines (VMs). After all the VMs are provisioned, is it possible to run a single script on one of the machines? The script I want to run will setup passwordless SSH from one VM to all the other VMs.
For example, my nodes provisioned in Vagrant (CentOS 6.5) are as follows.
node1
node2
node3
node4
My Vagrantfile looks like the following.
(1..4).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.box = "centos65"
    ...omitted..
  end
end
After all this is done, I then need to run a script on node1 to enable passwordless SSH to node2, node3, and node4.
I know you can run scripts as each VM is being provisioned, but in this case, I want to run a script after all VMs are provisioned, since I need all VMs to be up and running to run this last script.
Is this possible in Vagrant?
I realized that I can also iterate backwards.
r = 4..1
(r.first).downto(r.last).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.box = "centos65"
    ...omitted..
    if i == 1
      node.vm.provision "shell" do |s|
        s.path = "/path/to/script.sh"
      end
    end
  end
end
This will work great, but, in reality, I also need to set up passwordless SSH from node2 to node1, node3, and node4. In the approach above, this could only ever work for node1, but not for node2 (since node1 will not yet be provisioned).
If there's a Vagrant plugin that allows passwordless SSH between all nodes in my cluster, that would be even better.
The question is a year old; anyway, I found it because I had the same problem, so here is the workaround I used to solve it. Somebody might find it useful.
We need "vagrant triggers" for this to work. The thing with vagrant triggers is that they fire for every machine you are creating, but we want to determine the moment ALL machines are up. We can do that by checking, on each up event, whether that event corresponds to the last machine being created:
Vagrant.configure("2") do |config|
(1..$machine_count).each do |i|
config.vm.define vm_name = "w%d" % i do |worker|
worker.vm.hostname = vm_name
workerIP = IP
worker.vm.network :private_network, ip: workerIP
worker.trigger.after :up do
if(i == $machine_count) then
info "last machine is up"
run_remote "bash /vagrant/YOUR_SCRIPT.sh"
end
end
end
end
end
This works for providers that do not support parallel execution in Vagrant (VirtualBox, VMware).
There is no hook in Vagrant for "run after all VMs are provisioned", so you would need to implement it yourself. A couple of options I can think of:
1: Run the SSH setup script after all VMs are running.
For example, if the script was named ssh_setup.sh and present in the shared folder:
$ for i in {1..4}; do vagrant ssh node$i -c 'sudo /vagrant/ssh_setup.sh'; done
2: Use the same SSH keys for all hosts and set them up during provisioning.
If all nodes share the same passphrase-less SSH key, you could copy the needed files like authorized_keys, id_rsa, etc. into ~/.ssh.
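A rough sketch of option 2 as a shell provisioner (the /vagrant/keys path and file names are assumptions; generate the passphrase-less key pair once on the host and share it via the synced folder):
# runs on every node; installs the shared key pair for the vagrant user
cp /vagrant/keys/id_rsa /home/vagrant/.ssh/id_rsa
cp /vagrant/keys/id_rsa.pub /home/vagrant/.ssh/id_rsa.pub
cat /vagrant/keys/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
chmod 600 /home/vagrant/.ssh/id_rsa
chown -R vagrant:vagrant /home/vagrant/.ssh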
Adding an updated answer.
The vagrant-triggers plugin was merged into Vagrant 2.1.0 in May 2018.
We can simply use the only_on option from the trigger class.
Let's say we have the following configuration:
servers = [
  { :hostname => "net1", :ip => "192.168.11.11" },
  { :hostname => "net2", :ip => "192.168.22.11" },
  { :hostname => "net3", :ip => "192.168.33.11" }
]
We can now easily execute the trigger after the last machine is up:
# Take the hostname of the last machine in the array
last_vm = servers[(servers.length) - 1][:hostname]

Vagrant.configure(2) do |config|
  servers.each do |machine|
    config.vm.define machine[:hostname] do |node|
      # ----- Common configuration ----- #
      node.vm.box = "debian/jessie64"
      node.vm.hostname = machine[:hostname]
      node.vm.network "private_network", ip: machine[:ip]

      # ----- Adding trigger - only after last VM is UP ------ #
      node.trigger.after :up do |trigger|
        trigger.only_on = last_vm # <---- Just use it here!
        trigger.info = "Running only after last machine is up!"
      end
    end
  end
end
And we can check the output and see that the trigger really fires only after "net3" is up:
==> net3: Setting hostname...
==> net3: Configuring and enabling network interfaces...
==> net3: Installing rsync to the VM...
==> net3: Rsyncing folder: /home/rotem/workspaces/playground/vagrant/learning-network-modes/testing/ => /vagrant
==> net3: Running action triggers after up ...
==> net3: Running trigger...
==> net3: Running only after last machine is up!
This worked pretty well for me: I used per-VM provision scripts, and in the last script I called the post-provision script via SSH on the first VM.
In the Vagrantfile:
require 'fileutils'

Vagrant.require_version ">= 1.6.0"

$max_nodes = 2
$vm_name = "vm_prefix"

#...<skipped some lines that are not relevant to the case >...

Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true
  config.ssh.insert_key = false

  # ubuntu 16.04
  config.vm.box = "ubuntu/xenial64"

  (1..$max_nodes).each do |i|
    config.vm.define vm_name = "%s-%02d" % [$vm_name, i] do |config|
      config.vm.hostname = vm_name
      config.vm.network "private_network", ip: "10.10.0.%02d" % [i+20], :name => 'vboxnet2'
      config.vm.network :forwarded_port, guest: 22, host: "1%02d22" % [i+20], id: "ssh"
      config.vm.synced_folder "./shared", "/host-shared"

      config.vm.provider :virtualbox do |vb|
        vb.name = vm_name
        vb.gui = false
        vb.memory = 4096
        vb.cpus = 2
        vb.customize ["modifyvm", :id, "--cpuexecutioncap", "100"]
        vb.linked_clone = true
      end

      # Important part:
      config.vm.provision "shell", path: "common_provision.sh"
      config.vm.provision "shell", path: "per_vm_provision#{i}.sh"
    end
  end
end
On disk:
(ensure that post_provision.sh has at least owner-execute permissions: rwxr--r--)
vm$ ls /vagrant/
...<skipped some lines that are not relevant to the case >...
config.sh
common_provision.sh
per_vm_provision1.sh
per_vm_provision2.sh
per_vm_provision3.sh
...
per_vm_provisionN.sh
post_provision.sh
Vagrantfile
...<skipped some lines that are not relevant to the case >...
In config.sh:
num_vm="2" # should equal the $max_nodes in Vagrantfile
name_vm="vm_prefix" # should equal the $vm_name in Vagrantfile
username="user1"
userpass="abc123"
...<skipped some lines that are not relevant to the case >...
In common_provision.sh:
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
sed -r -i 's/\%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/' /etc/sudoers
sed -r -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
service ssh reload
# add user ${username}
useradd --create-home --home-dir /home/${username} --shell /bin/bash ${username}
usermod -aG admin ${username}
usermod -aG sudo ${username}
/bin/bash -c "echo -e \"${userpass}\n${userpass}\" | passwd ${username}"
# provision additional ssh keys
# copy ssh keys from disk
cp /vagrant/ssh/* /home/vagrant/.ssh
cat /vagrant/ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
mkdir /home/${username}/.ssh
cp /vagrant/ssh/* /home/${username}/.ssh
cat /vagrant/ssh/id_rsa.pub >> /home/${username}/.ssh/authorized_keys
# not required, just for convenience
cat >> /etc/hosts <<EOF
10.10.0.21 ${name_vm}-01
10.10.0.22 ${name_vm}-02
10.10.0.23 ${name_vm}-03
...
10.10.0.2N ${name_vm}-0N
EOF
...<skipped some lines that are not relevant to the case >...
In per_vm_provision2.sh:
#!/bin/bash
# import variables from config
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
# check if this is the last provisioned vm
if [ "x${num_vm}" = "x2" ] ; then
ssh vagrant@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
fi
In per_vm_provisionN.sh:
#!/bin/bash
# import variables from config
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
# check if this is the last provisioned vm. N represents the highest number
if [ "x${num_vm}" = "xN" ] ; then
ssh vagrant@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
fi
I hope I didn't skip anything important, but I think the idea is clear in general.
Note: SSH keys for inter-VM access are provisioned by Vagrant by default. You can add your own SSH keys if needed using common_provision.sh.
