Vagrant multiple machines override

I have a question about HashiCorp Vagrant. I would like to create a multi-machine environment, and I was wondering whether I can share common configuration between machines and then override it with machine-specific settings.
Imagine I have the following Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "webserver" do |webserver|
    webserver.vm.box = "ubuntu/focal64"
    webserver.vm.network :private_network, ip: "10.0.0.10"
    webserver.vm.provision "shell", inline: "apt-get update"
    webserver.vm.provision "shell", inline: "script specific to web server"
  end

  config.vm.define "db" do |db|
    db.vm.box = "ubuntu/focal64"
    db.vm.network :private_network, ip: "10.0.0.11"
    db.vm.provision "shell", inline: "apt-get update"
    db.vm.provision "shell", inline: "script specific to db"
  end
end
As you can see, the two definitions share quite a bit. I would like to know whether it is possible to make this shorter and more flexible. Maybe there is a way to override?
Also, what if I have a third machine that runs Red Hat instead of Ubuntu? Would that be possible?

You can solve this by creating a list of the VMs you want to create. This is a very readable way to configure multiple machines.
# List of machine-specific details
nodes = {
  "webserver" => ["10.0.0.10", "script specific to web server"],
  "db"        => ["10.0.0.11", "script specific to db"],
}

Vagrant.configure("2") do |config|
  # Iterate over the machines, destructuring each entry's value
  nodes.each do |name, (ip_address, script_name)|
    config.vm.define name do |server|
      server.vm.box = "ubuntu/focal64"
      server.vm.network :private_network, ip: ip_address
      server.vm.provision "shell", inline: "apt-get update"
      server.vm.provision "shell", inline: script_name
    end
  end
end
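As for the third, Red Hat machine: the hash can carry a per-machine box as well, with a shared default for everything else. A minimal sketch building on the answer above (the generic/rhel8 box name, the "cache" machine, and the hash layout are illustrative assumptions, not part of the original question):
# Per-machine details: IP, provisioning script, and an optional box override
nodes = {
  "webserver" => { ip: "10.0.0.10", script: "script specific to web server" },
  "db"        => { ip: "10.0.0.11", script: "script specific to db" },
  # hypothetical third machine on a Red Hat family box
  "cache"     => { ip: "10.0.0.12", script: "script specific to cache", box: "generic/rhel8" },
}

Vagrant.configure("2") do |config|
  nodes.each do |name, opts|
    config.vm.define name do |server|
      # fall back to the common default unless this entry overrides the box
      server.vm.box = opts.fetch(:box, "ubuntu/focal64")
      server.vm.network :private_network, ip: opts[:ip]
      # note: a RHEL guest would need dnf instead of apt-get for this common step
      server.vm.provision "shell", inline: "apt-get update"
      server.vm.provision "shell", inline: opts[:script]
    end
  end
end
Hash#fetch with a default keeps the common case short while letting any single entry override it.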

Related

Strange behaviour of Vagrant Up command

Whenever I run vagrant up, I get the following error.
Here is my Vagrantfile:
default_box = "generic/opensuse42"

Vagrant.configure("2") do |config|
  config.vm.define "master" do |master|
    master.vm.box = default_box
    master.vm.box_version = "v3.4.4"
    master.vm.hostname = "master"
    master.vm.network 'private_network', ip: "192.168.0.200", virtualbox__intnet: true
    master.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: true
    master.vm.network "forwarded_port", guest: 22, host: 2000 # Master Node SSH
    master.vm.network "forwarded_port", guest: 6443, host: 6443 # API Access
    for p in 30000..30100 # expose NodePort IPs
      master.vm.network "forwarded_port", guest: p, host: p, protocol: "tcp"
    end
    master.vm.provider "virtualbox" do |v|
      v.memory = "3072"
      v.name = "master"
    end
    master.vm.provision "shell", inline: <<-SHELL
      sudo zypper refresh
      sudo zypper --non-interactive install bzip2
      sudo zypper --non-interactive install etcd
      sudo zypper --non-interactive install apparmor-parser
      curl -sfL https://get.k3s.io | sh -
    SHELL
  end
end
Any hints will be much appreciated.
The error message gives you the information you need:
in 'parse': Illformed requirement ["v3.4.4"] (Gem::Requirement::BadRequirementError)
It is telling you that there is a problem with the box version requested in your Vagrantfile:
default_box = "generic/opensuse42"
Vagrant.configure("2") do |config|
  config.vm.define "master" do |master|
    master.vm.box = default_box
    master.vm.box_version = "v3.4.4"
    master.vm.hostname = "master"
The problem is the 'v' prefix in the version string, which cannot be parsed as a value for the config.vm.box_version attribute. It should be:
master.vm.box_version = "3.4.4"
You can always double-check a box's configuration on the Vagrant website; in your case, the page for generic/opensuse42 v3.4.4 shows that it should be:
Vagrant.configure("2") do |config|
  config.vm.box = "generic/opensuse42"
  config.vm.box_version = "3.4.4"
end
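If in doubt, you can also check from the command line which box versions you already have locally, or fetch a version explicitly (again, no "v" prefix):
vagrant box list
vagrant box add generic/opensuse42 --box-version 3.4.4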
I found that generic/opensuse42 had expired, so changing the box to opensuse/Leap-15.2.x86_64 solved my issue.

Vagrant + Ansible: access another machine's IP address

I have a Vagrantfile that creates 3 VMs and an Ansible playbook that manages these 3 machines (the inventory file is generated by Vagrant). I need the IP address of VM no. 1 in order to put it in the configuration files of the two other machines, but the hostvars['vm1'] variable does not give me vm1's IP address.
Here is my Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.network "private_network", type: "dhcp"
  config.vm.provider "virtualbox" do |v|
    v.memory = 512
  end
  config.vm.synced_folder ".", "/vagrant"
  config.vm.box_check_update = false

  config.vm.define "vm1" do |vm1|
    vm1.vm.box = "hashicorp/bionic64"
  end
  config.vm.define "vm2" do |vm2|
    vm2.vm.box = "hashicorp/bionic64"
  end
  config.vm.define "vm3" do |vm3|
    vm3.vm.box = "hashicorp/bionic64"
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    #ansible.ask_become_pass = true
    ansible.groups = {
      "node_master" => ["vm1"],
      "node_replicas" => ["vm2", "vm3"],
      "node:children" => ["node_master", "node_replicas"]
    }
  end
end
How can I solve this problem?
As configured, your Ansible provisioner will run three times: once independently for each machine, with the limit set to the current machine name.
In this situation, the facts of the other machines are never gathered in your playbook, and hostvars['vm1'] will be empty (unless you are currently running on vm1).
What you can try is to declare the provisioner on a single machine only (the best bet being vm3, the last one) and change the default per-machine limit to all:
config.vm.define "vm3" do |vm3|
  vm3.vm.box = "hashicorp/bionic64"
  vm3.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.limit = "all"
    #ansible.ask_become_pass = true
    ansible.groups = {
      "node_master" => ["vm1"],
      "node_replicas" => ["vm2", "vm3"],
      "node:children" => ["node_master", "node_replicas"]
    }
  end
end
This way, your playbook runs once against all your VMs in parallel, and you should be able to access facts from all the hosts your playbook targets.
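Once the facts are gathered in a single run, any later play can read vm1's address from hostvars. A minimal sketch of playbook.yml (the eth1 interface name for the VirtualBox private network is an assumption; the empty first play exists only to gather facts on every host):
---
# gather facts on all hosts first so vm1's facts land in hostvars
- hosts: all
  gather_facts: true
  tasks: []

- hosts: node_replicas
  tasks:
    # on VirtualBox the private_network NIC is typically eth1 (assumption)
    - name: show vm1's private-network address
      debug:
        msg: "{{ hostvars['vm1']['ansible_eth1']['ipv4']['address'] }}"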
I have no clue how to solve your specific problem.
However, I use Vagrant and Ansible separately: Vagrant only builds the hosts with vagrant up, and Ansible manages the configuration on those hosts.
I use this Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 1
  end
  config.ssh.insert_key = false

  config.vm.define "multi-1" do |localMulti1|
    localMulti1.vm.box = "ubuntu/xenial64"
    localMulti1.vm.hostname = "multi-1"
    localMulti1.vm.network :forwarded_port, guest: 22, host: 30001, id: "ssh"
    localMulti1.vm.network "private_network", ip: "10.0.0.111"
  end

  config.vm.define "multi-2" do |localMulti2|
    localMulti2.vm.box = "ubuntu/xenial64"
    localMulti2.vm.hostname = "multi-2"
    localMulti2.vm.network :forwarded_port, guest: 22, host: 30002, id: "ssh"
    localMulti2.vm.network "private_network", ip: "10.0.0.112"
  end

  config.vm.define "multi-3" do |localMulti3|
    localMulti3.vm.box = "ubuntu/xenial64"
    localMulti3.vm.hostname = "multi-3"
    localMulti3.vm.network :forwarded_port, guest: 22, host: 30003, id: "ssh"
    localMulti3.vm.network "private_network", ip: "10.0.0.113"
    localMulti3.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
      ansible.inventory_path = "inventory"
      ansible.limit = "local_multi"
    end
  end
end
I place this in my inventory file:
[local_multi]
multi-1 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30001 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
multi-2 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30002 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
multi-3 ansible_ssh_user=vagrant ansible_host=127.0.0.1 ansible_ssh_port=30003 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
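Since the inventory points at the forwarded SSH ports on localhost, the playbook can also be re-run by hand after vagrant up, without going through the provisioner:
ansible-playbook -i inventory playbook.yml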
Your playbook.yml:
---
- hosts: local_multi
  become: true
  tasks:
    - name: check who is master
      debug:
        msg: "{{ node_master }}"
      when: node_master is defined
Now you can place all your Ansible vars in the group or host vars:
./inventory
./playbook.yml
./Vagrantfile
./group_vars/all.yml
./host_vars/multi-1.yml
./host_vars/multi-2.yml
./host_vars/multi-3.yml
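For example, a hypothetical ./host_vars/multi-1.yml could define the node_master variable that the when: condition in the playbook checks:
---
# host_vars/multi-1.yml -- only multi-1 receives this variable
node_master: true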

How to run an ansible playbook on a specific vagrant host

I have a Vagrantfile which creates 3 servers, and two Ansible playbooks. playbook1 should be executed on every server first. playbook2 should only be executed on server1, not on server2 or server3.
How can I manage this in my Vagrantfile?
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "server1" do |server1|
    //
  end
  config.vm.define "server2" do |server2|
    //
  end
  config.vm.define "server3" do |server3|
    //
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook1.yml"
  end
end
The above executes playbook1 on all servers. How can I add config for playbook2.yml so that it is executed only on server1, and only AFTER playbook1?
Given your example Vagrantfile, and taking playbook2.yml to execute only on server2 after playbook1.yml on server1, we would arrive at the following solution:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "server1" do |server1|
    //
    # restrict the scope of the ansible provisioner to server1 by declaring it
    # on the machine-specific object instead of the shared config
    server1.vm.provision :ansible do |ansible|
      ansible.playbook = 'playbook1.yml'
    end
  end

  config.vm.define "server2" do |server2|
    //
    # likewise for server2, which provisions after server1 because the Ruby DSL
    # is evaluated imperatively, top to bottom
    server2.vm.provision :ansible do |ansible|
      ansible.playbook = 'playbook2.yml'
    end
  end

  config.vm.define "server3" do |server3|
    //
  end
end
It is also worth noting that if you want to be precise about ordering, you can run vagrant up server1 and then vagrant up server2 instead of an all-in-one vagrant up.
Basically, within Vagrant.configure there is a config.vm scope affecting all VMs. You can restrict it to specific VMs by instantiating machine-specific objects with config.vm.define, as you do above; those objects have the same members/attributes as the base config.
Note you can also do something like this if you want:
Vagrant.configure('2') do |config|
  ...
  (1..3).each do |i|
    config.vm.define "server#{i}" do |server|
      //
      server.vm.provision :ansible do |ansible|
        ansible.playbook = "playbook#{i}.yml"
      end
    end
  end
end
for a per-server playbook. Whether that works depends on what exactly is inside your // for each VM, and on whether you want a third playbook for the third VM.
The example below will execute playbook1.yml on every server first and then execute playbook2.yml only on server1 (it assumes that playbook1.yml can be parallelized):
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  N = 3
  (1..N).each do |server_id|
    config.vm.define "server#{server_id}" do |server|
      server.vm.box = "ubuntu/bionic64"
      server.vm.hostname = "server#{server_id}"
      server.vm.network "private_network", ip: "172.28.128.25#{server_id}"
      server.vm.provision :shell, inline: "sudo apt install -y python"

      # only execute the ansible provisioners once,
      # when all the machines are up and ready
      if server_id == N
        server.vm.provision :ansible do |ansible|
          # disable the default limit to connect to all the machines
          # and execute playbook1 on all hosts
          ansible.limit = "all"
          ansible.playbook = "playbook1.yml"
          ansible.compatibility_mode = "2.0"
        end
        server.vm.provision :ansible do |ansible|
          # limit the connection to server1 and execute playbook2
          ansible.limit = "server1"
          ansible.playbook = "playbook2.yml"
          ansible.compatibility_mode = "2.0"
        end
      end
    end
  end
end
This example builds on the example provided in the Vagrant Tips & Tricks documentation. The ansible-playbook runs are parallelized, and the scope of each run is limited via the ansible.limit configuration option (i.e. vagrant up brings the virtual machines up first, then executes the playbooks one after the other against all hosts or a subset of hosts).
Note: the ubuntu/bionic64 (virtualbox, 20190131.0.0) box has /usr/bin/python3 installed. For the sake of a copy-and-paste example using a dynamic inventory, I deliberately kept server.vm.provision :shell, inline: "sudo apt install -y python" in the example so that ansible-playbook (2.7.6) doesn't bomb out with "/bin/sh: 1: /usr/bin/python: not found\r\n" errors (ref. How do I handle python not having a Python interpreter at /usr/bin/python on a remote machine?). Example playbook1.yml and playbook2.yml (both present in the same directory as the Vagrantfile):
---
- hosts: all
  tasks:
    - debug:
        msg: 'executing on {{ inventory_hostname }}'

Exported vagrant machine web directory empty, why is that?

I am trying to export a machine I created with Vagrant, either to an OVA file for VirtualBox or to a package to host on Vagrant Cloud. The machine exports successfully, but when I start it up, the directories which should contain my files are empty. This is probably a Vagrant noob question.
Why is "/usr/local/fieldpapers/" empty when I start up my exported VM?
What can I do to keep files present in that directory after export?
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder "./", "/usr/local/fieldpapers/", id: "vagrant-root",
    owner: "vagrant",
    group: "www-data",
    mount_options: ["dmode=775,fmode=664"]

  config.vm.provider :virtualbox do |vb, override|
    vb.memory = 1024
    vb.cpus = 1
    vb.name = "Field Papers"
    override.vm.box = "precise64"
    override.vm.box_url = "http://files.vagrantup.com/precise64.box"
    override.vm.network :private_network, ip: "192.168.33.10"
    override.vm.provision :ansible, :playbook => "provisioning/playbook.yml"
  end
end
First, check that your Vagrant version is up to date.
Second, delete the lines below; Vagrant automatically mounts your local folder into the guest at /vagrant. Keep in mind that a synced folder is mounted from the host at runtime and is not part of the VM's disk, which is why an exported image does not contain those files.
config.vm.synced_folder "./", "/usr/local/fieldpapers/", id: "vagrant-root",
  owner: "vagrant",
  group: "www-data",
  mount_options: ["dmode=775,fmode=664"]
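If the exported image must ship the files at /usr/local/fieldpapers, one option is to copy them from the synced folder onto the guest's own disk during provisioning, since only the guest disk survives an export. A minimal sketch, assuming the default /vagrant mount:
config.vm.provision "shell", inline: <<-SHELL
  # copy from the host mount onto the guest disk so the files survive export
  mkdir -p /usr/local/fieldpapers
  cp -a /vagrant/. /usr/local/fieldpapers/
  chown -R vagrant:www-data /usr/local/fieldpapers
SHELL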

Vagrant + Chef + Hostname

I am pretty new to Vagrant and Chef. I am trying to change the hostname of my Vagrant image using Chef, but for some reason I do not see a changed hostname when I do vagrant ssh.
What am I missing?
Here is my Cheffile:
site "http://community.opscode.com/api/v1"
cookbook 'apt'
cookbook 'build-essential'
cookbook 'hostname'
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Use Ubuntu 14.04 Trusty Tahr 64-bit as our operating system
  config.vm.box = "ubuntu/trusty64"

  # Configure the virtual machine to use 2GB of RAM
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end

  # Forward the Rails server default port to the host
  # config.vm.network :forwarded_port, guest: 3000, host: 3000

  # Use Chef Solo to provision our virtual machine
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
    chef.add_recipe "apt"
    chef.add_recipe "hostname"

    # Install Ruby 2.1.2 and Bundler
    # Set an empty root password for MySQL to make things simple
    chef.json = {
      hostname: {
        set_fqdn: 'Olympus'
      }
    }
  end
end
As described in the documentation, you should set the attribute at the node level, not under node['hostname'].
In your case it will be:
chef.json = {
  set_fqdn: 'Olympus'
}
You can find more information in the hostname cookbook documentation.
