Vagrant to deploy Salt master

I'm trying to design a relatively complex system, using Vagrant and Salt-Stack to handle control and provisioning. The basic idea is to provision a machine, called master, which runs the Salt-Stack master that all my other machines will connect to.
In a previous attempt to do this, I just had Vagrant set up a Salt minion which was instructed to install the Salt master and a DNS server package. But I'd like to simplify key transport by using Vagrant's facilities. So what I'd like to do is have Vagrant install a Salt master and a minion, so that the minion can install the DNS server, and so that Vagrant can move my keys around for me.
Here is what master's configuration looks like, in the Vagrantfile:
config.vm.define :master do |master|
  master.vm.provider "virtualbox" do |vbox|
    vbox.cpus = 1
    vbox.memory = 384
  end
  master.vm.network "private_network", ip: "10.47.94.2"
  master.vm.network :forwarded_port, guest: 53, host: 53
  master.vm.hostname = "master"
  master.vm.provision :salt do |salt|
    salt.verbose = true
    salt.minion_config = "salt/master"
    salt.run_highstate = true
    salt.install_master = true
    salt.master_config = "salt/master"
    salt.master_key = "salt/keys/master.pem"
    salt.master_pub = "salt/keys/master.pub"
    salt.minion_key = "salt/keys/master.pem"
    salt.minion_pub = "salt/keys/master.pub"
    salt.seed_master = {master: "salt/keys/master.pub"}
    salt.run_overstate = true
  end
end
But I am getting the message:
Executing job with jid 20140403131604825601
-------------------------------------------
Execution is still running on master
Execution is still running on master
Execution is still running on master
Execution is still running on master
master:
Minion did not return
and when I look at master:/var/log/salt/minion, it's empty.
Is there an obvious error in my Vagrantfile configuration? Any hints?

I see that this has not been answered for a long time, so I'll post my own minimal Vagrant Salt master file here. This problem occurred to me once when I forgot to add the master: localhost entry to the salt-minion configuration on the master (by default the minion looks for a host called salt).
Please note that the minion on the master has its own key.
This runs on Vagrant 1.7.2 and will install Salt master 2015.5.0.
# Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"

  # Deployment instance salt master
  config.vm.define :master do |master|
    master.vm.network :private_network, ip: "192.168.22.12"
    master.vm.hostname = 'master'
    master.vm.synced_folder "salt/roots/", "/srv/"

    master.vm.provision :salt do |config|
      config.install_master = true
      config.minion_config = "salt/minion"
      config.master_config = "salt/master"
      config.minion_key = "salt/keys/minion.pem"
      config.minion_pub = "salt/keys/minion.pub"
      config.master_key = "salt/keys/master.pem"
      config.master_pub = "salt/keys/master.pub"
      config.seed_master = {
        master: "salt/keys/minion.pub"
      }
      config.run_highstate = true
    end
  end
end
Master's configuration file:
# salt/master
# empty, use only defaults
Minion's configuration file on master:
# salt/minion
master: localhost

Related

Ruby Vagrant network configuration duplication trouble, some objects references issue

I will say right away: I'm experienced in Python, for example, but Ruby is totally new to me, so this question may not be related to Vagrant at all; I don't know, sorry.
I want to create two VMs on my host, and I created this Vagrantfile:
Vagrant.configure("2") do |config|
  # Image config
  config.vm.box = "ubuntu/bionic64"
  config.disksize.size = '40GB'

  # Nodes specific configs
  config.vm.define "node_1_1" do |node|
    node.vm.network "public_network", ip: "192.168.3.11", bridge: "enp4s0", netmask: "255.255.248.0"
    node.vm.hostname = "vm-ci-node-1-1"
  end
  config.vm.define "node_1_2" do |node|
    node.vm.network "public_network", ip: "192.168.3.12", bridge: "enp4s0", netmask: "255.255.248.0"
    node.vm.hostname = "vm-ci-node-1-2"
  end

  # Nodes generic configs
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end
end
It works fine.
Then I decided to remove the hardcoded parameters and generalize it, both for a future case with more than two VMs and so it works correctly on other machines with different bridge interface names. So I replaced the "Nodes specific configs" section with:
target_interface = nil
for if_addr in Socket.getifaddrs
  if if_addr.addr.ipv4? and if_addr.addr.ip_address.include? '192.168'
    target_interface = if_addr.name
  end
end

hostindex = 8
guestindices = [1, 2]

# Nodes specific configs
for guestindex in guestindices
  vm_code = 'node_' + hostindex.to_s() + '_' + guestindex.to_s()
  ip = '192.168.3.' + hostindex.to_s() + guestindex.to_s()
  hostname = 'vm-ci-node-' + hostindex.to_s() + '-' + guestindex.to_s()
  config.vm.define vm_code.dup do |node|
    node.vm.network "public_network", ip: ip.dup, bridge: target_interface, netmask: "255.255.248.0"
    node.vm.hostname = hostname.dup
  end
end
Then I ran vagrant up with no errors, but when I SSH into these two machines I get strange behaviour:
node_8_1 IP address is 192.168.3.82, hostname - vm-ci-node-8-2
node_8_2 IP address is 192.168.3.82, hostname - vm-ci-node-8-2
As you can see, they are the same. There is also another interface with the same IP, 10.0.2.15, on both machines; that is a problem too, but it existed in the previous version of the config as well.
I suspected some Ruby reference trouble, so I used dup (again, I'm totally new to Ruby, sorry), but it does not seem to help.
The VM names are different (node_8_1 and node_8_2), but the IP and hostname are the same.
Could anybody please point me where I'm wrong?
I think the for guestindex in guestindices part is the cause of your trouble.
Try using guestindices.each do |i| or shorter (1..2).each do |i| instead.
Vagrant mentions this in its documentation tips on looping over VM definitions, see https://www.vagrantup.com/docs/vagrantfile/tips.html#loop-over-vm-definitions:
The for i in ... construct in Ruby actually modifies the value of i for each iteration, rather than making a copy. Therefore, when you run this, every node will actually provision with the same [value].
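For example, the loop from the question could be rewritten with each; a minimal sketch using the same variable names as above, placed inside the same Vagrant.configure block, so that each block iteration gets its own local copies:
# Nodes specific configs
guestindices.each do |guestindex|
  vm_code = "node_#{hostindex}_#{guestindex}"
  ip = "192.168.3.#{hostindex}#{guestindex}"
  hostname = "vm-ci-node-#{hostindex}-#{guestindex}"
  config.vm.define vm_code do |node|
    node.vm.network "public_network", ip: ip, bridge: target_interface, netmask: "255.255.248.0"
    node.vm.hostname = hostname
  end
end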

Setting a default value in Vagrantfile if env. variable not set

I'm trying to do something like the following to set a default value if an environment variable is not set:
config.vm.box = ENV['VAGRANT_DEV_BOX'] || "ubuntu/xenial64"
Which causes the following error:
/opt/vagrant/embedded/lib/ruby/2.4.0/rubygems/version.rb:208:in `initialize': Malformed version number string debian-VAGRANTSLASH-jessie64 (ArgumentError)
The VAGRANT_DEV_BOX variable hasn't been set at this point. Confirmed like so:
server 🦄 echo $VAGRANT_DEV_BOX
server 🦄
Is it possible to do this in Ruby and/or a Vagrantfile?
This is thanks to double-p on #vagrant, freenode:
<double-p> you cannot just inline ruby.. put this above vagrant-configure, like:
port = ENV["HOST_PORT"] || 8080

Vagrant.configure("2") do |config|
  # Ubuntu 14.04 LTS
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 80, host: port
  config.vm.provision "shell", path: "vagrant/provision.sh"
end
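The same pattern can be applied to the box choice from the original question; a minimal sketch, where dev_box is just an illustrative variable name:
# Read the override before Vagrant.configure and fall back to a default box.
dev_box = ENV['VAGRANT_DEV_BOX'] || "ubuntu/xenial64"

Vagrant.configure("2") do |config|
  config.vm.box = dev_box
end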

vagrant sets up duplicate IP addresses; Ansible picks them up?

I've got the following Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

NUM_HOSTS = 3

def hostname(id)
  "node#{id}"
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"
  config.ssh.forward_agent = true
  config.ssh.insert_key = false

  config.vm.provider :virtualbox do |vb|
    vb.gui = false
    vb.memory = 512
    vb.cpus = 1
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "vagrant_playbook.yml"
    ansible.groups = {
      "vagrant" => (1..NUM_HOSTS).collect { |id| hostname(id) }
    }
  end

  NUM_HOSTS.times do |n|
    id = n + 1
    config.vm.define hostname(id), primary: id == 1 do |host|
      host.vm.network :private_network, ip: "10.0.33.1#{id}"
    end
  end
end
This assigns the private addresses on enp0s8. However, it assigns duplicate IP addresses on enp0s3: 10.0.2.15. Unfortunately, Ansible seems to be picking the duplicate address up in ansible_default_ipv4 instead of the unique address, so services running on these boxes don't work as intended. So, is there a way to:
stop Vagrant from assigning duplicate IPs? (I'm using the virtualbox provider, if that helps)
change which interface ansible_default_ipv4 uses?
some other solution which I haven't thought to search for?
This is a workaround you can use in the meantime:
{{ ansible_all_ipv4_addresses | last }}
It will work assuming it's guaranteed that the IPv4 addresses on the target box are always sorted in a predictable order.

Chef run list is empty. Chef-Zero Vagrant Provisioner

I am trying to set up my Chef-Zero provisioner to execute the run list from a node's JSON file. This is what my Vagrantfile looks like:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu-14.04_Base_image"
  config.vm.hostname = "app_node"
  config.omnibus.chef_version = '12.0.3'

  config.vm.provision "chef_zero" do |chef|
    chef.cookbooks_path = ["../kitchen/cookbooks", "../kitchen/site-cookbooks"]
    chef.roles_path = "../kitchen/roles"
    chef.data_bags_path = "../kitchen/data_bags"
    chef.nodes_path = "../kitchen/nodes"
    chef.node_name = "app_node"
  end
end
When I run vagrant up, I get the following output from the chef-zero provisioner.
==> default: Running provisioner: chef_zero...
==> default: Detected Chef (latest) is already installed
Generating chef JSON and uploading...
==> default: Warning: Chef run list is empty. This may not be what you want.
==> default: Running chef-zero...
==> default: stdin: is not a tty
==> default: [2015-01-08T01:19:51+00:00] INFO: Started chef-zero at http://localhost:8889 with repository at /tmp/vagrant-chef/bec99a1a4e96279669bc5bb3140c0f2e, /tmp/vagrant-chef/2cbca02c0b5c49646d765ae1c9c0738f
==> default: One version per cookbook
==> default: data_bags at /tmp/vagrant-chef/72ac2a17a7c339d91d27a954fc49f8c3/data_bags
==> default: nodes at /tmp/vagrant-chef/83a9002a19a985bce4d26b8c9d050540/nodes
==> default: roles at /tmp/vagrant-chef/9cdb44a6dfefce6070b32ff28cb96535/roles
==> default: [2015-01-08T01:19:51+00:00] INFO: Forking chef instance to converge...
==> default: [2015-01-08T01:19:51+00:00] INFO: *** Chef 12.0.3 ***
==> default: [2015-01-08T01:19:51+00:00] INFO: Chef-client pid: 1731
==> default: [2015-01-08T01:19:57+00:00] INFO: Run List is []
==> default: [2015-01-08T01:19:57+00:00] INFO: Run List expands to []
==> default: [2015-01-08T01:19:57+00:00] INFO: Starting Chef Run for jira
==> default: [2015-01-08T01:19:57+00:00] INFO: Running start handlers
==> default: [2015-01-08T01:19:57+00:00] INFO: Start handlers complete.
==> default: [2015-01-08T01:19:59+00:00] INFO: Chef Run complete in 1.831177529 seconds
==> default: [2015-01-08T01:19:59+00:00] INFO: Skipping removal of unused files from the cache
==> default: [2015-01-08T01:19:59+00:00] INFO: Running report handlers
==> default: [2015-01-08T01:19:59+00:00] INFO: Report handlers complete
My question is: why was the run list empty? I specified the nodes directory and the node name in the chef_zero provisioner. The JSON file that specifies the run list for "app_node" exists in the nodes directory, and I can see that Chef is copying all the cookbook/node/role files up to the server correctly.
I feel like I am missing something here. Any help would be much appreciated. If anything is unclear let me know.
Vagrant's chef_zero provisioner is actually using chef-solo -c solo.rb -j dna.json. Your configuration in nodes is not used other than mounting the directories.
You can work around this by doing something like the following (it assumes that nodes/hostname.json has the same name as the one set in config.vm.hostname):
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.6"
  config.vm.hostname = "foobarhost"

  VAGRANT_JSON = JSON.parse(Pathname(__FILE__).dirname.join('../../nodes', "#{config.vm.hostname}.json").read)

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ["../../cookbooks", "../../site-cookbooks"]
    chef.roles_path = "../../roles"
    chef.data_bags_path = "../../data_bags"
    chef.verbose_logging = true
    chef.run_list = VAGRANT_JSON.delete('run_list') if VAGRANT_JSON['run_list']
    # Add JSON Attributes
    chef.json = VAGRANT_JSON
  end
end
Note that I just use chef_solo; what's the point of running chef_zero when it really calls chef_solo?
You aren't specifying a run list. The directory you're pointing at is where node data is stored, not where those JSON files are read from, so Chef has no idea what you are trying to do right now.
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu-14.04_Base_image"
  config.vm.hostname = "app_node"
  config.omnibus.chef_version = '12.0.3'

  config.vm.provision "chef_zero" do |chef|
    chef.cookbooks_path = ["../kitchen/cookbooks", "../kitchen/site-cookbooks"]
    chef.roles_path = "../kitchen/roles"
    chef.data_bags_path = "../kitchen/data_bags"
    chef.nodes_path = "../kitchen/nodes"
    chef.node_name = "app_node"
    chef.run_list = []
    chef.json = {}
  end
end
This should allow you to set your run list and include any JSON attributes you wanted to include with it.
http://docs.vagrantup.com/v2/provisioning/chef_common.html has more options you might want to consider if they meet your needs. :-)
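For illustration, the empty run list and attribute hash above could then be filled in along these lines; the recipe, role, and attribute names here are only placeholders, not taken from the question:
chef.run_list = [
  "recipe[my_app]",
  "role[app_server]"
]
chef.json = {
  "my_app" => {
    "listen_port" => 8080
  }
}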
Vagrant doesn't consult Chef about existing nodes. In order to use the Chef node file that you wrote, you'll need to tell Vagrant about it. In your Vagrantfile, add this line beneath the line chef.node_name = "app_node":
chef.json = JSON.parse(File.read("../kitchen/nodes/app_node.json"))
Once you do that, Chef and Vagrant will both look for the node configuration data in the same place.
I am not sure how to specify a local directory (still researching), but when you specify the run list in the Vagrantfile via chef.run_list, Vagrant will write it into /tmp/vagrant-chef/dna.json.
It also creates a solo.rb, which will include things like node_name if chef.node_name is set.

Unable to assign static ip using Vagrant

I am using Vagrant and the vagrant-vsphere plugin to deploy a VM on vCenter. Vagrant deploys the VM, but it fails to assign the IP address for the VM. Could anybody let me know whether I need to make specific changes in order for the IP address to be assigned?
Here are the contents of my Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = 'dummy'
  config.vm.box_url = './example_box/dummy.box'
  config.vm.network :public_network, ip: "xxx.xxx.xxx.xxx"

  config.vm.provider :vsphere do |vsphere|
    vsphere.host = '<vSphere_Host>'
    vsphere.data_center_name = '<Data_Center_Name>'
    vsphere.data_store_name = '<Data_Store_Name>'
    vsphere.template_name = '<Template_Name>'
    vsphere.name = '<New_Name_Of_The_VM>'
    vsphere.user = '<vShpere_User_Name>'
    vsphere.password = '<The_Password>'
    vsphere.insecure = true
    vsphere.compute_resource_name = '<Compute_Resource_IP>'
  end
end
Note: the template that I am uploading is a custom template (a Linux box with some applications running on top of it).
You can set a static IP with a private network.
Try this network configuration:
config.vm.network "private_network", ip: "192.168.50.4"
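In the Vagrantfile from the question, that would mean replacing the public_network line; a minimal sketch, with the vSphere provider block left exactly as it was:
Vagrant.configure("2") do |config|
  config.vm.box = 'dummy'
  config.vm.box_url = './example_box/dummy.box'
  # Static IP on a private network instead of the public_network line
  config.vm.network "private_network", ip: "192.168.50.4"

  # ... vsphere provider block unchanged ...
end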
