Provisioning Vagrant with Chef role error

I'm relatively new to Chef, and I'm trying to set up a Vagrant box using DigitalOcean as the provider and Chef as the provisioner. The issue seems to be with the roles, but as far as I can tell they match up. Thanks.
Here is my Vagrantfile:
Vagrant.configure('2') do |config|
  config.omnibus.chef_version = :latest

  config.vm.provider :digital_ocean do |provider, override|
    config.vm.hostname = 'majestic-chaos-ubuntu14.04x64'
    override.ssh.private_key_path = '~/.ssh/id_rsa'
    override.vm.box = 'digital_ocean'
    provider.token = 'XXXXXXXXXXXX'
    provider.image = 'Ubuntu 14.04 x64'
    provider.region = 'nyc2'
    provider.size = '512mb'
  end

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ['../../cookbooks']
    chef.roles_path = ['../../roles']
    chef.add_role("majestic-chaos-ubuntu14.04x64")
  end
end
and my roles file:
name "majestic-chaos-ubuntu-14.04x64"
ssl_verify_mode :verify_peer
run_list(
"recipe[apt]",
"recipe[open-ssl]",
"recipe[build-essential]",
"recipe[chef-ruby_build]",
"recipe[nodejs-cookbook]",
"recipe[rbenv::user]",
"recipe[rbenv::vagrant]",
"recipe[zsh]",
"recipe[vim]",
"recipe[imagemagick]",
)
override_attributes(
rbenv: {
user_installs: [{
user: 'vagrant',
rubies: ["2.1.2"],
global: "2.1.2",
gems: {
"2.1.2" => [
{ name: "bundler" }
]
}
}]
}
)
And this is the error I'm getting:
[2014-09-18T16:05:48-04:00] INFO: *** Chef 11.16.2 ***
[2014-09-18T16:05:48-04:00] INFO: Chef-client pid: 2934
[2014-09-18T16:05:51-04:00] INFO: Setting the run_list to ["role[majestic-chaos-ubuntu14.04x64]"] from CLI options
default: [2014-09-18T16:05:51-04:00] ERROR: Role majestic-chaos-ubuntu14.04x64 (included by 'top level') is in the runlist but does not exist. Skipping expand.
[2014-09-18T16:05:51-04:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2014-09-18T16:05:51-04:00] ERROR: The expanded run list includes nonexistent roles: majestic-chaos-ubuntu14.04x64
[2014-09-18T16:05:51-04:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

I had a similar problem launching a Vagrant image where the roles file was not being picked up. The solution I found was that, at least for the kitchen.yml setup, the roles must be defined in JSON format. (The roles file listed above is Ruby, not JSON.)
As documented here: https://docs.chef.io/config_yml_kitchen.html
roles_path: The relative path to the directory in which role data is located. This data must be defined as JSON.
Here are a few Ruby commands to convert your .rb role to JSON:
require 'chef'
require 'json'

role = Chef::Role.new
role.from_file("./roles/useful_api_role.rb")
puts JSON.pretty_generate(role)
As a side effect, the code above loads the .rb config, so you can also use it to validate that the role produces the attributes you expect.
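Building on that, here's a minimal sketch that writes the converted role out as a .json file next to the original, so chef-solo can pick it up (the role filename is an assumption based on the question):
require 'chef'
require 'json'

# Load the Ruby role DSL and serialize it to JSON.
# Assumption: the role file is named after the role used in the question.
role = Chef::Role.new
role.from_file('./roles/majestic-chaos-ubuntu14.04x64.rb')
File.write('./roles/majestic-chaos-ubuntu14.04x64.json',
           JSON.pretty_generate(role))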

Try using absolute paths for the roles and cookbooks:
chef.cookbooks_path = File.expand_path('../../cookbooks', __FILE__)
chef.roles_path = File.expand_path('../../roles', __FILE__)

Could not find class at /tmp/vagrant-puppet

I can't provision; I searched the other threads but none of them helped!
I'm using Vagrant with Puppet, both at the latest version. My project structure is:
prova
|
|__ Vagrantfile
|
|__ puppet
    |
    |__ manifests
    |   |__ ubonda.pp
    |
    |__ modules
        |
        |__ apache
            |
            |__ manifests
                |
                |__ apache.pp
My Vagrantfile is:
Vagrant.configure("2") do |config|
config.vm.define "ubonda" do |vm0|
vm0.vm.hostname = "ubonda"
vm0.vm.box = "hashicorp/precise64"
vm0.vm.provision "puppet" do |puppet|
puppet.manifests_path = 'puppet/manifests'
puppet.module_path = 'puppet/modules'
puppet.manifest_file = "ubonda.pp"
end
end
end
My ubonda.pp file is:
# default path
Exec {
  path => ["/usr/bin", "/bin", "/usr/sbin", "/sbin", "/usr/local/bin", "/usr/local/sbin"]
}

include apache
My apache.pp file is:
class apache {
  # install apache
  package { "apache2":
    ensure  => present,
    require => Exec["apt-get update"]
  }

  # ensure that mod_rewrite is loaded and modify the default configuration file
  file { "/etc/apache2/mods-enabled/rewrite.load":
    ensure  => link,
    target  => "/etc/apache2/mods-available/rewrite.load",
    require => Package["apache2"]
  }

  # create directory
  file { "/etc/apache2/sites-enabled":
    ensure  => directory,
    recurse => true,
    purge   => true,
    force   => true,
    before  => File["/etc/apache2/sites-enabled/vagrant_webroot"],
    require => Package["apache2"],...
  }
}
If I launch vagrant provision I obtain:
==> ubonda: Running provisioner: puppet...
==> ubonda: Running Puppet with ubonda.pp...
==> ubonda: warning: Could not retrieve fact fqdn
==> ubonda: Could not find class apache for ubonda at /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3/ubonda.pp:6 on node ubonda
I searched in the /tmp/vagrant-puppet folder of the ubonda VM: the manifests folder contains only ubonda.pp, while the modules folder contains apache, but the two are not connected. I tried copying apache.pp into the manifests folder, but nothing changed. What can I do?
The solution is very simple, but I leave the question here for posterity.
The file containing the Puppet class of a module must be named init.pp for the module to be recognized correctly, as shown below.
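For the layout above, that means renaming the module's class file; a sketch of the corrected structure (everything else stays the same):
prova
|
|__ Vagrantfile
|
|__ puppet
    |__ manifests
    |   |__ ubonda.pp
    |
    |__ modules
        |__ apache
            |__ manifests
                |__ init.pp   (was apache.pp; still contains "class apache { ... }")
With the file named init.pp, Puppet's module autoloader can resolve include apache from the module path.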

Is a loop in a Packer config.json file possible?

Is it possible to do loops in Packer?
I have a Vagrantfile with a loop and want to convert it to Packer. I already searched Google and Stack Overflow but didn't find a solution.
I have problems converting this loop into JSON format: (0..$NODE_COUNT).each do |i|. Here is the full Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

$BOX_IMAGE = "VAGRANT CLOUD IMAGE NAME"
$TBSP_VERSION = "2.0"
# number of additional tendermint validation nodes to be provisioned
$NODE_COUNT = 1

Vagrant.configure("2") do |config|
  (0..$NODE_COUNT).each do |i|
    config.vm.define "node#{i}" do |subconfig|
      subconfig.vm.box = $BOX_IMAGE
      subconfig.vm.hostname = "node#{i}"
      subconfig.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      subconfig.disksize.size = '10GB'
      subconfig.vm.provider "virtualbox" do |v|
        v.memory = 4096
        v.cpus = 4
        v.customize ["modifyvm", :id, "--name", "v2.0-abci-tm-d-multi-node#{i}-server"]
      end
      subconfig.trigger.after :up do |trigger|
        if i == $NODE_COUNT
          trigger.info = "last Node-#{$NODE_COUNT} is up"
          trigger.run = { path: "provision_tendermint.sh", args: "'#{$NODE_COUNT}'" }
        end
      end
      subconfig.vm.provision "shell", path: "provision.sh", args: "'#{i}' '#{$NODE_COUNT}'"
    end
  end
end
No, Packer's JSON config doesn't currently support any kind of looping, so you will have to approach this in a different manner. If you are just using Vagrant to provision VMs rather than to build an image, you should consider using Terraform instead.
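A common workaround is to do the looping outside Packer and generate the JSON from a small script. A minimal sketch in Ruby, under assumptions: the builder stanza is only a placeholder, and provision.sh and the node-count variable are borrowed from the Vagrantfile above:
require 'json'

NODE_COUNT = 1

# Build one shell provisioner per node, mirroring the Vagrant loop.
provisioners = (0..NODE_COUNT).map do |i|
  {
    "type"             => "shell",
    "script"           => "provision.sh",
    "environment_vars" => ["NODE_INDEX=#{i}", "NODE_COUNT=#{NODE_COUNT}"]
  }
end

config = {
  # Placeholder builder -- replace with your real builder configuration.
  "builders"     => [{ "type" => "virtualbox-iso" }],
  "provisioners" => provisioners
}

File.write("packer.json", JSON.pretty_generate(config))
Re-run the script whenever the node count changes, then point packer build at the generated packer.json.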

How to multi-provision in kitchen.yml?

I have a Vagrantfile in which I provision different VMs by looping through a JSON file, e.g.:
cluster_config.each do |cluster|
  cluster_name = cluster[0] # name of node
  nodes_config = (JSON.parse(File.read("test_data_bags/myapp/_default.json")))['clusters'][cluster_name]['nodes']
  nodes_config.each do |node|
    config.vm.define node_name do |nodeconfig|
      processes = node_values['processes']
      processes.each do |process|
        nodeconfig.vm.provision :chef_solo do |chef|
          chef.data_bags_path = 'test_data_bags'
          chef.run_list = run_list
          chef.roles_path = "roles"
          chef.json = {
            "myapp" => {
              "cluster_name" => cluster_name,
              "role" => node_role
            }
          }
        end
      end
    end
  end
end
I would like to do the same within Kitchen, i.e. take an array of attributes and, for each array item, run recipe xyz, so that I can write some tests using Test Kitchen. Is this possible?
Thanks
There are a few different workarounds to accomplish this, but they are all definitely workarounds. There is an issue that was opened to discuss support for multiple boxes in Test Kitchen, and you can go there to read more about why this probably won't be supported any time soon. TL;DR: it's not really a goal of the project.
Workarounds include:
chef-provisioning can bootstrap more servers from a single provisioned server/test suite
the kitchen-nodes provisioner can share data about each server with the other test suites in your setup
a custom Vagrantfile template for Test Kitchen
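Separately, if each attribute set can converge on its own box (rather than needing several interconnected VMs), plain Kitchen suites give you loop-like behavior: .kitchen.yml is rendered through ERB, so you can generate one suite per attribute set. A minimal sketch (the suite names and myapp attributes are assumptions based on the question):
---
driver:
  name: vagrant

provisioner:
  name: chef_solo
  data_bags_path: test_data_bags
  roles_path: roles

platforms:
  - name: ubuntu-14.04

suites:
<% %w(cluster-a cluster-b).each do |cluster| %>
  - name: <%= cluster %>
    run_list:
      - recipe[myapp]
    attributes:
      myapp:
        cluster_name: <%= cluster %>
<% end %>
Each suite then converges independently, so kitchen test runs the recipe once per attribute set.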

Chef run list is empty. Chef-Zero Vagrant Provisioner

I am trying to set up my Chef-Zero provisioner to execute the run list from a node's JSON file. This is what my Vagrantfile looks like:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu-14.04_Base_image"
  config.vm.hostname = "app_node"
  config.omnibus.chef_version = '12.0.3'

  config.vm.provision "chef_zero" do |chef|
    chef.cookbooks_path = ["../kitchen/cookbooks", "../kitchen/site-cookbooks"]
    chef.roles_path = "../kitchen/roles"
    chef.data_bags_path = "../kitchen/data_bags"
    chef.nodes_path = "../kitchen/nodes"
    chef.node_name = "app_node"
  end
end
When I run vagrant up, I get the following output from the chef-zero provisioner.
==> default: Running provisioner: chef_zero...
==> default: Detected Chef (latest) is already installed
Generating chef JSON and uploading...
==> default: Warning: Chef run list is empty. This may not be what you want.
==> default: Running chef-zero...
==> default: stdin: is not a tty
==> default: [2015-01-08T01:19:51+00:00] INFO: Started chef-zero at http://localhost:8889 with repository at /tmp/vagrant-chef/bec99a1a4e96279669bc5bb3140c0f2e, /tmp/vagrant-chef/2cbca02c0b5c49646d765ae1c9c0738f
==> default: One version per cookbook
==> default: data_bags at /tmp/vagrant-chef/72ac2a17a7c339d91d27a954fc49f8c3/data_bags
==> default: nodes at /tmp/vagrant-chef/83a9002a19a985bce4d26b8c9d050540/nodes
==> default: roles at /tmp/vagrant-chef/9cdb44a6dfefce6070b32ff28cb96535/roles
==> default: [2015-01-08T01:19:51+00:00] INFO: Forking chef instance to converge...
==> default: [2015-01-08T01:19:51+00:00] INFO: *** Chef 12.0.3 ***
==> default: [2015-01-08T01:19:51+00:00] INFO: Chef-client pid: 1731
==> default: [2015-01-08T01:19:57+00:00] INFO: Run List is []
==> default: [2015-01-08T01:19:57+00:00] INFO: Run List expands to []
==> default: [2015-01-08T01:19:57+00:00] INFO: Starting Chef Run for jira
==> default: [2015-01-08T01:19:57+00:00] INFO: Running start handlers
==> default: [2015-01-08T01:19:57+00:00] INFO: Start handlers complete.
==> default: [2015-01-08T01:19:59+00:00] INFO: Chef Run complete in 1.831177529 seconds
==> default: [2015-01-08T01:19:59+00:00] INFO: Skipping removal of unused files from the cache
==> default: [2015-01-08T01:19:59+00:00] INFO: Running report handlers
==> default: [2015-01-08T01:19:59+00:00] INFO: Report handlers complete
My question is: why was the run list empty? I specified the nodes directory and the node name in the chef_zero provisioner. The JSON file that specifies the run list for "app_node" exists in the nodes directory, and I can see that Chef copies all the cookbook/node/role files up to the server correctly.
I feel like I am missing something here. Any help would be much appreciated; if anything is unclear, let me know.
Vagrant's chef_zero provisioner is actually running chef-solo -c solo.rb -j dna.json; your configuration in nodes is not used other than to mount the directories.
You can work around this by doing something like the following (this assumes that nodes/<hostname>.json matches the name put in config.vm.hostname):
require 'json'
require 'pathname'

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.6"
  config.vm.hostname = "foobarhost"

  VAGRANT_JSON = JSON.parse(Pathname(__FILE__).dirname.join('../../nodes', "#{config.vm.hostname}.json").read)

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ["../../cookbooks", "../../site-cookbooks"]
    chef.roles_path = "../../roles"
    chef.data_bags_path = "../../data_bags"
    chef.verbose_logging = true
    chef.run_list = VAGRANT_JSON.delete('run_list') if VAGRANT_JSON['run_list']
    # Add JSON attributes
    chef.json = VAGRANT_JSON
  end
end
Note I just use chef_solo, as there is little point in running chef_zero when it really calls chef-solo anyway.
You aren't specifying a run list. The nodes directory you're pointing at is where node data is stored, not where the run list is read from, so Chef has no idea what you are trying to run.
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu-14.04_Base_image"
  config.vm.hostname = "app_node"
  config.omnibus.chef_version = '12.0.3'

  config.vm.provision "chef_zero" do |chef|
    chef.cookbooks_path = ["../kitchen/cookbooks", "../kitchen/site-cookbooks"]
    chef.roles_path = "../kitchen/roles"
    chef.data_bags_path = "../kitchen/data_bags"
    chef.nodes_path = "../kitchen/nodes"
    chef.node_name = "app_node"
    chef.run_list = []
    chef.json = {}
  end
end
This should allow you to set your run list and include any JSON attributes you want with it.
http://docs.vagrantup.com/v2/provisioning/chef_common.html
has more options you might want to consider if they meet your needs. :-)
Vagrant doesn't consult Chef about existing nodes. In order to use the Chef node file that you wrote, you'll need to tell vagrant about it. In your Vagrantfile, add this line beneath the line chef.node_name = "app_node":
chef.json = JSON.parse(File.read("../kitchen/nodes/app_node.json"))
Once you do that, Chef and Vagrant will both look for the node configuration data in the same place.
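For reference, a minimal sketch of what ../kitchen/nodes/app_node.json might contain for this pattern (the run list entry and attribute are assumptions; use your actual roles, recipes, and attributes, and note that chef-solo reads run_list from the -j attributes file):
{
  "run_list": [
    "role[app]"
  ],
  "myapp": {
    "environment": "dev"
  }
}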
I am not sure how to specify a local directory (still researching), but when you specify a run list via chef.run_list in the Vagrantfile, Vagrant writes it to /tmp/vagrant-chef/dna.json on the guest.
It also creates a solo.rb, which will include settings such as node_name if chef.node_name is set.

Vagrant to deploy Salt master

I'm trying to design a relatively complex system, using Vagrant and SaltStack to handle control and provisioning. The basic idea is to provision a machine, called master, which runs the Salt master that all my other machines will connect to.
In a previous attempt, I just had Vagrant set up a Salt minion which was instructed to install the Salt master and a DNS server package. But I'd like to simplify key transport by using Vagrant's facilities. So what I'd like to do is have Vagrant install both a Salt master and a minion, so that the minion can install the DNS server, and so that Vagrant can move my keys around for me.
Here is what master's configuration looks like, in the Vagrantfile:
config.vm.define :master do |master|
  master.vm.provider "virtualbox" do |vbox|
    vbox.cpus = 1
    vbox.memory = 384
  end
  master.vm.network "private_network", ip: "10.47.94.2"
  master.vm.network :forwarded_port, guest: 53, host: 53
  master.vm.hostname = "master"

  master.vm.provision :salt do |salt|
    salt.verbose = true
    salt.minion_config = "salt/master"
    salt.run_highstate = true
    salt.install_master = true
    salt.master_config = "salt/master"
    salt.master_key = "salt/keys/master.pem"
    salt.master_pub = "salt/keys/master.pub"
    salt.minion_key = "salt/keys/master.pem"
    salt.minion_pub = "salt/keys/master.pub"
    salt.seed_master = { master: "salt/keys/master.pub" }
    salt.run_overstate = true
  end
end
But I am getting the message:
Executing job with jid 20140403131604825601
-------------------------------------------
Execution is still running on master
Execution is still running on master
Execution is still running on master
Execution is still running on master
master:
Minion did not return
and when I look at master:/var/log/salt/minion, it's empty.
Is there an obvious error in my Vagrantfile configuration? Any hints?
I see that this has not been answered for a long time, so I'll post my personal minimal Vagrant Salt master setup here. This problem occurred to me once when I forgot to add the master: localhost entry to the salt-minion configuration on the master (by default the minion looks for a host called salt).
Please note that the minion on the master has its own key.
This runs on Vagrant 1.7.2 and installs a Salt master 2015.5.0.
# Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"

  # Deployment instance salt master
  config.vm.define :master do |master|
    master.vm.network :private_network, ip: "192.168.22.12"
    master.vm.hostname = 'master'
    master.vm.synced_folder "salt/roots/", "/srv/"

    master.vm.provision :salt do |config|
      config.install_master = true
      config.minion_config = "salt/minion"
      config.master_config = "salt/master"
      config.minion_key = "salt/keys/minion.pem"
      config.minion_pub = "salt/keys/minion.pub"
      config.master_key = "salt/keys/master.pem"
      config.master_pub = "salt/keys/master.pub"
      config.seed_master = {
        master: "salt/keys/minion.pub"
      }
      config.run_highstate = true
    end
  end
end
Master's configuration file:
# salt/master
# empty, use only defaults
Minion's configuration file on master:
# salt/minion
master: localhost
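To sanity-check the result after vagrant up, you can verify that the master accepted the seeded minion key and that the minion responds. These are standard Salt CLI commands, run inside the master VM (the minion id defaults to the hostname, master here):
vagrant ssh master
sudo salt-key -L          # the 'master' minion key should be listed as accepted
sudo salt '*' test.ping   # the minion should return True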
