I am new to Vagrant and Puppet and am having trouble getting environment and environment_path to work.
My working directory looks like this:
=> C:/vagrantdemo/Vagrantfile
=> C:/vagrantdemo/environments/testenv/manifests/default.pp
and I specify the environment as follows:
config.vm.provision "puppet" do |puppet|
  puppet.environment = "testenv"
  puppet.environment_path = "../vagrantdemo/environments"
  puppet.manifests_path = "manifests"
  puppet.manifest_file = "default.pp"
end
Does anyone have tips on how to make the environment work without getting the error: puppet provisioner: The manifests path specified for Puppet does not exist: C:/vagrantdemo/manifests
When I changed environment_path to "environments" and removed manifests_path and manifest_file, things worked.
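In other words, the provisioner block that ended up working for me looks roughly like this:
config.vm.provision "puppet" do |puppet|
  puppet.environment = "testenv"
  # relative to the project root, where the Vagrantfile lives
  puppet.environment_path = "environments"
  # no manifests_path / manifest_file needed; Puppet picks up the environment's
  # own manifests directory (environments/testenv/manifests/default.pp here)
end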
Related
I have had a number of issues recently with vagrant-berkshelf not reliably syncing Chef cookbooks on an existing machine. When doing research on workarounds, I keep seeing something like:
vagrant-berkshelf is deprecated, use Test Kitchen instead.
My use case is that I have Vagrantfiles, used to build VMs and DigitalOcean droplets, that are hand-written and only use Chef to provision the VMs. I am most definitely approaching Chef as a user, not an author or tester of cookbooks.
So, I am in a case of Vagrant -> Chef, not Chef -> Vagrant.
When looking at kitchen-vagrant, I see that:
The kitchen-vagrant driver for Kitchen generates a single Vagrantfile for each instance of Kitchen in a sandboxed directory.
My question is: if my workflow relies on hand-written, complex, Vagrantfiles, can I continue to use Chef as a provisioner without having to rely on vagrant-berkshelf?
Some of the possible alternatives I see are:
mangle the Test Kitchen configuration to work with my existing Vagrantfile. I fear that this is not the intent of the tool and will not end well.
use the chef.cookbooks_path attribute in Vagrant and let it take the place of vagrant-berkshelf (see the sketch after the Vagrantfile below).
switch out provisioners and use, say, Vagrant -> Ansible.
The Vagrantfile below is somewhat simplified, but the gist is that the Vagrantfile is in charge and Chef is just used to provision.
# -*- mode: ruby -*-
# vi: set ft=ruby :
#...grab some variables from my host environment...
DJANGO_SECRET_KEY = ENV['BUILD_DJANGO_SECRET_KEY']
Vagrant.configure('2') do |config|
  config.vm.define "myserver" do |config|
    config.vm.provider :digital_ocean do |provider, override|
      override.ssh.private_key_path = digoconf.private_key_path
      override.vm.box_url = "https://github.com/devopsgroup-io/vagrant-digitalocean/raw/master/box/digital_ocean.box"
      provider.token = digoconf.TOKEN
      ...
    end
    # had chef_client before, that worked too.
    config.vm.provision "chef_zero" do |chef|
      chef.log_level = "info"
      # I haven't tested these out
      # chef.cookbooks_path = ["../community/cookbooks","../.berkshelf/cookbooks"]
      env_for_chef = " DJANGO_SECRET_KEY='#{DJANGO_SECRET_KEY}'"
      chef.binary_env = env_for_chef
      chef.environment = "digitalocean"
      chef.add_recipe "base::install"
    end
  end
end
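If I went the chef.cookbooks_path route, I assume I would vendor the cookbooks with Berkshelf up front (berks vendor) and point the provisioner at the result. A rough, untested sketch (the vendored-cookbooks directory name is only a placeholder):
config.vm.provision "chef_zero" do |chef|
  # directory populated on the host beforehand with: berks vendor vendored-cookbooks
  chef.cookbooks_path = ["vendored-cookbooks"]
  chef.environment = "digitalocean"
  chef.add_recipe "base::install"
end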
It isn't deprecated per se, but it no longer has a maintainer and its use is strongly recommended against. There is no replacement for the workflow you describe. Sorry. If you are interested in taking over as maintainer, I can put you in contact with the team.
I am having an absolute nightmare getting Puppet to load a group of modules that will be shared between multiple environments.
The modules in puppet/environments/development/modules get loaded fine BUT none of the dependencies in puppet/modules can be found.
The folder structure for my project can be seen in the repository, which is up on Bitbucket:
https://bitbucket.org/andrew_hancox/vagrantmoodle
What I usually do to manage module dependencies is to have a shell script that installs the modules directly; that way it downloads the necessary dependencies and puts them in the right place.
In my Vagrantfile I will have:
node_config.vm.provision "shell", path: "puppet/script/install-puppet-modules-app.sh"
node_config.vm.provision "shell", path: "puppet/script/install-puppet-modules-app.sh"
node_config.vm.provision :puppet do |puppet|
  puppet.environment = "production"
  puppet.environment_path = "puppet/environments"
  puppet.manifests_path = "puppet/environments/production/manifests"
  puppet.manifest_file = "base-app.pp"
  #puppet.options = "--verbose --trace"
end
The shell script is something like:
#!/bin/bash
puppet module install puppet-nginx --version 0.4.0
Here you would also install your apache, mysql modules, etc.
The environment.conf file sets the default location for the installed modules:
# environment configuration used by Puppet4
modulepath = /etc/puppetlabs/code/environments/production/modules:$basemodulepath
I got the whole thing working properly thanks to #michael-mulqueen
The way he fixed it was by setting the module path in the Vagrantfile:
puppet.module_path = ["puppet/modules", "puppet/environments/development/modules"]
You can see this in the repo referenced in the question.
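For context, the provisioner block ends up looking roughly like this (the exact version is in the repo; the environment name and paths here are assumed from the layout described above):
config.vm.provision :puppet do |puppet|
  puppet.environment = "development"
  puppet.environment_path = "puppet/environments"
  # shared modules plus the environment-specific ones
  puppet.module_path = ["puppet/modules", "puppet/environments/development/modules"]
end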
I have a project with a setup for a multi-machine environment for Vagrant. I had to fix some problems, which were initially caused by the redirect issue to https, but solving these led to other errors, which I have now fixed in all projects except this one, which uses the multi-machine feature of Vagrant.
So I have this folder structure:
/Vagrantfile
/puppet/box_1/puppetfile
/puppet/box_1/manifests/site.pp
This is my code snippet, where I define my provision directories:
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "puppet/box_1/manifests"
  puppet.manifest_file = "site.pp"
end
My puppetfile looks like this:
forge "https://forgeapi.puppetlabs.com"
mod 'tPl0ch/composer'
mod 'puppetlabs/apt'
mod 'puppetlabs/apache'
mod 'puppetlabs/firewall'
In my site.pp I try to include apt, but I get this error message:
Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::apt for project.local at /tmp/vagrant-puppet/manifests-f2b1fd0ac42b51938ed6ae7e6917367e/site.pp:1:1 on node project.local
When I rearrange my puppet files like this:
/Vagrantfile
/puppet/puppetfile
/puppet/manifests/site.pp
which is the common way of setting this up, it works without that problem. But as I mentioned, there are other boxes which use different puppetfiles and site.pp files, so this folder structure makes some kind of sense. It also seems that it doesn't even matter if I delete the config for the other boxes and set up my Vagrantfile as if there were only one box, so I am just confused about how the location of these files influences the scope of certain classes.
So my question is: is there a way to keep this folder structure and still have the modules defined in the puppetfile available in my site.pp? Or is it generally bad practice to organize it this way? I was searching for some examples of this, but couldn't find any for some reason...
EDIT: It seems that on provision the puppetfile isn't even used anymore when it's not located in /puppet/. So maybe I just have to tell Vagrant how to use it?
Define where librarian-puppet should find the puppetfile:
Vagrant.configure("2") do |config|
config.librarian_puppet.puppetfile_dir = "puppet/box1"
config.vm.provision :puppet do |puppet|
puppet.manifests_path = "puppet/box_1/manifests"
puppet.manifest_file = "site.pp"
end
I have inherited a Python app that uses Puppet, Vagrant and VirtualBox to provision a local CentOS test machine.
This app was written on Mac and I'm developing on Windows.
When I run vagrant up I see a large list of command line errors, the most relevant of which is:
Running Puppet with site.pp..
Warning: Config file /home/vagrant/hiera.yaml not found, using Hiera defaults
WARN: Fri Apr 25 16:32:24 +0100 2014: Not using Hiera::Puppet_logger. It does not report itself to be suitable.
I know what Hiera is, and why it's important, but I'm not sure how to fix this.
The file hiera.yaml is present in the repo, but it's not found at /home/vagrant/hiera.yaml; instead it's found at ./puppet/manifests/hiera.yaml. Similarly, if I ssh into the box, there is absolutely nothing inside /home/vagrant, but I can find this file when I look in /tmp.
So I'm confused: is Vagrant looking inside the VirtualBox VM and expecting this file to be found at /home/vagrant/hiera.yaml? Or have I inherited an app that did not work properly in the first place? I'm really stuck here and I can't get in touch with the original dev.
Here are some details from my Vagrantfile:
Vagrant.configure("2") do |config|
# Base box configuration ommitted
# Forwarded ports ommitted
# Statically set hostname and internal network to match puppet env ommitted
# Enable provisioning with Puppet standalone
config.vm.provision :puppet do |puppet|
# Tell Puppet where to find the hiera config
puppet.options = "--hiera_config hiera.yaml --manifestdir /tmp/vagrant-puppet/manifests"
# Boilerplate Vagrant/Puppet configuration
puppet.module_path = "puppet/modules"
puppet.manifests_path = "puppet/manifests"
puppet.manifest_file = "site.pp"
# Custom facts provided to Puppet
puppet.facter = {
# Tells Puppet that we're running in Vagrant
"is_vagrant" => true,
}
end
# Make the client accessible
config.vm.synced_folder "marflar_client/", "/opt/marflar_client"
end
It's really strange.
There's this puppet.options line that tells Puppet where to look for hiera.yaml and currently it simply specifies the file name. Now, when you run vagrant up, it mounts the current directory (the one that has the Vagrantfile in it) into /vagrant on the guest machine. Since you're saying hiera.yaml is actually found in ./puppet/manifests/hiera.yaml, I believe you want to change the puppet.options line as follows:
puppet.options = "--hiera_config /vagrant/puppet/manifests/hiera.yaml --manifestdir /tmp/vagrant-puppet/manifests"
To solve this warning, you may first try to check where this file is referenced from, e.g.:
$ grep -Rn hiera.yaml /etc/puppet
/etc/puppet/modules/stdlib/spec/spec_helper_acceptance.rb:13: on hosts, '/bin/touch /etc/puppet/hiera.yaml'
/etc/puppet/modules/mysql/spec/spec_helper_acceptance.rb:34: shell("/bin/touch #{default['puppetpath']}/hiera.yaml")
In this case it's your Vagrantfile.
You may also try to find the right path to it via:
sudo updatedb && locate hiera.yaml
And either follow #EvgenyChernyavskiy's suggestion and create a symbolic link, if you found the file:
sudo ln -vs /etc/hiera.yaml /etc/puppet/hiera.yaml
or create the empty file (touch /etc/puppet/hiera.yaml).
In your case you should use the absolute path to your hiera.yaml file, instead of relative.
The warning should be gone.
I'd like to specify directly in the Vagrantfile which provider to use by default for each VM.
For example, given this vagrantfile:
# Vagrantfile
[...]
config.vm.define 'dev_vm' do |machine|
  machine.vm.provider :libvirt do |os|
    [...]
  end
  # machine.default_provider = :libvirt
end
config.vm.define 'production_vm' do |machine|
  machine.vm.provider :openstack do |os|
    [...]
  end
  # machine.default_provider = :openstack
end
To boot up these two VMs, I currently have to issue two commands:
vagrant up --provider=libvirt dev_vm
vagrant up --provider=openstack production_vm
I'd like to bring up both with a single vagrant up, especially because I'm running quite a few more machines. Some configuration like the commented machine.default_provider = :openstack would be fantastic to have.
Is there a way to do so?
I don't think there is any easy way to do it. Vagrant currently uses the same provider during the whole run, so it could be quite a big code change to support this.
Maybe wrapper scripts are the easiest solution now.
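For example, something as simple as this (untested) would already cover the two commands above:
#!/usr/bin/env ruby
# bring up each VM with its intended provider
system("vagrant", "up", "--provider=libvirt", "dev_vm") or abort("dev_vm failed")
system("vagrant", "up", "--provider=openstack", "production_vm") or abort("production_vm failed")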
Another workaround would be to use separate Vagrantfiles for the VMs and set VAGRANT_DEFAULT_PROVIDER in each. If there is a lot of common config, you could extract it to e.g. Vagrantfile.common, which is included by the others. Something like:
# Vagrantfile 1
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'
# assume the common config is in parent directory
load File.expand_path('../../Vagrantfile.common', __FILE__)
Vagrant.configure('2') do |config|
# ...
end
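The second Vagrantfile would look the same, just with the other provider set as the default:
# Vagrantfile 2
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'openstack'
# assume the common config is in parent directory
load File.expand_path('../../Vagrantfile.common', __FILE__)
Vagrant.configure('2') do |config|
  # ...
end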