How to configure timezone with Vagrant, Puppet and Hiera?

I'm using PuPHPet for my testing environments, which is based on Vagrant/Puppet+Hiera.
In the config.yml (Hiera config file) I would like to add a section for my timezone and have the vagrant provision command set it up properly.
Is that possible?

You can install the Time Zone plugin for Vagrant (vagrant plugin install vagrant-timezone) and configure your Vagrantfile in the following way:
Vagrant.configure("2") do |config|
if Vagrant.has_plugin?("vagrant-timezone")
config.timezone.value = "UTC"
end
# ... other stuff
end
Instead of UTC you can also use :host to synchronize the timezone with the host.
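For example, per the plugin's documented :host option, the host-synchronized variant would be:
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-timezone")
    # Copy the host machine's timezone instead of a fixed value
    config.timezone.value = :host
  end
end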

Just add your timezone under whatever key you want in your Hiera file; let's call it timezone. The value, and the Puppet code you'd need to set that timezone, depend on the system you're firing up, but I'll assume a Red Hat flavor of Linux.
I recommend setting it to any valid value you'd see under /usr/share/zoneinfo. As an example, your key may look like:
timezone: 'US/Pacific'
Then you'd use the file Puppet type to symlink /etc/localtime to the full path of the timezone:
$tz = hiera('timezone')

file { '/etc/localtime':
  ensure => link,
  target => "/usr/share/zoneinfo/${tz}",
}

Related

Issue with SaltStack

Recently I started taking an interest in Salt and began doing a tutorial on it. I am currently working on a Mac and I am having a hard time trying to start the VM (the minion) from my laptop (I am using Vagrant to start the process).
The Vagrantfile for the VM contains these lines:
# salt-vagrant config
config.vm.provision :salt do |salt|
  salt.run_highstate = true
  salt.minion_config = "/etc/salt/minion"
  salt.minion_key = "./minion1.pem"
  salt.minion_pub = "./minion1.pub"
end
Even though I wrote this, it gets stuck at:
Calling state.highstate... (this may take a while)
Any ideas why?
One more thing: I seem to need to modify the top.sls file at the next step, which is located in /srv/salt. Unfortunately I cannot find the /srv directory anywhere. Why is that? Is there a way to tell the master that the top file is somewhere else?
If you don't have a top.sls created, then you won't be able to run a highstate like you have configured with the salt.run_highstate = true line.
If you don't have a /srv/salt/ directory created, then you can just create it yourself. Just make sure the user the salt-master is running as can read it.
The /srv/salt/ directory is the default location of what is known as the file_root. You can change its location by modifying the file_roots option in the master config file, /etc/salt/master.
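As a minimal sketch, a top.sls in /srv/salt/ might look like this (the webserver state name is just a placeholder):
base:
  '*':
    - webserver
And the corresponding file_roots option in /etc/salt/master, should you want to move it elsewhere:
file_roots:
  base:
    - /srv/salt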

Test Kitchen - How can I pass the local user login in as a Chef attribute?

Our development environment includes local VMs running queries to larger, shared resources in the cloud.
It was set up in such a way that the local user's login would be passed into a Chef role as an attribute, which is used to define SSH and DB connections.
We currently use Ruby in the Vagrantfile to pluck out the login and vm.provision to create a /tmp file on the guest VM's filesystem. Our Chef role for the local VMs reads the contents of that file and passes the value into an attribute.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
(snip)
require 'etc'
user = Etc.getlogin
(snip)
analytics.vm.provision :shell, inline: "echo #{user} > /tmp/remote_user"
During the Chef run, the contents of /tmp/remote_user are read by the following:
remote_user = File.open("/tmp/remote_user", "r").read.strip
sandbox = "_#{remote_user}"

default_attributes "remote_user" => remote_user,
                   "sandbox" => sandbox,
Is there a better way to add the current user's login as an attribute to a local Chef run?
If not, how can I replicate the functionality I had in our old Vagrantfile in a .kitchen.yml?
Test Kitchen processes .kitchen.yml via ERB, so you could simply put something like this in your .kitchen.yml:
attributes:
  foo: <%= ENV['STUFF'] %>
And then node['foo'] would contain $STUFF. Switch STUFF to USER in your case.
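For instance, inside a suite definition that could look like the following (the suite and cookbook names here are hypothetical):
suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]
    attributes:
      remote_user: <%= ENV['USER'] %>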

Vagrant not able to find home/vagrant/hiera.yaml

I have inherited a python app that uses Puppet, Vagrant and VirtualBox to provision a local Centos test machine.
This app was written on Mac and I'm developing on Windows.
When I run vagrant up I see a large list of command line errors, the most relevant of which is:
Running Puppet with site.pp..
Warning: Config file /home/vagrant/hiera.yaml not found, using Hiera defaults
WARN: Fri Apr 25 16:32:24 +0100 2014: Not using Hiera::Puppet_logger. It does not report itself to be suitable.
I know what Hiera is, and why it's important, but I'm not sure how to fix this.
The file hiera.yaml is present in the repo, but it's not found at /home/vagrant/hiera.yaml; instead it's found at ./puppet/manifests/hiera.yaml. Similarly, if I SSH into the box, there is absolutely nothing whatsoever inside /home/vagrant, but I can find this file when I look in /tmp.
So I'm confused: is Vagrant looking inside the VirtualBox VM and expecting this file to be found at /home/vagrant/hiera.yaml? Or have I inherited an app that did not work properly in the first place? I'm really stuck here and I can't get in touch with the original dev.
Here are some details from my Vagrantfile:
Vagrant.configure("2") do |config|
# Base box configuration ommitted
# Forwarded ports ommitted
# Statically set hostname and internal network to match puppet env ommitted
# Enable provisioning with Puppet standalone
config.vm.provision :puppet do |puppet|
# Tell Puppet where to find the hiera config
puppet.options = "--hiera_config hiera.yaml --manifestdir /tmp/vagrant-puppet/manifests"
# Boilerplate Vagrant/Puppet configuration
puppet.module_path = "puppet/modules"
puppet.manifests_path = "puppet/manifests"
puppet.manifest_file = "site.pp"
# Custom facts provided to Puppet
puppet.facter = {
# Tells Puppet that we're running in Vagrant
"is_vagrant" => true,
}
end
# Make the client accessible
config.vm.synced_folder "marflar_client/", "/opt/marflar_client"
end
It's really strange.
There's this puppet.options line that tells Puppet where to look for hiera.yaml and currently it simply specifies the file name. Now, when you run vagrant up, it mounts the current directory (the one that has the Vagrantfile in it) into /vagrant on the guest machine. Since you're saying hiera.yaml is actually found in ./puppet/manifests/hiera.yaml, I believe you want to change the puppet.options line as follows:
puppet.options = "--hiera_config /vagrant/puppet/manifests/hiera.yaml --manifestdir /tmp/vagrant-puppet/manifests"
To resolve this warning, you may first check where this file is referenced from, e.g.:
$ grep -Rn hiera.yaml /etc/puppet
/etc/puppet/modules/stdlib/spec/spec_helper_acceptance.rb:13: on hosts, '/bin/touch /etc/puppet/hiera.yaml'
/etc/puppet/modules/mysql/spec/spec_helper_acceptance.rb:34: shell("/bin/touch #{default['puppetpath']}/hiera.yaml")
In this case it's your Vagrantfile.
You may also try to find the right path to it via:
sudo updatedb && locate hiera.yaml
Then either follow @EvgenyChernyavskiy's suggestion and create a symbolic link, if you found the file:
sudo ln -vs /etc/hiera.yaml /etc/puppet/hiera.yaml
or create the empty file (touch /etc/puppet/hiera.yaml).
In your case you should use the absolute path to your hiera.yaml file, instead of relative.
The warning should be gone.

Puppet conditional check between vagrant & ec2

I've looked at the documentation for puppet variables and can't seem to get my head around how to apply this to the following situation:
if vagrant (local machine)
  phpfpm::nginx::vhost { 'vhost_name':
    server_name => 'dev.demo.com',
    root        => '/vagrant/public',
  }
else if aws ec2 (remote machine)
  phpfpm::nginx::vhost { 'vhost_name':
    server_name => 'demo.com',
    root        => '/home/ubuntu/demo.com/public',
  }
Thanks
Try running facter on both your Vagrant host and your EC2 instance, and look for differences. I suspect that 'facter virtual' may differ between the two hosts, or that the EC2 instance may return a bunch of ec2_ facts that won't be present on the Vagrant host.
Then you can use this fact as a top level variable as per below. I switched to a case statement as well, since that's a little easier to maintain IMHO, plus you can use the default block for error checking.
case $::virtual {
  'whatever vagrant returns': {
    <vagrant specific provisioning>
  }
  'whatever the EC2 instance returns': {
    <EC2 specific provisioning>
  }
  default: {
    fail("Unexpected virtual value of ${::virtual}")
  }
}
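For illustration, assuming facter virtual reports virtualbox locally and xenhvm on EC2 (the actual values vary by hypervisor and Facter version, so check facter output on both hosts first), the vhost declaration from the question might become:
case $::virtual {
  'virtualbox': {
    phpfpm::nginx::vhost { 'vhost_name':
      server_name => 'dev.demo.com',
      root        => '/vagrant/public',
    }
  }
  'xenhvm': {
    phpfpm::nginx::vhost { 'vhost_name':
      server_name => 'demo.com',
      root        => '/home/ubuntu/demo.com/public',
    }
  }
  default: {
    fail("Unexpected virtual value of ${::virtual}")
  }
}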
NOTE: In the three years since this response was posted, Vagrant has introduced the facter hash option. See @thomas's answer below for more details. I believe that this is the right way to go, and it makes my proposed kernel command line trick pretty obsolete. The rationale for using a fact hasn't changed, though; if anything it has strengthened (e.g. Vagrant currently supports an AWS provider).
ORIGINAL REPLY: Be careful - you assume that you only use virtualbox for vagrant and vice versa, but Vagrant is working on support for other virtualization technologies (e.g. kvm), and you might use VirtualBox without vagrant one day (e.g. for production).
Instead, the trick I use is to pass the kernel a "vagrant=yes" parameter when I build the base box, which is then accessible via /proc/cmdline. Then you can create a new fact based on that (e.g. write an /etc/vagrant marker file and check for it in subsequent facter runs).
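A minimal sketch of such a custom fact (the fact name vagrant is an assumption; drop the file into a module's lib/facter directory):
# lib/facter/vagrant.rb
# Reports true when the kernel was booted with a "vagrant=yes" parameter.
Facter.add(:vagrant) do
  setcode do
    begin
      File.read('/proc/cmdline').include?('vagrant=yes')
    rescue
      false
    end
  end
end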
Vagrant has a great utility for providing Puppet facts:
facter (hash) - A hash of data to set as available facter variables
within the Puppet run.
For example, here's a snippet from my Vagrantfile with the Puppet setup:
config.vm.provision "puppet", :options => ["--fileserverconfig=/vagrant/fileserver.conf"] do |puppet|
puppet.manifests_path = "./"
puppet.module_path = "~/projects/puppet/modules"
puppet.manifest_file = "./vplan-host.pp"
puppet.facter = {
"vagrant_puppet_run" => "true"
}
end
And then we make use of that fact for example like this:
$unbound_conf = $::vagrant_puppet_run ? {
  'true'  => 'puppet:///modules/unbound_dns/etc/unbound/unbound.conf.vagrant',
  default => 'puppet:///modules/unbound_dns/etc/unbound/unbound.conf',
}

file { '/etc/unbound/unbound.conf':
  owner  => root,
  group  => root,
  notify => Service['unbound'],
  source => $unbound_conf,
}
Note that the fact is only available during puppet provision time.

What is the best way to store project specific config info in ruby Rake tasks?

I have rake tasks for getting the production database from the remote server, etc. It's always the same tasks, but the server info changes per project. I have the code here: https://gist.github.com/868423. In the last task, I'm getting a @local_db_dir_path = nil error.
I don't think I want to use shell environment variables, because I don't want to set them up each time I use rake or open a new shell.
Stick the settings in a YAML file, and read it like this:
require 'yaml'

config = YAML.load_file("config.yaml") # or wherever
$remote_host  = config['remote_host']
$ssh_username = config['ssh_username']
# and so on
Or you can just read one big config hash:
$config = YAML.load_file("config.yaml")
Note that I'm using globals here, not instance variables, so there's no chance of being surprised by variable scope.
config.yaml would then look like this:
---
remote_host: some.host.name
ssh_username: myusername
other_setting: foo
whatever: bar
I tend to keep a config.yaml.sample checked in with the main body of the code, containing example but non-working settings for everything, which I can copy across to the non-versioned config.yaml. Some people like to keep their config.yaml checked in to a live branch on the server itself, so that it's versioned, but I've never bothered with that.
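Tying it back to the rake tasks, a minimal sketch of a task reading that file (the task body and the mysqldump command are assumptions, not the gist's actual code):
require 'yaml'

CONFIG = YAML.load_file("config.yaml")

desc "Dump the production database from the remote server"
task :fetch_db do
  # sh is provided by Rake; the dump command is a placeholder
  sh "ssh #{CONFIG['ssh_username']}@#{CONFIG['remote_host']} 'mysqldump mydb' > dump.sql"
end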
You should be using Capistrano for this. You could use multistage, or just separate the host setting into its own task; an example Capistrano setup would look like this:
task :development do
  server "development.host"
end

task :backup do
  run "cd #{current_path}; rake db:dump"
  download "remote_path", "local_path"
end
and call it like this:
cap development backup
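If you go the multistage route instead, a minimal sketch using the capistrano-ext gem (assuming Capistrano 2) would be:
# config/deploy.rb
require 'capistrano/ext/multistage'

set :stages, %w(development production)
set :default_stage, 'development'
Each stage then gets its own file under config/deploy/ (e.g. config/deploy/development.rb) holding its server definition.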
