I am setting up a Vagrant deployment with AWS as the backend, and I would like to source values from the shell. For instance:
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "YOUR KEY"
    aws.secret_access_key = "YOUR SECRET KEY"
    aws.session_token = "SESSION TOKEN"
    aws.keypair_name = "KEYPAIR NAME"
    aws.ami = "ami-7747d01e"

    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "PATH TO YOUR PRIVATE KEY"
  end
end
I would like to populate the aws.{access_key_id,secret_access_key,etc.} values from local environment variables in bash (e.g. $access_key_id, $secret_access_key, etc.).
Is this possible to do directly in Ruby, or is there a specific Vagrant DSL technique that allows for this?
Environment variables are exposed via the ENV hash, whose keys are environment variable names. For example,
aws.access_key_id = ENV['access_key_id']
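Applied to the provider block from the question, a minimal sketch; the environment variable names simply mirror the bash variables mentioned above, and ENV.fetch is used so a missing variable fails loudly rather than silently passing nil (adjust the names to whatever your shell actually exports):

Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    # Credentials come from the shell environment instead of being hard-coded
    aws.access_key_id     = ENV.fetch('access_key_id')
    aws.secret_access_key = ENV.fetch('secret_access_key')
    aws.session_token     = ENV.fetch('session_token', nil)  # optional; defaults to nil if unset
    aws.keypair_name      = ENV.fetch('keypair_name')
    aws.ami               = "ami-7747d01e"

    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = ENV.fetch('private_key_path')
  end
end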
I'm putting together a Vagrantfile that can be used to spin up multiple VMs. It mostly works, apart from the bit where I need to tell Ansible which playbook to use. This is being imposed on top of an existing structure, so there's little scope to change file locations and the like.
Here's an extract of the relevant bits from my Vagrantfile:
hosts = [
  { name: 'myhost01', hostname: 'vg-myhost01', ip: '172.172.99.99', memory: '512', cpu: 1, box: 'centos', port_forward: [] },
]

config.vm.provision :ansible do |ansible|
  ansible.playbook = ['install/mydir/install_', :name, '.yml']
end
Basically I'm trying to figure out how to get it to end up with a setting like
ansible.playbook = 'install/mydir/install_myhost01.yml'
but I can't seem to get the right syntax to get it to recognise :name as a variable in that context. It either tries to run install_.yml or install_name.yml, or most commonly gives the error:
`initialize': no implicit conversion of Symbol into String (TypeError)
Any suggestions?
It's more a question about Ruby syntax. You can use:
ansible.playbook = 'install/mydir/install_' + name + '.yml'
or (thanks to Frédéric Henri)
ansible.playbook = "install/mydir/install_#{name}.yml"
To reference the value from the hash (per OP's own suggestion):
ansible.playbook = "install/mydir/install_#{host[:name]}.yml"
I have a Vagrantfile in which I am provisioning different VMs by looping through a JSON file, e.g.
cluster_config.each do |cluster|
  cluster_name = cluster[0] # name of node
  nodes_config = (JSON.parse(File.read("test_data_bags/myapp/_default.json")))['clusters'][cluster_name]['nodes']
  nodes_config.each do |node|
    # node_name, node_values, node_role and run_list are set elsewhere in the full Vagrantfile (omitted here)
    config.vm.define node_name do |nodeconfig|
      processes = node_values['processes']
      processes.each do |process|
        nodeconfig.vm.provision :chef_solo do |chef|
          chef.data_bags_path = 'test_data_bags'
          chef.run_list = run_list
          chef.roles_path = "roles"
          chef.json = {
            "myapp" => {
              "cluster_name" => cluster_name,
              "role" => node_role
            },
          }
        end
      end
    end
  end
end
I would like to do the same within Kitchen, i.e. take an array of attributes and, for each array item, run recipe xyz, so that I can write some tests using test-kitchen. Is this possible?
Thanks
There are a few different workarounds to accomplish this, but they are all definitely workarounds. There is an issue that was opened to discuss support for multiple boxes in test-kitchen, and you can go there to read more about why this probably won't be supported any time soon. TL;DR: it's not really a goal of the project.
Workarounds include:
Chef-provisioning can bootstrap more servers from a single provisioned server/test-suite
The Kitchen-nodes provisioner can share data about each server with the other test suites in your setup
A custom Vagrantfile template for test-kitchen
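For the third option, the kitchen-vagrant driver can be pointed at a custom ERB template through its vagrantfile_erb setting. A rough sketch of such a template follows; the config[:box] binding, the extra machine name "backend", and its IP are illustrative assumptions here, so compare against the driver's bundled Vagrantfile.erb before relying on them:

# Vagrantfile.erb -- referenced from .kitchen.yml via the kitchen-vagrant
# driver's vagrantfile_erb option (assumed setup for this sketch)
Vagrant.configure("2") do |c|
  # The instance that test-kitchen itself manages
  c.vm.box = "<%= config[:box] %>"

  # An additional, statically defined machine the suite under test can talk to
  c.vm.define "backend" do |backend|
    backend.vm.box = "chef/centos-7.0"
    backend.vm.network :private_network, ip: "10.0.50.10"
  end
end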
I've got the following Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

NUM_HOSTS = 3

def hostname(id)
  "node#{id}"
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"
  config.ssh.forward_agent = true
  config.ssh.insert_key = false

  config.vm.provider :virtualbox do |vb|
    vb.gui = false
    vb.memory = 512
    vb.cpus = 1
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "vagrant_playbook.yml"
    ansible.groups = {
      "vagrant" => (1..NUM_HOSTS).collect { |id| hostname(id) }
    }
  end

  NUM_HOSTS.times do |n|
    id = n + 1
    config.vm.define hostname(id), primary: id == 1 do |host|
      host.vm.network :private_network, ip: "10.0.33.1#{id}"
    end
  end
end
This assigns the private addresses on enp0s8. However, it assigns duplicate IP addresses on enp0s3: 10.0.2.15. Unfortunately, Ansible seems to be picking the duplicate address up in ansible_default_ipv4 instead of the unique address, so services running on these boxes don't work as intended. So, is there a way to:
stop Vagrant from assigning duplicate IPs? (I'm using the virtualbox provider, if that helps)
change which interface ansible_default_ipv4 uses?
some other solution which I haven't thought to search for?
This is a workaround you can use in the meantime:
{{ ansible_all_ipv4_addresses | last }}
This will work assuming your IPv4 addresses on the target box are always returned in a predictable order.
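If the private address always lands on the same interface (enp0s8 in the question), another option, assuming facts are gathered for that interface, is to reference the interface fact directly, e.g. {{ ansible_enp0s8.ipv4.address }}, rather than relying on ansible_default_ipv4.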
I have a Vagrantfile in which I want a variable "servers" to be used ...
# -*- mode: ruby -*-
# vi: set ft=ruby :

NUMM = 3
IP_OFFSET = 10

setup_master = File.read("master.sh")
setup_slave = File.read("slave.sh")

def ip_from_num(i)
  "172.31.16.#{100+i+IP_OFFSET}"
end

# Map of servers -> parameters.
servers = {
  0 => ["mybox","master.rhbd","ami-759dcb74","ap-northeast-1","subnet-4aa28b22","MASTER",ip_from_num(0)],
  1 => ["mybox","slave1.rhbd","ami-759dcb74","ap-northeast-1","subnet-4aa28b22","SLAVE",ip_from_num(1)],
}

def getBox()
  ## this variable isn't available to vagrant...
  servers[0]
end

Vagrant.configure("2") do |config|
  (0..NUMM).each do |i|
    config.vm.define "aws#{i}" do |n|
      n.vm.box = getBox()
      ...
When this Vagrantfile is invoked, however, Vagrant complains that the "servers" variable does not exist. This makes sense: if Vagrant is invoking from another class and reading the configuration from that location, then variables defined in the Vagrantfile might not be accessible in that scope.
So my question is: how can I make variables inside my Vagrantfile accessible to the outside provisioner? It seems to work okay with function calls (either because they are materialized during creation, or because Vagrant can easily call a function due to default scoping).
Another option could be to turn the variables into methods. It's not pretty, but it may be good enough for the circumstances:
# Map of servers -> parameters.
def servers()
  {
    0 => ["mybox","master.rhbd","ami-759dcb74","ap-northeast-1","subnet-4aa28b22","MASTER",ip_from_num(0)],
    1 => ["mybox","slave1.rhbd","ami-759dcb74","ap-northeast-1","subnet-4aa28b22","SLAVE",ip_from_num(1)],
  }
end

def getBox()
  ## servers() is a method now, so it is available here
  servers()[0]
end
(Assuming servers is needed somewhere else too, otherwise it could just be a local variable in getBox.)
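Another plain-Ruby alternative: local variables are not visible inside def bodies, but constants are, so the map could also be declared as a constant. A minimal sketch reusing the question's data (declared after ip_from_num, as in the original file):

# Constants, unlike local variables, are visible inside method definitions.
SERVERS = {
  0 => ["mybox","master.rhbd","ami-759dcb74","ap-northeast-1","subnet-4aa28b22","MASTER",ip_from_num(0)],
  1 => ["mybox","slave1.rhbd","ami-759dcb74","ap-northeast-1","subnet-4aa28b22","SLAVE",ip_from_num(1)],
}.freeze  # freeze just guards against accidental mutation

def getBox()
  SERVERS[0]
end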
I'm trying to design a relatively complex system, using Vagrant and Salt-Stack to handle control and provisioning. The basic idea is to provision a machine, called master, which runs the Salt-Stack master that all my other machines will connect to.
In a previous attempt to do this, I just had Vagrant set up a Salt minion which was instructed to install the salt master and a dns server package. But I'd like to simplify key transports by using Vagrant's facilities. So what I'd like to do is have Vagrant install a Salt master and a minion, so that the minion can install the dns server, and so that Vagrant can move my keys around for me.
Here is what master's configuration looks like, in the Vagrantfile:
config.vm.define :master do |master|
  master.vm.provider "virtualbox" do |vbox|
    vbox.cpus = 1
    vbox.memory = 384
  end

  master.vm.network "private_network", ip: "10.47.94.2"
  master.vm.network :forwarded_port, guest: 53, host: 53
  master.vm.hostname = "master"

  master.vm.provision :salt do |salt|
    salt.verbose = true
    salt.minion_config = "salt/master"
    salt.run_highstate = true

    salt.install_master = true
    salt.master_config = "salt/master"

    salt.master_key = "salt/keys/master.pem"
    salt.master_pub = "salt/keys/master.pub"
    salt.minion_key = "salt/keys/master.pem"
    salt.minion_pub = "salt/keys/master.pub"
    salt.seed_master = {master: "salt/keys/master.pub"}

    salt.run_overstate = true
  end
end
But I am getting the message:
Executing job with jid 20140403131604825601
-------------------------------------------
Execution is still running on master
Execution is still running on master
Execution is still running on master
Execution is still running on master
master:
Minion did not return
and when I look at master:/var/log/salt/minion, it's empty.
Is there an obvious error in my Vagrantfile configuration? Any hints?
I see that this has not been answered for a long time, so I am posting my personal minimal Vagrant Salt master file here. This problem occurred to me once when I forgot to set up the master: localhost entry in the salt-minion configuration on the master (by default it looks for a host called salt).
Please note that the minion on the master has its own key.
This is running Vagrant 1.7.2 and will install a Salt master 2015.5.0.
# Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"

  # Deployment instance salt master
  config.vm.define :master do |master|
    master.vm.network :private_network, ip: "192.168.22.12"
    master.vm.hostname = 'master'
    master.vm.synced_folder "salt/roots/", "/srv/"

    master.vm.provision :salt do |config|
      config.install_master = true
      config.minion_config = "salt/minion"
      config.master_config = "salt/master"
      config.minion_key = "salt/keys/minion.pem"
      config.minion_pub = "salt/keys/minion.pub"
      config.master_key = "salt/keys/master.pem"
      config.master_pub = "salt/keys/master.pub"
      config.seed_master =
        {
          master: "salt/keys/minion.pub"
        }
      config.run_highstate = true
    end
  end
end
Master's configuration file:
# salt/master
# empty, use only defaults
Minion's configuration file on master:
# salt/minion
master: localhost