I'm trying to create a virtual machine for Raspbian using the vagrant-libvirt plugin, but I haven't been able to figure out how to add the '2017-09-07-raspbian-stretch-lite.img' image as a box. I am able to boot the image with QEMU:
qemu-system-arm -kernel kernel-qemu-4.4.34-jessie -cpu arm1176 -m 256 -machine versatilepb -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw vga=normal console=ttyAMA0" -drive "file=./2017-09-07-raspbian-stretch-lite.img,index=0,media=disk,format=raw" -no-reboot -serial stdio -curses
Here is my incomplete Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'

Vagrant.configure("2") do |config|
  config.vm.box_url = File.join(Dir.pwd, "2017-09-07-raspbian-stretch-lite.img")
  config.vm.box = "Raspbian"

  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "qemu"
    #config.vm.box = "arm"

    # vagrant issue #1673: fixes hang with configure_networks
    config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"

    libvirt.uri = 'qemu+unix:///system'
    libvirt.host = 'virtualized'
    libvirt.kernel = File.join(Dir.pwd, "OS/kernel-qemu-4.4.34-jessie")
    #libvirt.kernel = File.join(Dir.pwd, "Ovmlinuz")
    #libvirt.initrd = File.join(Dir.pwd, "initrd")
    #libvirt.storage :file, :size => '20G', :path => "./OS/2017-09-07-raspbian-stretch-lite.img", :allow_existing => true, :shareable => true, :type => 'raw'
    libvirt.emulator_path = '/usr/bin/qemu-system-arm'
    libvirt.cmd_line = 'root=/dev/mmcblk0p2 devtmpfs.mount=0 rw'
    libvirt.memory = 256
    libvirt.cpu_model = 'arm1176'
    libvirt.cpu_fallback = 'allow'
    libvirt.graphics_type = 'none'
  end
end
I've committed support to vagrant-libvirt for precisely this. This is a working config:
config.vm.define "rpi" do |rpi|
# Create rpi.vm
rpi.vm.hostname = "rpi"
rpi.vm.box = "raspbian-jessie-lite-2016-02-26"
rpi.vm.provider :libvirt do |v|
v.driver = 'qemu'
v.random_hostname = true
v.connect_via_ssh = false
v.memory = 1024
v.cpus = 1
v.volume_cache = 'none'
v.storage_pool_name = "vagrant"
v.kernel = File.join(Dir.pwd, "kernel")
v.initrd = File.join(Dir.pwd, "initrd")
v.machine_type = 'virt'
v.machine_arch = 'armv7l'
v.cpu_mode = 'custom'
v.cpu_model = 'cortex-a15'
v.cpu_fallback = 'allow'
v.cmd_line = 'rw earlyprintk loglevel=8 console=ttyAMA0,115200n8 rootwait root=/dev/vda2'
v.graphics_type = 'none'
v.disk_bus = 'virtio'
v.nic_model_type = 'virtio'
v.features = ["apic","gic version='2'"]
end
However, you will need to find or build a kernel and accompanying initrd that supports the virt machine type. You can take a look here for some background on that.
Is it possible to do loops in Packer?
I have a Vagrantfile with a loop and want to convert it to Packer. I have already searched Google and Stack Overflow but didn't find the right solution. I'm having trouble converting this loop into JSON:

(0..$NODE_COUNT).each do |i|

Here is the full Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

$BOX_IMAGE = "VAGRANT CLOUD IMAGE NAME"
$TBSP_VERSION = "2.0"

# number of additional tendermint validation nodes to be provisioned
$NODE_COUNT = 1

Vagrant.configure("2") do |config|
  (0..$NODE_COUNT).each do |i|
    config.vm.define "node#{i}" do |subconfig|
      subconfig.vm.box = $BOX_IMAGE
      subconfig.vm.hostname = "node#{i}"
      subconfig.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      subconfig.disksize.size = '10GB'

      subconfig.vm.provider "virtualbox" do |v|
        v.memory = 4096
        v.cpus = 4
        v.customize ["modifyvm", :id, "--name", "v2.0-abci-tm-d-multi-node#{i}-server"]
      end

      subconfig.trigger.after :up do |trigger|
        if i == $NODE_COUNT
          trigger.info = "last Node-#{$NODE_COUNT} is up"
          trigger.run = { path: "provision_tendermint.sh", :args => "'#{$NODE_COUNT}'" }
        end
      end

      subconfig.vm.provision "shell", path: "provision.sh", :args => "'#{i}' '#{$NODE_COUNT}'"
    end
  end
end
No, Packer's JSON templates don't currently support any kind of looping, so you will have to approach this in a different manner. If you are just using Vagrant to provision VMs and not to create an image, you should consider using Terraform instead.
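If you do want to stay with Packer, one option is to generate the JSON template from a small Ruby script, so the looping happens outside Packer. A minimal sketch, under the assumption that the script name, the builder stub and the /tmp/provision.sh path are placeholders rather than part of the original setup:

# generate_packer_template.rb - emit a packer.json with one shell provisioner
# per node, mirroring the Vagrantfile's (0..$NODE_COUNT) loop.
require 'json'

node_count = 1

provisioners = (0..node_count).map do |i|
  {
    "type"   => "shell",
    # hypothetical: run the same provision.sh with the node index and count
    "inline" => ["sh /tmp/provision.sh '#{i}' '#{node_count}'"]
  }
end

template = {
  # placeholder builder; a real template also needs iso_url, ssh settings, etc.
  "builders"     => [{ "type" => "virtualbox-iso" }],
  "provisioners" => provisioners
}

File.write("packer.json", JSON.pretty_generate(template))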
I have several Vagrantfiles, one for each provider, since Vagrant has a limitation that doesn't allow provisioning with two or more providers at the same time from the same Vagrantfile.
So I split it into two or more Vagrantfiles, but the "body", my provisioning scripts, is the same for both; the only thing that changes is the provider block.
For example:
local_nagios/Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'yaml'

set = YAML.load_file '../../../settings.yaml'

Vagrant.configure(2) do |nagios|
  nagios.vm.provider :virtualbox do |provider, override|
    override.vm.box = 'ubuntu/trusty64'
    override.vm.hostname = 'nagios.company.com'
    override.vm.synced_folder '.', '/vagrant', disabled: true
    override.vm.network 'public_network', bridge: set['network_interface'], ip: set['dev_nagios_ip']

    provider.memory = 4096
    provider.cpus = 2
  end

  install = set['devops_home'] + '/vagrant/lib/install'
  nagios.vm.provision 'shell', path: install + '/basic'
  nagios.vm.provision 'shell', path: install + '/devops'

  step = set['devops_home'] + '/vagrant/server/nagios/steps'
  nagios.vm.provision 'shell', path: step + '/install', args: [set['nagios_password']]
end
digital_nagios/Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'yaml'

set = YAML.load_file '../../../settings.yaml'

Vagrant.configure(2) do |nagios|
  nagios.vm.provider :digital_ocean do |provider, override|
    override.vm.box = 'digital_ocean'
    override.vm.hostname = 'nagios.company.com'
    override.vm.synced_folder '.', '/vagrant', disabled: true
    override.ssh.private_key_path = '~/.ssh/id_rsa'

    provider.token = 'my-token'
    provider.image = 'ubuntu-14-04-x64'
    provider.region = 'fra1'
    provider.size = '4gb'
  end

  install = set['devops_home'] + '/vagrant/lib/install'
  nagios.vm.provision 'shell', path: install + '/basic'
  nagios.vm.provision 'shell', path: install + '/devops'

  step = set['devops_home'] + '/vagrant/server/nagios/steps'
  nagios.vm.provision 'shell', path: step + '/install', args: [set['nagios_password']]
end
I wonder if it's possible to make a template from this, or to import my common area like this:
Vagrant.configure(2) do |nagios|
  nagios.vm.provider :digital_ocean do |provider, override|
    ...
  end

  import '../provision.rb'
end
I'm not familiar with Ruby, so any suggestion would be very appreciated!
You can do something like:

Vagrant.configure(2) do |nagios|
  nagios.vm.provider :digital_ocean do |provider, override|
    ...
  end

  eval File.read("../Vagrantfile-common")
end
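For illustration, ../Vagrantfile-common could hold exactly the shared part of the two files above (the file name is simply whatever you pass to File.read; this is a sketch, not a fixed convention). Because eval runs in the surrounding scope, both the nagios block variable and the set hash loaded in the outer Vagrantfile are visible:

# ../Vagrantfile-common
# Shared provisioning, eval'd inside Vagrant.configure(2) do |nagios| ... end.
install = set['devops_home'] + '/vagrant/lib/install'
nagios.vm.provision 'shell', path: install + '/basic'
nagios.vm.provision 'shell', path: install + '/devops'

step = set['devops_home'] + '/vagrant/server/nagios/steps'
nagios.vm.provision 'shell', path: step + '/install', args: [set['nagios_password']]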
All, I am setting up a multi-machine environment with Vagrant and the VirtualBox provider on Mac OS X 10.9.5.
PROBLEM: I need to attach a SATA storage controller to a CentOS 7.1 guest VM to add additional vHDDs. I do this with the following customize snippet:
config.vm.define vmName = hostname do |node|
  ...snip...
  node.vm.provider "virtualbox" do |v|
    v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
    ...
This works perfectly fine on a vagrant up: the SATA controller is attached, and I can subsequently createhd and storageattach disks. However, when I do a vagrant reload, the customize is run again, causing this failure:
Stderr: VBoxManage: error: Storage controller named 'SATA Controller' already exists
VBoxManage: error: Details: code VBOX_E_OBJECT_IN_USE (0x80bb000c), component SessionMachine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "AddStorageController(Bstr(pszCtl).raw(), StorageBus_SATA, ctl.asOutParam())" at line 1044 of file VBoxManageStorageController.cpp
Versions:
Vagrant version: 1.7.2
VirtualBox version: 4.3.28r100309
If I comment out the storagectl command before running vagrant reload, the reload succeeds and there are no issues, since an additional controller isn't attached. However, this isn't a sustainable solution.
Is there some Vagrantfile option I haven't been able to dig up yet that forces it NOT to run the customize step? Or is this possibly a bug in the VirtualBox Vagrant provider, re-running the customize steps when it shouldn't?
~~shane
For complete reference, here is my full Vagrantfile (see the "WARNING" comments below for the customize issue):
# vi: set ft=ruby :
###
# set 'num_minions' to the number of minions you'd like built
# set 'num_disks' to the number of disks to build on the minion
# set 'disk_size' to the MB size of the disks to build for the minions
###
num_minions = 5 # number of minions to build
num_disks = 12 # number of vHDDs to attach to minions
disk_size = 256 # size (MB) of each disk to attach to minions
name_prefix = "minion" # minion name prefix to use
cur_dir = Dir.pwd
disks_dir = "#{cur_dir}/disks" # set the storage location for minion disks
int_net1 = "172.16.1" # internal net 1: first 3 octets of IP space
# NOTE "configs/minion" needs to be updated to
# match "master" IP set below - otherwise salt
# will not work
Vagrant.configure("2") do |config|
config.vm.box = "centos71"
config.vm.provider "virtualbox" do |v|
v.memory = 1024
end
config.vm.define "master-01", primary: true do |node|
node.vm.hostname = "master-01"
node.vm.network :private_network, ip: "#{int_net1}.10"
node.vm.provision "shell", inline: "echo master-01 IP set to #{int_net1}.10"
node.vm.synced_folder "~/cpelab/ceph-salt/salt/", "/srv/salt"
node.vm.synced_folder "~/cpelab/ceph-salt/pillar/", "/srv/pillar"
node.vm.provision :salt do |salt|
salt.install_master = true
salt.master_config = "configs/master"
salt.run_highstate = false
salt.master_key = 'keys/master-01.pem'
salt.master_pub = 'keys/master-01.pub'
salt.minion_config = "configs/minion"
salt.minion_key = 'keys/master-01.pem'
salt.minion_pub = 'keys/master-01.pub'
salt.seed_master = { 'dash-01' => 'keys/dash-01.pub', 'master' => 'keys/master-01.pub' }
(1..num_minions).each do |v|
vn = "#{name_prefix}-%03d" % v
salt.seed_master = { "#{vn}" => "keys/#{vn}.pub" }
end
end
end
# MINIONS with additional disk devices
(1..num_minions).each do |i|
hostname = "#{name_prefix}-%03d" % i
minion_ip = (i.to_i + 100)
config.vm.define vmName = hostname do |node|
node.vm.hostname = vmName
node.vm.network :private_network, ip: "#{int_net1}.#{minion_ip}"
node.vm.provision "shell", inline: "echo #{vmName} IP set to #{int_net1}.#{minion_ip}"
node.vm.provider "virtualbox" do |v|
############# WARNING
############# WARNING: can't "vagrant reload" - as the 'customize' snippet is
############# WARNING: re-run on reload, and you can't inject a new controller
############# WARNING: of the same type seems to be a bug in the specific
############# WARNING: provider for vagrant or my usage???
############# WARNING
v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
(1..num_disks).each do |disk|
diskname = File.expand_path("#{disks_dir}/#{vmName}-#{disk}.vdi")
v.customize ['createhd', '--filename', diskname, '--size', "#{disk_size}"]
v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', disk, '--device', 0, '--type', 'hdd', '--medium', diskname]
end
end
node.vm.provision :salt do |salt|
salt.minion_config = "configs/minion"
salt.minion_key = "keys/#{vmName}.pem"
salt.minion_pub = "keys/#{vmName}.pub"
end
end
end
# setup Dashing web dashboard
config.vm.define "dash-01" do |node|
node.vm.hostname = "dash-01"
node.vm.network :private_network, ip: "#{int_net1}.11"
node.vm.provision "shell", inline: "echo dash-01 IP set to #{int_net1}.11"
node.vm.network "forwarded_port", guest: 80, host: 8080
node.vm.provision "shell", inline: "echo dash-01 port 80 forwarding: http://127.0.0.1:8080"
node.vm.synced_folder "~/cpelab/ceph-salt/salt/", "/srv/salt"
node.vm.synced_folder "~/cpelab/ceph-salt/pillar/", "/srv/pillar"
node.vm.provision :salt do |salt|
salt.minion_config = "configs/minion"
salt.minion_key = 'keys/dash-01.pem'
salt.minion_pub = 'keys/dash-01.pub'
end
end
end
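One possible guard, sketched here rather than confirmed as the fix: skip the storagectl/createhd customizations when the disk files already exist, so they only run on the first vagrant up and a vagrant reload leaves the controller alone.

node.vm.provider "virtualbox" do |v|
  first_disk = File.expand_path("#{disks_dir}/#{vmName}-1.vdi")
  # Only attach the controller and create disks on the initial 'vagrant up';
  # on 'vagrant reload' the .vdi files already exist and this block is skipped.
  unless File.exist?(first_disk)
    v.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
    (1..num_disks).each do |disk|
      diskname = File.expand_path("#{disks_dir}/#{vmName}-#{disk}.vdi")
      v.customize ['createhd', '--filename', diskname, '--size', "#{disk_size}"]
      v.customize ['storageattach', :id, '--storagectl', 'SATA Controller',
                   '--port', disk, '--device', 0, '--type', 'hdd', '--medium', diskname]
    end
  end
end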
I've got the following Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

NUM_HOSTS = 3

def hostname(id)
  "node#{id}"
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"
  config.ssh.forward_agent = true
  config.ssh.insert_key = false

  config.vm.provider :virtualbox do |vb|
    vb.gui = false
    vb.memory = 512
    vb.cpus = 1
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "vagrant_playbook.yml"
    ansible.groups = {
      "vagrant" => (1..NUM_HOSTS).collect { |id| hostname(id) }
    }
  end

  NUM_HOSTS.times do |n|
    id = n + 1
    config.vm.define hostname(id), primary: id == 1 do |host|
      host.vm.network :private_network, ip: "10.0.33.1#{id}"
    end
  end
end
This assigns the private addresses on enp0s8. However, it assigns duplicate IP addresses on enp0s3: 10.0.2.15. Unfortunately, Ansible seems to be picking the duplicate address up in ansible_default_ipv4 instead of the unique address, so services running on these boxes don't work as intended. So, is there a way to:
stop Vagrant from assigning duplicate IPs? (I'm using the virtualbox provider, if that helps)
change which interface ansible_default_ipv4 uses?
some other solution which I haven't thought to search for?
This is a workaround you can use in the meantime:
{{ ansible_all_ipv4_addresses | last }}
This will work as long as the IPv4 addresses for the target box are always listed in a predictable order.
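Another angle, sketched under the assumption that a dedicated variable is acceptable in the playbook: have the Vagrant Ansible provisioner pass each node's private_network address explicitly via host_vars, so roles can reference that variable instead of ansible_default_ipv4. The variable name private_ip is made up for this example.

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "vagrant_playbook.yml"
  ansible.groups = {
    "vagrant" => (1..NUM_HOSTS).collect { |id| hostname(id) }
  }
  # Hand each host its private_network IP as an inventory variable.
  ansible.host_vars = Hash[
    (1..NUM_HOSTS).collect do |id|
      [hostname(id), { "private_ip" => "10.0.33.1#{id}" }]
    end
  ]
end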
I'm trying to design a relatively complex system, using Vagrant and Salt-Stack to handle control and provisioning. The basic idea is to provision a machine, called master, which runs the Salt-Stack master that all my other machines will connect to.
In a previous attempt to do this, I just had Vagrant set up a Salt minion which was instructed to install the salt master and a dns server package. But I'd like to simplify key transports by using Vagrant's facilities. So what I'd like to do is have Vagrant install a Salt master and a minion, so that the minion can install the dns server, and so that Vagrant can move my keys around for me.
Here is what master's configuration looks like, in the Vagrantfile:
config.vm.define :master do |master|
  master.vm.provider "virtualbox" do |vbox|
    vbox.cpus = 1
    vbox.memory = 384
  end

  master.vm.network "private_network", ip: "10.47.94.2"
  master.vm.network :forwarded_port, guest: 53, host: 53
  master.vm.hostname = "master"

  master.vm.provision :salt do |salt|
    salt.verbose = true
    salt.minion_config = "salt/master"
    salt.run_highstate = true
    salt.install_master = true
    salt.master_config = "salt/master"
    salt.master_key = "salt/keys/master.pem"
    salt.master_pub = "salt/keys/master.pub"
    salt.minion_key = "salt/keys/master.pem"
    salt.minion_pub = "salt/keys/master.pub"
    salt.seed_master = { master: "salt/keys/master.pub" }
    salt.run_overstate = true
  end
end
But I am getting the message:
Executing job with jid 20140403131604825601
-------------------------------------------
Execution is still running on master
Execution is still running on master
Execution is still running on master
Execution is still running on master
master:
Minion did not return
and when I look at master:/var/log/salt/minion, it's empty.
Is there an obvious error in my Vagrantfile configuration? Any hints?
I see that this has not been answered for a long time, so I'm posting my personal minimal Vagrant Salt master file here. This problem occurred to me once when I forgot to set up the "master: localhost" entry in the salt-minion configuration on the master (by default the minion looks for a host called "salt").
Please note that the minion on the master has its own key.
This was tested with Vagrant 1.7.2 and installs a Salt master 2015.5.0.
# Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-7.0"

  # Deployment instance: salt master
  config.vm.define :master do |master|
    master.vm.network :private_network, ip: "192.168.22.12"
    master.vm.hostname = 'master'
    master.vm.synced_folder "salt/roots/", "/srv/"

    master.vm.provision :salt do |config|
      config.install_master = true
      config.minion_config = "salt/minion"
      config.master_config = "salt/master"
      config.minion_key = "salt/keys/minion.pem"
      config.minion_pub = "salt/keys/minion.pub"
      config.master_key = "salt/keys/master.pem"
      config.master_pub = "salt/keys/master.pub"
      config.seed_master =
        {
          master: "salt/keys/minion.pub"
        }
      config.run_highstate = true
    end
  end
end
Master's configuration file:
# salt/master
# empty, use only defaults
Minion's configuration file on master:
# salt/minion
master: localhost