sed script with backslashes works in console or standalone script but not a Vagrantfile - bash

I have this line:
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
It's a really simple line for editing my phpMyAdmin config.inc.php to set AllowNoPassword to true (this is a dev environment, of course).
It works perfectly in the console, but in a script file used by Vagrant it simply does not.
I believe it has to do with the ' characters, but I cannot understand what the difference is.
What is going on here, and how do I solve it?
Edit
Here is a complete example, minus a few bits of logic to simplify it and remove some private details etc:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = "ubuntu/xenial64"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
config.vm.network "forwarded_port", guest: 443, host: 4343, host_ip: "127.0.0.1"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
# such as FTP and Heroku are also available. See the documentation at
# https://docs.vagrantup.com/v2/push/atlas.html for more information.
# config.push.define "atlas" do |push|
# push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
# end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision "shell", inline: <<-SHELL
add-apt-repository ppa:ondrej/php
apt-get update > /dev/null
yes | apt-get install zip
yes | apt-get install php7.1-fpm \
php7.1-curl \
php7.1-gd \
php7.1-mysql \
php7.1-mbstring \
php7.1-xml \
php7.1-mcrypt \
php7.1-soap \
php7.1-dev
# Set the right user for PHP7 ("ubuntu" user)
sed -i -e "s/www-data/ubuntu/g" /etc/php/7.1/fpm/pool.d/www.conf
# Fix error reporting so it is consistent with live server
sed -i 's/^error_reporting = .*/error_reporting = E_ALL/' /etc/php/7.1/fpm/php.ini
sed -i 's/^error_reporting = .*/error_reporting = E_ALL/' /etc/php/7.1/cli/php.ini
yes | apt-get install php-pear
pecl install xdebug
# Add xdebug to PHP runtime
echo 'zend_extension=xdebug.so' >> /etc/php/7.1/fpm/php.ini
echo 'zend_extension=xdebug.so' >> /etc/php/7.1/cli/php.ini
service php7.1-fpm restart
debconf-set-selections <<< 'mysql-server mysql-server/root_password password root'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password root'
yes | apt-get -y install mysql-server
mysqladmin -u root -p'root' password ''
SHELL
$script = <<-SCRIPT
cp config.sample.inc.php config.inc.php
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
SCRIPT
config.vm.provision "shell", inline: $script, privileged: false
#config.vm.provision :shell, path: "bootstrap.sh"
end

Use an escaped heredoc:
$script = <<-'SCRIPT'
cp config.sample.inc.php config.inc.php
sed -i 's/^\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = .*/\$cfg\['\''Servers'\''\]\[\$i\]\['\''AllowNoPassword'\''\] = true;/' config.inc.php
SCRIPT
The quotes around SCRIPT tell the Ruby interpreter that the heredoc body is literal -- taken precisely as given rather than subject to interpolation and backslash-escape processing. (Quoting the delimiter has the same meaning in shell heredocs.) Without the quotes, Ruby consumes the backslashes in the sed command before the script is ever handed to the shell, which is why the line works in a console but not in the Vagrantfile.
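The difference is easy to demonstrate with plain Ruby (a minimal sketch; the string content is only an illustration):

# Unquoted heredoc: Ruby processes backslash escapes and #{} interpolation
puts <<-EOS
\$cfg['Servers']
EOS
# prints: $cfg['Servers']  -- Ruby consumed the backslash before the shell ever saw it

# Quoted heredoc: the body is passed through literally
puts <<-'EOS'
\$cfg['Servers']
EOS
# prints: \$cfg['Servers']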

Related

Vagrant Centos 8 with nfs - System hangs on any sync folder operation

Vagrant 2.2.18
Modules
vagrant-timezone (1.3.0, global)
vagrant-vbguest (0.30.0, global)
Host
MacBook Pro with Monterey, but also seen on Windows 11
Issue: When I do a vagrant up, it seems to load fine and gets past the "Mounting NFS shared folders..." output line fine too. At the end of my Vagrantfile is a path to a provision shell script that does a bunch of setup on the Guest. This also starts to run fine without error.
When it gets to a part of that script that simply copies conf files from the share directory to various locations (such as copying an apache conf to /etc/httpd/conf.d/), the whole process stalls. No error, no timeout, I can leave it for hours and it just stops at simple copy commands.
I can vagrant ssh to the VM and it's responsive, but even doing an ls in / hangs too. It also hangs if I try to cd into /vagrant. But I can ls /etc/httpd/conf.d/ (the file copied by my script is not there).
I'm guessing this has something to do with the sync setup, but it was all working fine with a previous centos/7 box. I have, however, updated Vagrant and the Guest Additions since then. Can anyone see anything obvious in my Vagrantfile? Anything I could try?
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "centos/8"
config.vm.hostname = "mysite.local"
# Mount shared folder using NFS
config.vm.synced_folder ".", "/vagrant",
id: "core",
:nfs => true,
:mount_options => ['rw', 'vers=3', 'tcp', 'fsc', 'actimeo=2']
# The IP address of the VM
config.vm.network "private_network", ip: "192.168.100.130"
# Port forwarding for database. You can now connect to the VM DB using 'localhost' and port 65433
config.vm.network "forwarded_port", guest: 5432, host: 65434
# Assign a quarter of host memory and all available CPU's to VM
# Depending on host OS this has to be done differently.
config.vm.provider :virtualbox do |vb|
host = RbConfig::CONFIG['host_os']
# Mac
if host =~ /darwin/
cpus = `sysctl -n hw.ncpu`.to_i
mem = `sysctl -n hw.memsize`.to_i / 1024 / 1024 / 4
# Nix
elsif host =~ /linux/
cpus = `nproc`.to_i
mem = `grep 'MemTotal' /proc/meminfo | sed -e 's/MemTotal://' -e 's/ kB//'`.to_i / 1024 / 4
# Windows...
else
cpus = 4
mem = 2048
end
vb.customize ["modifyvm", :id, "--memory", mem]
vb.customize ["modifyvm", :id, "--cpus", cpus]
vb.customize ["modifyvm", :id, "--audio", "none"]
vb.customize ["guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 1000]
end
if Vagrant.has_plugin?("vagrant-vbguest")
config.vbguest.auto_update = false
end
# BOOTSTRAP - The OS/Environment
config.vm.provision :shell, :path => "Vagrant.bootstrap.sh"
end

How to export my vagrant local VM to a VM which is on my company server?

I have a VM working locally with Vagrant, but I have to put it on the server of the company, which gave me access to a VM there. I can access that VM over SSH, but how do I transfer my local Vagrant VM to it?
I am working on Windows and the VMs run Ubuntu.
Thanks
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
require "./source.rb"
ROOT_PATH = File.dirname(__FILE__)
VAGRANTFILE_API_VERSION = "2"
def configure_extra(config)
end

def configure(config)
  config.vm.box = "trusty64"
  config.vm.box_url = "https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  config.vm.network :forwarded_port, host: 8000, guest: 8000
  config.vm.network :forwarded_port, host: 9001, guest: 9001

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # If true, then any SSH connections made will enable agent forwarding.
  # Default value: false
  config.ssh.forward_agent = true

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  config.vm.synced_folder "./data", "/home/vagrant/data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  config.vm.provider "virtualbox" do |vb|
    # Boot with headless mode
    vb.gui = false
    host = RbConfig::CONFIG['host_os']
    # Giving a quarter of system memory to VM and access to all available cpu cores
    if host =~ /darwin/
      cpus = `sysctl -n hw.ncpu`.to_i
      # sysctl returns Bytes, converting to MB...
      mem = `sysctl -n hw.memsize`.to_i / 1024 / 1024 / 4
    elsif host =~ /linux/
      cpus = `nproc`.to_i
      # meminfo returns KB, converting to MB...
      mem = `grep 'MemTotal' /proc/meminfo | sed -e 's/MemTotal://' -e 's/ kB//'`.to_i / 1024 / 4
    else
      # hardcoding values for windows...
      cpus = 2
      mem = 1024
    end
    vb.customize ["modifyvm", :id, "--memory", mem]
    vb.customize ["modifyvm", :id, "--cpus", cpus]
  end

  # Provisioning
  config.vm.provision "shell" do |shell|
    vagrant_shell_scripts_configure(
      shell,
      File.join(ROOT_PATH, "scripts"),
      "provision.sh",
      {}
    )
  end
end

# Look for a Vagrantfile.local to load
local_vagrantfile = "#{__FILE__}.local"
if File.exists?(local_vagrantfile)
  eval File.read(local_vagrantfile)
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  configure config
  configure_extra config
end
Vagrant itself prepares the VM locally (network, IPs, and so on), but this is already configured in the VM provided by your company. What I usually do is run the scripts (shell scripts, for instance) directly on the external (or real) VM. You can also install a provisioning tool (Puppet, for instance) on the VM and run the scripts referenced in your Vagrantfile. If you can post the Vagrantfile here, that would help a lot.
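For example, a minimal sketch of the manual route (the user name, host, and script path here are illustrative assumptions, not from the question):

# copy the provisioning scripts to the company VM and run them over SSH
scp -r scripts/ user@company-vm:~/provision
ssh user@company-vm 'sudo bash ~/provision/provision.sh'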

Adding VM /etc/host entries that point to host machine with Vagrant and Puphpet

I know how to use vagrant-hostsupdater to add entries into the host's /etc/hosts file that point to the VM, but I'm actually trying to find a dynamic way to go the OTHER direction. On my machine, I have MySQL installed with a large db. I don't want to put this inside the VM, I need the VM to be able to access it.
I can easily set it up manually. After vagrant up, I can ssh into the VM, edit the /etc/hosts there, and make an entry like hostmachine.local pointing to my IP address at the time. However, as I move between home and work, my host machine's IP changes, so I constantly have to update that entry.
Is there a way within an .erb file or somehow to make a vagrant up take the IP of the host machine and make such an entry in a VM hosts file?
Here's one way to do it. Since the Vagrantfile is a Ruby script, we can use a bit of logic to find the local hostname and IP address. Then we use those in a simple provisioning script which adds them to the guest /etc/hosts file.
Example Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# test setting the host IP address into the guest /etc/hosts

# determine host IP address - may need some other magic here
# (ref: http://stackoverflow.com/questions/5029427/ruby-get-local-ip-nix)
require 'socket'
def my_first_private_ipv4
  Socket.ip_address_list.detect{|intf| intf.ipv4_private?}
end
ip = my_first_private_ipv4.ip_address()

# determine host name - may need some other magic here
hostname = `hostname`

script = <<SCRIPT
echo "#{ip} #{hostname}" | tee -a /etc/hosts
SCRIPT

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.hostname = "iptest"
  config.vm.provision :shell, :inline => script

  config.vm.provider "virtualbox" do |vb|
    # vb.gui = true
    vb.name = "iptest"
    vb.customize ["modifyvm", :id, "--memory", "1000"]
  end
end
Note: the echo | tee -a command adding to /etc/hosts will keep appending if you provision multiple times (without destroying the VM). You might need a better solution there if you run into that.
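One way to make that idempotent (a sketch reusing the ip and hostname variables from above; grep -qF checks for the literal IP and skips the append when it is already present):

script = <<SCRIPT
grep -qF "#{ip}" /etc/hosts || echo "#{ip} #{hostname}" | tee -a /etc/hosts
SCRIPT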
Another possible solution is to use the vagrant-hosts plugin. Host IP can be found the same way BrianC showed in his answer.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'socket'

def my_first_private_ipv4
  Socket.ip_address_list.detect{|intf| intf.ipv4_private?}
end

host_ip = my_first_private_ipv4.ip_address()

Vagrant.configure(2) do |config|
  config.vm.define "web", primary: true do |a|
    a.vm.box = "ubuntu/trusty64"
    a.vm.hostname = "web.local"
    a.vm.provider "virtualbox" do |vb|
      vb.memory = 2048
      vb.cpus = 1
    end
    a.vm.provision :hosts do |provisioner|
      provisioner.add_host host_ip, ['host.machine']
    end
  end
end
The provisioner will add a row to the VM's /etc/hosts file, mapping the host machine's IP address to host.machine. Running the provisioner multiple times will not result in duplicate lines in /etc/hosts.
I actually found a simpler solution for my situation: the VM runs on a Mac, and my local.db.yml file is not part of the source code. Instead of using a name/IP pair, I was able to go to the Mac's System Preferences and find the local network name for my computer, i.e. Kris-White-Mac.local.
This name resolves both inside and outside of the VM, so by putting it in place of localhost or 127.0.0.1, things keep working even when my IP changes.
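A quick sanity check that such a name resolves on both sides (using the example name from above):

# run on the host, then again inside the guest after `vagrant ssh`
ping -c 1 Kris-White-Mac.local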

Is it possible to run a script on a virtual machine after Vagrant finishes provisioning all of them?

I am using Vagrant v1.5.1 to create a cluster of virtual machines (VMs). After all the VMs are provisioned, is it possible to run a single script on one of the machines? The script I want to run will setup passwordless SSH from one VM to all the other VMs.
For example, my nodes provisioned in Vagrant (CentOS 6.5) are as follows.
node1
node2
node3
node4
My Vagrantfile looks like the following.
(1..4).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.box = "centos65"
    ...omitted..
  end
end
After all this is done, I then need to run a script on node1 to enable passwordless SSH to node2, node3, and node4.
I know you can run scripts as each VM is being provisioned, but in this case, I want to run a script after all VMs are provisioned, since I need all VMs to be up and running to run this last script.
Is this possible in Vagrant?
I realized that I can also iterate backwards:
r = 4..1
(r.first).downto(r.last).each do |i|
  config.vm.define "node-#{i}" do |node|
    node.vm.box = "centos65"
    ...omitted..
    if i == 1
      node.vm.provision "shell" do |s|
        s.path = "/path/to/script.sh"
      end
    end
  end
end
This will work great, but, in reality, I also need to setup passwordless SSH from node2 to node1, node3, and node4. In the approach above, this could only ever work for node1, but not for node2 (since node1 will not be provisioned).
If there's a Vagrant plugin to allow password SSH between all nodes in my cluster, that would even be better.
The question is a year old; anyway, I found it because I had the same problem, so here is the workaround I used to solve it. Somebody might find it useful.
We need "vagrant triggers" for this to work. The thing with vagrant triggers is that they fire for every machine you are creating, but we want to determine the moment ALL machines are up. We can do that by checking, on each up event, whether that event corresponds to the last machine being created:
Vagrant.configure("2") do |config|
(1..$machine_count).each do |i|
config.vm.define vm_name = "w%d" % i do |worker|
worker.vm.hostname = vm_name
workerIP = IP
worker.vm.network :private_network, ip: workerIP
worker.trigger.after :up do
if(i == $machine_count) then
info "last machine is up"
run_remote "bash /vagrant/YOUR_SCRIPT.sh"
end
end
end
end
end
This works for providers that do not support parallel execution in Vagrant (VirtualBox, VMware).
There is no hook in Vagrant for "run after all VMs are provisioned", so you would need to implement it yourself. A couple options I can think of:
1: Run the SSH setup script after all VMs are running.
For example if the script was named ssh_setup.sh and present in the shared folder:
$ for i in {1..4}; do vagrant ssh node$i -c 'sudo /vagrant/ssh_setup.sh'; done
2: Use the same SSH keys for all hosts and set them up during provisioning
If all nodes share the same passphrase-less SSH key, you could copy the needed files (authorized_keys, id_rsa, etc.) into ~/.ssh, for example with a shell provisioner like the sketch below.
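A minimal sketch of that idea as an inline shell provisioner (the shared ./ssh folder and key file names are assumptions for illustration):

config.vm.provision "shell", privileged: false, inline: <<-'SHELL'
  # install the pre-generated shared key pair and authorize it for inter-node SSH
  cp /vagrant/ssh/id_rsa /vagrant/ssh/id_rsa.pub ~/.ssh/
  cat /vagrant/ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/id_rsa ~/.ssh/authorized_keys
SHELL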
Adding an updated answer.
The vagrant-triggers plugin was merged into Vagrant 2.1.0 in May 2018.
We can simply use the only_on option from the trigger class.
Let's say we have the following configuration:
servers = [
  {:hostname => "net1", :ip => "192.168.11.11"},
  {:hostname => "net2", :ip => "192.168.22.11"},
  {:hostname => "net3", :ip => "192.168.33.11"}
]
We can now easily execute the trigger after the last machine is up:
# Take the hostname of the last machine in the array
last_vm = servers[servers.length - 1][:hostname]

Vagrant.configure(2) do |config|
  servers.each do |machine|
    config.vm.define machine[:hostname] do |node|
      # ----- Common configuration ----- #
      node.vm.box = "debian/jessie64"
      node.vm.hostname = machine[:hostname]
      node.vm.network "private_network", ip: machine[:ip]

      # ----- Adding trigger - only after last VM is UP ------ #
      node.trigger.after :up do |trigger|
        trigger.only_on = last_vm # <---- Just use it here!
        trigger.info = "Running only after last machine is up!"
      end
    end
  end
end
And we can check the output and see that the trigger really fires only after "net3" is up:
==> net3: Setting hostname...
==> net3: Configuring and enabling network interfaces...
==> net3: Installing rsync to the VM...
==> net3: Rsyncing folder: /home/rotem/workspaces/playground/vagrant/learning-network-modes/testing/ => /vagrant
==> net3: Running action triggers after up ...
==> net3: Running trigger...
==> net3: Running only after last machine is up!
This worked pretty well for me: I used per-VM provisioning scripts, and in the last script I called a post-provision script via SSH on the first VM.
In Vagrantfile:
require 'fileutils'

Vagrant.require_version ">= 1.6.0"

$max_nodes = 2
$vm_name = "vm_prefix"
#...<skipped some lines that are not relevant to the case >...

Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true
  config.ssh.insert_key = false
  # ubuntu 16.04
  config.vm.box = "ubuntu/xenial64"
  (1..$max_nodes).each do |i|
    config.vm.define vm_name = "%s-%02d" % [$vm_name, i] do |config|
      config.vm.hostname = vm_name
      config.vm.network "private_network", ip: "10.10.0.%02d" % [i+20], :name => 'vboxnet2'
      config.vm.network :forwarded_port, guest: 22, host: "1%02d22" % [i+20], id: "ssh"
      config.vm.synced_folder "./shared", "/host-shared"
      config.vm.provider :virtualbox do |vb|
        vb.name = vm_name
        vb.gui = false
        vb.memory = 4096
        vb.cpus = 2
        vb.customize ["modifyvm", :id, "--cpuexecutioncap", "100"]
        vb.linked_clone = true
      end
      # Important part:
      config.vm.provision "shell", path: "common_provision.sh"
      config.vm.provision "shell", path: "per_vm_provision#{i}.sh"
    end
  end
end
On disk:
(ensure that post_provision.sh has at least owner-execute permissions, e.g. rwxr--r--)
vm$ ls /vagrant/
...<skipped some lines that are not relevant to the case >...
config.sh
common_provision.sh
per_vm_provision1.sh
per_vm_provision2.sh
per_vm_provision3.sh
...
per_vm_provisionN.sh
post_provision.sh
Vagrantfile
...<skipped some lines that are not relevant to the case >...
In config.sh:
num_vm="2" # should equal the $max_nodes in Vagrantfile
name_vm="vm_prefix" # should equal the $vm_name in Vagrantfile
username="user1"
userpass="abc123"
...<skipped some lines that are not relevant to the case >...
In common_provision.sh:
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
sed -r -i 's/\%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/' /etc/sudoers
sed -r -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
service ssh reload
# add user ${username}
useradd --create-home --home-dir /home/${username} --shell /bin/bash ${username}
usermod -aG admin ${username}
usermod -aG sudo ${username}
/bin/bash -c "echo -e \"${userpass}\n${userpass}\" | passwd ${username}"
# provision additional ssh keys
# copy ssh keys from disk
cp /vagrant/ssh/* /home/vagrant/.ssh
cat /vagrant/ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
mkdir /home/${username}/.ssh
cp /vagrant/ssh/* /home/${username}/.ssh
cat /vagrant/ssh/id_rsa.pub >> /home/${username}/.ssh/authorized_keys
# not required, just for convenience
cat >> /etc/hosts <<EOF
10.10.0.21 ${name_vm}-01
10.10.0.22 ${name_vm}-02
10.10.0.23 ${name_vm}-03
...
10.10.0.2N ${name_vm}-0N
EOF
...<skipped some lines that are not relevant to the case >...
In per_vm_provision2.sh:
#!/bin/bash
# import variables from config
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
# check if this is the last provisioned vm
if [ "x${num_vm}" = "x2" ] ; then
ssh vagrant@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
fi
In per_vm_provisionN.sh:
#!/bin/bash
# import variables from config
source /vagrant/config.sh
...<skipped some lines that are not relevant to the case >...
# check if this is the last provisioned vm. N represents the highest number
if [ "x${num_vm}" = "xN" ] ; then
ssh vagrant@10.10.0.21 -o StrictHostKeyChecking=no -- '/vagrant/post_provision.sh'
fi
I hope I didn't skip anything important, but I think the idea is clear in general.
Note: SSH keys for inter-VM access are provisioned by Vagrant by default. You can add your own SSH keys if needed using common_provision.sh.

Where to store Ansible host file on Mac OS X

I am trying to get started with Ansible to provision my Vagrant box, but I can't figure out how to deal with host files.
According to the documentation, the host file should be stored in /etc/ansible/hosts, but I can't find this on my system (Mac OS X). I have also seen examples where a host.ini file is situated in the document root adjacent to the Vagrantfile.
So my question is: where would you store your host file for setting up a single Vagrant box?
While Ansible will try /etc/ansible/hosts by default, there are several ways to tell Ansible where to look for an alternate inventory file:
use the -i command line switch and pass your inventory file path
add inventory = path_to_hostfile in the [defaults] section of your ~/.ansible.cfg configuration file
use export ANSIBLE_HOSTS=path_to_hostfile as suggested by DomaNitro in his answer
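For example, the ~/.ansible.cfg variant looks like this (the inventory path is illustrative):

[defaults]
inventory = /path/to/hostfile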
Now you don't mention if you want to use the Ansible provisioner available in Vagrant, or if you want to provision your Vagrant host manually.
Let's go with the Vagrant Ansible provisioner first:
Create a directory (e.g. test), and create a Vagrantfile inside:
Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "precise64-v1.2"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.define :webapp do |webapp|
webapp.vm.hostname = "webapp.local"
webapp.vm.network :private_network, ip: "192.168.123.2"
webapp.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 200, "--name", "vagrant-docs", "--natdnshostresolver1", "on"]
end
end
#
# Provisionning
#
config.vm.provision :ansible do |ansible|
ansible.playbook = "provision.yml"
ansible.inventory_path = "hosts"
ansible.sudo = true
#
# Use anible.tags if you want to restrict what `vagrant provision`
# Here is a list of possible tags
# ansible.tags = "foo bar"
#
# Use ansible.verbose to see detailled output for ansible runs
# ansible.verbose = 'vvv'
#
# Customize your stuff here
ansible.extra_vars = {
some_var: 42,
foo: "bar",
}
end
end
Now when you run vagrant up (or vagrant provision), Vagrant's Ansible provisioner will look for a file named hosts in the same directory as the Vagrantfile, and will try to apply the provision.yml playbook.
You can also run it manually, without resorting to Vagrant's Ansible provisioner:
ansible-playbook -i hosts provision.yml --ask-pass --sudo
Note that the Vagrant + VirtualBox + Ansible trio does not always get along well. There are some version combinations that are problematic. Try to upgrade to the latest versions if you experience issues (especially regarding networking).
{shameless_plug} You can find a more extensive example mixing Vagrant and Ansible here {/shameless_plug}
Good luck!
If you used Brew to install Ansible, you'll most likely find the default hosts file at /usr/local/etc/ansible/hosts. But, as others pointed out, you may just want to change the default.
I like to use bash environment variables, as my base project is shared with other users.
You can simply export ANSIBLE_HOSTS=/pathTo/inventory/; this can be a host file or a directory with multiple files.
You can also write it in your ~/.bash_profile so it's persistent.
A bunch of other variables can be set that way instead of maintaining a conf file; for more info, check the source in ansible/lib/ansible/constants.py.
Here is a description of what to do after installing Ansible on a Mac; it worked for me: ansible-tips-and-tricks.readthedocs.io
I downloaded the ansible.cfg file to
/Users/"yourUser"/.ansible
and afterwards you can edit the ansible.cfg file by uncommenting the
inventory = /Users/"yourUser"/.ansible
line and specifying the path to the ansible folder as shown above. You can create the hosts file in this folder as well. To try it out locally, you can put
localhost ansible_connection=local
in the hosts file and try it out with
ansible -m ping all
If you use Vagrant's ansible provisioner, Vagrant will automatically generate an Ansible hosts file (called vagrant_ansible_inventory_default) and configure ansible-playbook to use that file. It looks like this:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
It calls the Vagrant host "default", so your plays should refer to either "default" or "all".
On Mac I used sudo find / -type d -name "ansible" 2> /dev/null to find it, but I didn't have an .../ansible/hosts folder out of the box, maybe because I installed using brew as mentioned above, so I created one at .../etc/ansible/hosts.
