I have the following line in my Vagrantfile:
config.vm.provision :shell, path: "provisioning/myscript.sh"
I would like to toggle running that script based on an environment variable set on the host (which may not be present). If the variable is present and equals 'true' I want the script skipped; otherwise it should run, e.g.
if ENV['SKIP_MY_SCRIPT'] != 'true'
  config.vm.provision :shell, path: "provisioning/myscript.sh"
end
Or is there a better way (e.g. pass env into the script somehow)?
Since a Vagrantfile is essentially a Ruby script, there is nothing wrong with having conditional statements in it.
To answer the specific question (how to pass a host environment variable to the provisioning script, which can help in other use cases): you can pass arguments like
username = `whoami`.chomp
config.vm.provision "shell", privileged: false, path: "provisioning/config-git.sh", args: "#{username}"
and in your script, you read as
#!/usr/bin/env bash
username=$1
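Combining the two ideas, here is a runnable sketch of what the provisioning script itself could look like if the Vagrantfile forwards the host variable as an argument. The SKIP_MY_SCRIPT name comes from the question; the echo lines are placeholders for real provisioning steps:

```shell
#!/usr/bin/env bash
# Sketch of provisioning/myscript.sh: $1 carries the host's SKIP_MY_SCRIPT
# value, forwarded from the Vagrantfile, e.g.
#   config.vm.provision :shell, path: "provisioning/myscript.sh",
#                       args: [ENV['SKIP_MY_SCRIPT'].to_s]
run_provisioning() {
  if [ "${1:-}" = "true" ]; then
    echo "SKIP_MY_SCRIPT is true, skipping"
    return 0
  fi
  echo "running provisioning steps"
}

run_provisioning "$@"
```

Using ENV['SKIP_MY_SCRIPT'].to_s on the Vagrantfile side keeps the argument list valid even when the variable is unset on the host.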
Related
I'm running a vagrant provisioning script and I'm trying to set the desktop background, but I can't get gsettings to take. It works fine at the command line, just not from the script. Everything else in the Vagrantfile works fine.
config.vm.provision "file", source: "image.jpg", destination: "~/image.jpg"
I use the file provision to move the file over, and then call gsettings from the non-sudo bootstrap.sh.
config.vm.provision :shell, path: "sudo-bootstrap.sh"
config.vm.provision :shell, path: "bootstrap.sh", privileged: false
In bootstrap.sh:
gsettings set org.gnome.desktop.background picture-uri file:///home/vagrant/image.jpg
I read about the DBUS session bus address not being set, but adding the line to get the PID didn't work in the provisioning script. I also found it might be a matter of missing schemas, but I don't have any schemas in .local.
Been hammering at this for a few hours now, no idea what I'm missing.
While the accepted answer to the DBUS session question didn't work for me, the linked answer under it did: run the command under dbus-launch.
dbus-launch gsettings set org.gnome.desktop.background picture-uri file:///home/vagrant/image.jpg
I'm using Vagrant and Ansible to create and provision an environment. I've gotten everything working, but a couple of commands take a long time to execute. Since Ansible doesn't provide a way to see the live output of a shell command, I decided to split those commands out into separate shell scripts and execute them as shell provisioners so I can see their output. This worked when I experimented by putting the shell provisioner at the end of the Vagrantfile (after the Ansible provisioner), but it causes problems if I break up the process. Here's a high-level pseudo example:
I have 3 playbooks: setup.yml, post-download.yml, and post-sample-data.yml
The desired flow goes like this:
Vagrantfile
provisioner: "ansible", playbook "setup.yml"
- Tasks...
- Create shell scripts for upcoming shell provisioners...
- meta: end_play
provisioner: "shell", inline: "bin/bash /path/to/created/shell/script"
(run script)
provisioner: "ansible", playbook "post-download.yml"
- Tasks...
- meta: end_play
provisioner: "shell", inline: "bin/bash /path/to/created/shell/script"
(run script)
provisioner: "ansible", playbook "post-sample-data.yml"
- Tasks...
- meta: end_play
provisioner: "shell", inline: "bin/bash /path/to/created/shell/script"
(run script)
end
When I run vagrant provision with this idea in mind, I get the following error on the first shell provisioner attempt:
/tmp/vagrant-shell: line 1: bin/sh: No such file or directory
Based on the error message, my assumption is that the vagrant shell is unable to react to changes made on the server during the Vagrantfile execution; ergo, it can't find the created shell scripts to run as provisioners after the initial ansible provisioner runs. Is that what's happening, or is there a way I can make this approach work?
In case it helps, here's the actual code from my Vagrantfile:
# Kick off the pre-install Ansible provisioner
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "ansible/setup.yml"
end
# Kick off the installation, and sample data shell scripts so we can get terminal output
if settings['project']['install_method'] == 'install' || settings['project']['install_method'] == 'reinstall'
config.vm.provision "shell", inline: "bin/sh #{settings['installation']['directory']}/download.sh"
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "ansible/post-download.yml"
end
config.vm.provision "shell", inline: "bin/sh #{settings['installation']['directory']}/install.sh"
end
# Kick off the sample data shell script to download the sample data packages so we can get terminal output
if settings['use_sample_data'] == true
config.vm.provision "shell", inline: "bin/sh #{settings['installation']['directory']}/sample-data.sh"
end
# Kick off the post-sample-data Ansible provisioner
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "ansible/post-sample-data.yml"
end
# Kick off the cache warmer script so we can get terminal output
if settings['project']['warm_cache'] == true
config.vm.provision "shell", inline: "/bin/sh #{settings['installation']['directory']}/cache-warmer.sh"
end
Thanks to the comment by @tux above, I can confirm that this approach works for showing output between playbooks, so long as the Ansible project is well-structured.
For those curious, here's the updated version of the Vagrantfile:
# Kick off the pre-install Ansible provisioner
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "ansible/setup.yml"
end
# Kick off the installation, and sample data shell scripts so we can get terminal output
if settings['project']['install_method'] == 'install' || settings['project']['install_method'] == 'reinstall'
config.vm.provision "shell", privileged: false, inline: "/bin/sh #{settings['installation']['directory']}/download.sh"
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "ansible/post-download.yml"
end
config.vm.provision "shell", privileged: false, inline: "/bin/sh #{settings['installation']['directory']}/install.sh"
end
# Kick off the sample data shell script to download the sample data packages so we can get terminal output
if settings['use_sample_data'] == true
config.vm.provision "shell", privileged: false, inline: "/bin/sh #{settings['installation']['directory']}/sample-data.sh"
end
# Kick off the post-sample-data Ansible provisioner
config.vm.provision "ansible_local" do |ansible|
ansible.playbook = "ansible/post-sample-data.yml"
end
# Kick off the cache warmer script so we can get terminal output
if settings['project']['warm_cache'] == true
config.vm.provision "shell", inline: "/bin/sh #{settings['installation']['directory']}/cache-warmer.sh"
end
Note the use of privileged: false in all but the last script provisioner; this is necessary if you don't want the script executed as the root user. Note also that the inline commands now use the absolute path /bin/sh: the relative bin/sh in the first version is what produced the "No such file or directory" error.
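For anyone hitting the same message: bin/sh (without the leading slash) is a relative path, resolved against the shell's current working directory, which is exactly what the error in the question reports. A minimal sketch of the difference, run from an empty scratch directory:

```shell
# A relative interpreter path only resolves if ./bin/sh exists in the
# current directory; the absolute /bin/sh works from anywhere.
cd "$(mktemp -d)"                            # an empty scratch directory
if bin/sh -c 'echo relative ok' 2>/dev/null; then
  echo "unexpected: ./bin/sh exists here"
else
  echo "bin/sh: No such file or directory"   # what the provisioner reported
fi
/bin/sh -c 'echo absolute ok'
```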
I have a fairly vanilla Vagrant setup running trusty64. It's being configured by a single shell file. Among others, it contains an aliasing of python3 to python and pip3 to pip, respectively:
echo "Writing aliases to profile:"
echo "alias python=\"python3\"" >> ~/.profile
echo "alias pip=pip3" >> ~/.profile
. ~/.profile
For some mysterious reason, these lines never make it into ~/.profile. There is no error message, nor any other commotion, it's just that nothing happens. This being 2am, I am fairly sure I'm doing something wrong, I just can't figure out what it is.
I am pretty sure you're calling the provisioner with something like
config.vm.provision "shell", path: "bootstrap.sh"
This works, but it's executed as the root user, so those lines are appended to root's profile only. You want to use the privileged option:
privileged (boolean) - Specifies whether to execute the shell script
as a privileged user or not (sudo). By default this is "true".
config.vm.provision "shell", path: "bootstrap.sh", privileged: false
will execute the script as the vagrant user and add the lines to the /home/vagrant/.profile file.
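The mechanics can be sketched without Vagrant at all: ~ and $HOME expand against the executing user's home directory, so the same append line targets different files depending on who runs the script. The helper below simulates both modes by overriding HOME with illustrative paths:

```shell
# Simulate the provisioner running as root vs. as the vagrant user by
# overriding HOME for a child shell.
simulate_bootstrap() {
  HOME="$1" sh -c 'echo "aliases appended to $HOME/.profile"'
}
simulate_bootstrap /root          # privileged: true (the default)
simulate_bootstrap /home/vagrant  # privileged: false
```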
I installed StackEdit on Vagrant. I would like to start Vagrant and StackEdit by one click.
I created a bash script:
#!/bin/bash
vagrant up
#ssh -p 2222 -i /d/stackedit/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -t '/home/vagrant/Code/start_server.sh'
start "C:\Program Files\Mozilla Firefox\firefox.exe" http://stackedit.app:5000
and start_server.sh in the VM:
if ! pgrep -x node > /dev/null ; then
  (export PORT=5000 && node Code/Project/public/stackedit/server.js) &
fi
sleep 5
exit 0
If I run start_server.sh via ssh manually, everything works, but when I try it with ssh from the start script (the commented-out line above), the server doesn't run.
I tried copying the script to /etc/rc.local, but the result is the same.
I also tried adding @reboot /home/vagrant/Code/start_server.sh to crontab -e, but without success.
Can anyone help me?
My system is Windows 10. I use Git Bash.
You should put everything in your Vagrantfile.
# Run provisioning
You can run your script from the Vagrantfile using a shell provisioner:
Vagrant.configure("2") do |config|
config.vm.provision "shell", path: "Code/start_server.sh"
end
Check the options: by default the script runs as root, so you can change that if you want to run your script as the vagrant user:
Vagrant.configure("2") do |config|
config.vm.provision "shell", path: "Code/start_server.sh", privileged: false
end
You can also make sure your script runs every time you boot the VM (by default it runs only once, or when you specifically call vagrant provision):
Vagrant.configure("2") do |config|
config.vm.provision "shell", path: "Code/start_server.sh", run: "always"
end
# Opening the website after the system is running
The Vagrantfile is a Ruby script, so you can call any command from it, but a bare command would run immediately, on every vagrant invocation.
If you want to run something only after the box has started, you can use a Vagrant trigger and do something like:
Vagrant.configure(2) do |config|
.....
config.trigger.after :up do |trigger|
trigger.run = { inline: 'open http://stackedit.app:5000' } # on Windows, use "start" instead of "open"
end
end
How can I run a script automatically when Vagrant comes up? I used the provision method, but it requires pointing at an .sh file, and I don't want to point to an .sh file. I want to build the script within the Vagrantfile itself. Please help me fix this issue.
I tried
Vagrant::Config.run do |config|
config.vm.provision :shell, :path => "test.sh"
end
I want to append the script in test.sh into the Vagrantfile directly.
You can use an inline script in the Vagrantfile; a here document even makes it possible to embed complex shell scripts.
Example:
$script = <<'EOF'
echo shell provisioning...
date -R > /etc/vagrant_provisioned_at
EOF
Vagrant.configure("2") do |config|
config.vm.provision :shell, :inline => $script
end
NOTE: the single-quoted limit string ('EOF') is there to stop special characters like \, $ or ` from being interpolated.
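The same behaviour can be demonstrated in plain shell, where a quoted heredoc delimiter likewise suppresses expansion (Ruby's <<'EOF' is analogous for #{} interpolation):

```shell
# With a quoted delimiter ('EOF') the body is passed through literally:
# $HOSTNAME below is NOT expanded here, so it reaches the guest shell intact.
cat <<'EOF'
date -R > /etc/vagrant_provisioned_at
echo "provisioned on $HOSTNAME"
EOF
```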
Check out the docs: http://docs.vagrantup.com/v2/provisioning/shell.html