Are Vagrant shell provisioner scripts idempotent or OTO?

I want to write a Vagrantfile and an accompanying shell script so that the script executes only the very first time a user runs vagrant up for that VM. That's because this shell script will install all sorts of system services that should only be installed once.
According to the shell provisioner docs, it looks like I might be able to do something like:
Vagrant.configure("2") do |config|
  config.vm.provision "shell", path: "init-services.sh"
end
However, from the docs I can't tell if init-services.sh will be executed every time a user does a vagrant up (in which case I need to write it carefully so as to be idempotent), or whether it truly only executes the script one time, when the box is first being provisioned.
And, if it does only execute the script one time, then how does Vagrant handle updates to the script (if we want to, say, add a new service to the machine)?

However, from the docs I can't tell if init-services.sh will be executed every time a user does a vagrant up (in which case I need to write it carefully so as to be idempotent), or whether it truly only executes the script one time, when the box is first being provisioned.
Yes, the script will be executed only the first time the machine is spun up during vagrant up. There is an option if you want to run it every time (even though that's not what you want in this case):
Vagrant.configure("2") do |config|
  config.vm.provision "shell", path: "init-services.sh", run: "always"
end
And, if it does only execute the script one time, then how does Vagrant handle updates to the script (if we want to, say, add a new service to the machine)?
There are two commands you can use for this:
A specific call to vagrant provision will force the script to run whether or not the machine has already been provisioned.
Calling vagrant up --provision when spinning up an existing VM will run the provisioning script.
On this point, though, Vagrant will not check what has changed in your script; it will just run the whole script again. If you need to run only a specific update, you will need to manage that yourself in your script file.
You can read a bit more about how provisioning works in the docs.
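If you do re-run the script with vagrant provision, a simple guard inside the script keeps repeated runs safe. A minimal sketch, assuming a marker file path of our own choosing (not something Vagrant provides):

```shell
#!/bin/sh
# Guard sketch for init-services.sh: the marker file records that the
# one-time installation already happened, so re-provisioning skips it.
# The marker path is an assumption; pick somewhere persistent in the VM.
install_once() {
  marker="$1"
  if [ -f "$marker" ]; then
    echo "already provisioned, skipping"
  else
    # ... one-time installation of system services goes here ...
    touch "$marker"
    echo "provisioned"
  fi
}

install_once "${MARKER:-/tmp/init-services.done}"
```

Steps added below the guard (or keyed to marker files of their own) still run when you later extend the script and call vagrant provision again.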

Related

how to check if vagrant machine is running through the shell

I am new to Vagrant and almost new to Linux. I am practicing to learn about Vagrant, and so I have a Vagrantfile which starts three machines. The first machine executes a shell script as its provisioning method, but the last two commands need to run only when the other two machines are up, so it needs to skip those commands and come back to them once the other two machines are running. I have gone through the Vagrant documentation and have not been able to find whether this is possible. Is there any way to do this?
There may be a more robust solution, but I'm using:
vagrant status | grep "is running"
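A slightly sturdier variant (an assumption on my part, not from the original answer) is to parse Vagrant's machine-readable output, where each line has the form timestamp,target,type,data:

```shell
# Returns success if the named machine reports state "running".
# Feed it the output of: vagrant status <name> --machine-readable
is_running() {
  grep -q ",$1,state,running"
}

# real use (machine name "web" is a placeholder):
#   vagrant status web --machine-readable | is_running web
```

Matching on the comma-separated fields avoids false positives from the free-form text in the human-readable output.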

Ansible start-at-task from within Vagrant

Is there a method to using Ansible's start-at-task from within a Vagrantfile? I want to specify the exact task to start at for debugging purposes. I realize host vars will be missing, this is fine. Other similar questions don't seem to be asking exactly this.
One idea is to set an environment variable, which Vagrant reads and passes to the playbook, i.e.:
# export START_TASK='task-name'
# Run: "vagrant provision --provision-with resume"
config.vm.provision "resume", type: "ansible_local" do |resume|
  resume.playbook = "playbooks/playbook.yml --start-at-task=ENV['START_TASK']"
end
The playbook command doesn't parse the environment variable like that, but that is essentially the command I'm trying to run. I'm basically just trying to read that environment variable and pass it to Vagrant's Ansible provisioner.
Note: retry on the playbook only re-runs the entire failed playbook for that single host, not just a single task, so that's not a solution.
I just needed to add the following, which I couldn't find anywhere in Vagrant's documentation:
resume.start_at_task = ENV['START_AT_TASK']
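Since an unset (or typo'd) variable silently makes Ansible start from the first task, a tiny wrapper on the host can fail fast instead. A sketch; the wrapper itself is an assumption, not part of Vagrant:

```shell
# Refuse to provision unless START_AT_TASK is set, then run only the
# "resume" provisioner defined in the Vagrantfile above.
resume_at() {
  if [ -z "${START_AT_TASK:-}" ]; then
    echo "set START_AT_TASK first, e.g. START_AT_TASK='Install nginx'" >&2
    return 1
  fi
  vagrant provision --provision-with resume
}
```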

How to check for a precondition when starting a vagrant machine?

I have a multi-machine Vagrantfile, and I would like to do a quick check when the user runs vagrant up machineB, and exit with an error message if it fails.
The specific test I have in mind is to curl some URL and verify a 200 response, but I don't think the details should matter.
The idea is to save the user the time it takes to start the machine, sync some folders and run the provision script, only to discover that a required resource is not available.
So far the best idea I have is to put this check at the beginning of the provision script, but I'd like to do it earlier.
I know I can just check at the beginning of the Vagrantfile, kind of like how it is done here, but then the check will run on every vagrant command, and I'd like it to run specifically only when trying to start machineB.
You can run your condition in the specific block for your machineB so it will run only when you call commands for machineB.
You can check the ARGV[0] argument from the command line to make sure the command is up.
This will look something like:
Vagrant.configure("2") do |config|
  config.vm.box = "xxx"
  config.ssh.username = "xxx"

  config.vm.define "machineA" do |db|
    db.vm.hostname = "xxx"
    p system("curl http://www.google.fr")
    # and your condition here
  end

  config.vm.define "machineB", primary: true do |app|
    app.vm.hostname = "xxx"
    if "up".eql? ARGV[0]
      p system("curl http://www.google.fr")
      # and your condition here
    end
  end
end
I'm not sure exactly what you want to do with the result of the curl, but you could also use Ruby's net/http library instead.
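A generalization of that check (this helper is an assumption for illustration): wrap the probe command so either the Vagrantfile, via system, or a script can abort with a clear message. With curl, the -f flag makes the exit status non-zero on HTTP errors, which approximates the "verify a 200 response" test:

```shell
# Runs the probe command given as arguments; reports loudly on failure.
check_precondition() {
  if "$@"; then
    echo "precondition OK"
  else
    echo "precondition failed: required resource unavailable" >&2
    return 1
  fi
}

# real use: check_precondition curl -fsS -o /dev/null http://myserver/health
# ("http://myserver/health" is a placeholder URL)
```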

Vagrant: How can you run scripts on the host via commands in the guest shell?

It's possible to open ports and share files over the network, and there are plug-ins that allow running guest or host shell commands during Vagrant's provisioning process.
What I'd like to do is be able to (perhaps through a bash alias) run a command in the Vagrant guest/VM, and have this execute a command on the host, ideally with a variable being passed on the command line.
Example: In my host I run the Atom editor (same applies to TextMate, whatever). If I want to work on a shared file in the VM, I have to manually open that file from over in the host, either by opening it directly in the editor, or running the 'atom filename' shell command.
I want parity, so while inside the VM, I can run 'atom filename', and this will pass the filename to the 'atom $1' script outside of the VM, in the host, and open it in my host editor (Atom).
Note: We use Salt for Vagrant Provisioning, and NFS for mounting, for what it's worth. And of course, ssh with key.
Bonus question: Making this work with .gitconfig as its merge conflict editor (should just work, if the former is possible, right?).
This is a very interesting use case that I haven't heard before. There isn't a native method of handling this in Vagrant, but this functionality was added to Packer in the form of a local shell provisioner. You could open a GitHub issue on the Vagrant project and propose the same feature. Double check the current list of issues, though, because it's possible someone has beaten you to it.
In the meantime, though, you do have a workaround if you're determined to do this...
Create an ssh key pair on your host.
Use Salt to add the private key in /home/vagrant/.ssh on the box.
Use a shell provisioner to run remote ssh commands on the host from the guest.
These commands would take the form of...
ssh username@192.168.0.1 "ls -l ~"
In my experience, the 192.168.0.1 IP always points back to the host, but your mileage may vary. I'm not a networking expert by any means.
I hope this works for you and I think a local shell provisioner for Vagrant would be a reasonable feature.
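For the 'atom filename' parity described in the question, the guest-side command could be a small wrapper script on top of the ssh workaround above. A sketch; the install path, host user, and host IP are all assumptions (10.0.2.2 is the usual host address on a VirtualBox NAT network, while the answer's 192.168.0.1 may apply on yours):

```shell
#!/bin/sh
# Guest-side wrapper, e.g. saved as /usr/local/bin/atom inside the VM.
# Forwards the file name to the editor running on the host over ssh.
HOST_USER="${HOST_USER:-youruser}"
HOST_IP="${HOST_IP:-10.0.2.2}"

open_on_host() {
  cmd="ssh ${HOST_USER}@${HOST_IP} atom $1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"   # show what would run instead of executing it
  else
    $cmd
  fi
}
```

Paths differ between guest and host, so in practice you would also translate the shared-folder path before sending it, and file names containing spaces would need extra quoting.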

How can I interact with a Vagrant shell provisioning script?

I have a shell provisioning script that invokes a command that requires user input - but when I run vagrant provision, the process hangs at that point in the script, as the command is waiting for my input, but there is nowhere to give it. Is there any way around this - i.e. to force the script to run in some interactive mode?
The specifics are that I'm creating a clean Ubuntu VM, and then invoking the Heroku CLI to download a database backup (this is in my provisioning script):
curl -o /tmp/db.backup `heroku pgbackups:url -a myapp`
However, because this is a clean VM, and therefore this is the first time I have run a Heroku CLI command, I am prompted for my login credentials. Because the script is being managed by Vagrant, there is no interactive shell attached, and so the script just hangs there.
If you want to pass temporary input or variables to a Vagrant script, you can have the user supply their credentials as temporary environment variables for that command by placing them first on the same line:
username=x password=x vagrant provision
and access them from within the Vagrantfile as
$u = ENV['username']
$p = ENV['password']
Then you can pass them as an argument to your bash script:
config.vm.provision "shell" do |s|
  s.inline = "echo username: $1, password: $2"
  s.args = [$u, $p]
end
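On the guest side, the provisioner arguments simply arrive as positional parameters, so the inline command above is equivalent to a script like this sketch (the function name is an illustration, not anything Vagrant requires):

```shell
# Receives the two values passed via s.args as $1 and $2.
show_creds() {
  echo "username: $1, password: $2"
}

# e.g. call show_creds "$@" at the top of the provisioning script
```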
You can install something like expect in the VM to handle passing those credentials to the curl command.
I'm assuming you don't want to hard-code your credentials in plain text and are therefore trying to force an interactive mode.
The thing is, just like you, I don't see such an option in the vagrant provision docs (http://docs.vagrantup.com/v1/docs/provisioners/shell.html), so one way or another you need to embed the authentication within your script.
Have you thought about getting a token and using the Heroku REST API instead of the CLI?
https://devcenter.heroku.com/articles/authentication
