Is there a way to use Ansible's --start-at-task option from within a Vagrantfile? I want to specify the exact task to start at for debugging purposes. I realize host vars will be missing; that's fine. Other similar questions don't seem to be asking exactly this.
One idea is to set an environment variable that Vagrant reads and passes to the playbook, i.e.:
# export START_TASK='task-name'
# Run: "vagrant provision --provision-with resume"
config.vm.provision "resume", type: "ansible_local" do |resume|
  resume.playbook = "playbooks/playbook.yml --start-at-task=ENV['START_TASK']"
end
The playbook option doesn't parse the environment variable like that, but that's essentially the command I'm trying to run: read the environment variable and pass it to Vagrant's Ansible provisioner.
Note: the playbook's .retry file only re-runs the entire failed playbook for that single host, not just a single task, so that's not a solution.
I just needed to add the following, which I couldn't find anywhere in Vagrant's documentation:
resume.start_at_task = ENV['START_AT_TASK']
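For reference, the full resume provisioner then looks something like this; a minimal sketch, with the playbook path and variable name taken from the question:
# Run with: START_AT_TASK='task-name' vagrant provision --provision-with resume
config.vm.provision "resume", type: "ansible_local" do |resume|
  resume.playbook      = "playbooks/playbook.yml"
  # start_at_task maps to ansible-playbook's --start-at-task flag
  resume.start_at_task = ENV['START_AT_TASK']
end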
I am getting familiar with Terraform and Ansible through books. Could someone enlighten me about the following block of code?
provisioner "local-exec" {
  command = "ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${self.public_ip},' app.yml"
}
The short answer is that local-exec is for anything you want to run on your local machine instead of on the remote machine.
You can do a bunch of different things:
write an ssh key into your ~/.ssh to access the server
run a sleep 30 or something to make sure the next commands wait a bit for your machine to provision
write logs to your local directory (last run, date completed, etc.)
write some environment variables to your local machine that you can use to access the machine
run the Ansible example you provided
FYI, HashiCorp hates local-exec and remote-exec. If you talk to one of their devs, they will tell you it is a necessary evil. Other than maybe a sleep or writing this or that, avoid it for anything stateful.
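To make the placement concrete, here is a hedged sketch: the aws_instance resource is a placeholder (not from the question), while the command mirrors the one you posted, run on the machine executing Terraform:
resource "aws_instance" "app" {
  # ... instance arguments omitted ...

  provisioner "local-exec" {
    # runs on the machine executing `terraform apply`, not on the new instance
    command = "sleep 30 && ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${self.public_ip},' app.yml"
  }
}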
I would interpret it as: Terraform should execute a local command on the control node.
Reading the documentation about the local-exec provisioner, it turns out that
"The local-exec provisioner invokes a local executable after a (annot.: remote) resource is created. This invokes a process on the machine running Terraform ..."
and not on the remote resource.
So after Terraform has, for example, created a virtual machine, it calls an Ansible playbook to continue working on it.
I would like to keep the tmp directory on the VM in my test region. There is a known solution for this problem: setting ANSIBLE_KEEP_REMOTE_FILES to 1 on the Ansible machine.
The issue is that the Ansible machine is a local Docker container, so I need to ensure that this variable is always set; otherwise I'm losing some documents. When I reboot my system and start the Docker container with Ansible, I lose this variable.
Is there a way to set this environment variable somewhere in the Ansible configuration, or in a playbook? I need a permanent solution so I don't forget to set this variable.
Thank you!
Q: "Is there a way to set this environment variable somewhere in Ansible configuration?"
A: Yes, there is. For example:
$ cat ansible.cfg
[defaults]
keep_remote_files = true
See DEFAULT_KEEP_REMOTE_FILES.
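Since the control node here is a Docker container, keep this ansible.cfg next to your playbooks (or bake it into the image) so the setting survives restarts. You can check that it is actually picked up (assuming Ansible 2.4+, where the ansible-config command exists):
$ ansible-config dump --only-changed
# should show something like: DEFAULT_KEEP_REMOTE_FILES(/path/to/ansible.cfg) = True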
I want to pass the current user within my Vagrantfile, but I'm not sure how to do it.
I've tried this:
config.vm.provision :shell, inline: "echo $(whoami) > /etc/profile.d/me"
But it results in 'root' being put into the file, which I assume is the Vagrant guest's user. I want to get the username of the host.
That's because your inline shell script runs inside the vagrant box.
You can do it like this:
Get the username from the host depending on the platform (you can simplify this if you never expect a Windows host):
host_user = Gem.win_platform? ? ENV['USERNAME'] : ENV['USER']
Pass the username from the host as an environment variable during provisioning and use it in an inline script:
config.vm.provision "Passing host username as env var...", type: :shell, inline: $hostUser, env: {"HOST_USER" => host_user}
Define this shell script as a Ruby heredoc earlier in the Vagrantfile (before the provisioning line that references it); it gets run by that provisioner and writes the username, passed in as an environment variable, to the file you specified:
$hostUser = <<-SET_HOST_USER
  echo "$HOST_USER" > /etc/profile.d/me
SET_HOST_USER
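Put together, a minimal Vagrantfile sketch (the box name is a placeholder; the file path comes from the question):
# shell script to run inside the guest; HOST_USER is injected by the provisioner below
$hostUser = <<-SET_HOST_USER
  echo "$HOST_USER" > /etc/profile.d/me
SET_HOST_USER

# username of the user running `vagrant up` on the host
host_user = Gem.win_platform? ? ENV['USERNAME'] : ENV['USER']

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64" # placeholder box name
  config.vm.provision "Passing host username as env var...",
    type: :shell,
    inline: $hostUser,
    env: { "HOST_USER" => host_user }
end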
I want to write a Vagrantfile and an accompanying shell script so that the script runs only the very first time a user executes vagrant up for that VM. That's because this shell script will install all sorts of system services that should only happen once.
According to the shell provisioner docs, it looks like I might be able to do something like:
Vagrant.configure("2") do |config|
  config.vm.provision "shell", path: "init-services.sh"
end
However, from the docs I can't tell if init-services.sh will be executed every time a user does a vagrant up (in which case I need to write it carefully so as to be idempotent), or whether it truly only executes the script one time, when the box is first being provisioned.
And, if it does only execute the script one time, then how does Vagrant handle updates to the script (if we want to, say, add a new service to the machine)?
However, from the docs I can't tell if init-services.sh will be executed every time a user does a vagrant up (in which case I need to write it carefully so as to be idempotent), or whether it truly only executes the script one time, when the box is first being provisioned.
Yes, the script will be executed only the first time the machine is spun up during vagrant up. There is an option if you want to run it every time (even though that's not something you want in this case):
Vagrant.configure("2") do |config|
  config.vm.provision "shell", path: "init-services.sh", :run => 'always'
end
And, if it does only execute the script one time, then how does Vagrant handle updates to the script (if we want to, say, add a new service to the machine)?
There are two commands you can use for this:
A specific call to vagrant provision will still force the script to run, whether the machine has already been initialized or not.
Calling vagrant up --provision when spinning up an existing VM will run the provisioning script.
On this point, though, Vagrant will not check what has changed in your script; it will just run the whole script again. If you need to apply only a specific update, you will need to manage that yourself in your script file (see the sketch below).
You can read a bit more about how provisioning works in the docs.
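If you do re-run the script (with run: 'always' or vagrant provision), a common pattern is to guard one-time steps with marker files so the script stays idempotent. A sketch (the marker path is arbitrary):
#!/usr/bin/env bash
# init-services.sh - safe to re-run: each one-time step is guarded by a marker file
set -euo pipefail

if [ ! -f /var/lib/provision/.services_installed ]; then
  # install system services here (one-time setup)
  mkdir -p /var/lib/provision
  touch /var/lib/provision/.services_installed
fi

# newer, additive steps go below with their own guards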
I'm using Vagrant to create EC2 virtual machines and Ansible to provision them. I'm following this guide, along with the ec2.py script for dynamic inventory.
I am currently provisioning one host with Ansible, to which I've given a tag named Purpose (let's say the value is "Machine Purpose"), so that I can do this in my playbook (the ec2.py script provides this group):
- hosts: tag_Purpose_Machine_Purpose
My problem is that if I want to add another server and provision it, I can't do that using vagrant provision server2, because that will run the Ansible playbook, which will also match the first host and provision it again.
The reason I want to avoid that is that, even though the Ansible tasks are mostly idempotent, not all of them are, so I would unnecessarily move some files on node1 and, more importantly, restart the service already running there.
Is there a way to make ansible only provision the servers I specify on the command line?
You can limit the Ansible play with the --limit parameter. It's not very well documented, but you can feed it group names as well as host names.
ansible-playbook ... --limit hostA
Multiple host names separated by commas are also possible:
ansible-playbook ... --limit hostA,hostB,hostC
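Since --limit also accepts group names, you could target only the tag group from the question, for example (the playbook name is assumed):
# limit the play to the group ec2.py builds from the Purpose tag
ansible-playbook -i ec2.py playbook.yml --limit tag_Purpose_Machine_Purpose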
You can set it in the Vagrantfile
v.vm.provision "ansible" do |ansible|
  ansible.limit = 'all' # Change this
end
And you can take it from an environment variable set on the command line:
v.vm.provision "ansible" do |ansible|
  ansible.limit = (ENV['ANSIBLE_LIMIT'] || 'all')
end
With
ANSIBLE_LIMIT='x' vagrant provision
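Putting it together, a sketch of a multi-machine Vagrantfile where the limit defaults to the machine currently being provisioned unless ANSIBLE_LIMIT overrides it (the machine names and playbook path are placeholders):
Vagrant.configure("2") do |config|
  ["server1", "server2"].each do |name|
    config.vm.define name do |node|
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "playbook.yml" # placeholder playbook
        # default to just this machine; override with ANSIBLE_LIMIT=all or a group name
        ansible.limit = ENV['ANSIBLE_LIMIT'] || name
      end
    end
  end
end
Then, for example:
ANSIBLE_LIMIT=tag_Purpose_Machine_Purpose vagrant provision server2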