Forcing the specifying of a vm-name when running vagrant provision/destroy/up

When trying to up, provision, or destroy a Vagrant machine, you need to specify a vm-name (as defined in the Vagrantfile) if you only want to affect one machine (e.g. run vagrant up local; vagrant up uat; vagrant up production). If you fail to do this, the changes affect all of the virtual machines.
Therefore typing "vagrant destroy" rather than "vagrant destroy local" destroys all your machines. Is it possible to either:
a) exclude a virtual machine when no machines are specified, or
b) force the specifying of a vm-name?

You can specify a primary machine in Vagrant, so that when you run a Vagrant command without specifying a target, it will run on the primary machine and not on all the machines in your Vagrantfile.
A machine is marked as primary by setting the primary flag:
config.vm.define "web", primary: true do |web|
# ...
end
See the Docs for more info.
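For fuller context, here is a minimal multi-machine sketch (the machine names are hypothetical):
Vagrant.configure("2") do |config|
  config.vm.define "local", primary: true do |local|
    local.vm.box = "ubuntu/xenial64"
  end
  config.vm.define "production", autostart: false do |production|
    production.vm.box = "ubuntu/xenial64"
  end
end
Note that autostart: false keeps a machine out of a bare vagrant up entirely (it must be started explicitly with vagrant up production), which is one way to exclude a machine when none is specified, as asked in option a).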

How do you set the VM name via Vagrant?

I've set up 4 VMs in Vagrant and am now trying to set the name of a VM, as I don't just want to ssh into the default VM.
I can't find any docs on the Vagrant website, but I found this:
How to change Vagrant 'default' machine name?
However, when I try:
config.vm.define "foohost"
and do a vagrant up, I get:
There are errors in the configuration of this machine. Please fix
the following errors and try again:
vm:
* The following settings shouldn't exist: define
I suspect you actually wrote
config.vm.define = "foohost"
rather than
config.vm.define "foohost"
as that would explain the error message. (Best to show several lines of actual source if you can.)
Regarding "as I don't just want to ssh into the default VM": when you use multiple VMs, you can tell Vagrant which VM you want to ssh into by default, using the following:
config.vm.define "foohost", primary: true do |foohost|
...
end
Then when you run vagrant ssh, you will ssh into this VM by default.
To ssh into the other VMs, you will need to specify the VM name, as shown below.
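A quick sketch of the resulting commands (the second machine name is hypothetical):
vagrant ssh            # connects to foohost, the primary machine
vagrant ssh otherhost  # non-primary machines must be named explicitly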

Vagrant cannot vagrant up the box packaged from ubuntu/xenial64 16.04

I have a custom Vagrant box based on the official ubuntu/xenial64 16.04 box.
I simply run the following to get the packaged box:
vagrant init ubuntu/xenial64
vagrant up --provider virtualbox
vagrant ssh # enter the virtual machine and make some custom changes
vagrant halt
vagrant package --vagrantfile Vagrantfile --output custom_ubuntu1604.box
Then I copy the file custom_ubuntu1604.box to another directory and use the box like this:
vagrant box add ubuntu1604base custom_ubuntu1604.box
vagrant init ubuntu1604base
vagrant up # at this point the machine stops at "Started Journal Service"
My new VirtualBox machine, based on the newly packaged box, stalls at that point in the boot (screenshot of the boot console omitted).
And finally it timed out:
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within the
configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that Vagrant
had when attempting to connect to the machine. These errors are
usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes. Verify
that authentication configurations are also setup properly, as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
Try setting config.vm.boot_timeout in the Vagrantfile to more than the default of 300 seconds, e.g. 600. In my experience, it can take a long time to connect to the guest machine on the first boot.
For example
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox"
  config.vm.boot_timeout = 600
end

Totally remove a Vagrant box from my host?

After vagrant halt and vagrant destroy, and after a cd && cd .vagrant/boxes && rm -rf ..., I keep seeing a box.
I've also pruned the Vagrant cache with vagrant global-status --prune.
Here is the output of vagrant status:
Current machine states:
my-vagrant-api not created (virtualbox)
What do I need to do to totally remove a Vagrant box from a host?
There is nothing you have to do; it is already deleted.
vagrant status is only showing the machine configuration that you defined in the Vagrantfile. See the output:
my-vagrant-api not created (virtualbox)
It is saying that a machine named my-vagrant-api is defined in the Vagrantfile but not created.
If it were created, it would look something like:
vagrant status
Current machine states:
server1 poweroff (virtualbox)
server2 poweroff (virtualbox)
server3 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
If you don't want this configuration, you can simply delete that Vagrantfile.
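If the goal is also to delete the downloaded box image itself from the host's disk, that is a separate step done with vagrant box remove; a minimal sketch, where mybox is a placeholder for whatever vagrant box list shows:
vagrant box list          # list the box images stored on the host
vagrant box remove mybox  # delete that box image from disk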

Is it possible to provision vagrant boxes from puppet modules located outside of the vagrant folder?

I have a production server architecture that is quite complex, involving several types and tiers of servers: Web, Varnish, HAProxy, ELK, Database, Sensu, etc. The production architecture is provisioned with puppet masters, and the puppet code for the masters is provisioned with Git through many submodules.
I want to connect these Git repos to VMs locally to further develop pieces of the puppet architecture. However, each Vagrant VM would require a complete, self-contained copy of all of the Git repos in order to stand up just one VM. This seems like a very inefficient use of drive space for local development on what will potentially be a dozen different types of servers.
Is there a way to point all of the Vagrant VMs' Vagrantfiles to a common local folder outside of the vagrant directory, such that each Vagrant instance can still read the folder and provision the server?
Edit --
Based on a comment from @Treminio, here is the portion of my Vagrantfile showing the attempt to declare an absolute path from the host computer's root:
config.vm.provision "puppet" do |puppet|
puppet.manifest_file = "init.pp"
puppet.manifests_path = "/Users/jdugger/vm/puppet/manifests"
puppet.module_path = "/Users/jdugger/vm/puppet/modules"
puppet.hiera_config_path = "/Users/jdugger/vm/puppet/hieradata"
end
... and the error response by Vagrant:
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => /Users/jdugger/vm/pupt/vagrant
default: /vagrant_data/scripts => /Users/jdugger/scripts
default: /tmp/vagrant-puppet/modules-169f1d27ef31a534405e2e9fcde2eedf => /Users/jdugger/vm/puppet/modules
default: /tmp/vagrant-puppet/manifests-be5a69bfb646cf9329b8921f221ffab8 => /Users/jdugger/vm/puppet/manifests
==> default: Running provisioner: puppet...
The `puppet` binary appears not to be in the PATH of the guest. This
could be because the PATH is not properly setup or perhaps Puppet is not
installed on this guest. Puppet provisioning can not continue without
Puppet properly installed.
This response may not be because of the puppet path (I'm not sure). It appears that Puppet is not on the guest box (isn't it supposed to be run by Vagrant?). No version comes up with:
[vagrant@localhost ~]$ puppet --version
The response is:
-bash: puppet: command not found
Update ---
@Treminio is correct. I have been able to provision with puppet manifests and modules external to the vagrant/ directory. The PATH problem appears to be because Puppet is not installed on the guest VM. To resolve this, I added a shell script found here:
http://garylarizza.com/blog/2013/02/01/repeatable-puppet-development-with-vagrant/
This was added just before the puppet provisioning declaration (see the sketch below). As a note, there don't seem to be many advanced examples that demonstrate the external-file capability, or that make clear that Puppet must be installed outside of the puppet provisioner.
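A minimal sketch of that ordering, with a hypothetical inline installer standing in for the linked script (an Ubuntu guest is assumed; the blog post's own script may differ):
config.vm.provision "shell", inline: <<-SHELL
  # install the Puppet agent if missing, so the puppet provisioner
  # below finds a puppet binary on the guest's PATH
  if ! command -v puppet > /dev/null; then
    apt-get update -y && apt-get install -y puppet
  fi
SHELL
config.vm.provision "puppet" do |puppet|
  puppet.manifest_file = "init.pp"
end
Vagrant runs provisioners in the order they are declared, which is why the shell block must come first.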
Regarding "each Vagrant VM requires a complete self-contained copy of all of the Git repos in order to stand up just one VM": this is incorrect.
What you want to change is the puppet.module_path value.
It can be any location on your host's disk, and Vagrant will automatically mount it inside your VM, as in the sketch below.
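A minimal sketch (the paths are hypothetical; module_path also accepts an array of directories):
config.vm.provision "puppet" do |puppet|
  # any directories on the host work; Vagrant mounts them into the guest
  puppet.manifests_path = "../shared-puppet/manifests"
  puppet.manifest_file  = "init.pp"
  puppet.module_path    = ["../shared-puppet/modules", "../shared-puppet/site-modules"]
end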

Vagrant: access guest machine from host (Windows)

I installed a Vagrant virtual machine on Windows and it's working fine. I am trying to connect to the guest machine from Windows, but as soon as I uncomment something in the Vagrantfile like:
config.vm.network "private_network", ip: "192.168.33.10"
or
config.vm.network "public_network"
and reload Vagrant, I get this error:
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'poweroff' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
The primary issue for this error is that the provider you're using
is not properly configured. This is very rarely a Vagrant issue.
I know this doesn't make much sense, but I just opened VirtualBox, right-clicked the created Vagrant machine, went to Settings, and disabled Audio; after saving and running vagrant up again, it works.
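For reference, a sketch of the same workaround applied from the Vagrantfile rather than the GUI (VirtualBox provider assumed; the --audio none flag applies to VirtualBox 6.x and earlier):
config.vm.provider "virtualbox" do |vb|
  # disable the audio controller, matching the GUI fix described above
  vb.customize ["modifyvm", :id, "--audio", "none"]
end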
I never encountered that error personally, having never used Vagrant on Windows. This issue has been discussed here.
