I have an annoying bug using Vagrant on Windows.
Whenever I start a new VM (after init or destroy), Vagrant does not recognize that the Linux VM has started up. I need to cancel the command using Ctrl-C and kill the machine in VirtualBox. The second start, or any start after that, works.
Any idea how to find the root cause for this, or which command is executed during
"[default] Waiting for VM to boot. This can take a few minutes."
Stefan
This can be a few things... the easiest way to tell the exact problem is to see what the VM itself is stalling on by running it in GUI mode:
config.vm.provider "virtualbox" do |v|
  v.gui = true  # boot with the VirtualBox window visible so you can watch the console
end
It could be stuck at the GRUB prompt or something... it could also just be a bad image... perhaps try a different Vagrant base box.
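If the GUI doesn't reveal anything, Vagrant's debug output shows what it is actually doing while it waits (it repeatedly retries the SSH connection):
vagrant up --debug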
I've been having a problem with Vagrant (1.8.1, using VirtualBox 5.0.20) on Windows 10.
When I follow the getting-started tutorial at https://www.vagrantup.com/docs/getting-started/, after I have typed vagrant up, my console is stuck on:
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2200
default: SSH username: vagrant
default: SSH auth method: private key
It does not continue. I can see the VM boot inside of VirtualBox, and I can use the VirtualBox GUI to log in with the default credentials, so the VM itself is working.
According to https://www.vagrantup.com/docs/virtualbox/common-issues.html
I should run VirtualBox as admin and do vagrant up from a cmd.exe with admin rights, but when I do that I get the message:
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["modifyvm", "1b9d4f9b-04d8-48bf-8d16-d3aed99d341b", "--natpf1", "delete", "ssh"]
Stderr: VBoxManage.exe: error: Code E_FAIL (0x80004005) - Unspecified error (extended info not available)
VBoxManage.exe: error: Context: "LockMachine(a->session, LockType_Write)" at line 493 of file VBoxManageModifyVM.cpp
This seems different from the hundreds of posts all around the net like these:
https://github.com/Varying-Vagrant-Vagrants/VVV/issues/375
since I am not getting anything after the output listed above; it just sits there, and after about 10 minutes it comes up with the message:
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
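(For reference, that setting lives in the Vagrantfile; for example, the following would raise the timeout to 10 minutes:)
config.vm.boot_timeout = 600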
I've also read Vagrant stuck in "Waiting for VM to Boot" but it did not help me.
Is there anything else I am missing here?
In my case, vagrant up was hanging on 'Syncing VM folder', on Windows 7 with Vagrant 1.9.3 and VBox 5.1.18. It turned out that it requires PowerShell >= 3.0.
I downloaded it from https://www.google.ca/search?q=powershell+3.0+download&ie=utf-8&oe=utf-8&client=firefox-b&gfe_rd=cr&ei=x0fdWLfsBubQXu2OorAD, and it worked fine afterwards.
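To check which PowerShell version you have before upgrading, run this in a PowerShell prompt:
$PSVersionTable.PSVersion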
Try to turn off the VM from VirtualBox or from the command line:
C:\Progra~1\Oracle\VirtualBox\VBoxManage.exe controlvm default poweroff
then restart the VM from Vagrant.
In case you get an error when powering off the VM, force the shutdown:
C:\Progra~1\Oracle\VirtualBox\VBoxManage.exe startvm default --type emergencystop
Then vagrant up should work nicely.
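Here default is the VM's name as registered in VirtualBox; if yours is named differently, list the registered VMs first to find it:
C:\Progra~1\Oracle\VirtualBox\VBoxManage.exe list vms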
I actually already found my problem. It was a .dll from some adware scanner that was preventing the VirtualBox VM from starting. Unfortunately, I lost the link to the forum topic which helped me solve this.
What I did was open the logs from the VM in VirtualBox and have a read through. At some point, a line indicating an error appeared with a .dll name, which was the culprit. I deleted the offending .dll files from my PC and it was fixed.
If I find the link to the topic explaining exactly which .dll it was, I will post it here. I'm not at the machine that I fixed the problem on right now, so I can't access my search history.
Hope it will work for you as it worked for me.
I'm still investigating why, but as a solution it works.
In our case, when we typed "vagrant up" in cmd (inside the Vagrant image directory), it opened the VirtualBox VM and got stuck on "default: SSH auth method: private key", as mentioned in the question.
So fix it with these steps:
open VirtualBox manually (besides the instance already opened by vagrant up)
run the VM that was added to the list (by vagrant up)
open CMD
type "vagrant ssh"
and it will work.
Hope it helped,
best regards
Sometimes I have to reboot or end my session mid-testing on a machine.
As it can take ages to do a full converge, it would be great to just be able to start or stop a machine like I would using Vagrant commands.
Is this possible with machines created with Test Kitchen?
The only way to do it right now is to:
cd to the directory for your VM inside .kitchen/kitchen-vagrant/yourinstancehere and issue a vagrant reload command; that should restart your VM.
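For example, assuming the default kitchen-vagrant layout (yourinstancehere stands for your actual instance name):
cd .kitchen/kitchen-vagrant/yourinstancehere
vagrant halt    # stop the machine
vagrant up      # start it again
vagrant reload  # or restart it in one step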
Source: https://github.com/test-kitchen/kitchen-vagrant/issues/115#issuecomment-52943418
Our Vagrant box takes ~1h to provision, so when vagrant up is run for the first time, at the very end of the provisioning process, I would like to package the box into an image in a local folder so it can be used as a base box the next time it needs to be rebuilt. I'm using the vagrant-triggers plugin to place the code right at the end of the :up process.
Relevant (shortened) Vagrantfile:
pre_built_box_file_name = 'image.vagrant'
pre_built_box_local_path = File.join(Dir.pwd, pre_built_box_file_name)
pre_built_box_path = 'file://' + pre_built_box_local_path
# File.file? needs the plain filesystem path, not the file:// URL
pre_built_box_exists = File.file?(pre_built_box_local_path)
Vagrant.configure(2) do |config|
  config.vm.box = 'ubuntu/trusty64'
  config.vm.box_url = pre_built_box_path if pre_built_box_exists
  config.trigger.after :up do
    unless pre_built_box_exists
      system("echo 'Building gett vagrant image for re-use...'; vagrant halt; vagrant package --output #{pre_built_box_file_name}; vagrant up;")
    end
  end
end
The problem is that Vagrant locks the machine while the current (vagrant up) process is running:
An action 'halt' was attempted on the machine 'gett',
but another process is already executing an action on the machine.
Vagrant locks each machine for access by only one process at a time.
Please wait until the other Vagrant process finishes modifying this
machine, then try again.
I understand the dangers of two processes provisioning or modifying the machine at any given time, but this is a special case where I'm certain the provisioning has completed.
How can I manually "unlock" the Vagrant machine during provisioning so I can run vagrant halt; vagrant package; vagrant up; from within config.trigger.after :up?
Or is there at least a way to start vagrant up without locking the machine?
vagrant
This issue has been fixed in GH #3664 (2015). If this is still happening, it's probably related to plugins (such as AWS), so try without plugins.
vagrant-aws
If you're using AWS, then follow this bug/feature report: #428 - Unable to ssh into instance during provisioning, which is currently pending.
However there is a pull request which fixes the issue:
Allow status and ssh to run without a lock #457
So apply the fix manually, or wait until it's fixed in the next release.
In case you've got this error for machines which are no longer valid, try running the vagrant global-status --prune command.
Definitely a bit more of a hack than a solution, but I'd rather have a hack than nothing.
I ran into this issue and nothing that was suggested here was working for me. Even though this question is 6 years old, it's what came up in a Google search (along with precious little else), so I thought I'd share what solved it for me in case anyone else lands here.
My Setup
I'm using Vagrant with the ansible-local provisioner on a local VirtualBox VM, which provisions remote AWS EC2 instances. (That is, ansible-local runs on the VirtualBox instance, Vagrant provisions the VirtualBox instance, and Ansible handles the cloud.) This setup is largely because my host OS is Windows and it's a little easier to take Microsoft out of the equation on this one.
My Mistake
I ran an Ansible shell task with a command that doesn't terminate without user input (and did not run it with & to put it in the background).
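For reference, a sketch of what the task's command should have looked like; my_long_running_cmd is a placeholder for the actual command:
nohup my_long_running_cmd > /dev/null 2>&1 &   # detach so the shell task can return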
My Frustration
Even in the Linux subsystem, trying ps aux | grep ruby or ps aux | grep vagrant was unhelpful because the PID would change every time. There's probably a reason for this, likely something to do with how the subsystem works, but I don't know what that reason is.
My Solution
Just kill the AWS EC2 instances manually. In the console or in the CLI, pick your flavor. Your terminal where you were running vagrant provision or vagrant up should then finally complete and spit out the summary output, even if you Ctrl+C'd out of the command.
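For the CLI route it's a one-liner (the instance ID here is hypothetical):
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0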
Hoping this helps someone!
Is there any difference between typing vagrant halt and right-clicking on the box in VirtualBox and selecting Close > Power Off?
Also, on my Windows 7 machine running Vagrant on VirtualBox, should I shut down Vagrant using vagrant halt before putting the machine to sleep or hibernating, or does it make any difference?
No, no difference. You can see the source for the halt command here.
There's no particular need to shut them down or suspend them when your host sleeps, as far as I know (although I mostly use Vagrant on a Mac), but sometimes there can be peculiar behaviour. For example:
Have a Vagrant box running (in my case Ubuntu 14.04)
Close the host computer. (goes to sleep)
time passes...
Open the host computer.
Log on to Vagrant box and observe system time. It is off (behind) by the amount of time the host was asleep.
I first noticed this because AWS rejects commands that haven't been signed within the last 5 minutes. It's easy to fix with a VBox option in the Vagrantfile that sets the guest time-sync threshold to a lower value (like 10 seconds):
config.vm.provider "virtualbox" do |v|
  # resync the guest clock whenever it drifts more than 10000 ms (10 s) from the host
  v.customize ["guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 10000]
end
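You can confirm the property was applied with a standard VBoxManage query (replace vmname with your VM's name):
VBoxManage guestproperty enumerate vmname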
Unlike with halt/suspend, you should always use vagrant destroy instead of deleting the VM in VirtualBox, in order to give the provisioners an opportunity to clean up.
I don't want to run provisioning on my Vagrant (VirtualBox) machine. I want to run vagrant up and have the machine marked as "provisioned", even though it actually has not been provisioned yet. I just want to mark it as "provisioned".
Is this possible? Perhaps there is some file I can edit in .vagrant?
It seems Vagrant looks for the existence of a file:
.vagrant/machines/[machine-name]/[provider]/action_provision
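For instance, for the default machine name with the VirtualBox provider, that resolves to:
.vagrant/machines/default/virtualbox/action_provision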
However, it seems that there is more logic to that in
[vagrant-install-path]/lib/vagrant/action/builtin/provision.rb
You can start investigating from there to see what exactly Vagrant needs in order to consider a machine provisioned.
I personally didn't have time to look more into it since I fixed my issue with a workaround :).
I started the machine with vagrant up so that the chef_solo provisioner started running, and then hit CTRL+C twice (so that Chef says "exiting without cleanup"); this got the VM marked as provisioned, so that it could be started without the --no-provision flag.
Hoping this is of some help.