To run kitchen converge and set up my Test Kitchen Vagrant instance, such as in this guide, I have noticed that I must first create a $COOKBOOK_ROOT_DIR/.kitchen/default-centos-72.yml file. After the file has been created by kitchen converge, I must press Ctrl+C, edit the file to include the password: vagrant line, and then run kitchen converge again. In the end the file looks something like this:
---
hostname: 127.0.0.1
port: '2222'
username: vagrant
password: vagrant
ssh_key: "$COOKBOOK_ROOT_DIR/.kitchen/kitchen-vagrant/kitchen-$COOKBOOK_NAME-default-centos-72/.vagrant/machines/default/virtualbox/private_key"
last_action: converge
How can I have Test Kitchen automatically know to use password: vagrant before running kitchen converge? Or, better yet, how can I have Chef create test instances without any SSH passwords?
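For reference, one way to avoid hand-editing the generated state file is to put the credentials in .kitchen.yml itself. A minimal sketch, assuming the kitchen-vagrant driver and the default SSH transport (vagrant/vagrant are the stock credentials for these boxes):
driver:
  name: vagrant
  ssh:
    insert_key: false   # keep Vagrant's shared insecure key instead of a generated per-machine key
transport:
  username: vagrant
  password: vagrant     # lets Kitchen fall back to password auth if key auth fails
With something like this in place, kitchen converge should not need the manual edit of .kitchen/default-centos-72.yml.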
Yep, #coderanger got it. I needed to downgrade to Vagrant 1.8.4 and VirtualBox 4.3.4, because VirtualBox 5+ doesn't work with Vagrant 1.8.4.
First, apologies: I'm a newbie.
I've created a very basic Vagrantfile by running vagrant init. I only made a few changes:
config.vm.box = "generic/fedora28"
config.vm.box_version = "1.8.32"
config.vm.provider "libvirt" do |lv|
  lv.memory = "4096"
end
(There are also a few items in my config.vm.provision section).
After running vagrant up, the process gets stuck at
==> default: Waiting for domain to get an IP address...
I'm running this off a Fedora 27 box, which uses version 2.0.2 of the Vagrant package (even though current is 2.1.5).
I've tried adding this line:
config.vm.network "private_network", ip: "192.168.100.101"
but it had no effect.
Can anyone help?
I have a Vagrantfile that spins up 4 VMs of the generic/ubuntu2004 image on libvirt/KVM to make a k3s cluster that's accessible on the LAN. (multipass + k3s is only accessible via localhost because it doesn't allow easy bridging.)
I ran both of these commands > 50 times
sudo vagrant destroy --force --parallel
sudo vagrant up
On roughly the 51st time, I noticed vagrant up got stuck on "Waiting for domain to get an IP address..."
What fixed it for me was sudo reboot. You know, the classic "have you tried unplugging it and plugging it back in?"
Something else to try (I rebooted before trying it):
https://bugzilla.redhat.com/show_bug.cgi?id=1283989
While working with Chef, Test Kitchen, Vagrant, and VirtualBox today, I encountered a bizarre issue when attempting to use the bento boxes hosted by HashiCorp (https://atlas.hashicorp.com/bento/) for some Chef cookbook development/testing.
While spinning up a new cookbook, I wanted to test some newer versions, CentOS 7.2 and Ubuntu 16.04, which are not currently live in our environment. I turned to HashiCorp's bento boxes to pull them down into my .kitchen.yml config.
.kitchen.yml
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
customize:
  memory: 1024
platforms:
  - name: ubuntu-16.04
suites:
  - name: default
    run_list:
      - recipe[sandbox::default]
    attributes:
I used chef generate cookbook to create a new cookbook and, as you can see above, was using a very vanilla Kitchen config to get things started.
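As an aside, if you would rather pin the bento boxes explicitly than rely on Kitchen resolving them from the platform names, a per-platform driver_config override is one option; a sketch, assuming the bento/ubuntu-16.04 and bento/centos-7.2 box names:
platforms:
  - name: ubuntu-16.04
    driver_config:
      box: bento/ubuntu-16.04   # pin the exact box instead of relying on name resolution
  - name: centos-7.2
    driver_config:
      box: bento/centos-7.2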
When running kitchen create, I kept encountering the following SSH timeout error while provisioning the VM with Vagrant and VirtualBox.
ERROR:
Timed out while waiting for the machine to boot.
This means that Vagrant was unable to communicate with the guest machine
within the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
When observing the VirtualBox VM console, I noted the following (see screenshot below):
A start job is running for Raise network interfaces (2 min 39s / 5min 3s)
Observing that Vagrant would time out before the start job completed, I attempted to resolve the issue by increasing the boot_timeout from the default of 300 seconds to 600 seconds in my .kitchen.yml.
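For reference, that change amounted to something like the following in .kitchen.yml, assuming the kitchen-vagrant driver exposes a boot_timeout option that it passes through as config.vm.boot_timeout in the generated Vagrantfile:
driver:
  name: vagrant
  boot_timeout: 600   # seconds; Vagrant's default is 300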
However, further testing proved that this did not resolve the issue: even though the VM would successfully initialize after 5 minutes 3 seconds, Kitchen/Vagrant were still unable to SSH to the host and the Vagrant SSH timeout persisted.
Ultimately, to resolve this issue I upgraded ChefDK, Vagrant, and VirtualBox to the latest versions available.
Experienced the issue with...
VirtualBox 5.0.30 r112061
Vagrant 1.8.6
Chef Development Kit 0.19.6
Resolved the issue by upgrading to...
VirtualBox 5.1.10 r112026
Vagrant 1.9.0
Chef Development Kit Version: 1.0.3
Following the version upgrades, the Vagrant SSH Timeouts disappeared completely and the box was created successfully within a few seconds.
VirtualBox VM console (screenshot)
I am new to Vagrant but comfortable with Docker.
In Vagrant I am aware that
config.vm.provision :shell, path: "bootstrap.sh", run: 'always'
in the Vagrantfile will provision the Vagrant box during vagrant up. With this, the Vagrant box's interactive console appears after the intended provisioning is done.
But I need to configure it so that control first goes to the Vagrant box console and then the intended script is run. My requirement is to run a script automatically after vagrant up, not to run a bootstrapped script.
In analogy with Docker, my question can be seen as
what is the Vagrant equivalent of CMD in a Dockerfile?
You can look at Vagrant triggers. They let you run a dedicated script or command after each specific Vagrant command (up, destroy, ...).
For example
Vagrant.configure("2") do |config|
# Your existing Vagrant configuration
...
# start apache on the guest after the guest starts
config.trigger.after :up do |trigger|
trigger.run_remote = {inline: "service apache2 start"}
end
end
I'm currently using Test Kitchen to try to converge a Windows 7 machine, with VMware Fusion as the provider, in order to eventually deploy a Chef cookbook. Every time I run kitchen converge, the process hangs on "Waiting for machine to boot. This may take a few minutes" and then fails due to a timeout. When I open Fusion I see the following:
Does anyone know what's happening? I've been struggling for a while to get this VM converged, haven't been able to get it up and running to the point where I can deploy my cookbooks, and I'm out of ideas.
My .kitchen.yml:
---
driver:
  name: vagrant
  ssh:
    insert_key: false
  customize:
    cpus: 2
    memory: 4096
transport:
  name: winrm
provisioner:
  name: chef_solo
platforms:
  - name: windows-7
    driver_config:
      box: opentable/win-7-professional-i386-nocm
suites:
  - name: default
    run_list:
      - recipe[my_recipe]
    attributes:
I tried finding a sane Windows 7 Vagrant box for a Puppet presentation a while ago and ran into similar issues. I had to run a PowerShell script to install Puppet before anything else. Even then, I ran into similar issues and had to do some extra work.
I was using the designerror box from Atlas. Perhaps my notes could assist you in getting your environment up and running. It's Puppet, but a similar (easier?) process is probably needed for Chef: https://github.com/stark525/itbestprac-pres/tree/master/vagrant
Windows 7 boxes are typically home-grown and privately owned, so you should probably build your own box if the project warrants the commitment. Ultimately, Windows poses a number of challenges for publicly distributable Vagrant boxes.
It appears that all of the VMware Windows boxes on Atlas are misconfigured in one way or another. I manually built my own box (amarkon/windows-7-ult-n-x64), which now works correctly.
I am trying to load my Vagrant box with Salt, asking it to install Apache.
I am using salty-vagrant in masterless mode.
The vagrant box gets loaded, but it gets stuck in the console with the following message:
[default] Running provisioner: salt...
Checking if salt-minion is installed
salt-minion found
Checking if salt-call is installed
salt-call found
Salt did not need installing or configuring.
Calling state.highstate... (this may take a while)
When I check the Vagrant Salt log, I find the following:
[salt.utils ][ERROR ] This master address: 'salt' was previously resolvable but now fails to resolve! The previously resolved ip addr will continue to be used
[salt.minion ][WARNING ] Master hostname: salt not found. Retrying in 30 seconds
Has anyone faced this issue before?
You need to make sure you are passing a minion config with the following option set:
file_client: local
Read all the steps in the Masterless Quick Start: https://github.com/saltstack/salty-vagrant#masterless-quick-start
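For concreteness, a minimal masterless minion config sketch; the salt/minion path and the wiring from the Vagrantfile via the provisioner's minion_config option are assumptions, and only the file_client setting comes from the answer above:
# salt/minion (assumed path, referenced from the Vagrantfile via minion_config)
file_client: local   # run salt-call against the local file_roots instead of resolving a master named "salt"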