I am trying to learn how to do local development of chef recipes. I am following this guide https://gist.github.com/smford22/f00f46471047422bd8a7
I am prefixing all the kitchen commands with chef exec because if I try to run kitchen directly, I get all sorts of ruby/gem errors.
When I run chef exec kitchen converge, it gets stuck installing the Chef Omnibus package, hanging on "Trying wget..."
If I log in to the VM and try curl and wget commands such as curl https://google.com, the VM indeed cannot access the internet.
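A quick, hypothetical way to confirm what the converge sees from inside the instance (the instance name default-centos-67 follows from the default suite and the centos-6.7 platform in the .kitchen.yml below):

chef exec kitchen login default-centos-67   # open a shell inside the instance
env | grep -i proxy                         # is any proxy configured in the guest?
curl -v https://www.chef.io                 # does outbound HTTPS work at all?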
chef exec kitchen -v
Test Kitchen version 1.23.2
chef -v
Chef Development Kit Version: 3.5.13
chef-client version: 14.7.17
delivery version: master (6862f27aba89109a9630f0b6c6798efec56b4efe)
berks version: 7.0.6
kitchen version: 1.23.2
inspec version: 3.0.52
.kitchen.yml:
---
driver:
name: vagrant
## The private_network feature lets you setup a private network on the VM guest
## via localhost on the host.
## see also: https://www.vagrantup.com/docs/networking/private_network.html
# network:
# - ["private_network", {ip: "33.33.33.33"}]
provisioner:
name: chef_zero
## The verifier section determines which test platform you want to use.
verifier:
name: inspec
format: doc
platforms:
- name: centos-6.7
suites:
- name: default
run_list:
- recipe[chef_httpd::default]
attributes:
Are you accessing the internet through a proxy?
If so, you need to configure the same proxy for your Vagrant VMs using the vagrant-proxyconf plugin.
Documentation: http://tmatilai.github.io/vagrant-proxyconf/
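For illustration, a minimal Vagrantfile sketch using that plugin (the proxy host and port are placeholders; install the plugin first with vagrant plugin install vagrant-proxyconf, and with the kitchen-vagrant driver the snippet can be merged in through an extra Vagrantfile, e.g. the driver's vagrantfiles setting):

# proxy.rb -- merged into the generated Vagrantfile; values are placeholders
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://proxy.example.com:8080"
    config.proxy.https    = "http://proxy.example.com:8080"
    config.proxy.no_proxy = "localhost,127.0.0.1"
  end
end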
We are in the middle of developing Molecule testing using Vagrant as the driver and libvirt as the provider.
However, the VM that Vagrant creates for Molecule uses the 'vagrant' user to perform validation and installation inside the VM. We plan to use the 'root' user instead of the default 'vagrant' user.
We tried to include the options below in molecule.yml:
---
dependency:
name: galaxy
driver:
name: vagrant
provider:
name: libvirt
type: libvirt
options:
memory: 192
cpus: 1
ssh_user: root
ssh_password: 'xxxxxxx'
platforms:
- name: test-vm
box: centos7
interfaces:
- auto_config: true
network_name: private_network
type: static
ip: 192.168.122.100
config_options:
ssh.remote_user: "'root'"
We tried to search the Molecule site for the detailed options that molecule.yml accepts, but we were not able to find them.
Thus, the current issues are:
Molecule uses the 'vagrant' user instead of root to perform the playbook tasks.
When logging in to the Vagrant VM manually using 'vagrant ssh', it logs in directly as the 'vagrant' user instead of 'root'.
We need assistance from an expert with this kind of issue.
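One possible direction (a sketch only, assuming Molecule's default Ansible provisioner and that a root login itself is not strictly required): keep the 'vagrant' login user but escalate every task to root with become, set through host_vars in molecule.yml:

provisioner:
  name: ansible
  inventory:
    host_vars:
      test-vm:                    # must match the platform name above
        ansible_user: vagrant     # the user Vagrant actually creates
        ansible_become: true      # escalate to root for all tasks
        ansible_become_user: root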
I run kitchen test and can't accept the license; answering yes does nothing.
kitchen.yml
provisioner:
name: chef_zero
always_update_cookbooks: true
retry_on_exit_code:
- 35 # 35 is the exit code signaling that the node is rebooting
max_retries: 1
client_rb:
exit_status: :enabled # Opt-in to the standardized exit codes
client_fork: false # Forked instances don't return the real exit code
environment: _default
chef_license: accept
product_name: chef
chef-client: 14
Try updating accept to accept-no-persist in the provisioner section, per the GitHub issue https://github.com/test-kitchen/test-kitchen/issues/1553:
provisioner:
name: chef_zero
always_update_cookbooks: true
log_level: info
chef_license: accept-no-persist
product_name: chef
product_version: 14
Note (via tas50's comment on the GitHub issue): Test Kitchen 1.x does not support Chef licensing. The plan is to backport the work on the 2.x branch to the 1-stable branch and release a new 1.x version with that support.
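A possible interim workaround while still on Test Kitchen 1.x (an assumption on my part, not something suggested in the issue thread): Chef Infra Client 14.12.9+ honours a chef_license setting in client.rb, so the acceptance can be written through the provisioner's client_rb hash instead of the unsupported top-level chef_license key:

provisioner:
  name: chef_zero
  client_rb:
    chef_license: accept-no-persist   # rendered into client.rb on the instance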
I have an environment set up with Test Kitchen v1.5.0 and Vagrant v1.8.1. I have a recipe that uses chef-vault to decrypt the encrypted passwords that are in our data_bags_path/passwords/pilot.json file.
I am using the solution here https://github.com/chef/chef-vault/issues/58 that daxgames provides towards the end of the page.
My .kitchen.yml:
---
driver:
name: vagrant
provisioner:
name: chef_zero
require_chef_omnibus: 12.14.77
roles_path: ../../roles
environments_path: ../../environments
data_bags_path: ../../data_bags
client_rb:
environment: lgrid2-dev
node_name: "ltylapp400a"
client_key: "/etc/chef/ltylapp400a.pem"
platforms:
- name: centos-6.8
driver:
synced_folders:
- ["/Users/212466756/.chef", "/etc/chef", "disabled:false"]
suites:
- name: ltylapp400a
run_list:
- role[lgrid-db]
attributes:
chef_client:
A snippet from my recipe that deals with chef-vault:
case node["customer_conf"]["status"]
when 'pilot'
passwords = ChefVault::Item.load('passwords', 'pilot')
when 'production'
passwords = ChefVault::Item.load('passwords', node[:hostname][1..3])
end
My directory structure for relevant data_bags:
data_bags
--passwords
--pilot.json
--pilot_keys.json
The error I am getting is that the client.pem that Vagrant generates at /etc/chef/ltylapp400a.pem cannot decrypt the contents of that data bag. Chef suggests that I run knife vault refresh, but I am not connected to my Chef server on my local machine, so running it gives an error about not having a Chef server to connect to. My question is: how can I add the new key that Vagrant generated to pilot_keys.json so that it is able to decrypt that data bag?
The more detailed the answers, the better; I am still somewhat new to Chef, Test Kitchen, etc.
I was able to get this working; below are my results and conclusions. As I stated above, my issue was that I was unable to decrypt the data bag because I could not add the new key that Vagrant created to the pilot_keys.json file, since I was not connected to the Chef server and could not run knife vault refresh/update. What I had to do was get the client.pem key from a server that already had access to the pilot.json data bag. I used our utility server's key, since that server will not be destroyed in the near future.
So on my local PC I have a .chef/ directory under my home directory containing the client.pem key I copied from the utility server, and I sync it with /tmp/kitchen/, which acts as the /etc/chef directory in the Test Kitchen environment.
---
driver:
name: vagrant
provisioner:
name: chef_zero
require_chef_omnibus: 12.14.77
roles_path: ../../roles
environments_path: ../../environments
data_bags_path: ../../data_bags
client_rb:
node_name: "utilityServer"
client_key: "/tmp/kitchen/client.pem" #The Chef::Vault needs a client.pem file to authenticate back to the data_bag to decrypt it, this needs to be stored at /tmp/kitchen/client.pem
environment: dev
no_proxy: 10.0.2.2
platforms:
- name: centos-6.8
driver:
synced_folders:
- ["~/.chef","/tmp/kitchen/","disabled:false"] # Allows the vagrant box to have access to your .chef directory in your home directory. This is where you will store the client.pem for authentication.
suites:
- name: lzzzdbx400a
run_list:
- role[lgrid-db]
attributes:
The data_bags/passwords/pilot_keys.json looks like this:
{
"id": "pilot_keys",
"admins": [
"utilityServer"
],
"clients": [
"webserver",
"database"
],
"search_query":"*:*"
"utilityServer":"key",
"webserver":"key",
"database": "key"
}
Since the utilityServer key was already able to decrypt the passwords/pilot data bag, the run went through fine the next time I ran kitchen converge.
During previous struggles with Kitchen and chef-vault I used the synced_folders method to get access to a key. Revisiting this topic, I found another solution.
Kitchen Support
To make this work in kitchen, just put a cleartext
data bag in the data_bags folder that your kitchen run refers to
(probably in test/integration/data_bags). Then the vault commands fall
back into using that dummy data when you use chef_vault_item to
retrieve it.
reference: http://hedge-ops.com/chef-vault-tutorial/
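A minimal sketch of that fallback pattern (assuming the chef-vault community cookbook, which ships the chef_vault_item helper; the 'root_password' key is purely illustrative and not from the post):

# metadata.rb needs `depends 'chef-vault'` so the helper library is loaded
include_recipe 'chef-vault'

# Against a real Chef server this decrypts the vault item; under Test Kitchen it
# falls back to the cleartext dummy item at data_bags/passwords/pilot.json.
passwords = chef_vault_item('passwords', 'pilot')
root_password = passwords['root_password'] # illustrative key name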
To run kitchen converge and set up my Test Kitchen Vagrant instance, such as in this guide, I have noticed that kitchen converge must first create a $COOKBOOK_ROOT_DIR/.kitchen/default-centos-72.yml file. After the file has been created, I must Ctrl+C, edit the file to include the password: vagrant line, and then run kitchen converge again. In the end the file will look something like this:
---
hostname: 127.0.0.1
port: '2222'
username: vagrant
password: vagrant
ssh_key: "$COOKBOOK_ROOT_DIR/.kitchen/kitchen-vagrant/kitchen-$COOKBOOK_NAME-default-centos-72/.vagrant/machines/default/virtualbox/private_key"
last_action: converge
How can I have Test Kitchen automatically know to use password: vagrant before running kitchen converge? Or better yet, how can I have it create test instances without any SSH passwords?
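In the meantime, one way to avoid editing the state file by hand (a sketch, assuming the default SSH transport and that the box still uses the stock vagrant/vagrant account) is to declare the credentials in .kitchen.yml itself; the transport falls back to them when the generated state file carries no password:

transport:
  name: ssh
  username: vagrant
  password: vagrant   # assumption: the box's stock vagrant account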
Yep, #coderanger got it. I needed to downgrade to Vagrant 1.8.4 and VirtualBox 4.3.4 because VirtualBox version 5+ doesn't work with Vagrant 1.8.4.
I'm currently using Test Kitchen to try to converge a Windows 7 machine, with VMware Fusion as the provider, to eventually deploy a Chef cookbook. Every time I run kitchen converge the process hangs on "Waiting for machine to boot. This may take a few minutes" and then fails due to a timeout. When I open Fusion I see the following:
Does anyone know what's happening? I've been struggling for a while to get this VM converged and haven't been able to get it up and running to the point where I can deploy my cookbooks, and I'm out of ideas.
My .kitchen.yml:
---
driver:
name: vagrant
ssh:
insert_key: false
customize:
cpus: 2
memory: 4096
transport:
name: winrm
provisioner:
name: chef_solo
platforms:
- name: windows-7
driver_config:
box: opentable/win-7-professional-i386-nocm
suites:
- name: default
run_list:
- recipe[my_recipe]
attributes:
I tried finding a sane Windows 7 Vagrant box for a Puppet presentation a while ago and ran into similar issues. I had to run a PowerShell script to install Puppet before anything else. Even then I ran into similar issues and had to do some extra work.
I was using the designerror box from Atlas. Perhaps my notes could assist you in getting your environment up and running. It's Puppet, but a similar (easier?) process is probably needed for Chef: https://github.com/stark525/itbestprac-pres/tree/master/vagrant
Windows 7 boxes are typically home-grown and privately owned, so you should probably build your own box if the project warrants the commitment. Ultimately, Windows presents a number of challenges for publicly distributable Vagrant boxes.
It appears that all of the VMware Windows boxes on Atlas are misconfigured in one way or another. I manually built my own box (amarkon/windows-7-ult-n-x64), which now works correctly.
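For reference, a platform entry pointing at such a self-built box could look like this (the box name is the one mentioned above; the vagrant/vagrant WinRM credentials are an assumption, being only the usual default for public Windows boxes):

platforms:
  - name: windows-7
    driver_config:
      box: amarkon/windows-7-ult-n-x64
    transport:
      name: winrm
      username: vagrant
      password: vagrant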