I'm trying to set up a workflow to develop Chef cookbooks locally. We're currently using Chef Server, with the provisioned nodes running chef-client.
As part of the new workflow, we want to start using Vagrant to test cookbooks locally and avoid incurring the costs of testing on a remote machine in the cloud.
I'm able to launch and provision a local Vagrant machine. The one thing I'm not sure how to do is have Chef load the local version of the cookbook but still talk to the Chef server for everything else (environments, roles, data bags, etc.), so I don't have to upload the cookbook via knife every time I make a change I want to test. Is this possible?
In other words, can I make chef-client talk to the local chef-zero server only for the cookbooks, but to the remote Chef server for everything else? Or is there a different approach that would yield the same effect? I'm open to suggestions.
UPDATE
I think an example will help express what I'm looking for. I'm realizing that this may not really be what I need, but I'm curious about how to achieve it anyway. In this scenario, a recipe reads from a data bag stored on the remote Chef server.
metadata.rb
name 'proxy-cookbook'
version '0.0.0'
.kitchen.yml
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-12.04
suites:
  - name: default
    run_list:
      - recipe[proxy-cookbook::default]
    attributes:
recipes/default.rb
...
key = data_bag_item("key", "main")
....
Now, I know I can create something along the lines of:
data_bags/main.json
{
  "id": "main",
  "key": "s3cr3tk3y"
}
And have my kitchen tests read from that data bag; but that is exactly what I'm trying to avoid. Is it possible to either:
Instruct test-kitchen to get the actual data bag from chef server,
Have chef-zero retrieve a temporary copy of the data bags for local tests, or
Quickly "dump" the contents of a remote Chef server locally?
I hope that makes sense. I can add some context if necessary.
Test Kitchen is the best way to drive Vagrant. It provides the integration you're looking for with chef-zero and lets you completely emulate your production Chef setup locally and test your cookbook against multiple platforms.
Test Kitchen has replaced the older workflows I used for Chef development. It is well worth learning.
Example
Generate a demo cookbook that installs Java using the community cookbook. Tools like Berkshelf (to manage cookbook dependencies) and chef-zero are set up automatically.
chef generate cookbook demo
Creates the following files:
└── demo
    ├── .kitchen.yml
    ├── Berksfile
    ├── metadata.rb
    ├── recipes
    │   └── default.rb
    └── test
        └── integration
            ├── default
            │   └── serverspec
            │       └── default_spec.rb
.kitchen.yml
Update the platform versions. Kitchen is told to use vagrant and chef zero.
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-14.04
  - name: centos-6.6
suites:
  - name: default
    run_list:
      - recipe[demo::default]
    attributes:
Berksfile
This file controls how cookbook dependencies are managed. The special "metadata" setting tells Berkshelf to refer to the cookbook metadata file.
source 'https://supermarket.chef.io'
metadata
metadata.rb
Add the "apt" and "java" cookbooks as a dependencies:
name 'demo'
..
..
depends "apt"
depends "java"
recipes/default.rb
include_recipe "apt"
include_recipe "java"
test/integration/default/serverspec/default_spec.rb
Test for the installation of the JDK package
require 'spec_helper'

describe package("openjdk-6-jdk") do
  it { should be_installed }
end
Running the example
$ kitchen verify default-ubuntu-1404
-----> Starting Kitchen (v1.4.0)
..
..
Package "openjdk-6-jdk"
should be installed
Finished in 0.1007 seconds (files took 0.268 seconds to load)
1 example, 0 failures
Finished verifying <default-ubuntu-1404> (0m13.73s).
-----> Kitchen is finished. (0m14.20s)
Update
The following example demonstrates using Test Kitchen with roles (the same approach works for data bags and other items you want loaded into chef-zero):
Can the java cookbook be used to install a local copy of oracle java?
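For reference, a minimal sketch of what that kind of setup can look like in .kitchen.yml, assuming the role and data bag JSON files are kept in roles/ and data_bags/ directories next to the cookbook (the demo role name is only a placeholder):
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
  roles_path: roles
  data_bags_path: data_bags
suites:
  - name: default
    run_list:
      - role[demo]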
I think I found what I was looking for.
You can use knife to download the Chef server objects that you need. You can bootstrap this in .kitchen.yml so you don't have to do it manually every time.
.kitchen.yml
...
driver:
  name: vagrant
  pre_create_command: 'mkdir -p chef-server; knife download /data_bags /roles /environments --chef-repo-path chef-server/'
...
provisioner:
  name: chef_zero
  data_bags_path: chef-server/data_bags
  roles_path: chef-server/roles
  environments_path: chef-server/environments
  client_rb:
    environment: development
...
And then I just added the chef-server directory to .gitignore
.gitignore
chef-server/
There might be a less redundant way of doing this, but this works for me right now, and since I just wanted to document this, I'm leaving it like that.
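One possible way to reduce the redundancy, assuming a Test Kitchen version new enough to support lifecycle hooks, would be to run the knife download as a pre-converge hook instead of a pre_create_command, so the data is refreshed on every converge rather than only when the instance is created. A sketch:
lifecycle:
  pre_converge:
    - local: mkdir -p chef-server; knife download /data_bags /roles /environments --chef-repo-path chef-server/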
Related
I am trying to run this
Berksfile:
source 'https://supermarket.chef.io'
metadata
metadata.rb:
name 'my_jenkins_cookbook'
depends 'git'
depends 'ruby_rbenv'
depends 'jenkins'
depends 'java'
depends 'docker'
version '0.0.2'
I have also tried with all the required cookbooks available locally, and it doesn't locate the jenkins_job resource from the jenkins cookbook.
Can someone help me with this blocker, please?
TERMINAL OUTPUT:
[DEBUG] Running command 4-run-chef-solo
[DEBUG] No test for command 4-run-chef-solo
[ERROR] Command 4-run-chef-solo (chef-solo -c /tmp/chef/solo.rb -j /tmp/chef/jenkins.json) failed
[DEBUG] Command 4-run-chef-solo output: Starting Chef Infra Client, version 17.0.242
Patents: https://www.chef.io/patents
resolving cookbooks for run list: ["my_jenkins_cookbook::jenkins_jobs", "my_jenkins_cookbook::jenkins_views", "my_jenkins_cookbook::setup_jenkins_users"]
============================================================
Error Resolving Cookbooks for Run List:
============================================================
Missing Cookbooks:
------------------
No such cookbook: docker
Expanded Run List:
------------------
* my_jenkins_cookbook::jenkins_jobs
* my_jenkins_cookbook::jenkins_views
* my_jenkins_cookbook::setup_jenkins_users
System Info:
------------
chef_version=17.0.242
platform=ubuntu
platform_version=18.04
ruby=ruby 3.0.1p64 (2021-04-05 revision 0fb782ee38) [x86_64-linux]
program_name=/usr/bin/chef-solo
executable=/opt/chef/bin/chef-solo
Running handlers:
ERROR: Running exception handlers
Running handlers complete
ERROR: Exception handlers complete
Chef Infra Client failed. 0 resources updated in 26 seconds
FATAL: Stacktrace dumped to /tmp/chef/local-mode-cache/cache/chef-stacktrace.out
FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
FATAL: Net::HTTPServerException: 412 "Precondition Failed"
How are you running this? Is this straight on a node with chef-client or through Test Kitchen?
It looks to me like you're running chef-solo directly, but the cookbooks haven't been loaded onto the server.
Issue
It looks like the chef-solo run could not find the dependent cookbooks, as indicated in the error log. Here is an alternative approach to solve the issue.
Resolution
An alternative to chef-solo is to run chef-client with the -z option.
Chef Client version 11 and above has an option, chef-client -z, which runs the Chef client in local mode.
It is the suggested way to run cookbooks without a Chef server.
The Chef documentation for chef-client states the following:
$ chef-client OPTION VALUE OPTION VALUE ...
-z, --local-mode
Run the chef-client in local mode. This allows all commands that work against the Chef server to also work against the local chef-repo.
Local mode does not require a configuration file, instead it will look for a directory named /cookbooks and will set chef_repo_path to be just above that. (Local mode will honor the settings in a configuration file, if desired.)
Local mode will store temporary and cache files under the <chef_repo_path>/.cache directory by default. This allows a normal user to run the chef-client in local mode without requiring root access.
So please follow the steps below:
You need to create a project repository with your cookbooks and other objects.
The project repo is simply a directory containing subdirectories for roles, data bags, cookbooks, and environments.
Create a directory called cookbooks in the project repo.
Since you already have a Berksfile in your cookbook, run the berks vendor cookbooks command from each cookbook directory to pull all of the dependent cookbooks and store them in that cookbooks directory.
Run the command below from the project repo to invoke the Chef client in local mode.
chef-client -z -o 'provide_your_overridden_runlist'
These steps need to be executed in the Jenkins script section; a rough shell sketch of the sequence follows.
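Something along these lines, where the chef-repo path is a placeholder and the run-list comes from the expanded run list in the error output:
# from inside the cookbook that contains the Berksfile
berks vendor ../chef-repo/cookbooks

# from the project repo root (contains cookbooks/, roles/, data_bags/, environments/)
cd ../chef-repo
chef-client -z -o 'recipe[my_jenkins_cookbook::jenkins_jobs],recipe[my_jenkins_cookbook::jenkins_views],recipe[my_jenkins_cookbook::setup_jenkins_users]'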
Read more in the Chef docs:
Chef client run in local mode
Berks Vendor CLI command
In a cookbook, default.rb is the recipe that gets picked up and executed. I added service.rb to the same cookbooks/cookbook_main/recipes folder in which default.rb resides. I then uploaded the cookbook and executed chef-client on a remote VM which acts as a node.
The logs showed that it detected both .rb files under the recipes folder. Here is the problem: the script in default.rb was executed, but not the one in service.rb. Why?
PS: I'm new to Chef, so please correct me if I'm wrong!
Since you only have cookbook_main in your run_list like so:
chef.add_recipe "cookbook_main"
When you only have the cookbook specified in your run list without specifying a recipe, Chef will only run the default recipe. For example, these two lines are essentially equivalent:
chef.add_recipe "cookbook_main"
chef.add_recipe "cookbook_main::default"
If you want to run the new service recipe you need to tell Chef to run it. There are a couple of ways to do this. One is to explicitly add it to your run list in your Vagrantfile:
chef.add_recipe "cookbook_main"
chef.add_recipe "cookbook_main::service"
Otherwise you can include it via your default recipe. So add this line somewhere in your default.rb recipe:
include_recipe 'cookbook_main::service'
I am new to Chef cookbooks and currently working on a task. I have already completed the tutorial on chef.io, but I am struggling to understand how I can install a cookbook provided on chef.io.
As of now, I have downloaded the cookbook. It's a .tar file and I extracted it. I can see the respective default.rb and other files, but I can't figure out how to add this cookbook to my existing cookbooks, which create a VM image.
Is there any guide or tutorial that I can follow?
In addition to Josh's answer, it sounds to me like you want to add it to your chef-repo after downloading it and extracting the gzip file?
Just add the maven directory to your cookbooks directory. Or you could just do knife cookbook site install maven from within your chef-repo directory.
Or maybe you want to upload it to your Chef Server?
knife cookbook upload maven
See: https://docs.chef.io/knife_cookbook.html#upload
If I'm understanding your question correctly, then what you need to do is create a Chef role, and then list all of the recipes that you want to execute in your role's run_list. As for documentation, check out Chef's documentation on roles: Chef Roles
First, if I understand your question correctly, you are trying to download an already existing cookbook from the community. If so, you can follow these steps:
1) Download the cookbook, which is in .tar format as you specified, extract it, and place that cookbook within the chef-repo path from which you want to upload it.
2) Once done, do a knife cookbook upload cookbook-name.
Now the main part: if you are trying to upload this cookbook and make it part of your already existing run-list, you need to add it to that run-list.
If you are doing it via a role instead, you will need to add this cookbook's recipe to your role's run-list (a sketch follows below).
But keep in mind that any cookbooks you download from the community might have dependencies on other cookbooks, so choose wisely. The fewer the cookbooks, the faster your run-list converges and the faster the chef-client run.
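As a rough illustration of the role approach, a role file in your chef-repo's roles directory might look something like this (webserver is only a placeholder role name, and maven follows the cookbook name used in the earlier answer):
name "webserver"
description "Example role that runs the community cookbook"
run_list "recipe[maven]"
It can then be uploaded with knife role from file roles/webserver.rb and assigned to your nodes as usual.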
Hope this helps.
A few weeks ago I came across Vagrant and fell in love. I'm currently trying it out on a new project and it's working great locally. I'm using Chef Solo to build all the box's software and Berkshelf to manage the cookbooks.
What I'm wondering is how everyone is managing their development, staging, and production servers for each project. While I'm working on this project locally, I would like to have a development server, and eventually staging and production servers once the project is complete.
I have successfully set up Vagrant with Amazon EC2 using the Vagrant AWS plugin, but it would appear that Vagrant doesn't let you vagrant up a local box and a development box at the same time; you can only have one.
I wrote a small bash script that basically gives me this capability. I can run
$ vagrant up local
$ vagrant up development
$ vagrant up staging
$ vagrant up production
And it will build each box I specify by looking for a Vagrantfile dedicated to that box's configuration. I have a directory named machines where each Vagrantfile lives. The directory structure I have looks like this:
- app/
- public/
- vagrant/
  - attributes/
  - machines/
    - local.rb
    - development.rb
    - staging.rb
    - production.rb
  - recipes/
  - templates/
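The wrapper script itself isn't shown in the question; one way such a wrapper might work is by pointing Vagrant at the per-machine file through the VAGRANT_CWD and VAGRANT_VAGRANTFILE environment variables, roughly like this (a sketch, not the asker's actual script):
#!/usr/bin/env bash
# Usage: ./vagrant.sh up local|development|staging|production
set -e

command="$1"
machine="$2"

if [ ! -f "vagrant/machines/${machine}.rb" ]; then
  echo "Unknown machine: ${machine}" >&2
  exit 1
fi

# Tell Vagrant which file to load instead of the default ./Vagrantfile
VAGRANT_CWD="vagrant/machines" VAGRANT_VAGRANTFILE="${machine}.rb" vagrant "${command}"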
Is this a sensible solution?
How are you managing multiple servers for a single project? Can ALL the configuration live in one Vagrantfile, and should it?
Please never ever do this. Vagrant is not built for it, and it will break. For example, if you ever need to manipulate production instances from a different workstation, good luck :-( If you are looking for tools to manage server provisioning, the pickings are definitely slim, but there are tools to try:
CloudFormation (and Heat for OpenStack)
Terraform
chef-metal
The first is a bit ungainly but very powerful if you are all AWS-based (or OpenStack). The latter two are very young but look promising.
I am trying to create tasks that sync from my production/staging environments to a local vagrant box.
I am hoping for a command like this: cap vagrant sync_production_database which would perform a database dump on the remote server, download it, and then import it on the vagrant box. Unfortunately, I can't find a way to execute a capistrano task on another environment.
I have my environments set up like so:
config
├── deploy
│ ├── production.rb
│ ├── staging.rb
│ └── vagrant.rb
└── deploy.rb
And here is an example of what I am trying to accomplish:
desc 'sync database'
task :sync_production_database do
  # executed on remote server
  # this is obviously not working
  on(:production) do |host|
    # dump database and download it
  end

  # executed on vagrant box
  on roles(:web) do |host|
  end
end
First, I think it is better to use the stage parameter of the cap command to designate the remote stage servers rather than your local stage servers. This means your command assumes :vagrant is always the local stage.
Then, if the vagrant stage servers have a role which the remote servers don't have, you can execute different tasks on each stage via the following:
# Assuming the following stage definitions in deploy/production.rb and deploy/vagrant.rb respectively
server 'production.example.com', roles: %w{web app}
server 'vagrant.local', roles: %w{web localhost}

# the following will execute tasks on each host
desc 'sync database'
task :sync_database do
  # executed on remote server(s)
  on roles(:app) do |host|
    # dump database and download it
  end

  # Load the servers in deploy/vagrant.rb
  invoke(:vagrant)

  # executed on vagrant box server(s)
  on roles(:localhost) do |host|
    # Create database and load dump from remote
  end
end
This works because roles(...) returns all servers loaded with the given role and since each stage has a unique role, you can retrieve only what servers you want by specifying their respective role.
Normally, without invoke(:vagrant), roles(:localhost) in the example above wouldn't return anything since Capistrano only loads the servers defined in the given stage by default. To get around this, you can force load servers in your vagrant stage using invoke(:vagrant). So then, roles(:app) returns the servers for the given stage and roles(:localhost) returns your vagrant servers.
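With a setup along those lines, the sync is kicked off against a remote stage and the vagrant servers are pulled in by the invoke, for example:
$ cap production sync_database
$ cap staging sync_database
Each run loads the chosen remote stage's servers first, and invoke(:vagrant) then adds the local servers for the import step.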