When creating or editing a Role in the GUI I can't find a way to add a cookbook.
I have my recipes grouped in different cookbooks, but it seems when I create a role I have to add every recipe one by one. It does not make sense to me.
For instance, I have a cookbook that adds about 30 configuration files, each one with its own recipe. I would like to add that cookbook to a role, I don't know how.
It seems when I create a role I can add sub-roles and recipes, but not cookbooks. To me it's like somebody asking me for my soup recipes, and instead of handing them my book of soup recipes I start searching around for every individual soup recipe.
Any ideas?
Thanks,
Luis
That's the way Chef works: you set a run list of recipes.
If you wish to do it all in one pass, create a recipe named, for example, all_recipes, containing:
include_recipe "your_cookbook::recipe_name1"
include_recipe "your_cookbook::recipe_name2"
include_recipe "your_cookbook::recipe_name3"
But if your intent is to manage 30 files in the same way, you may loop over them in a single recipe instead of writing one recipe per file.
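A sketch of that loop, assuming the files are rendered from templates (the file names and paths here are invented for illustration):

```ruby
# Hypothetical list of config files; in practice this could come from an attribute.
%w(app.conf db.conf cache.conf).each do |conf|
  template "/etc/myapp/#{conf}" do
    source "#{conf}.erb"
    owner 'root'
    group 'root'
    mode '0644'
  end
end
```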
Having multiple recipes in a cookbook is more for handling different cases (a service vs. a cron job, a Unix system vs. a Windows system, etc.).
All in all it sounds like questionable design at first glance. (No judgement, just a feeling from the little information you gave.)
For your soup example, you're looking at it the wrong way.
It's not that someone asks you for recipes; it's you telling a chef what to cook.
If you want him to cook everything in a cookbook, you have to tell him that's what you want, but a recipe for a particular soup may reference other 'common' recipes (like peeling potatoes), etc.
Assume that a normal deployment script does a lot of things and many of them are related to preparing the OS itself.
These tasks take a LOT of time to run even when there is nothing new to do, and I want to avoid running them more often than, let's say, once a day.
I know that I can use tags to filter what I am running, but that's not the point: I need to make Ansible aware that these "sections" executed successfully an hour ago, so that it would skip the entire block now.
I was expecting fact caching to do this, but somehow I wasn't able to find any real use case.
You need to figure out how to determine what "executed successfully" means. Is it just that a given playbook ran to completion? That certain roles ran to completion? That certain indicators exist which allow you to determine success?
As you mention, I don't think fact caching is going to help you here unless you want to introduce custom facts into each host (http://docs.ansible.com/ansible/playbooks_variables.html#local-facts-facts-d)
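A custom local fact is just a file under /etc/ansible/facts.d on the host; for example, a small JSON file dropped by the last successful run (the file name prep.fact and its key are assumptions):

```json
{"last_prep_run": "2016-05-01T03:00:00Z"}
```

After fact gathering it is available as ansible_local.prep.last_prep_run and can drive a when: condition on the expensive tasks.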
I typically come up with a set of variables/facts that indicate a role has already been run. Sometimes this involves making shell calls and registering vars, looking at gathered facts, and determining whether certain files exist. My typical pattern for a role looks something like this:
roles/my_role/tasks/main.yml:
- name: load facts
  include: facts.yml

- name: run config tasks if needed
  include: config.yml
  when: fact_var1 and fact_var2 and inv_var1 and reg_var1
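The facts.yml in that pattern might, for example, check for a marker file and derive a boolean fact from it (the path and variable names here are assumptions):

```yaml
# roles/my_role/tasks/facts.yml (hypothetical)
- name: check for the marker left by the last successful prep run
  stat:
    path: /var/run/os_prep_done
  register: prep_marker

- name: decide whether the prep tasks need to run
  set_fact:
    prep_needed: "{{ not prep_marker.stat.exists }}"
```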
You could also dynamically write a YAML variable file that gets included in your playbooks and contains variables about the configured state of your environment. This is a more global option and doesn't really work if you need to look at the configured status of individual machines. An extension of this would be to write status variables to host_vars or group_vars inventory variable files. These would get loaded automatically on a host-by-host basis.
Unfortunately, as far as I know, fact caching only caches host-based facts such as those created by the setup module, so it wouldn't allow you to use set_fact to register a fact that, for example, a role had been completed and then conditionally check for it at the start of the role.
Instead you might want to consider using Ansible with another product to orchestrate it in a more complex fashion.
Does anyone know if you can run, let's say, two error handlers in the same cookbook and have both of them run, one after another? And if so, how would you do that?
Handlers are entirely independent of each other and you can run as many as you want. If you are using the chef_handler cookbook, use the titular resource multiple times.
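For instance, with the chef_handler cookbook, enabling two handlers is just two resources in a row; both will fire. The handler class names and file names below are invented:

```ruby
chef_handler 'MyOrg::FirstHandler' do
  source "#{node['chef_handler']['handler_path']}/first_handler.rb"
  action :enable
end

chef_handler 'MyOrg::SecondHandler' do
  source "#{node['chef_handler']['handler_path']}/second_handler.rb"
  action :enable
end
```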
Currently I want to use (Opscode) Chef to configure all the routes on our machines. Since I'm very lazy, I searched the internet for a ready-to-go cookbook but couldn't find anything. I know that Chef has a resource to configure routes (https://docs.chef.io/resource_route.html), but this is not enough for our use case. We have VMs in different placement zones (prod, preprod, dev) in MZ and DMZ, with different gateways in each.
If I can't find a cookbook that can differentiate that, I need to write one myself. My idea was to analyze the node name via Ruby and use a loop with the Chef route resource to create all the routes:
if /_prod/ =~ Chef::Config[:node_name]
  # set the prod-zone routes here
end
So my hope is that somebody is already using Chef to configure routes at enterprise scale and can help me out, or that the community can give me some ideas for developing the cookbook myself.
Your environment description (around Chef particularly) is not really detailed, so I'll answer based on how I see it:
1. Chef environments to lock cookbooks in dev/QA/Prod (could be extended to dev/dev DMZ/QA/QA DMZ/Prod/Prod DMZ, etc.)
2. One wrapper (role) cookbook to set attributes like the gateway and static routes, per type of box or per group of routes you wish to set
3. A code cookbook containing the recipe that uses the attributes defined above
Depending on the setup, you'll have one or many wrapper cookbooks in your node's run list. Making a change to a route (in a wrapper) then goes through locking that wrapper in the corresponding environment.
For the routes management, maybe a wrapper per "zone" is the best idea if one of your zones matches exactly one environment.
WARNING: This is an example based on my current environment and how I would do it; I do not actually use the code below.
For our infrastructure, we have 3 QA environments (too many) within the same security zone (VLAN), so we need to change the routing along with the apps' lifecycle; that's where the locking mechanism comes in handy, to change the routing of part of the nodes and not all the nodes in the zone.
For the cookbook (point 3 above; let's name it 'my_routing_cookbook'), it's quite "simple".
In the attributes, let's have:
default['sec']['default'] = { gw: '192.168.1.250', device: 'eth1' }
default['sec']['routes']['172.16.0.0/16'] = { gw: '192.168.1.254', device: 'eth0' }
default['sec']['routes']['10.0.0.0/8'] = { gw: '192.168.1.254', device: 'eth0' }
In the recipe:
route '0.0.0.0/0' do
gateway node['sec']['default']['gw']
device node['sec']['default']['device']
end
node['sec']['routes'].each do |r, properties|
route r do
gateway properties['gw']
device properties['device']
end
end
The default gateway could be in the route list; I just think it's easier for non-networking people to remember that it's the default gateway this way.
For point 2, each wrapper cookbook will depend on this one and set its own attributes. Those cookbooks will have a default.rb just calling include_recipe 'my_routing_cookbook'.
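A wrapper for, say, the prod DMZ zone might then look like this (the cookbook name, addresses, and devices are invented for illustration):

```ruby
# wrapper_routing_prod_dmz/metadata.rb
depends 'my_routing_cookbook'

# wrapper_routing_prod_dmz/attributes/default.rb
default['sec']['default'] = { gw: '10.1.1.250', device: 'eth1' }
default['sec']['routes']['192.168.0.0/16'] = { gw: '10.1.1.254', device: 'eth0' }

# wrapper_routing_prod_dmz/recipes/default.rb
include_recipe 'my_routing_cookbook'
```

Since the wrapper depends on my_routing_cookbook, its attribute files load after the base cookbook's, so its defaults win.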
Hope this helps you get started.
I have a scenario where I need to replace certain strings in an attribute file within a cookbook with user input from a Bash script.
In the current puppet setup this is done simply by using sed on the module files, since the modules are stored in the file structure as files and folders.
How can I replicate this in the Chef eco-system? Is there a known shortcut?
Or would I have to download the cookbook as a file using knife, modify the content, and then re-upload it to apply the changes?
Not sure this is the best approach. You can definitely use knife download, sed, and knife upload as you mention, but a better way would be to make it data driven: store the values in a data bag or role, and manipulate those using knife or another API client. Then in your recipe code you can read the values out and use them.
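As a sketch, the recipe could pull those values from a data bag instead of hard-coding them in an attribute file (the bag name, item name, and key below are invented):

```ruby
# Assumes a data bag "myapp" with an item "config", e.g. {"id": "config", "port": 8080}
settings = data_bag_item('myapp', 'config')

template '/etc/myapp/app.conf' do
  source 'app.conf.erb'
  variables(port: settings['port'])
end
```

The Bash script would then edit a local JSON file and push it with knife data bag from file, rather than touching the cookbook itself.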
I want the sites defined in nodes.pp to come from a .yml file. I'm thinking that if the .pp file were itself processed first from an .erb file, this would be easy. But as far as I can tell, .pp files cannot be templates themselves, e.g. nodes.pp.erb.
I want to keep the node definitions in YAML rather than in .pp because I want to use the same definitions for things like a Vagrant test of the deployment. I find it easier to consume a common .yml than to parse nodes.pp to extract the info.
The obvious solution is to generate nodes.pp on demand from a nodes.pp.erb, e.g. in a rake task, but I wonder whether Puppet itself has a solution to my conundrum.
I think Puppet's Hiera would work well for you; check out:
https://github.com/puppetlabs/hiera#readme
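As a minimal sketch with Hiera's YAML backend (the key name and file layout are assumptions):

```yaml
# hieradata/common.yaml
sites:
  - example.com
  - staging.example.com
```

In a manifest you can then read it with $sites = hiera('sites'), and the same YAML file can be consumed directly by Vagrant or other tooling, with no generated .pp in between.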