I have chef configured to add "/etc/chef/ohai_plugins" to Ohai::Config[:plugin_path]. However, the Chef documentation says:
"The Ohai executable ignores settings in the client.rb file when Ohai is run independently of the chef-client."
So, how can I get a stand-alone run of ohai to load and use the plugins in that custom path?
(Background: I have a custom plugin that reports some information that we keep track of for a fleet of servers, like whether a server has been patched for heartbleed or shellshock. I want to be able to run "ssh somehost ohai", parse the JSON that gets sent back, and extract the information I need.)
Thanks.
Outside of chef, you can add an additional plugin path using the -d switch, e.g.
$ ohai -d /etc/chef/ohai_plugins
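Combined with the remote-collection idea from the question, that could look something like this (jq and the my_fleet_info attribute name are just illustrative placeholders):

$ ssh somehost "ohai -d /etc/chef/ohai_plugins" | jq '.my_fleet_info'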
The relevant source code is at:
https://github.com/chef/ohai/blob/master/lib/ohai/application.rb#L25-L28
https://github.com/chef/ohai/blob/master/lib/ohai/application.rb#L78-L80
The option to specify a custom config file for Ohai was sadly removed last year with https://github.com/chef/ohai/commit/ebabd088673cf3e36d600bd96aeba004077842f1
Hope this answers your question.
This will be possible soon via the implementation of Chef RFC 53: https://github.com/chef/chef-rfc/blob/master/rfc053-ohai-config.md
Related
I'm a newbie at RPM building, and I've done my best to describe this somewhat complicated question in my amateur English...
I have a script (.sh) that sets up some code, and it needs some user input.
Sadly, I found out that scripts run by rpm cannot get user input,
and I know that's not the right usage, so I'm not trying to get user input anymore.
My question is:
I'm now trying to get that input from a config file shipped along with the rpm package, but I don't know how to get the rpm package's path in the SPEC file macros, or in the script file run by the SPEC file macros.
RPM packages are not supposed to "adapt" themselves to user input. I would recommend that you make sure the installation of the package is always the same. Once the package is installed, you can tell users how to configure the program.
Take git for example: it provides /etc/gitconfig which contains the default packaged configuration. Users can then make their changes to the configuration and save those in ~/.gitconfig. Thus the user configuration is separated from the packaged configuration, so you can keep updating git without losing your configuration.
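As a sketch of the same pattern in a spec file (the package name, config path, and Source1 entry are hypothetical), ship a default configuration with the package and mark it %config(noreplace) so user edits survive upgrades:

# myapp.spec (fragment)
Source1: myapp.conf

%install
install -D -m 0644 %{SOURCE1} %{buildroot}%{_sysconfdir}/myapp/myapp.conf

%files
%config(noreplace) %{_sysconfdir}/myapp/myapp.conf

Your setup script (or the user) can then read its settings from /etc/myapp/myapp.conf at run time instead of asking for input during installation.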
I'm trying to find a way to run a command on a SELinux .te file that is located on the puppet server, but not the client (I use the puppet-selinux module from puppetforge to compile the .te file into the .pp module file, so I don't need it on the client server). My basic idea is something like:
class security::selinux_module {
  exec { 'selinux_module_check':
    command => "grep module selinux_module_source.te | awk '{print $3}' | sed 's/;//' > /tmp/selinux_module_check.txt",
    source  => 'puppet:///modules/security/selinux_module_source.te',
  }
}
Though when trying to run it on the client server I get:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter source on Exec[selinux_module_check] at /etc/puppet/environments/master/security/manifests/selinux_module.pp:3 on node client.domain.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Any assistance on this would be greatly appreciated.
You can use Puppet's generate() function to run commands on the master during catalog compilation and capture their output, but this is rarely a good thing to do, especially if the commands in question are expensive. If you intend to transfer the resulting output to the client for some kind of use there, then you also need to pay careful attention to ensuring that it is appropriate for the client, which might not be the case if the client differs too much from the server.
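For illustration, a minimal sketch of that approach (the path to the .te file on the master is an assumption, adjust it to your environment):

class security::selinux_module {
  # Path on the puppet master, not the client
  $te_file = '/etc/puppet/environments/master/security/files/selinux_module_source.te'

  # generate() runs on the master at catalog-compilation time and returns stdout
  $module_name = generate('/bin/sh', '-c',
    "grep module ${te_file} | awk '{print \$3}' | sed 's/;//'")

  # Write the result out on the client, if that is where you need it
  file { '/tmp/selinux_module_check.txt':
    content => $module_name,
  }
}

Note that the whole pipeline runs on the master every time a catalog is compiled, which is exactly the cost warned about above.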
I'm trying to find a way to run a command on a SELinux .te file that is located on the puppet server, but not the client (I use the puppet-selinux module from puppetforge to compile the .te file into the .pp module file, so I don't need it on the client server)
The simplest approach would be to run the needed command directly, once and for all, from an interactive shell, and to put the result in a file from which the agent can obtain it, via Puppet or otherwise. Only if the type enforcement file were dynamically generated would it make any sense to compile it every time you build a catalog.
I suggest, however, that you build a package (RPM, DEB, whatever) containing the selinux policy file and any needed installation scripts. Put that package in your local repository, and manage it via a Package resource.
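A sketch of that last approach on the Puppet side, with a made-up package name:

package { 'mycompany-selinux-policy':
  ensure => latest,
}

The package's own post-install scriptlet can run semodule to install the compiled policy, so the agent only has to keep the package up to date.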
chef-client v12.15.19 (MSI installer)
On Windows Server 2012 R2
Instead of having to be in the directory where knife.rb is located or put knife.rb in one of the pre-determined locations where knife looks for that config, can I just pass it in as an argument?
Example: knife.bat node list -config_file c:\some\other\place\knife.rb
I'm just curious if this is possible because I didn't see this anywhere in the docs. I know I could workaround this with an environment variable and possibly other ways, but I just was wondering if there was an arg to pass the knife.rb directly.
Yes, you can specify a different configuration file. Here's a snip of the knife --help output from the latest Chef 12.x releases (at the time of this writing):
-c, --config CONFIG The configuration file to use
--config-option OPTION=VALUE Override a single configuration option
So you should be able to do knife -c .... I use this regularly to manage different knife configuration files for different hosted chef instances. I don't see it mentioned in any docs explicitly either.
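For the Windows example from the question, that would look something like:

knife.bat node list -c c:\some\other\place\knife.rb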
Yes, you can specify one, but mind the parameter sequence.
knife cookbook upload my_test_cookbook -c ./knife.rb
You can specify options only after your main command is complete.
You can check the same in knife help:
Usage: knife sub-command (options)
I'm looking for some best practices on how to increase my productivity when writing new puppet modules. My workflow looks like this right now:
vagrant up
Make changes/fixes
vagrant provision
Find mistakes/errors, GOTO 2
After I get through all the mistakes/errors I do:
vagrant destroy
vagrant up
Make sure everything is working
commit my changes
This is too slow... how can I make this workflow faster?
I am in denial about writing tests for puppet. What are my other options?
cache your apt/yum repository on your host with the vagrant-cachier plugin (see the Vagrantfile sketch at the end of this answer)
use puppet's --profile / --evaltrace options to find where you lose time on full provisioning
use a package-based distribution:
e.g. rvm install ruby-2.0.0 vs. a pre-compiled ruby package created with fpm
avoid a "wget the internet and compile" approach
this will probably make your provisioning more reproducible and speedier.
don't code modules yourself
try reusing some from the forge/github/...
note that this can run against my previous advice
if this is an option, upgrade your puppet/ruby version
iterate and prevent full provisioning
vagrant up
vagrant provision
modify manifest/modules
vagrant provision
modify manifest/modules
vagrant provision
vagrant destroy
vagrant up
launch server-spec
minimize typed commands
launch commands as you modify your files
you can perhaps set up guard to launch lint/test/spec/provision as you save (see the Guardfile sketch at the end of this answer)
you can also send notifications from guest to host machine with vagrant-notify
test without actually provisioning in vagrant
rspec-puppet (ideal when refactoring modules)
test your provisioning instead of checking manually
stop vagrant ssh-ing to check whether a service is running or a config has a given value
launch server-spec
take a look at Beaker
delegate running the test to your preferred ci server (jenkins, travis-ci,...)
if you are a bit frustrated by puppet... take a look at ansible
easy to set up (no ruby to install/compile)
you can select the portions of the playbook you want to run with tags
you can share the playbooks via synced folders and run ansible locally in the vagrant box (no librarian-puppet to launch)
update: after a discussion with @garethr, take a look at his latest presentation about guard.
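To make the vagrant-cachier and guard suggestions above concrete, here are two minimal sketches; the plugin and gem names are real, but the file patterns and commands are assumptions to adapt to your layout.

A Vagrantfile fragment enabling vagrant-cachier:

Vagrant.configure("2") do |config|
  # Share the package cache between boxes of the same type
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end
end

And a Guardfile that lints every manifest as you save it (assumes the guard-shell and puppet-lint gems are installed):

guard :shell do
  watch(%r{.+\.pp$}) do |m|
    # Swap in rspec, puppet parser validate, or vagrant provision as needed
    system("puppet-lint #{m[0]}")
  end
end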
I recommend using language-puppet. It comes with a command-line tool (puppetresources) that can compute catalogs on your computer and let you examine them. It has a few useful features that can't be found in Puppet:
It is really fast (6 times faster on a single catalog, something like 50 times on many catalogs)
It tracks where each resource was defined, and what was the "class stack" at that point, which is really handy when you have duplicate resources
It automatically checks that the files you refer to exist
It is stricter than Puppet (breaks on undefined variables for example)
It lets you print the content of any file to standard output, which is useful when developing complex templates
The only caveat is that it only works with "modern" Puppet practices. For example, require is not implemented. It also only works on Linux.
I'm looking for the correct way to set chef-client's log level when creating a server using knife ec2.
My initial thought was setting the log level configuration in my knife.rb file like this:
log_level :debug
This didn't result in any visible change in the log level.
I also tried turning on the -VV option in knife ec2, but that just results in verbose output from knife ec2 itself.
When I run chef-client directly after logging onto the server, I can get debug information with no problems using:
sudo chef-client -l debug
That would be sufficient, but I'm investigating an underlying problem that only occurs on the initial server bootstrap.
Here is a simple hack/work-around that I discovered to control the log level. In knife.rb, include a line like the following, for example:
chef_client_path 'chef-client -l debug'
You can see why this works by looking at this line of the Chef source code. I confirmed that this works in Chef 11.6. It may work in other versions.
Currently the default bootstrap templates always set the default :auto log level to the node. There is at least one ticket that seems to be related.
So your only option now is to create your own bootstrap template that adds log_level :debug to /etc/chef/client.rb. You can copy and modify e.g. the default "chef-full" template and then pass it as a parameter to knife.
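As a rough sketch, assuming the "chef-full" layout where client.rb is written via a heredoc in the template, the only addition to your copied template would be the log_level line:

cat > /etc/chef/client.rb <<'EOP'
<%= config_content %>
log_level :debug
EOP

You would then point knife at your copy, e.g. with --template-file /path/to/chef-full-debug.erb on Chef 11 era knife (newer releases call the option --bootstrap-template). The template filename here is made up.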